Solution Guide for Migrating Oracle on UNIX to SQL Server on Windows

Published April 2005

The information in this document and any document referenced herein is provided for informational purposes only, is provided AS IS AND WITH ALL FAULTS and cannot be understood as substituting for customized service and information that might be developed by Microsoft Corporation for a particular user based upon that user's particular environment. RELIANCE UPON THIS DOCUMENT AND ANY DOCUMENT REFERENCED HEREIN IS AT THE USER'S OWN RISK. MICROSOFT CORPORATION PROVIDES NO WARRANTIES, EXPRESS, IMPLIED OR STATUTORY, AS TO THE INFORMATION CONTAINED IN THIS DOCUMENT AND ANY DOCUMENT REFERENCED HEREIN. Microsoft Corporation provides no warranty and makes no representation that the information provided in this document or any document referenced herein is suitable or appropriate for any situation, and Microsoft Corporation cannot be held liable for any claim or damage of any kind that users of this document or any document referenced herein may suffer. Your retention of and/or use of this document and/or any document referenced herein constitutes your acceptance of these terms and conditions. If you do not accept these terms and conditions, Microsoft Corporation does not provide you with any right to use any part of this document or any document referenced herein. Complying with the applicable copyright laws is the responsibility of the user. Without limiting the rights under copyright, no part of this document may be reproduced, stored in or introduced into a retrieval system, or transmitted in any form or by any means (electronic, mechanical, photocopying, recording or otherwise), or for any purpose, without the express written permission of Microsoft Corporation. Microsoft may have patents, patent applications, trademarks, copyrights or other intellectual property rights covering subject matter within this document.
Except as provided in any separate written license agreement from Microsoft, the furnishing of this document does not give you, the user, any license to these patents, trademarks, copyrights or other intellectual property. Information in this document, including URL and other Internet Web site references, is subject to change without notice. Unless otherwise noted, the example companies, organizations, products, domain names, e-mail addresses, logos, people, places and events depicted herein are fictitious, and no association with any real company, organization, product, domain name, e-mail address, logo, person, place or event is intended or should be inferred. Microsoft, Visual Basic, Visual Studio, Windows, Windows Server, and Win32 are either registered trademarks or trademarks of Microsoft Corporation in the United States and/or other countries. © 2005 Microsoft Corporation. All rights reserved. The names of actual companies and products mentioned herein may be the trademarks of their respective owners.

Contents
Preface ........................................................................................................................................... ix
Overview .................................................................................................................................................. ix
    Technical Subjects Covered in This Guide ........................................................................................ x
    Technical Subjects That Are Not Covered in This Guide ................................................................... x
Intended Audiences ................................................................................................................................. xi
    Knowledge Prerequisites ................................................................................................................ xiii
How to Use This Solution Guide ............................................................................................................ xiii
    Organization by Chapter .................................................................................................................. xv
    Document Conventions ................................................................................................................. xvii
Job Aids ................................................................................................................................................ xvii
Contributors ........................................................................................................................................... xix
    Program Manager ........................................................................................................................... xix
    Author ............................................................................................................................................. xix
    Editors ............................................................................................................................................ xix
    Architects ....................................................................................................................................... xix
    Test ................................................................................................................................................ xix
Feedback ............................................................................................................................................... xix

Introduction to the Microsoft Solutions Framework.................................................................. 1


Introduction and Goals ............................................................................................................................. 1
Overview of MSF ..................................................................................................................................... 1
    MSF Foundational Principles ............................................................................................................ 2
        The MSF Team Model Overview ................................................................................................. 2
        The MSF Process Model Overview ............................................................................................. 3
        The MSF Disciplines Overview ................................................................................................... 5
    Overview of MSF Phases ................................................................................................................ 11

Envisioning Phase....................................................................................................................... 13
Introduction and Goals ........................................................................................................................... 13
Understand the Goals of the Migration Project ...................................................................................... 14
    Define the Business Goals ............................................................................................................. 15
    Identify the Design Goals ............................................................................................................... 16
Create the Problem Statement .............................................................................................................. 18
Create the Vision Statement .................................................................................................................. 18
Define the User Profiles ......................................................................................................................... 19
Assess the Current Situation (High Level) ............................................................................................. 19
Capture High Level Requirements ........................................................................................................ 22
Define the Project Scope ....................................................................................................................... 22
Define the Solution Concept .................................................................................................................. 23
    Application and Database Migration Solution Design Strategy ...................................................... 23
    Optimal Strategy ............................................................................................................................. 30
Set Up a Team ....................................................................................................................................... 30
    Special Considerations for Setting Up Your Migration Team ......................................................... 33
Define the Project Structure ................................................................................................................... 34
    Define Project Communications ..................................................................................................... 34
    Define Change Control ................................................................................................................... 35
Assess Risk ........................................................................................................................................... 36
Manage Risks ........................................................................................................................................ 37

Planning Phase ............................................................................................................................ 39


Introduction and Goals ........................................................................................................................... 39
Complete a Detailed Assessment of the Existing Environment ............................................................. 40
    Application ...................................................................................................................................... 41
    Database ........................................................................................................................................ 42
    Application Infrastructure ................................................................................................................ 42
Develop the Solution Design and Architecture ...................................................................................... 43
    Build the Conceptual Design .......................................................................................................... 44
    Build the Logical Design ................................................................................................................. 45
    Build the Physical Design ............................................................................................................... 48
Incorporate Design Considerations ....................................................................................................... 50
    Hardware Design Considerations ................................................................................................... 50
Validate the Technology ........................................................................................................................ 50
    SQL Server Editions and Features ................................................................................................. 51
    Windows Server 2003 .................................................................................................................... 52
    Technical Proof of Concept ............................................................................................................ 52
Develop the Project Plans ...................................................................................................................... 53
    Development Plan .......................................................................................................................... 53
    Stabilizing Phase Plans .................................................................................................................. 55
    Test Plan ........................................................................................................................................ 55
    Pilot Plan ........................................................................................................................................ 59
    Deployment Plan ............................................................................................................................ 60
Create the Project Schedules ................................................................................................................ 63
    Estimating the Effort ....................................................................................................................... 63
Set Up the Development and Test Environments .................................................................................. 65

Developing: Databases Introduction .................................................................................... 67


Introduction and Goals ........................................................................................................................... 67
Migrating the Database .......................................................................................................................... 68

Developing: Databases Migrating the Database Architecture ........................................... 71


Introduction and Goals ........................................................................................................................... 71
Build the SQL Server Instance .............................................................................................................. 71
    Pre-Installation Planning ................................................................................................................. 72
    Installation ...................................................................................................................................... 72
Configure the Server .............................................................................................................................. 73
    Configure Memory .......................................................................................................................... 74
    Set the CPU Affinity ........................................................................................................................ 76
    Configure the Listener .................................................................................................................... 77
Migrate the Storage Architecture ........................................................................................................... 77
    Blocks ............................................................................................................................................. 77
    Extents and Segments ................................................................................................................... 78
    Tablespaces and Datafiles ............................................................................................................. 79
    Storage Definition for Tables and Indexes ..................................................................................... 81
    Migrate System Storage Structures ............................................................................................... 82

Developing: Databases Migrating Schemas ........................................................................ 83


Introduction and Goals ........................................................................................................................... 83
Scripting Migrated Schema Objects ....................................................................................................... 84
    Script Everything ............................................................................................................................ 84
    Provide Support Documentation ..................................................................................................... 84
    Protect the Scripts .......................................................................................................................... 84
Migrate the Schema ............................................................................................................................... 85
    Map the Storage Architecture ......................................................................................................... 85
    Create Databases for the Schema ................................................................................................. 87
    Create Filegroups for the Tablespaces .......................................................................................... 90
    Add Datafiles to Filegroups ............................................................................................................ 90
    Add Transaction Logs ..................................................................................................................... 91
    Sample Schema Migration .............................................................................................................. 91
Migrate the Schema Objects ................................................................................................................. 98
    Create the Schema Owner ............................................................................................................. 98
    Create the Schema Objects ......................................................................................................... 102
    Comments .................................................................................................................................... 111
    Constraints ................................................................................................................................... 111
    Triggers ........................................................................................................................................ 119
    Indexes ......................................................................................................................................... 121
    Views ............................................................................................................................................ 123
    Stored Programs .......................................................................................................................... 126
    Solutions for Objects not Found in SQL Server ........................................................................... 129
    Sample Schema Object Migration ................................................................................................ 132

Developing: Databases Migrating the Database Users .................................................... 143


Introduction and Goals ......................................................................................................................... 143
Create User Accounts .......................................................................................................................... 143
Create Roles and Grant Privileges ...................................................................................................... 145
Sample User Migration ........................................................................................................................ 147

Developing: Databases Migrating the Data........................................................................ 153


Introduction and Goals ......................................................................................................................... 153
Planning the Data Migration ................................................................................................................. 153
    Options for Migration .................................................................................................................... 153
    Factors in Migration ...................................................................................................................... 155
    Migration Strategy ........................................................................................................................ 156
Executing the Data Migration ............................................................................................................... 157
    Pre-Implementation Tasks ............................................................................................................ 158
    Implementation Tasks .................................................................................................................. 160
    Post-Implementation Tasks .......................................................................................................... 165
    Validating the Data Migration ....................................................................................................... 165

Developing: Databases Unit Testing the Migration........................................................... 167


Introduction and Goals ......................................................................................................................... 167
Objectives of Testing ........................................................................................................................... 167
The Testing Process ............................................................................................................................ 168
    Test Database Integrity ................................................................................................................ 169
    Test Security ................................................................................................................................ 169
    Validate Data ................................................................................................................................ 170
    Validate the Migration ................................................................................................................... 170

Developing: Applications Introduction............................................................................... 173


Introduction and Goals ......................................................................................................................... 173
Application Migration Strategies .......................................................................................................... 174
    Interoperation ............................................................................................................................... 174
    Port or Rewrite to .NET Framework ............................................................................................. 175
    Port or Rewrite to Win32 .............................................................................................................. 175
    Quick Port Using Windows Services for UNIX ............................................................................. 176
Scenarios and Cases ........................................................................................................................... 176

Developing: Applications Migrating Oracle SQL and PL/SQL............................................ 177


Introduction .......................................................................................................................................... 177
Migrating Data Access ......................................................................................................................... 177
    Sample Tables .............................................................................................................................. 179
    Migration Process Overview ......................................................................................................... 181
    Step 1: Extraction of Data Access ................................................................................................ 182
    Step 2: Transaction Management ................................................................................................. 202
    Step 3: Fetch Strategy .................................................................................................................. 203
    Step 4: Subprograms Conversion ................................................................................................ 206
    Step 5: Job Scheduling ................................................................................................................. 213
    Step 6: Interface File Conversion ................................................................................................. 217
    Step 7: Workflow Automation ....................................................................................................... 217
    Step 8: Performance Tuning ......................................................................................................... 218

Developing: Applications Migrating Perl............................................................................ 223


Introduction and Goals ......................................................................................................................... 223
Introduction to the Perl DBI Architecture ............................................................................................. 224
Scenario 1: Interoperation of Perl on UNIX with SQL Server .............................................................. 225
    Case 1: Interoperating an ODBC DBD Application ...................................................................... 226
    Case 2: Interoperating an Oracle DBD Application ...................................................................... 228
Scenario 2: Porting the Perl Application to Windows ........................................................................... 232
    Case 1: Porting a Perl Application Using ODBC DBD .................................................................. 232
    Case 2: Porting a Perl Application Using Oracle DBD ................................................................. 234

Developing: Applications Migrating PHP ........................................................................... 237


Introduction and Goals ......................................................................................................................... 237
PHP Modules ....................................................................................................................................... 238
Scenario 1: Interoperating PHP on UNIX with SQL Server ................................................................. 240
    Case 1: Interoperating a PHP Application Using ORA Functions ................................................ 240
    Case 2: Interoperating a PHP Application Using OCI8 Functions ................................................ 244
    Case 3: Interoperating a PHP Application Using ODBC Functions .............................................. 246
Common Function Translation Issues ................................................................................................. 249
    Handling Transactions .................................................................................................................. 249
    Migrating Cursors ......................................................................................................................... 250
    Connection Pooling ...................................................................................................................... 251
    Stored Procedures ....................................................................................................................... 251
Scenario 2: Porting the Application to Win32 ...................................................................................... 252
    Case 1: Porting a PHP Application Using ORA Functions ........................................................... 252
    Case 2: Porting a PHP Application Using OCI8 Functions .......................................................... 252
    Case 3: Porting a PHP Application Using ODBC Functions ........................................................ 253

Developing: Applications Migrating Java .......................................................................... 255


Introduction and Goals ......................................................................................................................... 255
Scenario 1: Interoperating Java on UNIX with SQL Server ................................................................. 256
    Case 1: Interoperating a Java Application Using the JDBC Driver .............................................. 256
Scenario 2: Porting the Application to Win32 ...................................................................................... 259

Developing: Applications Migrating Python ...................................................................... 261


Introduction and Goals ......................................................................................................................... 261
Scenario 1: Interoperating Python on UNIX with SQL Server ............................................................. 263
    Case 1: Interoperating Using the mxODBC Module ..................................................................... 263
Scenario 2: Port the Python Application to Win32 ............................................................................... 266
    Case 1: Porting a Python Application Using mxODBC ................................................................. 266

Developing: Applications Migrating Oracle Pro*C ............................................................ 269


Introduction and Goals ......................................................................................................................... 269
Understanding the Technology ............................................................................................................ 270
    Understanding Pro*C .................................................................................................................... 270
    Understanding .NET and ADO.NET ............................................................................................. 270
Scenario 1: Rewriting Pro*C to the .NET Platform .............................................................................. 271
    Case 1: Rewrite the Application Using Visual Basic .NET ........................................................... 271

Developing: Applications Migrating Oracle Forms ........................................................... 279


Introduction and Goals ......................................................................................................................... 279
Migration Approach .............................................................................................................................. 279
Examining Oracle Forms ..................................................................................................................... 281
    Object Library ............................................................................................................................... 284
    PL/SQL Library ............................................................................................................................. 284
    Form Module ................................................................................................................................ 285
    Menu Module ................................................................................................................................ 288
    Windows and Canvases ............................................................................................................... 288
Understanding Visual Basic .NET ....................................................................................................... 290
Scenario 1: Rewriting to Visual Basic .NET ........................................................................................ 291
    Case 1: Using Oracle Designer .................................................................................................... 291
    Case 2: Manually Redesigning the Application ............................................................................ 291
Testing the Visual Basic .NET Application .......................................................................................... 295

Stabilizing Phase


Introduction and Goals
Testing the Solution
    Best Practices
    Preparing for Testing
    Types of Testing
    Bug Tracking and Reporting
    User Acceptance Testing and Signoff
Piloting the Solution
    Preparing for the Pilot
    Conducting the Pilot
    Evaluating the Pilot
Finalizing the Release

Deploying Phase


Introduction and Goals
Deploying the Solution
    Deploying the Database Server
    Deploying the Server Side Applications
    Deploying the Client Application
    Change Management
Stabilizing the Deployment
    Deployment Checklist
    Quiet Period
Transferring Ownership to Operations
    Project Team Tasks
    Operations Team Tasks
Conducting Project Review and Closure

Operations
Introduction and Goals
Operational Framework
Windows Environment Operations
    System Administration
    Security Administration
    Monitoring
    Additional Links
SQL Server Environment Operations
    Administration
    Security
    Monitoring
    Additional Links

APPENDICES


Appendix A: SQL Server for Oracle Professionals
    Architecture
    Administration
Appendix B: Getting the Best Out of SQL Server 2000 and Windows
    Performance
    Scalability
    High Availability
    Features and Tools
Appendix C: Baselining
    Creating Baselines
    Capturing Statistics
    Comparing and Reporting Results
Appendix D: Installing Common Drivers and Applications
    Installing FreeTDS
    Configuring FreeTDS
    Testing the FreeTDS Configuration
    Installing unixODBC
    Installing ActiveState Perl
Appendix E: Reference Resources

Solution Guide for Migrating Oracle on UNIX to SQL Server on Windows


Preface
Overview
The Oracle on UNIX to SQL Server on Windows Migration Guide provides practical guidance on the processes and procedures to be followed while migrating from Oracle databases (versions 8i and later) on UNIX platforms to Microsoft Windows Server 2003 and Microsoft SQL Server 2000. This guide also presents strategies and procedural guidance to convert existing applications for use within the SQL Server 2000 and Microsoft Windows Server environment. It provides additional information and references to online resources about migrating other components of your application ecosystem, including the server and network infrastructure, the development and test environment, and user accounts.

The guide is based on the experience of consultants working in the field and of organizations that have successfully migrated from an Oracle on UNIX to a SQL Server on Windows environment, and it comprises the best lab-tested, customer-proven, cross-product technical guidance from Microsoft on planning, building, deploying, and operating the migration solution. All processes and procedures contained in this guide have been validated through the Microsoft Technology Adoption Program (TAP), a beta program in which customers and partners have evaluated and validated the various technologies and practices described in this guide.

An important assumption made in this guide is that your organization has already decided to migrate all or parts of the Oracle database and application environment from UNIX to SQL Server 2000 in a Windows environment. Because of this assumption, this guide does not present a competitive analysis of Oracle versus SQL Server 2000, or UNIX versus Windows. This guide does help you decide which migration strategy is best for your organization given your specific business and technical requirements, and it explains how to plan for and execute that strategy.
This process-driven approach to migration is applied to the Oracle database, the UNIX applications that rely on the database, and the application programming interfaces (APIs) that connect the applications to the database. The strategies and methods for migrating these components vary according to the size and complexity of the database, application, and infrastructure of the Oracle source system. They also vary according to any new business or technical requirements your organization develops as a result of migrating to the Microsoft Windows platform. Because there is so much variance in the possible approaches to migration, this guide provides numerous examples and best practices to help you better understand both the concepts and the detailed processes involved in migration. A series of job aids (templates, spreadsheets, and questionnaires) is included as part of this solution guide to help you better analyze, plan, structure, and perform the migration.

The information under the following two headings outlines the technical scope of the migration strategies and methods provided in this guide.


Technical Subjects Covered in This Guide


The following technical topics and subject areas are included in this guide:
- Migration of a typical Oracle database, including objects (such as tables, indexes, views, triggers, and stored procedures) and the contents of the objects, to SQL Server.
- Methods and procedures that focus on taking advantage of existing applications on UNIX platforms by communicating with SQL Server on Windows.
- Migration of applications and APIs to work under Microsoft Win32, Microsoft Windows Services for UNIX, and Microsoft .NET.
- Application and API interoperability with SQL Server, including:
    - Java 2 Enterprise Edition (J2EE) and Java Database Connectivity (JDBC)
    - Open Database Connectivity (ODBC)
    - Perl, including the DBI and DBD modules
    - PHP
    - Python
    - Pro*C
    - Oracle Forms

Microsoft offers the SQL Server Migration Assistant toolset for the migration of Oracle databases to SQL Server, including:
- Migration Analyzer: analyzes the source Oracle databases and produces vital statistics on the size and complexity of the migration.
- Schema and Data Migrator: migrates the Oracle schemas and moves the data into the new SQL Server objects.
- SQL Converter: converts code found in Oracle objects to its T-SQL equivalents.
- Migration Tester: automatically tests the converted code.

These tools can be downloaded from http://www.microsoft.com/sql/migration/. The tools are in beta as of the date of publication for this solution. Version 1 of the tools is slated for release in June 2005 and will be available for download at the same location.
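To make the SQL Converter's role concrete, the following toy sketch shows the kind of mechanical Oracle-to-Transact-SQL syntax translation that such a converter automates. This is hypothetical illustration code, not part of the Migration Assistant toolset; the rewrite rules shown are a small, assumed subset, and real conversion requires full SQL parsing rather than regular expressions.

```python
import re

# Illustrative rewrite rules only (NOT the SQL Converter itself). Each pair
# maps an Oracle construct to its common T-SQL equivalent.
REWRITES = [
    (re.compile(r"\bNVL\s*\(", re.IGNORECASE), "ISNULL("),     # NVL(x, y) -> ISNULL(x, y)
    (re.compile(r"\bSYSDATE\b", re.IGNORECASE), "GETDATE()"),  # current date and time
    (re.compile(r"\s*\bFROM\s+DUAL\b", re.IGNORECASE), ""),    # SQL Server needs no DUAL table
]

def to_tsql(oracle_sql: str) -> str:
    """Apply the illustrative rewrite rules to a single Oracle SQL statement."""
    result = oracle_sql
    for pattern, replacement in REWRITES:
        result = pattern.sub(replacement, result)
    return result.strip()
```

For example, to_tsql("SELECT NVL(ename, 'n/a') FROM DUAL") returns "SELECT ISNULL(ename, 'n/a')". The Migration Tester exists precisely because mechanical rewrites like these must be verified against the original behavior.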

Technical Subjects That Are Not Covered in This Guide


The following technical topics and subject areas are not included in this guide:
- Oracle versions earlier than 8i, and Oracle 10g
- SQL Server versions earlier than SQL Server 2000
- Design of hardware architecture
- Modeling and design of databases
- Procedures for database administration functions, such as capacity planning, backup and recovery, performance tuning, and monitoring
- Aspects of the database that could be rendered inapplicable in the new environment, such as archived data and backups
- Features available in Oracle that do not have corresponding features in SQL Server, such as spatial data, XML DB, flashback query, SQLJ, and XSQL
- Installation of application clients, or client connectivity to the application server or database

- Migration of the operating system to Microsoft Windows Server 2003
- Support for the Object Management Group (OMG) open, Java-based standard for metadata
- Data extraction from enterprise resource planning (ERP) systems


While these technical subjects are not explicitly discussed in this guide, information about many of them is available from Microsoft. Where appropriate throughout this guide, links to more information are provided.

Intended Audiences
Generally, the intended audience for this solution guide includes medium-sized and large IT organizations that want to migrate their databases and applications from Oracle on UNIX to SQL Server 2000 on Windows. More specifically, this guide is written for all of the members of the migration team, from business decision makers to development to operations. As a result, not all chapters in this guide are relevant to all team members, and the knowledge prerequisites for using this guide are expressed as an aggregate of the knowledge required of the team. The rest of this section provides brief descriptions of the typical audiences for this guide, followed by a brief statement of the aggregate knowledge prerequisites for developing the solution described in this guide. The intended audience for individual chapters is identified in the "How to Use This Solution Guide" section later in this Preface.

The following list organizes common job titles and organizational responsibilities under the Microsoft Solutions Framework (MSF) role most commonly associated with that title or responsibility. The relationship between this solution guide and MSF is discussed in more detail in the "How to Use This Solution Guide" section. In some organizations, a role may be filled by a single team member; in others, by many.

Stakeholders and High-Level Decision Makers. These team members can be subdivided into two groups:
- Business Decision Makers (BDMs). This group often includes the CIO and IT director of an organization. These personnel decide the business priorities for purchasing and implementing solutions within the organization. IT BDMs require a high-level understanding of the technical solutions being explored so that they can assess the value that the solution can provide to the organization.
- Technical Decision Makers (TDMs). The focus of TDMs is to determine the technology used to solve business problems. IT TDMs must understand the business drivers their organizations face, along with the functionality delivered by multiple technology scenarios.

Product Management Role. This role usually includes team members with the following job title and responsibilities:
- Product Manager. This team member acts as a link between the members of the project team and the customers of the solution by handling high-level communication and managing the customer's expectations. The Product Manager ensures that the team addresses the customer's needs, concerns, questions, and feedback throughout the project. Product Managers must understand the business needs of the solution, including operations, business processes, and policies.

User Experience Role. This role usually includes team members with the following job titles and responsibilities:


- Usability Expert or Technical Writer/Editor. Ensures that the new solution is easy for customers to use. The Usability Expert also ensures that documentation and training are provided so that end users can use the new solution effectively.

Program Management Role. This role usually includes team members with the following job title and responsibilities:
- Project Managers. These personnel are responsible for managing the resources and schedule for the migration project from start to finish.

Development Role. This role usually includes team members with the following job titles and responsibilities:
- IT Architect. The primary role of the IT Architect is to carry out decisions made by the TDMs. Team members with this responsibility are most involved during the Planning, Developing, and Stabilizing Phases of the migration project.
- Oracle Database Administrators. These personnel are knowledgeable about the physical characteristics, logical characteristics, performance, and service requirements of the source database. These personnel also have knowledge of any scripts and tools that are used in database administration.
- SQL Server Database Administrators. These personnel are experts in architecting and implementing SQL Server databases. These team members will design the target SQL Server database environment and perform key tasks during the actual migration.
- UNIX System Administrators. These personnel have complete knowledge of the UNIX servers and the operating system installations, such as versions, packages, configuration, and security.
- Windows System Administrators. These personnel have complete knowledge of the Windows operating system and the Microsoft application product line. These team members should have some experience in defining and implementing server technologies on Windows.
- UNIX Application Developers. These personnel have knowledge of the applications that are running in the current UNIX environment and their implementation details.
- Windows Application Developers. These personnel have performed development in the Windows environment for the technologies involved in the solution.
- Security Specialists. These personnel have a wide base of knowledge about security issues, including how security relates to networks, servers, databases, and applications.

Test Role. This role usually includes team members with the following job titles and responsibilities:
- Test Engineering Managers. These personnel have knowledge about testing application and database solutions and also have managerial experience.
- UNIX Database and Applications Testers. These personnel have knowledge about all databases and applications that are running in the current UNIX environment and provide input when test plans are created for the migrated environment.
- Windows Database and Applications Testers. These team members have knowledge of databases and applications in both UNIX and Windows. These personnel are experienced at creating and executing test plans.

Release Management Role. This role usually includes team members with the following job titles and responsibilities:

- Deployment Manager. These personnel have knowledge of managing the deployment of the new database and applications in the production environment.


- Technology Specialist. These personnel have knowledge of the technology and processes in a data center operation. They have intimate knowledge of the solution being developed and will aid in transitioning it into the production environment.

Knowledge Prerequisites
It is assumed that the technically oriented audience for this guide has, in aggregate, the following specific technology competencies and proficiencies:
- Expert-level UNIX development and administration
- Expert-level Windows development and administration
- Expert-level SQL Server and Oracle administration
- Knowledgeable staff to maintain the new environment and the technologies involved
- Knowledgeable staff who understand any interaction between the UNIX and Windows environments when interoperability is created and maintained

How to Use This Solution Guide


The organization of this guide is based on the industry-proven need to manage IT projects according to a disciplined process that improves the likelihood of the project's success. It is designed to be used with a companion guide, the UNIX Migration Project Guide (UMPG). While this guide contains the technical and solution-specific information needed for the project, the UMPG provides the disciplined process steps for using this information in the context of a migration project and a team organization model ("people and process" guidance).

To facilitate their side-by-side use, both guides use project phases as an organizational device. Specifically, they follow the structure of the Microsoft Solutions Framework (MSF), which defines five distinct phases for IT projects: Envisioning, Planning, Developing (or Migrating), Stabilizing, and Deploying. Each guide presents the information (process or technical) needed for a phase within chapters named for that phase. For example, in this solution guide, the business and technical information needed for the initial decision-making is in the Envisioning chapter, and detailed procedures and scripts are in the Migrating chapters.

The UMPG is essentially "MSF applied to UNIX migration projects." It begins with an overview of MSF and then describes the processes that belong to each phase and the team roles responsible for them; it is not, however, meant to serve as a comprehensive introduction to MSF. In-depth information about MSF is available to interested readers on the Microsoft Solutions Framework Web site at http://www.microsoft.com/msf.

The reason for separating the process guidance from the technical and project-specific guidance is to keep this guide as lean as possible. It is recognized that some readers need to focus narrowly on project tasks, while persons with project management and team lead responsibilities need to digest the UMPG guidance and apply it to the project. Because organizational personnel and project team members tend to have different levels of involvement during different phases, the division of content according to project phase also supports the ability to focus on the material that is most relevant to a particular responsibility.



It is important for you to note the content scope of this guidance. The application migration and interoperation-related information in this solution only discusses issues pertaining to the database connectivity logic. It does not discuss general application porting issues between UNIX and Windows, such as mapping UNIX system calls to Windows APIs and migrating the graphical user interface (GUI). For detailed information about UNIX to Windows application migration, refer to the UNIX Application Migration Guide (UAMG) available at http://msdn.microsoft.com/library/default.asp?url=/library/en-us/dnucmg/html/ucmglp.asp.

Note: Although the two guides are designed to be used together, it is not necessary to follow the MSF processes and team guidance described in the UMPG if the organization has an alternative project methodology in place. In that case, the UMPG would be used merely to map the MSF phases and team structure to the elements of the organization's methodology.

Readers who will use this guide to implement a project should read at least the overview of MSF in the UMPG to familiarize themselves with the MSF Process Model, the MSF Team Model, and MSF terminology. Figure 0.1 represents the chapters of interest to readers with different areas of responsibility.

Figure 0.1
Chapters of interest by team role


Organization by Chapter
The following chapter list provides an overview of the content and structure of this guide.
- Preface.
- Chapter 1: Introduction to the Microsoft Solutions Framework. This chapter provides a brief introduction to the Microsoft Solutions Framework (MSF) and how it is applied to migration projects.
- Chapter 2: Envisioning Phase. This chapter provides guidance to carry out the various activities of the Envisioning Phase in an Oracle on UNIX to SQL Server on Windows migration project. These activities include understanding your business and technical goals, assessing your current situation at a high level, building a high-level solution concept, setting up the project team, and assessing project risks.
- Chapter 3: Planning Phase. This chapter provides guidance to carry out the various activities of the Planning Phase in an Oracle on UNIX to SQL Server on Windows migration project. These activities include creating a detailed assessment of the current environment, developing the solution design and architecture, and producing detailed project plans and a project schedule.
- Chapter 4: Developing: Databases Introduction. This chapter introduces the different tasks in migrating the Oracle database to SQL Server 2000.
- Chapter 5: Developing: Databases Migrating the Database Architecture. This chapter describes the steps for creating an instance of SQL Server that is equivalent in architecture to the original Oracle database.
- Chapter 6: Developing: Databases Migrating Schemas. This chapter shows how to migrate a schema owner and its objects to a SQL Server database.
- Chapter 7: Developing: Databases Migrating the Database Users. This chapter contains detailed steps for creating users in the SQL Server databases and granting them the same kind of privileges they had in the original Oracle database.
- Chapter 8: Developing: Databases Migrating the Data. This chapter explores the different options for migrating the application data from Oracle to SQL Server and provides details on the use of each option.
- Chapter 9: Developing: Databases Unit Testing the Migration. This chapter contains the processes for testing the migrated database, its objects, and data.
- Chapter 10: Developing: Applications Introduction. This chapter introduces the procedures to migrate the application and the API (the connectivity tier between the database and the application) from the existing solution to the proposed solution.
- Chapter 11: Developing: Applications Migrating Oracle SQL and PL/SQL. This chapter provides specific steps and procedures for converting SQL statements and PL/SQL code for use with SQL Server.
- Chapter 12: Developing: Applications Migrating Perl. This chapter provides specific steps and procedures for migrating Perl applications to the SQL Server on Windows environment.
- Chapter 13: Developing: Applications Migrating PHP. This chapter provides specific steps and procedures for migrating PHP applications to the SQL Server on Windows environment.
- Chapter 14: Developing: Applications Migrating Java. This chapter provides specific steps and procedures for migrating Java applications to the SQL Server on Windows environment.


- Chapter 15: Developing: Applications Migrating Python. This chapter provides specific steps and procedures for migrating Python applications to the SQL Server on Windows environment.
- Chapter 16: Developing: Applications Migrating Oracle Pro*C. This chapter provides specific steps and procedures for migrating Pro*C applications to the SQL Server on Windows environment.
- Chapter 17: Developing: Applications Migrating Oracle Forms. This chapter discusses the options available for migrating an Oracle Forms-based application.
- Chapter 18: Stabilizing Phase. In this chapter, the solution is tested in an environment similar to the production environment. This chapter describes the various testing options and processes that can be implemented to ensure the quality of the solution.
- Chapter 19: Deploying Phase. This chapter describes the procedure for moving the solution into the production environment and transferring ownership of the solution to operations. The chapter also identifies the tests that need to be performed on the solution during deployment.
- Chapter 20: Operations. This chapter provides references to existing operational guides for administering a Windows-based server infrastructure and SQL Server 2000-based databases.
- Appendix A: SQL Server for Oracle Professionals. This appendix provides information to help Oracle DBAs who do not have SQL Server experience learn about its administration.
- Appendix B: Getting the Best Out of SQL Server 2000 and Windows. This appendix provides references to material to optimize and take advantage of SQL Server and Windows.
- Appendix C: Baselining. This appendix provides detailed information about collecting statistics in SQL Server.
- Appendix D: Installing Common Applications and Drivers. This appendix provides installation instructions for applications and drivers that may be used while migrating the application.


Document Conventions
This guide uses the style conventions shown in Table 0.1.

Table 0.1: Document Conventions

Bold text: Used in the context of paragraphs for commands; literal arguments to commands (including paths when they form part of the command); switches; and programming elements, such as methods, functions, data types, and data structures. User interface elements are also identified with bold font.

Italic text: Used in the context of paragraphs for variables to be replaced by the user. It is also used to emphasize important information.

Monospace font: Used for excerpts from configuration files, code examples, and terminal sessions.

Monospace bold font: Used to represent commands or other text that the user types.

Monospace italic font: Used to represent variables the reader supplies in command line examples and terminal sessions.

UPPERCASE: Used for Transact-SQL keywords and SQL elements.

%SystemRoot%: The folder in which Windows Server 2003 is installed.

Job Aids
The job aids included with this guide can be used to support the various tasks performed during the different phases of the migration. The aids are in the form of questionnaires, tools, and templates used for designing, planning, and managing the migration project. The job aids are either in Microsoft Excel spreadsheet or Microsoft Word document format. They are available in the Tools and Templates folder in the zip file download of this guidance at http://go.microsoft.com/fwlink/?LinkId=45289. The purpose of providing job aids is to help you speed up the development of your project documents. New or additional guidance is not provided in the job aids. Instead, the job aids are generic templates that you should customize to fit your requirements.


Table 0.2: Job Aids Descriptions

Vision/Scope. Use this template to describe the goals of the project and the approaches that the team will use to achieve those goals.

Project Structure. Use this template to define how the project is structured and managed and how the solution will be created. This template will help you control and coordinate the entire project.

Migration Risk Exposure Rating Form. This spreadsheet will help you identify the possible migration risks and their consequences. In addition, this document also helps you to prioritize the risks and prepare a mitigation plan for each risk.

Assessing the Current Environment Template. This template provides separate spreadsheets to assist in assessing the application, database, and server environments.

Development Plan. This template helps you to describe the solution development process used for the project. This plan provides the technical details of what will be built, and it provides consistent guidelines and processes to the teams creating the solution.

Test Plan. This template helps you to describe the strategy and approach used to plan, organize, and manage the project's testing activities. It identifies testing objectives, methodologies and tools, expected results, responsibilities, and resource requirements.

Pilot Plan. This template contains information to help prepare and execute a successful pilot. It includes pilot scope, user interactions, and pilot evaluation.

Deployment Plan. This template helps you to describe the actions necessary for a smooth deployment and transition to ongoing operations. It covers the processes of preparing, installing, testing, training, and transferring the solution to operations.


Contributors
The following organizations and individuals contributed to this project.

Program Manager
Dhilip Gopalakrishnan (Microsoft)

Author
Scalability Experts

Editors
Thomas Olsen (Volt Technical Services)
Gaile Simmons (Microsoft)

Architects
Peter Skjøtt Larsen (Microsoft)
Jason Zions (Microsoft)

Test
Sandor Kiss (Microsoft)
Infosys Technologies

Feedback
Please direct questions and comments about this guide to cisfdbk@microsoft.com.


1
Introduction to the Microsoft Solutions Framework
Introduction and Goals
For large-scale projects, such as database migrations, it is vital to have a cohesive and structured project framework in place. This foundation ensures that projects are carefully planned, roles and tasks are clearly identified and defined, and that the thousands of crucial details are addressed for a successful migration. In this chapter, you will learn about Microsoft Solutions Framework (MSF), which has been used successfully on numerous IT projects. This adaptable and robust project framework provides a comprehensive structure that assists in guiding the project from the initial planning stages until after the project has been completed. If you are familiar with MSF, skip this chapter and continue with Chapter 2, "Envisioning Phase."

Overview of MSF
Every successful project follows a methodology to achieve its objectives. The methodology employed in this migration guide is the Microsoft Solutions Framework. A vast amount of information about MSF is available from Microsoft, and the UNIX Migration Project Guide provides a thorough explanation of MSF for specific audiences of this migration guide, such as the team members that fulfill the Program Management role. Yet not every team member on your migration project needs in-depth knowledge of MSF to successfully perform their team function. Because this is the case, the following brief overview is intended to provide all team members with a high-level overview of MSF that will familiarize them with basic concepts and terminology and help them to better understand the specific sections of this migration guide that they will need to read, understand, and execute. MSF was created to maximize the success of IT projects throughout the entire IT life cycle. This information is derived from the experience gained within Microsoft on large-scale software development and service operation projects, the experience of Microsoft's consultants, and common best practices from the worldwide IT industry. As opposed to a prescriptive methodology, MSF provides a flexible and scalable framework to meet the needs of any size of organization or project team. MSF guidance consists of principles, models, and disciplines for managing the people, process, and


technology elements that most projects encounter. A detailed introduction to the MSF models and disciplines is available at http://www.microsoft.com/msf.

MSF Foundational Principles


At the core of MSF are eight foundational principles. For detailed information on the eight foundational principles, download the MSF Version 3 Overview from http://msdn.microsoft.com/vstudio/enterprise/msf/. The eight foundational principles of MSF are:

- Foster open communications. The MSF Process Model enables an open and free flow of information among the team members and key stakeholders to prevent misunderstandings and reduce the probability that work will have to be redone. Documenting the progress of the project and making it available to the team members, stakeholders, and customers can best achieve this.
- Work toward a shared vision. The MSF Process Model provides a phase (the Envisioning Phase) and a separate milestone (Vision/Scope Approved) for creating a shared vision. A vision includes detailed understanding of the goals and objectives that the solution needs to achieve. A shared vision highlights the assumptions that the team members and customers have for the solution.
- Empower team members. Empowering the team members implies that the members accept responsibility and ownership of work assigned to them. Team empowerment can be achieved by preparing schedules where the team members commit to complete their work on a fixed date. This makes the team members feel accountable and also provides a method for identifying any potential delays early in the project.
- Establish clear accountability and shared responsibility. The MSF Team Model is based on the principle that each role is accountable for the quality of the solution. All the team members share the overall responsibility of the project because the project can fail due to a mistake made by a single member.
- Focus on delivering business value. The solution must deliver value to the organization in the form of business value. This business value is achieved only after the solution is completely deployed into the production environment.
- Stay agile, expect change. MSF assumes that the solution will encounter continuous changes before being deployed to the production environment. The team should be aware of and prepared to manage such changes.
- Invest in quality. In MSF, each team member is responsible for the quality of the solution. To confirm the quality throughout the project's duration, a test team is formed. This ensures that the solution meets the quality level of the stakeholders.
- Learn from all experiences. MSF states that the experiences derived from one project should be used and shared with teams in other projects. These experiences can also help to identify the best practices that should be followed in your organization.

The MSF Team Model Overview


The MSF Team Model was developed over a period of several years to compensate for some of the disadvantages imposed by the top-down, hierarchical structure of traditional project teams. Teams organized under the MSF Team Model are small and multidisciplinary teams of peers, although the model is scalable for both small and large projects. Team members share responsibilities and balance each other's competencies to keenly focus on the project at hand. They are expected to share a common project


vision, a focus on deploying the project, high standards for quality and communication, and a willingness to learn. Figure 1.1 shows the role clusters of the MSF Team Model.

Figure 1.1
The MSF Team Model

The MSF Team Model emphasizes the importance of aligning role clusters (commonly referred to simply as "roles") to business needs. It does this by organizing each role around a quality goal that the project must meet to be successful. Each role aggregates a "cluster" of related functional areas and responsibilities. The functional areas each require a different discipline and focus, but are related in that they contribute toward meeting the quality goal. The result is a well-balanced team whose skills and perspective represent all of the fundamental project goals. For team members, possessing a clearly defined role and owning a clearly defined goal is motivational. It increases the understanding of responsibilities, and ultimately leads to a better solution. Because each goal is critical to the success of a project, the roles that represent these goals are considered to be equally important. Persons filling the roles are given equal say in critical decisions and are thought of as peers. MSF teams are known as "teams of peers." Table 1.1 associates each role cluster with a quality goal.

Table 1.1: Role Clusters and Quality Goals

Program Management: Deliver the solution within project constraints
Development: Build to specification
Test: Approve for release only after all product quality issues are identified and addressed
User Experience: Enhance user effectiveness
Release Management: Achieve smooth deployment and ongoing operations
Product Management: Satisfy customers

The MSF Process Model Overview


The MSF Process Model describes a high-level sequence of activities for building and deploying IT solutions. Rather than prescribing a specific series of procedures, it is flexible enough to accommodate a broad range of IT projects. MSF combines two industry standard models: the waterfall model, which emphasizes the achievement of milestones, and the spiral model, which focuses on the continual need to refine the


requirements and estimates for a project. An innovative aspect of the MSF process model is that it covers the life cycle of a solution from project inception to live deployment. This helps project teams focus on customer business value, because no value is realized until the solution is deployed and in operation.

Figure 1.2
The MSF Process Model showing phases and major milestones

MSF is a milestone-driven process. Milestones fall at the end of each phase and contain criteria for completing the phase. Important deliverables must be complete, and critical questions must be satisfactorily answered, such as: Does the team agree on the project scope? Have we planned enough to proceed? Have we built what we said we would build? Is the solution working properly for the customer? The project team and key stakeholders review the deliverables and reach agreement that the project can proceed to the next phase in a milestone meeting.

MSF is also an iterative process. The process model is designed to accommodate changing project requirements by iterating through short development cycles and incremental versions of the solution. The iterative aspect of the MSF process applies well to migration projects, which are frequently driven by an iterative process. In some cases, the migration task itself is approached iteratively: the first cycle migrates limited, basic functionality to the new platform, and subsequent cycles add capabilities to the new environment until it is equivalent to the original, unmigrated technology. In other migration projects, the first cycle completely migrates some technology to a new environment, while subsequent cycles extend the technology beyond its original capabilities. Iterative approaches to migration projects provide a means to control project risk and create greater flexibility to accommodate changing requirements.

The MSF Process Model originated with the process used by Microsoft to develop applications. This model may be applied to traditional application development environments, but is equally appropriate for the development and deployment of enterprise infrastructure solutions, Web development, e-commerce, and distributed applications.
Although the Program Management Role orchestrates the overall process within each phase, the successful achievement of each milestone requires special leadership and accountability from each of the other team roles. As a project moves sequentially through each phase, the level of effort for each of the roles varies. The use of milestones helps to manage this ebb and flow of involvement in the project.

Solution Guide for Migrating Oracle on UNIX to SQL Server on Windows

Table 1.2: Major Milestones and Primary Drivers

Vision/Scope Approved: Product Management
Project Plans Approved: Program Management
Scope Complete: Development and User Experience
Release Readiness Approved: Test and Release Management
Deployment Complete: Release Management

Each phase also has interim milestones that lead to the achievement of the final phase milestone. Recommended interim milestones are shown in Figure 1.3, but they may need to be modified for a particular project.

Figure 1.3
MSF Process Model with interim milestones

The MSF Disciplines Overview


MSF makes use of three classic disciplines, Risk Management, Readiness Management, and Project Management, which it has adapted to fit the framework. They are reflected in both the Process Model and the role responsibilities defined in the Team Model. This section describes each discipline briefly. For a thorough discussion of each discipline and its application within MSF, see the respective white papers available at http://msdn.microsoft.com/vstudio/enterprise/msf/.

Risk Management
The MSF Risk Management Discipline advocates proactive risk management, continuous risk assessment, and integration into decision-making throughout the project and operational life cycle. Risks are continuously assessed, monitored, and actively managed until they are either resolved or the possible negative event happens and the risks have


become real problems to be handled as such. The MSF risk management process defines six logical steps the team uses to manage current risks, plan and execute risk management strategies, and capture knowledge for the enterprise. Figure 1.4 illustrates the relationship between the six steps.

Figure 1.4
The MSF risk management process

The following list provides detailed information about each of the six risk management steps:

- Identify. The process of risk identification calls for all team members to participate in surfacing risks to make the team aware of potential problems. As the input to the risk management process, risk identification should be undertaken as early as possible and repeated frequently throughout the project life cycle.
- Analyze and Prioritize. Risk analysis transforms the estimates or data about specific project risks that surface during risk identification into a form that the team can use to make decisions regarding prioritization. Risk prioritization enables the team to commit project resources to manage the most important risks.
- Plan and Schedule. Risk planning takes the information obtained from risk analysis and prioritization and uses it to formulate strategies, plans, and actions. Risk scheduling ensures that these plans are approved and then incorporated into the project management process and infrastructure to ensure that risk management is carried out as part of the day-to-day activities of the team. Risk scheduling explicitly connects risk planning with project planning.
- Track and Report. Risk tracking monitors the status of specific risks and the progress in their respective action plans. Risk tracking also includes monitoring the probability, impact, exposure, and other measures of risk for changes that could alter priority, risk plans, and project features, resources, or schedule. Risk tracking enables visibility of the risk management process within the project from the perspective of risk levels, as opposed to the task completion perspective of the standard operational project management process. Risk reporting ensures that the team, sponsor, and other stakeholders are aware of the status of project risks and the plans to manage them.
- Control. Risk control is the process of executing risk action plans and their associated status reporting. Risk control also includes initiation of project change control requests when changes in risk status or risk plans could result in changes in project features, resources, or schedule.
- Learn. Risk learning formalizes the lessons learned from the project, collects the relevant project artifacts and tools, and captures that knowledge in reusable form.


Readiness Management
The MSF Readiness Management Discipline defines readiness as a measurement of the current state versus the desired state of knowledge, skills, and abilities (KSAs) of individuals in an organization. This measurement is the real or perceived capabilities of the individuals at any point during the ongoing process of planning, building, and managing solutions. Each role on a project team includes key functional areas that the individuals undertaking those roles must be capable of fulfilling. Individual readiness is the measurement of the state of an individual with regard to the KSAs needed to meet the responsibilities required of their particular role.

The MSF Readiness Management Discipline includes a process to help teams prepare for the KSAs needed to build and manage projects and solutions. The most basic approach to the readiness process is simply to assess skills and make appropriate changes through training. On projects that are small or have short timeframes, this streamlined approach is quite effective. For longer-term or serial projects, organizations can benefit from performing the steps of defining the skills needed, evaluating the results of change produced by training, and keeping track of KSAs. This allows for the full realization of readiness management, and is typically where organizations reap the rewards of investments in readiness activities.

The readiness management process is composed of four steps: Define, Assess, Change, and Evaluate. Each process step includes a series of tasks to help reach the next step.

Figure 1.5
The readiness management process

Define
This step focuses on defining requirements. It identifies and describes the scenarios, competencies, and proficiency levels needed to successfully plan, build, and manage the solutions. It also determines which roles in the team should be proficient in the defined competencies. Depending on the role, the individual filling it may need to be proficient in one or many of the defined competencies. Scenarios describe the different types of activities that occur in a typical enterprise. Scenarios generally fall into one of four categories defined in terms of the business value they offer the organization: High Potential, Strategic, Key Operational, and Support. Different scenarios call for different types of skills and knowledge and distinct approaches to obtaining the appropriate resources and skills for that project type.

- High Potential. These focus on the situations an IT department encounters when planning and designing to deploy, upgrade, or implement a new product, technology, or service in its organization. These are typically research-type situations in which the technology is brand new or in beta form. The organization needs to have a high degree of agility and the capability to investigate and evaluate new technologies. For these scenarios, it needs to be prepared to obtain (for a short period) the best expertise available.
- Strategic. Scenarios in this category focus on the situations an IT department is likely to encounter when exploiting new technologies, products, or services. These are typically market-leading solutions that could lead to business transformation, defining the "to-be" long-term architecture. Here the organization needs in-house, in-depth expertise at the solution architect level and the capability of bridging skills across technology to the business.
- Key Operational. Scenarios in this category focus on the situations an IT department is likely to encounter once it has deployed, upgraded, or implemented a new product, technology, or service that has to coexist, or continue to seamlessly interact, with legacy software and systems. These are typically today's business-critical systems, aligned with the "as-is" technology architecture. Quality of technical knowledge and process are critical, as is ready availability of the skills applicable to the relevant technologies. The challenges are easier to plan for when the technologies are proven. Typically, organizations obtain the readiness and skills needed in this scenario by outsourcing or by developing strong in-house capability.
- Support. Scenarios in this category focus on situations in which it is necessary to extend the product to fit the needs of a customer's environment. These are typically valuable but not business-critical solutions and often involve legacy technology. Here, cost of delivery becomes paramount and the organization may decide to rely on external skills (particularly for legacy) on a reactive basis.

When determining the most appropriate scenario for a migration project, keep in mind that the technology being migrated was initially deployed under one scenario. For example, it might have been implemented when the technology was new, or it might have been a key operational technology. As the new migrated environment is envisioned, though, a different scenario may apply. If the technology has matured, for example, what was a high potential project may be treated, in migration, as a key operational scenario. Alternatively, what might have originally been a classical support-scenario project might involve, as a result of migration to newer technology, something more akin to the high potential scenario. Identifying the most appropriate scenario helps to map the appropriate competencies and proficiencies required for the migration.

Competencies describe the measurable objectives, or tasks, in a given scenario that an individual needs to be able to complete with proficiency. A single competency is used to define a major part of an individual's job or job responsibility relating to performance. A competency can be considered a combination of skills, knowledge, and performance requirements.

Proficiencies are the measure of the capability to execute tasks associated with a particular competency within a given scenario. An individual's proficiency level provides a baseline for analyzing the gap between that individual's current skill set and the necessary skills for completion of the tasks associated with the given scenario.

Solution Guide for Migrating Oracle on UNIX to SQL Server on Windows

Assess
The assess step focuses on the individual team members. It determines the competencies that these individuals currently possess. It is during this step that analysis of the competencies as they relate to the various job roles is undertaken to determine the skills of individuals within each of these roles. The desired competencies are then analyzed against the current competencies: the "to-be" versus the "as-is". This work enables the development of learning plans, so that desired competency levels can be reached. The following tasks need to be performed to complete the assess step:

- Measure individuals' knowledge, skills, and abilities through self-assessments or skills assessments (tests).
- Analyze gaps by comparing the individual's KSA measurements to the expected proficiency level for the role. Individuals can then concentrate on bridging these gaps through the use of learning plans.
- Create learning plans that identify the appropriate resources and methods (such as training materials, courseware, white papers, computer-based training, mentoring, on-the-job, and self-directed training) to fill the gaps. Learning plans should consist of both formal and informal learning activities, and guide individuals through the process of moving from one proficiency level to the next.
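The gap-analysis task in the assess step can be sketched as a comparison of current and desired proficiency levels per competency. The following Python sketch is a hypothetical illustration: the competency names and the numeric proficiency scale are assumptions made for the example, not something MSF prescribes.

```python
# Illustrative readiness gap analysis: compare an individual's current
# proficiency against the level desired for the role, per competency.
# Competency names and the 1-4 proficiency scale are hypothetical.

desired = {"T-SQL development": 3, "SQL Server administration": 4, "Windows scripting": 2}
current = {"T-SQL development": 1, "SQL Server administration": 4, "Windows scripting": 3}

def skill_gaps(desired, current):
    """Return only the competencies where current proficiency falls short."""
    return {
        skill: level - current.get(skill, 0)
        for skill, level in desired.items()
        if current.get(skill, 0) < level
    }

gaps = skill_gaps(desired, current)
# Only shortfalls feed the learning plan; met or exceeded skills are dropped.
print(gaps)  # {'T-SQL development': 2}
```

A learning plan would then target only the competencies that appear in the result, which mirrors the "to-be" versus "as-is" comparison described above.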

Change
In this step, individuals advance their skills through learning in order to bridge the gap between their current proficiency and desired proficiency levels. In this step, the following tasks are accomplished:

- Train individuals using the actual training, hands-on learning, and mentoring techniques described in the learning plans.
- Track the learning of each individual.
- Monitor and report the progress of individuals and their skills by scenario and competency. Tracking progress enables individual and overall readiness to be determined at any time in the life cycle.

Evaluate
This step determines whether the learning plans were effective and whether the lessons learned were successfully implemented on the job. At this point it is time to:

- Review results to see if more training is necessary or if what was learned is being implemented on the job.
- Manage knowledge to foster the sharing of the new intellectual resources. This sharing of knowledge enhances the collective knowledge of the solution team and the enterprise and fosters a learning community. Additionally, a formal knowledge management system can provide a way to share common and repeatable best practices that help reduce costs and risks for other project teams.

The MSF Readiness Management Discipline is considered an ongoing, iterative approach to readiness. Following the steps in the process helps manage the various tasks required to align individual, project team, or organizational KSAs. It can lead to better individual, project team, and strategic planning success.


Project Management
The third important discipline adopted by MSF is the Project Management Discipline. To deliver a solution within project constraints, strong project management skills are essential. The MSF Team Model does not contain a role known as Project Manager; however, most project management functions are conducted by the MSF Program Management Role. Project management is a set of skills and techniques that include:

- Integrating planning done for each aspect of the project.
- Conducting change control.
- Defining and managing the scope of the project.
- Preparing a budget and managing costs.
- Preparing and tracking schedules.
- Getting the right resources allocated to the project.
- Managing contracts and vendors, and procuring project resources.
- Facilitating team and external communications.
- Facilitating the risk management process.
- Documenting and tracking the team's quality management process.

Three distinctive characteristics of the MSF approach to project management stand out:

- Most of the responsibilities of the role commonly known as "Project Manager" are encompassed in the MSF Program Management Role Cluster. As the size and complexity of a project grows, this role cluster is broken out into two branches of specialization: one dealing with architecture and specifications and the other dealing with project management.
- In larger projects requiring scaled-up MSF teams, project management activities occur at multiple levels. As the teams grow in number, the project management activities become distributed among the team leads for each of the MSF team roles. The Program Management Role is then responsible for management of the rolled-up work from the leads in order to manage the entire solution.
- Some large or complex projects require a specialist Project Manager or project management team. As the size, complexity, or risk becomes very large, often specialist roles or teams are created to focus on one particular area. For the Program Management Role, this can be done to break the many project management tasks into smaller, more manageable responsibility areas. These could include a specialist project manager, risk manager, solution architect, schedule administrator, and so on.

The differentiating factor of the MSF approach to project management is that the project management job function and activities do not impose a hierarchical structure on the decision-making process. MSF advocates against a rigid, dictatorial project management style because it works against an effective team of peers. The team of peers approach is a key success factor of MSF. All roles in MSF are considered equally important, and major decisions are arrived at by consensus of the core team. If that consensus cannot be achieved, the Program Management Role plays a tiebreaker function, making the final decision on the issue by transitioning into a decision leader to drive the project forward. This decision is made from the perspective of meeting the customer's requirements and delivering the solution within the constraints. Afterward, the team immediately resumes its normal peer relationships.


Overview of MSF Phases


This section describes the MSF phases, which structure both the migration project and the organization of the rest of this guide.

Envisioning Phase
In the Envisioning Phase, the team identifies the vision and scope of the project by preparing a vision and scope document. During this phase, goals for the project are formed and a vision statement is created that defines the entire project. This shared vision helps the team to work towards a common objective. The project team is assembled and the team members are empowered by having roles and responsibilities assigned to them. Another important activity in this phase is the identification of risks and preparation of mitigation and contingency plans. The risks identified and the mitigation plans are used throughout the life cycle of the project. Refer to Chapter 2, "Envisioning Phase," for more details on the tasks and deliverables of the Envisioning Phase.

Planning Phase
During this phase, the existing environment is assessed to form a solution design. The main activity of this phase is detailed planning to ensure the success of the project. In this phase, a master project plan deliverable is created that consists of all sub-plans, such as the test plan and the development plan. The plan focuses on budgets, quality, schedule, and the technical implementation of the solution. The project team follows these plans in the subsequent phases. Refer to Chapter 3, "Planning Phase," for more details on the tasks and deliverables of the Planning Phase.

Developing Phase
During this phase, the application components undergo transformation based on the development plans. Components that need a rewrite are developed. For components that are ported, changes for syntax compliance, performance, and security are carried out. The source code and executables form the deliverables of this stage. Refer to Chapter 4, "Developing: Databases Introduction," for more information on how to migrate a database. Chapter 10, "Developing: Applications Introduction," provides different scenarios for migrating an application.

Stabilizing Phase
During this phase, all the components are tested collectively to ensure that the entire solution operates properly. The testing criteria include achieving the desired functionality, security, and performance requirements. To ensure this, both white box and black box testing are performed. The tested database and components form the deliverables of this stage. Once testing is complete, the solution must be stabilized to ensure that it meets defined quality levels. If the migration is complex, a pilot test of the application can also be performed during the Stabilizing Phase. Refer to Chapter 18, "Stabilizing Phase," for more details on the tasks and deliverables of the Stabilizing Phase.


Introduction to the Microsoft Solutions Framework

Deploying Phase
In this phase, the team deploys the solution and ensures that it is stable and can be used. The tested solution is moved into production and transferred to operations. During this phase, the solution components are tuned in the production environment. A solution sign-off is obtained from all stakeholders, confirming that the application meets the requirements developed during the Envisioning and Planning Phases. Refer to Chapter 19, "Deploying Phase," for more details on the tasks and deliverables of the Deploying Phase. The migration project and this guidance are deeply rooted in the MSF methodology, and the chapters follow the chronological order of the MSF phases, starting from Chapter 2, "Envisioning Phase," through Chapter 19, "Deploying Phase."

Solution Guide for Migrating Oracle on UNIX to SQL Server on Windows


2
Envisioning Phase
Introduction and Goals
This chapter provides the process and technical information required to complete the Envisioning Phase of an Oracle on UNIX to SQL Server on Windows migration project. The Envisioning Phase is an early stage of planning. This is the period during which the team, the customer, and the sponsors define the high-level requirements and overall goals of the project. The main purpose of the work performed during this phase is to ensure a common vision and reach consensus among the team members that the project is both valuable to the organization and likely to succeed. During the Envisioning Phase, the focus is on creating clear definitions of the problem and understanding the high-level approach to solving it. This sets a solid foundation from which specific plans can be developed during the Planning Phase and executed during all subsequent phases to achieve the migration objectives. The Envisioning Phase culminates in the Vision/Scope Approved Milestone.

The key deliverables for the project team during the Envisioning Phase are:

Vision/scope document. The vision/scope document describes the project goals and constraints. It outlines the solution being developed, the requirements the solution will meet, and the approach that the team will take to complete the project. The vision/scope document is an early form of the functional specification that is developed during the Planning Phase. For more information on creating your vision/scope document, refer to the vision/scope template.

Project structure document. The project structure document defines the approach the team will take to manage the project. It defines the team's administrative structure, standards, processes (such as version control and change control), project resources, and constraints. The document identifies the team lead responsible for each MSF role.

Risk assessment document. The risk assessment document provides an initial identification and analysis of risks associated with a project. This analysis includes mitigation and contingency plans to help the team manage specific risks. Refer to the Risk Assessment template for more information.

Templates for each of these key deliverables are available in the Tools and Templates folder in the zip file download of this guidance at http://go.microsoft.com/fwlink/?LinkId=45289. Table 2.1 lists the key activities that the team undertakes in the Envisioning Phase. The results of these activities are documented in this phase's deliverables.


Table 2.1: Major Tasks, Deliverables, and Primary Owners

Task | Project Deliverable | Primary Owner
Understand the goals of the migration project (business and design) | Vision/scope document | Product Management
Create the problem statement | Vision/scope document | Product Management
Create the vision statement | Vision/scope document | Product Management
Define end user profiles | Vision/scope document | Product Management
Assess the current situation (high level) | Vision/scope document | Product Management, Program Management
Identify high level requirements | Vision/scope document | Product Management
Define the project scope | Vision/scope document | Program Management
Define the solution concept | Vision/scope document | Program Management
Set up a team | Project structure document | Program Management
Define the project structure | Project structure document | Product Management, Program Management
Assess risk | Risk assessment document | Program Management

Note This chapter provides information about the essential tasks of the Envisioning Phase for an Oracle on UNIX to SQL Server on Windows migration project. The chapter is not exhaustive. For a detailed description of all the activities and deliverables for the MSF Envisioning Phase, refer to the UNIX Migration Project Guide (UMPG) available at http://go.microsoft.com/fwlink/?LinkId=19832.

Understand the Goals of the Migration Project


An early, clear understanding of the goals of the migration project by the project team is imperative for its success. These high-level goals are not defined by the project team; they are defined by the business managers and major stakeholders. The reasons for reaching an early consensus on the goals include:

The migration goals, once identified, are used to create a vision statement for the migration project. This vision statement is the key driver for all actions and decisions made by the team during the course of the migration project.

The goals determine the aggregate skill set required by the project team to successfully execute the migration project.

The project team uses these goals to scope specific requirements that, in turn, serve as the basis for detailed solution designs and plans.

The goals serve as the basis for reconciling conflicts and establishing priorities that guide any trade-off decisions (schedule, features, resources, budgetary restrictions) that need to be made later in the project.

The project goals can be classified as business goals and design goals. Both types of goals are described in detail in the following sections.


Define the Business Goals


Business goals are established either to take advantage of an opportunity or to solve a problem in the current business. Each organization has business goals specific to it. For example, a business goal for one organization could be to consolidate the ERP applications across all worldwide units on one software platform after a merger with another company, while a different organization may have compliance with new federal regulations as its primary business goal. Because business goals can vary widely, this guide does not provide a specific, comprehensive list of business goals. Your organization will have to determine the business goals specific to it. However, the two most common business goals for IT projects are:

Reduce total cost of ownership (TCO) of the IT platform

Maximize return on investment (ROI)

Because these two business goals are so common, they are discussed in more detail under the following headings.

Reduce TCO of IT Platform


TCO of a system includes not only the price of purchasing software and hardware, but also the cost of training users and supporting and maintaining the system. Therefore, while evaluating the TCO of any system, there are several factors to consider:

Cost of hardware. Evaluate the cost of server hardware, networking equipment such as hubs and routers, and other equipment. Also include the facilities cost for hosting the hardware.

Cost of software. Evaluate the cost of all software, including the operating system and applications such as database systems, developer tools, and management software.

Reliability of the system. If the system fails or crashes often, the support cost for the system increases.

Cost of system maintenance. System administration is the single highest cost factor for any IT organization. Evaluate the staffing levels needed and the associated costs for maintaining the system.

Cost of supporting the users. Users may need extensive training and frequent support, significantly adding to the TCO.

Projected cost of the migration project itself. Factors such as the availability of skills in-house versus hiring external consultants will influence the project cost. You will be able to estimate these costs better after completing the Planning Phase.
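The cost factors above can be combined into a rough, back-of-the-envelope TCO estimate. The sketch below is illustrative only; the function name, cost categories, and figures are hypothetical placeholders rather than part of this guidance, and real estimates come out of the Planning Phase assessment.

```python
def estimate_tco(hardware, software, maintenance, support, migration, years=3):
    """Rough multi-year TCO: one-time costs plus recurring annual costs.

    Hypothetical model: hardware, software, and migration are treated as
    one-time costs; maintenance and support recur for each year evaluated.
    """
    one_time = hardware + software + migration
    recurring = (maintenance + support) * years
    return one_time + recurring

# Hypothetical 3-year estimate (all figures are made up for illustration)
total = estimate_tco(hardware=120_000, software=80_000,
                     maintenance=60_000, support=25_000,
                     migration=150_000)
print(total)  # one-time 350,000 + 3 * 85,000 recurring = 605,000
```

Comparing the result of such a model for the current UNIX platform against the proposed Windows platform gives a first-order basis for the TCO discussion with stakeholders.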

Microsoft offers several solutions to help reduce TCO. For example, Microsoft Visual Studio .NET provides an Integrated Development Environment (IDE) for rapid development of applications that reduces the cost of developing and upgrading applications. The Windows graphical user interface (GUI) environment provides an easy-to-use interface for Windows users. The Windows standard GUI and GUI-based management applications simplify server administration and thus contribute to reducing support costs. For more information about TCO on the Windows platform, refer to http://www.microsoft.com/business/reducecosts/efficiency/consolidate/tco.mspx. For customer case studies about reduced TCO on a Windows platform, refer to http://www.microsoft.com/business/reducecosts/efficiency/consolidate/casestudies.mspx#EBAA.


Maximize ROI
TCO is concerned with an organization's bottom line (cutting costs), and ROI is related to the top line (increasing revenue). A lowered TCO combined with an increased ROI is the ultimate goal of any migration project. To estimate the ROI of the migration effort, you should take into account the following considerations:

Creation of new business opportunities. The migration may provide new technological capabilities and drive new business innovations that lead to increased revenue.

Opportunities created by improved performance. For example, a faster response time for online users on a Web site can drive more satisfied customers to the site and increase profitability.

Total downtime required to complete the migration. System downtime can affect revenue and should be minimized.

Reliability of the system. Unplanned system downtime can negatively affect productivity and revenue.

Time to recover investment costs. The business needs to understand how long it expects to take to realize a return on the investment in the project.

Impact on user productivity due to the migration. If users are going to take a long time to ramp up on the new technology, it can impact the near-term productivity and profitability of the organization. Productivity can also be affected positively by easing the integration between the database application and other office software.

Managing risk. The organization needs to evaluate the risk of doing the project as well as of not doing the project. Not taking the risk may be the path of least resistance, but it may deprive you of significant revenue opportunities. Proven best practices and processes can minimize risk and make the migration more predictable. Several system integrators (SIs) offer services that can greatly mitigate the risk. The SIs known to offer Oracle to SQL Server migration services, as of the publication of this solution, are listed in the "Set Up a Team" section later in this chapter.
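One of the considerations above, time to recover investment costs, is often summarized as a payback period. The following is a minimal sketch under a deliberately simplified, hypothetical model (benefits accrue evenly and cash flows are not discounted); the function name and figures are made up for illustration.

```python
def payback_period_years(migration_cost, annual_benefit, annual_cost_savings):
    """Years needed to recover the migration investment.

    Hypothetical, undiscounted model: new annual revenue benefits and
    annual cost savings are pooled into a single yearly gain.
    """
    annual_gain = annual_benefit + annual_cost_savings
    if annual_gain <= 0:
        return float("inf")  # the investment is never recovered
    return migration_cost / annual_gain

# Illustrative figures: a 300,000 project returning 150,000 per year
print(payback_period_years(300_000, 100_000, 50_000))  # 2.0 (years)
```

A real ROI analysis would discount future cash flows and include the downtime and productivity factors listed above.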

Some of the questions that result from these considerations can be answered by analysis. However, a proof-of-concept or a limited pilot exercise can help provide quantifiable answers to these questions, including estimating application migration costs and determining the reliability of software. More information is available in the "Technical Proof of Concept" section of Chapter 3, "Planning Phase."

Identify the Design Goals


Design goals are similar to business goals in many ways. The difference is that design goals focus more on the attributes of the solution and less on what the solution will accomplish for the business. As with business goals, an organization's design goals are unique and specific to that organization. For example, one organization may determine that corporate data should be accessible from the mobile devices issued to company employees. A different organization's primary design goal may be to reduce the time and level of effort required for a user to connect to the server and complete a task. This guide does not provide a specific list of design goals. Your organization will have to determine the design goals specific to it. However, the following list provides the six most common and generic design goals for UNIX to Windows migration projects.

High availability and reliability. The solution should process requests without interruption, even in the event of a failure. For more information on Windows reliability, refer to http://www.microsoft.com/mscorp/twc/reliability/default.mspx. SQL Server allows servers to be clustered into groups of connected nodes. The solution described in this guide provides high availability for business applications. If there is a failure of the operating system or hardware, or a planned upgrade event on a single machine, SQL Server can be configured to fail over to another node, minimizing downtime. For more information about SQL Server clustering, refer to http://www.microsoft.com/resources/documentation/sql/2000/all/reskit/en-us/part4/c1261.mspx.

Robust security. In business, data security is vital. Launched under the Trustworthy Computing Initiative, Windows Server 2003 was designed with an emphasis on security. Many new features have been added, including support for strong authentication protocols such as Kerberos, 802.1X (WiFi), and Internet Protocol Security (IPSec). For more information about Windows Server 2003 security, refer to http://www.microsoft.com/windowsserver2003/technologies/security. For more information about SQL Server 2000 security, refer to the following three resources:
http://www.microsoft.com/technet/prodtechnol/sql/2000/maintain/sp3sec01.mspx
http://www.microsoft.com/sql/techinfo/administration/2000/security/default.asp
http://www.microsoft.com/sql/evaluation/features/security.asp

High performance and scalability. The system should show an acceptable level of performance even when the number of users increases. SQL Server 2000 can support multi-terabyte databases served to millions of users. This scalability is achieved by supporting scale-up on symmetric multiprocessor (SMP) systems, allowing the installation of additional processors, memory, disks, and networking to build a large single node. In addition, SQL Server supports scale-out, allowing large databases to be split across a cluster of servers. Each server stores a portion of the database and assists in processing functions. For more information about Windows performance and scalability, refer to the following two resources:
http://www.microsoft.com/windowsserver2003/evaluation/performance/default.mspx
http://www.microsoft.com/windowsserver2003/techinfo/serverroles/appserver/scale.mspx
For more information about SQL Server performance and scalability, refer to the following two resources:
http://www.microsoft.com/technet/prodtechnol/sql/2000/plan/ssmsam.mspx
http://www.microsoft.com/sql/evaluation/compare/benchmarks.asp

Easy manageability. The system should provide easy ways for administrators to install, manage, and maintain the operating system, application software, and updates. For more information on management products from Microsoft, refer to http://www.microsoft.com/management/default.mspx. SQL Server provides a number of robust administration tools for managing data, including graphical administration tools, task wizards, and command line utilities. For detailed information about the SQL Server tool set, refer to the following two resources:
http://www.microsoft.com/technet/prodtechnol/sql/2000/books/c01ppcsq.mspx
http://msdn.microsoft.com/library/default.asp?url=/library/en-us/usetools/usetools_4ovn.asp

Consolidation of servers. The new system should allow for the consolidation of existing databases onto a small number of servers. If the existing solution is based on older hardware, a migration presents an opportunity to consolidate to fewer, more powerful servers. For more information about Windows Server 2003 Datacenter Edition, the Microsoft server operating system optimized for server consolidation, refer to http://www.microsoft.com/windowsserver2003/datacenter/default.mspx.

Interoperability with the existing UNIX platform. The new system should integrate with the existing applications and infrastructure on the UNIX platform. For more information about Windows interoperability with UNIX, refer to http://www.microsoft.com/sfu. For a detailed comparison of the system abilities of existing UNIX systems and Windows Server 2003 and SQL Server 2000, refer to http://go.microsoft.com/fwlink/?linkid=37907.

Create the Problem Statement


A problem statement is usually a short narrative describing the specific issues that the business hopes to address with the project. It relates primarily to the current state of business activities and is a direct result of the business goals or design goals not being met adequately. The more precisely the problem statement is recorded, the easier it will be to measure the success of the project. Because the aim of the project is to solve a problem, the understanding of the problem determines the design of the solution. A good problem statement provides sufficient information about the business problem for a new team member to use the problem statement and the rest of the documentation to put the project into context. The following are some examples of problem statements that may be relevant in an Oracle on UNIX to SQL Server on Windows migration project:

Customer support cannot effectively process the high number of customer calls because of the time it takes to retrieve customer data from the existing customer database.

Product managers cannot make informed decisions about sales promotions because they do not have access to recent sales trend data.

Create the Vision Statement


The vision statement addresses the problem statement and establishes a common vision of the end state that can be shared among team members. While brief, this statement provides a common starting point for future decisions throughout the rest of the project. A good vision statement has the following characteristics:

Specific. A vision statement should be specific and include the ideal state of the business problem solution so that the end result is meaningful.

Measurable. By creating a vision statement that is measurable, the project team can determine the degree of success at the completion of the project.

Achievable. Given the resources, the time frame, and the skills of the team members, the vision statement should be achievable. An achievable and challenging vision statement can motivate team members.

Relevant. The vision statement should relate to the business problem being addressed. If it does not, the project team might realize that they are solving the wrong problem, or one that does not exist, and in the process lose sponsorship.

Time-based. The vision statement should clearly indicate the estimated time frame for the delivery of the solution.

The following two examples are vision statements for an Oracle on UNIX to SQL Server on Windows migration project:

Before the end of the year, our customer support staff will be able to retrieve all customer data and account history within three seconds of the customer identifying himself or herself.

Before the start of the next fiscal year, our product managers will have access to daily sales reports from all the showrooms in the country.

Define the User Profiles


Understanding the users for whom the solution is being developed is critical to the project's success. To capture a clear description of each user, the team creates profiles of each user class. The process of profiling develops a set of user requirements. The combination of these user requirements with the business and design requirements will help to define the project scope later in the Envisioning Phase. In an Oracle on UNIX to SQL Server on Windows migration project, the most important profiles are those of the system managers and the users of the migrated application. Consider the following while profiling the user types:

Proficiency of the users in using Microsoft applications and the Windows environment. Administrators largely use command line tools in the Oracle/UNIX world. While Windows also provides some command line interfaces, Windows tools tend to be graphical user interfaces (GUIs). Users new to the Windows environment may need training.

The types of tasks that the users will perform on the new system.

Essential expectations of the end user from the current application interface that will have to be met in the new environment.

The language localization needs of the end users, if any. Consider the GUI as well as user documentation.

The physical location of the end users. Factors such as locations, the number of users at each location, and the bandwidth and usage of network links between the sites should be understood and documented.

Assess the Current Situation (High Level)


Organizations planning an Oracle on UNIX to SQL Server on Windows migration need to consider the following six entities for a successful migration. While these entities are assessed in detail in the Planning Phase of the project, a high-level preliminary assessment is essential in the Envisioning Phase to evaluate the feasibility of the project.

Application

This entity includes the target applications that are migration candidates. While assessing applications, the following questions should be asked:

Which specific applications are to be migrated?

What languages are these applications written in?

If there are custom applications to be migrated, is the source code available?

Are the developers of the application still a part of the team?

Do the applications to be migrated interoperate with other applications or systems?

Database

This entity includes the Oracle databases that need to be migrated to SQL Server. The following are some of the high-level questions you need to ask:

Do your existing databases implement specialized features of Oracle or any other third-party customizations to provide additional database features?

Are the databases that need to be migrated shared by multiple applications?

What are the sizes of the databases being migrated?

What is the user population of the databases?
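Several of the sizing questions above can be answered directly from Oracle's standard data dictionary views (dba_segments and dba_users), run with any Oracle client such as SQL*Plus. The snippet below is a minimal sketch that simply collects the queries; the dictionary views are standard, but the exact columns and rounding chosen here are assumptions for illustration.

```python
# Candidate Oracle data dictionary queries for the high-level assessment.
# dba_segments and dba_users are standard views; the specific columns
# reported here are an illustrative choice, not a prescribed set.
ASSESSMENT_QUERIES = {
    # Total allocated segment space, as a rough database size in GB
    "database_size_gb": (
        "SELECT ROUND(SUM(bytes) / 1024 / 1024 / 1024, 1) "
        "FROM dba_segments"
    ),
    # Number of database accounts (includes built-in accounts)
    "user_population": "SELECT COUNT(*) FROM dba_users",
    # Space per schema, useful when scoping a schema-level migration
    "schemas_by_size": (
        "SELECT owner, ROUND(SUM(bytes) / 1024 / 1024) AS mb "
        "FROM dba_segments GROUP BY owner ORDER BY mb DESC"
    ),
}

for name, sql in ASSESSMENT_QUERIES.items():
    print(name, "->", sql)
```

Recording the results of such queries for every candidate database gives the team comparable sizing data for the feasibility discussion.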

Application infrastructure

This entity includes server and network-related resources, such as compute nodes, storage, file systems, routers, firewalls, hubs, and so on. The following are some of the questions you need to ask to assess the current state:

Do you already have an existing infrastructure based on Windows that can be readily used or expanded, or do you have to set up the infrastructure from scratch?

What other external systems, such as those of your partners or clients, would be impacted by the migration?

Microsoft provides a standardized infrastructure architecture named Windows Server System Reference Architecture (WSSRA). WSSRA provides lab-tested and real-world-tested architectural blueprints and proven best practices to design and implement enterprise infrastructure solutions with minimal risk and cost. The guidance addresses infrastructure issues, including availability, security, scalability, and manageability of the platform, in a fully integrated fashion. For detailed information on WSSRA, refer to http://www.microsoft.com/windowsserversystem/overview/referencearchitecture.mspx.

Development environment

This entity includes software construction environments such as the UNIX make environment, developer tools, and so on. Migration of the database and data can be achieved by using tools bundled with Oracle and SQL Server, such as SQL*Plus, Enterprise Manager, and Query Analyzer. Microsoft also offers a set of tools called SQL Server Migration Assistant for migrating Oracle databases to SQL Server. Questions to ask while assessing the development environment include:

What software construction environment do you use to build the UNIX applications? A commonly used software construction mechanism is the use of makefiles. For detailed process and technical guidance on using a make-based build environment to build Windows executables, refer to http://go.microsoft.com/fwlink/?LinkId=22225.

Are the required licenses for the appropriate SQL Server edition available?

What server, storage, and network resources are available to support the development and testing activities?

Existing documentation

This entity includes end user documentation, requirements analyses, functional specifications, design documents, and so on. The existing documentation should be scrutinized as part of the task of assessing your current environment; this is something that is often overlooked. Existing design documentation is a valuable resource for building project documents and plans, especially when a large portion of the existing functionality is carried over to the new solution. Developing test plans and test cases for the new application based upon existing documentation is especially useful. Modifications to existing documentation are almost certainly needed to support the migrated solution. One problem often encountered is that the existing documentation is incorrect or insufficient, which may force the migration project to absorb the effort of bringing it up to date. These are some of the questions that you need to ask:

Are the original requirements for each of the applications to be migrated available?


Are database and application design documents (high level as well as detailed) available?

Are the logical and physical data models available for all databases to be migrated?

Is documentation for the application code maintained?

Are user manuals available for client applications?

Are training documents available for support, end user, and operations personnel?

Application users

User accounts that exist in the UNIX domain need to be migrated or recreated in the Windows-based domain, along with the necessary permissions, for users to start using the application on the Windows domain. Ask the following questions:

What security services are in use (examples include local, Kerberos, NIS, NIS+, and LDAP)?

What is the authentication method in use (examples include passwords, digital certificates, and so on)?

What authority does the user have on the system? How is authorization implemented?

What activities of the users are audited?

Are users required to use secure communication methods such as SSH?

Are application users authenticated by the operating system as well as by the database?
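If the answers point toward Windows-authenticated database access, recreating UNIX accounts as SQL Server logins can be scripted. The sketch below generates calls to the SQL Server 2000 system procedures sp_grantlogin and sp_grantdbaccess; the domain name, user names, and database shown are hypothetical, and the sketch assumes matching Windows accounts have already been created during the directory migration.

```python
def windows_login_script(domain, unix_users, database):
    """Emit SQL Server 2000 commands that recreate UNIX accounts as
    Windows-authenticated logins with access to one database.

    Assumes each UNIX user name already exists as a Windows account in
    the given (hypothetical) domain; role assignments are out of scope.
    """
    lines = []
    for user in unix_users:
        account = f"{domain}\\{user}"
        lines.append(f"EXEC sp_grantlogin '{account}'")     # permit Windows login
        lines.append(f"USE {database}")                     # switch database context
        lines.append(f"EXEC sp_grantdbaccess '{account}'")  # grant database access
    return "\n".join(lines)

# Hypothetical domain, user, and database names
print(windows_login_script("CONTOSO", ["jsmith", "alee"], "Sales"))
```

Generating the script from the assessed UNIX account list keeps the account migration repeatable and reviewable before it is run against the server.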

For prescriptive guidance on how to enable Windows Server 2003 to be used for authentication and as an identity store within Windows and UNIX environments, refer to http://go.microsoft.com/fwlink/?LinkId=23120. Note The primary purpose of this solution is to provide guidance on migrating the applications and databases from an Oracle on UNIX environment to a SQL Server on Windows environment. The discussions about migrating the infrastructure, development environment, user accounts, and documentation are, therefore, limited in this solution. The discussions have been included to provide readers a holistic perspective of the migration. Pointers to other resources that provide more information, including prescriptive guidance, are provided when relevant.


Figure 2.1 represents the primary entities involved in an Oracle on UNIX to SQL Server on Windows migration. To reiterate: this guide provides prescriptive guidance for migrating the application, database, and application infrastructure entities only. Information about the other entities is available from the sources cited earlier.

Figure 2.1
Primary Migration Entities

Capture High Level Requirements


An important task in the Envisioning Phase is to document the high-level business, user, and design requirements. These requirements are refined further in the conceptual design stage of the Planning Phase (discussed in Chapter 3, "Planning Phase"). A common way of gathering and analyzing information is through developing use cases and building usage scenarios to document the business processes and user requirements. While use cases describe the high-level interactions between an individual and the system, usage scenarios provide additional information about the activities and task sequences that constitute a process. A detailed discussion of use-case analysis is outside the scope of this solution. Refer to the following resource for additional guidance: Advanced Use Case Modeling (Armour and Miller, 2000).

Define the Project Scope


One of the critical factors in the success of a project is clearly defining the scope of the project. Scope defines what will be and what will not be included in the project. The scope uses the high level requirements gathered earlier in the Envisioning Phase and incorporates the constraints imposed on the project by resources, schedule, budget, and other limiting factors. A good way of scoping is to address use cases and usage scenarios that impact the business the most. As with the business and design goals, the scope for projects is organization-specific, and you will need to determine the scope for your project and document it in the


vision/scope document. However, when producing the scope for your Oracle on UNIX to SQL Server on Windows migration project, some general guidelines should be followed:

Consider a multiphase approach to the migration. In the first phase, consider migrating only the database and reconnecting the existing UNIX applications to SQL Server. Limiting the scope for each phase of the migration increases your ability to monitor and measure the migration project's success.

Consider migrating the portable applications first. These include applications written using Java, Perl, PHP, and Python. Portable applications can usually be ported or configured to interoperate with relative ease, and this helps build user and management confidence.

Oracle Forms and Pro*C applications are the hardest to migrate because they have to be rewritten; migrate them in a later phase, if possible.

The migration of code present in the database in the form of triggers, functions, stored procedures, and packages impacts the migration of applications that depend on them. As a result, these have to be migrated first.

Databases using advanced Oracle functionality, such as advanced queuing, Java packages, objects, and so on, are more difficult to migrate, and a proof-of-concept should be created to verify the solution.

Smaller databases with less stringent requirements on performance and availability should be migrated before attempting larger ones.

To reduce complexity, consider migration at the schema level where feasible instead of migrating entire databases.

Define the Solution Concept


The solution concept outlines the high level approach the team will take to meet the goals of the project and provides the basis for proceeding to the Planning Phase. This approach includes the assessment of technologies, development, test, quality assurance, training, deployment, release, operations and so on. These approaches can be developed into full-fledged plans in the Planning Phase. Because the solution concept focuses on the concepts and not the details, it is not very technical. The solution concept also includes a high-level solution design strategy for all the software and hardware components of the system. Because the primary focus of this solution is migrating the Oracle databases and associated applications to SQL Server on Windows, the rest of this section discusses only solution design strategies for the database and applications.

Application and Database Migration Solution Design Strategy


For the purposes of explanation, the application and database ecosystem can be viewed in terms of three components:
* Application. This component includes applications written in a variety of programming languages, including Oracle Forms, Perl, Python, and so on.
* Connectivity interface. This is the intermediate tier between the application and the database. The connectivity tier is also referred to as the application programming interface (API). This layer facilitates the communication between the application and the database. Oracle and SQL Server support different connectivity interfaces, some proprietary and some standard. Oracle Call Interface (OCI) is an example of a proprietary interface for an application to communicate with Oracle. Open Database Connectivity (ODBC) is an example of a standards-based interface. The available APIs also differ between UNIX and Windows.
* Database. The database tier consists of the database management system and the objects that store and facilitate access to the data in the database.

The relationship between these three components is shown in Figure 2.2. The connectivity interface binds the application to the database and allows information to pass between them.

Figure 2.2
Application, database, and connectivity tiers
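The separation in Figure 2.2 can be sketched in code. The following is a toy Python model (all class and function names are invented for illustration; they do not come from the guide): the connectivity tier is the only component bound to a specific database, so the application tier is unaffected when the database behind it changes from Oracle to SQL Server.

```python
# Toy model of the three tiers. Names here are illustrative assumptions.

class OracleBackend:            # database tier stand-in
    name = "Oracle"

class SqlServerBackend:         # database tier stand-in
    name = "SQL Server"

class Connectivity:
    """Connectivity tier: the only component bound to a specific database."""
    def __init__(self, backend):
        self.backend = backend

    def describe(self):
        return f"connected to {self.backend.name}"

def application(conn):
    """Application tier: talks only to the connectivity interface."""
    return conn.describe()

# Swapping the database touches only the binding, not the application code:
print(application(Connectivity(OracleBackend())))     # → connected to Oracle
print(application(Connectivity(SqlServerBackend())))  # → connected to SQL Server
```

In practice the "binding" is a driver and connection string rather than a constructor argument, but the principle is the same: the narrower the connectivity interface, the less application code the migration disturbs.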

Once the database is migrated from Oracle to SQL Server, there are four different approaches to connecting the application to the SQL Server database using an appropriate connectivity layer:
1. Interoperation. The application components remain on the UNIX platform.
2. Port or rewrite the application to the .NET platform. The .NET Framework is the next-generation platform from Microsoft for building, deploying, and running Windows applications. It provides a standards-based, multilanguage environment for integrating existing investments with next-generation applications and services.
3. Port or rewrite the application to the Win32 platform. The Win32 API is one of the primary programming interfaces to the Windows operating system family, including Windows Server 2003. The Win32 API provides functions for processes, memory management, security, windowing, and graphics.
4. Quick port: port the application to the Windows Services for UNIX 3.5 platform. Port applications to an environment on Windows named Interix, which is similar to UNIX. Interix is a complete, POSIX-compliant development environment that is tightly integrated with the Windows kernel. It is part of the Microsoft Windows Services for UNIX 3.5 product suite.
Each of these four options constitutes a separate design strategy for your migration project, and each is discussed in more detail under the following headings. One, some, or all of these solution design strategies will form the basis of the solution concept you produce as part of completing the Envisioning Phase. Even when the goal is to standardize on the Windows platform, some of these strategies may be employed as intermediate steps in a multiphased migration. The solution concept documented in the vision/scope document will be used to develop detailed plans during the Planning Phase and code during the Developing Phase.
Note   Due to the complexities inherent in a migration project, it may not be possible to select a migration strategy immediately. Proof-of-concept testing may need to occur before the design strategy is decided.

Solution Guide for Migrating Oracle on UNIX to SQL Server on Windows


Solution Design Strategy A: Interoperability


In this strategy, the applications remain on the UNIX platform while the database alone is migrated from Oracle to SQL Server. Interoperation is preferred in the following situations:
* The migration effort and risks need to be minimized, at the expense of not fully realizing the benefits of migrating the application to Windows.
* A multiphased approach to migration is more appropriate. The first phase of migration could involve migrating only the database, and later phases could focus on migrating the applications.
* The needs served by the application are static; the evolution of the application is not a business or design goal of the project.
* Maintaining a tight or complex integration of UNIX applications with other services is required.

Interoperation is not recommended in the following situations:
* The cost of maintaining two environments is prohibitive.
* The goals of the project include changing the application to take advantage of the greater business value of Windows technologies.
* Having components on two different platforms does not produce acceptable performance or service levels.
* The required measure of security cannot be achieved because of the disparate technologies.
* The interoperation is complex and requires far more development and implementation effort than one of the other methods.

Solution Design Strategy B: Port or Rewrite to .NET Framework


When migrating an application to the .NET Framework, there are two options: perform a full port of the application, or rewrite the application using the .NET application programming interfaces (APIs). A full port is defined as the capability to execute the application in the Windows environment in its native form. Applications written in languages such as Java, Perl, PHP, and Python, which are available on both the UNIX and Windows platforms, can take advantage of a full port. In most cases, SQL Server database drivers are available for these languages. A full port requires minimal changes to the source code and uses standard libraries and utilities that exist on the Windows platform. New documentation and user training are not required in the case of a full port because the business logic and the user interface remain the same. A full port to the .NET Framework is preferred in the following situations:
* The programming environment in which the application was written is fully supported in the .NET Framework.
* Appropriate .NET database drivers are available and support all the function calls made in the application code.
* Rewriting the application would consume too much time and increase cost prohibitively.
* When ported to the .NET Framework, the application successfully interacts with other applications.
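The "minimal changes" claim for a full port can be made concrete. In the hypothetical Python sketch below, the application logic is written against the generic DB-API style shared by most Python database drivers, so only the connection site changes between the Oracle and SQL Server builds. The driver module names in the comments (cx_Oracle, pyodbc) are real libraries, but the argument forms shown are illustrative only; real ports may also need small changes for SQL dialect and parameter-marker differences. The demonstration uses the standard sqlite3 module as a stand-in database so the sketch is runnable.

```python
import sqlite3  # stands in for the platform-specific driver in this demo

def open_connection(driver, **kwargs):
    # The single site that changes in a full port. For example:
    #   cx_Oracle.connect(user="app", password="...", dsn="ORCL")   # before
    #   pyodbc.connect("DSN=sqlserver;UID=app;PWD=...")             # after
    return driver.connect(**kwargs)

def total_open_amount(conn):
    """Application logic: identical before and after the port."""
    cur = conn.cursor()
    cur.execute("SELECT SUM(amount) FROM orders WHERE status = 'OPEN'")
    return cur.fetchone()[0]

# Demonstration, with sqlite3 standing in for the database tier:
conn = open_connection(sqlite3, database=":memory:")
conn.execute("CREATE TABLE orders (amount REAL, status TEXT)")
conn.executemany("INSERT INTO orders VALUES (?, ?)",
                 [(10.0, "OPEN"), (4.5, "OPEN"), (7.0, "CLOSED")])
print(total_open_amount(conn))  # → 14.5
```

The fewer places an application constructs its own connections, the closer it comes to the "only the connection line changes" ideal of a full port.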

A rewrite of the application is required when there is no equivalent on the .NET platform for the language in use. An example of this is applications based on Oracle Forms. The best option in this case is to rewrite the application on Windows using Visual Basic .NET. A rewrite of the application is preferred in the following situations:
* The cost and complexity of porting is prohibitively high. It may be easier to recreate the application rapidly using Visual Basic .NET than to port it.
* The business logic is likely to change significantly in the new environment, which would limit the amount of code reused from the source platform.
* The application requires new features and code that are specific to the Windows environment.
* Tight integration with Windows features is required in the new application.
* The best possible integration with other Windows-based applications is required.
* A port is not possible, either because the language is not supported or because API libraries for SQL Server do not exist.

Rewriting an application can be challenging: it is a potentially time-consuming, risky, and costly option. The risk inherent in a rewrite is that the business logic or an important piece of functionality is changed while the code is rewritten. Careful testing and stabilization of the rewritten application is needed to ensure correct business logic and functionality. When porting or rewriting to Windows, migration to the .NET platform is preferred. .NET technology provides the ability to build, deploy, manage, and use connected, security-enhanced solutions with Web services. For more information about the .NET platform and its key capabilities, refer to the following two resources:
http://www.microsoft.com/net/basics/
http://msdn.microsoft.com/netframework/programming/fundamentals/default.aspx
There are situations in which moving to .NET may not be feasible. For instance, at the time this guide was published, no production-ready implementation of Python on the .NET Framework existed. However, this technology is available in beta form, and investigation of current technologies is recommended.

Solution Design Strategy C: Port or Rewrite to Win32


The .NET Framework offers the next generation of applications and services. Migration to the Win32 platform should be considered when migration to .NET is not technically feasible for your organization. As with strategy B, two migration options are available when moving to a Win32-based platform: perform a full port of the application, or rewrite the application using the Win32 APIs. The rationale for deciding whether to port or rewrite the application for Win32 is the same as the rationale for .NET. Refer to the "Solution Design Strategy B: Port or Rewrite to .NET Framework" section for more details.

Solution Design Strategy D: Quick Port by Migrating to Windows Services for UNIX
One of the quickest migration paths possible is to port the code directly to Windows Services for UNIX. Windows Services for UNIX includes Microsoft Interix, which provides a UNIX environment that runs on top of the Windows kernel. Interix allows native UNIX applications and scripts to work alongside Windows applications. The best way to view Interix is as a POSIX-compliant version of UNIX built on top of the Windows kernel; it is not an emulation of a UNIX environment on the Windows APIs. Migration using Windows Services for UNIX involves obtaining the source code, installing it in the Interix development environment, modifying the configuration scripts and makefiles, and recompiling the application. This strategy is referred to as a quick port. The ported application may work immediately, or it may require modifications to the configuration scripts or makefiles to account for the new hardware platform, the target operating system, and local configuration information. A detailed assessment of the application has to be conducted to inventory the various APIs and utilities in use, and support for these in the Windows Services for UNIX environment has to be verified. Extensive testing of the application is essential after the port to ensure that all features have been migrated successfully. A quick port is preferred in the following situations:
* A port of the language or environment the application was written in does not exist for .NET or Win32. An example is X Windows applications.
* Application investments on the UNIX platform need to be reused. Interix can optimize your investments in UNIX applications by reusing code. New functionality and value can be added to your existing UNIX applications by integrating the UNIX code with .NET or Win32 functionality.
* A full port or complete rewrite of the application is too expensive and time-consuming.
* A large set of UNIX scripts needs to be ported. Interix allows you to get the most out of existing UNIX network administrator tools and skill sets. It provides more than 300 UNIX utilities, and it provides shells and scripting languages (including Perl) that enable you to run existing shell scripts with little or no change on Windows.

A quick port is not recommended in the following situations:
* A port of the application is available for Win32 or .NET.
* You can afford the rewrite or port to Win32 or .NET and want to tightly integrate the application with other Windows technologies.
* You want to significantly evolve the application by using the capabilities of the Windows environment.
* A quick port requires far more development and implementation effort than one of the other methods, for example, when the required libraries available on the original UNIX platform are not available on Interix.

Most UNIX database environments use scripts to support databases and client applications. You should evaluate the importance of the services that these scripts perform. Migrating the scripts to Windows Services for UNIX is an option; however, scripts that make use of Oracle utility programs must be modified to use suitable replacements for SQL Server. Using Windows Services for UNIX, organizations can rapidly consolidate diverse platforms, maximizing their previous investments in UNIX infrastructure, applications, and knowledge while capitalizing on Windows innovation. Moreover, Windows Services for UNIX provides a full range of supported and fully integrated, cross-platform network services for blending Windows and UNIX networks. For a detailed discussion of Windows Services for UNIX in migration projects, refer to the UNIX Application Migration Guide available at http://go.microsoft.com/fwlink/?LinkId=30832. Windows Services for UNIX 3.5 is available as a complimentary download from Microsoft for Windows Server 2003, Windows XP Professional, and Windows 2000. Some Windows Services for UNIX 3.5 features include:

28

* It is designed to work well with other major UNIX platforms and versions. It has been tested specifically with Sun Solaris 7 and 8, HP-UX 11i, AIX 5L 5.2, and Red Hat Linux 8.0.
* It includes version 5.6 of the ActiveState Perl distribution for native Windows scripting and Perl 5.6.1 for scripting in the Interix environment.
* It includes more than 300 UNIX utilities and tools that perform in a similar manner on UNIX and Windows systems.
* It contains a Software Development Kit (SDK) that supports more than 1,900 UNIX APIs and migration tools, such as make, rcs, yacc, lex, cc, c89, nm, strip, and gdb, as well as the gcc, g++, and g77 compilers.
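The earlier point about support scripts bears illustrating: a script that invokes an Oracle utility such as sqlplus needs an equivalent invocation of a SQL Server tool such as the osql command-line utility. The Python sketch below is a deliberately simplified, hypothetical translation helper for one common invocation pattern, not a general converter; real scripts use many more flags and will need case-by-case review.

```python
# Toy sketch: map an Oracle 'sqlplus' batch invocation to its SQL Server
# 'osql' equivalent. The utility names and flags are real, but the
# translation rules here are simplified assumptions.

def translate_sqlplus(cmd: str) -> str:
    """Rewrite 'sqlplus -s user/pass @script.sql' as an osql call."""
    parts = cmd.split()
    assert parts[0] == "sqlplus", "only sqlplus commands are handled"
    creds = next(p for p in parts if "/" in p)      # user/password pair
    user, pwd = creds.split("/", 1)
    script = next(p for p in parts if p.startswith("@"))[1:]
    # osql: -U login, -P password, -i input script, -n suppresses prompts
    return f"osql -U {user} -P {pwd} -i {script} -n"

print(translate_sqlplus("sqlplus -s scott/tiger @nightly_report.sql"))
# → osql -U scott -P tiger -i nightly_report.sql -n
```

An inventory of which Oracle utilities each script calls (sqlplus, exp, imp, tnsping, and so on) is a prerequisite for estimating this part of the migration.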

More information about Windows Services for UNIX is available from the following two resources: http://www.microsoft.com/windows/sfu/productinfo/overview/default.asp http://www.microsoft.com/windows/sfu/

Migration Strategy Scenarios


Tables 2.2 through 2.5 show the migration opportunities for several scenarios of the source UNIX environment. Each scenario is a combination of the three components of database, application, and connectivity (API). The first two columns of each table represent the current state in the UNIX environment, and the last two columns show the migration possibility for that strategy: interoperate, port to Windows Services for UNIX, port or rewrite to Win32, and port or rewrite to the .NET Framework.

Table 2.2: Application Interoperation Strategy

Oracle/UNIX language | API                    | Interoperate: SQL Server/UNIX language | API
---------------------|------------------------|----------------------------------------|----------------------
PHP                  | Oracle or OCI8 or ODBC | PHP                                    | MS SQL Server or ODBC
Pro*C                | OCI                    | Solution not available or feasible     | -
Oracle Forms         | Forms C or JDAPI       | Solution not available or feasible     | -
Perl                 | Oracle or ODBC         | Perl                                   | ODBC or Sybase
Python               | Oracle or ODBC         | Python                                 | ODBC
Java                 | JDBC                   | Java                                   | JDBC

Table 2.3: Application Port to Windows Services for UNIX Strategy

Oracle/UNIX language | API                    | Port: SQL Server/Windows Services for UNIX language | API
---------------------|------------------------|-----------------------------------------------------|----------------------
PHP                  | Oracle or OCI8 or ODBC | PHP                                                 | MS SQL Server or ODBC
Pro*C                | OCI                    | Solution not available or feasible                  | -
Oracle Forms         | Forms C or JDAPI       | Solution not available or feasible                  | -
Perl                 | Oracle or ODBC         | Perl                                                | ODBC or Sybase
Python               | Oracle or ODBC         | Python                                              | ODBC
Java                 | JDBC                   | Java                                                | JDBC

Table 2.4: Application Port or Rewrite to Win32 Strategy

Oracle/UNIX language | API                    | Port/rewrite: SQL Server/Win32 language | API
---------------------|------------------------|-----------------------------------------|----------------------
PHP                  | Oracle or OCI8 or ODBC | PHP                                     | MS SQL Server or ODBC
Pro*C                | OCI                    | VB                                      | ODBC or ADO
Oracle Forms         | Forms C or JDAPI       | VB                                      | ODBC or ADO
Perl                 | Oracle or ODBC         | Perl                                    | ODBC or ADO
Python               | Oracle or ODBC         | Python                                  | ODBC
Java                 | JDBC                   | Java                                    | JDBC

Table 2.5: Application Port or Rewrite to .NET Strategy

Oracle/UNIX language | API                    | Port/rewrite: SQL Server/.NET language | API
---------------------|------------------------|----------------------------------------|--------------------
PHP                  | Oracle or OCI8 or ODBC | Solution not available or feasible     | -
Pro*C                | OCI                    | VB.NET                                 | ODBC or ADO.NET
Oracle Forms         | Forms C or JDAPI       | VB.NET                                 | ODBC or ADO.NET
Perl                 | Oracle or ODBC         | Perl                                   | PerlNET
Python               | Oracle or ODBC         | Solution not available or feasible     | -
Java                 | JDBC                   | Solution not available or feasible     | -
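Read together, Tables 2.2 through 2.5 amount to a feasibility matrix: for each source language, which of the four strategies has an available SQL Server target. The sketch below encodes that matrix as data so it can be queried; the strategy labels ("interop", "sfu", "win32", "dotnet") are invented shorthand for this example.

```python
# Feasibility matrix transcribed from Tables 2.2-2.5. A strategy appears
# in a language's set only when the tables show an available target.

FEASIBLE = {
    "PHP":          {"interop", "sfu", "win32"},
    "Pro*C":        {"win32", "dotnet"},          # rewrite in VB / VB.NET
    "Oracle Forms": {"win32", "dotnet"},          # rewrite in VB / VB.NET
    "Perl":         {"interop", "sfu", "win32", "dotnet"},
    "Python":       {"interop", "sfu", "win32"},
    "Java":         {"interop", "sfu", "win32"},
}

def options(language):
    """Return the feasible migration strategies for a source language."""
    return sorted(FEASIBLE[language])

print(options("Oracle Forms"))  # → ['dotnet', 'win32']
```

The matrix makes the planning point visible at a glance: Pro*C and Oracle Forms applications have no interoperation or quick-port path and must be rewritten, which is why the scoping guidelines place them in a later phase.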



Optimal Strategy
The factors that need to be considered when deciding on a migration strategy are:
* Business needs. The migration strategy should meet the present and future needs of the organization. For example, an organization may need to open its existing application to external users, and security needs to be implemented for those users; the organization may therefore select the rewrite strategy.
* Technical feasibility. Migrating Oracle Forms to the Windows platform may not be technically feasible. If new features warrant substantial change to the current application, the organization may opt for rewriting as a strategy.
* Time. The time frame within which the migration has to be planned, developed, tested, and deployed needs to be decided. In addition, the actual time taken by the migration needs to be minimized to reduce business risk. For example, an organization may plan to provide external users access to its application within three months; a quick port to the Windows Services for UNIX platform may then be the best option.
* Budget. The infrastructure, human resources, and other costs need to be planned according to the money available. For example, the direct cost of porting may be lower than the cost of a rewrite. Porting an application also minimizes the cost of training, because the same application is used in the new environment, and indirect costs, such as maintenance, testing, and retraining, will also be lower.

Set Up a Team
To transform overall business and design goals into a clear vision for the project, an organization needs to assemble a multidisciplinary team, based on MSF roles, with defined skill sets appropriate to the project. These roles are Product Management, Program Management, Development, Test, User Experience, and Release Management. Once assembled, this team defines the vision and scope that together provide a clear direction for the project and set expectations within the organization. During the Envisioning Phase, it is likely that only the lead person or persons for each team role will be determined. During the Planning Phase, the entire team should be assembled if it is necessary for your organization to staff each role with more than one person. More information about the MSF Team Model is available in the UMPG. Table 2.6 lists each role with its key project responsibilities and the knowledge and skill requirements for an Oracle on UNIX to SQL Server on Windows migration project.

Table 2.6: Project Requirements and Responsibilities by Role

Product Management Role
Project responsibilities and tasks:
* Ensure that the team addresses business goals and customer requirements.
* Manage communications, launch planning, and feedback with customers (both internal and external).
* Drive solution design.
Knowledge and skill requirements:
* Understanding of the organization's business priorities and goals.

Program Management Role
Project responsibilities and tasks:
* Manage projects to meet budget and schedule.
* Manage scope and track progress.
* Provide process leadership.
Knowledge and skill requirements:
* Project management skills.
* Communication skills.
* Adequate knowledge of UNIX and Windows environments and of Oracle and SQL Server databases to drive solution design.

Development Role
It is recommended that two teams be created for the Development Role. One development team should focus on migrating the database and providing database administration and support. The second development team should focus on migrating the application.

Development Functional Team #1 (Database)
Project responsibilities and tasks:
* Ensure that requirements of the Windows environment are met for the migrated database (for example, OS support for the amount of memory required).
* Identify and provide access to shell and other scripts that are used to support the database being migrated.
* Migrate user accounts and verify that they have been migrated correctly.
* Install DBMS software and database applications on servers.
* Create the database in SQL Server and migrate data from Oracle.
* Identify the time and order in which the shell scripts are invoked.
* Design and implement the appropriate security model for migrated databases, based on analysis of the application and database.
* Identify and resolve issues associated with Windows security and connectivity.
* Validate that the chosen security configuration provides at least the same security levels as the existing configuration.
* Apply appropriate permissions at the database object level.
* Work with test groups to build the test environment.
* Work with the client development functional team on issues related to connectivity between the database and application.
Knowledge and skill requirements:
* Knowledge of security configuration for the current Oracle database and applications running on UNIX.
* Knowledge of shell and other scripts used to support the Oracle database.
* Knowledge of cron jobs, including the times and order in which scripts are run.
* Understanding of security, connectivity, software, and database installation issues in the Windows environment.
* Understanding of both the Windows and UNIX environments.
* Understanding of the Network Library, Oracle SQL*Net, Net8, and TCP/IP.
* Understanding of the Oracle schema, tables, views, stored procedures, and triggers.
* Knowledge of SQL Server and the ability to perform all these tasks in a Windows environment.

Development Functional Team #2 (Application)
Project responsibilities and tasks:
* Participate in assessment activities for client applications.
* Identify and articulate security requirements for client applications.
* Retarget or migrate existing client applications.
* Work with the test team to ensure requirements are met and defects are reduced.
* Work with the database team on issues related to connectivity or other database-related problems.
Knowledge and skill requirements:
* Knowledge of Oracle client applications and their security requirements.
* Proficiency in Perl, Python, PHP, Java, Visual Basic .NET, Windows Services for UNIX, or Windows.
* Experience with technologies for application interoperation with SQL Server, such as FreeTDS.
* Understanding of stored procedures, database logic, and application logic.
* Understanding of connectivity issues with the database.

User Experience Role
Project responsibilities and tasks:
* Manage the process of gathering, analyzing, and prioritizing user requirements.
* Help develop usage scenarios and use cases.
* Provide feedback on the solution design.
* Drive the creation of user training materials.
Knowledge and skill requirements:
* Understanding of the applications being migrated.
* Experience in developing usage scenarios and use cases.
* Understanding of training principles.

Test Role
Project responsibilities and tasks:
* Participate in data gathering relevant to Test during all phases of the project.
* Design and develop the test environment in conjunction with the Development functional teams for database and client migrations.
* Maintain ownership of the test environment.
* Design, create the specification for, and refine the test plan and the user acceptance test plan.
* Implement and validate the migration test plan. Implement test cases.
Knowledge and skill requirements:
* Understanding of the UNIX and Windows operating systems (including Windows Services for UNIX, if appropriate).
* Expertise with Oracle databases, application, and client development.
* Experience with interoperation between UNIX and Windows.
* Understanding of tiered database systems in homogeneous and heterogeneous environments.

Release Management Role
Project responsibilities and tasks:
* Act as primary advocate between project development and operations groups.
* Manage tool selection for release activities and drive optimizing automation.
* Set operations criteria for release to production.
* Participate in design, focusing on manageability, supportability, and deployment.
* Drive training for operations.
* Drive and set up pilot deployments.
* Plan and manage solution deployment into production.
* Ensure that stabilization measurements meet acceptance criteria.
Knowledge and skill requirements:
* Understanding of standard operational procedures in the Windows environment.
* Understanding of the applications being migrated.
* Management and communication skills.

Special Considerations for Setting Up Your Migration Team


An Oracle on UNIX to SQL Server on Windows migration project can present some unique staffing issues for your project team. Keep the following considerations in mind:
* It is relatively easy to find technical staff with a deep knowledge of Oracle and UNIX or of SQL Server and Windows, but it is not easy to find technical staff with a deep understanding of all of these areas. Migration involves creating a metaphorical bridge between the two environments. Make sure you have a senior architect on the team who understands both worlds deeply enough to engineer the bridge's creation. If the skill is not available in-house, consider finding a consultant or a consulting organization with experience in this form of migration. Consulting organizations often offer services to complete these migration projects end-to-end, and it may be possible to implement the migration with lower risk and reduced cost by engaging one. For more information about Microsoft partners that offer Oracle to SQL Server migration services, refer to http://www.microsoft.com/sql/partners/migration.asp.
* Ensure that the hardware and infrastructure experts from the Development Role and Test Role are actively engaged during the Envisioning Phase. The hardware on which databases and applications run is commonly overlooked until later in the project life cycle than it should be. Involving these experts early in the project will help inform more accurate estimates of budget, time, and resources.
* The Test Role should include, if possible, team members who have worked with the application being migrated. Their experience will help them write better test cases.
* The User Experience Role should include, if possible, team members who have worked with the application in the UNIX environment. They should have a good understanding of the application's user interface so that they can define and provide the training required for end users after the application migration.


* Consider including in the Test Role team members who have the expertise to test the security of the migrated environment. The security of the migrated application is often overlooked in migration projects.
* Migration projects are often undertaken using existing technical staff who have day-to-day responsibilities that impact their availability. Resources should be properly planned and allocated to the project to avoid such impact. Technical staff with experience with the current application and database should be involved in the project from design to deployment, and any required training should be provided to improve their effectiveness in the migration.
* Security, hardware, and infrastructure experts from Operations should also be involved in the Envisioning Phase because they can provide valuable advice about technical availability, costs, and the capabilities of the organization.
* Because moving from Oracle on UNIX to SQL Server on Windows is a major paradigm shift, it could impact the employment and careers of existing personnel. This issue should be considered when forming the team and carrying out the migration. Change is often inevitable and should be perceived as a way to expand and develop skills.

Define the Project Structure


The project structure defines how the team manages and supports the project and describes the administrative structure for the project team going into the subsequent project phases. The main function of the project structure is to define the standards the team will use during the project. These include communication standards, documentation standards, and change control procedure standards. Program Management takes the lead in defining the project structure. Refer to the UNIX Migration Project Guide for detailed information about project structure. Use the Project Structure document template to assist you in defining and recording the project structure and protocols for the project team. This template is available in the Tools and Templates folder in the zip file download of this guidance at http://go.microsoft.com/fwlink/?LinkId=45289.

Define Project Communications


Program Management should use the project structure to define standards for team members to communicate with one another. Among these standards can be a definition of the reporting structure under which team members operate, procedures to escalate project issues, regularly mandated status meetings, and any other project-specific communication standards that need to be defined during the Envisioning Phase. The document may also include e-mail names, aliases, telephone lists, mail addresses, server share names, directory structures, and other information critical to team organization. Consider establishing a team collaboration environment where communication can occur and progress can be monitored and updated as necessary. The following factors need to be considered for most Oracle on UNIX to SQL Server on Windows migration projects:
* The project team will likely include members from the UNIX domain of the IT department as well as the Windows domain. There are often work-based cultural differences between members of these two domains. The Program Management role should ensure that this potential difference is identified and that extra attention is paid to ensuring clear communication between the members of the two domains.
* Similarly, the communication tools used by UNIX and Windows users are often very different. For example, Windows users use the calendaring features of Outlook extensively for team communication, while UNIX users may be accustomed to a different set of tools or clients.
* Different geographical locations and different cultural backgrounds of team members involved in the migration effort can cause communication issues. Extra attention is required to make sure that there is clear communication between all team members.
* Determine secure methods to share information about the project among the team members. If your team comprises consultants and in-house employees, there may be corporate policies about what information can be shared with outside consultants.

Define Change Control


Change control standards and tools help to manage both project documents and components of the solution that are subject to revision or iteration. The requirements should be clearly defined in the project structure. General information about change control is available in the UMPG. Migration projects have an additional source of change that is not obvious but must be managed: the initial state of the technology being migrated. This could be source code for applications, hardware and software configuration, and versions of application software. Although the initial state can be baselined when the migration project is started, operational needs may require changes to that configuration while the migration is under way. These operational changes are typically subject to Configuration Management standards imposed by an operations team. The Program Manager Role and Release Manager Role should work with the operations personnel responsible for the existing technology to minimize and accommodate these changes. Long-lived migration projects may require a formal process to connect operational Configuration Management to project change control. Here are some additional points to remember:
* The major difference between a migration project and a new development project is that the initial work has already been performed to create the existing solution. The existing solution provides a working example of the goals that the solution must meet.
* Depending on the long-term objectives of the project, an existing change control tool and repository on the UNIX platform may be migrated to tools on the Windows platform.
* Because migration involves making modifications to objects currently in production, care has to be taken in how these objects are shared with the new development effort. As changes are discovered, there is the danger of modifying an object in the production thread. Creation of a separate project is recommended. In this situation, any changes made to the production thread should also be applied to the development (migration) thread, and appropriate controls have to be put in place to handle this.


Envisioning Phase

Configuration Management
A major task in any database migration is the transformation of database objects, such as sequences and code within the database, including stored procedures. These transformations inherently change the source code. To maintain a known state as work progresses toward migration, two aspects of Configuration Management have to be addressed:
* As discussed earlier, any change to the original environment after the baseline is created needs to be communicated to the project team and accounted for. Change management in the area of configuration becomes critical if problems are encountered during the migration and you are forced to return production to the pre-migration environment.
* Any changes to the solution that occur during the Developing Phase and Stabilizing Phase need to be documented and communicated to the entire project team.

During the Stabilizing Phase, knowledge of and control over the test environment configuration is critical to understanding the results that are received. If documentation exists identifying the aspects that have changed, the results of a given test can be validated against both current and future results. Some points to remember include:
* Every configuration change should have a fallback or rollback path.
* Configuration management tools, such as bug tracking software, should be able to operate within the Windows environment.

Assess Risk
MSF stresses assessing project risk continuously throughout a project and makes this the responsibility of every team member. Risk is defined as the possibility of suffering a loss or, more specifically, the possibility of a negative outcome that can put the project in peril. The team comes up with an initial set of risks in the Envisioning Phase. This list is constantly updated in later phases of the project as new risks appear and old ones lose relevance. It is important to understand that in a migration project issues are sometimes hidden, for example, in the design or implementation of a piece of code, and will appear only as the solution is being transformed. Here are some questions that you need to ask to understand the potential high-level risks at the start of the project. These questions can help you generate the risk assessment list that can serve as the basis for your ongoing risk assessment and management for the project.
* Are the project sponsors and stakeholders committed to support the project?
* Does the migration team have the aggregate skill set to perform the migration?
* Are the project team members adequately motivated to perform their role in the project?
* What will the impact on the business be if there is a delay in the project completion?
* Do you understand the real problems of the business and users that the project is meant to address?
* Have you accounted for any possible temporary decrease in productivity while project staff are distracted from their daily duties to support the migration project, or while customers learn to use the new application?

* For the application suite to be migrated, do the vendors of any third-party applications provide technical support in the new environment?
* Is the application dependent on third-party software that is not compatible with Windows?
* Does the existing Oracle database contain technologies that SQL Server cannot directly implement?
* Have you fully considered the impact of the migration on other systems or applications that currently rely on or interact with the existing solution?
* Does documentation exist for all the applications and databases, such as requirements, models (logical, physical), and source code?


* How will you handle possibly unproductive situations from staff members (technical and operational) who could be displaced by the migration?
* How will you handle possible scope creep in the migration project?
* Have you considered the impact of the migrated application failing to meet one or more functional requirements?

Manage Risks
A good way to manage risks is to identify or anticipate potential risks beforehand and come up with a good mitigation and prevention strategy to deal with them in case they occur. The list of potential risks for a project is likely to be long, and not all potential risks can be given the same amount of attention over the course of the project. It is therefore imperative to prioritize all the risks. This allows the team to focus more on the high priority risks, come up with suitable mitigation plans to prevent them from happening, and create contingency plans to deal with them effectively if they materialize. This solution guide provides a risk assessment spreadsheet for you to monitor your project risks based on this methodology.
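The prioritization described above is often captured as a risk exposure rating, the approach behind the Migration Risk Exposure Rating Form job aid. The Python sketch below ranks risks by exposure (probability multiplied by impact); the risk names, rating scales, and values are illustrative assumptions, not entries from the guide's spreadsheet.

```python
# Hypothetical risk entries: probability on a 0-1 scale, impact on a 1-10 scale.
# Exposure = probability * impact; higher exposure means higher priority.

def rank_risks(risks):
    """Return risks sorted by exposure (probability * impact), highest first."""
    return sorted(risks, key=lambda r: r["probability"] * r["impact"], reverse=True)

migration_risks = [
    {"name": "Third-party library not supported on Windows", "probability": 0.3, "impact": 8},
    {"name": "PL/SQL features with no direct T-SQL equivalent", "probability": 0.7, "impact": 6},
    {"name": "Key DBA unavailable during cutover", "probability": 0.2, "impact": 9},
]

for risk in rank_risks(migration_risks):
    exposure = risk["probability"] * risk["impact"]
    print(f'{exposure:4.1f}  {risk["name"]}')
```

In practice the same calculation lives in the risk assessment spreadsheet; the value of scripting it is only that the ranking can be regenerated as probabilities and impacts are revised during later phases.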



3
Planning Phase
Introduction and Goals
In the Planning Phase, the team defines the solution in detail: what to build, how to build it, who will build it, and when it will be built. During this phase, the initial vision and solution concept defined in the Envisioning Phase are translated into practical implementation designs and plans for how to achieve them. Team members draw upon their expertise to create individual plans and designs in all areas of the project, ranging from security to budget to deployment. Likewise, individual team schedules and associated dependencies are identified. These plans and schedules are rolled into a master project plan by the Program Management Role. The phase concludes when the project team agrees that the plans are sufficiently well-defined to proceed with development, and the team, business sponsor, and key stakeholders approve the functional specification and the master project plan and schedule, usually at a milestone meeting. The formal conclusion is marked by the second major project milestone, Project Plans Approved. The key deliverables for the project team for the Planning Phase are:
* Functional specification. The functional specification is the virtual repository of project and design related artifacts that are created in the Planning Phase. The artifacts are primarily a result of design activities during the conceptual design, logical design, and physical design processes of the Planning Phase. The artifacts can include models such as use case diagrams, usage scenarios, feature lists, user interface screen shots, database designs, and so on. The key goals of the functional specification are to consolidate a common understanding of the business, design, and user requirements; break down the problem and modularize the solution logically; provide a framework to plan, schedule, and build the solution; and serve as a contract between the team and the customer and stakeholders.
* Master project plan. The master project plan is a collection of individual plans that address tasks performed by each of the six team roles to achieve the functionality defined in the functional specification. The master project plan documents the strategies the team roles will use to complete their work. The solution concept that the team developed in the Envisioning Phase provides high-level approaches that are developed into detailed plans in the Planning Phase. See Chapter 2, "Envisioning Phase," for the discussion of team roles.
* Master project schedule. The individual team schedules apply a time frame to the master plan. The master project schedule synchronizes project schedules across the teams. Aggregating the individual schedules gives the team an overall view of the project schedule and is the first step toward determining a fixed ship date.
* Updated master risk assessment document. The master risk assessment document that was developed during the Envisioning Phase is reviewed and updated regularly during the Planning Phase.

All of these deliverables are living documents, evolving as the project progresses. Though these documents can be modified, modifications must follow the change management policies set up for the project during the Envisioning Phase. Table 3.1 lists the key activities that the team undertakes in the Planning Phase. The results of these activities are documented in this phase's deliverables.

Table 3.1: Major Tasks, Deliverables, Owners, and Job Aids

Task: Complete detailed assessment of the current environment
Primary owner: Program Management
Deliverable where results are documented: Functional specification
Job aids provided in this solution: Assessing the Environment Questionnaire

Task: Develop the solution design and architecture
Primary owner: Program Management
Deliverable where results are documented: Functional specification

Task: Validate the technology
Primary owner: Development
Deliverable where results are documented: Functional specification

Task: Develop the project plans
Primary owner: Team leads for all roles; plans consolidated by Program Manager
Deliverable where results are documented: Master project plan
Job aids provided in this solution: Development Plan, Test Plan, Deployment Plan, Pilot Plan

Task: Create the project schedules
Primary owner: Team leads for all roles
Deliverable where results are documented: Master project schedule

Task: Set up the development and test environments
Primary owner: Development and Test

Task: Reassess current project risks
Primary owner: Program Management
Deliverable where results are documented: Risk assessment document
Job aids provided in this solution: Migration Risk Exposure Rating Form

Complete a Detailed Assessment of the Existing Environment


To begin the migration process, an accurate and detailed assessment of the existing environment of your organization is essential. A high-level assessment is carried out in the Envisioning Phase; the detailed assessment in the Planning Phase is a follow-up activity. The information collected by this assessment should be comprehensive enough to plan the specific details of the solution migration. For instance, a detailed assessment of the application will provide definitive information on whether existing test plans can be reused to test the solution. Without this detailed information about the existing environment, there is no way to accurately estimate the development, test, or deployment plans. The key elements of the environment that need to be assessed in detail include:
* Application
* Database
* Application infrastructure

Refer to the Assessing the Environment Questionnaire job aid to help you assess the main areas of the environment.

Application
A detailed assessment should be performed for any application that is affected by the migration. A database application migration project assessment should not be limited to assessing the application alone; it should also include database connectivity. Assessment activities related to the database should include the following:
* Note all database interfaces used by the applications and their types (OCI, ODBC, and so on).
* Note the relationship between applications and databases. For example, note if a database supports multiple applications, or an application connects to multiple databases.
* Identify all the databases, schemas, and schema objects that the application interacts with.
* Note the protocol that is used by the application to communicate with the database. Oracle supports TCP/IP, TCP/IP with SSL, and named pipes.
* Identify all encryption methods employed by the applications with the databases.
* Identify the security features of the database used by the applications, such as authentication, password features, single sign-on, and so on.
* Determine if there are any other network-related security authentication methods used with the current application. Common Oracle authentication methods include Kerberos, RADIUS, the DES algorithm, RSA, and CyberSafe.
* Note the data sources in use, such as ODBC, OLE DB, and ADO.
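When many applications are in scope, this connectivity inventory is easier to keep consistent if it is tabulated mechanically. The hypothetical Python sketch below parses ODBC-style connection strings into keyword/value pairs so drivers and accounts can be listed per application; the application names and connection strings are invented examples, not values from any real environment.

```python
# Split ODBC-style connection strings ("Key=Value;Key=Value;...") into a dict
# so drivers, data sources, and logins can be tabulated per application.

def parse_conn_string(conn):
    pairs = (item.split("=", 1) for item in conn.split(";") if item)
    return {key.strip().upper(): value.strip() for key, value in pairs}

# Made-up inventory: application name -> connection string found in its config.
apps = {
    "billing": "Driver={Oracle ODBC Driver};Dbq=ORCL;Uid=billing;",
    "reports": "Driver={Microsoft ODBC for Oracle};Server=ORCL;Uid=rpt;",
}

for app, conn in apps.items():
    print(app, "->", parse_conn_string(conn)["DRIVER"])
```

The resulting table of drivers and identifiers feeds directly into the connectivity questions above: which interfaces are in use, which databases each application reaches, and which accounts it connects as.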

In addition to questions related to database connectivity, here are some general application assessment questions that you may need to ask:
* Identify the programming language used for the main application as well as any supplementary programs.
* Identify the platform support (hardware and operating system), architecture, and component interfaces of the existing application.
* Locate any existing information about the application. Is the source code available? Other items that may help plan for migration include test scripts, documentation, process flow charts, and use cases.
* Record any dependencies and characteristics, such as third-party libraries, UNIX utilities, or path names. Also, determine if the UNIX application uses X Windows, Motif, or xrt libraries.
* Determine the security architecture of the application. This information is important because features and implementations are different between Microsoft Windows and UNIX platforms.
* Record any encryption protocols currently being used. SSL and PGP are two common encryption methods.


* Verify if the application uses daemon processes. This information is required because UNIX applications can be started in the background and will continue to run even after a user logs off the system. Using application services in Windows Server 2003 provides similar functionality.
* Check for any file system dependencies. This information is required because UNIX and Windows use different file access methods.

For more information on assessing the application before migrating from UNIX to Windows, refer to the UNIX Application Migration Guide, available at http://go.microsoft.com/fwlink/?LinkId=30832.

Database
To migrate an Oracle database to Microsoft SQL Server successfully, the project team must assess the technical details of the current system. The following assessment activities should be performed on the existing Oracle database:
* Identify and record the database name and all configuration information.
* Determine the type of database being used. Common database types include Decision Support System (DSS), Online Transaction Processing (OLTP), or a combination of the two.
* Locate any existing design documents, such as data models, database creation scripts, or data definition language (DDL) specifications.
* Record detailed information on existing database structures, including tables, views, stored procedures, indexes, users, and roles.
* Identify and record the current location of database files, such as Storage Area Network (SAN), Network Attached Storage (NAS), or local attached storage.
* Identify and record the disk space consumed by the existing database files. Also, obtain estimates of the growth rate of the database files.
* Evaluate the existing backup solution, including the frequency and type of backup (full, archive log, and so on).
* Assess the database configuration information, including the sort order and character set.
* Examine the frequency of data dumps and log dumps.
* Determine the dependencies of the database on other databases or platforms. Common examples include distributed databases, replication, data marts, and data warehouses.
* Identify and analyze the server-side scripts in use for activities such as data loads, batch processing, reporting, and administration.
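Much of this inventory can be pulled from Oracle's static data dictionary. The sketch below lists a few representative catalog queries (the DBA_* views are standard Oracle dictionary views, but the exact column choices and the reporting shape are assumptions to adapt to your site) and shows how fetched rows might be summarized; `sample_rows` stands in for the rows a live cursor would return.

```python
# Representative Oracle data dictionary queries for the assessment checklist.
# Run these with a DBA-privileged account; adapt filters to your schemas.
ASSESSMENT_QUERIES = {
    "object_counts": "SELECT object_type, COUNT(*) FROM dba_objects GROUP BY object_type",
    "datafile_space": "SELECT tablespace_name, SUM(bytes) FROM dba_data_files GROUP BY tablespace_name",
    "segment_space": "SELECT segment_type, SUM(bytes) FROM dba_segments GROUP BY segment_type",
}

def summarize(rows):
    """Turn (key, value) result rows into a dict for the assessment report."""
    return {key: value for key, value in rows}

# Sample result a cursor might return for the object_counts query (invented).
sample_rows = [("TABLE", 120), ("VIEW", 45), ("PROCEDURE", 210), ("INDEX", 300)]
print(summarize(sample_rows))
```

A summary of this shape (object counts by type, space by tablespace and segment type) gives an early indication of migration size and complexity, which is also what the Migration Analyzer tool described below reports.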

The Migration Analyzer tool, which is a part of the SQL Server Migration Assistant (SSMA), analyzes the source Oracle databases and produces vital statistics on the size and complexity of the migration. This tool can be downloaded from http://www.microsoft.com/sql/migration. The beta version of this tool is available as of the date of publishing this solution; version 1.0 of the tool is slated to be available in June 2005.

Application Infrastructure
The application infrastructure can be broadly classified as server-related and network-related. Server-related infrastructure relates directly to the hardware and software on the server. Server-related items include operating systems, processors, memory, storage, tape drives, and file systems. Examples of network-related infrastructure include routers, firewalls, switches, domain name servers (DNS), and virtual private networks (VPN). Security infrastructure is classified under both server-related and network-related infrastructure. Common items within the server security infrastructure are Network Information Services (NIS and NIS+) and Lightweight Directory Access Protocol (LDAP). The following assessment activities should be performed for server-related infrastructure:
* Check the capacity and performance of the servers and server clusters of the existing environment. Also verify if the clusters are situated in single or multiple locations.
* Gather the hardware configuration details for any dependencies that might affect the migration, including architecture (32-bit versus 64-bit, hyperthreading, hybrid), processor speed, number of CPUs, and the number of available sockets for upgrades.
* Record any pertinent information about the RAM installed in the server, including size, configuration, and number of open slots available for upgrades.
* Verify disk storage needs for database files and the disk space consumed by the existing database files. Identify the storage requirements, such as:
  * Amount of space needed for current use and growth factors
  * Spindle RPM (disks: 15,000 RPM versus 10,000 RPM; SAN: 1 Gbps versus 2 Gbps)
  * Throughput (theoretical I/O, current I/O, anticipated I/O) for each element of the storage system
  * Disk size(s)
  * RAID levels
  * Storage architecture (SAN, NAS, external storage, internal storage)
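For the space-and-growth items in the checklist above, a simple compound-growth projection is usually enough for first-cut sizing. The sketch below is a hedged example; the current size, monthly growth rate, planning horizon, and headroom factor are placeholder assumptions to replace with values measured during the assessment.

```python
# Back-of-the-envelope storage sizing: project the current database size
# forward at a compound monthly growth rate, then add free-space headroom.

def projected_size_gb(current_gb, monthly_growth_rate, months, headroom=0.25):
    """Compound growth projection with a free-space headroom factor."""
    grown = current_gb * (1 + monthly_growth_rate) ** months
    return grown * (1 + headroom)

# Example: 200 GB today, 3% growth per month, planned for 24 months
# with 25% headroom for maintenance operations and backups.
print(round(projected_size_gb(200, 0.03, 24), 1))
```

The headroom factor is there because database servers need working space beyond the data files themselves (index rebuilds, sort areas, on-disk backups); 25% is only a commonly used starting point, not a rule.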

The following assessment activities should be performed for network-related infrastructure:
* Identify the existing network protocols and network bandwidth.
* Examine the routers, switches, and firewalls for the existing solution. Verify the configuration and location of firewalls and proxies.
* Assess the network topology that exists in the organization. Information about the network infrastructure needs to be identified, including the logical organization of the network, name and address resolution methods, and network naming conventions.
* Analyze the network sites and the bandwidth between the sites. This information can be used to plan optimum strategies and schedules for installation and deployment.
* Assess the trust relationships and policy restrictions of the existing solution.

Develop the Solution Design and Architecture


The Planning Phase of the MSF Process Model includes three design processes: conceptual, logical, and physical. These three processes are not performed in parallel. Instead, their starting and ending points are staggered. These processes are dependent on each other. The logical design is dependent on the conceptual design, and the physical design is dependent on the logical design. Table 3.2 compares the three design processes.



Table 3.2: Comparing Conceptual, Logical, and Physical Design

Conceptual design
Perspective: Views the problem from the perspective of the user and business
Purpose: Defines the problem and solution in terms of usage scenarios and refined requirements

Logical design
Perspective: Views the solution from the perspective of the design/architecture team
Purpose: Defines the solution as a logical set of cooperating objects and services

Physical design
Perspective: Views the solution from the perspective of the developers
Purpose: Defines the solution components and technologies

In a migration project it is quite likely that you already have detailed design documents for the existing system. These documents can provide an excellent starting point for your design process. Enhancements can be made to account for the additional requirements and functionality for the new system.

Build the Conceptual Design


The first design process in the Planning Phase is conceptual design. Once completed, the conceptual design is used in the creation of the logical and physical design processes. Conceptual design is the process of gathering, analyzing, and prioritizing business and user perspectives of the problem and the solution, and then creating a high level representation of the solution in the form of detailed requirements. The team gathers high level requirements during the Envisioning Phase and documents them in the form of use cases and usage scenarios. While producing the conceptual design, the team refines these requirements.

Develop Detailed Requirements


A conceptual design records the system requirements in terms of the following:
* Business requirements
* User requirements
* System requirements
* Operational requirements

Each of these is described under the following headings.

Business Requirements
Business requirements describe the organization's needs and expectations for the solution. These requirements exist at the managerial decision-making level and provide the context in which the solution will operate. Some example business requirements include:
* The database migration from UNIX to Windows should be completed in the given time frame.
* The existing business rules and policies should be maintained on the new system.
* The effort required for migration should justify the cost incurred.
* The existing level of performance and functionality should be maintained after the migration, including disaster recovery, availability, and scalability.


User Requirements
User requirements define the nonfunctional aspects of the user's interaction with the solution. They help you to determine the user interface and performance expectations of the solution in terms of its reliability, availability, and accessibility. In addition, the user requirements help you identify the training that users will need to use the solution effectively. Some example user requirements include:
* The user should be able to use the same or a similar interface as the application running on the UNIX system, to minimize user retraining.
* The time taken to retrieve information from the system should not increase.
* There should be comprehensive user manuals for the new system. Training manuals should also be considered for new users.

System Requirements
System requirements specify the detailed transactions and their sequence in the system. These requirements help the project team define how the new solution will interact with existing systems. Some example system requirements include:
* After the migration, the new system should continue to interoperate with applications as it did on the UNIX platform.
* The system should not require a user credential other than the credentials passed from logging onto the corporate network (single sign-on).
* The system should support internal and remote users.

Operational Requirements
Operational requirements describe what the solution must deliver to maximize operability and improve service delivery with reduced downtime and risk. Some example operational requirements include:
* The system should allow administrators to perform their tasks both on-site and remotely.
* The system should be able to recover from critical failure without major impact and within service levels.
* The system should include a process for managing the total throughput and response time within the stated service levels.
* The system should be able to handle varying levels of user load and transactions. In addition, the site should be designed so that it can be modified and upgraded without affecting availability and performance.
* Large clients should be separated with respect to database infrastructure to minimize the impact on maintenance and availability.

Build the Logical Design


The logical design is the second step in creating the solution design. After identifying the business and user needs in the conceptual design, the logical design defines how the different parts of the solution will coordinate and work together. The logical design defines the parts of the system, provides a framework to hold all the parts of the system together, and illustrates how the system interacts with the users and other systems.



While creating the logical design, the team takes into account all the business, user, operational, and system requirements that state the need for security, auditing, logging, scalability, state management, error handling, licensing, globalization, application architecture, integration with other systems, and so on. There are several ways to represent the logical design of a system. A commonly used visual modeling language is Unified Modeling Language (UML). A detailed discussion of UML is out of the scope of this solution. For more information on UML, refer to The Unified Modeling Language User Guide (Booch, Jacobson, and Rumbaugh 1999) and Use Case Driven Object Modeling with UML: A Practical Approach (Rosenberg and Scott 1999). Figure 3.1 shows a sample logical design for a system using the Logical Object Model, as defined by UML. The sample design represents the system in terms of:
* Objects. These are people or things in the system. "Order" is an example of an object in Figure 3.1.
* Services. These are the behavior or functionality associated with an object. For example, "Track Status()" is a service provided by the "Order" object in Figure 3.1.
* Attributes. These are characteristics or properties of the objects. For example, "Order ID" and "Ship Date" are attributes of the "Order" object in Figure 3.1.
* Relationships. These represent ways in which objects are linked with each other. For example, the "Order" object and the "Order line item" object are shown to have an associative relationship in Figure 3.1 by a thick line. This shows that they are related and a change to one affects the other.

The objects, services, attributes, and relationships can be obtained from the detailed usage scenarios that were developed during conceptual design. The logical design represents a logical view of the complete system. Individual teams, such as those for user interface (UI) development, database development, and application development, can take this representation and start to build the detailed designs for their domains when creating the physical design.
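As a minimal sketch of how the elements of the Logical Object Model map to code, the "Order" object from Figure 3.1 can be written as a class: attributes become fields, services become methods, and the association to order line items becomes a contained list. Any names beyond those shown in the figure (such as the line-item fields) are assumptions for illustration.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class OrderLineItem:
    # Illustrative attributes; the figure only shows the association itself.
    product: str
    quantity: int

@dataclass
class Order:
    # Attributes from the figure: "Order ID" and "Ship Date".
    order_id: int
    ship_date: str
    # Associative relationship: an order holds its line items.
    line_items: List[OrderLineItem] = field(default_factory=list)

    def track_status(self) -> str:
        """The "Track Status()" service from the logical design."""
        return "shipped" if self.ship_date else "pending"

order = Order(order_id=1001, ship_date="2005-04-01")
order.line_items.append(OrderLineItem("widget", 3))
print(order.track_status())
```

The same mapping works in the other direction as well: tables and columns in the database design can be derived from the objects and attributes, which is how the logical design feeds both the application and database physical designs.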



Figure 3.1
Creating a logical design for the application

Logical Design Considerations


The logical design for an Oracle database migration project must take into account the changes that will occur as a result of migrating from Oracle to SQL Server. The following items should be considered:
* Are there any dependencies between client applications and database connectivity?
* Do UNIX scripts support the client applications?
* How are communications between the application and database secured internally and over the Internet?
* How is data distributed across multiple databases and multiple servers?



Examining the conceptual design of the client application will provide an understanding of how it operates and is structured. This information is used to create the physical design of the solution. Two additional design studies that may assist in creating the physical design are discussed in the following section.

High Level User Interface and Database Design


Using the objects, services, attributes, and relationships identified in the logical design of the system, the team might decide to create a high-level user interface and database design. The list of objects and services gives the team an idea about the kind of functionality expected by the users. The team can use this information to design user interface elements such as buttons, text fields, and menu items. Similarly, the object and attribute information of the logical design can be used to develop an initial database design equivalent to the Oracle database being migrated.

Build the Physical Design


Physical design is the process of describing the components, services, and technologies of the solution from the perspective of the development team. The goal of the design is to provide clarity and detail for each of the development teams to develop their components, be it the user interface, business logic, database, or infrastructure. The physical design should include:
* Class definitions for applications.
* The database schema for the solution.
* A baseline deployment model that provides:
  * The network topology, showing hardware locations and interconnections.
  * The data and component technology, which indicates the locations of the solution components, services, and data storage in relation to the network topology.
* Component specifications that include the internal structure of components and component interfaces.
* Programming models that identify implementation guidelines for threading, error handling, security, and code documentation.

A detailed discussion of physical design is out of scope of this solution. For detailed guidance on designing the data layer, designing the presentation layer, and designing the security specifications, refer to chapters 7 through 9 of Analyzing Requirements and Defining Microsoft .NET Solution Architectures (Microsoft Press 2003). The physical design diagram allows you to see how all the components connect together. For completeness, it may include infrastructure elements outside of the scope of the project such as firewalls or network connections that are not directly related to the solution. It will help prove the security of the system, and may show potential throughput bottlenecks, or highlight single points of failure.



Figure 3.2 shows a sample physical design diagram. Your design considerations will require you to modify this example.

Figure 3.2
Creating a physical design for the application and database.

The following sections will help you plan for a migration by identifying the various components of the physical design and outlining the design considerations that are part of a typical migration.



Incorporate Design Considerations


While migrating, you may want to utilize some of the advanced functions that are incorporated into SQL Server and Windows Server 2003. Some references may cite Windows 2000, but the information also applies to Windows Server 2003. For more information, refer to the following resources:
* Designing for scalability: http://msdn.microsoft.com/library/default.asp?url=/library/en-us/architec/8_ar_sa2_8rnd.asp
* Designing for availability: http://www.microsoft.com/technet/prodtechnol/sql/2000/plan/ssmsam.mspx
* Designing for reliability: http://www.microsoft.com/mscorp/twc/reliability/default.mspx
* Designing for performance: http://www.microsoft.com/sql/evaluation/compare/benchmarks.asp and http://www.microsoft.com/technet/prodtechnol/sql/2000/maintain/ansvcspg.mspx
* Designing for interoperability: http://www.microsoft.com/technet/prodtechnol/sql/2000/deploy/sqlorcle.mspx

Hardware Design Considerations


In addition to the advanced features available in SQL Server, there are many software tools available to assist in ensuring that the correct hardware is acquired. Some of these tools are listed here:
* Microsoft Datasizer tool. This spreadsheet helps estimate table sizes. For more information, refer to http://www.microsoft.com/downloads/details.aspx?FamilyId=564C5704-D4F5-4EE8-9F3C-CB429499D075&displaylang=en.
* Dell Powermatch sizing tools. These tools are available to assist in selecting Dell hardware for use with SQL Server. For more information, refer to http://www1.us.dell.com/content/topics/global.aspx/alliances/en/sizing?c=us&cs=555&l=en&s=biz.
* HP sizing tools. These tools are available to assist in selecting HP servers for use with SQL Server. For more information, refer to http://h71019.www7.hp.com/ActiveAnswers/cache/70729-0-0-0-121.aspx.
* Unisys System Sizing Tools for SQL Server. These tools are available to assist in configuring Unisys servers for use with SQL Server. For more information, refer to http://unisys.com/products/es7000__servers/business__solutions/oltp__database__server/sql__server__resources.htm.
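The core calculation behind such table-sizing spreadsheets can be sketched briefly. SQL Server stores rows on 8 KB pages with roughly 8,096 bytes of data space per page, and the "Estimating the Size of a Table" guidance in SQL Server 2000 Books Online uses approximately 8096 / (row size + 2) rows per page for a heap; the row count and row size below are invented inputs, and real estimates must also account for indexes, variable-length columns, and fill factor.

```python
import math

# Rough heap-table size estimate using the Books Online heuristic:
# rows per 8 KB page ~= 8096 / (row_bytes + 2), pages rounded up.

def estimate_table_mb(row_count, row_bytes):
    rows_per_page = math.floor(8096 / (row_bytes + 2))
    pages = math.ceil(row_count / rows_per_page)
    return pages * 8192 / (1024 * 1024)  # 8192-byte pages, reported in MB

# Example: 5 million rows at an average of 200 bytes each.
print(round(estimate_table_mb(5_000_000, 200), 1))
```

Running this kind of estimate for the largest migrated tables gives an early check that the disk storage identified in the infrastructure assessment will actually hold the converted database.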

Validate the Technology


Parallel to the design process, the team will often validate the technologies being used in the solution. During technology validation, the team evaluates the products or technologies to ensure that they work according to specifications provided by their vendor and that they will meet the business needs for the specific solution scenario.


SQL Server Editions and Features


While Oracle is available in Personal, Standard, and Enterprise Editions, SQL Server 2000 is available in the following editions:
SQL Server 2000 Enterprise Edition (64-bit)
SQL Server 2000 Enterprise Edition
SQL Server 2000 Standard Edition
SQL Server 2000 Developer Edition

All editions of Windows XP and Windows Server 2003 include support for SQL Server, and support exists for many earlier versions of Windows as well. As with Oracle, each edition is distinguished primarily by the availability of certain features and by the scale it supports, as shown in Table 3.3.

Table 3.3: SQL Server 2000 Editions Compared (OS Requirements and Scalability)

Enterprise (64-bit) Edition
* Operating system: Windows Server 2003 Enterprise Edition; Windows Server 2003 Datacenter Edition
* Scalability: up to 64 processors; up to 512 GB of memory; maximum database size of 1,048,516 TB

Enterprise Edition
* Operating system: Windows Server 2003 Standard Edition; Windows Server 2003 Enterprise Edition; Windows Server 2003 Datacenter Edition
* Scalability: up to 32 processors; up to 64 GB of memory; maximum database size of 1,048,516 TB

Standard Edition
* Operating system: Windows Server 2003 Standard Edition; Windows Server 2003 Enterprise Edition; Windows Server 2003 Datacenter Edition
* Scalability: up to 4 processors; up to 2 GB of memory; maximum database size of 1,048,516 TB
For a detailed overview of all the SQL Server features and supporting tools, refer to http://www.microsoft.com/sql/evaluation/features/byfunction/default.asp. For information on features by edition, refer to http://www.microsoft.com/sql/evaluation/features/choosing.asp. While migrating to SQL Server, validate that the chosen edition will support the demands that will be placed on the database. A white paper that can aid in choosing the appropriate edition of SQL Server is available at http://www.microsoft.com/sql/techinfo/planning/SQLReskChooseEd.asp.


Windows Server 2003


Windows Server 2003 is available in the following editions:
Windows Server 2003, Standard Edition. For departmental and standard corporate workloads.
Windows Server 2003, Enterprise Edition. For critical or heavy server workloads.
Windows Server 2003, Datacenter Edition. For high levels of scalability and reliability.
Windows Server 2003, Web Edition. For Web serving and hosting.

The goals of the Windows Server System are to promote operational efficiencies through simplified deployment, management, and security; to ensure high levels of dependability, performance, and productivity for application development; and to seamlessly connect information, people, and systems. For more information on the different editions of Windows Server 2003, refer to http://www.microsoft.com/windowsserver2003/evaluation/features/compareeditions.mspx.

Technical Proof of Concept


After validating the technologies, the team creates a prototype project using a small database that contains a representative sample of the data, tables, and other objects found in the production databases. This process validates the approach for the migration and provides useful experience with the various tools that have been selected. The prototype can also be used as the basis for a proof of concept and, ultimately, for the development of the solution itself. This initial proof-of-concept model often produces both answers and additional questions for the team. This information helps in the risk management process and identifies changes to the overall design that must be incorporated into the specifications. Typical proof-of-concept candidates in this type of migration include:
Performance of binary large objects (BLOBs) in storing, accessing, and updating, based on the differences in storage architecture between SQL Server and Oracle.
Migration of sequences. Sequences are not supported by SQL Server 2000, but the functionality can be duplicated.
Replacing reverse key indexes in Oracle with regular indexes in SQL Server.
Converting Oracle packages to SQL Server stored procedures.
Recreating Oracle's profiles (resource and password) functionality using Windows security.
Developing Windows auditing functions to replace specific Oracle functionality (session, privilege, and object auditing).
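As one illustration of the sequence item above, Oracle sequence behavior is most often replaced with an IDENTITY column, but when the application must fetch the next value before the INSERT, a key table can be used instead. The following is a minimal sketch only; the table and procedure names (Sequences, usp_NextSequenceValue) are illustrative, not part of any product:

```sql
-- Hypothetical emulation of an Oracle sequence in SQL Server 2000.
-- Object names are illustrative.
CREATE TABLE Sequences (
    SequenceName varchar(50) NOT NULL PRIMARY KEY,
    NextValue    int         NOT NULL
)
GO

CREATE PROCEDURE usp_NextSequenceValue
    @SequenceName varchar(50),
    @Value        int OUTPUT
AS
BEGIN
    SET NOCOUNT ON
    BEGIN TRANSACTION
    -- UPDLOCK/HOLDLOCK serializes concurrent callers on the sequence row,
    -- mimicking the atomicity of Oracle's seq.NEXTVAL
    SELECT @Value = NextValue
      FROM Sequences WITH (UPDLOCK, HOLDLOCK)
     WHERE SequenceName = @SequenceName
    UPDATE Sequences
       SET NextValue = NextValue + 1
     WHERE SequenceName = @SequenceName
    COMMIT TRANSACTION
END
GO
```

Note that, unlike Oracle sequences, this approach serializes callers on the sequence row, which is one reason its performance is a proof-of-concept candidate.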


Develop the Project Plans


The solution design that was created in the Envisioning Phase is used as the baseline while creating the project plan. The project plan also needs to consider the key success criteria identified in Chapter 2, "Envisioning Phase." The project plans include:
Development plan
Stabilization plan (includes test plan and pilot plan)
Deployment plan
Operations plan
Budget plan
Training plan

In addition to the above plans, a training checklist needs to be created that will identify the existing and required skills. This will help you to plan the training that needs to be provided when you do the work described in Chapter 20, "Operations." The development plan, test plan, and deployment plan are covered in detail under the following headings.

Development Plan
The development plan describes different aspects of the development endeavor, such as tools required, methodologies and practices to be followed, schedule of events, and resources. The primary tasks of the development plan include:
Defining team roles
Identifying team, hardware, and software resources
Providing training to the team members

Refer to the development plan template provided with this solution to assist in creating your development plan. In general, it is recommended that the elements described in the following sections be included in a development plan.

Development Objectives
These define the primary drivers that were used to create the development approach and the key objectives of that approach. The development objectives for a migration project differ from those of a new development. Most commonly, the key objectives are to:
Migrate the application with the least amount of change.
Create a SQL Server database that is almost identical in design and implementation to the source Oracle database.
Migrate the entire solution to a Windows and SQL Server environment with the least amount of change.

Migration List
This provides a detailed listing of the applications and databases that need to be migrated as part of the current project.


Overall Delivery Strategy


This describes the overall approach to delivering the solution. Examples of delivery strategy include staged delivery, depth-first, breadth-first, and features-then-performance. In migrations, the solution already exists and the delivery strategy consists of deploying the replacement solution.

Tradeoff Approach
This defines the approach for making design and implementation tradeoff decisions. For example, you might agree to trade features for schedule improvements, or to trade features for performance.

Key Design Goals


These identify the key design goals and the priority of each goal. Examples of design goals in a migration include interoperability between a UNIX application and the SQL Server database, or rewriting the application using the .NET Framework.

Development and Build Environment


This describes the development and build environment and how it will be managed. Include information on items such as source code control tools, design tool requirements, operating systems, or other software installed. If a development environment for the existing application does not exist, it will need to be created. This situation is common when the development of the applications to be migrated was originally outsourced.

Development Tools
These tools are used to assist in the development and test environments. For a detailed discussion of tools, see the "Set Up the Development and Test Environment" section later in this chapter.

Guidelines and Standards


These list and provide references to all standards and guidelines to be used for the project. Standards and best practices could differ substantially from those of the current Oracle/UNIX environment. However, applying new standards across the entire solution could require extensive rewriting, which could greatly affect the migration timeline. It is recommended that such standards be applied only to the components that are being rewritten or modified.

Versioning and Source Control


This describes how versioning and source control will be managed. This section includes identification of the specific tools that will be used and how developers are expected to use them. Most source control tools work with code irrespective of platform. Because of the change in operating systems, however, new software may need to be acquired. For instance, if CVS or other source control software is currently used in the UNIX environment, consider migrating these functions to a Windows-based alternative, such as Visual SourceSafe.

Build Process
This describes the incremental and iterative approach for developing code and for builds of hardware and software components. It also describes how the build process will be implemented and how often it will be implemented.


Components
This provides a high-level description of the set of solution components and how they will be migrated. In a migration project, most of the solution components should already exist.

Configuration and Development Management Tools


This identifies all the development tools the team will use during the project. This includes tools for all steps in the project: development, testing, documentation, support, operations, and deployment.

Design Patterns
This identifies the design patterns or templates that the team will use for this project and their sources. The team can acquire design patterns from both external and internal sources or create new design patterns. This information will only be necessary in development plans that involve rewriting the application.

Development Team Training


This identifies the training necessary to ensure that the development team will successfully develop the solution. Training is critical in migrations because it is uncommon to already have personnel in place who can adequately support both Oracle and SQL Server environments.

Development Team Support


This identifies the various types of support the development team will require, the sources of that support, the amount of support of each type that the team will require, and the estimated schedule for support. Support may not exist for the target Windows environment, and may need to be developed.

Stabilizing Phase Plans


During the Stabilizing Phase, the testing team conducts tests on a solution whose features are complete. Testing during this phase emphasizes usage and operation under realistic environmental conditions. The team focuses on resolving and prioritizing bugs and preparing the solution for release. During the Planning Phase, the team typically creates the following plans that will be used during the Stabilizing Phase:
The test plan
The pilot plan

Test Plan
The test plan describes the strategy and approach used to plan, organize, and manage the project's testing activities. It identifies testing objectives, methodologies and tools, expected results, responsibilities, and resource requirements. This document is the primary plan for the testing team. A test plan ensures that the testing process will be conducted in a thorough and organized manner and will enable the team to determine the stability of the solution. A continuous understanding of the solution's status builds confidence in team members and stakeholders as the solution is developed and stabilized. The Test Role in the MSF Team Model is responsible for creating the test plan. This team is also responsible for setting the quality expectations and incorporating them into the testing plan. A test plan template is available in the Tools and Templates folder in the zip file download of this guidance at http://go.microsoft.com/fwlink/?LinkId=45289. The key sections of a test plan include:

Forms of Testing to Be Performed


These include:

Code component testing. In a migration, the scope of changes made to the existing application varies from minor connection string changes to a complete application rewrite. The goal of testing is to ensure that any component modified matches the original features and functionality. Instead of basing the test results on a series of requirements, the existing application can be used as the basis for comparison.

Database testing. This includes:
* Physical architecture of the database. Have the data files, transaction logs, and other components that comprise the database been created correctly?
* Logical architecture of the database. Have the tables, views, stored procedures, triggers, and other database objects been created successfully?
* Data. Have the contents of the tables been transferred correctly? Do tables contain the correct number of rows? Is the data valid?
* Functionality. Do the stored procedures, triggers, views, and other items comprising T-SQL code operate in the same manner as the original Oracle objects?
* Performance. Do the response times and throughput meet requirements and match user expectations?

Infrastructure testing. This includes the development and production environments, as well as hardware, software, monitoring software, network, backup strategies, and disaster recovery plans.

Security testing. There are two levels of security testing that should be performed:
* Network level. Test access and privileges.
* Application level. Ensure that the correct users and permission levels exist in the application and database.

Integration testing. If the solution is deployed in phases, test the integration of each phase. Each increment needs to be tested for acceptable integration with other components.

User acceptance and usability testing. These tests can often be recreated from the original design solution. The migration should meet the same goals as the existing solution. Any new or modified functionality should also be tested.

Stress, capacity, and performance testing. These tests are important because the migrated solution will reside in a new environment. The hardware, operating systems, and software will be very different from those of the existing solution. Complete testing should be performed on the solution. Stress testing can also be used to check database coexistence issues that may occur when there is a heavy load on the system.

Regression testing. To ensure that the solution functions correctly while bugs are being corrected, a set of test cases should be available that can be executed as a regression test. Each time the test is run, the results should be logged and compared with the expected results. Testing performed on the original solution can be reused in most scenarios. Using the same data, test cases should perform identically between the existing solution and the new solution being tested.
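The data checks listed under database testing above lend themselves to simple paired queries: the same query is run against the Oracle source (through SQL*Plus) and the migrated SQL Server database, and the results are compared. The following is a hedged sketch; the Orders table and its columns are hypothetical examples, not objects from any particular system:

```sql
-- Run the equivalent query on the source Oracle database via SQL*Plus
-- (SELECT COUNT(*) FROM orders;) and compare the counts.
SELECT COUNT(*) AS OrderRowCount
  FROM Orders

-- Spot-check data validity: aggregates and date ranges should match
-- the values produced by the same query on the Oracle source.
SELECT SUM(OrderTotal) AS TotalValue,
       MIN(OrderDate)  AS EarliestOrder,
       MAX(OrderDate)  AS LatestOrder
  FROM Orders
```

Capturing both result sets in scripts makes the comparison repeatable, so the same checks can be rerun as part of regression testing.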


Test Approach and Assumptions


Describe at a high level the approach, activities, and techniques to be followed in testing the solution. If different approaches are required for the solution's various components, specify which components will be tested by which approach. This is particularly relevant to migrations, because not all components undergo the same amount of change or transformation. The test approach should be adjusted based on the amount of change.

Major Test Responsibilities


Identify the teams and individuals who will manage and implement the testing process.

Features and Functionality to Test


Identify at a high level all features and functionality that will be tested. Perform the following tasks on the most common problem areas in an Oracle to SQL Server migration project:
Ensure that the SQL Server edition acquired meets application requirements. For more information, see the "Validate the Technology" section in this chapter.
Verify that the database architecture follows best practices and will meet performance goals.
Ensure that functions that have moved from the database into the application (and vice versa) during the migration are properly documented and tested.
Check any table design changes made due to restrictions on SQL Server row size.
Test triggers that have been converted from Oracle to SQL Server.

Items to cover when testing the application migration include:
Code changes
Security requirements
User connectivity and remote access
Connection string changes
Functionality changes. Consider both the application and the database for specific items; some database items will affect the application.
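To illustrate the trigger conversion testing mentioned above: Oracle row-level triggers use :OLD and :NEW, whereas SQL Server 2000 triggers fire once per statement and expose the deleted and inserted virtual tables, so converted triggers must be verified against multirow updates. The sketch below uses hypothetical emp and emp_audit tables for illustration only:

```sql
-- Oracle original (for comparison):
--   CREATE OR REPLACE TRIGGER trg_audit_salary
--   AFTER UPDATE OF salary ON emp
--   FOR EACH ROW
--   BEGIN
--     INSERT INTO emp_audit (empno, old_sal, new_sal, changed_on)
--     VALUES (:OLD.empno, :OLD.salary, :NEW.salary, SYSDATE);
--   END;

-- SQL Server 2000 conversion: statement-level trigger joining the
-- inserted and deleted tables so that multirow updates are handled.
-- Assumes empno, the join key, is never updated.
CREATE TRIGGER trg_audit_salary ON emp
AFTER UPDATE
AS
IF UPDATE(salary)
    INSERT INTO emp_audit (empno, old_sal, new_sal, changed_on)
    SELECT d.empno, d.salary, i.salary, GETDATE()
      FROM inserted i
      JOIN deleted d ON d.empno = i.empno
GO
```

A test case for such a trigger should update several rows in a single statement and confirm that one audit row per modified row appears, since that is exactly where row-level conversions tend to break.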

Type of Required Hardware


The type of hardware for the test setup should closely resemble the architecture proposed for the production environment. The availability of hardware may depend on its type, its cost, and any other environmental factors that need to be considered at the location of the test setup. In situations where the information needed to plan the production environment does not yet exist, the test environment has to be planned to capture that information. In some situations, using virtual machine software (such as Microsoft Virtual Server 2005) to recreate test environments and perform software testing is cost-effective.


Location of Test Setup


Decide on the location of the test setup. If development, testing, and production are in physically distant (possibly offshore) locations, then decisions have to be made on how to set up the environments for optimal performance for each group. For example, if the test and production locations are far apart, creating copies of a large production database across a WAN can be an issue. Also, the test environment should be isolated and should not interfere with any production activity.

Resources Required
Provide for the different equipment required for the setup, such as power outlets; racks or stands; storage media, such as disks; and a backup system. Software, services, and tools for the test environment have to be procured and set up per the instructions provided by the test team.

Expected Results of Tests


Describe the results that should be demonstrated by the tests. This information includes the expectations of both the solution team and the testers. This section also defines whether the results must be exactly as anticipated or whether a range of results is acceptable. The expected results originate from the existing solution: the migrated solution should perform the same as or better than the original system.

Deliverables
Describe the materials that must be made available or created to aid in conducting the tests and for presenting test results. In many migration situations, existing test scripts can be reused.

Testing Procedures and Walk-Through


Describe the steps the testing team will perform to ensure quality tests.

Tracking and Reporting Status


Define the information that test team members will communicate during the testing process. This section defines the specific test status information that will be created and distributed. This information normally includes status information for each test case and the probability of completing the test cycle on schedule.

Bug Reporting Tools and Methods


Describe the overall bug reporting strategy and methodology. This section also defines what will qualify as a bug in the code, product features, and documentation.

Schedules
Identify the major test cycles, tasks, milestones, and deliverables. This section also describes who is responsible for each test cycle and its tasks. In addition, it identifies the expected start and completion date for each test cycle and the tasks within that cycle. When planning for test cases, identify the sections of code that have changed during migration and the functionality they provide, and create test cases based on these functionality changes. When changes are minimal, such as a Java application that only requires the database connection string to be modified, the test case should ensure that the application is connecting properly to the SQL Server database. In most migrations, updates to connectivity and APIs represent a majority of the changes. In situations where the APIs have been changed, test cases should be created to test each changed component. These cases should test database queries and updates performed by the application to ensure that results are as expected. Gather various scenarios of usability and user interaction and repeat them for each component. Query results and database updates can be checked against the same functionality in the existing solution. Refer to the test plan template to assist in creating your test plan. This template is available in the Tools and Templates folder in the zip file download of this guidance at http://go.microsoft.com/fwlink/?LinkId=45289.

Pilot Plan
The pilot plan describes how the team will move the candidate release version of the solution to a staging area and test it. The goal of the pilot is to simulate the equipment, software, and components that the solution will use when it is active. This plan also identifies how issues discovered during the pilot will be resolved. The pilot plan includes details about how to evaluate the pilot; the results of the evaluation will inform the decision whether to move the solution to production.

Project teams often conduct one or more pilots to prove the feasibility of solution approaches, to experiment with different solutions, and to obtain user feedback and acceptance of proposed solutions. Pilot solutions implement only those subsets or segments of requirements of the functional specification that are necessary to validate the solution.

Note: Some projects may not require a pilot. For example, if there are few changes to an application, the consensus may be not to perform a pilot program. However, even in situations where an application is ported, it is important to remember that the solution includes both the application and the new SQL Server database back end. Because the back-end database, environment, and platform will change in the new solution, a pilot is recommended for highly critical applications.

Pilots are also recommended for applications where specific performance criteria, such as response times or throughput, must be met. Pilot programs can also be useful in situations where there is a redesign in the migration and the components work differently on the new platform. In addition, piloting can give operational and deployment personnel practical experience with new technologies and can benefit the actual deployment schedule and operational transition schedules. The pilot plan provides the means to validate the business requirements and the technical specification prior to deploying the solution into production.
Planning the details of the pilot ensures that the participating project teams identify their roles, responsibilities, and resource requirements specific to pilot development, testing, and deployment activities. Refer to the pilot plan template for assistance in creating your pilot plan. This template is available in the Tools and Templates folder in the zip file download of this guidance at http://go.microsoft.com/fwlink/?LinkId=45289.


Deployment Plan
During the Deploying Phase, the team deploys the solution technology and components, stabilizes the deployment, transitions the project to operations and support, and obtains final customer approval of the project. Planning for deployment starts in the Planning Phase by developing a detailed deployment plan. This plan should be developed by the Release Management Role with assistance from the rest of the team. See the deployment plan template in the Tools and Templates folder in the zip file download of this guidance at http://go.microsoft.com/fwlink/?LinkId=45289. Key elements of the deployment plan are described in the following sections.

Deployment Scope
This section describes the solution architecture and scale of deployment. The deployment team needs to consider deployment of the database, the server, and the client applications; each has its own challenges that require a different process and set of tools.
Seats. This describes the magnitude of the deployment in terms of sites, number of workstations, countries and regions, and other relevant size factors.
Components. This lists and describes the components to be deployed and any critical dependencies among them.
Architecture. This describes the solution's architecture and how it might affect deployment.

Deployment Tools
The database and server deployment will often target one or a few machines, which can be deployed manually. However, the clients may have to be deployed to hundreds of computers across the corporate network, which may require software distribution and deployment tools. These tools are discussed in the "Deploying the Client Application" section in Chapter 19, "Deploying Phase."

Backups
The solution and the source-controlled scripts must be backed up. The items to plan backups for include a snapshot of the source code of the existing solution and of the migrated solution, along with version and build information. This ensures that the environment can be recreated later, if required. It also helps when problems occur, especially when a fallback is implemented. In a phased migration, backups of the source should be performed between phases, as well as at any other critical junctures, as deemed necessary. If a partial rollback is needed, backups can save time.
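For the database side of the backups described above, a full backup taken before (and between) deployment phases supports the fallback plans discussed later in this chapter. The following is an illustrative sketch; the database name and backup path are hypothetical:

```sql
-- Full backup of the migrated database before a deployment phase.
-- Database name and file path are illustrative.
BACKUP DATABASE MigratedDB
    TO DISK = 'D:\Backups\MigratedDB_predeploy.bak'
    WITH INIT,
         NAME = 'MigratedDB pre-deployment full backup'

-- Verify that the backup media is readable before relying on it
-- as part of a fallback strategy.
RESTORE VERIFYONLY
    FROM DISK = 'D:\Backups\MigratedDB_predeploy.bak'
```

Recording the backup name and date alongside the build information makes it straightforward to match a database snapshot to the corresponding source code snapshot during a rollback.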

Deployment Schedule
This identifies the critical dates and anticipated schedule for the Deploying Phase.

Deployment Resources
This identifies the workforce that will be needed to complete the deployment and the sources of the personnel.

Solution Support
This describes how the users will be supported during the deployment.

Help desk. This describes the support provided to users and applications by the help desk team, including support for direct user questions and application issues, as well as in-depth support for new or difficult issues.
Desktop. This describes any changes in current workstation application support that might be required during deployment.
Servers. This describes any changes in current server support that might be required during deployment.
Telecommunications. This describes any changes in current telecommunication support that might be required during deployment.


Coordination of Training
This describes how end-user and support staff training is coordinated with the deployment schedule.

Communication
The necessary information, such as the dates for deploying and completing the migration, must be communicated to the customers and to all those who may be directly or indirectly affected by the migration. The deployment plan should be made available to the entire team so that each person is well informed of the actions they are responsible for and when each should occur. This is especially critical in large migrations that must be completed on a compressed schedule, such as a weekend.

Work Schedules
To prevent the business operations from being affected by the deployment, work schedules should be set accordingly. The work schedules should consider training the operations personnel and end users.

Resource Availability
The resources in the production environment should be prepared for the solution deployment. The infrastructure, servers, and workstations should be prepared for the deployment, and the necessary installation files should be ready to conduct a smooth and timely deployment. Spare disk drives, cables, network cards, power outlets, stands, and backup power supplies should be made available in case of hardware failure during the deployment. In many instances, such precautions are not taken until a solution is already deployed, so it is important to produce a plan that captures all of these tasks.
Assembling the team. The rollout team should be assembled and given the training needed to perform the deployment. The rollout team members can come from the project team, but the team should also include members of the operations staff, because they need to develop a thorough knowledge of the functioning of the solution. Alternate team members should also be selected to prepare for contingencies.
Training the team. The deployment team should be given the necessary training and privileges to overcome the difficulties that could arise during the rollout. The various installation manuals and documentation should be provided to the rollout team. A special help desk should be set up for the users and the rollout team to turn to while deploying the solution.
Documenting unplanned procedures. During the rollout, some procedures may be implemented that were not a part of the rollout plan. The deployment team may need these procedures to stabilize the deployment or to correct unforeseen situations created by their execution.


These emergency procedures should be documented, including the cause and effect of each procedure adopted. Because there are risks involved, the risk management plan has to be reviewed and updated. The risk management template is available in the Tools and Templates folder in the zip file download of this guidance at http://go.microsoft.com/fwlink/?LinkId=45289.

Deployment Strategies
The main strategies that need to be considered for deployment include:

Cutover strategy. The critical point in a migration project is when the existing system is shut down and the new system goes live in the production environment. The cutover strategy focuses on the tasks, activities, and time required during the final days of the project. The available cutover strategies are:
* Straight cutover. This strategy migrates the database from one environment to the other without any intermediary steps. This option is used when downtime does not limit the migration and the migrated database does not contain critical information.
* Parallel cutover. A replication-based strategy used by organizations that do not have the downtime required to build and migrate a database in one step. A database is created in the new environment and data is migrated on a continual basis: a process is put in place to replicate current database changes into the new database synchronously or asynchronously.
* Serial cutover. This option migrates the database in phases. This approach is usually implemented when multiple databases need to be migrated or when data within the database is partitioned based on business functions. The serial cutover strategy is difficult to implement when the database is complex and interdependent with other infrastructure elements.

Phase-out strategies. In large, complex migrations, the solution is rolled out in phases. This enables the team to assess and reduce the business risks, and allows the deployment to take place in small, manageable parts. There are two types of phase-out:
* Vertical phase-out. In a vertical phase-out, modules are rolled out one at a time, which implies that the source solution and the target solution must interoperate. The advantage is that the solution is rolled out in manageable parts; the disadvantage is that data bridge programs between the two solutions have to be deployed at each phase.
* Horizontal phase-out. A horizontal phase-out rolls out the solution in phases across different geographical areas. When implementing a horizontal phase-out, differences in the business and application environments of each area must be accounted for.

Fallback strategies. A fallback is necessary if the migration encounters difficulties or delays that impact the business. This can result from:
- An unfeasible migration strategy
- Undocumented requirements or procedures
- Insufficient testing
- Disasters
- Unforeseen situations

Each phase and each action in the rollout has to be associated with an impact to determine whether a fallback is required if that action cannot be successfully completed. Dependencies for every action should also be mapped to foresee the impact on tasks further down the chain. Every precaution should be taken to ensure that a fallback can be completed successfully. For example, in case of a fallback, the licenses of the existing solution and its encryption certificates need to still be valid. A fallback plan should enable the solution to be rolled back to a consistent and stable state. Workaround strategies or contingency plans should be tried before deciding to fall back to the original database. The fallback strategies to be implemented depend on the cutover strategies that were selected. The fallback strategy should account for any transactions executed since the start of deployment. To select the optimal fallback strategy, a list of the possible failures needs to be documented. The different fallback strategies available are:
- Fallback after straight cutover. In case of a fallback in a straight cutover, the UNIX environment needs to be frozen at the time the cutover starts. Thus, in case of a fallback, it is possible to restart the UNIX environment with minimal effort. A fallback may require re-establishing broken sessions between the client computers and the database. To reduce downtime and rework during a fallback, checkpoints should be established before clients are migrated, and a subset of clients should be cut over and tested before the rest. In addition, adequate time should be provided for a fallback with minimal user disturbance.
- Fallback after parallel cutover. The fallback process involves reversing the direction of the replication that was used in the cutover, that is, from SQL Server back to the Oracle database. Consequently, the Oracle database is brought up to date for the period since the cutover. The client side is the main technical area to be addressed during the fallback; migrating the clients in smaller batches can ease the pain of reverting to the original environment.
- Fallback after serial cutover. The fallback is similar to that in a straight cutover, with the entire migration viewed as a collection of smaller cutovers. The serial, or phased, cutover enables you to fall back to a previous consistent state without having to undo all changes. You are also able to solve problems in small increments.


For more information and assistance in developing your deployment plan, refer to the deployment plan template.

Create the Project Schedules


Based on the functional specification and the project plan, the team creates a project schedule that defines the release date of the solution. The team may choose to divide the schedule functionally; for example, different test subteams can work on testing different features of the solution, and each subteam creates its own schedule. A release date is decided after integrating the schedules from each team lead into the master project schedule. A release date enables the team members to prioritize functions, assess risk, and plan effectively.

Estimating the Effort


According to the various functional components, each task is allotted to the team that is developing the component. Each team prepares plans based on its role and deliverables. Finally, all the plans are reviewed and dependencies between them are outlined. These plans are then merged into a master project plan. Estimating each team's effort can be accomplished as follows:
- Each team leader prioritizes the functions, roles, and responsibilities for the entire team.
- Team members estimate the duration of each task assigned to them. The time given to perform an individual task should range from one half day to one week. If the estimated duration is too short, the overhead of managing the task may be too high; if it is too long, managing the scope of work becomes too difficult.
- Perform bottom-up estimating to create accurate schedules. In this method, the individual performing a task estimates the time needed to complete it, usually based on experience performing similar tasks. In addition, individuals who estimate their own work feel more accountable for their schedules.
- Implement risk-driven estimating, which has team members perform the most difficult and highest-risk tasks first. This minimizes risk and provides extra time to mitigate it.

When scheduling a migration, there are many time-related elements to consider. For instance, the development and test environments need to be set up before the Developing Phase starts in order to maximize team productivity, mitigate risks, and avoid delays. It is also recommended to migrate the database before migrating the application, because object-level differences between the Oracle and SQL Server databases may require application changes. The migration environment also affects the schedule. Projects that port the application, such as Java applications, can be completed relatively quickly; environments that require rewriting the application, such as Oracle Forms and Pro*C, need a longer development schedule. Also, it is more challenging to migrate data in OLTP environments than in batch processing environments. Skills can also affect scheduling. In an optimal migration, a database administrator on the team is skilled in both SQL Server and Oracle environments. If the current database administrators are familiar only with the Oracle environment, extra time should be allowed for training and skill acquisition; the same applies to application developers. In addition, pilot testing and migrating smaller applications before the main application can give the entire project team experience that proves helpful when migrating the main application. After you have defined a schedule, you can allocate resources to it. You should:
- Update the project plan with a detailed allocation of resources.
- Check and configure the availability of resources, identifying any possible overuse of resources.
- Establish a baseline and track progress on tasks and budget.


Set Up the Development and Test Environments


The final tasks of the Planning Phase are setting up the development and test environments. Completing these tasks allows for a smooth transition into the Developing Phase. In many ways, the development and test environments are similar: both are modeled on the solution environment, and both are used to refine the solution. There are also differences between the functions of the two environments. The development environment is used to develop the migration solution for the database, client application, and server applications, and it may change as the solution is developed. By contrast, the test environment is more strictly controlled to ensure that it emulates the production environment. The test environment is used to complete all necessary testing for the database and application. It must be as similar as possible to the production environment so that stress and scalability tests have a measure of accuracy. Often in migration projects, the development environment is used for testing once the Developing Phase has ended. In most situations, the test environment requires a higher level of performance than the development environment, and additional hardware will need to be added. If you are reusing equipment, it is best to reload all operating systems, applications, and data. This ensures that test results are not skewed by a system setting or error left over from the development environment. The test environment also has different requirements than the development environment. For instance, testing tools should be installed for tasks such as recording performance factors or increasing the load during stress testing. The tools required for development and testing are:
- Database modeling tools for reverse engineering existing databases and generating data definition statements for the new environment. The more popular modeling tools are AllFusion ERwin from Computer Associates and ER/Studio from Embarcadero.
- Software development tools, such as Visual Studio .NET or Visual Basic .NET, Perl, Python, PHP, and Java.
- Software management tools for source control and bug tracking. Visual SourceSafe and Concurrent Versions System (CVS) are popular for source control. PVCS from Merant and ClearQuest from Rational can be used for bug tracking.
- An application debugger (if one is not bundled with the application development tool). Most application development tools come with their own built-in debuggers.
- Software for administering Oracle and SQL Server. Both Oracle and SQL Server offer Enterprise Manager, which contains a comprehensive set of tools for all administrative tasks. Several third-party tools are also available, such as Toad and SQL Navigator from Quest, PL/SQL Developer from Allround Automations, Unicenter SQL-Station from Computer Associates, and Rapid SQL from Embarcadero.
- Database and data migration tools. Tools aimed specifically at Oracle to SQL Server migration are available from Microsoft.
- UNIX interoperability tools, such as SFU, FreeTDS, and so on.

- Scripting tools for writing test scripts. Perl is a popular scripting language that is portable across UNIX and Windows. Windows Script is available from Microsoft.
- Performance monitoring and analysis tools for the various components of the solution. Perfmon is a versatile tool for monitoring the Windows system, and it can capture statistics for any application that runs on a Windows computer. Filemon and Netmon can be used specifically for the file system and network traffic.
- A scheduling tool for automating tests and capturing statistics. The Windows scheduler can be used for this purpose. Autosys from Computer Associates and Control-M from BMC are also popular.

For additional information on testing, refer to Chapter 18, "Stabilizing Phase."


4
Developing: Databases Introduction
Introduction and Goals
For a migration project, the Developing Phase is the time when the team builds the solution components (code and infrastructure) as well as documentation. Typically, this work consists of modifying existing code in a way that enables it to work within the new environment. When new code is written, some aspect of the original component usually remains unchanged; for example, exposed APIs or specific component behaviors. Both the modification of existing code and the development of new code in this context are considered migrating activities. Although the development and migration work is the focus of this phase and this guidance, the entire team is active. For example, some team members create deliverables such as training materials, rollout and site preparation checklists, and updated pilot and rollout plans, while others perform functional testing. Some development work may continue into the Stabilizing Phase in response to test results. This chapter serves as an introduction to the database development-oriented chapters. Chapters 5 through 8 describe and demonstrate how to implement a Microsoft SQL Server database that resembles the source Oracle database as closely as possible. These chapters also describe how to produce a solution that meets your organization's requirements for performance, functionality, availability, scalability, and recoverability. The following list provides an overview of the content in each of the database development-oriented chapters:
- Chapter 4: Developing: Databases Introduction. This chapter prefaces the different tasks in migrating the Oracle database to SQL Server 2000.
- Chapter 5: Developing: Databases Migrating the Database Architecture. This chapter describes the steps in creating an instance of SQL Server that is equivalent in architecture to the original Oracle database.
- Chapter 6: Developing: Databases Migrating Schemas. This chapter shows how to migrate a schema owner and its objects to a SQL Server database.
- Chapter 7: Developing: Databases Migrating the Database Users. This chapter contains detailed steps for creating users in the SQL Server databases and granting them the same kind of privileges they had in the original Oracle database.

- Chapter 8: Developing: Databases Migrating the Data. This chapter explores the different options for migrating the application data from Oracle to SQL Server and provides details on the usage of each option.
- Chapter 9: Developing: Databases Unit Testing the Migration. This chapter contains the processes for testing the migrated database, its objects, and its data.

The Developing Phase formally ends with the Scope Complete Milestone. At this major milestone, the team gains formal approval from the sponsor or key stakeholders. All solution elements are built, and the solution features and functionality are complete according to the functional specifications agreed upon during planning. Table 4.1 describes the major tasks that need to take place while migrating the database.

Table 4.1: Major Tasks and Deliverables

Task                                       Deliverable
Migrate the Oracle database architecture   SQL Server instance that is equivalent in structure to the source
Migrate the schemas                        SQL Server databases housing the application schemas
Migrate the users                          SQL Server users with privileges similar to the source
Migrate the data                           Complete database ready for use
Migrating the Database


Because they are both relational databases, Oracle and SQL Server have more similarities than differences in their features and functions. Both Oracle and SQL Server provide all the core functionality expected of a RDBMS, including concurrency, security, backup, recovery, scalability, availability, session management, transaction management, storage management, resource management, and so on. Both exhibit a moderate degree of compliance to ANSI standards. Hence migrating from Oracle on UNIX to SQL Server on Windows poses less of a challenge than would normally be expected when migrating from one proprietary computing system to another. Because SQL Server and Oracle are relational databases, it is not necessary to redesign the database. It is not necessary to rearchitect the application for performance or security because the two DBMSs are very similar in their architecture. The majority of the work involved is in accommodating the differences in the implementation of items such as indexes, data types, and SQL and DBMS features, such as partitioning. To simplify the migration of the database, the entire process can be organized into four major tasks:


Figure 4.1
Tasks in the migration of a database

Each of the four major tasks includes a set of subtasks, as follows:
1. Migrate the database architecture
   a. Build the SQL Server instance
   b. Configure the server
   c. Migrate the storage architecture
2. Migrate the schemas
   a. Migrate the schema
   b. Migrate the schema objects
3. Migrate the users
   a. Create user accounts
   b. Create roles and grant privileges
4. Migrate the data
   a. Plan the data migration
   b. Execute the data migration

In Chapters 5 through 8, examples are provided for each of the major tasks and subtasks at the end of the chapter. The discussion of testing the migrated database is in Chapter 9, "Developing: Databases Unit Testing the Migration." Oracle DBAs who are new to SQL Server should also read Appendix A, "SQL Server for Oracle Professionals," which provides a condensed guide for educating Oracle DBAs about the internal architecture of SQL Server by leveraging existing knowledge of Oracle. The appendix also provides information about how to perform some of the common administrative functions in SQL Server.


5
Developing: Databases Migrating the Database Architecture
Introduction and Goals
The first task in the migration of the database is to create an instance of SQL Server that will provide the container (the RDBMS) for creating schemas and schema objects to cater to the needs of applications. Your main focus when creating the database should be on its architecture: the instance architecture involving server resources, such as processor power and memory; and storage architecture, including file systems. Your objective for both of these areas is to derive performance that is equivalent to or better than the original solution. The two most common implementations of databases are Online Transaction Processing (OLTP) systems and Decision Support Systems (DSS). OLTP systems are characterized by large numbers of short transactions that frequently add or modify data. DSS systems, on the other hand, contain large volumes of mostly static data imported into them from OLTP databases, file systems, and other such systems for the purposes of analysis and reporting. There are fewer transactions in DSS systems, but they run much longer. OLTP system performance is influenced by locking. DSS performance is typified by large reads, sorts, computation, and so on. The guidance provided in this solution is focused on OLTP systems. The types of operations performed in OLTP and DSS systems impose different demands on the database resources. While the architecture for OLTP and DSS systems differs, resource considerations are similar. Though there are some objects that are specialized for DSS systems, most objects are common to both systems. As a result, the guidance provided applies to DSS systems, as well.

Build the SQL Server Instance


This section discusses the planning and preparation for installing the SQL Server RDBMS software. A more detailed discussion can be found in SQL Server Books Online under the Installing SQL Server topic.


Pre-Installation Planning
You should follow these design guidelines for the layout of a SQL Server software installation and database files:
- Independent subdirectories. Files should be separated by category and by instance to limit the impact of changes and to ease navigation. When named instances of SQL Server are created, the associated directories are created with instance names for identification purposes.
- Consistent naming convention for files. Use the following file name extensions: .mdf for primary datafiles, .ndf for secondary datafiles, and .ldf for log files. The default file naming convention is dbname_data.mdf for the datafile and dbname_log.ldf for the log file.
- Integrity of home directories. Keep the SQL Server software separate from the datafiles. This allows the software to be moved or deleted without affecting the application.
- Separation of administrative information for each database. Store system data in the master database, separate from other data.
- Separation of tablespace content. Every instance of SQL Server comes with the system databases master, model, msdb, and tempdb.
- Tuning of I/O load across all disks. SQL Server has filegroups, each with multiple files, which give the same advantages as tablespaces. In addition, storage allocated to an object is distributed evenly across all datafiles belonging to the filegroup.
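These layout guidelines can be sketched in a single CREATE DATABASE statement. The database name, file paths, and sizes below are hypothetical, for illustration only:

```sql
-- Hypothetical example: a database named Hr that follows the naming
-- conventions in this section (.mdf primary, .ndf secondary, .ldf log)
-- and spreads a user filegroup across two disks.
CREATE DATABASE Hr
ON PRIMARY
    (NAME = Hr_data,  FILENAME = 'D:\SQLData\Hr_data.mdf',  SIZE = 100MB),
FILEGROUP HrFG1
    (NAME = Hr_fg1_1, FILENAME = 'E:\SQLData\Hr_fg1_1.ndf', SIZE = 500MB),
    (NAME = Hr_fg1_2, FILENAME = 'F:\SQLData\Hr_fg1_2.ndf', SIZE = 500MB)
LOG ON
    (NAME = Hr_log,   FILENAME = 'G:\SQLLog\Hr_log.ldf',    SIZE = 200MB)
```

Because Hr_fg1_1 and Hr_fg1_2 belong to the same filegroup, SQL Server fills them proportionally, much as extents are spread across the datafiles of an Oracle tablespace.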

Installation
The following important installation options have to be determined prior to installing SQL Server:
- Instance name. SQL Server has a provision for a default instance, which takes the network name of the server. Multiple instances with assigned names can also be created. SQL Server naming conventions do not preclude reuse of Oracle instance names.
- Network libraries. SQL Server can communicate using several network protocols; TCP/IP and named pipes are the most common. To minimize resource utilization, configure only those protocols that are required.
- Service accounts. Like any other service, both SQL Server and SQL Server Agent (the SQL Server scheduling service) require a Microsoft Windows account. You should run SQL Server under a domain account. It does not have to be a local administrator or a domain account that has local administrative privileges.
- Authentication mode. The options available in SQL Server are Windows Authentication mode and Mixed mode. Mixed mode allows login access to SQL Server either by a valid Windows account or a valid SQL Server login. The authentication mode can be changed at any time in the future; changing it requires that the database server be restarted to take effect.
- Licensing mode. Indicate whether this SQL Server is licensed by processor or per seat (Client Access License) and the number of licenses for the given mode.
- Location of files. With Oracle, even though the installer provides the option of creating a starter database, installing the software and creating databases are two independent events. Installing the SQL Server software also creates a "database system" with the databases master, model, msdb, northwind, pubs, and tempdb created by default. The destination locations of the SQL Server application files and the datafiles for the default databases can be specified at installation.
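After installation, several of these choices can be verified from T-SQL with the SERVERPROPERTY function. A minimal sketch:

```sql
-- Check how the installed instance is configured.
SELECT
    SERVERPROPERTY('ServerName')               AS server_and_instance,
    SERVERPROPERTY('InstanceName')             AS instance_name,     -- NULL for the default instance
    SERVERPROPERTY('IsIntegratedSecurityOnly') AS windows_auth_only, -- 1 = Windows Authentication, 0 = Mixed mode
    SERVERPROPERTY('ProductVersion')           AS product_version
```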


Configure the Server


A SQL Server instance can be configured similarly to an Oracle instance. The areas covered here include memory, CPU, and the listener. Architecturally, both Oracle and SQL Server can be divided into instance and database. The instance is made up of memory areas and database processes. In SQL Server, the division of the memory pool into sub-caches (buffer cache, procedure cache, log cache, connection context, and system data structures) based on function closely resembles the Oracle SGA and its components. SQL Server has threads that perform work similar to the Oracle foreground and background processes. In Oracle, the characteristics and behavior of the database and the instance are determined by a large set of parameters stored in the initialization file (init.ora) or server parameter file (spfile). These parameters cover a diverse set of resources, such as memory, processes, network, disk, I/O, connections, files, character set, and so on. The non-default values of the Oracle initialization parameters can be obtained from the parameter file if one is in use. If a server parameter file is in use, the parameter values can be obtained using one of the following options:
- Convert the server parameter file (spfile) to an initialization parameter file by executing the following statement:

CREATE PFILE FROM SPFILE;

- Query the database by executing the following statement:

SELECT name, value FROM sys.v$spparameter WHERE isspecified = 'TRUE';

SQL Server does not have an equivalent for every initialization parameter. The configuration options in SQL Server can be specified using SQL Server Enterprise Manager or using the sp_configure system stored procedure. Syntax for use is shown in the following statement.
sp_configure [option, [value]]
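For example, a sketch of viewing and changing one option (the value is illustrative only):

```sql
-- List the current and configured values of all user-visible options.
EXEC sp_configure

-- 'user connections' is an advanced option, so expose advanced options first.
EXEC sp_configure 'show advanced options', 1
RECONFIGURE
GO
-- Set a fixed number of user connections (0, the default, means unlimited).
EXEC sp_configure 'user connections', 500
RECONFIGURE
GO
```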

A discussion of every Oracle initialization parameter is beyond the scope of this guide. The configuration options available in SQL Server can be categorized into user-configurable and advanced. The advanced options, which are similar to Oracle's hidden parameters, are either self-configuring or should be manipulated only with the advice of a certified SQL Server technician. Examples of Oracle parameters that have an equivalent configurable parameter in SQL Server are provided in Table 5.1:

Table 5.1: Examples of Oracle Initialization Parameters with Equivalents in SQL Server

Oracle                    SQL Server
processes                 max worker threads
fast_start_mttr_target    recovery interval
sessions                  user connections
dml_locks                 locks


Sometimes a configuration option can be set at one of several different places. For example, what the SQL Server max degree of parallelism option controls server-wide is set in Oracle with the DEGREE clause at the object level or with hints at the query level. Certain options that are configurable in one database may have a fixed implementation in the other. For example, the Oracle LOG_CHECKPOINT_INTERVAL and LOG_CHECKPOINT_TIMEOUT parameters used to tune checkpoints do not have equivalents in SQL Server; in SQL Server, checkpoints occur automatically based on the number of records in the log and the recovery interval setting. Similarly, UNDO_MANAGEMENT, UNDO_TABLESPACE, and ROLLBACK_SEGMENTS, which define the storage for transaction rollback or undo data, are implemented in SQL Server as part of the transaction log of each database.
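As an illustration, two of the equivalents from Table 5.1 and the surrounding discussion can be set through sp_configure. Both are advanced options, and the values shown are illustrative only:

```sql
EXEC sp_configure 'show advanced options', 1
RECONFIGURE
GO
-- Rough counterpart of Oracle's FAST_START_MTTR_TARGET:
-- target at most 2 minutes of recovery work per database.
EXEC sp_configure 'recovery interval', 2
RECONFIGURE
GO
-- Server-wide parallelism cap, where Oracle would use the DEGREE
-- clause per object or a query hint.
EXEC sp_configure 'max degree of parallelism', 4
RECONFIGURE
GO
```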

Configure Memory
SQL Server has far fewer configuration parameters than Oracle. In Oracle, there are several parameters that are used to control memory allocation to the instance and its sub-components, including:
- SGA_MAX_SIZE
- DB_CACHE_SIZE
- SHARED_POOL_SIZE
- LARGE_POOL_SIZE
- JAVA_POOL_SIZE
- LOG_BUFFER
- SORT_AREA_SIZE

SQL Server offers the parameters min server memory and max server memory, which can be used to limit the amount of server memory that can be utilized by the database system. SQL Server cooperates with the operating system to dynamically adjust the amount of memory used based on the demands placed on the server by other applications. However, this behavior puts the burden on dimensioning the system so that it is capable of running the database under known loads and peaks, and on avoiding competing workloads that might jeopardize the system resources available to the database. All memory areas within the memory pool are dynamically adjusted by the SQL Server code to optimize performance and do not need any administrator input. Hence you will find very few memory-related configuration parameters in SQL Server. The amount of memory that can be utilized by SQL Server can be configured either using Enterprise Manager or through T-SQL. The two options are provided in the following procedures:


To configure the amount of memory available for SQL Server in Enterprise Manager, follow these steps:
1. Expand the server group.
2. Right-click Server (Server Name\Instance Name), and then click Properties.
3. Select the Memory tab.
4. Select Dynamically configure SQL Server memory to set the memory range within which SQL Server should operate, or select Use a fixed memory size (MB) to allocate a fixed amount of memory. The actual values can be selected using the memory slider.

The code for setting a range of memory and a fixed amount of memory for SQL Server is provided in the following procedures. To configure a range of memory for SQL Server using T-SQL, enter the following commands:
USE master
EXEC sp_configure 'show advanced options', 1
RECONFIGURE
GO
sp_configure 'min server memory', 100   -- 100 MB
RECONFIGURE
GO
sp_configure 'max server memory', 4000  -- 4000 MB
RECONFIGURE
GO

To configure a fixed amount of memory for SQL Server using T-SQL, enter the following commands:
USE master
EXEC sp_configure 'show advanced options', 1
RECONFIGURE
GO
sp_configure 'min server memory', 400
RECONFIGURE
GO
sp_configure 'max server memory', 400
RECONFIGURE
GO

Note A database server restart is required to activate the new settings of all advanced options.

If the amount of physical memory exceeds 3 GB, the Address Windowing Extensions (AWE) feature of Windows has to be enabled, using the awe enabled configuration parameter, for SQL Server to use the additional memory. When this feature is enabled, SQL Server stops dynamic memory management at the server level, and the max server memory parameter has to be set. If the max server memory parameter is set, cache management is similar to SGA sizing in Oracle but without the complications of tuning the individual cache components. To configure SQL Server to use more than 3 GB of memory using T-SQL, enter the following commands:
USE master
EXEC sp_configure 'show advanced options', 1
RECONFIGURE
GO
sp_configure 'awe enabled', 1
RECONFIGURE
GO
sp_configure 'max server memory', 6000
RECONFIGURE
GO

Because the memory requirements of a SQL Server instance are not the same as those of an equivalent Oracle instance, the SQL Server memory size has to be tuned during a migration. The following Performance Monitor counters can be used to check the amount of memory SQL Server is using:
- SQL Server: Buffer Manager: Total Pages
- SQL Server: Memory Manager: Total Server Memory (KB)
- Process: Working Set
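The SQL Server counters can also be read from inside the server through the sysperfinfo system table in SQL Server 2000. A sketch:

```sql
-- Read the Total Server Memory counter from inside SQL Server.
-- sysperfinfo exposes the same SQL Server performance objects that
-- Performance Monitor displays (the Process counters are not included).
SELECT object_name, counter_name, cntr_value
FROM master.dbo.sysperfinfo
WHERE counter_name = 'Total Server Memory (KB)'
```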

The dbcc memorystatus command can be used to view information about memory allocation. For more information about memory usage and this command, refer to http://support.microsoft.com/?id=271624. The behavior of the Oracle LOCK_SGA initialization parameter can be obtained in SQL Server using the set working set size option. The allocated memory stays in physical memory, which improves performance because swapping is avoided. However, care should be taken that allocating a fixed amount of memory to SQL Server does not impact the needs of other applications running on the same server.

Set the CPU Affinity


SQL Server has the capability of defining CPU affinity, where CPUs can be dedicated to an instance. This can be controlled using the affinity mask configuration setting or Enterprise Manager. The priority boost configuration setting can be used to prioritize threads. SQL Server 2000 SP1 added an IO_affinity_mask switch, which can be used at instance startup to reserve CPUs for handling the disk I/O associated with the instance. Use of the IO_affinity_mask setting is recommended only for computers with more than 16 CPUs. For more information on setting up the I/O affinity mask, refer to http://support.microsoft.com/?kbid=298402.
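As a sketch, dedicating the first four CPUs to the instance with affinity mask looks like the following. The option is a bitmap, so CPUs 0 through 3 correspond to binary 1111, decimal 15; the value is illustrative only:

```sql
EXEC sp_configure 'show advanced options', 1
RECONFIGURE
GO
-- Bind the instance's worker threads to CPUs 0-3 (bitmask 00001111 = 15).
EXEC sp_configure 'affinity mask', 15
RECONFIGURE
GO
```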


Configure the Listener


The Oracle listener can operate on any available port; the well-known default is 1521. A default installation of SQL Server uses port 1433. If not configured manually, a named instance picks an unused TCP port during startup. Unlike Oracle, where a single listener is shared by all instances, each SQL Server instance listens on its own port. Client applications need the SQL Server server name, database name, and port information to configure the data source or the connection string (in non-DSN connections).
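For example, a DSN-less OLE DB connection string can carry the port explicitly after the server and instance name. The server, instance, and database names below are hypothetical:

```
Provider=SQLOLEDB;Data Source=dbserver1\payroll,1433;Initial Catalog=hrdb;Integrated Security=SSPI;
```

The ",1433" suffix plays the role that the listener port plays in an Oracle TNS connect descriptor.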

Migrate the Storage Architecture


This section discusses the physical and logical storage structures found in SQL Server. This discussion is useful in understanding how to configure storage in SQL Server in a manner similar to the original Oracle database. Database data is physically stored in files. However, allocation of this space to the database objects is done in logical units such as blocks, extents, and segments. Figure 5.1 provides a comparison of the storage architecture in Oracle and SQL Server.

Figure 5.1
Mapping Oracle storage structures to SQL Server

The organization of storage and its allocation has great significance for the performance and maintenance of a database. Due to the similarities in the storage architecture, the principles on which the source Oracle database was organized can be carried over into the SQL Server database with relative ease. The following sections discuss each level of the storage hierarchy shown in Figure 5.1.
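To make the mapping in Figure 5.1 concrete, an Oracle tablespace is typically carried over as a SQL Server filegroup. A hypothetical sketch, with illustrative names, paths, and sizes:

```sql
-- Carry an Oracle tablespace (say, HR_DATA with two datafiles) over
-- as a filegroup with two files in an existing database named Hr.
ALTER DATABASE Hr ADD FILEGROUP HrData
GO
ALTER DATABASE Hr
ADD FILE
    (NAME = HrData1, FILENAME = 'E:\SQLData\HrData1.ndf', SIZE = 500MB),
    (NAME = HrData2, FILENAME = 'F:\SQLData\HrData2.ndf', SIZE = 500MB)
TO FILEGROUP HrData
GO
-- Tables and indexes can then be placed on the filegroup, just as
-- object storage was placed in the tablespace, for example:
-- CREATE TABLE employees (...) ON HrData
```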

Blocks
In Oracle, the smallest unit of storage is the data block. All data in the database is retrieved and manipulated in terms of blocks. The SQL Server equivalent of the data block is called the page. In Oracle, the DBA chooses the default block size at database creation from 2 KB, 4 KB, 8 KB, 16 KB, or 32 KB. In SQL Server, the page size is fixed at 8 KB. With Oracle 9i, in addition to the default block size, up to five non-standard (non-default) block sizes can also be used for the tablespaces (and, consequently, the logical storage structures).

The SQL Server page has a composition similar to the Oracle data block. The data page consists of a page header, data rows, and a row offset array (row directory). The header leaves 8060 bytes of usable space for data rows and the row offset array. In Oracle, a data row that is larger than the usable space is broken into several row pieces (row chaining) and stored in multiple blocks. SQL Server does not allow rows larger than 8060 bytes. This restriction, however, does not apply to rows containing large data types, such as text, image, and so on, which are stored differently. Within a SQL Server page, space is managed in a manner similar to Oracle's Automatic Segment Space Management (ASSM) option, where used and free space is tracked using bitmaps.

This has implications for migrating to SQL Server. In Oracle, the choice of block size is driven by the type of database (OLTP or DSS/data warehouse). The advantage of a large block size is mainly I/O performance, because there is less overall cost in setting up fewer I/Os. Comparable I/O performance can be achieved with smaller block sizes by using other techniques, such as multi-block reads (DB_FILE_MULTIBLOCK_READ_COUNT), parallel reads, and read-ahead (prefetch). The maximum I/O size, however, is dependent on the operating system. SQL Server also uses a technique called scatter-gather I/O that enables multiple unrelated pages to be transferred in a single I/O request. Another advantage of a larger block size is a reduced amount of row chaining, which is a factor when tables contain rows longer than can be accommodated in a block. Where an Oracle database used a larger block size, SQL Server may have to read multiple pages instead of one block, but the difference can be compensated for by better hardware and I/O techniques.

Depending on the environment, large block sizes can also affect cache performance. For a fixed amount of RAM, a larger block size reduces the number of blocks held in physical memory. For example, if 1 MB of memory can hold 500 blocks of size 2 KB, it can hold only 125 blocks of size 8 KB. Cache performance is therefore dependent on the density of the block (number of rows per block) and the access pattern (random or sequential). Smaller blocks reduce contention, although the fixed overhead becomes a larger portion of the block.

In the migration, the move from differing block sizes in Oracle to 8 KB pages in SQL Server does not pose any challenge. When migrating from Oracle databases where multiple block sizes are in use, the rows are simply rearranged into 8 KB pages. However, solutions have to be devised to accommodate rows with lengths greater than 8060 bytes. Such solutions are examined later in this chapter.
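Candidate tables for such solutions can be identified on the Oracle side before migration. The following query is only a starting point (avg_row_len is an average, populated only when statistics have been gathered, so it can miss individual long rows); the schema name is taken from the HR example used later in this guide:

SELECT owner, table_name, avg_row_len
FROM dba_tables
WHERE owner = 'HR'
  AND avg_row_len > 8060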

Extents and Segments


Data allocation to schema objects (such as tables and indexes) and system data structures (temporary segments, rollback segments) in Oracle and SQL Server is done in terms of logical extents. Oracle provides parameters for appropriate sizing of the extents. SQL Server, on the other hand, uses fixed-size extents of 8 pages (64 KB). Oracle extents, irrespective of whether they belong to dictionary-managed or locally managed tablespaces, are migrated to fixed 64 KB extents. Extents allocated to Oracle objects are dedicated to the object and do not contain blocks from other objects. This is not always the case in SQL Server. SQL Server has two types of extents: uniform and mixed. In a uniform extent, all pages are allocated to a single object, while mixed extents can hold pages belonging to multiple objects. When a table or index is created in SQL Server, it is initially allocated two pages out of a mixed extent. When the
table or index grows to eight pages, all future allocations use uniform extents. As a result of the fixed-size extents, fragmentation within the filegroup (the SQL Server counterpart of the tablespace) is eliminated. Because an extent is a contiguous allocation of blocks, large extents aid I/O rates when reading large amounts of data. After moving to SQL Server, physical I/O will have to be configured and tuned, as was the case with the change in block sizes. In Oracle, all extents of an object are collectively called a segment. While the concept of a segment existed in SQL Server version 6.5, SQL Server 7.0 and later have no equivalent of the segment. The segment has no bearing on migration or performance.
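After migration, extent allocation and fragmentation for a table can be inspected with DBCC SHOWCONTIG (the table name here is hypothetical):

DBCC SHOWCONTIG ('Employees')

The Extents Scanned and Extent Switches figures in its output indicate how contiguously the 64 KB extents of the table are laid out.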

Tablespaces and Datafiles


Datafiles are used to store persistent data in the database. For ease of management, one or more datafiles can be grouped into logical tablespaces. The SQL Server equivalent of the tablespace is the filegroup. While filegroups are similar in function, their organization differs from that of tablespaces: Oracle tablespaces exist at the instance level, whereas SQL Server filegroups come under, and are associated with, individual databases. Figure 5.2 illustrates the tablespace-to-datafile hierarchy:


Figure 5.2
Schematic mapping of Oracle files and tablespaces to SQL Server


Table 5.2 captures some of the relations between tablespaces in Oracle and databases in SQL Server that are not obvious in Figure 5.2.

Table 5.2: Oracle Tablespaces and SQL Server Functional Equivalents

Oracle                        SQL Server
System tablespace             master database
Temporary tablespace          tempdb database
Undo (Rollback) tablespace    Transaction log
Online Redo log               Transaction log
"App Data" tablespace         "App" database "Data" filegroup
"App Index" tablespace        "App" database "Index" filegroup
N/A                           model database
N/A                           msdb database

The following list describes how Oracle functionality is implemented in SQL Server:
- Each of the SQL Server databases has its own security structures, such as users, privileges (permissions), and roles.
- Each of the SQL Server databases has its own administrative roles that bestow privileges on the database.
- In SQL Server, the system catalog, which is analogous to the Oracle data dictionary, is broken up between the individual databases and the master database.
- In SQL Server, each of the databases has its own transaction log files, which combine the functions of the Oracle online redo logs and rollback segments.
- The tempdb database provides temporary storage for the entire SQL Server instance, just as the temporary tablespace in Oracle is common to the entire Oracle instance.

Tablespaces and filegroups provide the ability to better distribute data across multiple files for the purposes of performance. The grouping also helps ease administration of backup and recovery, maintenance, availability, and so on. Each SQL Server database is created with a primary file belonging to the default primary filegroup. Optionally, secondary datafiles can be added to the primary filegroup, or additional filegroups can be created. The location of the files belonging to a database is recorded in the master database and in the primary file of the database. Data added to the objects of a filegroup is spread proportionally across all files belonging to the filegroup.

A common concern among users creating SQL Server databases is the lack of the multiplexing feature that Oracle provides for control files and redo logs. This is not a concern, because Microsoft recommends the use of striped and mirrored devices (RAID 0+1) for transaction logs, which preserves the level of protection against hardware failure obtained by log multiplexing. Microsoft recommends RAID 0+1 for all database datafiles; if RAID 0+1 cannot be implemented, at a minimum RAID 5 should be used to provide tolerance for hard disk failure. Apart from file systems, SQL Server also supports the use of raw partitions.

Storage Definition for Tables and Indexes


In Oracle, the storage definition is used to specify the characteristics of tables and indexes. The following list provides information on how these characteristics migrate to SQL Server.


- Physical attributes. These include PCTUSED, PCTFREE, INITRANS, and MAXTRANS. Of these parameters, SQL Server offers an equivalent only for PCTFREE, through the FILLFACTOR clause. FILLFACTOR is available for indexes only. A PCTFREE of 10 corresponds to a FILLFACTOR of 90.
- Tablespace. The TABLESPACE clause is used for indexes in the same manner as for tables. The SQL Server equivalent is the ON filegroup_name option. A default filegroup can be specified for each database, and the default filegroup functions much like the default tablespace of an Oracle user. If a filegroup is not specified during index creation, the index is created in the default filegroup for the database.
- Storage attributes. These override the defaults specified at the tablespace level, such as INITIAL, NEXT, MINEXTENTS, MAXEXTENTS, PCTINCREASE, and so on. Because of the fixed (64 KB) extents, these attributes have no relevance in SQL Server.
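For example, an Oracle index created with PCTFREE 10 in an APP_INDEX tablespace might be re-created in SQL Server as follows (table, column, index, and filegroup names are illustrative only):

CREATE INDEX ix_employees_last_name
ON employees (last_name)
WITH FILLFACTOR = 90
ON APP_INDEX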

Migrate System Storage Structures


As has already been presented in this chapter, an Oracle database has the following system structures that have a similar role in a SQL Server database. The configuration of these structures in the Oracle database can be used as a starting point for configuration in SQL Server.
- System tablespace. In Oracle, the system tablespace houses the data dictionary and the system rollback segments. The master database in SQL Server has a similar role. The transaction log, which also holds the rollback information, is a separate file. Also, the data dictionary in the master database does not grow, because user object information is stored in the primary file of the respective user database under the decentralized data dictionary model of SQL Server. The master database also holds some of the information that an Oracle database stores in the control file. Based on these facts, it is appropriate to use the default master database settings.
- Rollback or undo. In Oracle, a separate tablespace exists for holding rollback segments (manual and auto modes) whose size and number depend on the number of user connections and the amount of change being made to the database. These factors should be considered when sizing the transaction log in SQL Server.
- Redo. There are several factors to consider when choosing the size of a transaction log. The source Oracle database has more than one schema that will be migrated into separate SQL Server databases. For the same transaction, the amount of redo and rollback information generated in SQL Server and Oracle differs. The frequency of backup or truncation of the transaction log also influences the size. Hence, prototyping is the best way to size the SQL Server transaction log.
- Temporary. The size of tempdb can initially be set conservatively as compared to its Oracle counterpart, the temporary tablespace. The auto-grow feature should be set to ensure that transactions do not fail for lack of space. Setting very small growth increments, however, will also affect performance because of constant file-growth operations. Temporary space needs are based on the application needs, which may change somewhat as applications are migrated to the SQL Server environment.
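As a sketch, the default data file of tempdb (logical name tempdev) can be resized and given a reasonable growth increment; the sizes shown are illustrative only:

ALTER DATABASE tempdb
MODIFY FILE ( NAME = tempdev, SIZE = 500MB, FILEGROWTH = 50MB )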


6
Developing: Databases Migrating Schemas
Introduction and Goals
The next task is to create additional databases to house the application schema objects that are migrated. Apart from the architectural differences mentioned in Chapter 5, there are also differences in data models between OLTP and DSS systems. OLTP systems have more complex relations, with constraints used to enforce business rules. DSS systems have simpler and far fewer relationships between tables. Despite the differences between systems and schema modeling techniques (such as normalized, star, and snowflake), the implementation of the physical schema uses the same set of objects, such as tables, indexes, and views. As a result, migration of the schemas and data is the same for all these types. As in Chapter 5, the examples used in this chapter focus on OLTP systems, but the guidance is applicable to DSS systems as well.

A Microsoft SQL Server database has the characteristics of an Oracle schema because objects can be created inside the database. In Oracle, the schema and the storage (tablespaces) have independent identities: objects of a schema can be created in different tablespaces, and a single tablespace can accommodate objects from multiple schemas. In this context, SQL Server databases are similar to Oracle tablespaces: an owner can create objects in different databases, and a database can contain objects from different owners. In spite of this fact, the database provides a higher degree of separation of application data and security than schemas provide. SQL Server has been designed for the isolation of administrative duties at the database level. The system (catalog and roles) has been divided, with centralized functions that have instance-wide authority under the master database and database-specific functions under the individual databases.
Creating a SQL Server database for each Oracle schema is the ideal choice for the logical separation of objects based on the business function (application) that the Oracle schemas provide. The schema's objects can then be created inside the database.

Tools are available from Microsoft that can assist in this step of the database migration. The Microsoft SQL Server Migration Assistant (SSMA) provides the Schema and Data Migrator, which specializes in the migration of the Oracle schema to SQL Server. The SSMA also provides the SQL Converter tool, which converts SQL code found in Oracle objects (such as stored procedures) to its T-SQL equivalent. The SSMA tool suite can be downloaded from http://www.microsoft.com/sql/migration. The beta version of SSMA is available as of the date of publishing this solution; version 1.0 of SSMA is slated to be available in June 2005.

Most tools that are employed in the administration of Oracle databases are capable of extracting object definitions. Toad and SQL Navigator from Quest, PL/SQL Developer from Allround Automations, Unicenter SQL-Station from Computer Associates, Rapid SQL from Embarcadero, and Oracle Enterprise Manager from Oracle are examples of such tools. Modeling tools such as AllFusion ERwin from Computer Associates and ER/Studio from Embarcadero can also be employed to reverse engineer the Oracle database.

The goal of this chapter is to show how an Oracle schema and its objects should be migrated to SQL Server. Scripts should be used for performing most of the tasks in this step, because scripts make the migration easier to execute than GUI tools.

Scripting Migrated Schema Objects


The development process is built on the requirement that when the development is complete, the code will be in place for unit testing and implementation. These scripts will allow recreation of the objects in a reliable manner when creating the database in the Stabilizing Phase and the Deploying Phase. There are some basic rules that should be followed when scripting the database for promotion to a staging environment for unit testing:
- Script everything
- Provide support documentation
- Protect the scripts

Each of these basic rules is discussed under the following headings.

Script Everything
Script everything associated with the database implementation. The Generate SQL Script feature of SQL Server Enterprise Manager can be used to script each database object in the migrated database. After the scripts are created, conduct a validation process to verify that the number of objects scripted is the same as the number of objects that exist in the development database. This process is very important because it helps to ensure that the database being promoted to the staging environment is identical to the database in the development environment.
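For example, the object counts in the migrated database can be compared with counts taken from the Oracle data dictionary. A simple count by object type on the SQL Server side:

SELECT type, COUNT(*) AS object_count
FROM sysobjects
WHERE type IN ('U', 'V', 'P', 'TR', 'FN')  -- tables, views, procedures, triggers, functions
GROUP BY type

can be checked against the equivalent Oracle query, SELECT object_type, COUNT(*) FROM dba_objects WHERE owner = 'HR' GROUP BY object_type (the HR owner is taken from the sample migration later in this chapter).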

Provide Support Documentation


Provide support documentation for each script. Comment the code to describe what is happening in the script and include any observations gathered when it was last executed, such as how long execution took and how many rows were affected.

Protect the Scripts


Protecting the scripts is very important for the database implementation. Treat the scripts like production objects at this point. To ensure that the appropriate scripts are used when the database is promoted between development, staging, and production environments, keep the scripts in a secured directory or under version control. Versioning is very important because an Oracle to SQL Server migration is not a mere mapping of objects from one database to another. The differences between Oracle and SQL Server will require that some of the SQL statements be rewritten for the new environment. These changes in SQL will also prompt changes to the database objects and their design. These changes will be driven by the application development (migration) process, not by the database migration process described here. Only the development and deployment teams should have access to the scripts.

Migrate the Schema


This section discusses the necessary steps in migrating the schema owner and setting up the storage in preparation for migrating the schema objects and their data. The high-level steps for migrating the storage architecture of a schema are:
1. Map the storage architecture.
2. Create a database for the schema.
3. Create filegroups for the tablespaces.
4. Add datafiles to filegroups.
Each of these four high-level steps is discussed in detail under the following subheadings.

Map the Storage Architecture


The first step in migrating an Oracle schema is to understand the storage architecture and how its components map to their equivalents in SQL Server. Information on the various characteristics of the tablespaces and datafiles in which the Oracle schema is stored has to be gathered. There are several tools, from Oracle as well as third parties, that are used in administering Oracle databases, and any of them can be employed to gather information on the tablespaces and datafiles. In the example migration provided at the end of this section, methods for capturing the requisite data using only SQL are shown.


Figures 6.1 and 6.2 provide a view of how the storage objects will align in Oracle and SQL Server.

Figure 6.1
Oracle schema illustrated

Figure 6.2
SQL Server schema equivalent


Start by compiling a list of tablespaces that contain objects belonging to the schema, and gather the following related information:
- Tablespace name.
- Status. The status of an Oracle tablespace may be ONLINE, OFFLINE, or READ ONLY. If a tablespace is OFFLINE, a decision has to be made as to whether it needs to be migrated. Data in an offline tablespace cannot be accessed, so the tablespace has to be brought online for migration.
- Tablespace contents. Only tablespaces whose contents attribute is set to PERMANENT have to be migrated.
- Logging attribute. Tablespaces may be set to LOGGING or NOLOGGING, which specifies the default characteristic for bulk operations against the objects in the tablespace.

Note In SQL Server, the attributes of status, tablespace contents, and logging are set at the database level and not the filegroup level. If an Oracle schema has tablespaces with more than one value for these attributes, more than one database will have to be created in SQL Server. For example, if a schema uses two tablespaces (one is ONLINE and the other READ ONLY), then the two tablespaces have to be mapped to two separate databases in SQL Server. The two databases can then be set to ONLINE and READ_ONLY, respectively, using the state options available in SQL Server.

Gather the following information for each of the datafiles in the tablespaces:
- File name
- File size
- File autoextensibility
- Maximum file size
- File auto extension increment

From the information gathered about the tablespaces and their datafiles, produce a map of the databases, filegroups, and datafiles to be created in SQL Server, as shown in Figure 6.2.

Create Databases for the Schema


To migrate the schemas from Oracle, databases are created in SQL Server to hold the schema objects and their data. This section provides details on how to create databases in SQL Server.

To an Oracle DBA, creating a database means creating an entire database system that contains control files, redo logs, a data dictionary, and a temporary tablespace. In SQL Server, these tasks are accomplished as part of the installation process. Hence, creating a database in SQL Server simply means adding a user database to the already existing system databases.

Databases can be created either through Enterprise Manager or by using the CREATE DATABASE T-SQL statement. Only members of the sysadmin and dbcreator fixed server roles, or users who have been granted the CREATE DATABASE permission, can create a new database. By default, the creator of the database becomes the owner of the database.

The following basic characteristics of the database have to be decided before creating a database:


- Database name. Naming is constrained by the same rules as identifiers. A meaningful name can be used and need not be the same as the name of the Oracle schema being migrated.
- Database owner. By default, the user who created the database becomes the database owner (dbo). The owner can be changed using sp_changedbowner. To mimic the Oracle schema, a login with the same name as the schema can be made the database owner. A more detailed account of the implications of database owner and database object owner is provided in the "Qualifiers and Name Resolution" section later in this chapter.
- Filegroups. These are the equivalents of tablespaces. A default primary filegroup is created during database creation.
- Primary file. Every database is created with a primary file belonging to the default primary filegroup, identifiable by the .mdf file name extension. The primary file contains startup information for the database and is used to store user data just as any other datafile. Only its size and growth characteristics can be changed.
- Secondary files. Additional files can be created in a database and associated with the primary filegroup or any additional filegroup. The recommended file name extension for secondary files is .ndf.
- Transaction log. Holds the rollback and redo information required to recover the database. Note that, unlike Oracle, where rollback segments and redo logs are central to the instance, every database has at least one transaction log. Transaction logs are not part of any filegroup.

There are two ways to create a new database using Enterprise Manager: the Database Properties dialog and the Create Database Wizard. An example of each procedure is provided here.

To create a database using the Database Properties dialog, follow these steps:
1. Right-click Databases, and then click New Database. Alternatively, right-click the server (Server Name\Instance Name), point to New, and then click Database.
2. In the Database Properties window, enter the database name in the General pane, the datafile and filegroup configuration in the Data Files pane, and the transaction log configuration in the Transaction Log pane.

To create a new database using the Create Database Wizard, follow these steps:
1. Expand a server group, and then expand the server in which to create a database.
2. On the Tools menu, click Wizards.
3. Expand Database.
4. Double-click Create Database Wizard.
5. Complete the steps in the wizard.

SQL Server also offers the CREATE DATABASE statement for creating databases.


To create a database using the T-SQL CREATE DATABASE statement, use the following syntax:
CREATE DATABASE database_name
[ ON
    [ < filespec > [ ,...n ] ]
    [ , < filegroup > [ ,...n ] ]
]
[ LOG ON { < filespec > [ ,...n ] } ]
[ COLLATE collation_name ]
[ FOR LOAD | FOR ATTACH ]

< filespec > ::=
[ PRIMARY ]
( [ NAME = logical_file_name , ]
    FILENAME = 'os_file_name'
    [ , SIZE = size ]
    [ , MAXSIZE = { max_size | UNLIMITED } ]
    [ , FILEGROWTH = growth_increment ] ) [ ,...n ]

< filegroup > ::=
FILEGROUP filegroup_name < filespec > [ ,...n ]
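The following statement sketches a database with a secondary filegroup and an explicit log file; all names, paths, and sizes are illustrative and should be derived from the storage map produced earlier:

CREATE DATABASE App
ON PRIMARY
   ( NAME = App_data,
     FILENAME = 'D:\SQLData\App_data.mdf',
     SIZE = 1000MB,
     MAXSIZE = 2000MB,
     FILEGROWTH = 100MB ),
FILEGROUP AppIndex
   ( NAME = App_indx,
     FILENAME = 'E:\SQLData\App_indx.ndf',
     SIZE = 500MB,
     MAXSIZE = 1000MB,
     FILEGROWTH = 50MB )
LOG ON
   ( NAME = App_log,
     FILENAME = 'F:\SQLLogs\App_log.ldf',
     SIZE = 200MB,
     FILEGROWTH = 50MB )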

Configure the Databases


SQL Server is very similar to Oracle in its architecture and can be similarly configured. In the discussion of system configuration options, several options that apply to the individual databases but are set at the instance level are discussed. Those options apply uniformly to all databases in the instance. In contrast, the following five sets of options are set at the database level and apply only to the targeted database:
- Auto options
- Cursor options
- Recovery options
- SQL options
- State options

These options are set using the ALTER DATABASE statement. A subset of the available options can be set using Enterprise Manager. For a description of each of these options, refer to http://msdn.microsoft.com/library/default.asp?url=/library/en-us/createdb/cm_8_des_03_6ohf.asp.

If the source Oracle database is in NOARCHIVELOG mode, choose the Simple Recovery model for the database; if the Oracle database is in ARCHIVELOG mode, use the Full Recovery model. If an Oracle tablespace is in read-only mode, the corresponding database created in SQL Server for that tablespace has to be set to read-only mode. Similarly, if the Oracle tablespace is offline, the corresponding database can be taken offline after the migration is complete. If logging has been turned off for the Oracle tablespace, set the recovery model for the corresponding database to Bulk-Logged. An overview of the SQL Server recovery models is available at http://msdn.microsoft.com/library/default.asp?url=/library/en-us/dnsqlmag2k/html/dbRecovery.asp.
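For example, the tablespace attributes discussed above map to ALTER DATABASE SET options such as the following (the database name App is hypothetical):

-- Source database runs in ARCHIVELOG mode
ALTER DATABASE App SET RECOVERY FULL
-- Source tablespace was READ ONLY
ALTER DATABASE App SET READ_ONLY
-- Source tablespace was OFFLINE
ALTER DATABASE App SET OFFLINE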


Create Filegroups for the Tablespaces


Filegroups in SQL Server are similar to tablespaces in Oracle; they are used to logically group storage. When a database is created in SQL Server, it has one filegroup by default. Based on the number of tablespaces that the schema uses, additional filegroups have to be created in the newly created database.

Every database is created with a default primary filegroup that cannot be renamed or dropped. As a result, one of the tablespaces will have to map to the primary filegroup during migration. Additional filegroups, called secondary or user filegroups, can be created in any database with user-specified names. In Oracle, tablespaces are created as locally managed or dictionary managed, and the type of tablespace determines what storage parameters can be used. Creating filegroups is similar to creating tablespaces in Oracle, except that a filegroup is added to a specific SQL Server database. Datafiles are added separately from the definition of a filegroup.

The following two options are available for creating a filegroup.

To create a filegroup using Enterprise Manager, follow these steps:
1. Expand Databases, right-click the database to which the filegroup has to be added, and then click Properties.
2. In the Database Properties window, click the Filegroups pane.
3. Type the filegroup name in the next available empty line that follows the filegroup listing.

To add a filegroup to a database using T-SQL, execute the following statement:
ALTER DATABASE database_name ADD FILEGROUP filegroup_name

Add Datafiles to Filegroups


Storage has to be allocated in the SQL Server database to meet the requirements of the schema being migrated. Datafiles are added to filegroups much as datafiles are added to tablespaces in Oracle, and the process of adding one is similar. The characteristics that can be set relate to auto-growth and are similar in function to Oracle's AUTOEXTEND feature. An additional characteristic that does not exist in Oracle is the logical file name. Datafiles can be added to the primary filegroup as well as any secondary filegroup, either through Enterprise Manager or by using T-SQL.

To add a datafile by using Enterprise Manager, follow these steps:
1. Expand Databases, right-click the database to which the datafile has to be added, and then click Properties.
2. In the Database Properties window, click the Data Files pane.
3. Add the file information in the next available empty line in the file listing. In the filegroup column, select the filegroup name from the drop-down list. Also set the growth characteristics for the file.


To add a datafile to the primary filegroup or a secondary filegroup using T-SQL, use the following syntax:
ALTER DATABASE database
ADD FILE < filespec > [ ,...n ]
[ TO FILEGROUP filegroup_name ]

where < filespec > ::=
( NAME = logical_file_name
    [ , FILENAME = 'os_file_name' ]
    [ , SIZE = size ]
    [ , MAXSIZE = { max_size | UNLIMITED } ]
    [ , FILEGROWTH = growth_increment ] )

The TO FILEGROUP clause is not required to add datafiles to the primary filegroup.
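For example, a datafile might be added to a secondary filegroup as follows (names, path, and sizes are illustrative only):

ALTER DATABASE App
ADD FILE
   ( NAME = App_indx2,
     FILENAME = 'E:\SQLData\App_indx2.ndf',
     SIZE = 50MB,
     MAXSIZE = 1000MB,
     FILEGROWTH = 50MB )
TO FILEGROUP AppIndex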

Add Transaction Logs


In Oracle, information about transactions and the changes they make is recorded in the redo logs, which are common to the entire instance. In SQL Server, transactional changes are logged in the transaction log of the database whose objects are involved in the transaction. A database is created with a single default transaction log. The default transaction log has to be sized, or new logs added, based on the update activity against the database. Transaction logs can be added using a process similar to that used for adding datafiles, as shown in the following procedures.

To add a transaction log file by using Enterprise Manager, follow these steps:
1. Expand Databases, right-click the database to which the transaction log has to be added, and then click Properties.
2. In the Database Properties window, click the Transaction Log pane.
3. Add the transaction log file information in the next available empty line in the file listing. Also set the growth characteristics.

To add a transaction log file to a database using T-SQL, use the following syntax:
ALTER DATABASE database
ADD LOG FILE < filespec > [ ,...n ]

where < filespec > ::=
( NAME = logical_file_name
    [ , FILENAME = 'os_file_name' ]
    [ , SIZE = size ]
    [ , MAXSIZE = { max_size | UNLIMITED } ]
    [ , FILEGROWTH = growth_increment ] )
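For example (names, path, and sizes are illustrative only):

ALTER DATABASE App
ADD LOG FILE
   ( NAME = App_log2,
     FILENAME = 'F:\SQLLogs\App_log2.ldf',
     SIZE = 100MB,
     FILEGROWTH = 50MB )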

Sample Schema Migration


This section describes an example schema migration using the four steps described earlier. The HR schema that is bundled with the Oracle installation software is used to illustrate the steps in migrating a schema. The storage characteristics of the tablespaces used by the HR schema have been modified to demonstrate some of the steps that would be common in migrations. The example is conducted using only the features and tools offered by Oracle and SQL Server; no third-party tool is employed.

1. Extract the schema storage requirements by following these substeps:

   a. Produce a listing of the tablespaces in which the schema has objects by using the following statement:
SELECT DISTINCT tablespace_name "TABLESPACE NAME" FROM dba_segments WHERE owner = 'HR'

The owner in the statement is the name of the schema to be migrated. The output is as follows:
TABLESPACE NAME
------------------------------
EXAMPLE
INDX

b. Obtain the characteristics of each of the tablespaces using the following statement:
SELECT tablespace_name, status, contents, logging FROM dba_tablespaces WHERE tablespace_name IN (SELECT DISTINCT tablespace_name FROM dba_segments WHERE owner = 'HR')

The owner in the statement is the name of the schema to be migrated. The output is as follows:
TABLESPACE_NAME   STATUS   CONTENTS    LOGGING
----------------------------------------------
EXAMPLE           ONLINE   PERMANENT   LOGGING
INDX              ONLINE   PERMANENT   LOGGING

c. Find the datafiles associated with the tablespaces using the following statement:
SELECT tablespace_name "TS NAME", file_name "FILE NAME",
       bytes/1024/1024 "SIZE MB", autoextensible,
       maxbytes/1024/1024 "MAX SIZE MB",
       increment_by*8192/1024/1024 "INCR SIZE MB"
FROM dba_data_files
WHERE tablespace_name IN
  (SELECT DISTINCT tablespace_name FROM dba_segments WHERE owner = 'HR')

The owner in the statement is the name of the schema to be migrated. The output is as follows:
TS NAME   FILE NAME                             SIZE   AUTO   MAX SZ   INCR
---------------------------------------------------------------------------
EXAMPLE   /u02/oradata/oracle92/example01.dbf   1000   YES    1000     100
EXAMPLE   /u02/oradata/oracle92/example02.dbf    100   YES    1000     100
INDX      /u03/oradata/oracle92/indx01.dbf      1000   YES    1000      50
INDX      /u03/oradata/oracle92/indx02.dbf        50   YES    1000      50

Note The output has been reformatted to fit the page.

Figures 6.1 and 6.2 provide schematics for the hierarchical organization of storage in Oracle and SQL Server. Terminologies such as schema, database, tablespace, filegroup, datafile, and transaction log are used there. These figures are modified in Figure 6.3 and Figure 6.4 to show instances of those terminologies based on the HR schema:


Figure 6.3
Oracle storage map for sample HR schema

Figure 6.4
Equivalent SQL Server storage map for sample HR schema

Note Instead of putting the HR_DATA files under the required PRIMARY filegroup, a separate HR_DATA filegroup can be created. This makes the correlation between filegroups and datafiles easier to follow.
2. Create and configure a new database in the SQL Server instance to hold the schema objects.
a. Create the database.


Developing: Databases Migrating Schemas Create the database with storage attributes selected using the information gathered in step 1. The location of the files is dependent on the file system of the target Windows server on which SQL Server is installed. Create the database using the following commands:
USE master
GO
CREATE DATABASE HRDATA
ON PRIMARY
( NAME='HRDATA_01',
  FILENAME='E:\mssql\Mssql$corp1\data\hr_data_01.mdf',
  SIZE=1000MB,
  FILEGROWTH=0)
LOG ON
( NAME='HRDATA_LOG_01',
  FILENAME='G:\mssql\Mssql$corp1\log\hr_log_01.ldf',
  SIZE=10MB,
  MAXSIZE=100MB,
  FILEGROWTH=10)
GO

All filegroups and files (data and transaction log) can be added at the time of creation of the database. The following is the syntax for doing this:
USE master
GO
CREATE DATABASE HRDATA
ON PRIMARY
( NAME='HRDATA_01',
  FILENAME='E:\mssql\Mssql$corp1\data\hr_data_01.mdf',
  SIZE=1000MB,
  FILEGROWTH=0),
( NAME='HRDATA_02',
  FILENAME='E:\mssql\Mssql$corp1\data\hr_data_02.ndf',
  SIZE=100MB,
  MAXSIZE=1000MB,
  FILEGROWTH=100MB),
FILEGROUP HR_INDX
( NAME='HRINDX_01',
  FILENAME='F:\mssql\Mssql$corp1\data\hr_indx_01.ndf',
  SIZE=1000MB,
  FILEGROWTH=0),
( NAME='HRINDX_02',
  FILENAME='F:\mssql\Mssql$corp1\data\hr_indx_02.ndf',
  SIZE=100MB,
  MAXSIZE=1000MB,
  FILEGROWTH=50MB)
LOG ON
( NAME='HRDATA_LOG_01',
  FILENAME='G:\mssql\Mssql$corp1\log\hr_log_01.ldf',
  SIZE=10MB,
  MAXSIZE=100MB,
  FILEGROWTH=10)
GO

In this example, only the required primary filegroup with a primary file and the transaction log are created as part of the database creation, and additional filegroups and files are added in subsequent steps to enable demonstration of functions such as adding filegroups and data files. b. Configure the database. The mode for the Oracle database can be found using the following query:
SELECT log_mode FROM v$database

If the database is in ARCHIVELOG mode, then set the database to the Full Recovery model; if it is in NOARCHIVELOG mode, then set it to the Simple Recovery model. If the tablespace corresponding to the database is in NOLOGGING mode, set the database to the Bulk-Logged Recovery model. For more information on the recovery models available in SQL Server, refer to http://msdn.microsoft.com/library/default.asp?url=/library/en-us/adminsql/ad_bkprst_4l83.asp. If the tablespace is in read-only or offline mode, set the database to the respective mode after migration of the schema and data is complete. 3. Add secondary filegroups. The HR_INDX filegroup can be added using the following commands:
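The recovery model is set per database with an ALTER DATABASE statement. A brief sketch using the sample HRDATA database (choose FULL, BULK_LOGGED, or SIMPLE according to the mapping above):

```sql
-- Set the recovery model for the HRDATA database (SQL Server 2000).
ALTER DATABASE HRDATA SET RECOVERY FULL
GO
-- Verify the current setting:
SELECT DATABASEPROPERTYEX('HRDATA', 'Recovery')
```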
USE master
GO
ALTER DATABASE HRDATA
ADD FILEGROUP HR_INDX
GO

4. Add secondary datafiles. The Filename, Current Size, Autoextensibility, Max Size, and Increment Size values retrieved from the Oracle database are used in creating similar datafiles for the HRDATA database. The primary file has already been created to match the first file in the HR_DATA tablespace. Create additional datafiles for the filegroups using the following commands:
ALTER DATABASE HRDATA
ADD FILE
( NAME='HRDATA_02',
  FILENAME='E:\mssql\Mssql$corp1\data\hr_data_02.ndf',
  SIZE=100MB,
  MAXSIZE=1000MB,
  FILEGROWTH=100MB)
GO
ALTER DATABASE HRDATA
ADD FILE
( NAME='HRINDX_01',
  FILENAME='F:\mssql\Mssql$corp1\data\hr_indx_01.ndf',
  SIZE=1000MB,
  FILEGROWTH=0),
( NAME='HRINDX_02',
  FILENAME='F:\mssql\Mssql$corp1\data\hr_indx_02.ndf',
  SIZE=100MB,
  MAXSIZE=1000MB,
  FILEGROWTH=50MB)
TO FILEGROUP HR_INDX
GO

In this example, the focus was restricted to migrating a single schema. When migrating an entire database consisting of multiple schemas, the storage decisions will have to be made using the best practices in storage allocation that were followed in Oracle: place temporary files on separate devices, separate data and log files, separate data and index files, distribute I/O load across all available devices, and move tables with high activity onto separate devices. For descriptions of best practices for the storage component of SQL Server, refer to the section titled "Optimizing the Storage Component" at http://www.microsoft.com/resources/documentation/sql/2000/all/reskit/en-us/part10/c3361.mspx. Multiple schemas commonly share tablespaces in Oracle databases. Conceptually, SQL Server databases can be viewed more like Oracle tablespaces, and objects from multiple schemas can be created in the same database. Figure 6.5 shows the relation between schemas and tablespaces, where a schema's objects can be in multiple tablespaces and a tablespace can have objects from multiple schemas.


Figure 6.5
Sharing of tablespaces/databases by multiple schemas/owners


Developing: Databases Migrating Schemas

Migrate the Schema Objects


In the "Migrate the Schema" section earlier in this chapter, the focus was limited to the schema and its storage structures. This focus culminated in the creation of databases with appropriate filegroups and datafiles. The database merely forms the shell in which the schema objects will be enclosed. In this section, the following high-level steps in migrating the schema objects are taken. These steps cover tasks related to schema objects and security.
1. Create the schema owner.
2. Create the schema objects.
These steps are discussed in detail under the following subheadings.

Create the Schema Owner


A schema owner in Oracle is a user with privileges to create objects. This is true in SQL Server as well. A user will have to be created in SQL Server in the database in which the schema's objects will be created. The user is then given privileges in the database to create objects and populate the data. Users of Oracle and SQL Server databases can be classified as administrative users, application users, and schema owners. Administrative users are users with special roles, such as database administrator and security administrator. Application users are users who manipulate data in the owning user's tables. Schema owners are users who create and maintain objects related to an application.

The basics for the creation of all the three types of users are the same and are discussed in detail here. This knowledge is useful in creating application users and administrative users in Chapter 7, "Developing: Databases Migrating the Database Users." A discussion of user accounts for each of the categories of users is presented in the rest of this section. This discussion covers the differences in the logins between Oracle and SQL Server and the associated authentication and password functionality.

Accounts
In terms of logins, all three types of users are created equal and are differentiated only by the privileges they are granted. However, in keeping with the practice of isolation and autonomy of user databases, logins are implemented a little differently in SQL Server. It is important to understand this difference before starting the migration of schemas and users. SQL Server has two levels (not types) of logins. In addition to the instance-level login, SQL Server requires a separate database-level account (a user) for each database that the login needs to connect to. To avoid confusion while referring to these various logins and accounts, this guide uses the following terminology commonly employed for the various logins in Oracle and SQL Server:

Oracle instance level: user or username
SQL Server instance level: login (login ID)
SQL Server database level: user


Logins provide access to the instance of SQL Server, whereas the user account controls privileges to objects inside the database. Figure 6.6 illustrates these relationships.

Figure 6.6
Migration of schema owner security from Oracle to SQL Server

The migration of a schema owner from Oracle to SQL Server requires a login be created at the instance level and a user be created at the database level. The impact of this architectural change is observed in how objects are qualified. In Oracle, a schema object is identified in SQL statements using schema.object_name. While migrating to SQL Server the schema object has to be fully qualified as database.owner.object_name. Even though there is no difference between the login aspects of the different types of users, only the schema owners and their migration are discussed here. A discussion of application users and administrative users is postponed to the next task of the database migration, migrating the user, which is discussed in Chapter 7, "Developing: Databases Migrating the Database Users."

Authentication
Oracle offers several options for the authentication of users. The two popular methods in use are authentication by the database and authentication by the operating system. In SQL Server, the database mode is called SQL Server Authentication Mode and the operating system mode is called Windows Authentication Mode. The database authentication modes in Oracle and SQL Server are closely compatible and use a user name and password pair. The operating system authentication is quite different between Oracle and SQL Server. Oracle's operating system mode can only authenticate users with local accounts on UNIX servers. Windows authentication for SQL Server is actually performed by the domain and not the local account on the Windows server.

Password Management
The Oracle RDBMS also provides password management functions, such as account locking, password lifetime and expiration, password history, and password complexity verification. The SQL Server RDBMS does not provide these services; Windows security is used to provide these features. Hence the migration of Oracle user names to SQL Server logins and users depends on the type of authentication in use as well as the requirements of password management. Table 6.1 shows the migration options for Oracle logins based on authentication mode and the requirements on password management functionality.
Table 6.1: Login Migration Based on Authentication Requirements

Oracle Authentication Mode   Oracle Password Management   SQL Server Authentication Mode
Database                     None                         Database
Database                     Required                     Windows
Operating system             N/A                          Windows
As a practice, the schema owner login should not be used for connection by the application instance (three-tier) or the application users (two-tier and three-tier). Hence a SQL Server authenticated login would be appropriate. However, for clients who restrict authentication to Windows mode only because of security concerns, a domain login should be created for the schema owner. Depending on the security model, either Windows users or Windows groups can be used; review the guidelines in the "SQL Server 2000 SP3 Security Features and Best Practices" white paper, available at http://www.microsoft.com/sql/techinfo/administration/2000/security/securityWP.asp, to assist with implementation. The following SQL statement can be used to extract the characteristics of the schema owner account in the Oracle database:
SELECT du.username,
       DECODE(du.password,'EXTERNAL','EXTERNAL','DB') "AUTHENTICATION MODE",
       du.default_tablespace, du.temporary_tablespace,
       dp.resource_name, dp.limit
FROM dba_users du, dba_profiles dp
WHERE du.profile = dp.profile
AND dp.resource_type = 'PASSWORD'

SQL Server does not have a CREATE USER statement. There are two system stored procedures to add logins for the two modes of authentication, and a separate stored procedure to add users at the database level. A new login can be added to SQL Server using either Enterprise Manager or the system stored procedures.
To add a new login to a SQL Server instance using Enterprise Manager:
1. Expand the SQL Server Group and expand the Server (Server Name\Instance Name).
2. Expand Security, right-click Logins, and then click New Login.
3. Provide the login name and authentication method in the General pane.
System and database level privileges can be granted through fixed roles under the Server Roles pane. Object-level privileges can be granted either directly or through user roles in the Database Access pane. If the user is to be authenticated by the domain, the correct domain should be picked using the drop-down menu under Windows Authentication in the General pane. If domain authentication is used, the name of the login is in the form Domain name\Domain login name. The default database that the user will connect to when connecting to SQL Server can be set under the Defaults heading in the General pane. This is different from the default tablespace setting for a user in Oracle.


The new user can be granted privileges on specific databases in the SQL Server instance by checking the appropriate database in the Database Access pane. This action also creates user accounts of the same name in the respective databases. For a schema owner, the db_owner role has to be granted for the privilege of creating objects in the target database. The db_owner role has complete administrative authority over the database and can be considered similar to the Oracle dba role, but restricted to a single SQL Server database. To give the SQL Server database object owner the equivalent of the resource role in Oracle, which grants DDL privileges, the database role db_ddladmin can be used. SQL Server also has the CREATE TABLE, CREATE TRIGGER, CREATE VIEW, and CREATE PROCEDURE statement permissions that can be used to further curtail the capabilities of the schema owner. To add a new Windows-authenticated login to a SQL Server instance using T-SQL, use the following syntax:
sp_grantlogin [ @loginame = ] 'login_name'

where login_name is of the form domain_name\domain_login_name.
To add a new database-authenticated login to a SQL Server instance using T-SQL, use the following syntax:
sp_addlogin [ @loginame = ] 'login_name' [ , [ @passwd = ] 'password' ] [ , [ @defdb = ] 'database_name' ] [ , [ @encryptopt = ] 'encryption_option' ]

where database_name specifies the database the login connects to after logging in (the default database). While passwords are encrypted in SQL Server by default, the option exists to skip encryption to allow custom password encryption by the application using a different algorithm. A user account should be created separately for the login in the default database. To create a user account in a SQL Server database using T-SQL, use the following syntax:
sp_grantdbaccess [ @loginame = ] 'login_name' [ , [ @name_in_db = ] 'user_name' ]
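Putting the pieces together, a sketch of creating a SQL Server-authenticated schema owner for the sample HRDATA database (the login name and password are placeholders; sp_addrolemember grants the db_owner role discussed above):

```sql
-- Create a SQL Server-authenticated login for the schema owner.
-- The login name and password are illustrative only.
EXEC sp_addlogin @loginame = 'HR', @passwd = 'StrongPassword1', @defdb = 'HRDATA'
GO
USE HRDATA
GO
-- Create the database-level user account for the login.
EXEC sp_grantdbaccess @loginame = 'HR', @name_in_db = 'HR'
GO
-- Grant the db_owner role so the schema owner can create objects.
EXEC sp_addrolemember @rolename = 'db_owner', @membername = 'HR'
GO
```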

The name chosen for the user account can be different from that for the login account. Some of the other characteristics associated with a user or schema owner in Oracle that need to be addressed are:
Default tablespace. In SQL Server, a default filegroup can be set for each database, which has the same function and effect as the Oracle default tablespace, using the following syntax:
ALTER DATABASE database_name MODIFY FILEGROUP filegroup_name DEFAULT

Temporary tablespace. By default, all users of SQL Server use the tempdb database.
Tablespace quota. Quotas cannot be set in SQL Server at the instance, database, or filegroup level.


Create the Schema Objects


This section discusses the migration of the schema objects from Oracle to SQL Server. The following is a complete list of objects that are classified by Oracle as schema objects:
Tables
Clusters
Object Tables
Index-organized Tables
Constraints
Triggers
Indexes
Views
Object Views
Functions
Stored Procedures
Packages
Synonyms
Sequences
Database Links
Object Types

Table 6.2 provides a high-level view of how the Oracle schema objects map to SQL Server.
Table 6.2: SQL Server Objects that Replace Oracle Schema Objects

Oracle                    SQL Server
Table                     Table
Cluster                   Table
Object Table              N/A
Index-organized Table     Table (with clustered index)
Constraints               Constraints
Triggers                  Triggers
Index                     Index
View                      View
Object View               N/A
Synonym                   View
Sequence                  Identity
Database Link             Linked Server
Object Types              N/A
Procedure                 Stored Procedure
Function                  Function
Package                   Stored Procedure


Some of these objects fall under broader classifications, such as tables and views. While the discussion of some of the objects is beyond the scope of this guide, the following are covered in this chapter:
Tables
Comments
Constraints
Triggers
Views
Indexes
Stored Programs, including functions, stored procedures, and packages
Objects not found in SQL Server, including sequences, synonyms, and database links

The data definition language (DDL) for defining objects is in accord with ANSI SQL standards. Because both Oracle and SQL Server maintain compliance with SQL-92 and SQL-99 standards, the syntax for defining objects is very similar.

Identifiers and Naming


Oracle object and column names are generally not case-sensitive. However, no assumptions should be made, because they can be forced to be case sensitive by delimiting them (enclosing them in double quotation marks). For example, the name TRANSACTION_DATE can be made case sensitive by writing it as "Transaction_Date". Quotation marks can also be used to create irregular identifiers such as "Transaction Date" (with a blank space). Similarly, object and column names in SQL Server are not case-sensitive by default, but this default behavior can be changed by modifying configuration settings. Nonstandard or delimited identifiers can be created just as in Oracle, using either quotation marks (" ") or brackets ([ ]). Read the "Using Identifiers" section in SQL Server Books Online for rules about the construction and use of regular and delimited identifiers. Oracle table and column names are stored in data dictionary tables as uppercase strings unless forced to be case sensitive using delimiters. In contrast, the default SQL Server behavior is to store them in the case used when they were created. Because scripts for use with Oracle are written with such expectations, it is recommended you use uppercase identifiers while creating the objects in SQL Server (without forcing case by using delimiters).
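To illustrate (a hypothetical sketch; the table and column names are invented), delimited identifiers behave comparably in both products:

```sql
-- Regular identifier: resolved case-insensitively on a default
-- (case-insensitive) SQL Server installation.
CREATE TABLE TRANSACTION_LOG (TRANSACTION_DATE DATETIME)

-- Delimited identifiers permit otherwise-illegal names, such as names
-- containing a space. Both quoted and bracketed forms are accepted.
CREATE TABLE "Daily Log" ([Transaction Date] DATETIME)
CREATE TABLE [Archive Log] ([Log Entry] VARCHAR(100))
```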

Qualifiers and Name Resolution


The names of objects are influenced by uniqueness requirements. Oracle object names have to be unique for a schema. In SQL Server, the same owner can own objects of the same name in two different databases, that is, the combination of owner.table_name needs to be unique only within a database. Hence objects are qualified in Oracle as
[schema.]object_name

whereas in SQL Server the complete qualifier is


[[database.]owner.]object_name

Rules have been established in both Oracle and SQL Server for resolving an object_name that is not qualified. In Oracle, the resolution is governed by synonyms and the concept of namespaces. Tables, views, snapshots, sequences, synonyms, procedures, functions, and packages are in a single namespace. Triggers, indexes, and clusters each have their own individual namespace. The following order is used to resolve an object that falls in the table namespace:
1. Table namespace of the current user's schema
2. Private synonym in the current user's schema
3. Public synonym
The order is different in SQL Server, and resolution occurs as:
1. Current user's schema in the current database (set with the USE statement)
2. Database owner dbo of the current database
These relationships have to be taken care of during the migration of the users and applications. Because a user can own objects of the same name in multiple databases and different users can have objects of the same name in the same database, it is recommended that both database name and owner name be used to qualify objects in SQL Server. Changing object identifiers to meet SQL Server standards can affect the applications using them. However, changing the names of certain objects, such as indexes and constraints, could be transparent to the business applications. Given the rules for naming in Oracle and SQL Server, there is no need to change the names of any of the identifiers.
Note It is recommended that all objects in the migrated SQL Server database be qualified by database and owner because the rules for resolving names could otherwise lead to the wrong object.
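For example (a sketch using the sample HR objects from this chapter; the EMPLOYEES table is assumed for illustration), a fully qualified reference sidesteps the resolution rules entirely:

```sql
-- Unqualified: resolved first against the current user's schema,
-- then against the dbo owner of the current database.
SELECT * FROM EMPLOYEES

-- Fully qualified as database.owner.object_name: no ambiguity.
SELECT * FROM HRDATA.HR.EMPLOYEES
```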

Working with Data Types


Oracle and SQL Server offer two types of data types: native data types and user-defined data types. The data types that are provided by the DBMS are called native data types or system data types. There are four basic classes of data: character, numeric, datetime, and binary. In addition, both Oracle and SQL Server have a few data types that are unique to each system. Tables 6.3 through 6.5 provide a complete mapping of Oracle data types to SQL Server data types that you should use when migrating. Care has been taken to suggest the closest possible data type, both in terms of the type of data being considered and the scale or size of the data that has to be accommodated. A bad choice in data type can lead to many problems during data migration. The choice has to be driven by the definition in the source table and not by the data in the table. Even when data migration is successful, the application may fail some time in the future because the table cannot handle data that it was originally designed for. SQL Server data types can comfortably provide a close equivalent of the Oracle data types, and oversizing is not recommended because it increases row size and storage requirements and can affect performance.

Character or Alphanumeric Data


Note that Oracle and SQL Server differ in the parameter used to define Char (NChar) and Varchar (NVarchar) columns. For example, when CHAR(n) is used to specify a column data type in Oracle, n represents the number of characters, while in SQL Server it represents the number of bytes.


Table 6.3 provides the SQL Server data types to use when migrating Oracle character data.
Table 6.3: SQL Server Equivalent for Oracle Character-based Data Types

Oracle Data Type   Max Size (Bytes)   SQL Server Data Type   Max Size (Bytes)
Char               2000               Char                   8000
NChar              2000               NChar                  4000
Varchar            4000               Varchar                8000
NVarchar           4000               NVarchar               4000
Varchar2           4000               Varchar                8000
NVarchar2          4000               NVarchar               4000
LONG               2^31               Text                   2^31 - 1
CLOB               2^32               Text                   2^31 - 1
NCLOB              2^32               NText                  2^30 - 1

Numeric Data
Oracle has only one numeric data type, NUMBER, which can store zero, positive, negative, fixed, and floating point numbers, with a precision (p) of up to 38 digits and a scale (s) ranging from -84 to 127. Synonyms, such as NUMERIC, DECIMAL, FLOAT, and INTEGER, can be used. SQL Server, on the other hand, has eight distinct named numeric data types that constrain the range of values they can hold. SQL Server also has three other data types in the number category: BIT, MONEY, and SMALLMONEY. Table 6.4 compares numeric data types in Oracle and SQL Server.
Table 6.4: Finding Closest SQL Server Equivalent to Oracle Numeric Data Types

Oracle                                    SQL Server
Number(19,0)                              BigInt
Int or Number(10,0)                       Int
SmallInt or Number(6,0)                   SmallInt
Number(3,0)                               TinyInt
Number(p,s)                               Decimal or Numeric(p,s)
Float or Double Precision or Number(38)   Float
Real or Number(19) or Float(63)           Real or Float(24)
Number(19,4)                              Money
Number(10,4)                              SmallMoney
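As a sketch of applying these mappings (the table and column names are invented for illustration), an Oracle definition and its SQL Server counterpart:

```sql
-- Hypothetical Oracle source table:
--   CREATE TABLE ORDER_TOTALS
--     (ORDER_ID NUMBER(10,0), QTY NUMBER(3,0), AMOUNT NUMBER(19,4));
-- SQL Server equivalent using the mappings in Table 6.4:
CREATE TABLE ORDER_TOTALS
( ORDER_ID INT,       -- Number(10,0) maps to Int
  QTY      TINYINT,   -- Number(3,0)  maps to TinyInt
  AMOUNT   MONEY )    -- Number(19,4) maps to Money
```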

Binary Data
Oracle's BLOB replaces the older RAW and LONG RAW data types. The IMAGE data type in SQL Server is the closest equivalent for Oracle's BLOB data type, and it can support data up to 2 GB in size. The BINARY data type is a fixed-size column, while VARBINARY is its variable-size counterpart. Table 6.5 provides SQL Server equivalents for Oracle binary data types.


Table 6.5: SQL Server Equivalent for Oracle Binary Data Types

Oracle Data Type   Max Size              SQL Server Data Type        Max Size
BLOB               4 GB                  Image                       2 GB
Raw                2000 bytes            Image                       2 GB
Long Raw           2 GB                  Image                       2 GB
BFile              4 GB (file pointer)   N/A                         -
Raw(n)             2000 bytes            Binary(n) or VarBinary(n)   8000 bytes

For more information about designing for and implementing BLOBs in SQL Server, refer to http://www.microsoft.com/resources/documentation/sql/2000/all/reskit/en-us/part3/c1161.mspx.

Date and Time


Historically, Oracle has had one data type, Date, which stores both date and time information but offers neither fractional-second precision nor time zone support. In Oracle 9i, three new date and time data types were added to address these shortcomings. The Timestamp data type can store fractional seconds with up to 9 digits of precision. Time zone-aware data types, Timestamp With Time Zone and Timestamp With Local Time Zone, are also available, and they can handle daylight saving time. SQL Server has two data types: DateTime and SmallDateTime. DateTime can represent the date and time in the range January 1, 1753 to December 31, 9999 with a precision of one three-hundredth of a second. SmallDateTime can represent dates in the range January 1, 1900 through June 6, 2079 with a precision limited to the minute. The Oracle Date and Timestamp data types should be migrated to the DateTime data type. The two time zone-aware data types do not have an equivalent in SQL Server.
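Because DateTime cannot represent dates before January 1, 1753, it can be worth checking the Oracle data before migration. A hedged sketch (the table and column names are hypothetical):

```sql
-- Run against Oracle before migration: count DATE values that fall
-- below the lower bound of the SQL Server DATETIME type.
-- HIRE_HISTORY and EVENT_DATE are placeholder names for this sketch.
SELECT COUNT(*)
FROM HIRE_HISTORY
WHERE EVENT_DATE < TO_DATE('1753-01-01', 'YYYY-MM-DD')
```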

User Defined Types


A simple implementation of user-defined data types is the capability to define data types as simple variations of the primitive data types. For example, defining a data type ZIP as a native type Char, with a length of 5, provides a uniform definition that avoids any ambiguity between designers and developers. For example, the user-defined data type ZIP can be created as follows:
CREATE TYPE zip_type AS OBJECT (zip char(5)) /

In SQL Server, the user-defined ZIP data type is:


EXEC sp_addtype zip_type, 'char(5)'

User-defined data types can be recursively used to define other user-defined data types. However, SQL Server does not support object types and their collections.
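Once registered with sp_addtype, the user-defined data type can be used like a native type in a column definition; a brief sketch (CUSTOMERS is a hypothetical table name):

```sql
-- Use the zip_type user-defined data type in a table definition.
CREATE TABLE CUSTOMERS
( CUSTOMER_ID INT,
  ZIP         zip_type )
```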

Tables
The definition of the table data structure is the same, both in concept and form, in both Oracle and SQL Server. This can be attributed to compliance with ANSI SQL standards. When designing the table data structure, data integrity is as important as data access. To serve these purposes, Oracle provides several options for how data can be organized within the table. Table 6.6 summarizes them:


Table 6.6: Oracle and SQL Server Table Types Compared

Oracle                  SQL Server
Heap-Organized Table    Heap
Clustered Table         N/A
Partitioned Table       N/A
Nested Table            N/A
Temporary Table         Temporary Table
External Table          N/A
Object Table            N/A
Index-Organized Table   Clustered Index

The most common implementation of the table object is, in its basic form, the heap-organized table, and this table object is the focus of the remainder of this section. The CREATE TABLE syntax, which creates a simple heap-organized table, has three major parts:
Table name
Body (enclosed in parentheses): column name; column properties (data type, defaults, inline column constraints); out-of-line column constraints; table constraints
Storage specification

Table 6.7 compares the structural composition of the CREATE TABLE syntax in Oracle and SQL Server.
Table 6.7: High-level Comparison of Table Definition Syntax

Oracle:
CREATE TABLE [schema.]table_name
( column_name column_data_type default_expression column_constraint,
  out_of_line_or_table_constraint )
[storage_specification]

SQL Server:
CREATE TABLE [[database.]owner.]table_name
( column_name column_data_type default_expression column_constraint,
  out_of_line_or_table_constraint )
[storage_specification]

The steps for creating a table in SQL Server using both Enterprise Manager and T-SQL are provided in the following procedures.
To create a new table using Enterprise Manager, follow these steps:
1. Expand Server, then Databases, and then the target database.
2. Right-click Tables and select New Table. A new window appears for adding columns and other objects, such as indexes and constraints related to the table. This window is called the Table Designer tool.
3. Add new columns in the top grid (the column definition grid) along with the column data type and nullability constraint. Some of the column's properties, such as Default Value, can be provided in the Columns pane at the bottom. The Identity property is discussed in greater detail under the "Sequences" heading.
4. Click the Table and Index Properties button found on the Columns window, or right-click anywhere in the column definition grid and select Properties to bring up the table Properties window, which can be used to specify table-level properties, constraints, and indexes.
To create a new table using T-SQL, use the following syntax:
CREATE TABLE [ database_name.[ owner ] . | owner. ] table_name
( { < column_definition >
    | column_name AS computed_column_expression
    | < table_constraint > } [ ,...n ]
)
[ ON { filegroup | DEFAULT } ]
[ TEXTIMAGE_ON { filegroup | DEFAULT } ]
where < column_definition > ::=
column_name data_type
[ COLLATE < collation_name > ]
[ DEFAULT constant_expression ]

The following list covers each of the components of the CREATE TABLE syntax:
Table and column name. As noted in the "Identifiers and Naming" section earlier in this chapter, Oracle table and column names do not need any changes when migrated to SQL Server.
Column data type. The column data type forms part of the table definition and is an integral part of creating a table. To reduce the complexity of the discussion of tables (because there are several components to be dealt with here), data types are covered in the section "Working with Data Types" earlier in this chapter.
Default value. The rules for specifying default values are very similar in both databases, and default values can be migrated in most cases. One difficulty that may be encountered while migrating to SQL Server is the lack of an equivalent for an Oracle system function used to define a default value. In SQL Server, a default value cannot be specified for a column of type timestamp. While Oracle treats defaults as a property of the column, SQL Server defines defaults as a constraint.
Constraints. A detailed discussion of constraints appears in the "Constraints" section later in this chapter.
Storage properties. A detailed discussion of the storage architecture appears in the "Migrate the Storage Architecture" section in Chapter 5, "Developing: Databases Migrating the Database Architecture." The only storage property that can be specified during table creation is the filegroup name. If a filegroup is not specified, the table is created in the database's default filegroup. In the Table and Index Properties window, the Table Filegroup drop-down list can be used to pick the filegroup from which the table will be allocated storage. The Text Filegroup drop-down list can be used to specify the filegroup to be used for large objects, such as text and image columns.
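The points above can be illustrated with a short, hedged sketch. All names here (dbo.orders, order_fg) are invented for illustration and do not come from this guide:

```sql
-- Illustrative only: shows a column-level default and an explicit filegroup.
CREATE TABLE dbo.orders (
    order_id   int         NOT NULL,
    status     varchar(10) NOT NULL DEFAULT ('NEW'),
    created_at datetime    NOT NULL DEFAULT (GETDATE())
)
ON order_fg   -- omit the ON clause to place the table on the default filegroup
```

Note that a default cannot be declared for a timestamp column, and the ON clause must name a filegroup that already exists in the database.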

Solution Guide for Migrating Oracle on UNIX to SQL Server on Windows


Note In Oracle and SQL Server, tables can be created based on the definition of other tables. In Oracle, this is accomplished using the following statement:
CREATE TABLE table_name AS SELECT

The same can be achieved in SQL Server using the following syntax:
SELECT INTO table_name

The various table types found in Oracle can be migrated as described in the following list. Detailed knowledge of each table type is not necessary to accomplish the migration.
Clustered table. Sometimes simply called clusters, these do not have an equivalent in SQL Server. The tables in the cluster have to be created as regular heaps. If rows are frequently accessed using a range search, a clustered index may be created on such column(s). Only one clustered index can be created on a table. This migration of Oracle clustered tables to heaps in SQL Server will be transparent to the application and users.
Partitioned table. SQL Server does not have a partitioned table option. However, SQL Server offers partitioned views, which are built using the same strategy that was used in Oracle (before Oracle 8i) when true horizontal partitioning was not available. This involves creating separate tables for each of the partitions, with a check constraint on the partition key column(s) to enforce a range of values. A partitioned view is then created as a union of all the constituent tables. Details on the implementation of partitioned views can be found under the topic "Creating a Partitioned View" in SQL Server Books Online. This migration of Oracle partitioned tables to partitioned views in SQL Server will be transparent to the application and users.
Nested table. SQL Server does not support nested tables. The implementation strategy used in Oracle can be imitated in SQL Server when migrating them. The nested table column can be separated out into its own table (denormalized with a similar definition), with a unique identifier used to connect the rows of the parent table to those of the child table. Minor modifications to the code will be required to retrieve the data from the child table.
Temporary table. SQL Server supports both local and global temporary tables.
SQL Server's local temporary table is equivalent to Oracle's global temporary table created with the ON COMMIT PRESERVE ROWS option because it provides the same level of isolation from other sessions. Table 6.8 shows the syntax for creating temporary tables in Oracle and SQL Server. Table 6.8: SQL Statements for Creating Temporary Tables in Oracle and SQL Server Oracle
CREATE GLOBAL TEMPORARY TABLE table_name ON COMMIT DELETE|PRESERVE ROWS

SQL Server
CREATE TABLE #table_name
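As a hedged sketch of the mapping (table and column names are illustrative), an Oracle table declared with ON COMMIT DELETE ROWS has to be compensated for with an explicit DELETE, because rows in a SQL Server local temporary table survive a COMMIT:

```sql
-- Oracle source (for reference):
--   CREATE GLOBAL TEMPORARY TABLE work_items (...) ON COMMIT DELETE ROWS
CREATE TABLE #work_items (
    item_id int          NOT NULL,
    note    varchar(100) NULL
)

BEGIN TRANSACTION
INSERT INTO #work_items (item_id, note) VALUES (1, 'pending')
COMMIT TRANSACTION

DELETE FROM #work_items   -- compensating delete for ON COMMIT DELETE ROWS
```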

Local temporary tables in SQL Server work identically to Oracle temporary tables and do not warrant any changes in the code. But the code pieces that assume that rows are deleted automatically when working with tables with ON COMMIT


DELETE ROWS have to be modified to include a compensating DELETE statement.
External table. SQL Server does not have an option to create tables whose data resides in flat files. Hence, such data will have to be imported into the database.
Object table. SQL Server does not support objects. The strategy for migrating object tables to SQL Server is to flatten (absorb) the object column into the table itself. This will induce only a small change in the SQL code that accesses the data.
Index-organized table. SQL Server clustered indexes are very similar to index-organized tables (IOTs), in which the index is merged into the table instead of being kept as two separate structures. Clustered indexes are implemented in a fashion similar to index-organized tables and have very similar features. Index-organized tables are sorted on the primary key. To migrate an index-organized table to SQL Server, the CLUSTERED keyword can be used in conjunction with the PRIMARY KEY constraint, as described in the syntax given in Table 6.9. In SQL Server, a clustered index can be created on any column(s) of a table, but it has to be created using the CREATE INDEX statement. If a clustered index is created on non-unique columns, SQL Server enforces uniqueness by adding a uniqueifier to the rows with duplicate key values. In most cases, the primary key is the ideal candidate for the clustered index. An identity column can also be used as the clustering key. In SQL Server, it is recommended that every table have a clustered index.
Note Clustered indexes should be created before creating any nonclustered indexes. If a clustered index is planned for a table, defer creation of nonclustered indexes (as part of PRIMARY KEY or UNIQUE constraint definitions) during the table creation in this step.
Table 6.9 shows the SQL Server options for creating the equivalent of an Oracle index-organized table.
Table 6.9: SQL Statements for Creating Index-organized Tables in Oracle and SQL Server Oracle
CREATE TABLE table_name ( column_name datatype, ) ORGANIZATION INDEX [ storage_definition ]

SQL Server
CREATE TABLE table_name ( column_name datatype, [ CONSTRAINT constraint_name ] PRIMARY KEY CLUSTERED [ ON filegroup | DEFAULT ] ) or CREATE [UNIQUE] CLUSTERED INDEX index_name ON table_name ( column_name [, ] ) [ ON filegroup | DEFAULT ]
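For example, a hypothetical Oracle index-organized table lookup_codes could be migrated with a clustered PRIMARY KEY constraint (all names in this sketch are illustrative):

```sql
-- Oracle source (for reference):
--   CREATE TABLE lookup_codes (code NUMBER, description VARCHAR2(50),
--     CONSTRAINT pk_lookup_codes PRIMARY KEY (code)) ORGANIZATION INDEX;
CREATE TABLE lookup_codes (
    code        int         NOT NULL,
    description varchar(50) NOT NULL,
    CONSTRAINT pk_lookup_codes PRIMARY KEY CLUSTERED (code)
)
```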

Migration of tables is complicated by the referential integrity (foreign key) constraints between tables. There are two options for how to migrate tables. Option one is to create


tables in a specific order based on the foreign key constraints (parent tables first). Option two is to create tables in any order, leaving out the foreign key constraints, and adding them after all the tables have been created.

Comments
This is a property of the table that is often not documented and, hence, easily overlooked during a migration. SQL Server does not have a comment property associated with tables and columns. However, SQL Server provides the capability to associate custom properties, called extended properties, with objects. Comments can be added to tables and columns using Enterprise Manager and system stored procedures. To add a comment on tables and columns using Enterprise Manager, follow these steps: 1. Expand Server, then Databases, and then the target database. 2. Left-click Tables to display the list of tables. 3. Right-click the target table and left-click Design Table. This brings up the Table Designer. 4. Comments for the individual columns can be added in the Description field of the Columns pane at the bottom of the Table Designer window. 5. In the Table Designer window, click the Table and Index Properties button found above the grid. 6. Left-click the Tables pane of the Properties window. Table comments can be added in the Description field in the Tables pane. The sp_addextendedproperty system stored procedure can be used to add any user-defined metadata to any column in SQL Server. This (and any other properties that are added to the column) can be retrieved with the fn_listextendedproperty function. Table 6.10 shows the syntax for adding comments at the table and column levels. Table 6.10: Comparison of Functionality for Adding Comments in Oracle and SQL Server Oracle
COMMENT ON TABLE table_name IS comment

SQL Server
sp_addextendedproperty 'comment', comment, 'user', schema_name, 'table', table_name

COMMENT ON COLUMN table_name.column_name IS comment

sp_addextendedproperty 'comment', comment, 'user', schema_name, 'table', table_name, 'column', column_name
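A worked example of Table 6.10 follows. The table, column, and comment text are illustrative; 'comment' is simply the property name chosen here:

```sql
-- Table-level comment on an assumed dbo.customers table:
EXEC sp_addextendedproperty 'comment', 'Customer master table',
     'user', 'dbo', 'table', 'customers'

-- Column-level comment:
EXEC sp_addextendedproperty 'comment', 'Natural key carried over from Oracle',
     'user', 'dbo', 'table', 'customers', 'column', 'legacy_id'

-- Read the comments back for all columns of the table:
SELECT objname, value
FROM   ::fn_listextendedproperty ('comment', 'user', 'dbo',
                                  'table', 'customers', 'column', NULL)
```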

Constraints
Constraints are data integrity rules that are defined on the columns of a table to enforce certain business rules. As an example, the specification of data types in the definition of the column constitutes a constraint.



Table 6.11 compares the availability of various types of constraints in Oracle and SQL Server: Table 6.11: Constraints Available in Oracle and SQL Server Oracle
NOT NULL UNIQUE PRIMARY KEY FOREIGN KEY CHECK

SQL Server
NOT NULL UNIQUE PRIMARY KEY FOREIGN KEY CHECK

The functionality provided by Oracle and SQL Server to define constraints on columns, including the syntactic use of inline (or column) constraints and out-of-line (or table) constraints, is almost identical. The syntax in Table 6.12 illustrates the similarity. Out-of-line constraints have to be used when more than one column is involved in the definition of a constraint. If a constraint name is not specified using the CONSTRAINT clause, a default name is generated by the database. The default names given by Oracle are of the form SYS_Cn, where n is a unique number generated by Oracle. The names provided by SQL Server are more suggestive of the type of constraint (for example, CK, PK, and FK) and the table and column(s) involved. The syntax for the two options for defining constraints in SQL Server is provided in Table 6.12. Table 6.12: T-SQL Statements for Defining Inline and Out-of-line Constraints in SQL Server Inline (Column) Constraint
CREATE TABLE table_name (column_name datatype [ CONSTRAINT constraint_name ] [ [ NOT ] NULL | UNIQUE | PRIMARY KEY | CHECK (condition) | REFERENCES ref_table (ref_column) ] )

Out-of-line (Table) Constraint


CREATE TABLE table_name (column_name datatype, [ CONSTRAINT constraint_name ] [ UNIQUE (column_name [, ...]) | PRIMARY KEY (column_name [, ...]) | CHECK (condition) | FOREIGN KEY (column_name [, ...]) REFERENCES ref_table (ref_column) ] )

NOT NULL Constraint


In both Oracle and SQL Server, the NOT NULL constraint can be specified only as a column constraint. More often than not, a constraint name is not provided while creating NOT NULL constraints. When a column has not been specifically defined as NOT NULL in Oracle, it defaults to being nullable. When NULL or NOT NULL is not specified explicitly for a column definition, SQL Server uses the default of NOT NULL. For ANSI compatibility, setting the database option ANSI_NULL_DEFAULT to ON changes the database default to NULL. In SQL Server, database and session settings can override the nullability specified by the column definition. Some DBAs like to use meaningful names for all constraints so that errors referring to constraint violations can be easily recognized in error messages. When migrating from Oracle, NOT NULL constraints with user-defined names can be queried using the following syntax:


CREATE TABLE not_null_constraints (
    constraint_name  varchar2(30),
    owner            varchar2(30),
    table_name       varchar2(30),
    column_name      varchar2(30),
    search_condition clob);

INSERT INTO not_null_constraints
SELECT dc.constraint_name, dc.owner, dc.table_name, dcc.column_name,
       to_lob(dc.search_condition)
FROM dba_constraints dc, dba_cons_columns dcc
WHERE dc.owner = dcc.owner
AND dc.constraint_name = dcc.constraint_name
AND dc.constraint_type = 'C'
AND dc.owner = 'user_name';
COMMIT;

SELECT * FROM not_null_constraints
WHERE dbms_lob.instr(search_condition, 'IS NOT NULL', 1, 1) > 0;

NOT NULL constraints can be created on tables using Enterprise Manager and the CREATE TABLE T-SQL statement. To create a NOT NULL constraint using Enterprise Manager, follow these steps: 1. In the Table Designer, the Allow Nulls column should be unchecked for enforcing the NOT NULL constraint and checked to indicate a NULL constraint. To create a NOT NULL constraint Using T-SQL, use the following syntax:
CREATE TABLE table_name ( column_name datatype [ CONSTRAINT constraint_name ] { NULL | NOT NULL }, )

Check Constraint
Both Oracle and SQL Server use the same syntax to define check constraints; both also have similar restrictions on their usage. CHECK constraints can be defined on a single column or on multiple columns at the table level. The conditions specified in the CHECK clause should evaluate to a Boolean value of TRUE or FALSE. The conditions can refer to other columns but are restricted to the row being modified. Both Oracle and SQL Server allow multiple CHECK constraints on a column. A column constraint can only reference the column it is being created on. Each constraint can have multiple concatenated conditions. The process to create check constraints on tables using Enterprise Manager and T-SQL is provided in the following procedures. To create a check constraint using the Table Designer in Enterprise Manager, follow these steps: 1. In the Table Designer, click the Manage Constraints button to bring up the table Properties window with the Check Constraints pane active. 2. Click the New button to add a constraint.


3. A name of the type CK_table_name is inserted by the system in the Constraint name box. The constraint name can be changed to match the name given in Oracle only after entering a valid condition in the Constraint expression box. To create a check constraint using T-SQL, use the following syntax: Table 6.13 provides the syntax for the two alternative formats available in SQL to specify CHECK constraints during table creation. The syntax is the same in Oracle and SQL Server. Table 6.13: T-SQL Statements for Defining Inline and Out-of-line CHECK Constraints in SQL Server Inline (Column) Constraint
CREATE TABLE table_name (column_name datatype [ CONSTRAINT constraint_name ] CHECK (condition) )

Out-of-line (Table) Constraint


CREATE TABLE table_name (column_name datatype, [ CONSTRAINT constraint_name ] CHECK (condition) )
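Both forms can appear in one table definition, as in this hedged sketch (the table, constraint names, and conditions are illustrative):

```sql
CREATE TABLE bookings (
    start_date datetime NOT NULL,
    end_date   datetime NOT NULL,
    guests     int      NOT NULL
        -- inline (column) constraint: may reference only this column
        CONSTRAINT ck_bookings_guests CHECK (guests BETWEEN 1 AND 8),
    -- out-of-line (table) constraint: may reference several columns of the row
    CONSTRAINT ck_bookings_dates CHECK (end_date >= start_date)
)
```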

Unique Constraints
Oracle and SQL Server use indexes to enforce unique constraints. In Oracle, if a unique or non-unique index already exists on the constraint columns, that index is used to enforce the constraint without creating a new one. Therefore, if a user-defined index is preferred over a system-defined (and named) one, it is recommended that the index be created first. Unless a clustered index is explicitly specified, SQL Server creates a nonclustered index on the unique key column(s). An important difference in the implementation of the UNIQUE constraint is that Oracle allows multiple rows with NULL values in all the columns making up the unique constraint, whereas SQL Server allows only one row to have a NULL value in the UNIQUE column. In Oracle, the UNIQUE, PRIMARY KEY, and FOREIGN KEY constraints can be a composite of up to 32 columns, while in SQL Server the limit is 16. Unique constraints can be created using Enterprise Manager as well as T-SQL statements. To create a UNIQUE constraint using Enterprise Manager, follow these steps: 1. In the Table Designer, click the Table and Index Properties button. 2. Select the Indexes/Keys tab and click New. A system-assigned index name appears with the column name, column order, index filegroup, and fill factor filled in using defaults. You will need to replace these default values with your own. 3. Enter the name of the new index in the Index name text box and select the columns to be included in the constraint in the correct order in the grid. 4. Check the Create UNIQUE check box and select the Constraint radio button. 5. Check the Create as CLUSTERED box if a clustered index is to be used to enforce the constraint. If a nonclustered index is chosen, a filegroup can be specified for the index. Because only a single clustered index can be created on a table, the filegroup for a clustered constraint cannot be changed from the table's own filegroup. 6.
The equivalent of Oracle's PCTFREE can be specified for the index that will be created to enforce the constraint in the Fill Factor box.


To create a UNIQUE constraint using T-SQL, use the following syntax: The only syntactic difference in the UNIQUE constraint definition between Oracle and SQL Server is the USING INDEX clause used to define the index that enforces the constraint. Oracle's USING INDEX clause has an equivalent set of clauses (WITH FILLFACTOR and the filegroup ON clause) in SQL Server that can be used with both column and table constraints. The syntax for the table constraint is described in Table 6.14. Table 6.14: SQL Statements for Defining UNIQUE Constraints in Oracle and SQL Server Oracle
CREATE TABLE table_name ( column_name datatype, , [ CONSTRAINT constraint_name ] UNIQUE ( column_name [, ] ) [ USING INDEX [ TABLESPACE tablespace_name ] [ storage_definition ] ]

SQL Server
CREATE TABLE table_name ( column_name datatype, , [ CONSTRAINT constraint_name ] UNIQUE [ CLUSTERED | NONCLUSTERED ] ( column_name [, ] ) [ WITH FILLFACTOR = fillfactor ] [ ON { filegroup | DEFAULT } ] )
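As a sketch, an Oracle constraint such as UNIQUE (part_no) USING INDEX TABLESPACE idx_ts might be migrated as follows (the table, column, and fill factor are illustrative):

```sql
CREATE TABLE parts (
    part_id int         NOT NULL,
    part_no varchar(20) NOT NULL,
    CONSTRAINT uq_parts_part_no UNIQUE NONCLUSTERED (part_no)
        WITH FILLFACTOR = 90   -- rough counterpart of Oracle's PCTFREE
)
```

Remember that SQL Server will reject a second row with a NULL part_no, where Oracle would have allowed any number of such rows.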

Primary Key Constraint


PRIMARY KEY constraints have all the characteristics of UNIQUE constraints, with the additional restriction that all primary key columns be NOT NULL. Oracle creates a UNIQUE index and a NOT NULL constraint to implement PRIMARY KEY constraints. With Oracle 9i, an existing non-unique index may be used to enforce the primary key constraint. In SQL Server, if a clustered index does not already exist on the table, or a nonclustered index is not explicitly specified, a unique, clustered index is created to enforce the PRIMARY KEY constraint. The following procedures demonstrate that it is far easier to use T-SQL than Enterprise Manager when it comes to creating a PRIMARY KEY constraint. To create a primary key constraint using Enterprise Manager, follow these steps: 1. In the Table Designer, click and select the column(s) that are part of the primary key (use the Ctrl key for this). 2. Right-click anywhere along the selected column(s) and click Set Primary Key. A check mark appears next to the Set Primary Key option. 3. A unique clustered index using the naming convention PK_table_name is automatically created by the system. 4. The name and properties of the index can be changed from those assigned by the system through the Indexes/Keys pane of the Table and Index Properties window. To create a primary key constraint using T-SQL, use the following syntax: Oracle and SQL Server differ only in the USING INDEX part of the PRIMARY KEY constraint syntax. The syntax differs from the UNIQUE constraint definition only in the UNIQUE keyword being replaced with PRIMARY KEY. In both Oracle and SQL Server, a NOT NULL constraint is added by the system for all primary key columns when one is not explicitly specified.


The CREATE TABLE syntax in Table 6.15 is for defining primary key constraints and its associated index. Table 6.15: SQL Statements for Defining PRIMARY KEY Constraints in Oracle and SQL Server Oracle
CREATE TABLE table_name ( column_name datatype, , [ CONSTRAINT constraint_name ] PRIMARY KEY ( column_name [, ] ) [ USING INDEX [ TABLESPACE tablespace_name ] [ storage specs ] ]

SQL Server
CREATE TABLE table_name ( column_name datatype, , [ CONSTRAINT constraint_name ] PRIMARY KEY [ CLUSTERED | NONCLUSTERED ] ( column_name [, ] ) [ WITH FILLFACTOR = fillfactor ] [ ON { filegroup | DEFAULT } ] )
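A hedged sketch of the nonclustered variant, which leaves the clustered index free for a range-searched column (all names are illustrative):

```sql
CREATE TABLE order_lines (
    order_id  int      NOT NULL,
    line_no   int      NOT NULL,
    ship_date datetime NULL,
    CONSTRAINT pk_order_lines
        PRIMARY KEY NONCLUSTERED (order_id, line_no)
)

-- Reserve the clustered index for the column used in range searches:
CREATE CLUSTERED INDEX ix_order_lines_ship_date
    ON order_lines (ship_date)
```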

Foreign Key Constraint


A foreign key constraint enforces the rule that a non-null value in one or more columns of a table (child) must exist in a corresponding set of columns in the same (self-referential) or another (parent) table. The referenced key columns must have a primary key or unique key index on them. The number and data types of the columns specified in the FOREIGN KEY clause should match the corresponding columns of the referenced table. FOREIGN KEY constraints in SQL Server cannot cross database boundaries. When foreign key constraints that exist between different schemas of a source Oracle database are migrated to separate databases in SQL Server, those constraints will have to be replaced with triggers. In Oracle, the ON DELETE clause is used to specify the actions that will automatically be undertaken if a parent row is deleted. Additionally, SQL Server has the ON UPDATE clause for handling updates to the parent rows. Table 6.16 shows the availability of various control actions using these clauses in Oracle and SQL Server: Table 6.16: Functionality Available in Oracle and SQL Server with Respect to Foreign Key Constraint

Operation   Action      Oracle   SQL Server
ON DELETE   SET NULL    Yes      No
ON DELETE   CASCADE     Yes      Yes
ON DELETE   NO ACTION   Yes      Yes
ON UPDATE   CASCADE     No       Yes
ON UPDATE   NO ACTION   No       Yes

Note In Oracle, the default of NO ACTION (restrict) is assumed only by the absence of an ON DELETE { SET NULL | CASCADE } clause in the definition. In SQL Server, the default is NO ACTION for both operations. To mimic the action of SET NULL, it has to be handled programmatically in the application using an appropriate SQL statement. Foreign key constraints can be created in SQL Server using Enterprise Manager and T-SQL.


To create a foreign key constraint using Enterprise Manager, follow these steps: 1. Right-click the table name on the table list, click Design Table, and then click Table and Index Properties. 2. Select the Relationships tab and click New. 3. A new relationship is created with a default name and the current table selected in the Foreign key table drop-down list. Type in or select (from the drop-down list) the appropriate columns involved in the current table (child table). Also select the appropriate table for the Parent key table and its parent columns. 4. Select appropriate behaviors for the constraint using the check boxes at the bottom of the screen. The important behaviors are Check existing data on creation and Enforce relationship for INSERTs and UPDATEs. To create a foreign key constraint using the Database Diagram Wizard, follow these steps: 1. Expand Server, then Databases, then the target database. 2. Right-click Diagrams and click New Database Diagram. 3. The wizard will lead you through adding the tables to be included in the diagram. See Figure 6.7.

Figure 6.7
Using the Database Diagram Wizard to create referential integrity constraints

4. To add a relationship, select the foreign key column of the child table, and drag it to the primary or unique key column of the parent table. 5. A pop-up dialog will appear with columns filled in based on the columns selected in the drag and drop. The actions need not be precise because information such as


name, primary key, and foreign key columns can be modified before saving. See Figure 6.8.

Figure 6.8
Choosing the right relationship properties

To create a foreign key constraint using T-SQL, use the following syntax: Table 6.17 contains the syntax to add foreign key constraints with the ON DELETE actions and also the ON UPDATE clause, which does not exist in Oracle. Table 6.17: SQL Statements for Defining FOREIGN KEY Constraints in Oracle and SQL Server Operation
ON DELETE

Oracle
CREATE TABLE table_name ( , column_name datatype, , [ CONSTRAINT constraint_name ] FOREIGN KEY (column_name [, ]) REFERENCES [schema.]ref_table_name (ref_column_name [, ]) ON DELETE { SET NULL | CASCADE } )

SQL Server
CREATE TABLE table_name ( , column_name datatype, , [ CONSTRAINT constraint_name ] FOREIGN KEY (column_name [, ]) REFERENCES ref_table_name (ref_column_name [, ]) ON DELETE { CASCADE | NO ACTION } )

Operation
ON UPDATE

Oracle
Not available

SQL Server
CREATE TABLE table_name ( , column_name datatype, , [ CONSTRAINT constraint_name ] FOREIGN KEY (column_name [, ]) REFERENCES ref_table_name (ref_column_name [, ]) ON UPDATE { CASCADE | NO ACTION } )
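A sketch of a migrated child table (names are illustrative); Oracle's ON DELETE SET NULL, which has no declarative counterpart here, would instead be handled in application code or a trigger:

```sql
CREATE TABLE order_items (
    item_id  int NOT NULL PRIMARY KEY,
    order_id int NOT NULL,
    CONSTRAINT fk_order_items_orders FOREIGN KEY (order_id)
        REFERENCES orders (order_id)
        ON DELETE CASCADE      -- remove child rows with the parent
        ON UPDATE NO ACTION    -- reject updates to referenced keys
)
```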

Triggers
Triggers are used to enforce more complex business rules than can be enforced using constraints. Triggers are stored procedures that are implicitly executed when certain data modification (using data manipulation language statements, also known as DML) is performed against tables and views. In Oracle, there can be up to twelve combinations of trigger executions (actions) based on: DML operation: INSERT, UPDATE, DELETE Timing: BEFORE, AFTER Level: ROW, STATEMENT

Table 6.18 evaluates SQL Server support for the Oracle trigger functionality. Table 6.18: Functionality of Oracle Triggers Mapped to SQL Server

Trigger Feature                Oracle           SQL Server
DML INSERT                     Yes              Yes
DML UPDATE                     Column/Row       Row
DML DELETE                     Yes              Yes
Timing BEFORE                  Yes              Yes (INSTEAD OF)
Timing AFTER                   Yes              Yes
Level                          Row/Statement    Row
Views INSTEAD OF               Yes              Yes
Multiple triggers per action   Yes              Yes (first/last specified)

The SQL Server INSTEAD OF triggers are equivalent to Oracle's BEFORE triggers. When migrating triggers from Oracle, the only drawback is that SQL Server does not support statement-level triggers. However, there are two pseudo-tables, inserted and deleted, that are populated during trigger execution and can be used to simulate statement-level operations. These inserted and deleted pseudo-tables are similar to the :old and :new pseudo-rows in Oracle and are populated with all (multiple) rows affected by the trigger execution. The SQL Server deleted and inserted pseudo-tables are available by default and are not specified as part of the trigger definition. In


Oracle, the pseudo-rows have to be defined in the trigger using the REFERENCING clause. Note This section is concerned only with creation of the trigger object. A detailed discussion on trigger logic (code) and its migration is found in Chapter 11, "Developing: Applications Migrating Oracle SQL and PL/SQL" under the "Migrating the Data Access" section. When multiple triggers exist on a table for the same action, Oracle does not guarantee any particular order of execution of multiple triggers. SQL Server has the capability to specify which trigger should be fired before (first) and after (last) all other triggers using the system stored procedure sp_settriggerorder. This feature has been found useful in eliminating some of the complexities in the code found in Oracle triggers to manage the sequence of events. Triggers can be defined on tables either using Enterprise Manager or using T-SQL. To create a trigger using Enterprise Manager, follow these steps: 1. In the Enterprise Manager, expand Server, then Databases, then target database, and then Tables. 2. Right-click the table name on the table list, click All Tasks, and then click Manage Triggers. Or, click the Triggers button in the Table Designer. 3. Type in the trigger text using the provided template. 4. Click Apply if more than one trigger is to be created, or OK to save all changes and close the dialog box. To create a trigger using T-SQL, use the following syntax: Table 6.19 compares the commonly used functionality of the trigger definition CREATE TRIGGER statement in Oracle and SQL Server. Table 6.19: SQL Statements for Creating Triggers in Oracle and SQL Server Oracle
CREATE TRIGGER trigger_name { BEFORE | AFTER | INSTEAD OF } { { INSERT [ OR ] | DELETE [ OR ] | UPDATE [ OF column_name [, ] ] } } ON table_name REFERENCING [OLD AS :old] [NEW AS :new] [ FOR EACH ROW ] WHEN ( condition ) Pl/sql_block

SQL Server
CREATE TRIGGER trigger_name ON table_name | view_name { { FOR | AFTER | INSTEAD OF } { [ INSERT ] [ , ] [ UPDATE ] [ , ] [ DELETE ] } AS [ { IF UPDATE ( column_name ) [ { AND | OR } UPDATE ( column_name ) ] [ ...n ] | IF ( COLUMNS_UPDATED ( ) { bitwise_operator } updated_bitmask ) { comparison_operator } column_bitmask [ ...n ] } ] sql_statement [ ...n ] } }
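As an assumption-flagged sketch (the table, column, and trigger names are invented), an Oracle AFTER UPDATE ... FOR EACH ROW trigger can be rewritten set-based over the inserted and deleted pseudo-tables:

```sql
CREATE TRIGGER trg_parts_price_audit ON parts
AFTER UPDATE
AS
IF UPDATE (price)               -- run the body only when price was assigned
    INSERT INTO price_audit (part_id, old_price, new_price, changed_at)
    SELECT d.part_id, d.price, i.price, GETDATE()
    FROM deleted d
    JOIN inserted i ON i.part_id = d.part_id   -- old row matched to new row
GO

-- Optionally pin this trigger to fire first among UPDATE triggers:
EXEC sp_settriggerorder 'trg_parts_price_audit', 'First', 'UPDATE'
```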


Indexes
There are two major categories of indexes in Oracle: B-tree indexes and bitmap indexes. All other indexes are variations of these two basic types that provide additional features. Table 6.20 provides a quick comparison of the indexing schemes available in Oracle and SQL Server: Table 6.20: Indexing Schemes Available in Oracle and SQL Server

Index Scheme                   Oracle             SQL Server
B-tree Unique                  Yes                Yes
B-tree Non-unique              Yes                Yes
B-tree Composite               Yes (32 columns)   Yes (16 columns)
B-tree Ascending               Yes                Yes
B-tree Descending              Yes                Yes
B-tree Cluster                 Yes                No
B-tree Reverse Key             Yes                No
B-tree Key Compressed          Yes                No
B-tree Function-based          Yes                No
B-tree Index-Organized Table   Yes                Yes (Clustered)
B-tree Partitioned             Yes                No
Bitmap                         Yes (30 columns)   No
Bitmap Join                    Yes                No

B-Tree Indexes
SQL Server offers two types of indexes: clustered and nonclustered. Both types are based on the B-tree data structure, so comparable performance can be expected in the two systems. An important difference between the two implementations is that Oracle does not index rows in which all the key columns are null, while SQL Server does. Both Oracle and SQL Server support the following basic versions of the B-tree index:
Composite. Oracle allows indexes with up to 32 columns, whereas SQL Server allows up to 16 columns. While indexes with more than 16 columns are rare, the number of columns will have to be trimmed down if such indexes are to be migrated to SQL Server.
Unique. Unique indexes are used to enforce PRIMARY KEY and UNIQUE constraints. Even though nulls cannot be compared (nor considered equal) for the sake of uniqueness, two rows whose key columns are all null are considered identical. In SQL Server, primary key constraints cannot have null values, whereas unique indexes allow one row with a null value. For more information, refer to the "Unique Constraints" section earlier in this chapter.
Non-unique. This is the basic form of the B-tree or nonclustered index, in which key values can be repeated.
Ascending. The key values are sorted and stored in ascending order.
Descending. The key values are sorted and stored in descending order.

The following list provides variations of the B-tree index, implemented with a modification to the basic B-tree structure, that provide a specific feature:


Cluster indexes. Such indexes can be found in Oracle on clustered tables (clusters). Clustered tables are not supported by SQL Server, and neither are cluster indexes.
Index-organized tables. The clustered indexes in SQL Server are similar to index-organized tables. Refer to the discussion under the "Tables" section earlier in this chapter.
Reverse key indexes. Oracle developed these indexes to reduce contention for index blocks by indexes on columns that have sequential values that are written almost simultaneously. This is achieved by reversing the bytes in the key value to produce non-sequential numbers. This indexing scheme is not available in SQL Server. The reverse key index is useful in Oracle Real Application Clusters (RAC) implementations, and there is no disadvantage in migrating such indexes to clustered or nonclustered indexes.
Partitioned indexes. In Oracle, B-tree indexes can be created that are local to the partition or global to the entire table (partitioned or non-partitioned). This feature is not available in SQL Server and should be replaced by clustered or nonclustered indexes.
Key compressed index. In key compression, the leading subset of a key can be compressed, in a manner similar to clustering, by storing the leading subset only once for repeating values. SQL Server does not support key compression.
Function-based index. In Oracle, expressions in the WHERE clause that contain functions do not use ordinary indexes as an access path. To overcome this disadvantage, an index can be created by applying the function to compute the value of the expression and storing it in the index. For example, if the column part_name is accessed using the expression UPPER(part_name), a function-based index can be defined on UPPER(part_name). SQL Server does not have a function-based index type, but an index on a computed column that stores the expression can provide similar functionality.
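Although SQL Server has no function-based index type, a comparable access path can often be built by indexing a computed column that stores the expression. A hedged sketch with assumed names (indexed computed columns require session settings such as QUOTED_IDENTIFIER and ARITHABORT to be ON):

```sql
-- Oracle source (for reference):
--   CREATE INDEX ix_part_upper ON part_master (UPPER(part_name));
CREATE TABLE part_master (
    part_name       varchar(50) NOT NULL,
    part_name_upper AS UPPER(part_name)   -- deterministic computed column
)

CREATE INDEX ix_part_master_name_upper ON part_master (part_name_upper)
```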

Bitmap Index
Bitmap indexes are specially designed to improve the retrieval of data based on columns with very low cardinality. Because maintenance of bitmap indexes is very expensive, their utility is restricted to data warehouses and DSS systems where there is very low or zero update activity. Bitmap indexes are not available in SQL Server, and B-tree indexes are not a functional substitute for them.

In SQL Server, and in versions of Oracle before 9i, an index is used only when the leading subset of its key columns is involved. In Oracle 9i, Oracle introduced the index skip scan, whereby the entire index is scanned for matching values in the non-leading key columns by skipping over the leading key columns. In SQL Server, additional indexes have to be created for the non-leading columns to provide access paths similar to Oracle's. This requirement is not documented inside the database and depends on the SQL statements used in applications. These situations can only be discovered by profiling the application during testing.

The various types of indexes can be created using Enterprise Manager, the Create Index Wizard, or T-SQL.
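As a sketch of the skip-scan compensation described above (the table and column names are hypothetical), an Oracle composite index needs a companion index in SQL Server to serve queries on the non-leading column:

```sql
-- Composite index; Oracle 9i can skip-scan it for predicates on STATUS alone
CREATE INDEX ix_sales_region_status
ON sales (region, status)

-- SQL Server needs a separate index to serve WHERE status = ... efficiently
CREATE INDEX ix_sales_status
ON sales (status)
```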


The steps for creating an index are very similar to those described for the UNIQUE key constraint. Creation of a non-unique, nonclustered index is demonstrated here.

To create an index using Enterprise Manager, follow these steps:
1. Expand Server, then Databases, then the target database, and then Tables.
2. Right-click the table name in the table list, click Design Table, and then click Table and Index Properties.
3. Select the Indexes/Keys tab and click New.
4. Enter the name of the new index in the Index name text box. Insert the columns of the index in the grid along with the required sort order for each column.
5. Uncheck the Create UNIQUE check box to specify a non-unique index.
6. Select the target filegroup from the Index Filegroup drop-down list.
7. Click the desired value for Fill factor (the equivalent of PCTFREE).

To create an index using the Create Index Wizard, follow these steps:
1. Expand a server group, and then expand the server in which to create the index.
2. On the Tools menu, click Wizards.
3. Expand Database.
4. Double-click Create Index Wizard.
5. Complete the steps in the wizard.

To create an index using T-SQL, use the following syntax:
Table 6.21 shows the syntax of the CREATE INDEX statement for the features and functionality discussed here.
Table 6.21: SQL Statements Available for Creating Indexes in Oracle and SQL Server
Oracle
CREATE [ UNIQUE | BITMAP ] INDEX index_name
ON table_name ( column_name [ ASC | DESC ] [, ...] )
[ physical_attributes ]
[ { COMPRESS | NOCOMPRESS } prefix_length ]
[ REVERSE ]
[ TABLESPACE tablespace_name ]
[ storage_description ]

SQL Server
CREATE [ UNIQUE ] [ CLUSTERED | NONCLUSTERED ] INDEX index_name
ON table_name ( column_name [ ASC | DESC ] [, ...] )
[ WITH index_options ]
[ ON filegroup ]
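For instance, the following sketch (hypothetical index, filegroup, and tablespace names) shows an Oracle index and a SQL Server counterpart; note that Oracle's PCTFREE 10 corresponds roughly to a SQL Server FILLFACTOR of 90:

```sql
-- Oracle
-- CREATE UNIQUE INDEX hr.emp_email_uk ON hr.employees (email)
--     PCTFREE 10 TABLESPACE indx;

-- SQL Server
CREATE UNIQUE NONCLUSTERED INDEX emp_email_uk
ON HR.EMPLOYEES (EMAIL)
WITH FILLFACTOR = 90
ON HR_INDX
```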

Views
Views are used in Oracle and SQL Server to hide query complexity and encourage query reuse. Both Oracle and SQL Server have added features, such as updateability and indexes to views, to improve their utility and performance.


Table 6.22 compares the availability of the various types of views in Oracle and SQL Server.

Table 6.22: Types of Views Available in Oracle and SQL Server

View Type          Oracle   SQL Server
Simple Views       Yes      Yes
Join Views         Yes      Yes
Partitioned Views  Yes      Yes
Read-only Views    Yes      No
Updateable Views   Yes      Yes
Inline Views       Yes      Yes
Object Views       Yes      No

The following are the various types of views available in Oracle and the support for them in SQL Server:

- Simple views. The view query is based on a single table, and migration of these views is trivial.
- Join views. The view query is based on more than one table. The join views available in SQL Server are less restrictive than their Oracle counterparts in their use of aggregate functions in the query, so migrating them from Oracle to SQL Server should not pose any problems. However, when a view is defined with an outer join and is queried with a qualification on a column from the inner table of the outer join, the results from SQL Server and Oracle can differ.
- Partitioned views. The view query provides a union of partitions of data held in different (regular, non-partitioned) tables. Use of partitioned views was common in Oracle before the introduction of partitioned tables in Oracle 8. SQL Server supports the entire range of features that are available in Oracle with respect to partitioned views.
- Read-only views. The view query is defined with the WITH READ ONLY clause to curb update activity against the base tables of the view. SQL Server does not have an equivalent for this feature. When migrating a read-only view, care has to be taken to ensure that only SELECT privileges are granted to the users.
- Updatable views. These are simple, join, or partitioned views against which DML statements can be executed, subject to certain restrictions. In SQL Server, the restrictions can be bypassed (except for the use of aggregate functions) by defining INSTEAD OF triggers on the views. The WITH CHECK OPTION clause prevents modifications to data that violate the criteria in the WHERE clause of the view query. DML statements on join views can modify data in only one base table.
- Inline views. An inline view is a subquery that is used like a view in the FROM clause of SQL statements. SQL Server fully supports inline views.
- Object views. Object views are virtual object tables that can be used to manipulate object data types as well as relational data cast as objects. Object views are not available in SQL Server and should be recreated (flattened) as regular views.
- Indexed views. Oracle does not support the creation of indexes on views. However, UNIQUE, PRIMARY KEY, and FOREIGN KEY constraints, which indirectly create indexes, can be defined on views. These constraints are a subset of those available with tables and work similarly. SQL Server allows the creation of indexes on views; however, the first index has to be a unique clustered index, after which additional nonclustered indexes can be created. Thus, SQL Server can support the migration of indexed views.
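As an illustration of the INSTEAD OF approach for updatable join views (a sketch; the view, trigger, and column lists are hypothetical and simplified), a trigger can route an insert through the view to a base table:

```sql
CREATE VIEW HR.EMP_DEPT AS
SELECT e.EMPLOYEE_ID, e.LAST_NAME, d.DEPARTMENT_NAME
FROM HR.EMPLOYEES e JOIN HR.DEPARTMENTS d
  ON e.DEPARTMENT_ID = d.DEPARTMENT_ID
GO
-- Route inserts against the view to the EMPLOYEES base table
CREATE TRIGGER HR.TR_EMP_DEPT_INS ON HR.EMP_DEPT
INSTEAD OF INSERT
AS
BEGIN
    INSERT INTO HR.EMPLOYEES (EMPLOYEE_ID, LAST_NAME)
    SELECT EMPLOYEE_ID, LAST_NAME FROM inserted
END
```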


Views can be created using Enterprise Manager, the Create View Wizard, or T-SQL.

To create a view using Enterprise Manager, follow these steps:
1. Open Enterprise Manager, expand Server, then Databases, and then the target database.
2. Right-click Views and click New View.
3. The view can be created using the diagram and grid panes (top two areas) or by entering the view query in the SQL pane (SELECT area). See Figure 6.9.

Figure 6.9
Using the View Designer of Enterprise Manager

To create a view using the Create View Wizard, follow these steps:
1. Expand a server group, and then expand the server in which to create the view.
2. On the Tools menu, click Wizards.
3. Expand Database.
4. Double-click Create View Wizard.
5. Complete the steps in the wizard.


To create a view using T-SQL, use the following syntax:
Table 6.23 offers a comparison of the CREATE VIEW statements in Oracle and SQL Server.
Table 6.23: SQL Statements for Creating Views in Oracle and SQL Server
Oracle
CREATE [ OR REPLACE ] [ FORCE | NOFORCE ] VIEW [schema.]view_name
[ ( column_alias [ inline_constraint ] [, ...] [, out_of_line_constraint ] ) ]
AS select_statement
[ WITH READ ONLY ]
[ [ WITH CHECK OPTION ] [ CONSTRAINT constraint_name ] ]

SQL Server
CREATE VIEW [owner.]view_name [ ( column_name [, ...] ) ]
AS select_statement
[ WITH CHECK OPTION ]

In Oracle, views can be created without permissions on the base objects, or even without the base objects existing, by using the FORCE keyword. This is not allowed in SQL Server. SQL Server also does not allow an ORDER BY clause in the view definition (unless TOP is also specified).
Note When a view is created in Oracle using an asterisk (*) in the SELECT clause, the asterisk is expanded to the actual column names in the definition of the view; when an asterisk is not used, Oracle stores the definition as written. SQL Server always retains the exact definition used in the DDL. While converting from Oracle to SQL Server, the view definition may be changed back to an asterisk instead of specifying every column.
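A minimal migration sketch (hypothetical view and constraint names) of a view that uses WITH CHECK OPTION, which behaves equivalently in both products, except that SQL Server cannot name the check constraint:

```sql
-- Oracle
-- CREATE OR REPLACE VIEW hr.emp_dept10 AS
--     SELECT employee_id, last_name, department_id FROM hr.employees
--     WHERE department_id = 10
--     WITH CHECK OPTION CONSTRAINT emp_dept10_ck;

-- SQL Server
CREATE VIEW HR.EMP_DEPT10 AS
SELECT EMPLOYEE_ID, LAST_NAME, DEPARTMENT_ID
FROM HR.EMPLOYEES
WHERE DEPARTMENT_ID = 10
WITH CHECK OPTION
```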

Stored Programs
Oracle and SQL Server both have the capability to store complex business logic (beyond constraints) inside the database. The code to support such requirements cannot be provided by SQL alone, because SQL cannot execute commands based on logical conditions and does not support looping operations. In spite of recent extensions to SQL, such as the incorporation of a CASE expression, it is still difficult to perform more than one operation based on a logical condition. Because the need to implement control structures and programming constructs cannot be met with SQL alone, Oracle and SQL Server offer procedural extensions to SQL, namely PL/SQL and Transact-SQL (T-SQL), respectively, that offer a more complete environment for defining stored programs (or subprograms).

Stored programs provide modularity (top-down design), encapsulation (logic is hidden), abstraction (black box approach), security (execute permission is granted only on the subprogram), and extensibility (user-defined functionality). Additionally, existence inside the database promotes accessibility (available to all database users), reuse, speed of execution (stored programs are kept in compiled form), performance (handling of large amounts of data close to its source), and reduced resource requirements (network bandwidth, memory, and so on, because only the results are transported).

In Oracle, Java can be used to write native stored procedures and user-defined functions. A PL/SQL procedure can also call external procedures or functions written in the C programming language and stored in a shared library.

In Oracle, there are four types of stored programs: functions, procedures, packages, and triggers. These objects are very similar in construction, functionality, and usage to those


used in SQL Server. Hence the migration of their structures is trivial. However, because of the vast difference in syntax between the languages (PL/SQL and T-SQL) used in these objects, the migration of the embedded code is far from trivial. Triggers have already been discussed in this chapter. Functions, procedures, and packages are covered here. Oracle ensures the purity of the actions that are performed inside stored programs. Hence there are no hidden side effects when migrating to SQL Server, and you can concentrate on reproducing the logic using T-SQL.

Here are a few points that concern all types of stored programs:

- Overloading. SQL Server does not support overloading, and overloaded functions and procedures will have to be recreated using unique names.
- Execution privileges. Oracle stored programs can be defined to execute with definer rights (owner) or invoker rights (user). SQL Server only uses invoker rights. Oracle stored programs that are built on the definer-rights architecture can be converted to SQL Server invoker-rights subprograms by properly qualifying objects inside the stored programs. With invoker rights, the schema name could also be passed in and the SQL constructed dynamically.
- Parameters. Both Oracle and SQL Server support positional, named, or mixed notation for parameters. SQL Server can handle all the types of parameters that are found in Oracle, including tables. Oracle has three parameter modes: IN, OUT, and IN OUT. SQL Server does not have an IN OUT parameter mode, which warrants some rework in the application design when migrating: IN OUT parameters have to be replaced with separate IN and OUT parameters. The keyword OUTPUT is used in place of OUT, while the keyword IN does not exist and is implied because it is the default mode. Also, even though it is not a good practice, there is no restriction on the use of OUT and IN OUT parameters in Oracle functions. Violations of this rule should be trapped and corrected during migration. In both Oracle and SQL Server, IN parameters can be given default values; if such a parameter is skipped during a call, the default value is applied. In SQL Server, this rule holds for stored procedures but not for functions: with functions, the keyword DEFAULT has to be used in the call, and the parameter cannot simply be skipped as with stored procedures. SQL Server parameters are prefixed with the at sign (@).
- Using the same name as a system procedure. If the first three characters of a procedure name are sp_, SQL Server searches the master database for the procedure. If no qualified procedure name is provided, SQL Server searches for the procedure as if the owner name were dbo. To resolve a stored procedure name as a user-defined stored procedure that has the same name as a system stored procedure, provide the fully qualified procedure name.
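To illustrate the parameter-mode mapping (a sketch; the procedure and parameter names are hypothetical), a T-SQL OUTPUT parameter takes the place of an Oracle OUT parameter, and the caller must repeat the OUTPUT keyword:

```sql
-- Oracle
-- CREATE OR REPLACE PROCEDURE hr.next_review
--     (p_emp_id IN NUMBER, p_review_date OUT DATE) IS ...

-- SQL Server
CREATE PROCEDURE HR.NEXT_REVIEW
    @emp_id INT,
    @review_date DATETIME OUTPUT
AS
SET @review_date = DATEADD(month, 6, GETDATE())
GO
-- The caller must also specify OUTPUT
DECLARE @d DATETIME
EXEC HR.NEXT_REVIEW @emp_id = 100, @review_date = @d OUTPUT
```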

Only the migration of the object structure is provided in this chapter. Refer to Chapter 11, "Developing: Applications Oracle SQL and PL/SQL," for a detailed discussion about migrating the embedded PL/SQL code to T-SQL.

Functions
While both systems provide several built-in functions, Oracle and SQL Server also provide the capability to define custom functions, called user-defined functions. These user-defined extensions to SQL always return a result. They are employed primarily in SQL statements. Functions can be created using Enterprise Manager or T-SQL.


To create a function using Enterprise Manager, follow these steps:
1. Expand a server group, and then expand a server.
2. Expand Databases, and then expand the database in which to create the function.
3. Right-click User Defined Functions and select New User Defined Function.
4. In the Text box, enter the text of the function. Use TAB to indent the text of the function.
5. To check the syntax, click Check Syntax, and then click OK to create the function.

To create a function using T-SQL, use the following syntax:
Functions can be created in SQL Server using the CREATE FUNCTION statement, which is very similar to the one found in Oracle, as shown in Table 6.24.
Table 6.24: SQL Statements for Creating Functions in Oracle and SQL Server
Oracle
CREATE OR REPLACE FUNCTION [schema.]function_name
[ ( {parameter [IN] datatype [= default]} [, ...] ) ]
RETURN datatype
{IS | AS}
    variable_declaration
BEGIN
    statements
    RETURN scalar_expression
EXCEPTION
    statements
END;

SQL Server
CREATE FUNCTION [owner.]function_name
[ ( {@parameter [AS] datatype [ = default ]} [, ...] ) ]
RETURNS datatype
[ AS ]
BEGIN
    variable_declaration
    statements
    RETURN scalar_expression
END
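A minimal sketch (hypothetical function) showing a scalar function definition and the DEFAULT keyword that calls must supply when a defaulted parameter is not passed:

```sql
CREATE FUNCTION HR.FN_FULL_NAME
    (@first VARCHAR(20), @last VARCHAR(25), @sep VARCHAR(5) = ', ')
RETURNS VARCHAR(50)
AS
BEGIN
    RETURN @last + @sep + @first
END
GO
-- Unlike stored procedures, a defaulted function parameter cannot be
-- skipped; the DEFAULT keyword must be supplied in its place
SELECT HR.FN_FULL_NAME('Steven', 'King', DEFAULT)
```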

Stored Procedures
SQL Server has stored procedures that closely resemble Oracle's. These stored procedures provide the capability to pass multiple parameters back to the calling environment. Another advantage of stored procedures is that they can perform actions in the database without being tied to a SQL statement. Stored procedures can be created using the Create Stored Procedure Wizard, Enterprise Manager, or T-SQL.

To create a stored procedure using the Create Stored Procedure Wizard, follow these steps:
1. Expand a server group, and then expand the server in which to create the stored procedure.
2. On the Tools menu, click Wizards.
3. Expand Database.
4. Double-click Create Stored Procedure Wizard.
5. Complete the steps in the wizard.


To create a stored procedure using Enterprise Manager, follow these steps:
1. Expand a server group, and then expand a server.
2. Expand Databases, and then expand the database in which to create the stored procedure.
3. Right-click Stored Procedures and select New Stored Procedure.
4. In the Text box, enter the text of the stored procedure. Use TAB to indent the text within.
5. To check the syntax, click Check Syntax, and then click OK to create the stored procedure.

To create a stored procedure using T-SQL, use the following syntax:
Both Oracle and SQL Server offer the CREATE PROCEDURE statement, which is very similar in structure and is shown in Table 6.25.
Table 6.25: SQL Statements for Creating Procedures in Oracle and SQL Server
Oracle
CREATE OR REPLACE PROCEDURE [schema.]procedure_name
[ ( {parameter [IN|OUT|IN OUT] datatype [= default]} [, ...] ) ]
{IS | AS}
    variable_declaration
BEGIN
    statements
EXCEPTION
    statements
END;

SQL Server
CREATE PROCEDURE [owner.]procedure_name
[ ( {@parameter [AS] datatype [ = default ] [OUTPUT]} [, ...] ) ]
AS
BEGIN
    variable_declaration
    statements
END

Packages
One of the main differentiators between stored procedures and packages is the capability to declare variables in the package header that act as global variables. SQL Server does not have packages, and they have to be replaced with stored procedures. However, SQL Server allows the nesting of procedures, and this capability can be used to simulate the structure of packages by nesting functions and procedures. Variables can be made available to all the nested functions and procedures by declaring them in the outermost (or main) procedure. This, however, does not provide database-wide accessibility to these variables, a shortcoming that has to be accommodated by migrating the global variables from the package to the application code.
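One possible sketch of this simulation (the package, procedure, and variable names are all hypothetical): the package's procedures become stand-alone procedures sharing a naming prefix, and the former package global is owned by the outermost procedure and passed down explicitly:

```sql
-- Oracle package PAYROLL had a global g_tax_rate used by its procedures.
-- In SQL Server, the outermost procedure owns the value and passes it down.
CREATE PROCEDURE HR.PAYROLL_CALC_TAX
    @salary NUMERIC(10,2), @tax_rate NUMERIC(4,2), @tax NUMERIC(10,2) OUTPUT
AS
SET @tax = @salary * @tax_rate / 100
GO
CREATE PROCEDURE HR.PAYROLL_MAIN
AS
BEGIN
    DECLARE @tax_rate NUMERIC(4,2), @tax NUMERIC(10,2)
    SET @tax_rate = 7.50   -- former package global
    EXEC HR.PAYROLL_CALC_TAX 1000.00, @tax_rate, @tax OUTPUT
END
```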

Solutions for Objects not Found in SQL Server


Some Oracle objects, such as synonyms and sequences, are widely used in Oracle applications. However, there is no equivalent object with similar functionality in SQL Server. Finding solutions to implement the logic and functionality around these objects in SQL Server is critical to the migration.

Synonyms
In Oracle, synonyms are aliases for tables, views, sequences, functions, procedures, and packages. The primary purpose of using synonyms is for location transparency and naming transparency. By creating public synonyms or private synonyms (for each user),


applications can be written without schema qualification in the SQL code because the qualifier is defined in the synonym. The original object names can also be hidden. Synonyms for tables are most easily migrated by substituting them with views, which provide the required location and naming transparency as well. For example, a private synonym, CLINTB.DEPT, which connects to the table HR.DEPARTMENT, can be replaced in SQL Server by a view as follows:
CREATE VIEW CLINTB.DEPT AS SELECT * FROM HR.DEPARTMENT

One of the known drawbacks when creating SELECT * views in Oracle is that any changes that are made to the base table definition in terms of adding, dropping, and renaming columns are not automatically reflected in the view definition. This is because Oracle converts the query piece:
SELECT * FROM table_name

into
SELECT column_name [, ] FROM table_name

Consequently, when a column is added to the base table, the view definition does not change, and dropping or renaming a column invalidates the view. SQL Server does not have this problem because the view DDL is stored as it was written, that is, SELECT * FROM table_name.

It is not possible to find an object-based solution for synonyms that connect to other types of objects, such as procedures and sequences. The simplest solution is to qualify the object schema or owner in the application rather than resolve it at the database login level.

SQL Server's inherent object-name resolution can also provide a solution. Synonyms are usually employed in situations where multiple versions of the same application are used to support different clients, with each client having its own schema. The schema that is used is driven by the user login and not by the application itself. This can be achieved in SQL Server by setting the default database for a user account and giving the login the appropriate privileges at the object level. This scheme works as long as all the objects that will be accessed are in a single database and it is set as the default database for the user. For more information about these issues, refer to Chapter 7, "Developing: Databases Migrating the Database Users."
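For example, the default database for an existing login can be set with sp_defaultdb (the login and database names here are hypothetical):

```sql
-- Point the CLINTB login at its own database so unqualified
-- object names resolve there first
EXEC sp_defaultdb @loginame = 'CLINTB', @defdb = 'HRDATA'
```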

Sequences
In Oracle, sequences are autonomous schema objects that can be used to generate numbers in sequential order. In applications, sequences are commonly used to produce artificial keys for tables that do not have a natural key or whose natural key is too long. SQL Server does not have an equivalent for sequences. However, a SQL Server table can have columns that are defined with the IDENTITY property, which simulates the behavior of Oracle sequences. The IDENTITY column is an Oracle sequence and table column rolled into one; that is, the column definition includes the sequence definition. The syntax of the SQL Server IDENTITY property is:
IDENTITY [(seed, increment)]

- The Oracle START WITH value is replaced by the SQL Server seed.
- The Oracle INCREMENT BY value is equivalent to the SQL Server increment.
- The Oracle MAXVALUE is provided by the choice of the SQL Server column data type.


Table 6.26 shows the implementation of the IDENTITY property feature in SQL Server and what would be the equivalent in Oracle. Table 6.26: Implementation of Oracle Sequence in SQL Server Oracle
CREATE SEQUENCE sequence_name START WITH 1 INCREMENT BY 1;
CREATE TABLE table_name
    (column1 NUMBER DEFAULT sequence_name.NEXTVAL,
     column2 ...);
INSERT INTO table_name (column2, ...) VALUES (...);

CREATE TABLE table_name
    (column1 NUMBER,
     column2 ...);
INSERT INTO table_name (column1, column2, ...)
VALUES (sequence_name.NEXTVAL, ...);

SQL Server
CREATE TABLE table_name
    (column1 NUMERIC IDENTITY(1,1),
     column2 ...);
INSERT INTO table_name (column2, ...) VALUES (...);

CREATE TABLE table_name
    (column1 NUMERIC IDENTITY(1,1),
     column2 ...);
INSERT INTO table_name (column2, ...) VALUES (...);

One of the advantages of sequences being independent objects is that they can be used to supply a unique identifier across multiple tables. This property of Oracle sequences is not inherent in SQL Server's IDENTITY property. If an identifier is required to be unique across tables (or even databases across the globe), use the uniqueidentifier data type, which can be populated using the NEWID() function. The IDENTITY column can only handle unique values, and hence it cannot support the cycle feature of Oracle sequences. A maximum value also cannot be set directly, but one can be enforced by choosing an appropriately sized data type for the IDENTITY column.
Note There is not sufficient information inside the Oracle database to determine how and where a sequence is used. Hence the application's documentation, code, or developers will have to be consulted to learn which sequence is used to populate a column. This information is then used to define an IDENTITY column in that table.
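A sketch of the cross-table identifier approach mentioned above (the table is hypothetical):

```sql
-- NEWID() generates a GUID, unique across tables and databases
CREATE TABLE HR.AUDIT_EVENTS
    (EVENT_ID UNIQUEIDENTIFIER NOT NULL DEFAULT NEWID(),
     EVENT_DESC VARCHAR(200))
```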

Database Links
Oracle database links are used to form a bridge between databases (instances). Remote and distributed transactions can be executed against the remote database using the database link. SQL Server provides similar functionality under the linked server.


Table 6.27 compares some of the characteristics of database links in Oracle and SQL Server.

Table 6.27: Comparison of Database Links in Oracle and SQL Server

Attribute       Oracle              SQL Server
Link Types      Public and Private  Public
Database Types  Homogeneous         Homogeneous and Heterogeneous

Syntax in Oracle:

CREATE [ PUBLIC ] DATABASE LINK dblink
[ CONNECT TO { CURRENT_USER | user IDENTIFIED BY password } ]
USING 'connect_string'

Syntax in SQL Server:

sp_addlinkedserver [ @server = ] 'server'
    [ , [ @srvproduct = ] 'product_name' ]
    [ , [ @provider = ] 'provider_name' ]
    [ , [ @datasrc = ] 'data_source' ]
    [ , [ @location = ] 'location' ]
    [ , [ @provstr = ] 'provider_string' ]
    [ , [ @catalog = ] 'catalog' ]

Note The provider_name for an OLE DB-based connection to another SQL Server is 'SQLOLEDB'.

The following example shows the use of sp_addlinkedserver to create a linked server that connects to another database for executing distributed queries:

EXEC sp_addlinkedserver
    @server = 'CORP2',
    @srvproduct = '',
    @provider = 'SQLOLEDB',
    @datasrc = 'SHUTTLE\CORP2'
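Once the linked server exists, remote objects are referenced with a four-part name; for example (the remote database and table are hypothetical):

```sql
-- linked_server.database.owner.object
SELECT COUNT(*) FROM CORP2.HRDATA.HR.EMPLOYEES
```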

Sample Schema Object Migration


The following example illustrates the schema owner and schema objects migration task of the database migration. The HR schema is used in this example.

1. Create the SQL Server login account (database authenticated). The following commands create a database-authenticated login named HR.

USE MASTER
EXEC sp_addlogin @loginame='HR', @passwd='HR', @defdb='HRDATA'

2. Create the user account in the database. The following commands create a user account for the HR login in the HRDATA database, in which objects will be created.

USE HRDATA
EXEC sp_grantdbaccess @loginame='HR',


@name_in_db='HR'

3. Grant privileges to create objects in the database. Privileges can be granted to the schema owner to create objects in a database using either specific CREATE permissions or the db_owner fixed database role.
a. Grant complete authority on the database. The following command can be used to grant the db_owner role:
EXEC sp_addrolemember @rolename='db_owner', @membername='HR'

b. Grant privileges to create objects only. The following commands can be executed to only grant permission to create objects:
GRANT CREATE FUNCTION TO HR
GRANT CREATE PROCEDURE TO HR
GRANT CREATE TABLE TO HR
GRANT CREATE VIEW TO HR

4. Extract sequence definitions. While sequences cannot be migrated to SQL Server as objects, information about the sequences in the Oracle database should still be extracted. Because sequences are migrated to IDENTITY columns in tables, the information about the sequences is required to appropriately define the tables (in the next step). The following command extracts the required information on sequences.

SELECT sequence_name, min_value, max_value, increment_by
FROM dba_sequences
WHERE sequence_owner = 'HR'

The following output is obtained:


SEQUENCE_NAME    MIN_VALUE  MAX_VALUE                    INCREMENT_BY
DEPARTMENTS_SEQ  1          9990                         10
EMPLOYEES_SEQ    1          999999999999999999999999999  1
LOCATIONS_SEQ    1          9900                         100

The three sequences in the HR schema are assumed to be used to populate the columns DEPARTMENTS.DEPARTMENT_ID, EMPLOYEES.EMPLOYEE_ID, and LOCATIONS.LOCATION_ID, respectively.

5. Create tables. There are two options for migrating tables. Option one is to create the tables in a specific order based on the foreign key constraints (parent tables first). Option two is to create the tables in any order, leaving out the foreign key constraints, and then to add the constraints after all the tables have been created.
a. Obtain a list of tables. If option one is followed, the dependencies between the tables, and hence the right order in which to create them, are required. There are several scripts available (mostly PL/SQL-based) to produce such a list. Tables without any dependencies form the first level, and other levels are formed based on the nesting of the dependencies.
Note None of the available scripts will work when there are interdependencies between tables that form a closed loop. The following query produces the dependency list; its output shows such circular dependencies, which make these scripts not viable.


SELECT table_name parent_table_name, NULL child_table_name
FROM dba_tables ut
WHERE owner = 'HR'
AND NOT EXISTS
    (SELECT 'x' FROM user_constraints uc
     WHERE constraint_type = 'R'
     AND ut.table_name = uc.table_name)
UNION ALL
SELECT p.table_name parent_table_name, c.table_name child_table_name
FROM dba_constraints p, dba_constraints c
WHERE p.owner = c.owner
AND p.owner = 'HR'
AND p.constraint_type IN ('P','U')
AND c.constraint_type = 'R'
AND p.constraint_name = c.r_constraint_name

Note In the following output there is a parent-child relationship between EMPLOYEES and DEPARTMENTS in both directions that creates the loop.
PARENT_TABLE_NAME  CHILD_TABLE_NAME
JOBS
REGIONS
REGIONS            COUNTRIES
COUNTRIES          LOCATIONS
LOCATIONS          DEPARTMENTS
JOBS               EMPLOYEES
DEPARTMENTS        EMPLOYEES
EMPLOYEES          DEPARTMENTS
EMPLOYEES          EMPLOYEES
JOBS               JOB_HISTORY
EMPLOYEES          JOB_HISTORY
DEPARTMENTS        JOB_HISTORY

To implement option two, first obtain a list of all tables that need to be migrated using the following command:
SELECT table_name, tablespace_name, iot_type
FROM dba_tables
WHERE owner = 'HR'

The output obtained is:

TABLE_NAME   TABLESPACE_NAME  IOT_TYPE
COUNTRIES                     IOT
DEPARTMENTS  EXAMPLE
EMPLOYEES    EXAMPLE
JOBS         EXAMPLE
JOB_HISTORY  EXAMPLE
LOCATIONS    EXAMPLE
REGIONS      EXAMPLE


b. Extract DDL along with constraints. There are several scripts and tools that can be used to extract the DDL for the table object. For an Oracle 9i database, the Oracle-supplied package DBMS_METADATA can be used. First, execute the following in SQL*Plus to define the output style.
SET PAGESIZE 0
SET LONG 10000
EXECUTE DBMS_METADATA.SET_TRANSFORM_PARAM(DBMS_METADATA.SESSION_TRANSFORM, 'STORAGE', FALSE);
EXECUTE DBMS_METADATA.SET_TRANSFORM_PARAM(DBMS_METADATA.SESSION_TRANSFORM, 'TABLESPACE', TRUE);
EXECUTE DBMS_METADATA.SET_TRANSFORM_PARAM(DBMS_METADATA.SESSION_TRANSFORM, 'SQLTERMINATOR', TRUE);
EXECUTE DBMS_METADATA.SET_TRANSFORM_PARAM(DBMS_METADATA.SESSION_TRANSFORM, 'CONSTRAINTS', TRUE);
EXECUTE DBMS_METADATA.SET_TRANSFORM_PARAM(DBMS_METADATA.SESSION_TRANSFORM, 'REF_CONSTRAINTS', TRUE);
EXECUTE DBMS_METADATA.SET_TRANSFORM_PARAM(DBMS_METADATA.SESSION_TRANSFORM, 'CONSTRAINTS_AS_ALTER', TRUE);

Then execute DBMS_METADATA.get_ddl for each of the tables. The following two examples show how to extract the DDL for entire tables using this procedure:
SELECT dbms_metadata.get_ddl('TABLE', 'COUNTRIES', 'HR') FROM DUAL;

The output is as follows:

CREATE TABLE "HR"."COUNTRIES"
   ("COUNTRY_ID" CHAR(2) CONSTRAINT "COUNTRY_ID_NN" NOT NULL ENABLE,
    "COUNTRY_NAME" VARCHAR2(40),
    "REGION_ID" NUMBER,
    CONSTRAINT "COUNTRY_C_ID_PK" PRIMARY KEY ("COUNTRY_ID") ENABLE
   ) ORGANIZATION INDEX NOCOMPRESS PCTFREE 10 INITRANS 2 MAXTRANS 255
   LOGGING TABLESPACE "EXAMPLE" PCTTHRESHOLD 50;

ALTER TABLE "HR"."COUNTRIES" ADD CONSTRAINT "COUNTR_REG_FK"
   FOREIGN KEY ("REGION_ID") REFERENCES "HR"."REGIONS" ("REGION_ID") ENABLE;

Execute the following command to obtain the definition of the LOCATIONS table.

SELECT dbms_metadata.get_ddl('TABLE', 'LOCATIONS', 'HR') FROM DUAL;

The output returned is:


CREATE TABLE "HR"."LOCATIONS"
   ("LOCATION_ID" NUMBER(4,0),
    "STREET_ADDRESS" VARCHAR2(40),
    "POSTAL_CODE" VARCHAR2(12),
    "CITY" VARCHAR2(30) CONSTRAINT "LOC_CITY_NN" NOT NULL ENABLE,
    "STATE_PROVINCE" VARCHAR2(25),
    "COUNTRY_ID" CHAR(2)
   ) PCTFREE 10 PCTUSED 40 INITRANS 1 MAXTRANS 255 NOCOMPRESS LOGGING
   TABLESPACE "EXAMPLE";


CREATE UNIQUE INDEX "HR"."LOC_ID_PK" ON "HR"."LOCATIONS" ("LOCATION_ID")
   PCTFREE 10 INITRANS 2 MAXTRANS 255 NOLOGGING TABLESPACE "INDX";

ALTER TABLE "HR"."LOCATIONS" ADD CONSTRAINT "LOC_ID_PK"
   PRIMARY KEY ("LOCATION_ID")
   USING INDEX PCTFREE 10 INITRANS 2 MAXTRANS 255 NOLOGGING
   TABLESPACE "INDX" ENABLE;

ALTER TABLE "HR"."LOCATIONS" ADD CONSTRAINT "LOC_C_ID_FK"
   FOREIGN KEY ("COUNTRY_ID") REFERENCES "HR"."COUNTRIES" ("COUNTRY_ID") ENABLE;

c. Convert the extracted Oracle DDL to SQL Server syntax. In this step, the table DDL is converted along with constraint and index definitions. Foreign key constraints are handled separately in the next step. Create clustered indexes before creating any non-clustered indexes. If clustered indexes are planned for a table, defer creation of nonclustered indexes (as part of PRIMARY KEY or UNIQUE constraint definition) during the table creation in this step. The first table in the example, COUNTRIES, is an index-organized table. For SQL Server, it is converted into a heap with a clustered index on the primary key, COUNTRY_ID. Note In Oracle, there is a single datatype of the number variety which is constrained with appropriate scale. While converting to SQL Server, the data type which is closest in scale needs to be used, as has been demonstrated in the example. The following commands are used to create the COUNTRIES and LOCATIONS tables respectively.
CREATE TABLE HR.COUNTRIES
   (COUNTRY_ID CHAR(2) CONSTRAINT NN_COUNTRY_ID NOT NULL,
    COUNTRY_NAME VARCHAR(40),
    REGION_ID NUMERIC,
    CONSTRAINT PK_COUNTRY_C_ID PRIMARY KEY CLUSTERED (COUNTRY_ID)
       ON HR_INDX)

CREATE TABLE HR.LOCATIONS
   (LOCATION_ID SMALLINT NOT NULL,
    STREET_ADDRESS VARCHAR(40),
    POSTAL_CODE VARCHAR(12),
    CITY VARCHAR(30) CONSTRAINT NN_LOC_CITY NOT NULL,
    STATE_PROVINCE VARCHAR(25),
    COUNTRY_ID CHAR(2))

ALTER TABLE HR.LOCATIONS ADD CONSTRAINT PK_LOC_ID
   PRIMARY KEY NONCLUSTERED (LOCATION_ID) ON HR_INDX

d. Create the foreign key constraints. After all the tables have been created, the foreign key constraints can be created in any order. The following commands create foreign keys on the COUNTRIES and LOCATIONS tables.
ALTER TABLE HR.COUNTRIES ADD CONSTRAINT FK_COUNTR_REG
   FOREIGN KEY (REGION_ID) REFERENCES HR.REGIONS (REGION_ID);

ALTER TABLE HR.LOCATIONS ADD CONSTRAINT FK_LOC_C_ID
   FOREIGN KEY (COUNTRY_ID) REFERENCES HR.COUNTRIES (COUNTRY_ID);


e. Extract and migrate the table comments. The following command can be used to extract comments in tables in the HR schema.
SELECT table_name, comments FROM dba_tab_comments WHERE owner = 'HR' AND comments IS NOT NULL

The output returned contains the comments for the tables in the HR schema, such as COUNTRIES, DEPARTMENTS, EMPLOYEES, JOBS, JOB_HISTORY and LOCATIONS. The output can then be copied into the statements for adding comments to tables in SQL Server. The following is an example of the use of the sp_addextendedproperty procedure to add a comment on the COUNTRIES table.
sp_addextendedproperty 'comment', 'country table. Contains 25 rows. References with locations table.', 'user', HR, 'table', countries

f. Extract and migrate the column comments:


SELECT table_name, column_name, comments FROM dba_col_comments WHERE owner = 'HR' AND comments IS NOT NULL

The output contains the column level comments. The following is an example of the use of the sp_addextendedproperty procedure to add a comment on the REGION_ID column of the COUNTRIES table.
sp_addextendedproperty 'comment', 'Region ID for the country. Foreign key to region_id column in the departments table.' , 'user', hr, 'table', countries, 'column', region_id

6. Create triggers. This step demonstrates how to migrate the table triggers from Oracle to SQL Server. The logic (code) that is embedded in the triggers is not discussed here. For more information, refer to the "Migrating the Data Access" section in Chapter 11, "Developing: Databases Migrating Oracle SQL and PL/SQL."
SELECT table_name, 'CREATE TRIGGER ' || description header
FROM dba_triggers
WHERE owner = table_owner
AND table_owner = 'HR'
AND base_object_type = 'TABLE'

The output is:

TABLE_NAME   HEADER
EMPLOYEES    CREATE TRIGGER secure_employees
             BEFORE INSERT OR UPDATE OR DELETE ON employees
EMPLOYEES    CREATE TRIGGER update_job_history
             AFTER UPDATE OF job_id, department_id ON employees
             FOR EACH ROW

The equivalent syntax for SQL Server can be generated with the aid of Table 6.24. The commands for creating the secure_employees and update_job_history triggers will convert as:
CREATE TRIGGER secure_employees ON employees
INSTEAD OF INSERT, UPDATE, DELETE
AS
GO

CREATE TRIGGER update_job_history ON employees
AFTER UPDATE
AS
IF (UPDATE(job_id) OR UPDATE(department_id))
GO

7. Create indexes. All indexes related to constraints should have been created in Substep 5.b. This step covers the indexes that are not related to any constraint. The following query can be used to retrieve information on such indexes in Oracle. The query works only in Oracle 9i and later because the data dictionary in earlier releases does not have the columns used in the query.
SELECT i.table_name, i.index_name, c.column_name, i.uniqueness,
       i.tablespace_name, i.pct_free
FROM dba_indexes i, dba_ind_columns c
WHERE i.table_name = c.table_name
AND i.index_name = c.index_name
AND i.owner = c.index_owner
AND i.owner = 'HR'
AND (i.owner, i.index_name) NOT IN
    (SELECT index_owner, index_name
     FROM dba_constraints
     WHERE owner = 'HR'
     AND index_name IS NOT NULL)
ORDER BY i.table_name, i.index_name, c.column_position

The output obtained has been formatted as follows:


TABLE_NAME   INDEX_NAME             COLUMN_NAME     UNIQUENESS  TABLESPACE_NAME  PCT_FREE
DEPARTMENTS  DEPT_LOCATION_IX       LOCATION_ID     NONUNIQUE   INDX             10
EMPLOYEES    EMP_DEPARTMENT_IX      DEPARTMENT_ID   NONUNIQUE   INDX             10
EMPLOYEES    EMP_JOB_IX             JOB_ID          NONUNIQUE   INDX             10
EMPLOYEES    EMP_MANAGER_IX         MANAGER_ID      NONUNIQUE   INDX             10
EMPLOYEES    EMP_NAME_IX            LAST_NAME       NONUNIQUE   INDX             10
EMPLOYEES    EMP_NAME_IX            FIRST_NAME      NONUNIQUE   INDX             10
JOB_HISTORY  JHIST_DEPARTMENT_IX    DEPARTMENT_ID   NONUNIQUE   INDX             10
JOB_HISTORY  JHIST_EMPLOYEE_IX      EMPLOYEE_ID     NONUNIQUE   INDX             10
JOB_HISTORY  JHIST_JOB_IX           JOB_ID          NONUNIQUE   INDX             10
LOCATIONS    LOC_CITY_IX            CITY            NONUNIQUE   INDX             10
LOCATIONS    LOC_COUNTRY_IX         COUNTRY_ID      NONUNIQUE   INDX             10
LOCATIONS    LOC_STATE_PROVINCE_IX  STATE_PROVINCE  NONUNIQUE   INDX             10

This information can be used to create indexes in SQL Server as follows:


CREATE NONCLUSTERED INDEX dept_location_ix
   ON hr.departments (location_id)
   WITH FILLFACTOR = 90
   ON HR_INDX

CREATE NONCLUSTERED INDEX emp_name_ix
   ON hr.employees (last_name, first_name)
   WITH FILLFACTOR = 90
   ON HR_INDX

Because indexes have been created over several steps, you should verify that an equivalent index has been created in SQL Server for each index in Oracle. In Oracle, generate a listing of all indexes using the following command:
SELECT i.table_name, i.index_name, c.column_name, i.uniqueness,
       i.tablespace_name, i.pct_free
FROM dba_indexes i, dba_ind_columns c
WHERE i.table_name = c.table_name
AND i.index_name = c.index_name
AND i.owner = c.index_owner
AND i.owner = 'HR'
ORDER BY i.table_name, i.index_name, c.column_position

In SQL Server, execute the following command on each table with the table name substituted for name to retrieve information on the indexes available for the table:
sp_helpindex [ @objname = ] 'name'
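As a usage sketch against the migrated schema (assuming the HRDATA database and hr.employees table from the earlier steps):

```sql
-- List the indexes on the migrated EMPLOYEES table
USE HRDATA
EXEC sp_helpindex @objname = 'hr.employees'
```

The output columns (index_name, index_description, index_keys) can then be compared with the Oracle index listing.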

8. Create views. Extract the views and their definition from Oracle using the following command:
SELECT view_name, text FROM dba_views WHERE owner = 'HR'

The output provides the view query.

The query for DEPT is as follows:


select "DEPARTMENT_ID", "DEPARTMENT_NAME", "MANAGER_ID", "LOCATION_ID",
"DEPT_LOC" from hr.departments

The query for EMP_DETAILS_VIEW is as follows:


SELECT e.employee_id, e.job_id, e.manager_id, e.department_id,
       d.location_id, l.country_id, e.first_name, e.last_name,
       e.salary, e.commission_pct, d.department_name, j.job_title,
       l.city, l.state_province, c.country_name, r.region_name
FROM employees e, departments d, jobs j, locations l, countries c, regions r
WHERE e.department_id = d.department_id
AND d.location_id = l.location_id
AND l.country_id = c.country_id
AND c.region_id = r.region_id
AND j.job_id = e.job_id
WITH READ ONLY

For the view DEPT, the view text shows capitalized names enclosed in double quotes for the column names:
select "DEPARTMENT_ID", "DEPARTMENT_NAME", "MANAGER_ID", "LOCATION_ID",
"DEPT_LOC" from hr.departments

This is an indication that the column names were filled in by the system, and the original DDL for the CREATE VIEW dept is as follows:
select * from hr.departments

As pointed out in the discussion on using views instead of synonyms for tables, SQL Server retains the exact definition used in the DDL. Oracle does the same thing when an asterisk (*) is not used. This behavior is shown in the EMP_DETAILS_VIEW view text. Thus, while converting the syntax, the DDL may be changed back to an asterisk instead of specifying every column.
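To illustrate, the DEPT view can be re-created in SQL Server in its original asterisk form (a sketch; SQL Server stores the view definition text verbatim):

```sql
CREATE VIEW hr.DEPT
AS
SELECT * FROM hr.departments
```

Note that SQL Server fixes the column list of a SELECT * view when the view is created; if the base table changes later, sp_refreshview can be used to update the view's metadata.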



Note The DEPT view was added to the HR schema to demonstrate this point. The EMP_DETAILS_VIEW can be created under the HR schema using the following command:
CREATE VIEW hr.EMP_DETAILS_VIEW
AS
SELECT e.employee_id, e.job_id, e.manager_id, e.department_id,
       d.location_id, l.country_id, e.first_name, e.last_name,
       e.salary, e.commission_pct, d.department_name, j.job_title,
       l.city, l.state_province, c.country_name, r.region_name
FROM hr.employees e
   INNER JOIN hr.departments d ON e.department_id = d.department_id
   INNER JOIN hr.locations l ON d.location_id = l.location_id
   INNER JOIN hr.countries c ON l.country_id = c.country_id
   INNER JOIN hr.regions r ON c.region_id = r.region_id
   INNER JOIN hr.jobs j ON j.job_id = e.job_id;
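Because T-SQL has no WITH READ ONLY clause on views, read-only behavior can be approximated through permissions (OE is an illustrative user name):

```sql
-- Grant only SELECT on the view; without INSERT, UPDATE, or DELETE
-- permissions, the view is effectively read-only for this user.
GRANT SELECT ON hr.EMP_DETAILS_VIEW TO OE
```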

The WITH READ ONLY clause will have to be enforced by ensuring only SELECT privilege is given to users.

9. Create functions, stored procedures, and packages. The following command can be used to get a listing of all stored programs (functions, procedures, and packages) defined by the schema owner, HR:
SELECT object_name, object_type FROM dba_objects WHERE owner = 'HR' AND object_type IN ('FUNCTION', 'PROCEDURE', 'PACKAGE', 'PACKAGE BODY')

The output returned is:


OBJECT_NAME       OBJECT_TYPE
SECURE_DML        PROCEDURE
ADD_JOB_HISTORY   PROCEDURE

The source code of each of these objects can be extracted from the data dictionary using the command:



SELECT text
FROM dba_source
WHERE owner = 'HR'
AND name = 'ADD_JOB_HISTORY'
ORDER BY line

The output is:


TEXT
PROCEDURE add_job_history
  ( p_emp_id         job_history.employee_id%type
  , p_start_date     job_history.start_date%type
  , p_end_date       job_history.end_date%type
  , p_job_id         job_history.job_id%type
  , p_department_id  job_history.department_id%type
  )
IS
BEGIN
  INSERT INTO job_history (employee_id, start_date, end_date,
                           job_id, department_id)
  VALUES (p_emp_id, p_start_date, p_end_date, p_job_id, p_department_id);
END add_job_history;

An overview of the effort in migrating functions, stored procedures, and packages has been given in the "Stored Programs" section of this chapter. For a detailed discussion of migrating the code, refer to the "Migrating the Data Access" section in Chapter 11.

10. Create synonyms. Synonyms are used with non-schema users to provide location transparency. Because SQL Server does not support synonyms, the SQL will have to be modified to contain a schema prefix.

Note An Oracle schema could potentially have objects or properties that are not in use. Examples of such objects are disabled constraints, invalid views, invalid synonyms, invalid functions, invalid procedures, and dropped columns (not yet purged from the table). Although it is not necessary to migrate these objects to the new environment, requirements may differ from client to client.
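As a sketch of the rewrite (object names are illustrative), an Oracle private synonym and its SQL Server substitute might look like this:

```sql
-- Oracle: a private synonym lets user OE reference HR.EMPLOYEES
-- without a schema prefix:
--    CREATE SYNONYM employees FOR hr.employees;

-- SQL Server 2000 alternative: either qualify every reference
-- (SELECT ... FROM hr.employees), or create a pass-through view in
-- the user's schema so existing unqualified SQL keeps working:
CREATE VIEW oe.employees
AS
SELECT * FROM hr.employees
```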



7
Developing: Databases Migrating the Database Users
Introduction and Goals
Chapter 6, "Developing: Databases Migrating Schemas," concentrated strictly on the migration of the schema, its owner, and objects. The other class of database users is the one that accesses the data in the tables of these schemas. These users do not own any objects, such as tables or views, but may have synonyms that point to the tables and views in the application schema. The effort in migrating these users to Microsoft SQL Server is in creating the users and reproducing the privileges they possess in the Oracle database. The privileges may be granted to the user directly or through roles. The migration of roles is also discussed in this chapter.

The steps in migrating the users are:
1. Create the user account.
2. Create roles and grant privileges.

Create User Accounts


A detailed discussion of database accounts, authentication, and password management was provided in the "Create the Schema Owner" section of Chapter 6. The mechanics of creating user accounts do not differ from those of creating schema accounts; the difference is in the privileges that they are granted. Figure 7.1 shows the relationship between database users, roles, and schema objects.



Figure 7.1
Dependencies in the migration of users from Oracle to SQL Server

The following query can be run in the source Oracle database to create a list of users that have privileges on any object in a specific schema. The query has been constrained to only a specific schema and its users. This aids in situations where only a subset of the schemas and the related users are migrated from an Oracle database.
SELECT grantee
FROM dba_tab_privs
WHERE owner = username
UNION
SELECT grantee
FROM dba_col_privs
WHERE owner = username

The grantee could be a user or a role. As with the case of the schema owner, the following query can be used to retrieve the characteristics of any login.
SELECT du.username,
       DECODE(du.password, 'EXTERNAL', 'EXTERNAL', 'DB') "AUTHENTICATION MODE",
       du.default_tablespace, du.temporary_tablespace,
       dp.resource_name, dp.limit
FROM dba_users du, dba_profiles dp
WHERE du.profile = dp.profile
AND dp.resource_type = 'PASSWORD'
AND du.username = username

The system stored procedure sp_grantlogin is used to create a SQL Server login for a domain-authenticated account, and sp_addlogin is used to create a SQL Server-authenticated account. The procedure sp_grantdbaccess is used to create user accounts in the individual databases for these logins. User accounts should be created in a database only if there are objects in the database the user needs to access.

Note While migrating Oracle accounts, there may be user accounts that are locked. The Oracle DBA of the original environment should be consulted before making decisions on whether to migrate these accounts.
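A minimal sketch of the two authentication paths (the domain account CONTOSO\jsmith and the database name HRDATA are illustrative):

```sql
-- Windows (domain) authentication: map an existing domain account
EXEC sp_grantlogin @loginame = 'CONTOSO\jsmith'

-- SQL Server authentication: create a login with a password
EXEC sp_addlogin @loginame = 'jsmith_sql', @passwd = 'changeme'

-- Give a login a user account in each database it needs to access
USE HRDATA
EXEC sp_grantdbaccess @loginame = 'CONTOSO\jsmith', @name_in_db = 'jsmith'
```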

Create Roles and Grant Privileges


A privilege is permission (as it is called in SQL Server) to perform work inside the database. Oracle and SQL Server provide two sets of privileges: system privileges and object privileges. Table 7.1 provides terms relating to privileges in Oracle and SQL Server that can help avoid any confusion during a migration.

Table 7.1: Privilege Terms Used in Oracle and SQL Server

Oracle Terminology                     SQL Server Terminology
Privilege                              Permission
System privilege                       Statement permission
Object privilege                       Object permission
Predefined role (for example, DBA)     Implied permissions (for example, sysadmin)
Grantee                                Security account

While Oracle has several object privileges, the ones commonly granted to users are SELECT, INSERT, DELETE, and UPDATE on tables and EXECUTE on stored programs. SQL Server has the same object privileges. System and object privileges can be granted to users either directly or through roles. SQL Server also offers the WITH GRANT OPTION clause for object privileges.

Oracle and SQL Server differ considerably in the system privileges that are available. Oracle has very granular system privileges (more than 100). SQL Server system privileges, called statement permissions, are restricted to the following list:

BACKUP DATABASE
BACKUP LOG
CREATE DATABASE
CREATE DEFAULT
CREATE FUNCTION
CREATE PROCEDURE
CREATE RULE
CREATE TABLE
CREATE VIEW

The rest of the Oracle system privileges are bundled into several large fixed roles in SQL Server. For example, the fixed database role db_datareader is equivalent to the SELECT ANY TABLE system privilege in Oracle. Oracle and SQL Server both offer predefined roles and user-defined roles. The predefined roles have powerful system and object privileges on the data dictionary and the instance.



The "Roles" topic in SQL Server Books Online provides a list of fixed server roles and fixed database roles and their descriptions. User-defined roles can be granted system as well as object privileges.

The privileges for administrative users should be revisited when migrating to a SQL Server database because each database is far more self-contained with respect to administration. Fixed database roles can be granted privileges on a single database instead of the system-wide powers afforded by the DBA role in Oracle. A complete discussion of the system privileges and their equivalents in SQL Server is beyond the scope of this guidance. For more information about the fixed server roles and fixed database roles available in SQL Server and their descriptions, refer to http://msdn.microsoft.com/library/default.asp?url=/library/en-us/architec/8_ar_da_3xns.asp.

There are two important stored procedures for granting system privileges in SQL Server: sp_addsrvrolemember adds a member to a fixed server role, and sp_addrolemember adds a member to a database role.

In Oracle, roles are available at the instance or server level and can be granted privileges on more than one schema. SQL Server user-defined roles are local to a database and owned by a user. Hence, when migrating a role from Oracle to SQL Server, a role has to be created in each of the databases in which privileges have to be granted.

To grant privileges and roles to a user, follow these steps:

1. Create role(s) in each of the SQL Server database(s) that will be used to control the permissions given to users. A role can be created in a database using the following commands:
USE database
GO
sp_addrole [ @rolename = ] 'role_name' [ , [ @ownername = ] 'owner' ]
GO

Where role_name is the name of the role to be created and owner is the user who will own the role (which will normally be the schema owner or a security account for the application).

2. System-level privileges can either be granted to the role created in step 1 or directly to the user. The following statement can be used for granting system privileges to roles and users:
GRANT {ALL | system_privilege_name} TO {user_name | role_name}

Where system_privilege_name is the system privilege to be granted, user_name is the name of the user to whom the grant is made, and role_name is the name of the role through which the privileges will be granted to the user.

3. As with system privileges, object privileges can be granted directly to the user or through roles. The following statement can be used to grant object privileges to users and roles:
GRANT {ALL | object_privilege_name} ON object_name TO {user_name | role_name} [WITH GRANT OPTION]

Where object_privilege_name is the object privilege to be granted, object_name is the object on which the privilege is granted, user_name is the name of the user to whom the grant is made, and role_name is the name of the role through which the privileges will be granted to the user.



Object privileges can also be granted at the more granular column level instead of on the entire table. The following statement can be used to grant privileges on one or more columns of a table:
GRANT {ALL | object_privilege_name} ON table_name ( column_name [,] ) TO {user_name | role_name} [WITH GRANT OPTION]

Where object_privilege_name is the object privilege to be granted, table_name is the table to which the column belongs, column_name is the specific column on which the privilege is being granted, user_name is the name of the user to whom the grant is made, and role_name is the name of the role through which the privileges will be granted to the user.

4. If privileges are being granted through a role, the role can be granted to a user using the following command:
sp_addrolemember [ @rolename = ] 'role_name' , [ @membername = ] 'user_name'

Where role_name is the name of the role to be granted and user_name is the target user.
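The four steps above can be sketched end to end; the role name hr_reader and the permissions chosen are illustrative:

```sql
USE HRDATA
GO
-- Step 1: create a role owned by the schema owner
EXEC sp_addrole @rolename = 'hr_reader', @ownername = 'HR'
GO
-- Step 2: grant a statement (system) permission to the role
GRANT CREATE VIEW TO hr_reader
-- Step 3: grant object permissions, including a column-level grant
GRANT SELECT ON HR.EMPLOYEES TO hr_reader
GRANT UPDATE ON HR.EMPLOYEES (salary, commission_pct) TO hr_reader
-- Step 4: make a user a member of the role
EXEC sp_addrolemember @rolename = 'hr_reader', @membername = 'OE'
GO
```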

Sample User Migration


When a user is migrated from Oracle to SQL Server, the same security architecture can be maintained by creating roles and granting privileges in a functionally equivalent manner. The difference in the implementation is depicted in Figure 7.1. Privileges on objects in a database are given to a local user account at the database level and do not cross database boundaries. This implies that, for a server or instance level login, a user account has to be created in each of the databases in which it needs privileges (either directly or through roles). The following steps are a practical example of how to migrate the security architecture of an application and its user from Oracle to SQL Server. This example is a continuation of the migration of the HR schema, which was started in Chapter 6.

1. Create a user account in SQL Server for each of the Oracle users. In Oracle, there is a single user account that is granted privileges on various schemas. In SQL Server, the single user account translates to a two-tier implementation (a login plus per-database user accounts).

a. Obtain a list of user accounts in the Oracle database that have to be migrated to SQL Server. The list can be obtained using the following statement:
SELECT grantee
FROM dba_tab_privs
WHERE owner = 'HR'
UNION
SELECT grantee
FROM dba_col_privs
WHERE owner = 'HR'

where owner is the name of the schema whose security is being migrated. The following is the output for the HR schema.
GRANTEE
OE


b. Obtain the characteristics of user accounts in Oracle that were identified in substep a. The following statement can be executed to gather information on each of the users to be migrated:
SELECT du.username,
       DECODE(du.password, 'EXTERNAL', 'EXTERNAL', 'DB') "AUTHENTICATION MODE",
       du.default_tablespace, du.temporary_tablespace,
       dp.resource_name, dp.limit
FROM dba_users du, dba_profiles dp
WHERE du.profile = dp.profile
AND dp.resource_type = 'PASSWORD'
AND du.username = 'OE'

where OE is the name of the user that is being migrated. The characteristics returned for the OE user are:

USERNAME  AUTHENTICATION MODE  DEFAULT_TABLESPACE  TEMPORARY_TABLESPACE  RESOURCE_NAME             LIMIT
OE        DB                   EXAMPLE             TEMP                  FAILED_LOGIN_ATTEMPTS     UNLIMITED
OE        DB                   EXAMPLE             TEMP                  PASSWORD_LIFE_TIME        UNLIMITED
OE        DB                   EXAMPLE             TEMP                  PASSWORD_REUSE_TIME       UNLIMITED
OE        DB                   EXAMPLE             TEMP                  PASSWORD_REUSE_MAX        UNLIMITED
OE        DB                   EXAMPLE             TEMP                  PASSWORD_VERIFY_FUNCTION  NULL
OE        DB                   EXAMPLE             TEMP                  PASSWORD_LOCK_TIME        UNLIMITED
OE        DB                   EXAMPLE             TEMP                  PASSWORD_GRACE_TIME       UNLIMITED

c. Create SQL Server login accounts that provide access to the SQL Server instance. Because the Oracle account in this example is database-authenticated and none of the password features are in use, the SQL Server login can be created as a SQL Server-authenticated login. The following commands can be executed to create the SQL Server login for the user OE with the password OE in the HRDATA database into which the HR schema's objects are migrated:
USE master
EXEC sp_addlogin @loginame = 'OE', @passwd = 'OE', @defdb = 'HRDATA'

d. Create a user account in each of the databases to which the schema's objects have been migrated. The databases in which user accounts have to be created depend on the Oracle schemas for which the user has privileges and the SQL Server database to which they have been migrated. The following statement can be used to see the schemas on which the user OE has been granted privileges.
SELECT owner
FROM dba_tab_privs
WHERE grantee = 'OE'
UNION
SELECT owner
FROM dba_col_privs
WHERE grantee = 'OE'


The output of the statement is as follows:


OWNER
HR
SYS

In Chapter 6, the HR schema was migrated to the HRDATA database in SQL Server. SYS is the data dictionary owner, which translates to master database in SQL Server. However, because the data dictionary is divided between the master database and the application (user-created) databases, for an application user, an account in the master database may not be necessary and should be created after proper scrutiny of the security requirements. The following commands can be used to create a user for the OE login in the HRDATA database:
USE HRDATA
EXEC sp_grantdbaccess @loginame = 'OE', @name_in_db = 'OE'

2. Create roles through which the privileges will be granted to users. Only those roles that are granted to users and schemas that are being migrated will be created in SQL Server. Also, the role has to be created in each of the databases with the application schema as the owner. The following statement can be used to retrieve the roles that have been granted to the user OE:
SELECT granted_role, admin_option FROM dba_role_privs WHERE grantee = 'OE'

The following is the output for the OE user:


GRANTED_ROLE  ADMIN_OPTION
CONNECT       NO
RESOURCE      NO

The CONNECT and RESOURCE roles are both system roles. No application roles are in use for this schema; hence, all system and object privileges will be granted directly to the user. If you want to introduce roles in SQL Server, this is the point at which to create them.

3. Grant privileges and/or roles to the user. The roles that were identified in Step 2 are granted here. The CONNECT role gives the user the capability to connect to the Oracle database. In SQL Server, the same privileges are inherently granted when the login and user accounts are created. The RESOURCE role gives the capability to create objects in the database. There is no role in SQL Server that provides the same level of privileges as RESOURCE. As a result, the following privileges are granted.
GRANT CREATE FUNCTION TO OE
GRANT CREATE PROCEDURE TO OE
GRANT CREATE TABLE TO OE
GRANT CREATE VIEW TO OE

The following statement is used to retrieve system and object privileges granted to the OE user:
SELECT do.object_type Lvl, dtp.privilege, dtp.grantable,
       dtp.owner, dtp.table_name, NULL column_name
FROM dba_tab_privs dtp, dba_objects do
WHERE dtp.owner = do.owner
AND dtp.table_name = do.object_name
AND grantee = 'OE'
UNION
SELECT 'Column', privilege, grantable, owner, table_name, column_name
FROM dba_col_privs
WHERE grantee = 'OE'
UNION
SELECT 'Sys Priv', privilege, admin_option, NULL, NULL, NULL
FROM dba_sys_privs
WHERE grantee = 'OE'
ORDER BY privilege, grantable, owner, table_name, column_name

The following output is returned for the OE user:


LVL       PRIVILEGE             GRANTABLE  OWNER  TABLE_NAME   COLUMN_NAME
Sys Priv  CREATE SNAPSHOT       NO
Sys Priv  QUERY REWRITE         NO
Sys Priv  UNLIMITED TABLESPACE  NO
Table     EXECUTE               NO         SYS    DBMS_STATS
Table     REFERENCES            NO         HR     COUNTRIES
Table     REFERENCES            NO         HR     EMPLOYEES
Table     REFERENCES            NO         HR     LOCATIONS
Table     SELECT                NO         HR     COUNTRIES
Table     SELECT                NO         HR     DEPARTMENTS
Table     SELECT                NO         HR     EMPLOYEES
Table     SELECT                NO         HR     JOBS
Table     SELECT                NO         HR     JOB_HISTORY
Table     SELECT                NO         HR     LOCATIONS

Based on the preceding output, grant statements can be devised as follows:


GRANT SELECT ON countries TO OE;
GRANT SELECT ON departments TO OE;
GRANT SELECT ON employees TO OE;
GRANT SELECT ON jobs TO OE;
GRANT SELECT ON job_history TO OE;
GRANT SELECT ON locations TO OE;

SQL Server does not contain equivalents for the CREATE SNAPSHOT, QUERY REWRITE, REFERENCES, and UNLIMITED TABLESPACE privileges. The CREATE SNAPSHOT privilege is related to replication, and the QUERY REWRITE privilege is related to data warehousing; both are beyond the scope of this guide. The REFERENCES privilege is for creating foreign key constraints across schemas, which are not allowed in SQL Server. SQL Server does not impose storage quotas on schemas, so an equivalent for UNLIMITED TABLESPACE is not needed in SQL Server.




8
Developing: Databases Migrating the Data
Introduction and Goals
Migration of the data represents the final task in the migration of an Oracle database to Microsoft SQL Server. Even though SQL Server offers several tools that make the bulk transfer of data from an external source easier, aspects of the migration that cannot be automated, such as planning the migration and validating that the data has been moved completely and without error, should be prioritized. The data migration task can be broken up into the following three subtasks: 1. Planning. It is important to understand the migration options, evaluate the characteristics of the source data, and evaluate environmental and business constraints. 2. Execution. This subtask involves preparing the database and transferring the data. 3. Validation. This subtask accounts for all the data and verifies data integrity.

Planning the Data Migration


There are two planning prerequisites. Before making any decisions, you should first fully understand the various options available for transferring the data from an Oracle database to a SQL Server database, particularly the advantages and limitations of each of the options. The second prerequisite is to document all the factors or characteristics of the original environment that could influence your decision making from the available options. These prerequisites are discussed in detail under the following headings.

Options for Migration


In an Oracle to SQL Server migration scenario, the two avenues for migrating data are to either push the data from Oracle to SQL Server, or to pull the data from Oracle into SQL Server. Even though Oracle can communicate with heterogeneous databases, the operation is only at a transactional level and cannot be taken advantage of for bulk data operations as needed in a migration situation. The only specialized utility Oracle provides for exporting data out of the database is the export utility exp. This utility can only create dumps in binary format that cannot be used for importing into a non-Oracle database.
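Because exp cannot produce text output, flat files for loading into SQL Server are typically generated on the Oracle side with a SQL*Plus spool script; a minimal sketch (file name and delimiter are illustrative):

```sql
-- SQL*Plus: dump HR.LOCATIONS as pipe-delimited text
SET PAGESIZE 0
SET LINESIZE 1000
SET TRIMSPOOL ON
SET FEEDBACK OFF
SPOOL locations.dat
SELECT location_id || '|' || street_address || '|' || postal_code || '|' ||
       city || '|' || state_province || '|' || country_id
FROM hr.locations;
SPOOL OFF
```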



SQL Server is built for bulk copying of data, with specialized features and interfaces to a large host of data sources. Figure 8.1 shows the various paths that offer bulk copy functions for moving data from Oracle.

Figure 8.1
Paths for accessing an Oracle database from SQL Server

Data can be migrated from Oracle to SQL Server using one of the following options:

Bulk Copy Program (bcp). bcp is a command line utility that uses the ODBC bulk copy API in SQL Server. bcp cannot connect to an Oracle database and, when employed for moving data from Oracle to SQL Server, the data has to be in an ASCII text file. Multiple copies of bcp can be run in parallel while working on the same target table. The number of parallel sessions is limited by the number of CPUs on the server. For information about using the bcp utility with Microsoft SQL Server, refer to the "bcp Utility" article available at http://msdn.microsoft.com/library/default.asp?url=/library/en-us/coprompt/cp_bcp_61et.asp.

BULK INSERT. BULK INSERT is a T-SQL statement. This statement has the same kind of functionality as bcp and can be used to import Oracle data that has been captured in a text file. This operation scales linearly with the number of CPUs, but it is limited to one thread per CPU. For information on the complete BULK INSERT T-SQL statement and its usage, refer to http://msdn.microsoft.com/library/default.asp?url=/library/en-us/tsqlref/ts_ba-bz_4fec.asp.

Data Transformation Services (DTS). DTS is a powerful set of GUI tools that can be used to extract, transform, and populate SQL Server databases from Oracle databases. DTS wizards make the entire process of defining and performing the import very easy. DTS provides access to the BULK INSERT statement in the DTS Bulk Insert task. In addition to working with text files, DTS can use OLE DB and ODBC providers to connect to an Oracle database. The direct connectivity to Oracle provides the capability to work with binary large objects (BLOBs). For an overview of DTS, refer to http://msdn.microsoft.com/SQL/sqlwarehouse/DTS/default.aspx?pull=/library/en-us/dnsql2k/html/dts_overview.asp.

Even though the functionality to transform data during the migration exists, an assumption made during this discussion is that no transformation of the source data is required. The BULK INSERT T-SQL statement has been documented in SQL Server Books Online as the fastest bulk copy method because it executes within the SQL Server process. In contrast, bcp and DTS are external utilities that run as their own processes. As a result, they have the overhead of having to communicate with SQL Server (using IPC) and pass it data from the source. When bcp and DTS are run on a client computer, there is network overhead as well. bcp is faster than DTS when importing very large quantities of data into SQL Server, but it lacks some of the features of DTS. Studies have ranked bcp first in speed, followed by BULK INSERT, and then DTS. Refer to the SQL Server Magazine article "Showdown bcp vs. DTS" available at http://www.winnetmag.com/Articles/Print.cfm?ArticleID=19760. Other factors must be considered in making a migration strategy decision. These factors are discussed under the following heading.
For more information on the Schema and Data Migrator and to download the tool, refer to http://www.microsoft.com/sql/migration. The beta version of this tool is available as of the date of publishing this solution. Version 1.0 of the tool is slated to be available in June 2005.

Factors in Migration
Given the various options for migrating data from Oracle to SQL Server, the choice is influenced by several business and environment factors. Some of the important factors (the more critical ones stated first) are:
- Volume of data. The amount of data that has to be migrated is the number one factor. High volumes of data require more storage, more processing power, larger network bandwidth, and a larger migration window, and they increase the risk of failure. For more information on how to optimize large data imports in SQL Server, refer to http://msdn.microsoft.com/library/default.asp?url=/library/en-us/optimsql/odp_tun_1a_5gyt.asp.
- Migration window. The time window available for migration may necessitate multiple parallel sessions, more resources, and staging of data.
- Type of data. BLOB columns can be handled only by DTS.
- Server processing capacity. Running bcp on the same server as the database reduces the network overhead of bcp talking to SQL Server, but it consumes CPU on the server.
- Storage availability. The amount of storage available in both the source and target environments influences the choice of migration method and strategy. For example, moving the text files onto the target server reduces network overhead.


The configuration of storage also affects the speed of migration; for example, the source text file and the database files can be placed on separate disks or devices.
- Data source. The capability to create flat files in the source environment affects the choice of migration method. A fixed-field format source file takes much more storage and is considerably slower. Delimited format is recommended.
- Type of database. A batch, OLTP, or DSS database type defines the type of schema objects and their characteristics.
- Recovery model. The database recovery model should be set to simple or bulk-logged to minimize logging of BULK INSERT. The capability to do so can be affected if databases shared by other applications are in use during the migration.
- Use proven methods. Only use methods and options that have been proven to work by the industry and for the environment. For example, OLE DB has been proven to be faster than ODBC while providing all of the same features.

Migration Strategy
The following are some of the factors that can contribute to the creation of a robust migration strategy:
- Minimize logging. The actions that should be taken to minimize logging are:
  - Set the database recovery model to bulk-logged or simple.
  - Use the TABLOCK hint during bulk loading.
  - Ensure that the target table is either empty or has no indexes defined on it.
  - Disable or drop all triggers on the target table.
  - Drop replication on the target table.
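Taken together, the logging-related actions can be sketched in T-SQL. The database and table names below (hrdata, hr.countries) follow the examples used later in this chapter; adjust them for your schema.

```sql
-- Sketch only: minimal-logging setup around a bulk load.
-- 1. Switch to a minimally logged recovery model.
ALTER DATABASE hrdata SET RECOVERY BULK_LOGGED

-- 2. Disable triggers and FOREIGN KEY/CHECK constraints on the target table.
ALTER TABLE hr.countries DISABLE TRIGGER ALL
ALTER TABLE hr.countries NOCHECK CONSTRAINT ALL

-- 3. Load with a table lock so the operation can be minimally logged.
BULK INSERT hrdata.hr.countries
FROM 'c:\countries.csv'
WITH (FIELDTERMINATOR = ',', ROWTERMINATOR = '\n', TABLOCK)

-- 4. Restore the original settings once the load completes.
--    WITH CHECK validates the loaded rows against the constraints.
ALTER TABLE hr.countries WITH CHECK CHECK CONSTRAINT ALL
ALTER TABLE hr.countries ENABLE TRIGGER ALL
ALTER DATABASE hrdata SET RECOVERY FULL
```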

- Reduce operations overhead. Drop indexes, constraints, defaults, and identity operations. When a clustered index is planned for a table, order the data while creating the output file from Oracle. The BULK INSERT statement accepts the ORDER clause and the bcp utility accepts the ORDER hint to specify the sort order of the data files. Avoid concurrent access by other users to the tables being loaded. The TABLOCK hint can be used while performing BULK INSERT or bcp.
- Improve parallelism. With a large amount of data (which is the assumption made throughout this section), the best performance can only be achieved by performing loads in parallel. The parallelism can be inter-table or intra-table.
- Avoid side effects. Turn off triggers, bypass identity, bypass default column settings, and drop replication.
- Reduce complexity. When the source database has a large number of tables, break it down into logical subsets and stage data, if necessary. Importing static and historical data can be performed in advance.
- Reduce risk. Break down larger source data text files using either an editor or by specifying ranges in the SQL queries that are used for exporting the data into spool files. Any failures that occur when dealing with large files waste time and resources. In a migration containing very volatile data, the migration should be completed in a single stage instead of multiple stages.
- Commit more resources. Additional CPU, memory, and storage should be acquired for the migration.
- Auditing. Include audit steps during the process. This enables trapping of errors as they occur during the transfer of data.
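As a sketch of the intra-table approach, a large Oracle table can be exported as several range-bounded spool files, each of which is then loaded by its own bcp or BULK INSERT session. The orders table, its columns, and the key ranges here are hypothetical:

```sql
-- Sketch only: each SELECT runs in its own SQL*Plus session and
-- produces one file per key range for a later parallel load.
set pages 0
set lines 10000
set trimspool on
set feedback off

spool /data/dump/orders_1.csv
select order_id||','||customer_id||','||to_char(order_date, 'YYYY-MM-DD')
  from orders
 where order_id between 1 and 5000000;
spool off

spool /data/dump/orders_2.csv
select order_id||','||customer_id||','||to_char(order_date, 'YYYY-MM-DD')
  from orders
 where order_id between 5000001 and 10000000;
spool off
```

Each resulting file can then be imported by a separate bcp session against the same target table.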


Table 8.1 details the suitable method of data porting for different migration environments.

Table 8.1: Data Migration Options

Scenario                                                               Recommended method
Data migration environments that have high performance                 bcp or BULK INSERT
requirements and have huge volumes of data to be transferred
Methods of data migration that use a small amount of scripts           bcp
Applications that need to use scripts on the data transfer process     BULK INSERT
Applications that require high-powered transformation of               DTS
moderate volumes of data
Tables contain large objects (LOB) such as RAW, LONG, BLOB,            DTS
and CLOB
Here are some additional considerations for planning the migration:
- The final strategy should be based on the amount of time available for the actual deployment and cutover.
- The performance of the load operation will depend on the chosen strategy.
- Instead of sticking to a single method, a combination of methods can be employed based on the strengths and weaknesses of the individual methods. For example, DTS need not be used as the migration method just because there are LOB columns. The tables in the migration can be broken up and a combination of methods used, taking advantage of the strengths of each.
- Parallel operation should be applied to large tables.
- The performance of bcp versus BULK INSERT has been found to vary. Test both approaches in your scenario before making a decision.
- Test data files should be created and the performance of the various methods measured in the target environment.
- A prototype of the final strategy should be created, and the process should be practiced and perfected before deployment.

Executing the Data Migration


This section details the steps to be performed for migrating data. The section has been broken into three tasks:
1. Pre-implementation
2. Implementation
3. Post-implementation
Each of these tasks is discussed in detail under the following headings.


Pre-Implementation Tasks
The following tasks have to be completed before performing the migration:
1. Back up the database. Delete any test data from the target database and create a full backup of the empty database. Restoring the backup will be faster than the work involved in rolling back a failed or incomplete migration.
2. Export Oracle data into text files. Data from each table that has to be migrated is exported into comma-delimited ASCII text files.
Note The string data should optionally be enclosed in double quotation marks (") in case there are commas in the column data. The date fields can be exported in the desired format using the TO_CHAR function.
Sample: In SQL*Plus, connect to the schema and execute the following command:
desc countries

The output is:


Name                                      Null?    Type
----------------------------------------- -------- ----------------
COUNTRY_ID                                NOT NULL CHAR(2)
COUNTRY_NAME                                       VARCHAR2(40)
REGION_ID                                          NUMBER

Create a .sql file (for example, countries.sql) with the following content:


set pages 0
set lines 10000
set trimspool on
set feedback off
spool /data/dump/countries.csv
select country_id||','||country_name||','||region_id from countries;
spool off

Executing this file using SQL*Plus produces the following text file that can be imported into SQL Server. Note If these commands are typed at the command prompt inside the SQL*Plus utility in Microsoft Windows, the output file will contain extraneous prompt commands at the beginning and end of the file. This does not occur if the commands are executed from a file.
AR,Argentina,2
AU,Australia,3
BE,Belgium,1
BR,Brazil,2
CA,Canada,2
CH,Switzerland,1
CN,China,3
DE,Germany,1


DK,Denmark,1
EG,Egypt,4
FR,France,1
HK,HongKong,3
IL,Israel,4
IN,India,3
IT,Italy,1
JP,Japan,3
KW,Kuwait,4
MX,Mexico,2
NG,Nigeria,4
NL,Netherlands,1
SG,Singapore,3
UK,United Kingdom,1
US,United States of America,2
ZM,Zambia,4
ZW,Zimbabwe,4
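If column values can contain commas, or dates must be exported in a specific format, the spool query can add the quotation marks and TO_CHAR calls mentioned earlier. A sketch using a hypothetical employees table and columns:

```sql
-- Sketch only: quotes string columns and formats a date column.
set pages 0
set lines 10000
set trimspool on
set feedback off
spool /data/dump/employees.csv
select '"'||first_name||'","'||last_name||'",'||
       to_char(hire_date, 'YYYY-MM-DD')
  from employees;
spool off
```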

Note Using SQL*Plus and the spool command is the best way to create comma-separated (CSV) exports of Oracle table data. Other solutions based on the functionality currently available in Oracle are DBMS_OUTPUT, UTL_FILE, and Pro*C. For details on the various methods, refer to http://asktom.oracle.com/~tkyte/flat/index.html. However, these options should be employed only if the data is being manipulated during the output process because there are performance overheads in their use.
3. Set the recovery model of the SQL Server database. Change the recovery model of the target SQL Server database to bulk-logged or simple. Bulk-logged mode is preferred. The following command sets the recovery model of a database to bulk-logged:
ALTER DATABASE database_name SET RECOVERY BULK_LOGGED

where database_name is the name of the database whose recovery model has to be set.
4. Disable all constraints. Identify and disable all constraints, including CHECK, PRIMARY KEY, FOREIGN KEY, and UNIQUE. Create scripts to restore them. In SQL Server, only FOREIGN KEY and CHECK constraints can be disabled. The following command can be used to disable the FOREIGN KEY and CHECK constraints of a SQL Server table:
ALTER TABLE table_name NOCHECK CONSTRAINT ALL

where table_name is the name of the table whose constraints have to be disabled. PRIMARY KEY and UNIQUE constraints have to be dropped. The following command can be used to drop a PRIMARY KEY or UNIQUE constraint by specifying its name:
ALTER TABLE table_name DROP CONSTRAINT constraint_name

where constraint_name is the name of the constraint and table_name is the name of the table in which the constraint exists.
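Because a migrated schema can contain many constraints, the disable and drop statements can be generated from the catalog instead of being written by hand. A sketch (run each generated result set afterward, and script the original DDL first so dropped constraints can be recreated):

```sql
-- Generate NOCHECK statements for every FOREIGN KEY and CHECK constraint.
SELECT 'ALTER TABLE [' + TABLE_NAME + '] NOCHECK CONSTRAINT [' + CONSTRAINT_NAME + ']'
FROM INFORMATION_SCHEMA.TABLE_CONSTRAINTS
WHERE CONSTRAINT_TYPE IN ('FOREIGN KEY', 'CHECK')

-- Generate DROP statements for every PRIMARY KEY and UNIQUE constraint.
SELECT 'ALTER TABLE [' + TABLE_NAME + '] DROP CONSTRAINT [' + CONSTRAINT_NAME + ']'
FROM INFORMATION_SCHEMA.TABLE_CONSTRAINTS
WHERE CONSTRAINT_TYPE IN ('PRIMARY KEY', 'UNIQUE')
```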


Note By default, constraints are not checked when using bcp.
5. Disable all triggers. Disable triggers on all tables into which data will be loaded. The following command can be used to disable all triggers on a table:
ALTER TABLE table_name DISABLE TRIGGER ALL

where table_name is the name of the table on which the triggers have to be disabled.
6. Drop all indexes. Drop all remaining indexes using the following command after saving the script to recreate them:
DROP INDEX table_name.index_name

where index_name is the name of the index to be dropped and table_name is the table on which the index exists.
7. Handle IDENTITY column inserts. Let explicit values be copied into columns of a table with the IDENTITY property:
SET IDENTITY_INSERT table_name ON

This setting can be used only on a single table at a time, so ensure that it is set before importing a table, or use the equivalent functionality in the method being used. In bcp, the same thing can be achieved using the -E switch. In BULK INSERT, use the KEEPIDENTITY argument:
BULK INSERT table_name FROM 'data_file' WITH (KEEPIDENTITY)

8. Back up the database. Create a full backup after the data migration before performing any validation or testing.

Implementation Tasks
The implementation is the actual transfer of data from the source Oracle database to the target SQL Server database. The implementation can be performed in one step or in two steps. In the single-step implementation, data is streamed directly from the Oracle database to the SQL Server database. In the two-step process, data is extracted from the Oracle database into files and then loaded into the SQL Server database. SQL Server provides the following three tools for loading data into a database:
- The bulk copy program (bcp)
- The BULK INSERT statement
- Data Transformation Services (DTS)

The use of these three tools in transferring data into SQL Server is discussed under the following headings.

Using bcp
The bcp utility can be accessed from the command prompt. The syntax for use is provided here:
bcp database_name.owner.table_name in data_file -S server_name\instance_name -U username -P password -c -t field_terminator


where database_name.owner.table_name is the fully qualified name of the table into which the data is to be loaded, data_file is the name of the text file containing the data, field_terminator is the character that is used to delimit columns in a row, server_name\instance_name is the SQL Server instance, username is the login name, and password is the password for the user login. The switches are:
-c specifies that the data in the file is of the character data type.
-t specifies the field terminator.
Figure 8.2 shows the results of executing this command at the Windows command prompt.

Figure 8.2
Importing data from a text file using bcp

To make the bcp process repeatable, a format file should be created. The format file can be created interactively by executing the bcp command, as shown in Figure 8.3.

Figure 8.3
Creating a format file for use with bcp imports


The format file looks like this:


8.0
3
1       SQLCHAR       0       2       ","        1     COUNTRY_ID       SQL_Latin1_General_CP1_CI_AS
2       SQLCHAR       0       40      ","        2     COUNTRY_NAME     SQL_Latin1_General_CP1_CI_AS
3       SQLNUMERIC    0       19      "\r\n"     3     REGION_ID        ""

Refer to the topic "Using Format Files" in SQL Server Books Online for more information about format files.
Note The format file will have to be modified to handle text files in which the column data values are enclosed by quotation marks (").
The format file can be used for future imports into the same table by using the -f switch. This is illustrated in Figure 8.4. In the example, the format file is assumed to have been copied to the root of drive C.

Figure 8.4
Importing data from a text file using bcp format file
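As noted above, quoted column values require a modified format file. One possible sketch for a countries file whose character columns are quoted (for example, "AR","Argentina",2) adds a dummy field that consumes the opening quotation mark; treat this as an untested illustration to be verified against your actual file layout:

```
8.0
4
1       SQLCHAR       0       0       "\""       0     FIRST_QUOTE      ""
2       SQLCHAR       0       2       "\",\""    1     COUNTRY_ID       SQL_Latin1_General_CP1_CI_AS
3       SQLCHAR       0       40      "\","      2     COUNTRY_NAME     SQL_Latin1_General_CP1_CI_AS
4       SQLNUMERIC    0       19      "\r\n"     3     REGION_ID        ""
```

The dummy field maps to table column 0 (that is, it is discarded), and each remaining field terminator includes the quotation marks that surround or follow the value.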

Table 8.2 contains the additional arguments that are relevant when importing large volumes of data from Oracle.

Table 8.2: Additional Arguments of the bcp Utility

Argument (or option)    Description
-k                      bypasses DEFAULT definitions
-h "hints"              bulk copy hints
-E                      allows identity insert
-a                      network packet size, 512 to 65536 (default 4096)
-e error_file           logs errors during the import
-b batch_size           number of rows per batch (default is all rows in the file)

Note Hidden characters in text files can cause problems such as the "unexpected null found" error. Care has to be taken when opening and manipulating text files containing data.


Using BULK INSERT


The BULK INSERT T-SQL statement provides the same functionality as the bcp utility. The major difference is that this functionality is available from within a database session. The following statement achieves the same work as was demonstrated with bcp:
BULK INSERT hrdata.hr.countries
FROM 'c:\countries.csv'
WITH (
    DATAFILETYPE = 'char',
    FIELDTERMINATOR = ',',
    ROWTERMINATOR = '\n'
)

Note Only members of the sysadmin role can use BULK INSERT.
Table 8.3 contains additional arguments that need to be used while importing large amounts of Oracle data.

Table 8.3: Additional Arguments Available with the BULK INSERT Command

Argument (or option)    Description
KEEPNULLS               bypasses DEFAULT definitions
TABLOCK                 locks the table for the duration of the load
KEEPIDENTITY            allows values to be specified for the identity column
FIRE_TRIGGERS           trigger execution is not suppressed
CHECK_CONSTRAINTS       constraints on the table have to be checked
BATCHSIZE               number of rows per batch (default is all rows in the file)

Note For more information about bulk copy operations and on improving their performance, refer to "Optimizing Bulk Copy Performance" at http://msdn.microsoft.com/library/en-us/optimsql/odp_tun_1a_5gyt.asp?frame=true, and "Bulk Copy Performance Considerations" at http://msdn.microsoft.com/library/en-us/adminsql/ad_impt_bcp_5zqd.asp. For information on the different switches available with BULK INSERT, refer to http://msdn.microsoft.com/library/default.asp?url=/library/en-us/tsqlref/ts_ba-bz_4fec.asp.
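Combining these arguments with the earlier countries example gives a load statement along the following lines. The batch size is illustrative, and the ORDER clause assumes the file has been pre-sorted on the planned clustered key:

```sql
-- Sketch only: bulk load with the additional arguments applied.
BULK INSERT hrdata.hr.countries
FROM 'c:\countries.csv'
WITH (
    DATAFILETYPE = 'char',
    FIELDTERMINATOR = ',',
    ROWTERMINATOR = '\n',
    TABLOCK,                 -- table lock for minimal logging and speed
    BATCHSIZE = 50000,       -- illustrative batch size
    ORDER (country_id ASC)   -- file assumed pre-sorted on the clustered key
)
```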

Using DTS
This section demonstrates how the DTS wizards can be used to import data into SQL Server tables from text files and directly from an Oracle database. The DTS Import/Export Wizard can be launched from Enterprise Manager using the following steps, or by using the DTS Import Wizard:
1. Expand Server.
2. Right-click Databases, click All Tasks, and then click Import Data.
3. Click the Next button on the wizard's Welcome screen.
4. The next window presented is for choosing the data source. In the Data Source drop-down list, choose Text File. Pick the file containing the Oracle table data in the File name text box.


5. The file format characteristics are chosen in the next window. The file type should be ANSI, and the row delimiter should be {CR}{LF}. If column data values are quoted, this can be specified in the Text qualifier field. The delimiter character, which is a comma (,), is specified in the next screen.
6. The Microsoft OLE DB Provider for SQL Server should be chosen as the destination, and the server and login information provided for the target SQL Server instance.
DTS offers the option of executing the import, saving it to a DTS package, and scheduling it. For information on how to schedule DTS packages, refer to http://msdn.microsoft.com/library/default.asp?url=/library/en-us/dtssql/dts_pkgmng_4hm6.asp.

DTS from Oracle Database


The steps for creating a DTS package to connect and import data directly from the Oracle database are very similar to the process described in the previous section. Instead of pointing the source to a file, an Oracle database is specified through an OLE DB or ODBC provider for Oracle. The DTS Import/Export Wizard can be launched from Enterprise Manager using the following steps, or by using the DTS Import Wizard:
1. Expand Server.
2. Right-click Databases, click All Tasks, and then click Import Data.
3. Click the Next button on the wizard's Welcome screen.
4. The next window presented is for choosing the data source. In the Data Source drop-down list, choose the Microsoft ODBC Driver for Oracle. Enter the Oracle database alias and login information.
5. The Microsoft OLE DB Provider for SQL Server should be chosen as the destination, and the server and login information provided for the target SQL Server instance.
6. The next screen offers the option of copying data from a table (or view) or using a query.
7. If the table or view option is chosen, the next screen provides a list of tables and views available for import. The destination table name and transformation information have to be provided along with the source table or view.
8. If the query option is chosen, the next screen provides a window for entering the query statement. The destination table and transformation can be entered in the next screen.
9. The execution of the import can be done immediately or scheduled for a later time.
Note DTS has the capability of reading the table description from the Oracle database and creating tables in SQL Server. However, the wizard has been found to be faulty in its choice of equivalent data types for SQL Server tables.
Data can be migrated directly out of the database instead of through an intermediary file if the interconnect between the servers hosting the two databases is very fast. Also, no additional storage is then required for the data files on the two servers.
Use of files can reduce the deployment time if the data export can be done in advance and staged for import. For a case study on incrementally loading tables using the SQL Server bulk load options, refer to http://www.microsoft.com/technet/prodtechnol/sql/2000/maintain/incbulkload.mspx.


Post-Implementation Tasks
The following tasks have to be performed to ensure the success of the data transfer. The tasks performed here undo the changes that were made to the schema to facilitate the data transfer, such as the disabling of constraints and triggers. Chapter 9, "Developing: Databases Unit Testing the Migration," has a more detailed discussion of performing the first two tasks as a part of testing the migrated database.
1. Validate the data migration. The validation tasks are discussed in detail in the "Validating the Data Migration" section later in this chapter.
2. Re-enable the constraints. Enable all constraints, including CHECK, PRIMARY KEY, FOREIGN KEY, and UNIQUE. Use the scripts that were created while disabling the constraints. FOREIGN KEY and CHECK constraints can be enabled using the following syntax:
ALTER TABLE table_name WITH CHECK CHECK CONSTRAINT ALL

where table_name is the table on which the constraints have to be enabled. PRIMARY KEY and UNIQUE constraints will have to be recreated using the original DDL.
Oracle allows multiple rows in which all the (nullable) columns of a UNIQUE constraint have NULL values. SQL Server treats NULLs as duplicates (instead of as unknown values) for the purpose of uniqueness. Hence, creating a UNIQUE constraint would return an error if such a condition exists in the data. Because it would be difficult to manufacture values, the best way to maintain the status quo is to replace the UNIQUE constraint with a trigger that accomplishes the same thing.
3. Recreate the indexes. The indexes can be recreated using the saved scripts. Multiple indexes on a single table can be created in parallel using different client sessions.
4. Enable the triggers. Identify and enable all triggers by using the following syntax:
ALTER TABLE table_name ENABLE TRIGGER ALL

where table_name is the table on which the triggers are to be enabled.
5. Set the recovery model. Restore the recovery model of the SQL Server database to the original setting (normally full) by using the following syntax:
ALTER DATABASE database_name SET RECOVERY FULL

6. Capture data statistics. Perform the steps necessary to capture the data statistics required by the optimizer.
7. Back up the database. Create a full backup of the entire server, including the user databases. This is the first valid backup of the migrated database.
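Step 2 notes that a UNIQUE constraint may need to be replaced with a trigger to preserve Oracle's treatment of NULLs. A minimal sketch of such a trigger (the employees table and email column are hypothetical):

```sql
-- Sketch only: enforce uniqueness of non-NULL values, mimicking
-- Oracle's UNIQUE constraint semantics for NULLs.
CREATE TRIGGER trg_employees_unique_email
ON employees
FOR INSERT, UPDATE
AS
IF EXISTS (SELECT email
           FROM employees
           WHERE email IS NOT NULL
           GROUP BY email
           HAVING COUNT(*) > 1)
BEGIN
    RAISERROR ('Duplicate non-NULL value in column email.', 16, 1)
    ROLLBACK TRANSACTION
END
```

Unlike a constraint, this check runs only on INSERT and UPDATE, so existing data should be verified once before the trigger is relied upon.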

Validating the Data Migration


The following major tasks have to be performed to validate the migrated data:
1. Verify the data transfer. The logs or audit files for the data transfers should be checked for errors or failures. After this process is complete, the row counts of every table in the destination SQL Server database should be matched to the source Oracle database. If any


discrepancy is found in the counts, troubleshooting should include identifying the missing records and viewing the logs and audit files for error messages that can provide reasons for the failure. The data transfer process should take into consideration any changes in the logical model between the source Oracle database and SQL Server. A more stringent check is difficult for large databases. More validation of the data transfer occurs in the next step as part of the data integrity checks.
2. Validate the data integrity. Integrity is automatically checked when creating or enabling constraints. The WITH CHECK clause has to be used when adding the constraints. Data integrity checking is threatened when migrating databases that lack proper primary key and foreign key constraints. For more information on the functionality available with check constraints, refer to http://msdn.microsoft.com/library/default.asp?url=/library/en-us/tsqlref/ts_aa-az_3ied.asp. In many cases, there are no strict equivalents of Oracle data types in SQL Server. Hence, the integrity of such data has to be tested to ensure that there is no rounding or truncation, and that the exact value is stored.
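The row-count comparison described in step 1 can be scripted on both sides. A sketch for SQL Server 2000 and Oracle follows; note that the counts in sysindexes are maintained lazily, so run DBCC UPDATEUSAGE first, or use COUNT(*) queries, if exact numbers are required:

```sql
-- SQL Server side: row counts for all user tables from the catalog.
SELECT o.name AS table_name, i.rows AS row_count
FROM sysobjects o
JOIN sysindexes i ON i.id = o.id
WHERE o.xtype = 'U' AND i.indid IN (0, 1)
ORDER BY o.name

-- Oracle side: generate exact COUNT(*) queries for every table in the schema.
select 'select '''||table_name||''', count(*) from '||table_name||';'
  from user_tables;
```

The two result sets can then be compared table by table, manually or with a script.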


9
Developing: Databases Unit Testing the Migration
Introduction and Goals
The final task to be performed in the Developing Phase is to test the database. Because everything except the core database design has been transformed, it is very important to thoroughly test every aspect of the Microsoft SQL Server database before release to the production environment. Testing should cover the hardware, the SQL Server installation, security, database objects, data, and performance. The description of the testing process for databases and applications is divided into smaller pieces and spread out between the Developing and Stabilizing Phases. Testing for the integrity and functionality of the migrated database is covered in this chapter. Additional application-based testing of the database is performed as part of testing the application. Performance can be verified only through the benchmarking and piloting that is performed in the Stabilizing Phase.

Objectives of Testing
The following areas should be tested:
- Physical architecture of the database. Have the data files, transaction log, tablespaces, and other items that comprise the database been created correctly? Has the SQL Server instance been properly configured?
- Security. Have the logins and users been correctly created? Have the appropriate permissions been assigned? Is the database physically secure?
- Logical architecture of the database. Have the tables, views, stored procedures, triggers, and other database objects been created successfully?
- Data. Have the contents of the tables been transferred correctly? Do the tables contain the correct numbers of rows? Is the data valid?
- Functionality. Do the stored procedures, triggers, views, and other items comprising Transact-SQL code operate in the same manner as the original Oracle objects?
- Performance. Do response time and throughput meet requirements and match user expectations?


The Testing Process


The database is tested in three different stages of the migration:
- Developing Phase (Database). In this stage, the database is unit tested for all the components that have been migrated, which includes the database architecture, the schema, the users, and the data. This type of testing is covered in the rest of this section.
- Developing Phase (Application). In this stage, the database is tested with respect to application support. This primarily tests that all the objects required by the application are present and perform as expected. The application-related testing is covered in the application development Chapters 11 through 17.
- Stabilizing Phase. In the Stabilizing Phase, more thorough tests are performed for integration, performance, stress, and scalability. This phase is described in Chapter 18.

This section describes the testing process that is followed during the Developing Phase for the database. The testing is interspersed with the steps in the migration of the database. The procedure for migration and testing is illustrated in the flowchart in Figure 9.1.

Figure 9.1
Data migration test procedure


The tasks on the left half, namely, migrating the architecture, migrating the schema, migrating the users, and migrating the data, are part of the database migration and are covered in Chapters 5 through 8. The success of these tasks is validated as database integrity testing and data validation. Details regarding these two types of testing appear under the following headings.

Test Database Integrity


After the database schema has been created in the target database, an integrity test should be performed before starting the data migration. This should cover the physical database structure, the schema, and the schema objects. There is no tool available for comparing the integrity and validity of the SQL Server objects against the original Oracle objects. Unit testing has to rely on the experience and expertise of the database designers or administrators who are performing the mapping. However, a true unit test would be to load data from the source objects and run SQL with identical functionality in the two environments. Any discrepancies should be only a known (explainable) consequence of the migration. The following tasks are performed to ensure database integrity:
- Unit test the schema objects. The type and number of each object in the original Oracle database has to be compared to that in the SQL Server database. Queries can be written in the two databases (Oracle and SQL Server) to produce a count of each type of object. However, the counts in the two solutions may not match because there is no one-to-one equivalency for every object.
- Verify that data elements have been created as per the mapping. The proper transformation of the data types and domain values of the columns, as well as any constraints on the columns (such as NULL and CHECK constraints), have to be verified before performing the data migration. A representative set of data from the original Oracle database can be used for this purpose.
- Verify that all constraints are in place. For each table having a parent-child relationship, referential integrity is checked.
- Verify that functions, triggers, and stored procedures have been accurately transformed. Stored procedures, triggers, and views can be tested by using an identical set of representative data in the source and migrated databases. Execution of test cases against these objects should produce identical results.
- Verify the data access.
Additional indexes may be present in SQL Server to offset features such as fast full index scans in Oracle, which can use the non-leading columns of indexes. It should be verified that all objects in the original Oracle database have been successfully migrated to SQL Server. A complete set of T-SQL statements should be obtained from the application development team and each of them verified for optimal data access paths.

If the application has been migrated, running a few frequently used business transactions will show whether the schema has been successfully migrated. After the correctness of the database structure and its objects is confirmed, data migration can proceed.

Test Security
The security mechanisms available with SQL Server and Windows are significantly different from those used by UNIX and Oracle. The difference in the authentication mechanisms is especially important because it affects the user login process. For example, the requirements of password management functionality, such as aging, locking, and password strength, will require that Windows authentication be used in place of database authentication. The system privileges available in the two databases differ


significantly. You must verify that only authenticated, authorized users have access to the objects in the database. You must also verify that Oracle users and roles have been correctly mapped to SQL Server users and roles, and that all objects in the database have the appropriate access rights granted to them.

Validate Data
The existence of indexes and constraints adversely affects the performance of the data migration. The firing of triggers can corrupt the data and produce undesirable effects. Therefore, these objects have to be disabled or dropped. A negative aspect of these actions is that data is not checked for correctness or completeness during the migration and has to be validated after the migration. No special plans are needed for validating data integrity. The database itself checks the integrity of the data when the constraints are enabled using the WITH CHECK clause. For more information on check constraints in SQL Server, refer to http://msdn.microsoft.com/library/default.asp?url=/library/en-us/tsqlref/ts_aa-az_3ied.asp. The two types of tests that have to be performed to validate the data are discussed in the "Validating the Data Migration" section in Chapter 8. Note One of the common risks involved in data migration is the lack of a proper or complete set of constraints in the database. In many applications, the constraints are built into the application. In such cases, the development team will have to be involved in constructing SQL-based tests to verify data integrity by identifying such rules in the application.

Validate the Migration


After data migration is complete, tests have to be performed on the database as a whole. Testing should cover the database architecture, database objects, data, and users. In addition, the database connectivity, security, and performance have to be tested. For information on client connectivity to the database, refer to http://msdn.microsoft.com/library/default.asp?url=/library/en-us/adminsql/ad_1_client_5er7.asp. A discussion of the security architecture in Oracle and SQL Server is available in Appendix A: "SQL Server for Oracle Professionals." Appendix B: "Getting the Best out of SQL Server 2000 and Windows" contains several references on SQL Server performance.

While the data migration validation can be exhaustive in smaller databases, in very large databases some simpler tests (such as counts using various groupings) may be used. The following checks are recommended:
Develop group functions based on type of data. For example, a business-related check can be implemented by calculating the sum of the balances in all accounts.
Develop group functions based on time.
Check for record counts.
Use the application to compare the summary reports.
Check for ad-hoc control totals. For example, after ledger data of a financial application has been migrated, adding up the account numbers in that ledger has no business validity. However, this ad-hoc control total could reveal whether all the data in each row has been migrated successfully.
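The record-count and control-total checks listed above might look like the following sketch. The OrderMaster table and its columns are assumptions borrowed from the sample schema in Chapter 11; on the Oracle side, the same totals would be computed with EXTRACT(YEAR FROM ORDERDATE) or TO_CHAR in place of YEAR().

```sql
-- Row counts and a business control total grouped by time,
-- to be compared between the source database and the migrated database.
SELECT YEAR(ORDERDATE) AS ORDER_YEAR,
       COUNT(*)        AS ROW_COUNT,
       SUM(FREIGHT)    AS TOTAL_FREIGHT
FROM OrderMaster
GROUP BY YEAR(ORDERDATE)
ORDER BY ORDER_YEAR
```

Any year whose count or total differs between the two databases points at a batch of rows to investigate.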

Solution Guide for Migrating Oracle on UNIX to SQL Server on Windows


The SQL Server Migration Assistant offers a Migration Tester tool that verifies the migrated objects (procedures, functions, and views) by generating its own set of test data. For more information on the Migration Tester, and to download it, refer to http://www.microsoft.com/sql/migration. The beta version of this tool is available as of the date of publishing this solution; version 1.0 of the tool is slated to be available in June 2005.

A test that validates the entire migration is running the production reports and application with production-quality data. The same reports should be executed against the original database and the migrated database using a snapshot (or copy) of production data. Performance and other vital database statistics, such as cache performance and locking, can be gathered.


10
Developing: Applications Introduction
Introduction and Goals
The focus of Chapters 10 through 17 is connecting existing Oracle applications in the UNIX environment to the migrated Microsoft SQL Server database. This chapter serves as an introduction to the application-oriented development chapters. The following list provides an overview of the content in each of the application-oriented development chapters:
Chapter 11, Developing: Applications Oracle SQL and PL/SQL. This chapter provides a detailed discussion of the SQL usage differences between Oracle and SQL Server. Transformation of PL/SQL to T-SQL is also covered.
Chapter 12, Developing: Applications Perl. This chapter discusses the strategies available for migrating a Perl application for use with the SQL Server environment.
Chapter 13, Developing: Applications PHP. This chapter discusses the strategies available for migrating a PHP application for use with the SQL Server environment.
Chapter 14, Developing: Applications Java. This chapter discusses the strategies available for migrating a Java application for use with the SQL Server environment.
Chapter 15, Developing: Applications Python. This chapter discusses the strategies available for migrating a Python application for use with the SQL Server environment.
Chapter 16, Developing: Applications Oracle Pro*C. This chapter discusses the strategies available for migrating a Pro*C application for use with the SQL Server environment.
Chapter 17, Developing: Applications Oracle Forms. This chapter discusses the strategies available for migrating an Oracle Forms application for use with the SQL Server environment.

No application migration will be successful without a clear understanding of the Oracle technologies that are factors during an Oracle database migration to SQL Server. An overview of these basic technologies and specific migration considerations is discussed in the following two sections. The application ecosystem can be broken into the following two tiers:
Data access
Programming language and connectivity tier

Data access comprises the components of the application that interact with the database data. SQL is the language used for retrieving and manipulating the database data. The application developer is responsible for ensuring that applications designed for the Oracle source database work the same way with the migrated SQL Server target database. The changes required vary based on the programming language support and the API libraries available for SQL Server. The migration of SQL and PL/SQL components to T-SQL forms a common requirement, irrespective of the changes to the rest of the application. The migrated applications then have to be tested against the migrated SQL Server database to ensure their original functionality. From an application developer's perspective, Oracle and SQL Server manage data in similar ways. The internal differences between Oracle and SQL Server are significant, but if managed properly, these differences will have minimal impact on a migrated application.

Application Migration Strategies


There are four different strategies for application migration that are discussed throughout the development chapters. These strategies include:
Interoperation. The application components remain in the UNIX environment.
Port or rewrite the application to the Microsoft .NET platform. The application is migrated to run on the .NET Framework.
Port or rewrite the application to the Microsoft Win32 platform. The application is migrated to run in the Windows environment using the Win32 API.
Quick port using Windows Services for UNIX 3.5. The application is ported to use Interix within the Windows environment.

Each of the four strategies is discussed in more detail under the following headings. One, some, or all of these solution design strategies form the basis of the solution concept that was completed during the Envisioning Phase. Even when the goal is to standardize on the Windows platform, some of these strategies may be employed as intermediary steps in a multi-phased migration. In general, .NET development is favored over Win32, but a number of recommendations in the following chapters provide guidance on migrating to a Win32 solution. This information is provided in cases where a Win32 migration is the most cost-effective and straightforward development route. For more information on these migration strategies, see "Application and Database Migration Solution Design Strategy" in Chapter 2, "Envisioning Phase."

Interoperation
In this strategy, the application remains on the UNIX platform, while only the database is migrated to SQL Server on the Windows platform. This strategy is employed when minor changes can be made to the source code, connection strings, or the connectivity layer to allow for the change in the back-end database. Often, an interoperation strategy is used as an interim step in a phased migration. It allows the application to be quickly connected to the solution's SQL Server database with minimal risk or effort. However, this would not be an adequate solution if one of the business requirements for the migration project is elimination of the UNIX environment.

Port or Rewrite to .NET Framework


The .NET Framework is the next-generation platform from Microsoft for building, deploying, and running Windows applications. It provides a highly productive, standards-based, multi-language environment for integrating existing investments with next-generation applications and services. When porting or rewriting, migration to the .NET platform is preferred because it is the current development platform for Windows applications.

When migrating an application to the .NET Framework, there are two different options: perform a full port of the application, or rewrite the application using the .NET application programming interfaces (APIs). As of this writing, with the increased adoption of the .NET Framework in the industry, the .NET APIs available for application porting are expanding. For some languages, the drivers are in beta, with support being improved to include all functionality. Verify the stability of the library and its support for all the function calls needed for SQL Server when picking a driver.

A rewrite of the application is required when there is no support on the .NET platform for the language in use. An example of this would be an application based on Oracle Forms. The best option in this case would be to rewrite the application on Windows using Visual Basic .NET. Rewriting an application can pose challenges: it can be a time-consuming, risky, and costly option. The risk inherent in a rewrite is that the business logic or an important piece of functionality is inadvertently changed while rewriting the code. Careful testing and stabilization of the rewritten application needs to be conducted to ensure proper business logic and functionality.

The following chapters provide detailed information on database connectivity and the code changes needed to migrate to the SQL Server solution. Rewriting an application requires additional knowledge beyond the scope of this guide. Where appropriate, this guide provides references to more information about rewriting.

Port or Rewrite to Win32


The Win32 API is the primary programming interface to the Windows operating system family, including Windows Server 2003. The Win32 API provides functions for processes, memory management, security, windowing, and graphics. Considering the benefits of migration to the .NET Framework, the only reason to migrate to the Win32 platform is that migration to .NET is not technically feasible for your organization.

As with the previous strategy, two migration options are available when moving to a Win32-based platform: port the application, or rewrite it using the Win32 APIs. A port can be used for most applications that have been developed in Java, Perl, PHP, or Python, which are available on both the UNIX and Windows platforms. APIs are available for SQL Server in a majority of cases. A port requires minimal changes to the source code, and it uses UNIX-standard compatible libraries and utilities that exist on the Windows platform. Compared to a port, the source code in a rewrite requires considerable changes to operate in the new environment. When possible, consider rewriting the application using .NET instead of Win32.


Quick Port Using Windows Services for UNIX


Applications on UNIX can be rehosted to an environment on Windows (Interix) that is similar to UNIX. Interix is a complete, POSIX-compliant (Portable Operating System Interface) development environment that is tightly integrated with the Windows kernel. It is part of the Windows Services for UNIX 3.5 product suite from Microsoft. One of the quickest migration paths possible is to port the code directly to Windows Services for UNIX. Interix allows native UNIX applications and scripts to work alongside Windows applications. The best way to view Interix is as a POSIX-compliant version of UNIX built on top of the Windows kernel. Importantly, Interix is not an emulation of the UNIX environment on top of the Windows APIs.

Migration using Windows Services for UNIX involves obtaining the source code, installing it in the Interix development environment, modifying the configuration scripts and makefiles, and recompiling the application. This strategy is referred to as a quick port. As with the other migration strategies, the data access (SQL, PL/SQL) has to be migrated to target a SQL Server database.

A quick port is preferred if the client application is closely tied to UNIX utilities that are not available on Windows. This approach cannot rely on special libraries for which the source code is not available. Also, quick ports using Windows Services for UNIX cannot take advantage of most standard Windows functionality. The ported application may be successful immediately, or it may require modifications to account for the new hardware platform, the target operating system, and local configuration information. It may not be possible to determine whether a quick port is feasible without actually conducting the port. Extensive testing of the application is essential after the port to ensure that all features have been migrated successfully.

Scenarios and Cases


In the following chapters, these migration strategies are discussed in relation to common application languages. All four strategies may not be available, feasible, or desirable for each of the languages. For example, performing a quick port using Windows Services for UNIX is not discussed in the Perl chapter because Perl can run natively on the Windows platform. Because the facilities available to Perl on Windows (including .NET) are much richer than those available to Perl on Windows Services for UNIX, migrating Perl applications to a native Windows environment is preferred.

Throughout Chapters 11 through 17, the migration strategies are divided into specific scenarios that may be similar to situations that you will encounter. Each scenario may be split into individual cases that detail the various options available for that scenario. For example, the migration strategy of porting may best apply to your situation. In the scenario of porting PHP applications to run in the Windows environment, three cases are discussed based on the database driver being utilized. These individual cases include detailed information regarding the use of the ORA, OCI8, or ODBC database drivers to maintain connectivity in the solution.


11
Developing: Applications Migrating Oracle SQL and PL/SQL
Introduction
Although Oracle and Microsoft SQL Server both use the ANSI SQL language, each uses proprietary extensions to add functionality. Oracle uses PL/SQL, while SQL Server uses Transact-SQL (T-SQL). This chapter focuses on application code that may need to be modified for use with SQL Server because of these extensions. This chapter should be used in conjunction with Chapters 12 through 17, which provide information for application-specific languages, including Perl, PHP, Java, Python, Pro*C, and Oracle Forms. No application migration will be successful without a clear understanding of the Oracle technologies that come into play during an Oracle database migration to the SQL Server platform. The strategy for application conversion is broken down into the following two segments:
Data access, which is discussed in this chapter.
Programming language and connectivity tier, which are discussed in Chapters 12 through 17.

Migrating Data Access


The application conversion development team is responsible for ensuring that applications designed for the Oracle source database work the same way with the migrated SQL Server target database. The application migration process may appear complicated. Before migrating the Oracle production database, the DBA or application developer should install a SQL Server test database. Applications can then be modified and tested, if necessary, to ensure their original functionality with the SQL Server database.

Testing is necessary because of the core differences between Oracle and SQL Server. There are some architectural differences between the two RDBMSs. The terminology used to describe Oracle architecture often has a completely different meaning in Microsoft SQL Server; for example, the term database has a different meaning. Additionally, both Oracle and SQL Server have made many proprietary extensions to the SQL-92 standard. These extensions are discussed throughout this chapter.

From an application developer's perspective, Oracle and SQL Server manage data in similar ways. The internal differences between Oracle and SQL Server are significant, but if managed properly, they have minimal impact on a migrated application. The most significant migration issue that confronts the developer is the implementation of the SQL-92 language standard and the extensions that each RDBMS has to offer.

Some developers use only standard SQL language statements, preferring to keep their program code as generic as possible. Generally, this means restricting program code to the entry-level SQL-92 standard, which is implemented consistently across many database products, including Oracle and SQL Server. Using RDBMS-specific extensions might produce unneeded complexity in the program code during the migration. For example, Oracle's DECODE function is a nonstandard SQL extension specific to Oracle, and the CASE expression in SQL Server is not implemented in all database products. Both the Oracle DECODE and the SQL Server CASE expressions can perform sophisticated conditional evaluations from within a query. The alternative to using these functions is to perform the evaluation programmatically, which could require substantially more data to be retrieved from the RDBMS.

SQL has seen several sets of standards implemented and embraced by the American National Standards Institute (ANSI) and the International Organization for Standardization (ISO). Although there are several enhancement releases of each standard set, the major releases are SQL-86, SQL-89 (also called SQL1), SQL-92 (also called SQL2), and SQL-99 (also called SQL3), with SQL-99 merging the ANSI and ISO standards into one set. There are four levels or sets of features in SQL-92: entry-level, transitional, intermediate, and full. SQL Server 2000 Transact-SQL complies with the entry level of the SQL-92 standard, and it supports many additional features from the intermediate and full levels of the standard. SQL-99 has two sets of features: core and non-core. Oracle 9i fully supports a majority of the core SQL-99 features, and it partially supports the non-core features. The SQL-92 entry-level feature set is very similar to the SQL-99 core feature set, which means Oracle 9i also supports the entry-level features of SQL-92.

Even though both support many of the standard features, Oracle and SQL Server do not use the same syntax. In addition, each RDBMS has its own unique functionality and extensions. Oracle SQL statements and Microsoft SQL Server T-SQL are compatible with each other in several areas with minimal changes in syntax. All the basic and advanced functionality furnished by Oracle can be achieved in SQL Server with ease. Though both of these RDBMSs can be used to build robust and efficient systems, they differ radically in administrative functions and platform dependence. This section shows how to arrive at T-SQL equivalents for the most common Oracle SQL usages, and it includes comprehensive coverage of all aspects of the SQL language implementation.
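As a concrete illustration of the DECODE and CASE difference mentioned above, the following sketch shows an Oracle DECODE call and one possible T-SQL equivalent. The DISCONTINUED column and PRODUCT table come from the sample schema defined later in this chapter, and the status labels are invented for the example.

```sql
-- Oracle: DECODE compares DISCONTINUED against each search value in turn.
-- SELECT PRODUCTNAME,
--        DECODE(DISCONTINUED, 1, 'Discontinued', 0, 'Active', 'Unknown')
-- FROM PRODUCT;

-- SQL Server: the equivalent simple CASE expression.
SELECT PRODUCTNAME,
       CASE DISCONTINUED
            WHEN 1 THEN 'Discontinued'
            WHEN 0 THEN 'Active'
            ELSE 'Unknown'
       END AS STATUS
FROM Product
```

Because CASE is part of the SQL standard, this rewrite also works on other database products that implement it, whereas DECODE is Oracle-specific.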


Sample Tables
The following tables are used in all the examples in this chapter.

Table 11.1: Category Table

COLUMN_NAME | DATATYPE | CONSTRAINT
CATEGORYID | VARCHAR(10) | Primary Key
CATEGORYNAME | VARCHAR(40)
DESCRIPTION | VARCHAR(50)

Table 11.2: Customer Table

COLUMN_NAME | DATATYPE | CONSTRAINT
CUSTOMERID | VARCHAR(10) | Primary Key
COMPANYNAME | VARCHAR(40)
CONTACTNAME | VARCHAR(40)
CONTACTTITLE | VARCHAR(40)
ADDRESS | VARCHAR(50)
CITY | VARCHAR(30)
REGION | VARCHAR(30)
POSTALCODE | VARCHAR(10)
COUNTRY | VARCHAR(30)
PHONE | NUMERIC(10,0)
FAX | NUMERIC(10,0)

Table 11.3: Employee Table

COLUMN_NAME | DATATYPE | CONSTRAINT
EMPLOYEEID | VARCHAR(10) | Primary Key
LASTNAME | VARCHAR(40)
FIRSTNAME | VARCHAR(40)
TITLE | VARCHAR(40)
TITLEOFCOURTESY | VARCHAR(50)
BIRTHDATE | DATETIME
HIREDATE | DATETIME
ADDRESS | VARCHAR(40)
CITY | VARCHAR(30)
POSTALCODE | NUMERIC(10,0)
COUNTRY | VARCHAR(10)
REPORTINGTO | VARCHAR(10) | Foreign Key

Table 11.4: OrderMaster Table

COLUMN_NAME | DATATYPE | CONSTRAINT
ORDERID | VARCHAR(10) | Primary Key
CUSTOMERID | VARCHAR(10) | Foreign Key
EMPLOYEEID | VARCHAR(10) | Foreign Key
ORDERDATE | DATETIME
REQUIREDDATE | DATETIME
SHIPPEDDATE | DATETIME
SHIPVIA | VARCHAR(30)
FREIGHT | NUMERIC(10,0)
SHIPNAME | VARCHAR(30)
SHIPADDRESS | VARCHAR(30)
SHIPCITY | VARCHAR(10)
SHIPREGION | VARCHAR(30)
SHIPPOSTALCODE | VARCHAR(10)

Table 11.5: OrderDetails Table

COLUMN_NAME | DATATYPE | CONSTRAINT
ORDERID | VARCHAR(10) | Foreign Key
PRODUCTID | VARCHAR(10) | Foreign Key
UNITPRICE | NUMERIC(10,2)
QUANTITY | NUMERIC(10,0)
DISCOUNT | NUMERIC(10,0)

Table 11.6: OrderPrice Table

COLUMN_NAME | DATATYPE | CONSTRAINT
ORDERID | VARCHAR(10) | Foreign Key
PRODUCTID | VARCHAR(10) | Foreign Key
REVISEDPRICE | NUMERIC(10,2)
REVISEDON | DATETIME

Table 11.7: Product Table

COLUMN_NAME | DATATYPE | CONSTRAINT
PRODUCTID | VARCHAR(10) | Primary Key
PRODUCTNAME | VARCHAR(30)
SUPPLIERID | VARCHAR(30) | Foreign Key
CATEGORYID | VARCHAR(10) | Foreign Key
QUANTITYPERUNIT | NUMERIC(10,0)
UNITPRICE | NUMERIC(10,2)
UNITINSTOCK | NUMERIC(10,0)
UNITSONORDER | NUMERIC(10,0)
REORDERLEVEL | NUMERIC(10,0)
DISCONTINUED | NUMERIC(10,0)

Table 11.8: Shippers Table

COLUMN_NAME | DATATYPE | CONSTRAINT
SHIPPERID | VARCHAR(10) | Primary Key
COMPANYNAME | VARCHAR(30)
PHONE | NUMERIC(10,0)

Table 11.9: Suppliers Table

COLUMN_NAME | DATATYPE | CONSTRAINT
SUPPLIERID | INT IDENTITY | Primary Key
COMPANYNAME | VARCHAR(30)
CONTACTNAME | VARCHAR(30)
CONTACTTITLE | VARCHAR(40)
ADDRESS | VARCHAR(40)
CITY | VARCHAR(30)
POSTALCODE | NUMERIC(10,0)
COUNTRY | VARCHAR(40)
PHONE | NUMERIC(10)
FAX | NUMERIC(10)

Table 11.10: ShippedOrders Table

COLUMN_NAME | DATATYPE | CONSTRAINT
ORDERID | VARCHAR(10) | Foreign Key
CUSTOMERID | VARCHAR(10) | Foreign Key
EMPLOYEEID | VARCHAR(10) | Foreign Key
ORDERDATE | DATETIME
REQUIREDDATE | DATETIME
SHIPPEDDATE | DATETIME
SHIPVIA | VARCHAR(30)
FREIGHT | NUMERIC(10,0)
SHIPNAME | VARCHAR(30)
SHIPADDRESS | VARCHAR(30)
SHIPCITY | VARCHAR(10)
SHIPREGION | VARCHAR(30)
SHIPPOSTALCODE | VARCHAR(10)
Migration Process Overview


Recommended high-level steps for the migration process include:
1. Extraction of data access. Identify the SQL statements used in the application and make sure they will work with SQL Server. If some SQL statements are database-specific, rewrite them for SQL Server.
2. Transaction management. Accommodate changes in transaction behavior caused by the migration.
3. Fetch strategy. Pay special attention to cursor management and rewrite it, where possible in a simpler way, by making use of the effective cursor management available in SQL Server 2000. Try to avoid the cursors used by the Oracle application if possible.
4. Subprograms conversion. Identify all the procedures, functions, and triggers and rewrite them in T-SQL.
5. Job scheduling. Batch jobs that are written in PL/SQL should be rewritten.
6. Interface file conversion. For any inbound (text file to table) jobs, pay attention to the interface file and make sure there are no problems with the current format of the interface file. Pay attention to any outbound interfaces as well. Keep in mind that the date formats used by Oracle and SQL Server are different.
7. Workflow automation. If workflow automation is implemented in the application, the creation of mail IDs is required.
8. Performance tuning. Tune the T-SQL statements wherever required.

During these steps, the developer should look at the application code and start converting Oracle-specific code into T-SQL-compatible code. All major Oracle commands, and how to convert them into T-SQL, are discussed in detail throughout this chapter. Oracle and SQL Server both provide extensive support for XML, whose migration is not within the scope of this guide. For details on the XML support available in SQL Server, refer to http://msdn.microsoft.com/library/default.asp?url=/library/en-us/architec/8_ar_cs_9oj8.asp.
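Step 6 notes that the default date formats differ between the two products. As a hedged sketch of how an interface file can sidestep that difference (exact default output depends on session and language settings), an explicit, unambiguous format can be produced on each side:

```sql
-- Oracle: TO_CHAR with an explicit format mask.
-- SELECT TO_CHAR(ORDERDATE, 'YYYY-MM-DD HH24:MI:SS') FROM ORDERMASTER;

-- SQL Server: CONVERT with style 120 produces the same yyyy-mm-dd hh:mi:ss form.
SELECT CONVERT(VARCHAR(19), ORDERDATE, 120)
FROM OrderMaster
```

Writing dates in one fixed format on both sides keeps inbound and outbound interface files loadable without per-system date parsing.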

Step 1: Extraction of Data Access


The first step is to identify the SQL used in the application. SQL includes Data Manipulation Language (DML), Data Query Language (DQL or, simply, queries), and operators. The majority of SQL syntax is common between Oracle and SQL Server, with minor variations. There are several different types of information in the SQL code that will need to be identified. If these types do not comply with the specifications and syntax of SQL Server, this information will need to be modified. An in-depth discussion of the following topics is included in this section:
Operators
Functions
Queries
Data Manipulation Language (DML)

Operators
Operators are used for many different purposes in SQL. Table 11.11 displays these operators by type.

Table 11.11: Commonly Used Operators

Operator Type | Example
Arithmetic | +, -, *, /
Comparison | =, <>, <=, >=
Logical | AND, OR, NOT


These operators may play a role in the SELECT clause of a query (for reporting purposes), or they may be used in a PL/SQL block for performing various calculations to complete the transaction. The information under the following headings describes how each of the operators is used in Oracle and SQL Server.

Operator Precedence
When multiple arithmetic operators are used in a single query, the operation is processed according to the precedence of the operators. Multiplication (*), division (/), and modulo (%) share the highest precedence, followed by addition (+), concatenation (+), and subtraction (-). Operators of equal precedence are evaluated from left to right, and expressions contained within parentheses are evaluated first.
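These rules can be checked with a few literal expressions. The T-SQL statements below run as shown; the same expressions return identical values in Oracle when a FROM DUAL clause is appended.

```sql
SELECT 2 + 3 * 4     -- * binds tighter than +: returns 14
SELECT (2 + 3) * 4   -- parentheses are evaluated first: returns 20
SELECT 13 % 4 * 2    -- equal precedence, left to right: (13 % 4) * 2 returns 2
```

Relying on explicit parentheses rather than precedence rules makes migrated expressions easier to verify.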

Concatenation Operators
Concatenation is performed differently in SQL Server and Oracle. The concatenation operator for character strings used in Oracle is ||, or the CONCAT(string1, string2) function. The following example shows the syntax for each usage. Each of these statements will produce the same result in Oracle.
SELECT FIRSTNAME || LASTNAME NAME FROM EMPLOYEE;
SELECT CONCAT(FIRSTNAME, LASTNAME) NAME FROM EMPLOYEE;

In these statements, the first name is concatenated with the last name, and the label for the field is NAME. SQL Server uses a different operator for concatenation: the concatenation operator for character strings in SQL Server is +. The following example shows the first name concatenated with the last name, with a space in between the names. The label for the field is NAME.
SELECT FIRSTNAME + ' ' + LASTNAME AS NAME FROM EMPLOYEE
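One behavioral difference is worth noting when converting concatenation: Oracle's || treats NULL as an empty string, while in SQL Server, + with a NULL operand yields NULL by default (when CONCAT_NULL_YIELDS_NULL is ON). A hedged sketch of a defensive rewrite, assuming the Customer sample table from this chapter and that REGION may hold NULLs:

```sql
-- Guard nullable operands with ISNULL so one NULL column
-- does not turn the whole concatenated result into NULL.
SELECT ADDRESS + ', ' + ISNULL(REGION, '') + ', ' + COUNTRY AS FULLADDRESS
FROM Customer
```

Without the ISNULL guard, every row with a NULL REGION would return a NULL FULLADDRESS, silently changing the Oracle application's behavior.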

Comparison Operators
Conditions are evaluated using comparison operators, including =, >, >=, <, and <=. These operators are common to both Oracle and SQL Server, with the exception of the inequality operator. Generally, Oracle developers use != (and rarely ^=) instead of <> as an inequality operator. Though != can be used in SQL Server, <> is recommended because it is the ANSI standard. Oracle and SQL Server both have the IS NULL and IS NOT NULL clauses for comparisons involving null values. The use of IS NULL and IS NOT NULL will ensure behavior consistent with that of Oracle, irrespective of the setting of the ANSI_NULLS option.
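The point about IS NULL can be made concrete. Under the default SET ANSI_NULLS ON, a comparison with NULL using = or <> evaluates to UNKNOWN, so no rows qualify; IS NULL behaves consistently regardless of the setting. A sketch against the Customer sample table defined earlier in this chapter:

```sql
-- Returns no rows when ANSI_NULLS is ON, even if REGION holds NULLs:
SELECT CUSTOMERID FROM Customer WHERE REGION = NULL

-- Portable and setting-independent, and consistent with Oracle:
SELECT CUSTOMERID FROM Customer WHERE REGION IS NULL
```

Migrated Oracle code that used = NULL or != NULL should be rewritten with IS NULL and IS NOT NULL rather than relying on the ANSI_NULLS setting.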

Modulo Operator
The modulo function returns the remainder when two numbers are divided. Oracle uses the MOD function, as shown in the following example:
SELECT MOD(10,3) FROM DUAL;

This example returns 1. SQL Server uses "%" to perform this operation. The following example is the same statement written for SQL Server:
SELECT 10%3

Logical Operators
Like comparison operators, logical operators are the same in Oracle and SQL Server. Table 11.12 reviews logical operators.

Table 11.12: Common Logical Operators in Oracle and SQL Server

Operator | Meaning
ALL | TRUE if all of a set of comparisons are TRUE
AND | TRUE if both Boolean expressions are TRUE
ANY | TRUE if any one of a set of comparisons is TRUE
BETWEEN | TRUE if the operand is within a range
EXISTS | TRUE if a subquery contains any rows
IN | TRUE if the operand is equal to one of a list of expressions
LIKE | TRUE if the operand matches a pattern
NOT | Reverses the value of any other Boolean operator
SOME | TRUE if some of a set of comparisons are TRUE
IS NULL | TRUE if the operand is NULL
IS NOT NULL | TRUE if the operand is not NULL
OR | TRUE if either Boolean expression is TRUE
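As a brief illustration of two of these operators, using the Customer and OrderMaster sample tables defined earlier in this chapter, both queries below return the customers that have at least one order; the syntax is the same in Oracle and SQL Server.

```sql
-- EXISTS: TRUE if the correlated subquery returns any rows.
SELECT COMPANYNAME
FROM Customer c
WHERE EXISTS (SELECT 1 FROM OrderMaster o WHERE o.CUSTOMERID = c.CUSTOMERID)

-- IN: TRUE if CUSTOMERID matches any value returned by the subquery.
SELECT COMPANYNAME
FROM Customer
WHERE CUSTOMERID IN (SELECT CUSTOMERID FROM OrderMaster)
```

Because these operators are common to both products, such predicates usually migrate without change.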

Datatype Precedence
Each column value and constant in a SQL statement has a datatype that is associated with a specific storage format, constraints, and a valid range of values. When a table is created, a datatype is specified for each column. Arithmetic operations performed on columns and constants of different datatypes, such as INT and SMALLINT, are called mixed-mode arithmetic operations. In mixed-mode arithmetic operations, the lower datatype value is converted into the higher datatype value according to datatype precedence. Datatype conversion is often needed to attain compatibility during calculation, concatenation, or comparison of values.
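A small sketch of how precedence-driven implicit conversion plays out in T-SQL; the same principle (the lower type is promoted to the higher) applies in Oracle, although its type hierarchy differs.

```sql
SELECT 5 / 2                 -- both operands INT: integer division, returns 2
SELECT 5 / 2.0               -- DECIMAL outranks INT, so 5 is converted first
SELECT 1 + CAST('2' AS INT)  -- an explicit CAST avoids relying on implicit conversion
```

When migrating calculations, adding explicit CAST or CONVERT calls makes the intended datatype visible instead of depending on each product's precedence rules.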

Functions
In many places, built-in functions are used for conversion and for other purposes. Some commonly used functions are defined and discussed here with examples provided for their use. Some functions that are available in Oracle are not available in SQL Server 2000 or earlier versions. However, those functions can be written in SQL Server with the same name and the same functionality. They are called user-defined functions. Some user-defined functions are described later in this chapter.

Number and Mathematical Functions


Table 11.13 details the differences in syntax and usage between Oracle and SQL Server. These functions are all mathematically based.

Table 11.13: Mathematical Functions in Oracle and SQL Server

Oracle Function | Description | Oracle Example | SQL Server Function | SQL Server Example
ABS | Returns the absolute value of n | ABS(-20) | ABS | ABS(-20)
ACOS | Returns the arc cosine of n; n must be between -1 and 1 | ACOS(.4) | ACOS | ACOS(.4)
CEIL | Returns the smallest integer greater than or equal to n | CEIL(12.3) returns 13 | CEILING | CEILING(12.3) returns 13
FLOOR | Returns the largest integer less than or equal to n | FLOOR(12.3) returns 12 | FLOOR | FLOOR(12.3) returns 12
MOD | Returns the remainder of m divided by n | MOD(12,5) returns 2 | % (modulo operator) | 12%5 returns 2
POWER | Returns the value of argument 1 raised to the power of argument 2 | POWER(10,5) returns 100000 | POWER | POWER(10,5) returns 100000
ROUND | Returns the nearest value of the decimal as per the precision specified | ROUND(10.125,2) returns 10.13 | ROUND | ROUND(10.125,2) returns 10.130
TRUNC | Truncates the decimal | TRUNC(12.54) returns 12 | CONVERT | CONVERT(INTEGER, 12.54) returns 12
Character Functions
Table 11.14 details the differences in syntax and usage between Oracle and SQL Server. These functions all operate on text strings and characters.

Table 11.14: Character Functions in Oracle and SQL Server

| Oracle Function | Description | Oracle Example | SQL Server Equivalent | SQL Server Example |
|---|---|---|---|---|
| CHR | Returns the character for the ASCII value | CHR(65) returns A | CHAR | CHAR(65) returns A |
| CONCAT | Appends two string values | CONCAT('Tom','Mike') returns TomMike | + (string concatenation) | 'Tom' + 'Mike' returns TomMike |
| INITCAP | Converts a string to title case | INITCAP('terry adams') returns Terry Adams | No equivalent function | A user-defined function can be written for equivalent functionality |
| LOWER | Returns the lowercase of the string | LOWER('TIM') returns tim | LOWER | LOWER('TIM') returns tim |
| LTRIM | Trims the leading spaces of a string | LTRIM(' TIM') returns 'TIM' | LTRIM | LTRIM(' TIM') returns 'TIM' |
| REPLACE | Replaces a matching string with a new string in a given string | REPLACE('TIM','I','o') returns ToM | REPLACE | REPLACE('TIM','I','o') returns ToM |
| RPAD | Pads characters to the right side of the string | RPAD('USA',5,'*') returns USA** | No equivalent function | A user-defined function can be written for equivalent functionality |
| LPAD | Pads characters to the left side of the string | LPAD('USA',5,'*') returns **USA | No equivalent function | A user-defined function can be written for equivalent functionality |
| SOUNDEX | Lets you compare words that are spelled differently but sound alike | SOUNDEX('CHARLOTTE') is equal to SOUNDEX('CHARLOTE') | SOUNDEX | SOUNDEX('CHARLOTTE') is equal to SOUNDEX('CHARLOTE') |
| SUBSTR | Takes only a few characters from a string | SUBSTR('Bikes',1,4) returns Bike | SUBSTRING | SUBSTRING('Bikes',1,4) returns Bike |
| TRANSLATE | Translates a character string into another form; sometimes used for low-level obfuscation | — | No equivalent function | A user-defined function can be written for equivalent functionality |
| TRIM | Removes the leading and trailing spaces of a string | TRIM(' THIS IS TEST ') returns 'THIS IS TEST' | LTRIM and RTRIM combined | RTRIM(LTRIM(' THIS IS TEST ')) returns 'THIS IS TEST' |
| UPPER | Returns the uppercase of the string | UPPER('tim') returns TIM | UPPER | UPPER('tim') returns TIM |
| ASCII | Returns the ASCII value of the character | ASCII('A') returns 65 | ASCII | ASCII('A') returns 65 |
| INSTR | Finds the location of a substring or a character inside a string | INSTR('NORTH CAROLINA','OR',1) returns 2 | CHARINDEX | CHARINDEX('OR','NORTH CAROLINA',1) returns 2 |
| LENGTH | Returns the length of a given string | LENGTH('TOM') returns 3 | LEN | LEN('TOM') returns 3 |
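Where the table notes that an INITCAP equivalent must be hand-written, a user-defined function can fill the gap. A minimal sketch follows; this implementation is an assumption, not part of the original guide:

```sql
-- Walks the string, uppercasing the first character of each word
-- (a character following a space) and lowercasing the rest.
CREATE FUNCTION dbo.INITCAP (@str VARCHAR(4000))
RETURNS VARCHAR(4000)
AS
BEGIN
    DECLARE @i INT, @out VARCHAR(4000), @c CHAR(1), @newword BIT
    SELECT @i = 1, @out = '', @newword = 1
    WHILE @i <= LEN(@str)
    BEGIN
        SET @c = SUBSTRING(@str, @i, 1)
        IF @newword = 1
            SET @out = @out + UPPER(@c)
        ELSE
            SET @out = @out + LOWER(@c)
        SET @newword = CASE WHEN @c = ' ' THEN 1 ELSE 0 END
        SET @i = @i + 1
    END
    RETURN @out
END
```

With this function, SELECT dbo.INITCAP('terry adams') returns Terry Adams, mirroring the Oracle example in Table 11.14.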

Date Functions
Table 11.15 details the differences in syntax and usage between Oracle and SQL Server. These functions are all date-based.

Table 11.15: Date Functions in Oracle and SQL Server

| Oracle Function | Description | Oracle Example | SQL Server Equivalent | SQL Server Example |
|---|---|---|---|---|
| ADD_MONTHS | Adds a number of months to a given date | ADD_MONTHS('15-NOV-2004', 1) returns '15-DEC-2004' | DATEADD | DATEADD(MM, 1, '15-NOV-2004') returns '15-DEC-2004' |
| NEXT_DAY | Returns the first named weekday later than the passed date | NEXT_DAY(SYSDATE, 'FRIDAY') returns the date of the coming Friday | No equivalent function | A user-defined function can be written for equivalent functionality |
| SYSDATE | Returns the current date | SYSDATE returns the current date | GETDATE() | GETDATE() returns the current date with time stamp |
| TO_CHAR | Used when an INTEGER or DATE value needs to be converted into a string | TO_CHAR(SYSDATE, 'MM/DD/YY') | CAST or CONVERT | CONVERT(CHAR, GETDATE(), 1) returns the date in MM/DD/YY format |
| TO_DATE | Converts any given string in a valid date format into the DATE datatype | TO_DATE('12-MAY-2003', 'DD-MON-YYYY') | CAST | CAST('12-MAY-2003 12:00:00' AS DATETIME) |
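For NEXT_DAY, where Table 11.15 notes that no built-in equivalent exists, the date of the next occurrence of a given weekday can be computed with DATEPART and DATEADD. The following is a minimal sketch, assuming the U.S. default DATEFIRST setting, under which DATEPART(dw, ...) returns 6 for Friday:

```sql
-- Equivalent of NEXT_DAY(SYSDATE, 'FRIDAY'): always a strictly later date,
-- so if today is Friday the result is next week's Friday.
SELECT DATEADD(day,
               ((6 - DATEPART(dw, GETDATE()) + 6) % 7) + 1,
               GETDATE())
```

A general-purpose user-defined function would also need to account for non-default DATEFIRST settings.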

The CONVERT function is used in SQL Server to change the date format. This is equivalent to the TO_CHAR function of Oracle. CONVERT takes three parameters: the first is the target datatype, the second is the column or expression to be formatted, and the third is the desired format style. The styles and the corresponding formats are listed in Table 11.16.

Table 11.16: CONVERT Function Formats in SQL Server

| Style (with century, YYYY) | Input/Output |
|---|---|
| 0 or 100 (*) | MON DD YYYY HH:MIAM (or PM) |
| 101 | MM/DD/YY |
| 102 | YY.MM.DD |
| 103 | DD/MM/YY |
| 104 | DD.MM.YY |
| 105 | DD-MM-YY |
| 106 | DD MON YY |
| 107 | MON DD, YY |
| 108 | HH:MM:SS |
| 9 or 109 (*) | MON DD YYYY HH:MI:SS:MMMAM (or PM) (default + milliseconds) |
| 110 | MM-DD-YY |
| 111 | YY/MM/DD |
| 112 | YYMMDD |
| 13 or 113 (*) | DD MON YYYY HH:MM:SS:MMM (24h) |
| 114 | HH:MI:SS:MMM (24h) |
| 20 or 120 (*) | YYYY-MM-DD HH:MI:SS (24h) |
| 21 or 121 (*) | YYYY-MM-DD HH:MI:SS.MMM (24h) |
| 130 (*) | DD MON YYYY HH:MI:SS:MMMAM |
| 131 (*) | DD/MM/YY HH:MI:SS:MMMAM |
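As a brief sketch of how the style parameter is used (the output shapes follow Table 11.16; the exact values depend on the current date):

```sql
-- The third CONVERT parameter selects the output format style.
SELECT CONVERT(CHAR(8),  GETDATE(), 1)      -- MM/DD/YY
SELECT CONVERT(CHAR(10), GETDATE(), 101)    -- MM/DD/YYYY (with century)
SELECT CONVERT(CHAR(8),  GETDATE(), 112)    -- YYYYMMDD
SELECT CONVERT(VARCHAR(23), GETDATE(), 121) -- YYYY-MM-DD HH:MI:SS.MMM
```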

Conditional Functions
Conditional functions are used to compare values or to evaluate a Boolean expression. Table 11.17 compares these functions between Oracle and SQL Server.

Table 11.17: Conditional Functions in Oracle and SQL Server

| Oracle Function | Description | Oracle Example | SQL Server Equivalent | SQL Server Example |
|---|---|---|---|---|
| NVL | Returns a default value if the expression is null | NVL(SALARY, 0) returns 0 if the SALARY column value is null | ISNULL | ISNULL(SALARY, 0) returns 0 if the SALARY column value is null |
| NVL2 | Returns one value if the expression is not null and another value if it is null | NVL2(SALARY, SALARY*2, 0) returns two times the salary if not null, and 0 if null | CASE | CASE WHEN SALARY IS NULL THEN 0 ELSE SALARY*2 END |
| NULLIF | Returns a null value if the two specified expressions are equivalent | NULLIF(SYSDATE, SYSDATE) returns NULL | NULLIF | NULLIF(GETDATE(), GETDATE()) returns NULL |
| DECODE | Used to evaluate values with "if-else" logic | DECODE(DISCONTINUED, 0, 'No', 1, 'Yes', 'NA') | CASE | CASE DISCONTINUED WHEN 0 THEN 'No' WHEN 1 THEN 'Yes' ELSE 'NA' END |
| CASE | Used to evaluate values with "if-else" logic | CASE DISCONTINUED WHEN 0 THEN 'No' WHEN 1 THEN 'Yes' ELSE 'NA' END | CASE | CASE DISCONTINUED WHEN 0 THEN 'No' WHEN 1 THEN 'Yes' ELSE 'NA' END |
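One related function worth noting: besides ISNULL, SQL Server also supports the ANSI-standard COALESCE, which returns the first non-null value in its argument list and can replace chained NVL calls (the EMPLOYEE table and its columns below are illustrative only):

```sql
-- COALESCE returns the first non-null argument.
SELECT COALESCE(NULL, NULL, 0)   -- returns 0

-- Oracle's NVL(NVL(BONUS, COMMISSION), 0) becomes:
SELECT COALESCE(BONUS, COMMISSION, 0) FROM EMPLOYEE
```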

Queries
The SELECT statement is used to query the database and retrieve data. SELECT can be combined with some DDL and DML statements to perform relational operations. The following clauses commonly used with SELECT in Oracle and SQL Server are discussed in the remainder of this section:
- Simple queries
- Joins
- Database links
- Group by
- Case
- Set operators
- Rownum

Optimizer hints are not covered in this guidance. Both Oracle and SQL Server use cost-based optimizers and offer hints that can be used to influence the optimizer. The optimizer hints used with Oracle are not available in SQL Server. For the various types of hints available in SQL Server, refer to http://msdn.microsoft.com/library/default.asp?url=/library/enus/acdata/ac_8_qd_03_8upf.asp.

Simple Queries
A simple query is defined as a SELECT statement that retrieves data from a single table, with or without a filter condition. Because this is a standard command, there are no differences in syntax or usage between Oracle and SQL Server. The following example is a valid statement for both Oracle and SQL Server.


SELECT * FROM CUSTOMER;

Joins
Joins play a major role when meaningful information needs to be retrieved from more than one table. Often, reports or intricate queries require data from two or more tables, and the data from multiple tables must be logically related. SQL combines data from multiple tables into a single result set using joins. Joins can be of the following types:
- Inner join
  - Equi-join
  - Non-equi join
- Outer join
  - Left outer join
  - Right outer join
  - Full outer join
- Self-join

Inner Join
An inner join is the normal join performed among tables to fetch matching data. An inner join can be either an equi-join or a non-equi join; both are explained under the following headings.

Equi-Join
An equi-join equates fields from two or more tables to fetch matching data from the tables selected. There is considerable difference in the syntax of an equi-join between Oracle and SQL Server. The standard Oracle equi-join syntax is shown in the following example:
SELECT TABLE1.FIELD1, TABLE2.FIELD1, TABLE2.FIELD2
FROM TABLE1, TABLE2
WHERE TABLE1.FIELD3 = TABLE2.FIELD3
  AND TABLE1.FIELD4 = TABLE2.FIELD4

Oracle 9i equi-joins are slightly different; the syntax is modified to comply with the ANSI standard. The Oracle 9i syntax is shown in the following example:
SELECT TABLE1.FIELD1, TABLE2.FIELD1, TABLE2.FIELD2
FROM (TABLE1 INNER JOIN TABLE2
      ON TABLE1.FIELD3 = TABLE2.FIELD3
     AND TABLE1.FIELD4 = TABLE2.FIELD4)

The SQL Server syntax is also slightly different, and it is shown in the following example:
SELECT TABLE1.FIELD1, TABLE2.FIELD1, TABLE2.FIELD2
FROM TABLE1 INNER JOIN TABLE2
  ON TABLE1.FIELD3 = TABLE2.FIELD3
 AND TABLE1.FIELD4 = TABLE2.FIELD4

For example, a company needs to create a report with the following information:
- Product name
- Price
- Supplier's company name
- Category name


For a product produced by the example company, the corresponding supplier's company name and category name have to be displayed. The PRODUCT table has SUPPLIERID and CATEGORYID, but not their corresponding names. Equivalent queries for Oracle, Oracle 9i, and SQL Server are shown in the following examples. In Oracle, the inner join provides the needed information:
SELECT A.PRODUCTNAME, A.UNITPRICE, B.COMPANYNAME, C.CATEGORYNAME
FROM PRODUCT A, SUPPLIERS B, CATEGORY C
WHERE A.SUPPLIERID = B.SUPPLIERID
  AND A.CATEGORYID = C.CATEGORYID

The following code produces the same inner join in Oracle 9i:
SELECT A.PRODUCTNAME, A.UNITPRICE, B.COMPANYNAME, C.CATEGORYNAME
FROM (PRODUCT A
      INNER JOIN SUPPLIERS B ON A.SUPPLIERID = B.SUPPLIERID
      INNER JOIN CATEGORY C ON A.CATEGORYID = C.CATEGORYID)

The SQL Server syntax is the same as the preceding Oracle 9i syntax.

Self Joins
Self joins can be used when one row in a table relates to other rows in the same table. That is, the same table is used twice in the comparison, and it must be referenced as two different tables by using different aliases. This capability is available in Oracle and SQL Server, though there are slight differences in syntax. The syntax for Oracle self joins is provided here:
SELECT A.FIELD1, A.FIELD2, B.FIELD1, B.FIELD2
FROM TABLE1 A, TABLE1 B
WHERE A.FIELD1 = B.FIELD1
  AND A.FIELD2 < B.FIELD2

The SQL Server syntax is shown in the following example. Note the differences in the third and fourth lines of the statement.
SELECT A.FIELD1, A.FIELD2, B.FIELD1, B.FIELD2
FROM TABLE1 A
JOIN TABLE1 B ON A.FIELD1 = B.FIELD1
WHERE A.FIELD2 < B.FIELD2

Consider a case in which the result set must return a list of all companies in the same city as the company ABC Inc. In Oracle, the statement would appear as:
SELECT B.COMPANYNAME
FROM CUSTOMER A, CUSTOMER B
WHERE A.CITY = B.CITY
  AND A.COMPANYNAME = 'ABC Inc'

In SQL Server, the same result set can be returned using the following statement:
SELECT B.COMPANYNAME
FROM CUSTOMER A
JOIN CUSTOMER B ON A.CITY = B.CITY
WHERE A.COMPANYNAME = 'ABC Inc'


Non-Equi Joins
Non-equi joins are used when there is no direct link criterion between the tables; it is simply a join where the joining criterion is anything other than =. The difference in syntax between Oracle and SQL Server for a non-equi join is a minor difference in the operators used to denote the join. The Oracle syntax is provided in the following example:
SELECT A.FIELD1, A.FIELD2, A.FIELD3, B.FIELD4
FROM TABLE1 A, TABLE2 B
WHERE A.FIELD1 = B.FIELD1
  AND A.FIELD2 <> B.FIELD2

The SQL Server syntax is provided in the following example. Note the use of the inner join operator.
SELECT A.FIELD1, A.FIELD2, A.FIELD3, B.FIELD4
FROM TABLE1 A
INNER JOIN TABLE2 B ON A.FIELD1 = B.FIELD1
WHERE A.FIELD2 <> B.FIELD2

For example, imagine a case where there is a need to retrieve those products listed under the same category but from different suppliers. In Oracle, the statement can be created as follows:
SELECT A.PRODUCTID, A.PRODUCTNAME, A.UNITPRICE
FROM PRODUCT A, PRODUCT B
WHERE A.CATEGORYID = B.CATEGORYID
  AND A.SUPPLIERID <> B.SUPPLIERID

The same result can be accomplished in SQL Server, as shown in the following example:
SELECT A.PRODUCTID, A.PRODUCTNAME, A.UNITPRICE
FROM PRODUCT A
INNER JOIN PRODUCT B
    ON A.CATEGORYID = B.CATEGORYID
   AND A.SUPPLIERID <> B.SUPPLIERID

Outer Joins
A join is termed an outer join when the result set contains all the rows from one table and only the matching rows from the other table. An outer join can be a left outer join or a right outer join. Outer join syntax in Oracle and SQL Server remained different until the release of Oracle 9i; thereafter, both follow the ANSI standard of spelling out INNER JOIN or OUTER JOIN instead of using operators.

A left outer join retrieves all rows from the left table. In Oracle syntax, the outer join operator (+) is placed beside the columns of the table that may lack matching rows (the right table), so its missing rows are returned as nulls.

Oracle left outer join

The syntax for a left outer join is as follows:

SELECT A.FIELD1, A.FIELD2, B.FIELD1, B.FIELD2
FROM TABLE1 A, TABLE2 B
WHERE A.FIELD1 = B.FIELD1(+)

An example of a left outer join is:

SELECT A.PRODUCTNAME, A.CATEGORYID, B.COMPANYNAME, B.CONTACTNAME
FROM PRODUCT A, SUPPLIERS B
WHERE A.SUPPLIERID = B.SUPPLIERID(+)

Oracle 9i left outer join

The ANSI syntax for a left outer join is as follows:
SELECT A.FIELD1, A.FIELD2, B.FIELD1, B.FIELD2
FROM TABLE1 A
LEFT OUTER JOIN TABLE2 B ON A.FIELD1 = B.FIELD1


An example of an ANSI-compliant left outer join is:


SELECT A.PRODUCTNAME, A.CATEGORYID, B.COMPANYNAME, B.CONTACTNAME
FROM PRODUCT A
LEFT OUTER JOIN SUPPLIERS B ON A.SUPPLIERID = B.SUPPLIERID

SQL Server left outer join

The syntax for a left outer join is as follows:
SELECT A.FIELD1, A.FIELD2, B.FIELD1, B.FIELD2
FROM TABLE1 A
LEFT OUTER JOIN TABLE2 B ON A.FIELD1 = B.FIELD1

An example of a left outer join is:


SELECT A.PRODUCTNAME, A.CATEGORYID, B.COMPANYNAME, B.CONTACTNAME
FROM PRODUCT A
LEFT OUTER JOIN SUPPLIERS B ON A.SUPPLIERID = B.SUPPLIERID

All rows are retrieved from the right table referenced in a right outer join; rows of the left table that have no match are filled with nulls.

Oracle right outer join

The syntax for a right outer join is as follows:

SELECT A.FIELD1, A.FIELD2, B.FIELD1, B.FIELD2
FROM TABLE1 A, TABLE2 B
WHERE A.FIELD1(+) = B.FIELD1

An example of a right outer join is:

SELECT A.PRODUCTNAME, A.CATEGORYID, B.COMPANYNAME, B.CONTACTNAME
FROM PRODUCT A, SUPPLIERS B
WHERE A.SUPPLIERID(+) = B.SUPPLIERID

SQL Server right outer join

The syntax for a right outer join is as follows:
SELECT A.FIELD1, A.FIELD2, B.FIELD1, B.FIELD2
FROM TABLE1 A
RIGHT OUTER JOIN TABLE2 B ON A.FIELD1 = B.FIELD1

An example of a right outer join is:


SELECT A.PRODUCTNAME, A.CATEGORYID, B.COMPANYNAME, B.CONTACTNAME
FROM PRODUCT A
RIGHT OUTER JOIN SUPPLIERS B ON A.SUPPLIERID = B.SUPPLIERID

All rows from both tables are returned in a full outer join.

Oracle full outer join

Oracle does not have a separate full outer join syntax; a UNION of a left and a right outer join produces the same result. The following shows an example of a full outer join:

SELECT A.PRODUCTNAME, A.CATEGORYID, B.COMPANYNAME, B.CONTACTNAME
FROM PRODUCT A, SUPPLIERS B
WHERE A.SUPPLIERID = B.SUPPLIERID(+)
UNION


SELECT A.PRODUCTNAME, A.CATEGORYID, B.COMPANYNAME, B.CONTACTNAME
FROM PRODUCT A, SUPPLIERS B
WHERE A.SUPPLIERID(+) = B.SUPPLIERID

SQL Server full outer join

The syntax for a full outer join is as follows:
SELECT A.FIELD1, A.FIELD2, B.FIELD1, B.FIELD2
FROM TABLE1 A
FULL OUTER JOIN TABLE2 B ON A.FIELD1 = B.FIELD1

An example of a full outer join is:


SELECT A.PRODUCTNAME, A.CATEGORYID, B.COMPANYNAME, B.CONTACTNAME
FROM PRODUCT A
FULL OUTER JOIN SUPPLIERS B ON A.SUPPLIERID = B.SUPPLIERID

Subqueries
A subquery is a normal query that is nested inside another query using parentheses. Subqueries are used to retrieve data from tables that depend on values in the same table or a different table. A statement containing a subquery is called a parent statement.

A subquery is commonly found in the FROM and WHERE clauses of a SELECT statement. A subquery in the FROM clause is also called an inline view. Inline views in SQL Server are similar to those in Oracle, except that in SQL Server inline views have to be aliased in all cases. Subqueries found in WHERE clauses are called nested queries. Subqueries may be nested in other subqueries: T-SQL queries can use a maximum of 256 tables, including all subqueries, and a maximum of 32 levels of nesting, whereas Oracle allows 255 levels of nesting. It is uncommon to use more than two or three levels of nesting in queries.

SQL Server supports most implementations of the nested query found in Oracle, and examples of nested queries are found throughout this chapter. SQL Server, however, does not support the ordered set (a list of columns compared with IN) shown in the following example:
SELECT productid, revisedprice
FROM OrderPrice
WHERE (orderid, productid) IN (SELECT orderid, productid
                               FROM OrderDetails)

In such cases, the query can be converted into one using joins instead of the nested query.
SELECT OP.productid, OP.revisedprice
FROM OrderPrice OP
INNER JOIN OrderDetails OD
    ON OP.orderid = OD.orderid
   AND OP.productid = OD.productid

A nested query in which the inner query references columns from the parent query's table is called a correlated subquery. A correlated subquery is logically executed once for each row in the parent statement. Oracle and SQL Server use essentially the same syntax for correlated subqueries.
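As a brief sketch, valid in both Oracle and SQL Server and using the PRODUCT and ORDERDETAILS tables from this chapter's examples, the following correlated subquery returns only the products that appear in at least one order:

```sql
-- The inner query references A.PRODUCTID from the outer query,
-- so it is re-evaluated for each candidate row of PRODUCT.
SELECT A.PRODUCTID, A.PRODUCTNAME
FROM PRODUCT A
WHERE EXISTS (SELECT 1
              FROM ORDERDETAILS B
              WHERE B.PRODUCTID = A.PRODUCTID)
```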

Grouping Result Set


Oracle and SQL Server both provide a method of grouping the result set using the GROUP BY clause. The GROUP BY clause summarizes the result set into the groups defined in the query using aggregate functions such as AVG or SUM. The HAVING clause can further filter the result set by comparing against an aggregate function's result. A HAVING clause without a GROUP BY clause behaves like a simple WHERE condition. HAVING clauses are used with aggregate functions and cannot be used with single-row functions; the GROUP BY clause treats the HAVING clause in the same way that SELECT treats WHERE.

A SELECT statement with a GROUP BY clause cannot reference a column other than those available in the GROUP BY list: either each column in any non-aggregate expression in the select list must be included in the GROUP BY list, or the GROUP BY expression must match the select list expression exactly. A GROUP BY clause can include an expression as long as it does not include any aggregate function, and it cannot contain any subquery either.

In SQL Server, GROUP BY ALL can also be used, but it is optional. When ALL is specified, null values are returned for the summary columns of groups that do not meet the search condition. The Oracle and SQL Server syntax is the same.
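The following sketch, using the ORDERDETAILS table from this chapter's examples, runs unchanged in both databases; it totals each order and keeps only orders worth more than $10,000:

```sql
-- GROUP BY builds one row per ORDERID; HAVING filters on the aggregate.
SELECT ORDERID, SUM(UNITPRICE) AS ORDERTOTAL
FROM ORDERDETAILS
GROUP BY ORDERID
HAVING SUM(UNITPRICE) > 10000
```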

Database Link
When tables are to be joined across databases on different servers, the database link functionality can be used. In the examples provided in this section, the databases are distributed across two geographical locations: one database is in New York, and another is in Boston. A business user wants to know the total number of orders placed in the last 30 days. In Oracle, databases are linked using the CREATE DATABASE LINK command; a TNSNAMES.ORA entry for the remote database is required.
SELECT COUNT(*)
FROM (SELECT ORDERID FROM ORDERMASTER
      WHERE ORDERDATE > SYSDATE - 30
      UNION
      SELECT ORDERID FROM ORDERMASTER@BOSTON
      WHERE ORDERDATE > SYSDATE - 30);

In SQL Server, some configuration is needed before accessing remote databases. Before querying tables from different databases on different servers, log in to the first database, open Query Analyzer, and execute the following code.
USE MASTER
GO
EXEC SP_ADDLINKEDSERVER
    @SERVER = 'BOSTON',
    @SRVPRODUCT = '',
    @PROVIDER = 'SQLOLEDB',
    @DATASRC = 'BOSTON\NWIND'
GO
EXEC SP_ADDLINKEDSRVLOGIN
    @RMTSRVNAME = 'BOSTON',
    @USESELF = 'FALSE',
    @LOCALLOGIN = NULL,


    @RMTUSER = 'ABC',
    @RMTPASSWORD = 'ABC'
GO

Executing these procedures establishes the link between the servers. Next, execute the following query to retrieve the data from two databases.
SELECT COUNT(ORDERID)
FROM (SELECT ORDERID FROM ORDERMASTER
      WHERE ORDERDATE > GETDATE() - 30
      UNION
      SELECT ORDERID FROM [BOSTON].NWIND.DBO.ORDERMASTER
      WHERE ORDERDATE > GETDATE() - 30) AS RECENTORDERS

CASE
CASE is used when there is a need for "if-else" logic. In the example code in this section, a store manager wants to see customer names and segregate the orders into the following three categories:
- An order value less than $5,000 is small
- An order value between $5,000 and $10,000 is medium
- An order value greater than $10,000 is large

In Oracle, this task would be accomplished using CASE as shown in the following code example:
SELECT ORDERID,
       CASE WHEN SUM(UNITPRICE) < 5000   THEN 'SMALL'
            WHEN SUM(UNITPRICE) <= 10000 THEN 'MEDIUM'
            ELSE 'LARGE'
       END
FROM ORDERDETAILS
GROUP BY ORDERID;

SQL Server supports the simple and searched case statement syntax found in Oracle.

Set Operators: Union


The union operator fetches the distinct rows across the combined queries; it is used to combine the results of two or more queries. If the same value appears in more than one row, it is returned only once. The union all operator provides the same functionality as the union operator but does not eliminate duplicate rows. In the following sample code, the head office wants a list of supplier names from all the databases. The supplier list resides in different databases in different geographical locations, so the set operators are combined with a database link. The difference between the Oracle and SQL Server code is shown in the following examples:

Oracle
SELECT COMPANYNAME FROM SUPPLIERS@NEWYORK
UNION
SELECT COMPANYNAME FROM SUPPLIERS@BOSTON;

SQL Server
SELECT COMPANYNAME FROM [NEWYORK].NWIND.DBO.SUPPLIERS
UNION
SELECT COMPANYNAME FROM [BOSTON].NWIND.DBO.SUPPLIERS


Set Operators: Intersect


The intersect operator retrieves only the values common to more than one table. In the following example code, INTERSECT is used to retrieve the supplier names common to both the NEWYORK and BOSTON stores.

Oracle
SELECT COMPANYNAME FROM SUPPLIERS@BOSTON
INTERSECT
SELECT COMPANYNAME FROM SUPPLIERS@NEWYORK

SQL Server

In SQL Server, a subquery with an EXISTS clause can be written for the same purpose.
SELECT COMPANYNAME
FROM [NEWYORK].NWIND.DBO.SUPPLIERS A
WHERE EXISTS (SELECT 'X'
              FROM [BOSTON].NWIND.DBO.SUPPLIERS B
              WHERE A.COMPANYNAME = B.COMPANYNAME);

Set Operators: Minus


The minus operator is used when the difference of the values from more than one table needs to be fetched. In the following example code, MINUS is used to list the supplier names that are in the BOSTON database but not in the NEWYORK database.

Oracle
SELECT COMPANYNAME FROM SUPPLIERS@BOSTON
MINUS
SELECT COMPANYNAME FROM SUPPLIERS@NEWYORK;

SQL Server

Because MINUS is not available in SQL Server, NOT EXISTS can be used. The following example shows how to use the NOT EXISTS operator.
SELECT COMPANYNAME
FROM [BOSTON].NWIND.DBO.SUPPLIERS A
WHERE NOT EXISTS (SELECT 'X'
                  FROM [NEWYORK].NWIND.DBO.SUPPLIERS B
                  WHERE A.COMPANYNAME = B.COMPANYNAME);


ROWNUM
ROWNUM is a special pseudocolumn in Oracle that can limit the number of rows retrieved without any condition. In the following example, ROWNUM is used to display the top 10 orders from the ORDERDETAILS table.

Oracle
SELECT *
FROM (SELECT ORDERID, SUM(UNITPRICE)
      FROM ORDERDETAILS
      GROUP BY ORDERID
      ORDER BY SUM(UNITPRICE) DESC)
WHERE ROWNUM < 11;

SQL Server
SELECT TOP 10 ORDERID, SUM(UNITPRICE)
FROM ORDERDETAILS
GROUP BY ORDERID
ORDER BY SUM(UNITPRICE) DESC

Data Manipulation Language (DML)


DML includes INSERT, UPDATE, and DELETE; the MERGE statement is also discussed. The following sections describe how each command is used in Oracle and how it can be converted into SQL Server-specific syntax.

Insert
The INSERT command is used to add one or more rows to a table, with the column values separated by commas. INSERT can be used in several different ways within Oracle, and all of the forms are explained under the following headings. Inserting a row into a table is identical in SQL Server and Oracle. The syntax is shown here:
INSERT INTO TABLE1 VALUES (VALUE1, VALUE2);

Insert Sequences
Compatibility between the Oracle tables and the new SQL Server tables should be checked. If a table in Oracle uses a SEQUENCE, then the corresponding SQL Server table should use IDENTITY. IDENTITY automatically increments a column value so that no two rows have the same value for the column. To enable this facility, a column with a unique, auto-incrementing value can be included in the table; this is achieved through SEQUENCEs in Oracle and IDENTITY in SQL Server. The column can also be used as the primary key for tables that do not have a natural key.

A sequence is an Oracle database object that generates unique, sequential integer values, in either ascending or descending order. It can be used to automatically generate primary key or unique key values. The syntax is explained in the following examples.

Oracle
INSERT INTO SUPPLIERS VALUES
(SUPPLIER_SEQUENCE.NEXTVAL, 'Abc Inc.', 'Lingerfelt, Steve', 'Mr.',
 'Apt # 104 Andrew''s Drive', 'Charlotte', 28262, 'USA',
 7049654371, 7049652300);

SQL Server

SQL Server has a unique way of handling running serial numbers: the IDENTITY property, which provides a unique, incremental value for the column and is the equivalent of Oracle's sequence. When a column is declared as IDENTITY, a value does not have to be specified for the column during an insert, whereas in Oracle the value has to be explicitly retrieved from a sequence. The current value of an IDENTITY column can be found using IDENT_CURRENT, as shown in the following code:
SELECT IDENT_CURRENT ('SUPPLIERS')

INSERT INTO SUPPLIERS VALUES
('Abc Inc.', 'Lingerfelt, Steve', 'Mr.', 'Apt # 104 Andrew''s Drive',
 'Charlotte', 28262, 'USA', '7049654371', '7049652300');
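For reference, a minimal sketch of the two mechanisms side by side; the object and column names here are illustrative only:

```sql
-- Oracle: a sequence object supplies the key value at insert time.
CREATE SEQUENCE SUPPLIER_SEQUENCE START WITH 1 INCREMENT BY 1;

-- SQL Server: the IDENTITY property is declared on the column itself,
-- so inserts omit the column entirely.
CREATE TABLE SUPPLIERS2 (
    SUPPLIERID  INT IDENTITY(1,1) PRIMARY KEY,
    COMPANYNAME VARCHAR(40) NOT NULL
)
```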

Using Date Values in Insert Operations


During an insert or update, the date format can be the same as in Oracle (DD-MON-YYYY). However, when retrieving, the format will be different, and a conversion function discussed in the "Functions" section has to be used. In Oracle, to insert a date with a time component, the string must be converted with the TO_DATE function. This is not required in SQL Server because it already stores the date with a time stamp. The following example illustrates how to insert the current date in an insert operation in Oracle and SQL Server.

Oracle
INSERT INTO OrderMaster VALUES
(1, 1076, 121, SYSDATE, '19-AUG-2004', NULL, 'Federal Shipping', 40,
 'Ernst Handel', '2817 Milton Dr.', 'Dallas', 'TX', '87110')

SQL Server
INSERT INTO OrderMaster VALUES
(1, 1076, 121, GETDATE(), '19-AUG-2004', NULL, 'Federal Shipping', 40,
 'Ernst Handel', '2817 Milton Dr.', 'Dallas', 'TX', '87110')

The following example shows how the time part of a date is handled in Oracle and how this can be converted for a SQL Server database.

Oracle
INSERT INTO OrderMaster VALUES
(1, 1076, 121, TO_DATE('01-MAR-2004 13:45:33', 'DD-MON-YYYY HH24:MI:SS'),
 '19-AUG-2004', NULL, 'Federal Shipping', 40,
 'Ernst Handel', '2817 Milton Dr.', 'Dallas', 'TX', '87110')

SQL Server
INSERT INTO OrderMaster VALUES
(1, 1076, 121, '01-MAR-2004 13:45:33', '19-AUG-2004', NULL,
 'Federal Shipping', 40, 'Ernst Handel', '2817 Milton Dr.',
 'Dallas', 'TX', '87110')

Insert with Subquery


In many applications, a subquery is used to return a specific subset of rows, which are then inserted into the table. The subquery can refer to any table, view, or even the target table itself. If the subquery returns no rows, then no rows are inserted into the table.
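A minimal sketch, identical in Oracle and SQL Server; the ORDERARCHIVE table here is illustrative only:

```sql
-- Copy old rows from ORDERMASTER into an archive table; if the SELECT
-- returns no rows, nothing is inserted.
INSERT INTO ORDERARCHIVE (ORDERID, ORDERDATE)
SELECT ORDERID, ORDERDATE
FROM ORDERMASTER
WHERE ORDERDATE < '01-JAN-2004'
```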

Update
UPDATE is an imperative DML statement that changes the values of fields in tables. In an Oracle or SQL Server database, an UPDATE is commonly used to modify already inserted values; all key constraints are checked by the database before the UPDATE is made permanent. Oracle and SQL Server both follow the conventional ANSI form of the UPDATE statement. The UPDATE statement can be performed in three different ways:
- Simple update. A straightforward change to the values of one or more fields of a table, with or without a condition. Oracle and SQL Server use the same syntax.
- Update with joins. Frequently used when a change is required to the field values of only the rows matching specific conditions between two or more tables. Oracle and SQL Server use the same syntax.
- Update with subqueries. Useful in situations where a field must be set with data from another table. Oracle and SQL Server use the same syntax conventions, with the exception of updates involving correlated subqueries:
UPDATE OrderDetails o
SET o.unitprice = (SELECT unitprice
                   FROM Product p
                   WHERE p.productid = o.productid)

The preceding Oracle update statement can be rewritten using a syntax that is unique to SQL Server as follows:
UPDATE o
SET o.unitprice = p.unitprice
FROM OrderDetails o
JOIN Product p ON p.productid = o.productid

Merge
The MERGE statement is a new feature in Oracle 9i that updates and inserts rows in a table in a single statement. In the following examples, an ORDERPRICE table stores the revised price of a product in a particular order; the price can be revised any number of times. If a record already exists, the price needs to be updated and the date changed. If a row for that order and product does not exist, a new row must be inserted with the revised price of the product in that particular order.

Oracle
MERGE INTO ORDERPRICE A
USING (SELECT ORDERID, PRODUCTID
       FROM ORDERDETAILS
       WHERE ORDERID = 1 AND PRODUCTID = 1) B
ON (A.ORDERID = B.ORDERID AND A.PRODUCTID = B.PRODUCTID)
WHEN MATCHED THEN
    UPDATE SET REVISEDPRICE = 120, REVISEDON = SYSDATE
WHEN NOT MATCHED THEN
    INSERT VALUES


(B.ORDERID, B.PRODUCTID, 100, SYSDATE);

SQL Server

To achieve this functionality in SQL Server, use a stored procedure with the following logic. More information about writing stored procedures is included later in this chapter in the "Subprograms Conversion" section.
-- @orderid and @productid would be parameters of the stored procedure
DECLARE @orderid char(10), @productid char(10), @count integer

SELECT @count = COUNT(*)
FROM ORDERPRICE
WHERE ORDERID = @orderid AND PRODUCTID = @productid

IF @count > 0
    UPDATE ORDERPRICE
    SET REVISEDPRICE = 120, REVISEDON = GETDATE()
    WHERE ORDERID = @orderid AND PRODUCTID = @productid
ELSE
    INSERT INTO ORDERPRICE VALUES (@orderid, @productid, 100, GETDATE())

Delete
As tables are used, unneeded rows can be removed. DELETE is a DML operation that eliminates all or certain rows from a table while keeping the table structure intact. Even when all the rows are deleted, the structure of the table remains and can be repopulated using INSERT. Conditional deletion, accomplished with the conventional WHERE clause, eliminates only selected rows; both Oracle and SQL Server allow it, and both use the ANSI form of the DELETE statement. The following types of delete exist:
- Simple delete. A general way of deleting all or selected rows from a table. A simple delete operation is similar in Oracle and SQL Server.
- Delete all rows. Both Oracle and SQL Server offer the TRUNCATE TABLE statement.
- Delete with subqueries. Using subqueries in conjunction with DELETE allows for more specificity: the subquery enables the value for the WHERE condition to be fetched from a different table. Oracle and SQL Server use the same methodology.
- Delete duplicate records. A table that does not enforce a strict key structure can contain exact copies of rows, and these situations demand the deletion of all the duplicate records from the table. Oracle and SQL Server have different ways to do this:

Oracle
DELETE FROM TABLE1 T1
WHERE T1.ROWID > (SELECT MIN(T2.ROWID)
                  FROM TABLE1 T2
                  WHERE T1.FIELD1 = T2.FIELD1)

SQL Server
For an example of how to perform this in SQL Server, refer to the Knowledge Base article at http://support.microsoft.com/default.aspx?scid=kb;en-us;139444.

Truncate
Both Oracle and SQL Server offer the TRUNCATE TABLE statement for deleting all rows of a table with minimal logging.
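The statement takes the same form on both platforms. A minimal sketch, reusing the ORDERPRICE table from the earlier example:

```sql
-- Removes every row with minimal logging but leaves the table definition intact
TRUNCATE TABLE ORDERPRICE
```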


Developing: Applications Migrating Oracle SQL and PL/SQL

Step 2: Transaction Management


After identifying all instances of SQL, transactions need to be carefully modified as part of the data migration process. This section discusses how to handle transactions in SQL Server. To begin, identify all of the transaction commands currently used with Oracle and implement them for SQL Server. In Oracle, a transaction starts implicitly with the first executable DML statement and ends with a COMMIT or ROLLBACK. SQL Server, by default, runs in autocommit mode, in which each individual statement is committed as it completes; if a set of statements needs to be treated as a single transaction, the statements must be enclosed between BEGIN TRANSACTION and COMMIT TRANSACTION commands. A discussion of these architecture topics is provided in Appendix A: SQL Server for Oracle Professionals.

Transaction Control Language (TCL)


A transaction can be defined as a group of statements that must either succeed or fail as a group. In Oracle, transactions are implicit: one begins automatically with the first DML statement and remains open until it is committed or rolled back. In SQL Server, BEGIN TRAN and COMMIT TRAN have to be used explicitly to demarcate a transaction. A transaction is controlled by any one of these commands:

COMMIT
ROLLBACK
SAVEPOINT

These commands are discussed in detail under the following headings.

COMMIT
The COMMIT command is used to end a transaction and make its changes permanent in the database. It erases all the savepoints inside the transaction and also releases the transaction's locks. After a transaction is committed, there is no way to undo it; the changes can only be reversed by a compensating operation, such as deleting an inserted row. The syntax is as follows: Oracle
COMMIT;

SQL Server
COMMIT TRANSACTION <TRANSACTION NAME>;

The COMMIT WORK command functions in the same way as COMMIT TRANSACTION, except that COMMIT TRANSACTION accepts a user-defined transaction name and COMMIT WORK does not. This COMMIT syntax, with or without the optional keyword WORK, is compatible with SQL-92. The command is issued in SQL Server as follows:
COMMIT WORK;

SAVEPOINT
A savepoint is a marker that divides a lengthy transaction into smaller units by identifying a point to which the transaction can be rolled back. SAVEPOINT is therefore used in conjunction with ROLLBACK to revert a portion of the current transaction. The syntax in Oracle and SQL Server differs: Oracle


SAVEPOINT <SAVEPOINT NAME>;

SQL Server
SAVE TRANSACTION <SAVEPOINT NAME>;

ROLLBACK
The ROLLBACK command is used to undo a portion of the current transaction. It is possible to roll back the entire transaction so that all changes made by SQL statements are undone, or to roll back a transaction to a save point so that the SQL statements after the save point are rolled back. The syntax in Oracle and SQL Server differs: Oracle
ROLLBACK; ROLLBACK TO <SAVEPOINT NAME>;

SQL Server
ROLLBACK TRANSACTION <TRANSACTION NAME> ROLLBACK TRANSACTION <SAVEPOINT NAME>

In addition, the ROLLBACK WORK statement functions like ROLLBACK TRANSACTION; the only difference is that ROLLBACK TRANSACTION accepts a user-defined transaction name. With or without the optional WORK keyword, this ROLLBACK syntax is SQL-92 compatible and is available in SQL Server using the following syntax:
ROLLBACK WORK
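The commands above can be combined in a single batch. The following sketch (the PriceUpdate and BeforeDelete names and the ORDERPRICE rows are illustrative assumptions) marks a savepoint, rolls back to it, and then commits the remaining work:

```sql
BEGIN TRANSACTION PriceUpdate

UPDATE ORDERPRICE SET REVISEDPRICE = 120 WHERE ORDERID = 'O-1001'

-- Mark a point the transaction can be partially rolled back to
SAVE TRANSACTION BeforeDelete

DELETE FROM ORDERPRICE WHERE ORDERID = 'O-1001'

-- Undo only the DELETE; the UPDATE above is still pending
ROLLBACK TRANSACTION BeforeDelete

-- Make the surviving changes permanent
COMMIT TRANSACTION PriceUpdate
```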

Step 3: Fetch Strategy


Cursors are effective for row-by-row processing, and Oracle-based applications can use many cursors for row processing. Analyze the code to determine whether it can be converted without using cursors: cursor operations are always costly compared to set operations because they use temporary memory areas and server processes, and cursors that are not handled properly can leave orphaned processes and held locks. In Oracle, a cursor is defined as a work area used to execute SQL statements and store processing information. Oracle creates such a work area for every SQL statement it processes; this is called an implicit cursor. Oracle also allows you to define an explicit cursor to process rows and apply business logic. Oracle has implemented the following types of cursors:

Implicit cursors
Explicit cursors

Implicit cursors are opened automatically whenever a query or DML statement is issued in Oracle; SQL Server calls the equivalent a default result set. Oracle allows defining an explicit cursor with the keyword CURSOR. Such explicitly declared cursors have a SELECT statement; the output of the SELECT is stored in a separate memory area and can be used in a PL/SQL program. Explicit cursors can be parameterized. Transact-SQL cursors in SQL Server are based on the DECLARE CURSOR syntax and are used mainly in Transact-SQL scripts, stored procedures, and triggers. Transact-SQL cursors are implemented on the server and are managed by Transact-SQL statements sent from the client to the server.

Using Transact-SQL Cursors


High-level steps for using Transact-SQL cursors inside the code include:


1. Declare Transact-SQL variables to contain the data returned by the cursor. Declare one variable for each result set column, and make each variable large enough to hold the values returned by that column.
2. Associate a Transact-SQL cursor with a SELECT statement using the DECLARE CURSOR statement. The DECLARE CURSOR statement also defines the characteristics of the cursor, such as the cursor name and whether the cursor is read-only or forward-only.
3. Use the OPEN statement to execute the SELECT statement and populate the cursor.
4. Use the FETCH INTO statement to fetch individual rows and have the data for each column moved into a specified variable. Other Transact-SQL statements can then reference those variables to access the fetched data values. Transact-SQL cursors do not support fetching blocks of rows.
5. When you are finished with the cursor, use the CLOSE statement. Closing a cursor frees some resources, such as the cursor's result set and its locks on the current row, but the cursor structure is still available for processing if you reissue an OPEN statement. Because the cursor is still present, you cannot reuse the cursor name at this point. The DEALLOCATE statement completely frees all resources allocated to the cursor, including the cursor name. After a cursor is deallocated, you must issue a DECLARE statement to rebuild the cursor.

The cursor syntax differs between Oracle and SQL Server: Oracle
DECLARE
   CURSOR <CURSOR_NAME> IS <SELECT STATEMENT>;
BEGIN
   FOR <VARIABLE> IN <CURSOR_NAME> LOOP
      -- Business Logic
   END LOOP;
END;

SQL Server
DECLARE <VARIABLE>
DECLARE <CURSOR_NAME> CURSOR FOR <SELECT STATEMENT>
BEGIN
   OPEN <CURSOR_NAME>
   FETCH NEXT FROM <CURSOR_NAME> INTO <VARIABLE_NAME>
   WHILE @@FETCH_STATUS = 0
   BEGIN
      -- business logic goes here
      FETCH NEXT FROM <CURSOR_NAME> INTO <VARIABLE_NAME>
   END
   CLOSE <CURSOR_NAME>
   DEALLOCATE <CURSOR_NAME>
END

Oracle allows the entire cursor record to be transferred into a user-defined data structure defined using ROWTYPE. In SQL Server, the record values need to be assigned into an individual variable. The following example uses ROWTYPE to process the SHIPPERS table record by record.

Oracle
DECLARE
   CURSOR SHIPPER_CURSOR IS SELECT * FROM SHIPPERS;
   SHIPPER_REC SHIPPERS%ROWTYPE;
BEGIN
   OPEN SHIPPER_CURSOR;
   LOOP
      FETCH SHIPPER_CURSOR INTO SHIPPER_REC;
      EXIT WHEN SHIPPER_CURSOR%NOTFOUND;
      DBMS_OUTPUT.PUT_LINE('SHIPPER NAME IS '||SHIPPER_REC.COMPANYNAME);
      DBMS_OUTPUT.PUT_LINE('SHIPPER PHONE IS '||SHIPPER_REC.PHONE);
   END LOOP;
   CLOSE SHIPPER_CURSOR;
END;


SQL Server
DECLARE @SHIPPER_NAME VARCHAR(50), @PHONE VARCHAR(50)
DECLARE SHIPPER_CURSOR CURSOR FOR
   SELECT COMPANYNAME, PHONE FROM SHIPPERS;
BEGIN
   OPEN SHIPPER_CURSOR;
   FETCH NEXT FROM SHIPPER_CURSOR INTO @SHIPPER_NAME, @PHONE
   WHILE @@FETCH_STATUS = 0
   BEGIN
      PRINT 'SHIPPER NAME IS ' + @SHIPPER_NAME
      PRINT 'SHIPPER PHONE IS ' + @PHONE
      FETCH NEXT FROM SHIPPER_CURSOR INTO @SHIPPER_NAME, @PHONE
   END
   CLOSE SHIPPER_CURSOR
   DEALLOCATE SHIPPER_CURSOR
END

Transact SQL Extensions


SQL Server extends the following functionality with cursors. Some of the key features include:

FETCH FIRST. Fetches the first row in the cursor.
FETCH NEXT. Fetches the row after the last row fetched.
FETCH PRIOR. Fetches the row before the last row fetched.
FETCH LAST. Fetches the last row in the cursor.
FETCH ABSOLUTE n. Fetches the nth row from the first row in the cursor if n is a positive integer. If n is a negative integer, the row n rows before the end of the cursor is fetched. If n is 0, no rows are fetched.
FETCH RELATIVE n. Fetches the nth row from the current position of the cursor. If n is positive, the nth row after the last row fetched is fetched. If n is negative, the nth row before the last row fetched is fetched. If n is 0, the same row is fetched again.

SQL Server also allows retrieving one row or a block of rows from the current position in the result set, and it supports data modifications to the rows at the current position in the result set.
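These options require a scrollable cursor. A brief sketch, assuming the SHIPPERS table used earlier in this chapter:

```sql
DECLARE @NAME VARCHAR(50)
DECLARE SHIPPER_SCROLL SCROLL CURSOR FOR
   SELECT COMPANYNAME FROM SHIPPERS

OPEN SHIPPER_SCROLL
FETCH LAST FROM SHIPPER_SCROLL INTO @NAME        -- last row
FETCH PRIOR FROM SHIPPER_SCROLL INTO @NAME       -- next-to-last row
FETCH ABSOLUTE 1 FROM SHIPPER_SCROLL INTO @NAME  -- first row
FETCH RELATIVE 2 FROM SHIPPER_SCROLL INTO @NAME  -- third row
CLOSE SHIPPER_SCROLL
DEALLOCATE SHIPPER_SCROLL
```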

Step 4: Subprograms Conversion


The next step is to migrate the procedures, functions, and triggers. This section discusses how each component can be converted for use with SQL Server.

Migrating Procedures
A stored procedure is a precompiled object: it is compiled beforehand and is readily available for the various applications to execute, so time is not wasted parsing and compiling it again. (A procedure that contains dynamic SQL, however, is recompiled every time it runs.) Stored procedures in SQL Server are very similar to procedures in Oracle PL/SQL. Some similarities include:

Both can accept parameters and return multiple values in the form of output parameters (OUT parameters) to the calling procedure.
Both contain programming statements that perform operations in the database, including calling other procedures.
Both can overwrite existing procedures. Oracle uses the keyword REPLACE, while SQL Server uses ALTER.

The syntax between Oracle and SQL Server procedures differs. The differences are shown in the following examples: Oracle
CREATE OR REPLACE PROCEDURE <PROCEDURE_NAME>
   (PARAMETER1 DATATYPE [IN OUT],
    PARAMETER2 DATATYPE [IN OUT], ...)
IS
   VARIABLE1 DATATYPE;
   VARIABLE2 DATATYPE;
BEGIN
   EXECUTABLE STATEMENTS;
EXCEPTION
   EXECUTABLE STATEMENTS;
END;

SQL Server
CREATE PROCEDURE <PROCEDURE_NAME> [ ; number ]
   [ { @PARAMETER DATATYPE } [ VARYING ] [ = default ] [ OUTPUT ] ] [ ,...n ]
AS
   EXECUTABLE STATEMENTS

In the following example, a stored procedure calculates the number of months remaining until retirement for each employee. Retirement age is passed as a parameter to the procedure. The output is written into a table named RETIRE, which has two columns, EMPLOYEEID and MONTHS_REMAINING. The number of months remaining is calculated and inserted into this table. Before starting the operation, the table needs to be flushed. Oracle


CREATE OR REPLACE PROCEDURE CALCULATE_RETIRE_MONTHS(RETIRE_AGE NUMBER) IS
   CURRENT_AGE NUMBER;
   MONTHS_REMAINING NUMBER;
   CURSOR EMP_CURSOR IS SELECT EMPLOYEEID, BIRTHDATE FROM EMPLOYEE;
BEGIN
   EXECUTE IMMEDIATE 'TRUNCATE TABLE RETIRE';
   FOR CUR_VAL IN EMP_CURSOR LOOP
      -- Current age in months, approximating a month as 30 days
      CURRENT_AGE := ROUND(ROUND(SYSDATE - CUR_VAL.BIRTHDATE)/30);
      MONTHS_REMAINING := (RETIRE_AGE * 12) - CURRENT_AGE;
      INSERT INTO RETIRE VALUES (CUR_VAL.EMPLOYEEID, MONTHS_REMAINING);
   END LOOP;
   COMMIT;
END;

SQL Server
IF EXISTS (SELECT NAME FROM SYSOBJECTS
           WHERE NAME = 'CALCULATE_RETIRE_YRS' AND TYPE = 'P')
   DROP PROCEDURE CALCULATE_RETIRE_YRS
GO
CREATE PROCEDURE CALCULATE_RETIRE_YRS @RETIRE_AGE INTEGER
AS
DECLARE @CUR_AGE INTEGER, @MONTHS_REMAINING INTEGER
DECLARE @V_DOB DATETIME, @V_EMPLOYEEID VARCHAR(30)
DECLARE EMP_CURSOR CURSOR FOR
   SELECT EMPLOYEEID, BIRTHDATE FROM EMPLOYEE
BEGIN
   TRUNCATE TABLE RETIRE;
   OPEN EMP_CURSOR
   FETCH NEXT FROM EMP_CURSOR INTO @V_EMPLOYEEID, @V_DOB
   IF @@FETCH_STATUS <> 0
   BEGIN
      PRINT 'NO EMP'
   END
   ELSE
   BEGIN
      WHILE (@@FETCH_STATUS = 0)
      BEGIN
         SET @CUR_AGE = (SELECT DATEDIFF(MONTH, @V_DOB, GETDATE()))
         SET @MONTHS_REMAINING = (@RETIRE_AGE * 12) - @CUR_AGE
         INSERT INTO RETIRE VALUES (@V_EMPLOYEEID, @MONTHS_REMAINING)
         FETCH NEXT FROM EMP_CURSOR INTO @V_EMPLOYEEID, @V_DOB
      END
   END
   CLOSE EMP_CURSOR
   DEALLOCATE EMP_CURSOR
END
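Once created, the procedure can be executed from any batch. A sketch, assuming a retirement age of 65:

```sql
-- Populate RETIRE with the months remaining until age 65 for every employee
EXEC CALCULATE_RETIRE_YRS @RETIRE_AGE = 65

-- Inspect the result
SELECT EMPLOYEEID, MONTHS_REMAINING FROM RETIRE
```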


Functions
While similar to procedures, functions return a value. A mathematical calculation the application needs may be best performed in the database server: if the result of a complex algorithm is needed as part of a SQL statement, write a function that performs the required calculation and returns the value. The syntax between Oracle and SQL Server functions differs, as shown in the following examples: Oracle
CREATE OR REPLACE FUNCTION <FUNCTION_NAME>
   (PARAMETER DATATYPE, PARAMETER DATATYPE, ...)
RETURN DATATYPE IS
   <DECLARATIONS>
BEGIN
   <FUNCTION BODY>
   RETURN <VALUE>;
EXCEPTION
   <EXCEPTION HANDLERS>;
END;

SQL Server
CREATE FUNCTION <FUNCTION_NAME>
   (@PARAMETER DATATYPE, @PARAMETER DATATYPE, ...)
RETURNS <DATATYPE>
AS
BEGIN
   <FUNCTION BODY>
   RETURN <VALUE>
END

Differences between the built-in functions of Oracle and SQL Server must also be handled. For example, Oracle has a function called NEXT_DAY that returns the date of the first weekday, named by the char argument, that is later than the given date. The char argument must be a day of the week in the date language of the session, either the full name or the abbreviation; the minimum number of letters required is the number of letters in the abbreviated version, and any characters immediately following a valid abbreviation are ignored. The return value has the same hours, minutes, and seconds component as the argument date. This function is not available in SQL Server by default, but the following example shows one way to provide it, and it can be expanded to meet your needs:
CREATE FUNCTION NEXT_DAY(@D DATETIME, @DAY VARCHAR(10))
RETURNS DATETIME
AS
BEGIN
   DECLARE @NEWDATE DATETIME
   IF UPPER(@DAY) = 'SUNDAY'
   BEGIN
      IF (1 - DATEPART(DW,@D)) = 0
         SET @NEWDATE = @D + 7
      ELSE IF (1 - DATEPART(DW,@D)) < 0
         SET @NEWDATE = @D + 1 + (7 - DATEPART(DW,@D))
      ELSE
         SET @NEWDATE = @D + 1 - DATEPART(DW,@D)
   END
   ELSE IF UPPER(@DAY) = 'MONDAY' OR UPPER(@DAY) = 'MON'
   BEGIN
      IF (2 - DATEPART(DW,@D)) = 0
         SET @NEWDATE = @D + 7
      ELSE IF (2 - DATEPART(DW,@D)) < 0
         SET @NEWDATE = @D + 2 + (7 - DATEPART(DW,@D))
      ELSE
         SET @NEWDATE = @D + 2 - DATEPART(DW,@D)
   END
   ELSE IF UPPER(@DAY) = 'TUESDAY' OR UPPER(@DAY) = 'TUE'
   BEGIN
      IF (3 - DATEPART(DW,@D)) = 0
         SET @NEWDATE = @D + 7
      ELSE IF (3 - DATEPART(DW,@D)) < 0
         SET @NEWDATE = @D + 3 + (7 - DATEPART(DW,@D))
      ELSE
         SET @NEWDATE = @D + 3 - DATEPART(DW,@D)
   END
   ELSE IF UPPER(@DAY) = 'WEDNESDAY' OR UPPER(@DAY) = 'WED'
   BEGIN
      IF (4 - DATEPART(DW,@D)) = 0
         SET @NEWDATE = @D + 7
      ELSE IF (4 - DATEPART(DW,@D)) < 0
         SET @NEWDATE = @D + 4 + (7 - DATEPART(DW,@D))
      ELSE
         SET @NEWDATE = @D + 4 - DATEPART(DW,@D)
   END
   ELSE IF UPPER(@DAY) = 'THURSDAY' OR UPPER(@DAY) = 'THU'
   BEGIN
      IF (5 - DATEPART(DW,@D)) = 0
         SET @NEWDATE = @D + 7
      ELSE IF (5 - DATEPART(DW,@D)) < 0
         SET @NEWDATE = @D + 5 + (7 - DATEPART(DW,@D))
      ELSE
         SET @NEWDATE = @D + 5 - DATEPART(DW,@D)
   END
   ELSE IF UPPER(@DAY) = 'FRIDAY' OR UPPER(@DAY) = 'FRI'
   BEGIN
      IF (6 - DATEPART(DW,@D)) = 0
         SET @NEWDATE = @D + 7
      ELSE IF (6 - DATEPART(DW,@D)) < 0
         SET @NEWDATE = @D + 6 + (7 - DATEPART(DW,@D))
      ELSE
         SET @NEWDATE = @D + 6 - DATEPART(DW,@D)
   END
   ELSE IF UPPER(@DAY) = 'SATURDAY' OR UPPER(@DAY) = 'SAT'
   BEGIN
      IF (7 - DATEPART(DW,@D)) = 0
         SET @NEWDATE = @D + 7
      ELSE IF (7 - DATEPART(DW,@D)) < 0
         SET @NEWDATE = @D + 7 + (7 - DATEPART(DW,@D))
      ELSE
         SET @NEWDATE = @D + 7 - DATEPART(DW,@D)
   END
   ELSE
      SET @NEWDATE = NULL
   RETURN @NEWDATE
END
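Assuming the function above has been created in the current database, it is called like any scalar user-defined function, using the two-part owner-qualified name:

```sql
-- The date of the first Friday after June 1, 2005
SELECT dbo.NEXT_DAY('20050601', 'FRIDAY')
```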

Triggers
Oracle and SQL Server provide two primary mechanisms for enforcing business rules and data integrity: constraints and triggers. A trigger is a special type of stored procedure that automatically takes effect when an event occurs; the event can be INSERT, UPDATE, or DELETE. The trigger and the statement that fires it are treated as a single transaction, which can be rolled back from within the trigger. If an error is detected, the entire transaction is automatically rolled back.

Oracle uses two types of triggers on tables: table-level triggers and row-level triggers. If the keyword FOR EACH ROW is specified, the trigger is a row-level trigger. Row-level triggers are fired once for each affected row; for example, if an update statement changes 5 rows, the trigger fires 5 times. Table-level triggers fire only once, and individual row values cannot be captured in them. Row-level triggers capture the individual rows with the help of the keywords :NEW and :OLD.

Unlike Oracle, SQL Server does not have two types of triggers. Individual values are captured from two special tables: old values are fetched from the DELETED table, and the new, changed values are retrieved from the INSERTED table. Where Oracle refers to deleted values with the special variable :OLD, SQL Server uses the DELETED table, which stores copies of the affected rows during DELETE and UPDATE statements; during the execution of a DELETE or UPDATE statement, rows are deleted from the trigger table and transferred to the DELETED table. The DELETED and INSERTED tables in SQL Server are only conceptual; they do not physically exist.

A mapping of the functionality of triggers in Oracle and SQL Server, as well as a comparison of the syntax, is available under the topic "Triggers" in Chapter 6, "Developing: Databases Migrating Schemas."
The syntax for the triggers differs as follows:

Oracle triggers
A separate syntax is used for each triggering event:

INSERT trigger syntax
CREATE OR REPLACE TRIGGER <TRIGGER_NAME>
[ BEFORE | AFTER ] INSERT ON <TABLE_NAME>
[ FOR EACH ROW ]
DECLARE
BEGIN
EXCEPTION
END;

UPDATE trigger syntax
CREATE OR REPLACE TRIGGER <TRIGGER_NAME>
[ BEFORE | AFTER ] UPDATE OF <COLUMN_NAME> ON <TABLE_NAME>
[ FOR EACH ROW ]
DECLARE
BEGIN
EXCEPTION
END;


DELETE trigger syntax


CREATE OR REPLACE TRIGGER <TRIGGER_NAME>
[ BEFORE | AFTER ] DELETE ON <TABLE_NAME>
[ FOR EACH ROW ]
DECLARE
BEGIN
EXCEPTION
END;

SQL Server
Unlike Oracle, only one syntax is needed for triggers in SQL Server:
CREATE TRIGGER <TRIGGER_NAME> ON <TABLE>
{FOR | AFTER | INSTEAD OF} {[INSERT][,][UPDATE][,][DELETE]}
AS
[{IF UPDATE(column) [{AND|OR} UPDATE(column)]}]
EXECUTABLE STATEMENTS

The following example further illustrates the use of triggers. In this example, the PRODUCT table needs to be updated whenever an order is issued against any product. The following code uses an INSERT trigger to perform this task. Oracle
CREATE TRIGGER UPDATE_STOCK
AFTER INSERT ON ORDERDETAILS
FOR EACH ROW
BEGIN
   UPDATE PRODUCT
   SET UNITINSTOCK = UNITINSTOCK - :NEW.QUANTITY
   WHERE PRODUCTID = :NEW.PRODUCTID;
EXCEPTION
   WHEN OTHERS THEN NULL;
END;

SQL Server
CREATE TRIGGER UPDATE_STOCK ON ORDERDETAILS
AFTER INSERT
AS
BEGIN
   UPDATE PRODUCT
   SET UNITINSTOCK = UNITINSTOCK - INSERTED.QUANTITY
   FROM INSERTED
   WHERE PRODUCT.PRODUCTID = INSERTED.PRODUCTID
END

INSTEAD OF Triggers
In general, complex views are not updateable. Complex views are defined as views that include more than one table or contain GROUP BY expressions. Such views can, however, be updated using INSTEAD OF triggers, which both SQL Server and Oracle offer. An INSTEAD OF trigger is generally defined on a view rather than on a table; in Oracle, INSTEAD OF triggers can be applied only to views, while in SQL Server they can be applied to both views and tables. When an INSERT, UPDATE, or DELETE statement is executed against the object, the INSTEAD OF trigger is executed in place of the triggering statement. These triggers are executed after the INSERTED and DELETED tables are created, but before any other actions are taken; because they run before any constraints, they can perform preprocessing that supplements the constraint action.

The triggering mechanism is implemented so that an INSTEAD OF trigger is not called recursively. For example, if an UPDATE statement issued from within the trigger is executed against the view, the statement is processed as if the view had no INSTEAD OF trigger and is resolved against the base tables underlying the view. In this case, the view definition must meet all of the restrictions for an updateable view, and the columns changed by the UPDATE must resolve to a single base table. Each modification to an underlying base table starts the chain of applying constraints and firing AFTER triggers defined for that table. SQL Server provides most of the trigger functionality present in Oracle, with very few differences in implementation.
SQL Server, however, does not have BEFORE triggers; equivalent behavior can often be programmed using INSTEAD OF triggers on tables. In SQL Server, triggers on views have the same features as triggers on tables, as shown by the following syntax:
CREATE TRIGGER <TRIGGER_NAME> ON <VIEW>
{FOR | AFTER | INSTEAD OF} {[INSERT][,][UPDATE][,][DELETE]}
AS
BEGIN
   [{IF UPDATE(column) [{AND|OR} UPDATE(column)]}]
   EXECUTABLE STATEMENTS
END

An example of the use of INSTEAD OF triggers on views is given at http://msdn.microsoft.com/library/default.asp?url=/library/en-us/vdbt7/html/dvconusinginsteadoftriggersonviews.asp.
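The pattern can be sketched as follows. The V_SHIPPERS view and its insert trigger are illustrative assumptions built on the SHIPPERS table used earlier:

```sql
CREATE VIEW V_SHIPPERS AS
   SELECT COMPANYNAME, PHONE FROM SHIPPERS
GO

-- Redirect inserts against the view to the underlying base table
CREATE TRIGGER V_SHIPPERS_INS ON V_SHIPPERS
INSTEAD OF INSERT
AS
BEGIN
   INSERT INTO SHIPPERS (COMPANYNAME, PHONE)
   SELECT COMPANYNAME, PHONE FROM INSERTED
END
```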

Error Handling
Run-time errors in Oracle's PL/SQL programs are handled through exceptions and exception handlers. Exceptions are raised when errors occur. In addition to the predefined system exceptions, users can define their own errors in the declaration section using the syntax:
exception_name EXCEPTION;


The user-defined exceptions can be triggered using the syntax:


RAISE exception_name;

Each BEGIN END block has an associated exception handler section defined at the end whose syntax is as follows.
EXCEPTION
   WHEN exception_name THEN
      statements
END;

Although these block-level exception handler sections are absent in SQL Server, error handling is performed through the global variable @@Error, whose value can be passed to the calling program and handled there. SQL Server sets the value of the @@Error variable after each Transact-SQL statement. For example:
INSERT ...
IF @@Error <> 0
BEGIN
   SELECT 'Unexpected error occurred: Insert failed', @@Error
   RETURN @@Error
END

Oracle has many predefined exceptions, such as NO_DATA_FOUND, TOO_MANY_ROWS, ZERO_DIVIDE, and INVALID_CURSOR. The predefined exception OTHERS is of special interest because it can be used as a catch-all for any exception not handled elsewhere in the exception section. SQL Server error messages can be viewed by querying the sysmessages system table. Each SQL Server internal error is qualified with a severity level, ranging from 1 to 25. Errors with severity 19 and above are considered critical and may result in complete abortion of the batch; in such cases, @@Error may not capture the error raised. Similar in functionality to user-defined exceptions and the RAISE_APPLICATION_ERROR procedure in Oracle, SQL Server provides ways to create customized error messages that can be raised within Transact-SQL code. These are called user-defined errors and are raised with the RAISERROR statement. If you reference an error number in RAISERROR, you must first create the user-defined message by using the sp_addmessage system stored procedure; alternatively, you can specify a text message directly in the RAISERROR statement. For more information on how to create and raise user-defined errors, see http://msdn.microsoft.com/library/default.asp?url=/library/en-us/acdata/ac_8_con_05_3f76.asp.
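Both approaches can be sketched briefly; the message number 50001 and the message text are illustrative (user-defined message numbers must be 50001 or higher):

```sql
-- Register a reusable message with sp_addmessage
EXEC sp_addmessage @msgnum = 50001, @severity = 16,
   @msgtext = 'Order %s was not found.'

-- Raise the registered message, substituting an order ID
RAISERROR (50001, 16, 1, 'O-1001')

-- Or raise an ad hoc message without registering it first
RAISERROR ('Order %s was not found.', 16, 1, 'O-1001')
```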

Step 5: Job Scheduling


Oracle uses application code and operating system utilities to schedule jobs. SQL Server uses the SQL Server Agent service for this purpose. The information in this section describes creating a job and scheduling it in SQL Server.

Scheduling Functions and Jobs


Some frequently used scheduling functions that are packaged with Oracle are discussed in this section. They include:

DBMS_JOB. This is an Oracle-supplied package for scheduling jobs. Several different procedures are packed inside this package, including:

DBMS_JOB.INSTANCE. This procedure is used to assign a particular instance to execute a job.
DBMS_JOB.CHANGE. This is used to alter the schedule of a job.
DBMS_JOB.SUBMIT. After the creation of a job, this procedure submits it to the Oracle instance.
DBMS_JOB.RUN. Jobs can be executed on demand using this procedure.

cron. cron is a daemon: it is started once and lies dormant until it is required. Typically, a SQL file is written that calls a particular procedure; that SQL file is invoked from a shell script, and the script is scheduled using cron. This is how a job is commonly scheduled for Oracle running on UNIX.

In SQL Server, jobs can be created and scheduled in the following ways:

Using Enterprise Manager
Using the stored procedures SP_ADD_JOB and SP_ADD_JOBSCHEDULE

Using SQL Server Enterprise Manager


A job can be scheduled by using the following steps.
Note: To execute a job in the background on a scheduled basis, the SQL Server Agent service must be running.

1. Log in to Enterprise Manager.
2. Expand the server group and then expand the server. Expand Management, expand SQL Server Agent, and select the Jobs node.
3. Create a new job by right-clicking Jobs.
4. Enter a name for the job in the Name field.
5. Check the Enabled check box to make the job available for execution, either immediately or by scheduling it. A disabled job runs only if a user explicitly executes it.
6. In the Owner list, select a user to be the owner of the job.
7. Write a description of the job in the Description text area.
8. Select the Steps tab. A job step is an action that the job takes on a database or a server. Every job must have at least one job step. Job steps can be operating system commands, Transact-SQL statements, or ActiveX scripts.
9. Click New to open the job step dialog box. Enter a job step name in the Job step name box and select the type in the Type list. For example, you can click Operating system command (CmdExec).
10. Enter a value from 0 to 999999 in the Process exit code box.
11. Enter the operating system command or executable program in the Command box. If you select Transact-SQL statements or ActiveX scripts, specify the commands appropriate to that selection.
12. In the Schedules tab, specify the schedule. Jobs can be scheduled to run once, daily, weekly, or on a recurring basis.
13. In the Notifications tab, you can specify an e-mail address to notify if the job fails or succeeds. Messages can also be sent to a pager.

Using SP_ADD_JOB
Jobs in SQL Server can also be scheduled using the stored procedure SP_ADD_JOB. This procedure simply registers a job with the SQL Server Agent service; job steps and schedules must be added with other procedures. The syntax and parameter descriptions are provided in the following example:


SP_ADD_JOB [ @JOB_NAME = ] 'JOB_NAME'
   [ , [ @ENABLED = ] ENABLED ]
   [ , [ @DESCRIPTION = ] 'DESCRIPTION' ]
   [ , [ @OWNER_LOGIN_NAME = ] 'LOGIN' ]
   [ , [ @NOTIFY_LEVEL_EVENTLOG = ] EVENTLOG_LEVEL ]
   [ , [ @NOTIFY_LEVEL_EMAIL = ] EMAIL_LEVEL ]
   [ , [ @NOTIFY_LEVEL_NETSEND = ] NETSEND_LEVEL ]
   [ , [ @NOTIFY_LEVEL_PAGE = ] PAGE_LEVEL ]
   [ , [ @NOTIFY_EMAIL_OPERATOR_NAME = ] 'EMAIL_NAME' ]
   [ , [ @NOTIFY_NETSEND_OPERATOR_NAME = ] 'NETSEND_NAME' ]
   [ , [ @NOTIFY_PAGE_OPERATOR_NAME = ] 'PAGE_NAME' ]
   [ , [ @DELETE_LEVEL = ] DELETE_LEVEL ]

[ @JOB_NAME = ] 'JOB_NAME'

@JOB_NAME is the name of the job. The name must be unique and cannot contain the percent (%) character. The commonly used parameters include:

[ @ENABLED = ] ENABLED
@ENABLED indicates the status of the added job. ENABLED has a default of 1 (enabled). If 0, the job is not enabled and does not run according to its schedule, but it can be run manually.

[ @DESCRIPTION = ] 'DESCRIPTION'
@DESCRIPTION is the description of the job. This description can be a maximum of 512 characters.

[ @OWNER_LOGIN_NAME = ] 'LOGIN'
@OWNER_LOGIN_NAME is the name of the login that owns the job.

[ @NOTIFY_LEVEL_EVENTLOG = ] EVENTLOG_LEVEL
@NOTIFY_LEVEL_EVENTLOG is a value indicating when to place an entry in the Windows application log (Event Viewer) for this job. The value can be one of those described in Table 11.18.

Table 11.18: Values for EVENTLOG_LEVEL for SP_ADD_JOB

Value          Description
0              Never
1              On success
2 (default)    On failure
3              Always

[ @NOTIFY_LEVEL_EMAIL = ] EMAIL_LEVEL
The value indicates when to send e-mail upon the completion of this job. This uses the same values as listed in Table 11.18.

[ @NOTIFY_LEVEL_NETSEND = ] NETSEND_LEVEL
The value indicates when to send a network message upon the completion of this job. This uses the same values as listed in Table 11.18.

[ @NOTIFY_LEVEL_PAGE = ] PAGE_LEVEL
The value indicates when to send a page upon the completion of this job. This uses the same values as listed in Table 11.18.

[ @NOTIFY_EMAIL_OPERATOR_NAME = ] 'EMAIL_NAME'

This is the e-mail address of the person to send e-mail to when EMAIL_LEVEL is reached.

[ @NOTIFY_NETSEND_OPERATOR_NAME = ] 'NETSEND_NAME'
Name of the operator to whom the network message is sent upon completion of this job.

[ @NOTIFY_PAGE_OPERATOR_NAME = ] 'PAGE_NAME'
Name of the person to page upon completion of this job.

[ @DELETE_LEVEL = ] DELETE_LEVEL
This value indicates when to delete the job. This uses the same values as listed in Table 11.18.

The following example shows how to create a job using the stored procedure. It creates a job named FILE2. The job is enabled with all notification levels set to zero, so no notifications are sent on success or failure. The DBO login is assigned as the owner of this job.
USE MSDB
EXEC SP_ADD_JOB
   @JOB_NAME = 'FILE2',
   @ENABLED = 1,
   @DESCRIPTION = 'FILE DUMP JOB',
   @OWNER_LOGIN_NAME = 'DBO',
   @NOTIFY_LEVEL_EVENTLOG = 0,
   @NOTIFY_LEVEL_EMAIL = 0,
   @NOTIFY_LEVEL_NETSEND = 0,
   @NOTIFY_LEVEL_PAGE = 0,
   @DELETE_LEVEL = 0

After the job is created, it must have steps to execute; the steps define the work of the job. The following example runs a Data Transformation Services package by using the DTSRUN utility. The SUBSYSTEM must be CMDEXEC to run a utility from a job.
USE MSDB
EXEC SP_ADD_JOBSTEP
   @JOB_NAME = 'FILE2',
   @STEP_NAME = 'STEP1 IN FILE2',
   @SUBSYSTEM = 'CMDEXEC',
   @COMMAND = 'DTSRUN /S "(LOCAL)" /N "FILETRANSFER" /U "SA" /P "SA" ',
   @RETRY_ATTEMPTS = 5,
   @RETRY_INTERVAL = 5

@SUBSYSTEM identifies the subsystem used by the SQL Server Agent service to execute the step. The available values are described in Table 11.19.

Table 11.19: Values for @SUBSYSTEM for SP_ADD_JOBSTEP

Subsystem Type      Description
ACTIVESCRIPTING     Active Script
CMDEXEC             Operating-system command or executable program
DISTRIBUTION        Replication Distribution Agent job
SNAPSHOT            Replication Snapshot Agent job
LOGREADER           Replication Log Reader Agent job
MERGE               Replication Merge Agent job
'TSQL' (default)    Transact-SQL statement

@COMMAND is the actual command to execute. In the preceding example, the DTSRUN utility is used, but any executable can be called through @COMMAND.
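As a sketch of the default TSQL subsystem, a second step could run a Transact-SQL command instead of an external utility. The database and procedure names here are hypothetical and only illustrate the parameter usage:

USE MSDB
EXEC SP_ADD_JOBSTEP
    @JOB_NAME = 'FILE2',
    @STEP_ID = 2,
    @STEP_NAME = 'STEP2 IN FILE2',
    @SUBSYSTEM = 'TSQL',
    @DATABASE_NAME = 'HRAPP',             -- hypothetical database
    @COMMAND = 'EXEC DBO.LOAD_SUMMARY'    -- hypothetical stored procedure

Because 'TSQL' is the default subsystem, the @SUBSYSTEM line could also be omitted for this step.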


Any number of steps can be added to a single job. The sequence of the steps is determined by the @STEP_ID parameter of this procedure. After the steps have been created, the job can either be executed manually or scheduled. To schedule it, SP_ADD_JOBSCHEDULE is used with a set of parameters. The following example shows how to schedule the job FILE2 to run every day at 1:00 A.M. in SQL Server.
USE MSDB
EXEC SP_ADD_JOBSCHEDULE
    @JOB_NAME = 'FILE2',
    @NAME = 'DAILY LOAD',
    @FREQ_TYPE = 4,    -- DAILY
    @FREQ_INTERVAL = 1,
    @ACTIVE_START_TIME = 10000

@FREQ_TYPE is the frequency type. It can accept any one of the values listed in Table 11.20.

Table 11.20: Values for @FREQ_TYPE for SP_ADD_JOBSCHEDULE

Value   Description
1       Once
4       Daily
8       Weekly
16      Monthly
32      Monthly, relative to @FREQ_INTERVAL
64      Run when SQL Server Agent service starts
128     Run when the computer is idle

The behavior of @FREQ_INTERVAL depends upon the value of @FREQ_TYPE. For example, if @FREQ_TYPE is set to once, then @FREQ_INTERVAL has no effect. If @FREQ_TYPE is set to weekly, then @FREQ_INTERVAL indicates on which days of the week the job should run. For more information on this topic, refer to the MSDN library at http://www.msdn.com.
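To illustrate the interaction of the two parameters, the FILE2 job could be scheduled weekly instead of daily. This is a sketch: for a weekly frequency, @FREQ_INTERVAL is a bitmask of weekdays (1 = Sunday, 2 = Monday, 4 = Tuesday, and so on), and @FREQ_RECURRENCE_FACTOR sets how many weeks apart the runs occur.

USE MSDB
EXEC SP_ADD_JOBSCHEDULE
    @JOB_NAME = 'FILE2',
    @NAME = 'WEEKLY LOAD',
    @FREQ_TYPE = 8,                 -- WEEKLY
    @FREQ_INTERVAL = 2,             -- MONDAY (bitmask: 1 = Sunday, 2 = Monday, ...)
    @FREQ_RECURRENCE_FACTOR = 1,    -- every week
    @ACTIVE_START_TIME = 10000      -- 1:00 A.M.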

Step 6: Interface File Conversion


Most applications have inbound and outbound interface files. During the application migration, it is the application developer's responsibility to account for all inbound as well as outbound files. External applications expect specified formats: even after the conversion from Oracle, an external application will still need its interface files in the same format. Similarly, an external application might send interface files to the database. Because the format must remain the same, the application code must be converted so that it continues to handle each file correctly, even though the file is not in a SQL Server-specific format.

Step 7: Workflow Automation


When performing the application migration, workflow must also be examined. This section discusses how to send mail from the SQL Server database.


Sending Mail
Sending mail from an application is a very common requirement. Oracle includes a package named UTL_SMTP to provide e-mail functions. Using UTL_SMTP, application developers write code to send notification mail from PL/SQL programs. Some server-side setup is needed before UTL_SMTP can be used.

SQL Server has several options for sending e-mail. Before using these options, the mail accounts and mail service on the server where SQL Server resides need to be set up and configured. After this is done, the XP_SENDMAIL stored procedure can be used to send notification mail. XP_SENDMAIL supports all the options of mail services, including attachments. For example, the error log file can be sent to the administrator in case of server-side errors.

Before using mail options, set up SQL Mail in SQL Server Enterprise Manager using the following steps:
1. Expand the server group and then expand the server.
2. Expand Support Services, right-click SQL Mail, and then click Properties.
3. In the Profile list, select the profile that you created earlier.
4. Click Test if you want to test SQL Mail.

After SQL Mail is set up, jobs can be configured to send mail alerts on job completion to indicate whether the job succeeded or failed. To configure the jobs, follow these steps:
1. Go to Jobs, select a particular job, and then click Properties.
2. On the Notification tab, enable the mailing option upon success or failure.
3. To send mail from jobs, the Mail component needs to be started. It can be started automatically whenever SQL Server Agent starts. Alternatively, XP_STARTMAIL can be used to start the mail agent of SQL Server.

In addition, XP_SENDMAIL can be used to send mail from T-SQL batches, triggers, or procedures. The syntax is shown in the following example:
XP_SENDMAIL {[@RECIPIENTS =] 'RECIPIENTS [;...N]'}
    [,[@MESSAGE =] 'MESSAGE']
    [,[@QUERY =] 'QUERY']
    [,[@ATTACHMENTS =] 'ATTACHMENTS [;...N]']
    [,[@COPY_RECIPIENTS =] 'COPY_RECIPIENTS [;...N]']
    [,[@BLIND_COPY_RECIPIENTS =] 'BLIND_COPY_RECIPIENTS [;...N]']
    [,[@SUBJECT =] 'SUBJECT']

The following example sends a mail to the mail ID SQLDBA@ABC.COM and a copy to another mail ID SQLDBA1@ABC.COM. This also sends an attachment with the name "error.log" that is in the directory C:\.
XP_SENDMAIL @RECIPIENTS='SQLDBA@ABC.COM',
    @MESSAGE='Test mail from SQL Mail',
    @ATTACHMENTS='C:\ERROR.LOG',
    @COPY_RECIPIENTS='SQLDBA1@ABC.COM',
    @SUBJECT='Test Subject'

Step 8: Performance Tuning


After the code changes have been incorporated, the next step is to review and tune the performance. Make sure proper joins are in place and all possible indexed columns are used while executing the query. General techniques and guidelines are discussed in this section.


After following these optimization techniques, test the application with a test database. If possible, use a copy of production data while testing so that real response times can be recorded and more time can be spent tuning the application.

The goal of performance tuning is to minimize the response time for each query and to maximize the throughput of the entire database server by reducing network traffic, disk I/O, and CPU time. This goal is achieved through understanding application requirements, the logical and physical structure of the data, and tradeoffs between conflicting uses of the database. Performance strategies vary in their effectiveness, and systems with different purposes, such as operational systems and decision support systems, require different performance skills.

System performance is designed and built into a system; it does not just happen. Performance issues should be considered throughout the development cycle, not just at the end when the system is implemented. Many significant performance improvements are achieved by careful design from the outset. Performance problems are usually the result of competition for, or exhaustion of, system resources. When a system resource is exhausted, the system is unable to scale to higher levels of performance. Although other system-level performance issues, such as memory or hardware, are certainly candidates for study, experience shows that the performance gains from tuning a query are only incremental when there are fundamental problems in its design.

This section describes how SQL Server queries can be optimized when migrating from Oracle applications. In Oracle, for example, a DBA concentrates on tuning the database and queries in several areas, such as:
- Memory tuning
- I/O tuning
- SQL tuning
- Managing data and transactions

SQL Server automatically manages available hardware resources, reducing the need for extensive system-level manual tuning. Apart from this automatic management, SQL Server provides several ways to optimize performance during the development and after the system is released into production. These methods are discussed under the following headings.

Query Tuning
It may be tempting to address a performance problem solely by system-level server performance tuning; for example, by altering memory size, type of file system, and number and type of processors. Generally, most performance problems cannot be resolved this way. They must be addressed by analyzing the application, the queries and updates that the application is submitting to the database, and how these queries and updates interact with the database schema. Unexpected long-lasting queries and updates can be caused by:
- Slow network communication
- Inadequate memory in the server computer
- Lack of useful statistics
- Out-of-date statistics
- Lack of useful indexes
- Lack of useful data striping
- Improper database design

These issues can be resolved only by looking at the query and its path of execution. SQL Server has several ways to identify all of these issues. Two of the commonly used methods, SQL Profiler and the SET SHOWPLAN statement, are discussed here. Refer to Appendix B for a list of references about improving performance.

SQL Profiler
This graphical tool allows system administrators to monitor events in an instance of SQL Server. Each event can be captured to a file or a table, and the captured data can then be used to analyze a query. A stored procedure that hampers performance can also be monitored. Use SQL Profiler only for suspicious events, because tracing can slow down the server and the trace file can grow large. Specific events can be filtered so that only a subset of the data is monitored. After the events have been traced, SQL Profiler allows the captured event data to be replayed against an instance of SQL Server, effectively re-executing the saved events as they occurred originally. SQL Profiler can be used to:
- Monitor the performance of an instance of SQL Server.
- Debug Transact-SQL statements and stored procedures.
- Identify slow-executing queries.
- Test SQL statements and stored procedures in the Developing Phase of a project by stepping through statements to confirm that the code works as expected.
- Troubleshoot problems in SQL Server by capturing events on a production system and replaying them on a test system. This is useful for testing or debugging purposes and allows users to continue using the production system without interference.
- Audit and review activity that occurred on an instance of SQL Server. This allows a security administrator to review any of the auditing events, including the success and failure of login attempts and of permissions in accessing statements and objects.
- Provide input to the Index Tuning Wizard to determine index usage suggestions.

Using SET SHOWPLAN in SQL Server


SET SHOWPLAN is similar to the Oracle EXPLAIN PLAN command; it is used to display detailed information about how a query is executed. SET SHOWPLAN_ALL causes SQL Server not to execute Transact-SQL statements. Instead, SQL Server returns detailed information about how the statements would be executed and provides estimates of the resource requirements for the statements. The syntax for this command is:
SET SHOWPLAN_ALL {ON | OFF}

When SET SHOWPLAN_ALL is ON, SQL Server returns execution information for each Transact-SQL statement without executing it. After this option is set ON, information about all subsequent Transact-SQL statements is returned until the option is set OFF.
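As a brief sketch, the option brackets the statements whose plans are wanted. Because SET SHOWPLAN_ALL must be the only statement in its batch, GO separators are required; the table and column names here are only illustrative:

SET SHOWPLAN_ALL ON
GO
SELECT LAST_NAME
FROM EMPLOYEE_INFO          -- hypothetical table
WHERE EMPLOYEE_ID = 100     -- plan rows are returned; no data row is fetched
GO
SET SHOWPLAN_ALL OFF
GO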


For example, if a CREATE TABLE statement is executed while SET SHOWPLAN_ALL is ON, SQL Server returns an error message. When SET SHOWPLAN_ALL is OFF, SQL Server executes the statements without generating a report. Use SET SHOWPLAN_TEXT to return readable output for Microsoft MS-DOS applications, such as the osql utility. SET SHOWPLAN_TEXT and SET SHOWPLAN_ALL cannot be specified inside a stored procedure; they must be the only statements in a batch. SET SHOWPLAN_ALL returns information as a set of rows that form a hierarchical tree representing the steps taken by the SQL Server query processor as it executes each statement. Each statement reflected in the output contains a single row with the text of the statement, followed by several rows with the details of the execution steps. For a more detailed view of the output obtained by turning on SET SHOWPLAN_ALL, refer to http://msdn.microsoft.com/library/default.asp?url=/library/en-us/tsqlref/ts_setset_365o.asp.


12
Developing: Applications Migrating Perl
Introduction and Goals
This chapter contains a detailed discussion of the changes that must be made to Perl applications to work with Microsoft SQL Server. At the conclusion of this chapter, the Perl application should be capable of successfully connecting to the SQL Server database that was migrated from Oracle, and the solution can be tested. As discussed in the "Define the Solution Concept" section of Chapter 2, "Envisioning Phase," there are four different strategies available for transitioning applications in an Oracle to SQL Server migration project. The strategies are:
- Interoperate the application with UNIX.
- Port or rewrite the application to the Microsoft .NET platform.
- Port or rewrite the application to the Microsoft Win32 platform.
- Quick port using the Windows Services for UNIX 3.5 platform.

Perl is an interpreted scripting language that runs on both UNIX and Windows. Because of the cross-platform capabilities of Perl, some of these strategies are more logical than others. For example, because Perl can be ported to Windows, there is no need to rewrite the application in the .NET Framework or for the Win32 environment. Also, because the application can run within the Windows environment, a quick port using Windows Services for UNIX is not necessary and would only prevent the application from tightly integrating with Windows. Based on the available migration strategies, two scenarios can be developed to migrate Perl applications. The solution scenarios are:

Scenario 1: Interoperation of Perl on UNIX with SQL Server
If the business requirements do not include eliminating the UNIX environment, an interoperation strategy can be implemented quickly. Few changes need to be made to the source code, and installing a new driver allows the Perl application to connect to a SQL Server database. Interoperation can also be used as an interim step if the migration is performed in phases.

Scenario 2: Port the Perl Application to Win32
Perl applications can also be ported to run natively on the Windows platform. As with interoperation, few changes need to be made to the source code.

Note If your Perl applications use UNIX system calls extensively (such as frequent use of syscall and exec), porting them to Windows Services for UNIX/Interix may be a suitable option because Interix has more support for the desired system calls. Chapter 2, "Envisioning Phase," provides a more detailed discussion of when choosing Services for UNIX would be more appropriate when moving to a Windows-based platform.

You will need the following to implement this option (porting Perl applications to Services for UNIX/SQL Server):
- A port of Perl for Interix (downloadable from Interop Systems at http://www.interopsystems.com/tools/warehouse.htm), which ships with the necessary DBI, DBD::ODBC, and DBD::Sybase modules.
- A connectivity driver for the SQL Server database. This is provided by the port of FreeTDS on Interix, downloadable from Interop Systems at http://www.interopsystems.com/tools/db.htm. FreeTDS provides two connectivity options for the Perl application to connect to the SQL Server database. One is a library called CTlib, which can be accessed through the DBD::Sybase module. The other is an ODBC driver, which can be accessed through the DBD::ODBC module. If you use the ODBC driver, you will also need an ODBC driver manager. Two different ODBC driver managers, iODBC and unixODBC, are available for Windows Services for UNIX from http://www.interopsystems.com/tools/warehouse.htm.

This technology option, however, has not been fully tested as a part of development of this solution and therefore has not been detailed further.

Introduction to the Perl DBI Architecture


There are several different modules available for use with Perl that extend the capabilities of the language. One of these modules, the Database Independent Interface (DBI),


provides a common programming interface for several different proprietary databases, including Oracle and SQL Server. DBI offers a single set of functions, or methods, that interact with these databases. These functions form an abstraction layer that allows the programmer to write code using generic DBI calls without needing specific API knowledge for the particular database. This layer makes the Perl code portable across databases. The generic DBI calls are interpreted into RDBMS-specific functions by specialized Database Driver (DBD) modules. A DBD is a database-specific driver that translates DBI calls for the target database. The DBI API forwards each call to the appropriate database driver module and provides support such as dynamic loading of drivers, error checking, and management. Figure 12.1 provides a visual representation of the path from a Perl script to the backend database using DBI and DBD.
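To make the abstraction concrete, the following minimal sketch uses only generic DBI calls; the DSN, credentials, and table name are hypothetical. Switching the backend from Oracle to SQL Server changes only the connect string, not the DBI calls:

#!/usr/bin/perl
use strict;
use DBI;

# Hypothetical ODBC DSN and credentials; swapping the DBD changes only this string.
my $dbh = DBI->connect("dbi:ODBC:SS_HR_DB", "daveb", "cougar",
                       { RaiseError => 1 }) or die $DBI::errstr;

# The same generic calls work against Oracle, SQL Server, or any other DBD.
my $sth = $dbh->prepare("SELECT last_name FROM Employee_Info");
$sth->execute();
while ( my ($last_name) = $sth->fetchrow_array() ) {
    print "$last_name\n";
}
$sth->finish();
$dbh->disconnect();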

Figure 12.1
Perl communication with databases using DBI API and DBD drivers

DBI also supports the Open Database Connectivity (ODBC) interface. While DBI itself is an abstraction layer, ODBC forms an additional abstraction layer. This layer, consisting of an ODBC driver and an ODBC driver manager, mirrors the functions of the DBI and DBD within Perl. If the current solution uses ODBC, then a port using ODBC may provide the most direct solution. In other instances, ODBC may simply duplicate other layers and add processing overhead; in those situations, ODBC is not recommended. Figure 12.2 shows the path from Perl script to backend database involving ODBC drivers.

Figure 12.2
ODBC adds additional layers of connectivity between the application and database

Scenario 1: Interoperation of Perl on UNIX with SQL Server


Interoperation between a UNIX-based Perl application and a SQL Server database is possible because of the availability of a driver that interfaces between these technologies. Because of the DBI API, this type of migration is usually a relatively simple process. Only minimal changes may need to be made to the Perl application's source


code. The cases described in this section discuss options available for Perl applications using ODBC or Oracle drivers. The following cases assume that the Perl application currently uses either the ODBC DBD or the Oracle DBD within the existing solution.

Case 1: Interoperating an ODBC DBD Application


If the original application was written using the ODBC DBD, then the best migration option is to continue to use this driver. Because ODBC is not database-specific, no changes should need to be made to the application's source code other than those concerning connectivity with the SQL Server database. To interoperate the Perl application using ODBC, follow these common steps:
1. Install the ODBC driver.
The DBD::ODBC module requires a driver manager and a driver to interact with SQL Server. DBD::ODBC is packaged with the Independent ODBC (iODBC) driver manager from OpenLink Software (http://www.openlinksw.com/). For more information about this driver, refer to http://www.freetds.org/userguide/perl.htm#DBD.ODBC. Another driver manager that can be used in an interoperation scenario, unixODBC, is available from http://www.freetds.org/userguide/prepodbc.htm.
FreeTDS is a popular ODBC driver used to connect to SQL Server from a UNIX-based application. FreeTDS is an open source implementation of the Tabular Data Stream (TDS) protocol that gives the application native access to SQL Server. OpenLink also offers drivers for SQL Server that use the FreeTDS implementation of TDS. Detailed instructions for building and configuring this driver are available at http://www.freetds.org/userguide/perl.htm. For detailed steps on installing the unixODBC or FreeTDS drivers, see Appendix D, "Installing Common Drivers and Applications." Even though the discussion relates to installing FreeTDS in a Windows Services for UNIX environment, it applies equally to UNIX.
2. Create a SQL Server data source.
To function, ODBC needs a data source to connect to the database. The Data Source Name (DSN) is generally defined in an odbc.ini file that is used by the driver manager, which uses the DSN to load the ODBC driver. iODBC offers a graphical user interface to set up the DSN. Complete instructions are available from http://www.iodbc.org/index.php?page=docs/odbcstory. The DSN can also be configured by manually modifying the odbc.ini file. The following example file uses an Oracle DSN.
[ODBC Data Sources]
ORA_HR_DB=Sample Oracle8 dsn

[ORA_HR_DB]
Driver=/opt/odbc/lib/ivor8x01.so
Description=Oracle8
ServerName=uxdbp1
LogonID=daveb
Password=cougar

This configuration file can be modified to use the SQL Server DSN, as shown in the following example. Note that the [ORA_HR_DB] section is replaced with [SS_HR_DB]. Though most of the keys remain, the values are modified to allow for the SQL Server data source.


[ODBC Data Sources]
SS_HR_DB=Sample MS SQLServer

[SS_HR_DB]
Driver=/usr/local/freetds/lib/libtdsodbc.so
Description=SQL Server 2000
Database=hrapp
UID=daveb
PWD=cougar
Address=win2kdbp1,1433

Note The isql command line utility can be used to check the validity of the DSN entry when using unixODBC. This allows you to ensure that the entries in the odbc.ini are correct. The syntax for use is:
isql -v DSN username password

3. Test the connectivity. iODBC contains a utility named odbctest which can be used to test the DSN entries and interact with the database by connecting and issuing queries directly without any code. The syntax for performing this test is:
odbctest "DSN=SS_HR_DB;UID=daveb;PWD=cougar"

4. Change the existing application to use SQL Server as the data source. The following sample Perl program from an Oracle environment contains functions to connect, prepare, bind, execute, and disconnect. Compare it with the modified code in the example after it; only minor adjustments are needed to accommodate the data source change.
#Load the DBI module
use DBI;

my $connectstring_oracle = 'ora_hr_db';
my $username = 'daveb';
my $password = 'cougar';
my $tablename = 'Employee_Info';

#Connecting to Oracle DB using ODBC
my $dbh = DBI->connect("dbi:ODBC:$connectstring_oracle", "$username", "$password");

#Preparing the SQL statement
$sth = $dbh->prepare("insert into $tablename (last_name) values (?)");

#Binding the value to the parameter at run time
$sth->bind_param( 1, "LName" );

#Executing the query
$sth->execute() or warn $sth->errstr(); # check for error
$sth->finish();


#Disconnect from Oracle database
$dbh->disconnect
    or warn "Can't disconnect from the Oracle $connectstring_oracle database: $DBI::errstr\n";

The same functions are modified to use a SQL Server data source in the following example:
my $connectstring_oracle = 'ora_hr_db'; my $username = 'daveb'; my $password = 'cougar'; my $tablename = 'Employee_Info'; #Connecting to Oracle DB using ODBC my $dbh = DBI->connect("dbi:ODBC:$connectstring_oracle", "$username", "$password") to my $connectstring_ss = 'ss_hr_db'; my $username = 'daveb'; my $password = 'cougar'; my $tablename = 'Employee_Info'; #Connecting to SQL Server DB using ODBC my $dbh = DBI->connect("dbi:ODBC:$connectstring_ss", "$username", "$password")

5. Change all embedded SQL statements to T-SQL. This is a step common to all migrations. Refer to Chapter 11, "Developing: Applications Oracle SQL and PL/SQL," for a detailed discussion on modifying Oracle SQL to be SQL Server-compliant.

Case 2: Interoperating an Oracle DBD Application


If the original application was written using the Oracle DBD, there are two possible options that can be evaluated. The recommended option is to replace the Oracle DBD with a Sybase DBD. The second option is to migrate to an ODBC framework similar to the one seen in "Case 1: Interoperating an ODBC DBD Application." Sybase DBD offers better performance than ODBC.

Migrate to SQL Server Using Sybase DBD


Because SQL Server was originally based on the Sybase data structure, it is possible to use the Sybase DBD to connect to SQL Server. To do so, follow these steps:
1. Install SQL Server library support.
The FreeTDS driver is needed for this method. Instructions for installing and configuring the driver are located in Appendix D, "Installing Common Drivers and Applications." Even though the discussion relates to installing FreeTDS in a Windows Services for UNIX environment, it applies equally to UNIX. Because the driver is not being used with ODBC, the --disable-odbc switch can be used with the configure command while installing FreeTDS. Details on the use of this command are available at http://www.freetds.org/userguide/config.htm.
2. Configure the data source.

After FreeTDS is installed, the freetds.conf file should be modified to include the SQL Server database information. A sample entry is shown in the following example:
[SS_HR_DB]
host = win2kdbp1    # or IP address
port = 1433
tds version = 8.0
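Before wiring up Perl, the entry can be checked with the tsql command-line utility that ships with FreeTDS. The server name and credentials below are those of the sample entry above:

tsql -S SS_HR_DB -U daveb -P cougar

If the entry is correct, tsql presents an interactive prompt connected to the SQL Server instance; exit reachs the shell again.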


3. Install the Sybase DBD module. For more information on installing this Perl module, refer to http://search.cpan.org/~mewp/DBD-Sybase/Sybase.pm. In addition, the SYBASE environment variable should be set to the location of the FreeTDS installation. If using bash or ksh, the following commands can be used:
export SYBASE=/usr/local/freetds

4. Modify the application's source code. Minor changes will need to be made to the source code to allow for the data source change. The following sample code shows some different implementation options of the Perl language that may appear in your existing source code. Compare these to the second code sample that has been modified to use the Sybase DBD.
# load the DBI module
use DBI;

# Connect to the Oracle Database
my $dbh = DBI->connect("dbi:Oracle:ora_hr_db", 'userName', 'password',
                       { RaiseError => 1, AutoCommit => 0 } )
    || die "Database connection not made: $DBI::errstr";

#Prepare a SQL statement
my $sql = qq{ SELECT id, name, title, phone FROM employees };
my $sth = $dbh->prepare( $sql );

#Execute the statement
$sth->execute();

my( $id, $name, $title, $phone );

# Bind the results to the local variables
$sth->bind_columns( undef, \$id, \$name, \$title, \$phone );

#Retrieve values from the result set
while( $sth->fetch() ) {
    print "$name, $title, $phone\n";
}


#Close the connection
$sth->finish();
$dbh->disconnect();

5. Change the connection to the Sybase DBI and SQL Server DSN. The script in Step 4 has been rewritten to use DBD::Sybase. Note that the changes have been made in the header for the DBD and in the connection string for SQL Server. The rest of the Perl code is untouched by this change. This modified code is shown in the following example.
# load the DBI module
use DBI;
use DBD::Sybase;

BEGIN {
    $ENV{SYBASE} = "/usr/local";
}

# Connect to the SQL Server Database
my $dbh = DBI->connect("dbi:Sybase:server=ss_hr_db;database=hrapp",
                       'userName', 'password',
                       { RaiseError => 1, AutoCommit => 0 } )
    || die "Database connection not made: $DBI::errstr";

#Prepare a SQL statement
my $sql = qq{ SELECT id, name, title, phone FROM employees };
my $sth = $dbh->prepare( $sql );

#Execute the statement
$sth->execute();

my( $id, $name, $title, $phone );

# Bind the results to the local variables
$sth->bind_columns( undef, \$id, \$name, \$title, \$phone );

#Retrieve values from the result set
while( $sth->fetch() ) {
    print "$name, $title, $phone\n";
}

#Close the connection
$sth->finish();
$dbh->disconnect();


6. Change all embedded SQL statements to T-SQL. Refer to Chapter 11, "Developing: Applications Oracle SQL and PL/SQL," for a detailed discussion on modifying Oracle SQL to be SQL Server-compliant.

Migrate to SQL Server Using ODBC


To migrate an Oracle DBD connection to ODBC, follow these steps:
1. Install the ODBC driver.
FreeTDS should be installed. For more information, see Appendix D, "Installing Common Drivers and Applications." Even though the discussion relates to installing FreeTDS in a Windows Services for UNIX environment, it applies equally to UNIX.
2. Configure the DSN settings.
The steps for creating an ODBC DSN are shown in "Case 1: Interoperating an ODBC DBD Application."
3. Install the ODBC DBD module.
Installation instructions are available at http://search.cpan.org/dist/perl/INSTALL and http://www.easysoft.com/products/2002/perl_dbi_dbd_odbc.phtml#3_0. In addition, the following environment variables need to be set:
- ODBCHOME. The directory where the ODBC driver manager is installed.
- DBI_DSN. The DBI data source.
- DBI_USER. The user name to connect to the database.
- DBI_PASS. The password for the database user name.

4. Modify the source code. The original script written for use with DBD::Oracle has to be changed to connect to SQL Server through the ODBC DSN. Changes are limited to the connect() method as shown in the following code.
# load the DBI module
use DBI;

# Connect to the SQL Server database through the ODBC DSN
my $dbh = DBI->connect("dbi:ODBC:ss_hr_db", 'userName', 'password',
                       { RaiseError => 1, AutoCommit => 0 } )
    || die "Database connection not made: $DBI::errstr";

#Prepare a SQL statement
my $sql = qq{ SELECT id, name, title, phone FROM employees };
my $sth = $dbh->prepare( $sql );

#Execute the statement
$sth->execute();

my( $id, $name, $title, $phone );


# Bind the results to the local variables
$sth->bind_columns( undef, \$id, \$name, \$title, \$phone );

#Retrieve values from the result set
while( $sth->fetch() ) {
    print "$name, $title, $phone\n";
}

#Close the connection
$sth->finish();
$dbh->disconnect();

Note ODBC connections can also be made without using DSN sources. The difference from a DSN is that the required configuration information is embedded in the Perl code instead of residing in a separate initialization file. The following example shows a non-DSN connection:
my $dsn  = 'DBI:ODBC:Driver={SQL Server}';
my $host = 'Server=hostname';
my $database = 'Database=dbname';
my $user = 'username';
my $auth = 'password';

my($dbh) = DBI->connect("$dsn;$host;$database", $user, $auth,
                        { RaiseError => 1, AutoCommit => 1});

5. Change all embedded SQL statements to T-SQL. Refer to chapter 11, "Developing: Applications Oracle SQL and PL/SQL," for a detailed discussion on modifying Oracle SQL to be SQL Server-compliant.

Scenario 2: Porting the Perl Application to Windows


The migration of a Perl application in a UNIX environment to Windows, moving the backend to a SQL Server database, is not much more effort than that needed under the interoperation scenario. It is possible to port the Perl application to both the .NET and the Win32 environments. The port to .NET is made possible by the PerlNET utility, which can wrap Perl programs into a .NET component. However, the application has to be ported from the UNIX environment to the Windows environment before the conversion to .NET can be performed. Details about PerlNET are available at http://aspn.activestate.com/ASPN/docs/PDK/6.0/PerlNET.html#perlnet_top. A few additional steps should be followed to successfully move the Perl application to Win32. These steps are discussed in the following two cases.

Case 1: Porting a Perl Application using ODBC DBD


To port the Perl application to the Windows environment, follow these steps: 1. Install Perl in the target Windows environment. ActiveState Perl is available from http://www.activestate.com. For more information on installing ActivePerl, see Appendix D, "Installing Common Drivers and Applications." 2. Install DBI and DBD::ODBC.


The DBI and DBD::ODBC modules can be installed using Perl Package Manager (PPM), which is installed with ActivePerl. PPM is an interactive command-line utility launched by typing ppm at the command line. Download the DBI and DBD::ODBC files to a local directory (for example, c:\perlppd). These files can be downloaded from http://ppm.activestate.com/PPMPackages/zips/. The modules can then be installed by typing the following commands:
PPM> rep add new c:\perlppd
PPM> install DBI
PPM> install DBD-ODBC
PPM> quit

3. Install the ODBC driver. There are several options for acquiring and installing an ODBC driver for SQL Server:
- An ODBC driver for SQL Server is included on the SQL Server distribution CD and is installed as part of the SQL Server client installation. Refer to http://msdn.microsoft.com/library/default.asp?url=/library/en-us/howtosql/ht_install_029l.asp for information on installing the SQL Server client.
- OpenLink Software offers an ODBC solution that eliminates the need to install SQL Server client software on the server where the Perl application runs. Details on this driver are available at http://uda.openlinksw.com/odbc/st/odbcsqlserver-st/.
- DataDirect offers Connect for ODBC drivers that also provide clientless access to SQL Server. Details are available at http://www.datadirect.com/products/odbc/index.ssp.

4. Configure the ODBC data source. An ODBC data source can be configured by modifying odbc.ini as shown in the "Case 1: Interoperating an ODBC DBD Application" section of Scenario 1. The data source can also be configured using the ODBC Data Source Administrator utility that is bundled with Windows. This utility can be accessed from the Control Panel by choosing Administrative Tools, then Data Sources (ODBC). Some database vendors also provide utilities to configure DSNs.
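As an alternative sketch, the same settings can be kept in a File DSN, a plain-text file readable by the ODBC Data Source Administrator. The file name, server, and database below are placeholders taken from examples elsewhere in this guide:

```ini
; hrapps.dsn - File DSN sketch (server and database names are placeholders)
[ODBC]
DRIVER=SQL Server
SERVER=win2kdbp1
DATABASE=hrapps
Trusted_Connection=Yes
```

A File DSN can be checked into source control alongside the application, which makes it easier to keep development and production connection settings in sync.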


Figure 12.3
Configuring the ODBC System DSN

Note In Windows, DSNs should be created as a System DSN rather than a User DSN because of permissions restrictions associated with the latter. Further steps for creating a data source are available at http://msdn.microsoft.com/library/default.asp?url=/library/en-us/howtosql/ht_6_odbcht_1oqc.asp.
5. Change the application to use SQL Server as the data source. The move to Windows does not add any overhead in terms of modifications to the application. The changes are driven by the change in DSN and are similar to the changes discussed in the "Case 1: Interoperating an ODBC DBD Application" section of Scenario 1.
6. Change all embedded SQL statements to T-SQL. This step is common to all migrations. Refer to Chapter 11, "Developing: Applications Oracle SQL and PL/SQL," for a detailed discussion on modifying Oracle SQL to be SQL Server-compliant.

Case 2: Porting a Perl Application Using Oracle DBD


When migrating from a UNIX platform to a Windows platform, there are two options for replacing the DBD::Oracle module. One option is to use the DBD::ODBC module. The other is to use ActiveX Data Objects (ADO) through the DBD::ADO module. Using DBD::ODBC is recommended in most instances and is discussed in greater detail in the "Case 1: Interoperating an ODBC DBD Application" section of Scenario 1. DBD::ADO is a DBI driver that acts as an interface to other lower-level database drivers within Windows. DBD::ADO requires the DBD::ODBC module to function. Using DBD::ADO adds an additional connectivity layer to the solution and should only be implemented if the application needs to take advantage of the OLE DB APIs. This driver can then be used to connect to and access data from any ADO data source.

Migrate to DBD::ADO
To migrate to DBD::ADO, follow these steps:
1. Install ActivePerl. ActivePerl is a Windows-based Perl distribution available from http://www.activestate.com. For detailed installation instructions, see Appendix D, "Installing Common Drivers and Applications."
2. Install DBI. The DBI module can be installed using Perl Package Manager (PPM), which is installed with ActivePerl.
3. Install DBD::ODBC using PPM.
4. Install the ODBC driver.
5. Configure the ODBC data source.
6. Install ADO. The DBD::ADO module requires Microsoft ADO version 2.1 or later to work reliably. ADO drivers are available from Microsoft as part of the Microsoft Data Access Components (MDAC), available at http://msdn.microsoft.com/data/technologyinfo/mdac. Download the DBD::ADO module files to a local directory (for example, c:\perlppd). The ADO module can be downloaded from http://ppm.activestate.com/PPMPackages/zips/. This module can be installed from Perl Package Manager (PPM). Type ppm at the command line to start the package manager, and then type the following commands:
PPM> rep add new c:\perlppd
PPM> install DBI
PPM> install DBD-ADO
PPM> quit

7. Modify the connection string to use ADO. No other changes should need to be made to the source code. A sample ADO connection string is shown in the following example:
use DBI;
my $dsn = "Provider=sqloledb;Trusted_Connection=Yes;Server=win2kdbp1;Database=hrapps";
$dbh = DBI->connect("dbi:ADO:$dsn", $user, $passwd);

Calling stored procedures is supported by DBD::ADO using the ODBC-style syntax call procedure_name(). An example of calling a procedure is
my $sth = $dbh->prepare("{call procedure_name(?)}");

Parameters can be either input or output parameters. Parameters are bound as shown in earlier examples.
8. Change all embedded SQL statements to T-SQL. This step is common to all migrations. Refer to Chapter 11, "Developing: Applications Oracle SQL and PL/SQL," for a detailed discussion on modifying Oracle SQL to be SQL Server-compliant.
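Binding an output parameter can be sketched as follows. This assumes the driver supports DBI's bind_param_inout() method; the procedure name get_employee_count and its parameter are hypothetical.

```perl
# Sketch: retrieving a value through an output parameter with DBD::ADO.
# Assumes driver support for DBI's bind_param_inout(); the procedure
# name (get_employee_count) is hypothetical.
use DBI;

my $dsn = "Provider=sqloledb;Trusted_Connection=Yes;Server=win2kdbp1;Database=hrapps";
my $dbh = DBI->connect("dbi:ADO:$dsn", "", "", { RaiseError => 1 });
my $sth = $dbh->prepare("{call get_employee_count(?)}");

my $count;                               # receives the output value
$sth->bind_param_inout(1, \$count, 50);  # 50 = maximum expected size
$sth->execute();
print "Employees: $count\n";
$dbh->disconnect();
```

The reference passed to bind_param_inout() is updated in place when execute() completes, so no separate fetch call is needed for the output value.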


13
Developing: Applications Migrating PHP
Introduction and Goals
This chapter explores the options available to maintain connectivity of PHP: Hypertext Preprocessor (PHP) applications as part of the Oracle to Microsoft SQL Server migration project. In PHP, the logic to interact with databases is provided by database-specific modules (or drivers). These modules have specialized functions to perform database-related tasks and can be replaced to accommodate a SQL Server backend with relative ease. PHP is an open source, server-side, cross-platform scripting language used to create dynamic Web pages. Like any other Common Gateway Interface (CGI) technology, PHP can provide dynamic, data-driven characteristics to static HTML forms or pages. As discussed in Chapter 2, "Envisioning Phase," there are four different strategies available for transitioning applications in an Oracle to SQL Server migration project:
- Interoperate the application with the UNIX environment.
- Port or rewrite the application to the Microsoft .NET platform.
- Port or rewrite the application to the Microsoft Win32 platform.
- Quick port using the Microsoft Windows Services for UNIX 3.5 platform.

Because of the unique characteristics of PHP, some of these strategies are more feasible than others. For example, because PHP can be ported to Windows, there is no need to rewrite the application for .NET or Win32 unless the existing solution's source code is not available. Because the available drivers for .NET are still in beta, the port has to target a Win32 environment. Also, because the application can run within the Windows environment, a quick port using Windows Services for UNIX is not necessary in most cases. Based on the available migration strategies, two scenarios can be developed to migrate PHP applications:
- Scenario 1: Interoperating PHP on UNIX with SQL Server. If the business requirements do not include eliminating the UNIX environment, an interoperation strategy can be implemented quickly. Few changes need to be made to the source code, and installing a new driver allows the PHP application to connect to a SQL Server database. If the migration is performed in phases, interoperation can be used as an interim step.
- Scenario 2: Porting the Application to Win32. PHP applications can also be ported to run natively on the Windows platform. As with interoperation, few changes need to be made to the source code.

Available options are discussed for each scenario in separate cases. Each case lists steps required to connect the PHP application to the SQL Server data source. Also included are discussions on the differences between the various Oracle and SQL Server functions. When applicable, sample source code is provided to illustrate changes that need to be made. Note If your PHP scripts use UNIX system calls extensively (such as frequent use of exec, passthru, popen, or the backtick (`)), porting them to Windows Services for UNIX/Interix may be a suitable option because Interix has more support for the desired system calls. Chapter 2, "Envisioning Phase," provides a more detailed discussion of when choosing Windows Services for UNIX would be more appropriate when moving to a Windows-based platform.

You will need the following to implement this option (porting PHP applications to Windows Services for UNIX/SQL Server):
- A port of PHP for Interix, downloadable from Interop Systems at http://www.interopsystems.com/tools/warehouse.htm, which includes the capability of using DBLib, CTLib, and ODBC. The PHP distribution will have to be compiled for a specific database connectivity library.
- A connectivity driver to the SQL Server database. This is provided by the port of FreeTDS on Interix, downloadable from Interop Systems at http://www.interopsystems.com/tools/db.htm. FreeTDS provides three connectivity options for the PHP application to connect to the SQL Server database. Two options go through the libraries DBLib and CTLib, which provide connectivity to both Sybase and SQL Server. The third is the ODBC driver, which can be used to access any ODBC database (including SQL Server).

This technology option, however, has not been fully tested as a part of development of this solution and therefore has not been detailed further.

PHP Modules
PHP contains a rich set of options for interfacing with common proprietary databases, including Oracle and SQL Server. These interfaces are available through PHP as internal functions or extensions. The database-related functions are grouped into modules, which must be compiled into the PHP installation before use. These modules are enabled by specifying them in the configuration file (php.ini).
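For example, a php.ini on Windows that enables both the Oracle and SQL Server modules would contain lines like the following (the DLL names match those cited later in this chapter):

```ini
; php.ini - enable the Oracle (ORA) and SQL Server (MSSQL) extensions
extension=php_oracle.dll
extension=php_mssql.dll
```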

Figure 13.1
PHP modules (functions) that can access Oracle

Figure 13.2
PHP modules (functions) that can access SQL Server

The following modules are commonly used with Oracle databases. Each is discussed in detail in this chapter:
Oracle (ORA) functions
ORA is a set of functions used to access Oracle databases. The php_oracle extension (extension=php_oracle.dll in the php.ini file) is used to enable these functions. The functions in this module can be identified by the ora_ prefix, such as ora_logon().
Oracle 8 (OCI8) functions
OCI8 is a set of functions based on the Oracle Call Interface (OCI) used to access Oracle databases, including the 8i and 9i releases. This module is more flexible than the standard Oracle functions. The php_oci8 extension (extension=php_oci8.dll in the php.ini file) is used to enable the functions. The functions in this module can be identified by the oci_ prefix, such as oci_connect(). Before PHP 5.0, these functions used an OCIxxx naming style, such as OCILogon(). Although the OCIxxx-style names are still available, they are now deprecated.
ODBC functions
A modified version of ODBC that provides a unified driver with native support for several databases is available for PHP. ODBC support is integrated with PHP and, unlike the modules needed for ORA and OCI8, no extensions need to be specified. ODBC has the advantage of allowing applications to be written that, in theory, are independent of the RDBMS. If the RDBMS changes, only the driver needs to change, not the code. ODBC function calls are handled by a driver manager that passes the calls to a specific driver for the target database for execution. The iODBC driver manager is commonly used with PHP to connect to databases on Windows. iODBC (http://www.iodbc.org) is an open source ODBC driver manager, and its SDK is maintained by OpenLink Software (http://www.openlinksw.com). iODBC calls use an odbc_ prefix; for example, odbc_connect() utilizes the ODBC driver manager. Another open source project has developed a driver manager called unixODBC (http://www.unixodbc.org). The unixODBC calls use the three-letter SQL prefix; for example, SQLConnect() uses the unixODBC driver manager.
Note The function phpinfo() can be called to reveal the extensions that have been loaded into PHP. A simple script containing the following line is sufficient to implement this call:
<?php phpinfo(); ?>

Scenario 1: Interoperating PHP on UNIX with SQL Server


It is possible to interoperate between PHP applications in a UNIX environment and SQL Server databases using Microsoft SQL Server functions or through an ODBC interface. Each API has its own function calls to connect to the database and execute statements. These APIs vary in support for common functions such as those related to cursors and statements.

Case 1: Interoperating a PHP Application Using ORA Functions


If the existing PHP solution uses ORA functions, the best alternative is to replace them with Microsoft SQL Server (MSSQL) functions. This modification requires replacing the current driver with one that supports the MSSQL functions. One driver that can be used is FreeTDS (http://www.freetds.org). FreeTDS is a set of libraries for UNIX and Linux that allows programs to interact with Microsoft SQL Server. Using the FreeTDS libraries, PHP's mssql_xxx() functions can be used to directly access a SQL Server database from a UNIX or Linux computer. FreeTDS is an open source implementation of the Tabular Data Stream (TDS) protocol used by SQL Server for its clients. The FreeTDS C libraries are freely available under the terms of the GNU GPL license. The protocol version also affects how database servers interpret commands. Microsoft SQL Server 2000 is known to behave differently with versions 4.2 and 8.0 of the TDS protocol; version 8.0 is recommended for compatibility with SQL Server tools. To interoperate a PHP application using ORA functionality, follow these steps:
1. Install the FreeTDS SQL Server libraries. Download the source code from http://www.freetds.org/software.html. The installation process is very similar to that detailed in Appendix D, "Installing Common Drivers and Applications," except that the installation occurs in UNIX instead of Windows Services for UNIX.
Note FreeTDS should be compiled with the --enable-msdblib option.
2. Recompile PHP to use MSSQL functions. Refer to http://www.freetds.org/userguide/php.htm for more information on compiling PHP.
Note PHP should be configured with the --with-mssql[=DIR] switch, where DIR is the location of the FreeTDS libraries. For example, from the PHP source directory, type:
./configure --with-mssql=/usr/local/freetds
3. Enable MSSQL functions. Before MSSQL functions can be used, they must be enabled and initialized by uncommenting or adding the following line in the initialization file php.ini:
extension=php_mssql.dll

4. Verify the installation. Verify that the module is installed by executing the phpinfo() function. MSSQL should appear in the list of modules.
5. Change all embedded SQL statements to T-SQL. Refer to Chapter 11, "Developing: Applications Oracle SQL and PL/SQL," for a detailed discussion on modifying Oracle SQL to be SQL Server-compliant.
6. Modify the PHP application code. The following sample source code illustrates the changes required when migrating PHP code from ORA functions to MSSQL functions. Pay special attention to the differences between these code sections.
Note MSSQL does not have functions to create and drop cursors. Implicit cursors are used to handle the returned results.

The following example is sample ORA-based PHP code that executes a simple query and retrieves results using a cursor operation:

<?php
// Connection parameters
$tns = "hrdb";
$username = "scott@$tns";
$password = "tiger";
// Create statement
$sql = "SELECT last_name, salary FROM hr.employee ORDER BY salary";
// Make connection
$conn = ora_logon($username, $password);
// Open cursor
$mycursor = ora_open($conn);
ora_parse($mycursor, $sql, 0);
// Execute query
ora_exec($mycursor);
// Display the information
while (ora_fetch($mycursor)) {
    echo "RESULT: " . ora_getcolumn($mycursor, 0) . ", "
        . ora_getcolumn($mycursor, 1) . "<br>\n";
}
// Close cursor
ora_close($mycursor);
// Close connection
ora_logoff($conn);
?>

The following example is the same code modified to MSSQL-based PHP code:
<?php
// Connection parameters
$sqlserver = "win2kss1";
$username = "scott";
$password = "tiger";
// Create statement
$sql = "SELECT last_name, salary FROM hr..employee ORDER BY salary";
// Make connection
$conn = mssql_connect($sqlserver, $username, $password);
// Select the database
$sqldb = mssql_select_db("hr", $conn);
// Execute query
$mycursor = mssql_query($sql);
// Display information
while ($row = mssql_fetch_array($mycursor)) {
    echo "RESULT: " . $row['last_name'] . ", " . $row['salary'] . "<br>\n";
}
// Close connection
mssql_close($conn);
?>

Database server information can also be referenced from freetds.conf, as shown in the following configuration information.
# freetds.conf
[SQLServer]
host = win2kss1
port = 1433
tds version = 8.0

To use the freetds.conf entry, the connection code will change as follows:
// Specify the path to the configuration file
putenv("FREETDSCONF=/usr/local/freetds.conf");
$conn = mssql_connect("SQLServer", $username, $password);

The following table provides a mapping between key ORA functions and the respective MSSQL functions. ORA functions that do not have a matching MSSQL function are denoted with a hyphen. A discussion of handling these disparate functions appears in the "Common Function Translation Issues" section later in this chapter.

Table 13.1: Mapping Between ORA and MSSQL Functions

Operation               Task                          ORA function                                     MSSQL function
Connection              Open connection               ora_logon()                                      mssql_connect()
                        Close connection              ora_logoff()                                     mssql_close()
                        Persistent connection         ora_plogon()                                     mssql_pconnect()
Cursor                  Open cursor                   ora_open()                                       -
                        Close cursor                  ora_close()                                      -
Parsing                 Parse statement               ora_parse()                                      -
Binding                 Bind variable                 ora_bind()                                       mssql_bind()
Execution               Execute statement             ora_exec()                                       mssql_execute(), mssql_query()
                        Prepare and execute           ora_do()                                         mssql_execute(), mssql_query()
Fetching                Fetch row                     ora_fetch(), ora_fetch_into()                    mssql_fetch_row(), mssql_fetch_array()
                        Fetch column                  ora_getcolumn()                                  mssql_fetch_field()
Transaction Management  Commit                        ora_commiton(), ora_commitoff(), ora_commit()    -
                        Rollback                      ora_rollback()                                   -
Error Handling          Error checking                ora_error(), ora_errorcode()                     -
Others                  Name of result column         ora_columnname()                                 mssql_field_name()
                        Size of result column         ora_columnsize()                                 mssql_field_length()
                        Datatype of result column     ora_columntype()                                 mssql_field_type()
                        Number of rows affected       ora_numrows()                                    mssql_num_rows()
                        Number of columns returned    ora_numcols()                                    mssql_num_fields()

Case 2: Interoperating a PHP Application Using OCI8 Functions


As with Case 1, "Interoperating a PHP Application Using ORA Functions," the best alternative for OCI8 functions is to replace them with MSSQL calls. Most of the steps are the same between this procedure and the procedure in the "Case 1: Interoperating a PHP Application Using ORA Functions" section of this scenario. As a result, these steps are not described in detail here. To interoperate using OCI8, follow these steps:
1. Install the FreeTDS SQL Server libraries.
2. Compile PHP to use the MSSQL APIs.
3. Enable MSSQL functions within php.ini.
4. Verify the installation using phpinfo().
5. Change all embedded SQL statements to T-SQL. Refer to Chapter 11, "Developing: Applications Oracle SQL and PL/SQL," for a detailed discussion on modifying Oracle SQL to be SQL Server-compliant.
6. Modify application code to reflect the new drivers. The following sample source code illustrates the changes required in PHP code when replacing OCI8 functions with MSSQL functions.
The following example is the original OCI8-based PHP code:
<?php
// Create connection
$conn = OCILogon("scott", "tiger", "hrdb");
if ( !$conn ) {
    $err = OCIError();
    echo "Unable to connect " . $err['message'];
    die();
}
// Insert SQL statement without binding
$sql1 = OCIParse($conn, "INSERT INTO countries VALUES ('US', 'United States of America', 2)");
OCIExecute($sql1);
// Insert SQL statement with binding
$var1 = 'MX';
$var2 = 'Mexico';
$var3 = 3;
$sql2 = OCIParse($conn, "INSERT INTO countries VALUES (:bind1, :bind2, :bind3)");
OCIBindByName($sql2, ":bind1", $var1);
OCIBindByName($sql2, ":bind2", $var2);
OCIBindByName($sql2, ":bind3", $var3);
OCIExecute($sql2);
echo OCIRowCount($sql2) . " rows inserted. <br />";
// Commit
OCICommit($conn);
// Close connection
OCILogoff($conn);
?>

The following example is the same PHP source code modified to use the MSSQL drivers. Note the difference in the commit function.
<?php
// Create connection
$conn = mssql_connect("win2kss1", "scott", "tiger");
if ( !$conn ) {
    echo "Unable to connect - $conn";
    die();
}
// Begin transaction
$sql = "begin tran";
$result = mssql_query($sql);
// Insert SQL statement without binding
$sql = "INSERT INTO hr.dbo.countries VALUES ('US', 'United States of America', 2)";
mssql_query($sql);
$result = mssql_query("SELECT @@ROWCOUNT");
echo mssql_rows_affected($conn) . " rows inserted. <br />";
$sql = "INSERT INTO hr.dbo.countries VALUES ('MX', 'Mexico', 3)";
mssql_query($sql);
// Commit transaction
$sql = "commit tran";
$result = mssql_query($sql);
// Close connection
mssql_close($conn);
?>

Table 13.2 maps the equivalent calls between OCI8 and MSSQL functions. A hyphen is used when a similar function does not exist in MSSQL. A discussion on handling these disparate functions is available in the "Common Function Translation Issues" section later in this chapter.

Table 13.2: Mapping Between OCI8 and MSSQL Functions

Operation               Task                          OCI8 function           MSSQL function
Connection              Open connection               ociLogon()              mssql_connect()
                        Close connection              ociLogoff()             mssql_close()
                        Persistent connection         ociPLogon()             mssql_pconnect()
Cursor                  Open cursor                   ociNewCursor()          -
                        Close cursor                  ociFreeCursor()         -
Statement               Free statement                oci_free_statement()    mssql_free_statement()
Parsing                 Parse statement               ociParse()              -
Execution               Execute statement             ociExecute()            mssql_execute(), mssql_query()
Fetching                Fetch row                     ociFetch()              mssql_fetch_row()
Transaction Management  Commit                        ociCommit()             -
                        Rollback                      ociRollback()           -
Error Handling          Error checking                ociError()              -
Others                  Name of result column         ociColumnName()         mssql_field_name()
                        Size of result column         ociColumnSize()         mssql_field_length()
                        Datatype of result column     ociColumnType()         mssql_field_type()
                        Number of rows affected       ociRowCount()           mssql_num_rows()
                        Number of columns returned    ociNumCols()            mssql_num_fields()

Case 3: Interoperating a PHP Application Using ODBC Functions


While migrating from Oracle to SQL Server, no changes are needed to the ODBC calls themselves; the availability of SQL Server-specific drivers for the UNIX environment and the capabilities of the driver manager to work with these drivers define the changes that must be made. The best choice for interoperating with SQL Server is offered by the FreeTDS driver, which is supported by the iODBC driver manager of the source environment. Other options available include Easysoft's ODBC-ODBC Bridge (OOB) and OpenLink's Universal Data Access drivers. To interoperate using ODBC functions, follow these steps:
1. If needed, install ODBC software. It is assumed that the iODBC driver manager is already installed and is being used to connect to Oracle. If this is not true, then either the iODBC or the unixODBC driver manager must be installed. The iODBC driver manager can be downloaded from http://www.openlinksw.com/iodbc/; the unixODBC driver manager can be downloaded from http://www.unixodbc.org/. Installation instructions for both are available in Appendix D, "Installing Common Drivers and Applications."
2. Install the SQL Server driver. The most popular ODBC driver for connecting to SQL Server from a UNIX operating system is FreeTDS, an open source implementation of TDS that allows UNIX-based programs native access to SQL Server. OpenLink also offers drivers for SQL Server that use the FreeTDS implementation of TDS. Installation instructions are located in Appendix D, "Installing Common Drivers and Applications." An ODBC-ODBC bridge to SQL Server is available from Easysoft; for detailed instructions on enabling ODBC support under PHP, refer to http://www.easysoft.com/products/2002/apache.phtml. OpenLink (http://www.openlinksw.com) also offers single-tier and multitier Universal Data Access drivers.
3. Recompile PHP to use ODBC functions. To use the SQL Server driver, PHP must be recompiled with ODBC functions. This procedure varies based on the ODBC driver that is installed. Refer to the following resources for more information:
FreeTDS: http://www.freetds.org/userguide/php.htm
Easysoft: http://www.easysoft.com/products/2002/apache.phtml
OpenLink: http://docs.openlinksw.com

4. Configure the Data Source Name (DSN). ODBC needs a data source to connect to the database. The DSN is generally defined in an odbc.ini file that the driver manager uses to load the ODBC driver. When using FreeTDS, the DSN can be configured by modifying the odbc.ini file. A SQL Server DSN should be configured similarly to the following example:
[ODBC Data Sources]
SS_HR_DB=Sample MS SQLServer

[SS_HR_DB]
Driver=/usr/local/freetds/lib/libtdsodbc.so
Description=SQL Server 2000
Server=win2kdbp1
Port=1433
Database=hrapp
LogonID=daveb
Password=cougar

5. Check the connection. In iODBC, the command-line query program odbctest can be used to test DSN connectivity. If using unixODBC, the isql utility can be used instead.
6. Modify the application source code to reflect the new data source. Because ODBC is a generic interface to any database, no changes are required in the application code except those needed to connect to the SQL Server database. The proper DSN has already been set up in an earlier step. The following PHP code snippet uses the ODBC module to connect to an Oracle data source, and it should be similar to code in the existing solution:
<?php
// Create connection
$conn = odbc_connect('ORA_HR_DB', '', '');
if (!$conn) {
    exit("Unable to connect to database: $conn");
}
?>

When migrated to connect to SQL Server, the connection string will reflect the changes that follow.
<?php
// Create connection
$conn = odbc_connect('SS_HR_DB', '', '');
if (!$conn) {
    exit("Unable to connect to database: $conn");
}
?>

7. Change all embedded SQL statements to T-SQL. Refer to Chapter 11, "Developing: Applications Oracle SQL and PL/SQL," for a detailed discussion on modifying Oracle SQL to be SQL Server-compliant.
Table 13.3 lists the ODBC function calls that are used in PHP code to work with databases.

Table 13.3: ODBC Function Calls in PHP

Operation               Task                          ODBC function
Connection              Open connection               odbc_connect()
                        Open persistent connection    odbc_pconnect()
                        Close connection              odbc_close()
                        Get connection information    odbc_data_source()
Parsing                 Prepare statement             odbc_prepare()
Cursor                  Get cursor name               odbc_cursor()
Execution               Execute statement             odbc_execute()
                        Prepare and execute           odbc_exec(), odbc_do()
Fetching                Fetch row                     odbc_fetch_row(), odbc_fetch_array(), odbc_fetch_into(), odbc_fetch_object()
                        Fetch column                  odbc_result()
Transaction Management  Commit                        odbc_commit()
                        Rollback                      odbc_rollback()
Error Handling          Error checking                odbc_error()
                        Get last error                odbc_errormsg()
Others                  Name of result column         odbc_field_name()
                        Name of table column          odbc_columns()
                        Size of result column         odbc_field_len()
                        Datatype of result column     odbc_field_type()
                        Number of rows affected       odbc_num_rows()
                        Number of columns returned    odbc_num_fields()
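As a sketch of how the prepare, execute, and fetch calls above fit together (the SS_HR_DB DSN comes from the earlier odbc.ini example; the employee table, credentials, and ID value are placeholders):

```php
<?php
// Prepared statement through the ODBC module. Parameters are supplied
// positionally in the array passed to odbc_execute().
$conn = odbc_connect('SS_HR_DB', 'username', 'password');
if (!$conn) {
    exit("Unable to connect to database");
}
$stmt = odbc_prepare($conn, "SELECT last_name FROM hr.dbo.employee WHERE employee_id = ?");
odbc_execute($stmt, array(101));
while (odbc_fetch_row($stmt)) {
    // odbc_result() fetches a single column from the current row
    echo odbc_result($stmt, 'last_name') . "<br>\n";
}
odbc_close($conn);
?>
```

Because the parameter marker is bound at execution time rather than interpolated into the string, the same prepared statement can be re-executed with different values.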

Common Function Translation Issues


Though SQL Server and Oracle share a great deal of functionality, some database operations differ. This section discusses some of the differences between coding with PHP extensions for Oracle and SQL Server.

Handling Transactions
The Oracle PHP extension module supports transactions using the following function calls:
ora_commit()
ora_commitoff()
ora_commiton()
ora_rollback()

The OCI8 PHP extension module supports transactions using the following function calls:
OCICommit()
OCIRollBack()

The MSSQL API does not provide commit or rollback functions. The same effect can be achieved using the following example code:

// Here $db is a PHP variable for the connection object
// To begin a transaction
$sql = "begin tran";
$result = mssql_query($sql, $db);
// To commit a transaction
$sql = "commit tran";
$result = mssql_query($sql, $db);
// To roll back a transaction
$sql = "rollback tran";
$result = mssql_query($sql, $db);

Migrating Cursors
Oracle requires using cursors with SELECT statements, regardless of the number of rows requested from the database. In SQL Server, a SELECT statement that is not associated with a cursor returns rows to the client as a default result set. This is an efficient way to return data to a client application. When porting a PL/SQL procedure from Oracle, first determine whether cursors are needed to perform the same function in Transact-SQL. If the cursor is used only to return a set of rows to the client application, use a non-cursor SELECT statement in Transact-SQL to return a default result set. If the cursor is used to load data a row at a time into local procedure variables, use cursors in Transact-SQL. Unlike ORA and OCI8 (which have methods for explicit use of cursors), MSSQL uses implicit cursors. Functions such as mssql_query() and mssql_execute() associate a handle with the returned result, which can be traversed like an explicit cursor. The following code shows the use of a result set to achieve the same functionality as a cursor:
$conn = mssql_connect("myhost", "user", "pwd");
mssql_select_db("hr", $conn);
// Create the select statement
$sqlquery = "SELECT companyName FROM Customers;";
// Run the query and retrieve the result set
$results = mssql_query($sqlquery);
// Fetch rows from the result set for display
while ($row = mssql_fetch_array($results)) {
    echo $row['companyName'] . "<br>\n";
}

SQL Server cursors may also be used for one of the following reasons:
Updateable cursors
Scrollable, read-only cursors

Note ODBC Cursor Library cursors build on top of default result sets. Be sure to fetch to the end of the result set quickly to free up locks. When you use server cursors, use SQLExtendedFetch to fetch in blocks of rows instead of a single row at a time. This is equivalent to array-type fetching in Oracle and saves round trips to the server. For more information on SQLExtendedFetch, refer to http://msdn.microsoft.com/library/default.asp?url=/library/en-us/odbc/htm/odbcsqlextendedfetch.asp. Use default result sets when the application will fetch all rows at one time; use server cursors when it will not.

Solution Guide for Migrating Oracle on UNIX to SQL Server on Windows


Connection Pooling
While PHP does not offer connection pooling, persistent Oracle connections can be opened with the following API function calls:
- Oracle API: ora_plogon()
- OCI8 API: ociPLogon()

To interoperate with SQL Server, these functions should be replaced by one of the following functions, depending upon the API used:
- ODBC API: odbc_pconnect()
- MSSQL API: mssql_pconnect()

Unlike connection pooling, a persistent connection is local to a process and cannot be shared between different processes.

Stored Procedures
Stored procedures can be called through the MSSQL API as well as the ORA and OCI8 APIs. The following example shows a sample stored procedure call using MSSQL functions.
// Create query for stored procedure in code
// Stored procedure name: CustOrdersDetail from the Northwind database
$stmt = mssql_init("CustOrdersDetail", $conn);
mssql_select_db("Northwind", $conn);
// Parameter values
$orderid = 1;
// Bind parameter value to the stored procedure
mssql_bind($stmt, "@OrderID", $orderid, SQLINT4);
// Execute SQL statement and capture results
$result = mssql_execute($stmt);
// Fetch return data and place into array for access
while ($row = mssql_fetch_array($result)) {
    // Assign variables
    $unitprice = $row["UnitPrice"];
    echo "Unit Price" . $unitprice;
    $quantity = $row["Quantity"];
    echo "Quantity" . $quantity;
    $discount = $row["Discount"];
    echo "Discount" . $discount;
    $price = $row["ExtendedPrice"];
    echo "Price" . $price;
}


Developing: Applications Migrating PHP

Scenario 2: Porting the Application to Win32


Because PHP is a cross-platform language, porting applications from a UNIX environment to Windows is not much more complex than the steps required for interoperation. The strategy, steps, and actual changes required are very similar to those discussed for interoperation. The only additional step is installing PHP and the application code in the Windows environment.

Case 1: Porting a PHP Application using ORA Functions


Most of the following steps are covered in more detail in the "Case 1: Interoperating a PHP Application Using ORA Functions" section of Scenario 1. To port the application using ORA functions, follow these common steps:
1. Install PHP on the target Windows server. For more information on installing PHP, refer to http://uk2.php.net/manual/en/install.php.
2. Install the SQL Server libraries. Instead of the FreeTDS libraries, the Windows-based libraries offered by Microsoft in the SQL Server client tools can be installed on the system where PHP is installed. The client tools can be installed from the SQL Server CD, or ntwdblib.dll can be copied from \winnt\system32 on the server to \winnt\system32 on the target Windows server. However, copying ntwdblib.dll will only provide access; configuration of the client will require installation of all the tools.
3. Compile PHP to use the MSSQL module.
4. Enable MSSQL functions in php.ini.
5. Verify the installation using phpinfo().
6. Transport the source code to the target server.
7. Modify the application's source code to use the MSSQL API instead of ORA functions.

Case 2: Porting a PHP Application Using OCI8 Functions


Most of the steps needed to migrate this application are discussed in more detail in the "Case 2: Interoperating a PHP Application Using OCI8 Functions" section of Scenario 1. To port the application using OCI8 functions, follow these common steps:
1. Install PHP on the target Windows server.
2. Install the SQL Server libraries from the SQL Server client tools.
3. Compile PHP to use the MSSQL module.
4. Enable MSSQL functions in php.ini.
5. Verify the installation using phpinfo().
6. Transport the source code to the target Windows server.
7. Modify the application code to use the MSSQL API instead of OCI8 functions.


Case 3: Porting a PHP Application Using ODBC Functions


Most of the steps listed as follows are discussed in more detail in the "Case 3: Interoperating a PHP Application Using ODBC Functions" section of Scenario 1. To port the application using ODBC functions, follow these common steps:
1. Install PHP on the target Windows server.
2. Install the ODBC software. For installation instructions, refer to http://support.microsoft.com/default.aspx?scid=kb;en-us;313008.
3. Install the SQL Server driver. In addition to the drivers mentioned for interoperation, the Microsoft ODBC driver for SQL Server may also be used. This driver is available on the SQL Server client CD. Installation instructions are available at http://msdn.microsoft.com/library/default.asp?url=/library/en-us/howtosql/ht_install_029l.asp.
4. Compile PHP to use ODBC in the php.ini file.
5. Configure the DSN to connect with the SQL Server database.
6. Check the connection using command-line utilities.
7. Transport the source code to the target Windows environment.
8. Modify the application source code to use the ODBC API.


14
Developing: Applications Migrating Java
Introduction and Goals
This chapter contains a detailed discussion of changes that must be made to Java applications when the database is migrated from Oracle to Microsoft SQL Server. At the conclusion of this chapter, the Java application should be able to successfully connect to the SQL Server database that was migrated from Oracle. The solution can then be tested. Java is a portable language. The capability of the Java programming language to interact with an RDBMS such as Oracle or SQL Server is provided by the Java Database Connectivity (JDBC) API. Applications written in Java can execute SQL statements, retrieve results, and propagate changes back to the underlying data source using the JDBC API. The combination of the Java platform and the JDBC API allows a program to operate on a number of platforms, and with any number of SQL-supporting databases, without modifying the source code. When migrating from an Oracle to a SQL Server environment, some alterations must still be made to the application. These basic changes will need to be made whether the Java application interoperates with SQL Server from a UNIX environment or is ported to Windows. As discussed in the "Define the Solution Concept" section of Chapter 2, "Envisioning Phase," there are four different strategies available for transitioning applications in an Oracle to SQL Server migration project. The strategies are:
- Interoperate the application with UNIX
- Port or rewrite the application to the Microsoft .NET platform
- Port or rewrite the application to the Microsoft Win32 platform
- Quick port using the Microsoft Windows Services for UNIX 3.5 platform

Because of some of the unique characteristics of Java, some of these strategies are more logical than others. For instance, because Java can be ported to Windows, there is no need to rewrite the application in the .NET framework or for the Win32 environment.



Based on the available migration strategies, two scenarios can be developed to migrate the Java application. These scenarios are:
- Scenario 1: Interoperating Java on UNIX with SQL Server. If the business requirements do not include eliminating the UNIX environment, an interoperation strategy can be implemented quickly. Few, if any, changes need to be made to the source code, though it may be necessary to install a new driver to allow the Java application to connect to a SQL Server database. If the migration is performed in phases or for multiple applications, interoperation can be used as an interim step.
- Scenario 2: Porting the Application to Win32. Java applications can also be ported to run natively on the Windows platform. As with interoperation, few changes need to be made to the source code.

Scenario 1: Interoperating Java on UNIX with SQL Server


Interoperation between a UNIX-based Java application and a SQL Server database is possible because of the availability of a driver that interfaces between these technologies. The JDBC driver provides the cross-RDBMS connectivity needed.

Case 1: Interoperating a Java Application Using the JDBC Driver


Because of the construction of the JDBC API, which exposes a common set of methods that are not affected by a change of source database, this type of migration is usually a simple process. Most changes relate to connectivity. To interoperate a Java application using the existing JDBC driver, follow these steps:
1. Evaluate the current driver. Before beginning the migration process, evaluate the existing JDBC driver and its capability to operate with SQL Server. Some considerations include:
- Will the current JDBC driver interoperate with SQL Server? If this information is not known, check with the vendor. Sun Microsystems also maintains a database of SQL Server-compliant JDBC drivers. For more information, refer to http://servlet.java.sun.com/products/jdbc/drivers.
- If the current JDBC driver will interoperate with SQL Server, what are the performance characteristics of the driver in the SQL Server environment? Try to perform prototype tests of the driver to ensure that the level of performance is acceptable.
- What features of the driver are utilized by the Java application, and are these supported for SQL Server? For more information on specific Java extensions used by your JDBC driver, refer to your vendor documentation.

- What type of JDBC driver is required (Type 1, 2, 3, or 4)? If a new driver is needed, there are several JDBC drivers commercially available, each with slightly different features and characteristics. Some commonly used JDBC drivers include:
- JSQLConnect from JNetDirect, which is available at http://www.jnetdirect.com/products.php?op=jsqlconnect.


- SQL Server 2000 Driver for JDBC. Microsoft offers this native driver, which works on both UNIX and Windows. This driver is available at http://www.microsoft.com/downloads/details.aspx?FamilyId=07287B11-0502-461A-B138-2AA54BFDC03A&displaylang=en.

2. Determine how the application connects to the data source. Connections can be achieved using the DriverManager class or with a datasource implementation. If connectivity is obtained using the DriverManager class, two changes are needed to connect with the SQL Server database:
- Loading the JDBC driver
- Making the connection
The following example shows these changes:
// Load the driver
Class.forName("DriverClassName");

// Make the connection
String url = "jdbc:odbc:DSN";
String user = "sUser";
String password = "sPwd";
Connection con = DriverManager.getConnection(url, user, password);

// ...

// Close the connection
con.close();

In this example code, the URL uses the syntax jdbc:<subprotocol>:<subname>. Using the connection object con, you can create JDBC statements and pass them to the RDBMS. If connectivity is obtained through a datasource, the application server should be appropriately configured. Before a datasource object can be used, it must be deployed. The following list describes the deployment steps:
1. Create an instance of the DataSource class.
2. Set the datasource properties.
3. Register the datasource with a naming service that uses the Java Naming and Directory Interface (JNDI) API.
Datasource deployment is usually performed by the system administrator. A JDBC driver that supports the JDBC 2.0 API will provide different types of datasource implementations, which can be used for connection pooling and distributed transactions. To obtain connectivity, the Java application performs a lookup of the logical name and retrieves a connection by passing the user name and password to the getConnection() method. An example of this method follows:
Context ctx = new InitialContext();
DataSource ds = (DataSource)ctx.lookup("jdbc/dbServer");
Connection con = ds.getConnection("sUser", "sPwd");

3. Configure the application server. If third-party JDBC drivers are used to connect to SQL Server, the drivers must be set in the CLASSPATH variable so the driver classes can be loaded. The JDBC driver installed for SQL Server will install a driver jar file, which contains the driver classes for database interaction. If the jar file is missing from the classpath, the application will throw a 'class not found' exception. The CLASSPATH must be set based on the application's architecture; the syntax of this setting can vary between applications, servlets, JSPs, EJBs, and applets.
a. For applications run directly from the operating system prompt, make sure that the driver can be found in the system's CLASSPATH. This can be done by moving the file to a location already listed in the classpath or by adding the driver's current location to the CLASSPATH.
b. For applications run within an IDE, the driver classpath has to be set in the vendor's IDE classpath. Refer to your vendor's IDE documentation.
c. For servlets and JSPs run within a servlet/JSP engine such as Tomcat, the classpath can be set in the vendor's classpath setup screens or by copying the driver jar file into the LIB folder of the engine's installation.
d. For EJBs, the JDBC driver jar file should also be set in the EJB vendor's classpath.
e. For applets that run within a Web browser, copy the JDBC driver jar to the Web server root and specify the driver jar file in the applet's HTML archive attribute. The following HTML code example shows this option used on an applet tag.
<applet ... archive="JSQLConnect.jar">

4. If the SQL statements passed to the JDBC methods use any Oracle-specific syntax, these items will need to be updated to reflect T-SQL syntax. Migration of PL/SQL to T-SQL is covered in detail in Chapter 11, "Developing: Applications Migrating Oracle SQL and PL/SQL."
5. The metadata information returned by the metadata methods should refer to SQL Server and the new JDBC driver. ResultSetMetaData provides information about the ResultSet, such as the number of columns returned, an individual column's suggested display size, column names, and column types. If any of these metadata are used in the application, they have to be validated for the integrity of the return values. To discover information about a database, a DatabaseMetaData object must be obtained. The getMetaData method on the Connection object returns a DatabaseMetaData object.
DatabaseMetaData dbmd = con.getMetaData();

ResultSetMetaData provides information about the ResultSet. Use the getMetaData method on the ResultSet to get the ResultSetMetaData.
ResultSetMetaData rsmd = rs.getMetaData();

6. After the application changes are complete, create a new release build and fully test the application.


Scenario 2: Porting the Application to Win32


Java is portable, so when the application is migrated to Windows, the Java language features remain the same and changes should be minimal. Most of the changes listed in the following procedure are similar to those performed under the interoperation scenario described in the "Scenario 1: Interoperating Java on UNIX with SQL Server" section earlier in this chapter. To port the application to Win32, follow these common steps:
1. Install Java on the Windows server. If the migration path includes porting the Java application to the Windows environment, the Java application server should be installed on Windows. Ensure that the hardware satisfies the installation prerequisites for the application server.
2. Determine whether the existing JDBC driver will perform the needed functions or whether a new JDBC driver will be required.
3. Configure the application server for use with Windows. Update the path and classpath information.
4. Test the installation of the application server using the test utility provided with the application server.
5. Determine how the application connects to the data source. SQL Server has two types of authentication: Windows authentication and SQL Server authentication. When the application exists on the UNIX platform, the Windows authentication method is not possible without the installation of additional software such as the Vintela Authentication Services (VAS) product from Vintela (http://www.vintela.com/products/vas/). With the application migrated to the Windows platform, however, the Windows authentication feature of SQL Server can be used. Windows authentication is specified using the connection property trustedAuthentication, which may be set in either a driver manager connection string or a datasource. There is no need to specify the user and password properties when using trusted authentication. Windows authentication can be used by JDBC clients on Windows only.
6. Move the Java code from UNIX to Windows. All the files and folders that the Java application interacts with should be migrated to Windows. Make sure that the paths in the Java application and the configuration files point to the corresponding folders in the Windows file system. If the Java application uses any native code written in other languages, that code also has to be migrated to Windows.
7. Oracle-specific SQL and stored programs will need to be rewritten in T-SQL syntax. Migration of PL/SQL to T-SQL is covered in detail in Chapter 11, "Developing: Applications Migrating Oracle SQL and PL/SQL."
8. The metadata information returned by the metadata methods should refer to SQL Server and the new JDBC driver.
9. After the application changes are complete, create a new release build and fully test the application.


15
Developing: Applications Migrating Python
Introduction and Goals
This chapter contains a detailed discussion of changes that must be made to existing Python applications to work with Microsoft SQL Server. At the conclusion of this chapter, the Python application should be able to successfully connect to the SQL Server database that was migrated from Oracle. The solution can then be tested. As discussed in the "Define the Solution Concept" section of Chapter 2, "Envisioning Phase," there are four different strategies available for transitioning applications in an Oracle to SQL Server migration project. The strategies are:
- Interoperate the application with UNIX
- Port or rewrite the application to the Microsoft .NET platform
- Port or rewrite the application to the Microsoft Win32 platform
- Quick port using the Microsoft Windows Services for UNIX 3.5 platform

Python is a portable, platform-independent, general-purpose language with support for writing database client applications. Database capabilities are modularized and can be augmented through additional APIs. Python database modules that are based on the Database API (DB-API) specification can be used to access relational databases, including SQL Server and Oracle. As long as the database module used to access the Oracle database adheres to the DB-API specification, porting to SQL Server is straightforward and can be done with minimal changes. If the existing database driver does not meet the DB-API specification, the driver will need to be replaced and configured. Because of the cross-platform capabilities and the use of modular database drivers, some of the migration strategies are more feasible than others. For example, because Python can be ported to Windows, there is no need to rewrite the application in the .NET framework or for the Win32 environment. Also, because the application can run within the Windows environment, a quick port using Windows Services for UNIX is not necessary in most cases. Based on the available migration strategies, two scenarios can be developed to migrate the Python application. These scenarios include:


- Scenario 1: Interoperating Python on UNIX with SQL Server. If the business requirements do not include eliminating the UNIX environment, an interoperation strategy can be implemented quickly. Few changes need to be made to the source code, and installing a new driver allows the Python application to connect to a SQL Server database. Interoperation can also be used as an interim step if the migration is performed in phases.
- Scenario 2: Port the Python Application to Win32. Python applications can also be ported to run natively on the Windows platform. As with interoperation, few changes need to be made to the source code. These changes are usually related to connectivity issues.

Note: If your Python applications use UNIX system calls extensively (such as frequently calling exec), porting them to Windows Services for UNIX/Interix may be a suitable option because Interix has more support for the desired system calls. Chapter 2, "Envisioning Phase," provides a more detailed discussion of when choosing Windows Services for UNIX would be more appropriate when moving to a Windows-based platform.

You will need the following to implement this option (porting Python applications to SFU/SQL Server):
- A port of Python for Interix, which has been made available by Interop Systems and can be downloaded from http://www.interopsystems.com/tools/warehouse.htm.
- A connectivity driver to the SQL Server database. This is provided by the port of FreeTDS on Interix and is downloadable from Interop Systems at http://www.interopsystems.com/tools/db.htm. FreeTDS provides two connectivity options for Python applications to connect to the SQL Server database: a library called CTLib and an ODBC driver.
- If you choose the CTLib option for your database connectivity driver, you will need the Sybase connectivity module for your Python application to make the appropriate calls to CTLib. The source code for this module can be obtained from Object Craft at http://www.object-craft.com.au/projects/sybase/. This solution is not available pre-compiled from Interop Systems, but it is a simple port for a UNIX developer.
- If you choose ODBC for your database connectivity, you will need an additional interface for the Python application to make ODBC calls. This is available as a commercial package called mxODBC and can be obtained from http://www.egenix.com.

- If you use the ODBC driver, you will also need an ODBC driver manager. Two different ODBC driver managers, iODBC and unixODBC, are available for Windows Services for UNIX from http://www.interopsystems.com/tools/warehouse.htm.


This technology option (porting Python applications to SFU), however, has not been fully tested as a part of development of this solution and therefore has not been detailed further.

Scenario 1: Interoperating Python on UNIX with SQL Server


Two common modules used for connecting a Python application to an Oracle database in the UNIX environment are:
- DCOracle2
- mxODBC

DCOracle2 (http://www.zope.org/Products/DCOracle/) does not support SQL Server, but mxODBC (http://www.egenix.com/) supports the ODBC interface and can be used with SQL Server. If the existing application uses DCOracle2, this interface will need to be replaced to allow connectivity with SQL Server. Because ODBC is not database-specific, only minor modifications are needed to connect to the migrated database.
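Because every DB-API 2.0 module exposes the same connect()/cursor() protocol, the database-specific parts of an application can be confined to the module reference and the connection arguments. The following sketch illustrates this, using sqlite3 from the standard library as a stand-in for a DB-API module such as mxODBC; the function name fetch_all is illustrative and not part of any of these packages.

```python
import sqlite3

def fetch_all(db_module, connect_args, query, params=()):
    """Run a query through any DB-API 2.0 module and return all rows.

    Only db_module and connect_args are database-specific; the cursor
    protocol (cursor/execute/fetchall/close) is identical across modules.
    """
    conn = db_module.connect(*connect_args)
    try:
        cur = conn.cursor()
        cur.execute(query, params)
        return cur.fetchall()
    finally:
        conn.close()

# sqlite3, like mxODBC, uses qmark ("?") parameter markers, so the same
# call shape carries over once the module and connect arguments change.
rows = fetch_all(sqlite3, (":memory:",), "SELECT 1 + 1")
```

With mxODBC the same pattern would apply, although its connection entry points (such as DriverConnect()) take a DSN-style string rather than a file path.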

Case 1: Interoperating Using the mxODBC Module


To interoperate a Python-based application that uses the DCOracle2 module with SQL Server, follow these steps.
Note that these steps assume that the DCOracle2 module will be replaced with the mxODBC module.

1. Install the ODBC driver. For mxODBC to connect to SQL Server, an ODBC driver manager must be installed. Two available driver managers are:
- iODBC, available at http://www.iodbc.org/.
- DataDirect ODBC for UNIX platforms, available at http://www.datadirect.com/techzone/use_case/odbc-unix-sqlserver-main/index.ssp. For detailed installation instructions, refer to http://www.datadirect.com/techzone/use_case/odbc-unix-sqlserver-install/index.ssp.

2. Install the mxODBC module based on the Python version being used. For installation instructions, refer to http://www.egenix.com/files/python/mxODBCZope-DA.html#Installation.
3. Configure the ODBC driver to work with mxODBC through the driver manager.
4. Create a SQL Server data source. To function, ODBC needs a data source to connect to the database. The Data Source Name (DSN) is generally defined in an odbc.ini file that is used by the driver manager. The DSN is used by the driver manager to load the ODBC driver. iODBC offers a graphical user interface to set up the DSN; complete instructions are available at http://www.iodbc.org/index.php?page=docs/odbcstory.


The DSN can also be configured by manually modifying the odbc.ini file. The following example file uses a SQL Server DSN.
[ODBC Data Sources]
SS_HR_DB=Sample MS SQLServer

[SS_HR_DB]
Driver=/opt/odbc/lib/mxODBC.so
Description=SQL Server 2000
Database=hrapp
LogonID=daveb
Password=cougar
Address=win2kdbp1,1433
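Because odbc.ini follows standard INI syntax, a few lines of Python can sanity-check a DSN entry before the application first attempts to connect. This is an illustrative helper (dsn_entry is not part of any ODBC package); note that configparser lowercases keys by default.

```python
import configparser

# Sample odbc.ini text mirroring the example entry above.
ODBC_INI = """
[ODBC Data Sources]
SS_HR_DB=Sample MS SQLServer

[SS_HR_DB]
Driver=/opt/odbc/lib/mxODBC.so
Description=SQL Server 2000
Database=hrapp
Address=win2kdbp1,1433
"""

def dsn_entry(ini_text, dsn):
    """Return the settings of one DSN section as a dict (keys lowercased)."""
    parser = configparser.ConfigParser()
    parser.read_string(ini_text)
    return dict(parser[dsn])

# Confirm that the DSN section contains the expected database and address.
settings = dsn_entry(ODBC_INI, "SS_HR_DB")
```

In practice the text would be read from the odbc.ini file the driver manager actually uses, rather than from an inline string.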

5. Test the connectivity. iODBC contains a utility named odbctest that can be used to test DSN entries and interact with the database by connecting and issuing queries directly, without any code. The mxODBC package includes a test script that can also be used to verify database connectivity. To perform the test, execute the following command:
python mx/ODBC/Misc/test.py
6. Modify the application to use mxODBC instead of the existing DCOracle2 API.
Note The examples in this section use the iODBC driver. Syntax may vary slightly based on the ODBC driver that is used in your solution.
There are several minor changes that may need to be made to the existing Python application to allow connectivity with the SQL Server database. Common items that should be modified include:
Import statements. The import statement, generally found at the head of the application code, should be modified to include the new database module. If the application currently uses the DCOracle2 module, the import statement is as follows:
import DCOracle2

This entry should be modified to allow for the mxODBC module. The import statement should be changed as follows:
import mx.ODBC.iODBC

Connection objects. The connection object will also need to be modified to point to the new data source. The current entry should be similar to the following example:
db = DCOracle2.Connect('scott/tiger@orahrdb')

This entry should be modified to reflect the new data source. In the following example, the DriverConnect() API is used to pass additional information to SQL Server.
db = mx.ODBC.iODBC.DriverConnect('DSN=sqlserverDSN;UID=scott;PWD=tiger')

This iODBC string will have to be modified based on the driver manager in use. Other connection methods, such as ODBC() and connect(), are also supported by mxODBC. The advantage of DriverConnect() is that it allows more configuration information, such as the log file name, to be passed as part of the connection string.
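Since DriverConnect() takes a single semicolon-delimited string, the keyword/value pairs can be assembled programmatically rather than hard-coded. The helper below is a hypothetical convenience (build_conn_string is not an mxODBC API); it relies on Python 3.7+ keyword-argument ordering.

```python
def build_conn_string(**params):
    """Assemble an ODBC connection string: KEY=value;KEY=value;..."""
    return ";".join("%s=%s" % (k, v) for k, v in params.items())

conn_str = build_conn_string(DSN="sqlserverDSN", UID="scott", PWD="tiger")
# db = mx.ODBC.iODBC.DriverConnect(conn_str)  # as in the example above
```

Extra keywords accepted by the driver (such as a log file setting) can be appended the same way without changing any call sites.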


Cursor execution. Change the cursor object execute() and executemany() method calls to use a question mark (?) for parameters instead of numeric parameters (:1) or named parameters (:column_name). The following example should be similar to the existing Python code used with Oracle.
c = db.cursor()
custId = "ALFKI"
c.execute("Select * from customers where customerID = :1", custId)
...
id = "123"
name = "Bill"
desc = "CoffeeShop"
c.execute("insert into categories " + \
    "(categoryid, categoryName, description) " + \
    "values (:cid, :cname, :cdesc)", \
    cid=id, cname=name, cdesc=desc)

The code should be modified for use with SQL Server. In the following code, note that numeric and named parameters have been replaced.
c = db.cursor()
custId = "ALFKI"
c.execute("Select * from customers where customerID = ?", (custId,))
...
id = "123"
name = "Bill"
desc = "CoffeeShop"
c.execute("insert into categories " + \
    "(categoryid, categoryName, description) " + \
    "values (?,?,?)", \
    (id, name, desc))
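A first pass over the source can be automated with a naive placeholder rewrite. The helper below (to_qmark is illustrative, not a library function) converts :1-style and :name-style markers to qmark style; it is regex-based and does not skip string literals, so its output still needs manual review.

```python
import re

# Matches :name or :1 style parameter markers used by DCOracle2.
_PLACEHOLDER = re.compile(r":[A-Za-z_]\w*|:\d+")

def to_qmark(sql):
    """Rewrite :1 / :name parameter markers as ? (naive; review output)."""
    return _PLACEHOLDER.sub("?", sql)

converted = to_qmark(
    "insert into categories (categoryid, categoryName, description) "
    "values (:cid, :cname, :cdesc)")
```

Remember that switching to qmark style also changes how values are supplied: they become a positional tuple rather than keyword arguments, as shown in the example above.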

Multiple cursor executions. The executemany() function is used with DCOracle2 to perform multiple inserts with a single call by passing a list of parameters. For example:
cursor.executemany("insert into inventoryinfo values (:1,:2)",
    [('Bill','Dallas'),('Bob','Atlanta'),('David','Chicago'),
     ('Ann','Miami'),('Fin','Detroit'),('Paul','Dallas'),
     ('Scott','Dallas'),('Mike','Dallas'),('Leigh','Dallas')])

To use multiple cursors on a connection to SQL Server, use the setconnectoption() method to set SQL.CURSOR_TYPE to SQL.CURSOR_DYNAMIC, as shown in the following example, and replace the parameter specification style in the executemany() call:
# Connect to the database
db = mx.ODBC.iODBC.DriverConnect('DSN=sqlserverDSN;UID=scott;PWD=tiger')
# Set the SQL Server connection option
db.setconnectoption(SQL.CURSOR_TYPE, SQL.CURSOR_DYNAMIC)
cursor = db.cursor()
cursor.executemany("insert into inventoryinfo values (?,?)",
    [('Bill','Dallas'),('Bob','Atlanta'),('David','Chicago'),
     ('Ann','Miami'),('Fin','Detroit'),('Paul','Dallas'),
     ('Scott','Dallas'),('Mike','Dallas'),('Leigh','Dallas')])

Stored procedures. The DB-API 2.0 callproc() method for executing stored procedures is not implemented in mxODBC. To work around this, a stored procedure can be executed like a SQL statement using one of the execute methods. However, a requirement in this case is that the stored procedure returns some result. All calls to callproc() need to change to use cursor.execute(), as shown in the following example:
c.execute("{call proc_name(?,?)}", (1, 2))

7. Change all embedded SQL statements to T-SQL. This is a step common to all migrations. Refer to Chapter 11, "Developing: Applications Migrating Oracle SQL and PL/SQL," for a detailed discussion on modifying Oracle SQL to be SQL Server-compliant.

Scenario 2: Port the Python Application to Win32


Because Python is a cross-platform language, porting applications from a UNIX environment to Windows is not much more complex than the steps required for interoperation. The API used in this discussion of porting to the Windows environment is mxODBC. The strategy, steps, and actual changes required are very similar to those discussed for interoperation. Zope offers an ADO adapter for SQL Server on the Windows platform, which can give better performance than ODBC. Details are available at http://zope.org/Members/freiser/ZADODA.

Case 1: Porting a Python Application using mxODBC


Most of the following steps are covered in more detail in the "Case 1: Interoperating Using the mxODBC Module" section of Scenario 1. To port the application to Windows, follow these steps:
1. Download Python for Windows from http://www.python.org.
2. Install Python on Windows.
3. Add the Python folder to the Windows PATH environment variable.
4. Install the appropriate mxODBC for the version of Python installed.
5. Create an ODBC DSN for the target database. Create an ODBC data source to connect to SQL Server. After the DSN is created, a Python application using mxODBC can connect to SQL Server and run queries.
6. Transport the source code to the target server.
7. Update the application code. If your application uses DCOracle2, the required changes will be similar to those discussed in the "Scenario 1: Interoperating Python on UNIX with SQL Server" section earlier in this chapter. Some variations based on the Windows environment include the import statement and the connection object, as shown in the following examples. The import statement should be changed from:
import DCOracle2

to
import mx.ODBC.Windows

The connection object must be initialized as in the following example:
db = mx.ODBC.Windows.DriverConnect('DSN=sqlserverDS')


Note: This DSN uses Windows Authentication Mode to connect to SQL Server.
8. Change all embedded SQL statements to T-SQL. This is a step common to all migrations. Refer to Chapter 11, "Developing: Applications Migrating Oracle SQL and PL/SQL," for a detailed discussion on modifying Oracle SQL to be SQL Server-compliant.
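With the DSN in place, the ported data-access code follows the Python DB-API 2.0 pattern that both DCOracle2 and mxODBC implement. The sketch below shows the connect/cursor/fetch flow the rewritten module ends up with. Because mx.ODBC requires an installed ODBC driver, the standard library sqlite3 module (also DB-API 2.0 compliant) stands in for mx.ODBC.Windows so the sketch runs anywhere; the table and column names are illustrative, not taken from any real application.

```python
# Sketch of the ported data-access routine. Assumption: in the real port
# the connection comes from mx.ODBC.Windows.DriverConnect('DSN=sqlserverDS');
# sqlite3 is used here only as a runnable DB-API 2.0 stand-in.
import sqlite3

def fetch_customers(db):
    """Return all (CustomerID, CompanyName) rows via the DB-API cursor pattern."""
    cur = db.cursor()
    cur.execute("SELECT CustomerID, CompanyName FROM Customers")
    rows = cur.fetchall()
    cur.close()
    return rows

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE Customers (CustomerID TEXT, CompanyName TEXT)")
db.execute("INSERT INTO Customers VALUES ('ALFKI', 'Alfreds Futterkiste')")
for customer_id, company_name in fetch_customers(db):
    print(customer_id, company_name)
db.close()
```

Because both DCOracle2 and mxODBC follow the DB-API, cursor code like this usually survives the port unchanged; only the import and the connect call need to be rewritten.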


16
Developing: Applications Migrating Oracle Pro*C
Introduction and Goals
Third-generation languages (3GLs) such as C and C++ do not have the capability to interface with databases; databases communicate using SQL, a fourth-generation language (4GL). Embedded SQL allows a database to interface directly with 3GL languages. Pro*C is Oracle's proprietary interface to the C language that permits the use of embedded SQL. As discussed in the "Define the Solution Concept" section of Chapter 2, "Envisioning Phase," there are four different strategies available for transitioning applications in an Oracle to Microsoft SQL Server migration project:
- Interoperate the application with UNIX
- Port or rewrite the application to the Microsoft .NET platform
- Port or rewrite the application to the Microsoft Win32 platform
- Quick port using the Microsoft Windows Services for UNIX 3.5 platform

Because Pro*C is specific to Oracle, migration strategies are limited. Interoperation and porting cannot be used because no methods exist that allow Pro*C to communicate with SQL Server. A complete rewrite is the only option. This chapter focuses on the preferred scenario for Pro*C application migrations: Scenario 1: Rewriting the application to the .NET platform.

Rewriting an application is a challenging endeavor because a one-to-one mapping from the source environment to the target environment is very difficult to achieve. When an application is rewritten, there is always a risk that some functionality from the original application may be lost during the transition. Commonly, Pro*C programs are procedural and do not have a graphical user interface; migrating procedural code is easier than migrating GUI-based applications. Extensive testing needs to be performed on the rewritten application to ensure that all of the business logic and functionality has been accurately recreated.

This chapter discusses only the transformations that need to occur to the most common database-related components when rewriting a Pro*C application for the .NET environment. A comprehensive study of the entire rewrite is beyond the scope of this guidance.


Understanding the Technology


Pro*C is available on the Windows platform. Unfortunately, there are no methods available to allow Pro*C code to interface with SQL Server. Microsoft offers a product similar to Pro*C called Embedded SQL for C (ESQL/C), which allows Transact-SQL statements to be embedded in C language programs. ESQL/C could serve as the target language for rewriting Pro*C programs because of its close resemblance to Pro*C. However, using this language is not recommended because Microsoft no longer develops the product, and future versions of SQL Server will not support ESQL/C.

Visual Basic .NET is recommended for rewriting the application. Visual Basic .NET provides a robust programming language and can easily be interfaced with SQL Server. When migrating the database from Oracle to SQL Server, the Pro*C code and its interactions with Oracle should be examined to ensure that its functions can be recreated in the rewritten application.

Understanding Pro*C
Before the application can be rewritten, you must evaluate the functionality currently provided by the Pro*C application. While the computational and business logic is provided by the host C language, Pro*C provides the syntax to implement the database transactions. The embedded SQL in Pro*C offers the following functionality that will have to be migrated:
- Connect to and maintain a connection to an Oracle database.
- Declare, open, and use cursors to manipulate data.
- Interact with Oracle databases through SQL statements.
- Provide mechanisms (host variables) to interface between the host code and the database code.
- Close any open cursors.
- Handle any database exceptions.
- Close the connection to the database.

Each of these functions will need to be recreated in the new application. Pro*C is mostly used as an imperative programming language for constructing programs that involve computation. The target programming environment should have programming capabilities and features similar to those of C/C++ and should also interface with SQL Server for retrieving and manipulating data. GUI capabilities are not discussed in this guide.

Understanding .NET and ADO.NET


The Microsoft .NET Framework provides a programming infrastructure for building, deploying, and operating applications and services. When working with databases, ADO.NET is a key component. ADO.NET is a set of libraries included with the Microsoft .NET Framework that enables communication with various data stores from .NET applications. The ADO.NET libraries include classes for connecting to a data source, submitting queries, and processing results. ADO.NET provides the tools to efficiently implement the data access and functionality provided by Pro*C. Data access in ADO.NET relies on the following two entities:
- DataSet
- Data provider

DataSet
The DataSet is a copy of the data retrieved from the database cached in memory. Data can be loaded into a DataSet from any valid data source, such as a SQL Server database, or an XML file. The DataSet persists in memory, and the data therein can be manipulated and updated independent of the database. When appropriate, the DataSet can then be used for updating the database.

Data Provider
A data provider forms the link between the Visual Basic .NET program and the database. A data provider is not a single component; rather, it is a set of related components that work together to provide data in an efficient, performance-driven manner. Two data providers are available for Visual Basic .NET: the SQL Server .NET Data Provider, which is designed specifically to work with SQL Server 7.0 or later, and the OLE DB Data Provider, which can also connect to other types of databases. Each data provider consists of versions of the following generic component classes:
- The Connection object provides the connection to the database.
- The Command object executes a command against a data source. It can execute non-query commands, such as INSERT, UPDATE, or DELETE, or return a DataReader with the results of a SELECT command.
- The DataReader object provides a forward-only, read-only, connected recordset.
- The DataAdapter object populates a disconnected DataSet or DataTable with data and performs updates.

The following classes are also offered: Transaction, CommandBuilder, Parameter, Exception, Error, and ClientPermission. For more information on data provider components, refer to http://msdn.microsoft.com/library/default.asp?url=/library/en-us/cpguide/html/cpconusingadonetproviderstoaccessdata.asp. Even though the syntax and use of the functions differ, ADO.NET provides equivalent database functions for SQL Server. It is important to ensure that these functions are translated to the new environment during the rewriting process.

Scenario 1: Rewriting Pro*C to the .NET Platform


Rewriting the application for Windows makes use of native SQL Server APIs, ADO.NET, Microsoft ActiveX Data Objects (ADO), or Open Database Connectivity (ODBC) running natively in the Windows subsystem. Rewriting the application can result in a more stable and integrated configuration after the project is complete.

Case 1: Rewrite the Application using Visual Basic.NET


Visual Basic.NET provides all the functionality of the C programming language as well as several interfaces to SQL Server databases to execute SQL statements. This scenario describes how to rewrite a Pro*C application with Visual Basic .NET. The figure that follows shows a graphical representation of how the Pro*C code will migrate over to Visual Basic .NET during the rewrite.


Figure 16.1
Database access in Pro*C and Visual Basic .NET

Each function that currently resides in Pro*C code will need to be rewritten, from the user interface to the database connection. There are two options available to connect to the SQL Server data source from the .NET Framework:
- The SQL Server .NET Data Provider, implemented in the System.Data.SqlClient namespace.

- The OLE DB .NET Data Provider, implemented in the System.Data.OleDb namespace. There are OLE DB providers for both Oracle and SQL Server.


For more information on data access services available with ADO.NET, refer to http://msdn.microsoft.com/library/default.asp?url=/library/en-us/cpguide/html/cpconaccessingdatawithadonet.asp. To rewrite an application using Visual Basic .NET, follow these steps:
Note: The examples used in these procedures are based on the SQL Server .NET Data Provider and DataReader objects. If an OLE DB provider is used, there may be slight variations to the following steps.

1. Install the .NET Framework SDK and redistributable. The SDK contains the resources for building, testing, and deploying .NET Framework applications. For access to the latest SDK, refer to http://msdn.microsoft.com/netframework/downloads/updates/default.aspx. The Microsoft Visual Basic .NET software is required for programming the application. For more information, refer to http://www.microsoft.com/downloads/details.aspx?FamilyID=dd6ec730-fd9a-4e36-8552-8accc4cd8fb3&DisplayLang=en.
At this point, it is assumed that you have all the components in place to write a VB .NET application using the SQL Server .NET Data Provider. It is also assumed that the migrated SQL Server database is in place. For several references on creating Windows Forms applications, refer to http://msdn.microsoft.com/library/default.asp?url=/library/en-us/cpguide/html/cpconCreatingWindowsFormsApplications.asp.
2. Rewrite connectivity statements within application code. In Pro*C, database open and close statements are similar to the following example:
/* assign username and password to host variables */
strcpy(username.arr, "SCOTT");
username.len = strlen(username.arr);
strcpy(password.arr, "TIGER");
password.len = strlen(password.arr);

/* execute connect statement */
EXEC SQL CONNECT :username IDENTIFIED BY :password;

/* execute close statements */
EXEC SQL COMMIT WORK RELEASE;

These same functions can be recreated using ADO.NET. Here are examples of the same functions as written for .NET.
' declare a connection object
Dim cn As New SqlConnection
' define a connection string
Dim strConn As String
' this is the connection string
strConn = "Data Source=(local)\DataSourceName;" & _
          "Initial Catalog=databaseName;" & _
          "Trusted_Connection=Yes;"
' instantiate connection object


cn = New SqlConnection(strConn)
' open connection
cn.Open()
' close connection
cn.Close()

For more details on the SqlConnection class, refer to http://msdn.microsoft.com/library/default.asp?url=/library/en-us/cpref/html/frlrfsystemdatasqlclientsqlconnectionclasstopic.asp.
3. Rewrite the queries and cursor functions. Pro*C uses embedded SQL to store the output of a query statement in a cursor from which rows can be fetched. Visual Basic .NET does not support cursors; instead, it uses several other methods for retrieving data sets from SQL queries. One of these data retrieval methods is the DataReader object, a lightweight object that is read-only and, hence, ideal for queries. For more information on retrieving data using the DataReader object, refer to http://msdn.microsoft.com/library/default.asp?url=/library/en-us/cpguide/html/cpcontheadonetdatareader.asp. Here are some common cursor operations encountered in Pro*C applications: Defining a cursor:
EXEC SQL DECLARE cust_cursor CURSOR FOR
    SELECT CustomerID, CompanyName FROM Customers;

Opening the cursor:


EXEC SQL OPEN cust_cursor;

Fetching rows using the cursor:


for (;;)
{
    EXEC SQL WHENEVER NOT FOUND DO break;
    EXEC SQL FETCH cust_cursor INTO :customerid, :companyname;
    printf("CustomerID: %s CompanyName: %s\n", customerid, companyname);
}

Closing the cursor:


EXEC SQL CLOSE cust_cursor;

Although ADO.NET does not use explicit cursors, equivalent functions can be created: Defining the query:
strSQL = "SELECT CustomerID, CompanyName FROM Customers"

Creating a command object:


Dim cmd As New SqlCommand(strSQL, cn)

Creating a data reader object:


Dim rdr As SqlDataReader = cmd.ExecuteReader()

Displaying the data reader result set:


While rdr.Read()
    Console.WriteLine(rdr("CustomerID") & " " & rdr("CompanyName"))
End While

Or, equivalently, using the typed ordinal accessors:

Do While rdr.Read()
    Console.WriteLine(rdr.GetString(0) & " " & rdr.GetString(1))
Loop

Closing the data reader object:


rdr.Close()

For more information on implementing a DataReader, refer to http://msdn.microsoft.com/library/default.asp?url=/library/en-us/cpguide/html/cpconimplementingdatareader.asp.
4. Rewrite the embedded DML statements. Embedded DML is used to insert, update, and delete data. The SqlCommand object can be used to perform these SQL operations; the DataAdapter and DataSet objects can also be used to execute DML and stored procedures. Migration of these operations is shown in the following examples. The DML operations using Pro*C: Executing an INSERT DML statement:
EXEC SQL INSERT INTO Customer VALUES (:newcompanyname, :customerid);

Executing an UPDATE DML statement:


EXEC SQL UPDATE Customer SET CompanyName = :newcompanyname WHERE CustomerID = 'ACME';

Executing a DELETE DML statement:


EXEC SQL DELETE FROM Customer WHERE CustomerID = 'ACME';

For each of the DML operations shown, the equivalent operations can be performed using ADO.NET: Executing an INSERT DML statement:
Dim strSQL As String
strSQL = "INSERT INTO Customer VALUES " & _
         "(@newcompanyname, @customerid)"
Dim cmd As New SqlCommand(strSQL, cn)
Dim param1, param2 As SqlParameter
param1 = cmd.Parameters.Add("@newcompanyname", SqlDbType.VarChar, 10)
param2 = cmd.Parameters.Add("@customerid", SqlDbType.NChar, 5)
param1.Value = "COMPANY1"
param2.Value = "VALUE1"
cmd.ExecuteNonQuery()

Executing an UPDATE DML statement:


Dim strSQL As String
strSQL = "UPDATE Customer SET CompanyName = @newcompanyname " & _
         "WHERE CustomerID = 'ACME'"
Dim cmd As New SqlCommand(strSQL, cn)
Dim param1 As SqlParameter
param1 = cmd.Parameters.Add("@newcompanyname", SqlDbType.VarChar, 10)
param1.Value = "NewName"
Dim recordsAffected As Integer = cmd.ExecuteNonQuery()

Executing a DELETE DML statement:


Dim strSQL As String
strSQL = "DELETE FROM Customer WHERE CustomerID = 'ACME'"
Dim cmd As New SqlCommand(strSQL, cn)
Dim recordsAffected As Integer = cmd.ExecuteNonQuery()

5. Rewrite Transaction Management operations.
The Transaction Management operations in Pro*C are:
Beginning a transaction:
EXEC SQL SET TRANSACTION READ ONLY;

Committing a transaction:
EXEC SQL COMMIT WORK RELEASE;

Setting a savepoint:
EXEC SQL SAVEPOINT save_point;

Rollback a transaction:
EXEC SQL ROLLBACK WORK RELEASE;

Rollback to a savepoint:
EXEC SQL ROLLBACK TO SAVEPOINT save_point;

Transaction Management in ADO.NET: Changes are made through the DataSet object. The Update method is used to copy the changes made to the DataSet back to the database, which implicitly commits the changes.

Stored procedures are executed in Pro*C using the syntax shown in the following example. In this example, the host variables represent IN or OUT parameters.
EXEC SQL EXECUTE
    BEGIN
        GET_ROOM_DETAILS(:HotelId, :RoomType, :AvailableRooms, :StandardRate);
    END;
END-EXEC;

6. Rewrite calls to stored procedures.

Stored procedure calls in ADO.NET: SqlCommand objects can be used to call SQL Server and MSDE stored procedures. The SqlCommand object exposes a CommandType property that can be used to simplify the code that calls stored procedures. Set the SqlCommand object's CommandText property to the stored procedure name and its CommandType property to StoredProcedure, and then call the stored procedure, as shown in the following code:
Dim strConn As String
strConn = "Data Source=(local)\NetSDK;Initial Catalog=Northwind;" & _
          "Trusted_Connection=Yes;"
Dim cn As New SqlConnection(strConn)
Dim cmd As New SqlCommand("GET_ROOM_DETAILS", cn)
cmd.CommandType = CommandType.StoredProcedure
Dim param As SqlParameter
param = cmd.Parameters.Add("@HotelId", SqlDbType.NChar, 5)
param.Value = "HOTEL1"
cn.Open()
Dim rdr As SqlDataReader = cmd.ExecuteReader()
Do While rdr.Read()
    Console.WriteLine(rdr("AvailableRooms"))
Loop


rdr.Close()

A more detailed discussion on using stored procedures in Visual Basic .NET is available at http://msdn.microsoft.com/library/default.asp?url=/library/en-us/dnadvnet/html/vbnet09102002.asp.
7. Rewrite exception handling. There are two ways to handle Oracle errors in Pro*C:
Using embedded SQL statements. The syntax is:
EXEC SQL WHENEVER [condition] [action];

Checking error codes in the SQLCA. The generic syntax is:


if (sqlca.sqlcode == [a value, e.g., -1])
{
    /* handler code */
}

Exception handling in Visual Basic .NET: Use a Try block to add exception handling to a block of code, and add Catch blocks as necessary to trap individual exceptions. The .NET runtime handles Catch blocks in order, looking for an "is a" match against the current exception, and uses the first block it finds that matches. You can nest Try blocks, making it easy to effectively push and pop exception-handling states. Add a Finally block to your Try block to run code unconditionally, regardless of whether an error occurs. You can also create your own exception classes that inherit from the base Exception class (or any class that inherits from it) to add your own functionality:
Try
    ' Code that throws exceptions
Catch E As OverflowException
    ' Catch a specific exception
Catch E As Exception
    ' Catch the generic exceptions
Finally
    ' Execute some cleanup code
End Try

A more detailed discussion of error handling in VB.NET can be found at http://msdn.microsoft.com/library/default.asp?url=/library/en-us/dnvbdev01/html/vb01f1.asp.
8. Change all embedded SQL statements to T-SQL. This is a step common to all migrations. Refer to Chapter 11, "Developing: Applications Migrating Oracle SQL and PL/SQL," for a detailed discussion on modifying Oracle SQL to be SQL Server-compliant.


17
Developing: Applications Migrating Oracle Forms
Introduction and Goals
Oracle Forms are applications built using Oracle Developer, a Rapid Application Development (RAD) environment available from Oracle. Oracle Forms-based applications can retrieve and manipulate data in Oracle databases, and these forms can also be deployed across the Web. It is not possible to modify an Oracle Forms application to interoperate with a Microsoft SQL Server database, so it is recommended that applications of this type be rewritten. Microsoft Visual Basic .NET is an ideal choice for rewriting an Oracle Forms application because it is very similar in nature to Oracle Developer, both in terms of the development tools and wizards and in terms of the end product itself.

This chapter is for developers who want to migrate an Oracle Forms application running on an Oracle database to Visual Basic .NET running on SQL Server. The tools, processes, and techniques required for a successful conversion are described. In this guide, the Oracle Forms application is first analyzed for its components, and then a similar analysis of the Visual Basic .NET application environment is conducted. Strategies for migrating from Oracle Forms to Visual Basic .NET are then examined. A detailed discussion on how to perform the migration is beyond the scope of this guide; the goal of this chapter is to provide a blueprint for performing the migration.

Migration Approach
Migrating an Oracle Forms application to Visual Basic .NET is complex, and the treatment of the subject here should be seen as a design sketch from which a detailed analysis and plans for an application migration can be made. The basic approach is feasible for the following two reasons:
1. Oracle Forms and Visual Basic user interfaces share many common elements.
2. The Oracle Forms Designer is capable of generating Visual Basic code.
The form is the basis for the user interface (UI) in both Oracle Forms and Visual Basic. The form is graphical in nature and is used to present data and accept user input, and it can contain elements that are both graphical and non-graphical in nature. Oracle Forms and Visual Basic have several common elements between them; however, their organization, naming, and properties differ. Figure 17.1 illustrates how different components of Oracle Forms map to components in Visual Basic .NET.

Figure 17.1
Mapping of Oracle Forms to Visual Basic .NET WinForm

This diagram shows how all the high-level components of an Oracle Forms application map to components in Visual Basic .NET. This mapping serves as a convenient guide for the work to be done and also helps maintain the logical composition of the application. The mapping also helps avoid introducing unintended side effects, such as triggers firing at the wrong time, that would complicate the migration. This diagram is used in this chapter to document the source Oracle Forms application, define the target Visual Basic .NET application, and plan the migration.


Examining Oracle Forms


Oracle Forms applications are transactional applications, optimized for the Oracle database, built using the tools provided by Oracle Developer Suite. Oracle Forms Builder, a part of the Oracle Developer suite, enables you to quickly and easily create database forms and business logic. Development is done using wizards, drag-and-drop components, and built-in functions, and a visual editor can be used to refine the forms with graphics and other components. The coding language is PL/SQL. A client/server Forms application can run on the Web with little or no modification.

Like most GUI applications, an Oracle Forms application is composed of two parts: the application logic and the user interface. Oracle Forms maintains this separation throughout its design; hence, the Forms Runtime is the same in both the two-tier and three-tier architectures. An understanding of the basic concepts and components that make up a form is important to planning an application migration. Figure 17.2 shows a simplified view of the Oracle Forms environment.

Figure 17.2
High-level organization of Oracle Forms components

An Oracle Forms application is an organization of items. The organization of the items has two perspectives: that of the developer, called the form view; and that of the user, called the visual (or presentation) view. In the form view, the items are organized into logical blocks and have no visual structure. The visual structure is provided by constructing the visual view where items are placed on a canvas and presented to the user in windows. It is very important to maintain the same visual view when an Oracle Forms application is migrated. The Visual Basic .NET developers need to look at the visual view to understand how many screen layouts they need in Visual Basic .NET. Each item in a canvas will correspond to one control in .NET. The migration should be attempted by creating two models: one is based on the form view, called the Form Model; and the other is based on the visual view, called the Visual Model. Figure 17.1 shows the form view of the mapping from Oracle Forms to Visual Basic .NET. The equivalent visual view is shown in Figure 17.3.


Figure 17.3
Mapping of visual view of Oracle Forms to Visual Basic .NET

The form view is used by the developer to access the business logic and the data access. The visual view is used to build the user interface.
Note: Providing a complete description of how to transform every item and every piece of code from an Oracle Forms application to a Visual Basic .NET application is beyond the scope of this guide. In this section, the entire Oracle Forms application is examined, with an emphasis on understanding the database-related components.
An Oracle Forms application is built using components called modules. The following four types of modules are visible when a form application is opened in Oracle Forms Builder:
1. Object Library
2. PL/SQL Library
3. Form Module
4. Menu Module
Figure 17.4 shows the details of an Oracle Forms application as seen in the Ownership View of Oracle Forms Builder. This tool will be employed in documenting the current application.


Figure 17.4
An Oracle Form as visible in Oracle Forms Builder

An application can be built using multiple forms, menus, and library modules. The Oracle Forms Builder can be used to browse the component model and understand the modules and their characteristics so that the Forms Model can be built. The elements offered by these modules account for the capabilities, features, and functioning of the application and should be the focus of the developer involved in migrating an application.


Table 17.1 is a compilation of the files that make up an Oracle Forms application.

Table 17.1: Oracle Forms File Extensions

Module           Extension   Characteristic
Form             .fmb        Form design file
                 .fmx        Form executable runfile
                 .fmt        Form text export file
Menu             .mmb        Menu design file
                 .mmx        Menu executable runfile
                 .mmt        Menu text export file
PL/SQL Library   .pll        PL/SQL library design file (can also be executed; contains both source and executable code)
                 .plx        PL/SQL library executable runfile (contains no source code)
                 .pld        PL/SQL library text export file
Object Library   .olb        Object library design file
                 .olt        Object library text export file
Oracle Forms Builder can be used to view all the components in a Forms application by opening the form and clicking View, and then Show PL/SQL only.

Object Library
The object-oriented programming concept of code reuse is implemented in Oracle Forms by the Object Library and PL/SQL Library modules. The Object Library contains reusable objects, and the PL/SQL Library contains reusable code; hence, these are examined first in a migration.

The Object Library is a collection of forms objects that can be used in other form, menu, or library modules. It is used to store, maintain, and distribute standard forms objects, which can then be reused across the entire application. These objects also help conserve runtime memory and encourage reuse. The objects in an Object Library are commonly organized as folders; the organization may be by type of item or by application functional area. A good example of the use of an object library is a control block used to enter complex search criteria in a query-only form, which is then used in multiple forms.

Oracle Forms Builder allows creation of a subclassed version of a library object for reuse in any desired module. To tell whether an object is subclassed, look at the Subclass Information property of the data block. The object-oriented features of Visual Basic .NET allow you to model library object inheritance closely and maintain the same structure. The object library can be viewed as the equivalent of a DLL.

PL/SQL Library
The PL/SQL library contains reusable code invoked by other form, menu, or library modules. The code, also called a program unit, can be user-named functions, procedures, or packages. The program unit is stored and executed on the client. The program units are not for database-related code alone. They may contain purely business logic. For example, the code could be used to check the validity of a user input in a form. Any SQL contained in these program units is passed to the database.


Libraries make all the PL/SQL code available to the form without being part of the form. The libraries are loaded dynamically. Multiple forms can attach the same PL/SQL library. Table 17.1 identifies the files that are related to the PL/SQL Library that contain code that needs to be migrated.

Form Module
A form (or form module) is the main component that anchors an application and provides the necessary code to interact with the datasource and the user interface. The underlying database data is reflected in multiple items, including fields, check boxes, and radio groups. A form is logically organized into blocks.

Blocks
A block is a container for a group of related items such as text boxes, lists, and buttons. The block itself does not have a visual representation. Oracle Forms Builder provides two main types of blocks:
1. Data blocks
2. Control blocks
These two kinds of blocks are very different from each other. The data block serves as a bridge to the underlying data and provides an abstraction for how the data is reached. Control blocks are more like programming units that organize controls into logical groups forming part of the same user interface navigation cycle. Whether a block is a data block or a control block can be determined by checking the block's Database Data Block property: it is a data block if the setting is "Yes" and a control block if the setting is "No."

Data Block
A data block can be associated with a specific database table (or view), a stored procedure, a FROM clause query, or transactional triggers. By default, the association between a data block and the database allows for automatic access and manipulation of data in the database. Triggers are used when a data block needs access to tables not directly associated with it (non-base tables).

Various relationships can exist between data blocks and their underlying base tables. A common relationship is the master-detail relationship, where each row in the master table corresponds to one or more rows in the detail table. The master-detail relationship allows the primary key and foreign key values to be linked across data blocks. Oracle Forms Builder automatically generates the objects and the PL/SQL code needed to support master-detail relationships when these blocks are built using the Data Block Wizard.


Figure 17.5 shows the different components that make up a master-detail block as created by the Data Block Wizard.

Figure 17.5
Code generated by Oracle Forms Builder Data Block Wizard in creating a master-detail block

By capturing a query on a table, a block can also be designed to show one or more records at a time. These are called single-record or multi-record blocks, respectively. Visual Basic .NET developers can use Oracle Forms Builder to look at the generated code to find out about the relationships between the blocks. For example, in Figure 17.5, the code in the ON-POPULATE-DETAILS trigger on the DEPT block suggests that it is the master block of the EMP block.

In Visual Basic .NET, the equivalent of an Oracle Forms data block is a DataSet into which data can be read from the database; on the visual side, a DataGrid control bound to the DataSet can display the data. The DataSet is just one option; ADO.NET also offers the DataReader for read-only data access. Additional properties convey the association between a data block and a table, such as the WHERE clause and ORDER BY clause, which will be used to create the SQL for the Visual Basic .NET DataSet. The Insert Allowed, Update Allowed, and Delete Allowed properties tell the Visual Basic .NET developer whether the block will insert, update, or delete any records.

Solution Guide for Migrating Oracle on UNIX to SQL Server on Windows


Figure 17.6 shows all the properties of a data block.

Figure 17.6
Properties of a data block

Control Block
The second type of block is the control block. The items in a control block are not associated with the database; they do not correspond to columns in any database table. The items in a control block are called control items. For example, buttons in a module can initiate certain actions and can be logically grouped in a control block. The control block is only a logical grouping; the physical or visual placement may differ. It is also a mechanism to segregate the control block items from the items that are dependent on the database. Control items are used to perform functions such as accepting input from the user and displaying calculated values and lookup values.

Program Units
Program units are part of forms modules and they contain named procedures, functions, or packages. The program units are similar to the program units created in the libraries but are local to the form in which they are created. They are stored and executed on the client with any embedded SQL passed to the database for processing. If this code contains database interactions, then it needs to be converted to T-SQL. If it contains only non-database related code, then it needs to be converted to its equivalent in Visual Basic .NET.

Triggers
The trigger object is a PL/SQL block executed on an event. Triggers can be owned by the form module, a data block, or an item, depending upon their scope.


If this code contains database interactions, then it needs to be converted to T-SQL. If it contains only non-database code, then it needs to be converted to its equivalent in Visual Basic .NET. When a trigger is activated, it executes the code it contains. Each trigger's name defines what event will fire it; for example, a WHEN-BUTTON-PRESSED trigger executes its code each time the user clicks the button to which the trigger is attached. A trigger can be replaced by an equivalent event in Visual Basic .NET. The equivalent of an Oracle Forms WHEN-BUTTON-PRESSED trigger in Visual Basic .NET is the Click event procedure. Double-click the Button control in the designer to open the form's code file and create a Click event procedure.
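The resulting event procedure looks like the following sketch; btnSave is an assumed control name.

```vbnet
' Sketch: the equivalent of a WHEN-BUTTON-PRESSED trigger is a Click
' event procedure. The Handles clause wires the handler to the
' button's Click event.
Private Sub btnSave_Click(ByVal sender As Object, _
        ByVal e As System.EventArgs) Handles btnSave.Click
    ' The code from the trigger body is rewritten here.
    MessageBox.Show("Save requested.")
End Sub
```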

List of Values (LOV)


A LOV is a pop-up window that provides the user with a selection of values. The values can be static or populated by querying the database. LOVs are populated using columns returned by record groups. Check the Record Group property of the LOV for the record group that is used to provide the values. Visual Basic .NET offers pop-up windows that can be programmed to display a list of values obtained by a DataReader.

Record Groups
A record group is a data structure in the form module that behaves like a local table for data retrieved from the database. Record groups are a way to share small lists of values (LOVs). The values can be retrieved using a query or a subprogram. Look at the Record Group Query property to see the SQL statement on which the record group is based. A record group can be replaced by a DataReader in Visual Basic .NET.
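A record group that backs an LOV can be replaced with a DataReader that fills a list control, as in the following sketch. The connection string and query are assumptions standing in for the source form's Record Group Query property.

```vbnet
' Sketch: replacing a record group/LOV with a SqlDataReader that
' fills a ListBox with the values returned by the query.
Imports System.Data
Imports System.Data.SqlClient

Private Sub FillDeptList(ByVal lst As ListBox)
    Dim cn As New SqlConnection("Server=(local);Database=Scott;Integrated Security=SSPI")
    Dim cmd As New SqlCommand("SELECT DName FROM Dept ORDER BY DName", cn)
    cn.Open()
    Try
        Dim rdr As SqlDataReader = cmd.ExecuteReader()
        Do While rdr.Read()
            lst.Items.Add(rdr.GetString(0))
        Loop
        rdr.Close()
    Finally
        cn.Close()
    End Try
End Sub
```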

Menu Module
The menu module consists of a hierarchy of menus. Each menu consists of the items that can be selected. Menu modules are usually attached to form modules. Every form module has a default menu that includes the commands for all basic database operations, such as insert, update, delete, and query. Visual Basic .NET offers a wide range of functionality involving menus. After inserting, updating, or deleting records from an Oracle form, changes are not made in the underlying database until the changes are committed with a save command. While rewriting this module in Visual Basic .NET, it is important to know whether inserts, updates, and deletes are permitted on a data block. The data block properties of Insert Allowed, Update Allowed, and Delete Allowed determine these. Any item in a menu can also be referenced through PL/SQL code to enable/disable menu items, attach commands to menu items, initialize check or radio menu items, and change menu start-up code.

Windows and Canvases


Windows and canvases form the basis for the visual presentation of the form. The visual model is built by looking at the windows and canvases through Form Builder. The developer can then create an identical representation in Visual Basic .NET forms based on the visual model. Oracle Forms has two types of windows: document windows (the main application areas) and dialog windows (messages and other actions). The end user interacts with the application using these windows. In addition to incorporating all the functionality of the original application, it is also important to maintain the general look and feel of the application.


The document window is composed of work areas called canvases, where visual objects such as graphics and items are arranged. Several canvases can exist in a form module. All canvases may appear in a single window (the default) or be spread across multiple windows (to view simultaneously). From the start, a new document window or dialog is associated with a form, and the relationships to blocks tie the visual representation together with the underlying database data. Figure 17.7 shows a window that displays the master-detail view of the data. The Dept part is the header and the Employees part is the detail.

Figure 17.7
Sample application window

Frames
A frame is a visual object found on a canvas. Frames are used to arrange items in terms of blocks. The frame defines visual characteristics such as margins, offsets, and distances between items. Frames may also be defined in the Object Library to enforce standards across the application. The equivalent of a frame in Visual Basic .NET is the Panel control.

Item
Items are interface objects that present the data values to the users. Items also enable the user to interact with the form. The type of interaction varies according to the type of item. Visual Basic .NET offers a wide range of equivalents for the form items called controls. The type of an item can be examined by locating the Item Type property of the item. Items have both a visual representation and a forms representation (code) and will appear in both models. However, the properties captured in each of the models will differ. Only the common items, such as text item, check box, and radio groups, which could be dependent on the database are discussed here. For each of the items, the visual and functional characteristics are attached to the visual model and the logical characteristics are captured in the form model. For example, the height, width, font size, and format mask are the visual characteristics. The database item (table and column) that is used to populate the text item is the form characteristic.

Text Item
Text items allow for text to be displayed and manipulated. The equivalent of a text item in Visual Basic .NET is the TextBox control. When examining text items, Visual Basic .NET developers need to determine whether the item corresponds to a column in a database table by looking at the Database Item property. If so, it needs to be tied to a dataset or other data source in Visual Basic .NET. The name of the text item is not necessarily the same as the name of the database column; look at the Column Name property for the name of the database column.
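Tying a TextBox to the column can be done with a data binding, as in this one-line sketch. The control, dataset, table, and column names are illustrative assumptions.

```vbnet
' Sketch: bind the Text property of a TextBox to the column named in
' the source item's Column Name property.
txtDName.DataBindings.Add("Text", dsDept, "Dept.DName")
```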


Check Box Item


A check box provides the user with a Boolean control that has just two values, such as true and false, or on and off. The check box values can be set by fetching from the database. When creating the equivalent Check Box control, Visual Basic .NET developers need to look at the following three properties of a check box item: Value When Checked, Value When Unchecked, and Check Box Mapping of Other Values.
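Those three properties define how database values map onto the Boolean control. The sketch below assumes the common Oracle convention of "Y"/"N" for Value When Checked and Value When Unchecked; the control and column names are also assumptions.

```vbnet
' Sketch: mapping a check box item's checked/unchecked column values
' onto a CheckBox when loading and saving a row.
Private Sub LoadActiveFlag(ByVal row As DataRow)
    chkActive.Checked = (CStr(row("ActiveFlag")) = "Y")
End Sub

Private Sub SaveActiveFlag(ByVal row As DataRow)
    row("ActiveFlag") = IIf(chkActive.Checked, "Y", "N")
End Sub
```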

Radio Group Item


A radio group provides the user with a fixed set of options that are mutually exclusive. Radio group item values can also be set using database columns. For each radio button, the developer should look at the Radio Button Value property for the value assigned to the button. In Visual Basic .NET, the radio group can be replaced by a combination of a GroupBox and RadioButton controls.
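Reading the selection back from such a group can be sketched as follows. The button names and the values "SALARIED" and "HOURLY" are assumptions standing in for the Radio Button Value properties of the source form.

```vbnet
' Sketch: RadioButtons placed inside a GroupBox replace the radio
' group; the function returns the value of the selected button.
Private Function GetPayType() As String
    If rdoSalaried.Checked Then
        Return "SALARIED"
    Else
        Return "HOURLY"
    End If
End Function
```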

Buttons
Buttons are interface items that can display lists of values, commit data in a form, query the database, or invoke PL/SQL blocks. The WHEN-BUTTON-PRESSED trigger holds the code for these actions. Buttons are also found in Visual Basic .NET.

Understanding Visual Basic .NET


Visual Basic .NET offers the Windows Forms (WinForms) Designer, which is very similar in nature to Oracle Forms Builder. The product of the Windows Forms Designer is a Windows-based forms application that is similar in nature to an Oracle Forms application. Forms are the fundamental elements of the application, on which controls (the same as items in Oracle Forms) can be placed for user interaction. As in Oracle Forms, menus and toolbars provide a structure for the user to control the application. For readers who are unfamiliar with Visual Basic .NET, in-depth technical information is available at http://msdn.microsoft.com/vbasic/using/. If the conversion of the forms is done correctly, the WinForm is simply the presentation layer, with all of the database access logic and rules placed in a business object layer. A later conversion to a Web Forms solution, as a result, only requires associating the appropriate Web server control with an object holding the data retrieved from the business object layer. The controls on a form provide an interface to the event-driven model that is the equivalent of the trigger system in Oracle Forms. As changes are made to text box input, buttons are pressed, or items are selected in a list, additional code is triggered (see "Triggers" in the "Form Module" section earlier in this chapter). The code takes the form of subroutines that handle the raised event and provide some specific action; the PL/SQL routines will map to these subroutines. The action is typically defined as an algorithm or a business rule, and in most applications the business rules extract information from the database. A summary of the entire array of controls in WinForms is available at http://msdn.microsoft.com/library/en-us/vbcon/html/vboriControlsForWinForms.asp. The .NET Framework is so extensible that numerous independent software vendors (ISVs) have created unique controls or have enhanced the functions in existing .NET controls.
Developers can also create their own controls for specialized needs. Many of the most common controls emulate functionality that has been used in visual forms for years, whether in classic Visual Basic, Delphi, or Oracle. One of the most beneficial properties of most controls is the capability to bind them to data sources. Data sources do not have to be ADO.NET objects; sources can come from arrays and other collection-type objects.
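Binding to a non-ADO.NET source can be as simple as the following sketch; cboStatus and the status values are illustrative assumptions.

```vbnet
' Sketch: a ComboBox bound to a plain string array rather than an
' ADO.NET object.
Dim statuses() As String = New String() {"Open", "Closed", "Pending"}
cboStatus.DataSource = statuses
```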

Scenario 1: Rewriting to Visual Basic .NET


Converting from Oracle Forms to Visual Basic .NET is non-trivial; no tool automates the conversion to Visual Basic .NET. In Oracle Forms, an application can be separated into a visual (presentation) component and a code component. This separation makes the migration easier and is the approach used in this guidance. Two methods are available to convert Oracle Forms to Visual Basic .NET:
Case 1: Using Oracle Designer
Case 2: Manually redesigning the application

Case 1: Using Oracle Designer


Visual Basic code is generated from the Oracle Forms application using the Visual Basic Generator, a function of Oracle Designer. The Visual Basic code is then upgraded to Visual Basic .NET using the guidance available from Microsoft at http://msdn.microsoft.com/library/default.asp?url=/library/en-us/vbcon/html/vbconupgradingapplicationscreatedinpreviousversionsofvisualbasic.asp. At the time of publication, this method is not perfect; many manual changes will still need to be made after the automated conversion. Using Oracle Designer is still a useful exercise because it provides a valuable template structure for the migrated Visual Basic application, even though subsequent changes to the code will be necessary.

Case 2: Manually Redesigning the Application


In this method, the entire Oracle Forms application is documented and the new application is built in an orderly fashion. To document and redesign the application, follow these steps:
1. Create a list of the forms found in the application and note the relationships between them.
2. Create two models for each form. The first model, called the Forms Model, documents the four module types: forms modules, menu modules, object libraries, and PL/SQL libraries. The second model, called the Visual Model, documents all the layout-related components: windows, canvases, frames, and items.
3. Document the Forms Model. Open a form in the Ownership View using Oracle Forms Designer and note the components under each module, with their hierarchy from module down to items. Perform the following actions:
a. Start by documenting all the modules (forms modules, menu modules, object libraries, and PL/SQL libraries), using Figure 17.1 as a reference.
b. For each module, incrementally refine the model with the next level of detail using multiple passes.
c. Document all the supporting components, such as LOVs, record groups, and program units, found under the form module.
d. Based on the size of the form, submodels or documents may be prepared for subsections of the model.

e. A block document may be created for just the data blocks and control blocks. Use the Oracle runtime in debug mode to understand the flow; when you run Oracle Forms in debug mode, the runtime environment displays the name of each trigger when it fires. Operate the form as a user would and document the flow of execution in the block document.
f. Create a procedures document containing a compilation of the PL/SQL code and program units found in triggers, user-named subprograms, packages, PL/SQL-type menu items, and menu startup code. All places in a form that contain PL/SQL code can be conveniently viewed by switching the form view to Show PL/SQL Only. For each program unit, note the existence of SQL or calls to database stored programs.
4. Document the Visual Model. Open the form in the Visual View using Oracle Forms Designer and note the components under each module, capturing the hierarchical relationship as shown in Figure 17.3. Document all the visual characteristics of each component.
5. Convert the shared code. Start the migration by converting the shared components that are under the object libraries and PL/SQL libraries. While migrating from Oracle Forms to Visual Basic .NET, continue to maintain the same logical grouping and organization of the various forms components, including the shared libraries. In Visual Basic .NET, create a class library project, one each for the object libraries and the PL/SQL libraries.
Migrate the PL/SQL libraries. The procedures document contains the information required for migrating the PL/SQL libraries. For each PL/SQL library, create a Visual Basic .NET class in the target project. Each program unit in a library can be migrated into a method with equivalent functionality.
Migrate the object libraries. The migration of the object libraries can be performed using the Forms Model. For each object library, create a Visual Basic .NET class in the target project. Each object in a library can be migrated into an equivalent method in the class.

6. Convert the Visual Model. Create a Visual Basic .NET Windows application project. The goal in this step is only to build a shell that is visually identical to the Oracle Forms application; this step is concerned with the form, not the function. The functionality is addressed in the next step. For example, a DataGrid control is added to a form in this step, while the tasks to connect to SQL Server and access data are performed in the next step. For each window found in the Visual Model, create a form in the project. In Visual Basic .NET, the concept of a canvas is handled by the form itself. The frames in the Oracle Forms application should be converted to panels in Visual Basic .NET, and the items in a frame should be replaced with their equivalent controls. Also add menu controls to match the Oracle Forms application. For steps on adding forms to a project, refer to http://msdn.microsoft.com/library/default.asp?url=/library/en-us/vbcon/html/vbtskaddingformtoproject.asp. For more information on how to add Windows Forms controls, refer to http://msdn.microsoft.com/library/default.asp?url=/library/en-us/vbcon/html/vboriwinformscontrols.asp.

For more information about techniques for arranging the various controls on a Windows Form, refer to http://msdn.microsoft.com/library/default.asp?url=/library/en-us/vbcon/html/vbconformsdesigner.asp. For guidance on adding menus, refer to http://msdn.microsoft.com/library/default.asp?url=/library/en-us/vbcon/html/vbconwindowsaccessories.asp.


7. Convert the Forms Model. At this stage, all the pieces are in place to code the business logic and application functionality. The majority of the work involves converting the code in the blocks and items, which was documented in the block document. Because an almost one-to-one mapping of the shared libraries and their objects has been performed, the Visual Basic .NET code can be written as a parallel to the code in the source application.
a. Start by converting the components, such as control blocks, program units, record groups, LOVs, and triggers, that could be tied to multiple controls.
b. Code all the data components in the form, because these may be associated with multiple controls. A detailed account of how to perform this is provided in the "Adding Data Components to a Windows Form" section later in this chapter.
c. Complete the migration of the data block by adding code to read data and bind the grid.
d. Although the grid is bound to the dataset you created, the dataset itself is not automatically filled. Instead, you must fill the dataset by calling a data adapter method. Double-click the form to display the form's class file in the Code Editor. In the form's event handler, call the data adapter's Fill method, passing it the dataset you want to populate: SqlDataAdapter1.Fill(dsDept).
e. For more details on data binding and Windows Forms, refer to http://msdn.microsoft.com/library/default.asp?url=/library/en-us/vbcon/html/vboriwindowsformsdataarchitecture.asp.
f. Note that the number of lines of code generated by the Data wizard is far in excess of what would be required if using the Data Access Application Block (DAAB) for .NET.
g. Examine every control and add code to the control and its events.
h. Achieve the proper flow of execution through navigation rules similar to those seen when executing the Oracle Forms application in debug mode.
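Step 7d can be sketched as a Load event handler. The object names follow the defaults the wizards generate (SqlDataAdapter1) plus assumed names for the dataset instance and grid.

```vbnet
' Sketch: fill the wizard-generated dataset when the form loads and
' bind the grid to the resulting table.
Private Sub DeptForm_Load(ByVal sender As Object, _
        ByVal e As System.EventArgs) Handles MyBase.Load
    SqlDataAdapter1.Fill(dsDept)
    grdDept.SetDataBinding(dsDept, "Dept")
End Sub
```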

Adding Data Components to a Windows Form


You do not add a dataset directly to a form. Access to the dataset can be created in one of several ways; two methods are preferred. One method is to use the set of visual objects available in the Toolbox under the Data tab. The other is to take advantage of the Microsoft Data Access Application Block for .NET.


Using the Visual Studio .NET Data Wizards


Visual Studio .NET Data Wizards are ADO.NET code generators for creating access to SQL Server, OLE DB, Oracle, and ODBC data solutions. A small set of data objects is needed to generate datasets under ADO.NET: data adapters, connections, commands, and the datasets themselves. Connections provide the necessary information about data sources, authentication, and pooling options. Commands define whether the source is queried through stored procedures, direct table access, or T-SQL statements. Data adapters provide a specific pipe for accessing data and writing it back to the source by creating the Select, Update, Insert, and Delete commands; creating stored procedures; or using existing stored procedures. Datasets act as the virtual database for the client. To use the wizards to access SQL Server, follow these steps:
1. Create the SqlDataAdapter. From the Toolbox within Visual Studio .NET, choose the Data tab and drag the SqlDataAdapter onto the form area. If other data providers are needed, choose the appropriate OLE DB or ODBC data adapter. The wizard starts, and a SqlDataAdapter1 object is created in the component tray area beneath the form.
2. Reference the connection. The next step of the wizard is to choose from a drop-down list of pre-existing connections used with Visual Studio .NET or to generate a new connection. Clicking New Connection brings up the Data Link Properties dialog box to create the new connection object.
3. Choose a query type. The wizard then allows the developer to choose between SQL statements and stored procedures. As previously mentioned, it is best to use stored procedures when possible.
4. Generate the SQL statements or stored procedures, or bind the commands. Depending on the choice in the previous step, the developer is given the opportunity to provide or create an appropriate SQL statement. Inspecting the PL/SQL code is described earlier in the section "PL/SQL Library."
5. Finish the wizard. The component tray now contains both the SqlDataAdapter1 and SqlConnection1 objects. The information created by the wizard can be viewed from the Properties window of the designer or by looking at the generated code. The Properties window of the data adapter shows the commands chosen (Delete, Insert, Select, Update) and their respective properties.
6. Generate the dataset. Now that the reference has been made to the source and the pipe has been created to work with the source, it is necessary to provide the client with the result. The container for this is a dataset. Because the dataset is based on the information provided by the adapter, Visual Studio .NET has a wizard that takes advantage of the information stored within the adapter. Right-click the SqlDataAdapter1 in the component tray, click Generate Dataset, and then accept most of the defaults (except when the object is renamed).

Using the Microsoft Data Access Application Block


The Data Access Application Block encapsulates performance and resource management preferred practices for accessing SQL Server databases. If used, it reduces the amount of custom code that you need to create, test, and maintain.


Specifically, the Data Access Application Block helps you:
Call stored procedures or SQL text commands.
Specify parameter details.
Return SqlDataReader, DataSet, or XmlReader objects.

Instead of using the wizards (which create numerous references to adapter, command, and connection objects), the Data Access Application Block (DAAB) helps limit the creation of datasets to as little as one line of code. The Data Access Application Block is available at http://www.microsoft.com/downloads/details.aspx?FamilyId=F63D1F0A-9877-4A7B-88EC-0426B48DF275&displaylang=en. After downloading, compile the Visual Basic version of the assembly. It will then be available to be referenced by any Visual Basic .NET application.
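With the assembly referenced, the one-line dataset creation looks like the following sketch. SqlHelper is the DAAB entry point; the connection string and stored procedure name are assumptions.

```vbnet
' Sketch: produce a filled DataSet in a single call through the
' Data Access Application Block.
Imports System.Data
Imports Microsoft.ApplicationBlocks.Data

Dim ds As DataSet = SqlHelper.ExecuteDataset( _
    "Server=(local);Database=Scott;Integrated Security=SSPI", _
    CommandType.StoredProcedure, "usp_GetDept")
```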

Calling T-SQL Stored Procedures from Windows Forms


To call a stored procedure, you first need to identify the stored procedure and then create a DataReader or DataAdapter object. Each requires a database connection and a reference to the stored procedure name through the CommandText property. Next, you set the CommandType property to CommandType.StoredProcedure. Finally, any stored procedure parameters need to be defined for input and/or output in a collection of parameter objects. If a DataReader is used, the developer must fill the appropriate controls by iterating through the collection of rows returned and applying the values to the controls. If a DataAdapter is used, a DataSet must be filled and the controls must be bound through the DataSource or DataBindings properties.
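The DataReader path described above can be sketched as follows. The procedure name, parameter, and control names are assumptions.

```vbnet
' Sketch: call a T-SQL stored procedure with an input parameter and
' apply the returned rows to a control through a SqlDataReader.
Imports System.Data
Imports System.Data.SqlClient

Private Sub ShowEmployees(ByVal deptNo As Integer)
    Dim cn As New SqlConnection("Server=(local);Database=Scott;Integrated Security=SSPI")
    Dim cmd As New SqlCommand("usp_GetEmpByDept", cn)
    cmd.CommandType = CommandType.StoredProcedure
    cmd.Parameters.Add("@DeptNo", SqlDbType.Int).Value = deptNo
    cn.Open()
    Try
        Dim rdr As SqlDataReader = cmd.ExecuteReader()
        Do While rdr.Read()
            lstEmp.Items.Add(rdr("EName").ToString())
        Loop
        rdr.Close()
    Finally
        cn.Close()
    End Try
End Sub
```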

Testing the Visual Basic .NET Application


The Visual Basic .NET application is a newly written application rather than a migrated one. Even though Oracle Forms and Visual Basic are architecturally and functionally similar, none of the code is reusable. Hence, the application should undergo unit testing using the same procedure that would be followed for new development. However, this newly developed Visual Basic .NET application does differ from other new development projects: it was designed to mimic the form and function of the Oracle Forms application, while most applications are developed from documented requirements.

The objective of unit testing is to verify that the user interface closely resembles the Oracle Forms application and that every component functions correctly. Start by visually comparing every frame in the Oracle Forms application to the corresponding window in Visual Basic .NET. Ensure that all panels are present and that all the controls (items) are accurately presented on each panel. Cross-check all the visual components against the Visual Model.

The event models differ between Oracle Forms and Visual Basic .NET. Oracle Forms contains a nested hierarchy, with events being fired when moving through the window and its objects. These have to be duplicated in Visual Basic .NET. Verify the behavior by performing a walk-through of the entire application with tracing and debugging enabled; the logs can later be reviewed for errors and execution sequences. Visual Basic .NET provides a set of methods and properties that help you trace and debug the execution, and interactive debuggers are also available in Visual Studio .NET.

The application should then be tested for functionality. Start by testing the menus and then drill down window by window. Test the functionality of every control. Use the same set of test cases in both applications. Populate the test data in the form and click all the controls. Verify that all the functional areas and navigation match the old application. Validate that all the presentation-domain screens are called by the application. For detailed guidance about all aspects of testing your Visual Basic .NET application and the available tools, refer to http://msdn.microsoft.com/library/default.asp?url=/library/en-us/vsent7/html/vxoriTestingOptimizing.asp.
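The tracing support mentioned above can be enabled with the Trace class, as in this minimal sketch; the log file name is an assumption.

```vbnet
' Sketch: log the execution sequence of a walk-through to a text file
' using a TextWriterTraceListener.
Imports System.Diagnostics

Trace.Listeners.Add(New TextWriterTraceListener("walkthrough.log"))
Trace.AutoFlush = True
Trace.WriteLine("Entering DeptForm_Load")
```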


18
Stabilizing Phase
Introduction and Goals
During this phase, the test team verifies that the solution meets the defined quality levels and that the risk of bugs is eliminated or minimized. Any remaining bugs should not affect critical functionality or performance. After the solution has been stabilized, it is ready for deployment into the production environment. This chapter highlights the processes of stabilization as they relate to an Oracle on UNIX to Microsoft SQL Server on Windows migration project. The main goals for the Stabilizing Phase include:
Improve the overall quality of the migrated solution and stabilize it for release.
Ensure that the solution meets the requirements of the project outlined in the Envisioning and Planning Phases.
Assemble all of the components of the solution and test the entire system before deployment.
Complete and validate the documentation that is required for deployment, operations, and end users.
Evaluate and mitigate the risks involved in releasing the solution for deployment.

The deliverables produced during the Stabilizing Phase are listed in Table 18.1.

Table 18.1: Deliverables for the Stabilizing Phase
Task | Deliverables | Owner
Testing the solution | Release versions of source code, executables, and scripts, and installation documentation | Test Role
Testing the solution | Release versions of end-user training materials and operations documentation | User Experience Role
Testing the solution | Release notes | Release Management
Bug tracking and reporting | Testing and bug reports | Test Role
Piloting the solution | Pilot review | Release Management
Piloting the solution | Project documents | Program Management
Piloting the solution | Sign-off document | Project team


The Stabilizing Phase consists of two major activities: testing the solution and piloting the solution. During testing, the entire ecosystem is evaluated, including the hardware, software, database, connectivity layer, and application. As a result of the testing, bugs and issues are tracked and resolved. The second major activity of the Stabilizing Phase is piloting the solution with a select group of end users and deployment and operations personnel in the production environment. Pilot testing also helps anticipate and resolve any issues that may occur during deployment. While piloting the solution is optional, this activity is highly recommended because components originally developed for another platform are being deployed into the Windows environment.

Testing the Solution


Testing is an iterative refinement process and is initiated every time there is a change to the solution (including bug fixes). Second and third iterations are common in data migrations involving complex data integrity rules. Application testing will vary based on the amount of modification needed to migrate the application. For example, when an application is ported, the code coverage tests may be given limited importance because the code base has not changed from the existing application. The test team should accumulate complete knowledge of the software's functionality and of the tests that should be performed to verify it. Often, the original design considerations and approach used to create the existing solution are not available to the migration team, which limits the effective knowledge of the test team. Testing should not be limited to the parts that the developers have identified as affected by the migration; changes in the back-end database and the environment can manifest themselves in unpredictable places. The project team should reuse test cases from the existing solution. If none exist, new test cases can be created from the business requirements of the existing solution. Tests can then be run against both the source and target systems in the migration project and the results compared to identify deviations and bugs in the new solution. Logs and audit reports that document any defects associated with the applications need to be created and published to the entire team in conjunction with the tests performed, because the source of a bug and the place where it manifests itself may differ. For example, even though a problem may be detected in the application, the source may be a database configuration or a hardware setting. Each iteration of testing helps make the subsequent iteration more robust and complete than the previous one. This process must continue until an iteration has no exceptions.
The number of tests completed and the defect rate are used to measure progress and schedule conformance. A comprehensive series of tests is key to meeting the goals of the Stabilizing Phase. These tests provide assurances of the quality and stability of the solution. The following sections provide information about:
Best practices
Preparing for testing
Types of testing
Bug tracking and reporting
User Acceptance Testing (UAT) and sign-off

This information will help you assure the quality and stability of the solution you have developed.

Best Practices
When testing a solution, two important best practices are clearly defining the success criteria and approaching testing with a zero-defect mindset.

Success Criteria
Judging whether a project has been successful is almost impossible without something to measure the project's results against. Success criteria, also called key performance indicators (KPIs), can be created by defining the conditions under which the proposed solution will achieve its goals. In a migration project, success can be gauged by the effectiveness of the new solution: does it effectively replace the existing solution in terms of features, performance, and function, based on results from the test cases?
Note Though measured in the Stabilizing Phase, the success criteria for a project are established during the Envisioning and Planning Phases.

Zero-defect Mindset
The concept of a zero-defect mindset encompasses the project team's commitment to producing the highest quality product possible. Each member is individually responsible for helping achieve the desired level of quality. The zero-defect mindset does not mean that the deployed solution must be flawless; rather, it specifies a predetermined quality bar for the deliverables. Often, the project schedule will be a determining factor in achieving a zero-defect solution. In situations where the schedule does not allow for complete testing, tests should be prioritized to ensure that all critical functionality is evaluated. In addition, the zero-defect mindset should be carried throughout the dynamic life cycle of the solution. For example, as the database scales over time, additional tuning will be required to ensure optimum performance.

Preparing for Testing


Preparation for testing involves creating two key items:
- The test environment
- A bug tracking system

Each of these items is discussed under the following headings.

The Test Environment


The development and test plans provide a list of requirements that must be met by the test environment, and the test environment should be set up according to them. The test environment should be completely separate from the production environment. Although it may not mirror the production environment, it is best if it does. If the Development Phase is complete, the same hardware and software may be used to test the solution. If the needs of the testing environment are more demanding, additional resources may be required. In some situations, it may be necessary to scale the testing based on the available hardware.

In MSF, setting up the test environment starts toward the end of the Planning Phase. This is to make sure that a test environment is available to the development team while individual components are being developed. However, a full-scale test environment to verify all aspects of an integrated solution will only be required in the Stabilizing Phase.

Another consideration is ensuring that the testing environment is properly tuned before testing commences. Some recommended best practices for optimal tuning of SQL Server running under Windows 2003 include:
- Format disk partitions using NTFS 5.0. This file system provides performance enhancements in Windows 2003.
- Configure SQL Server as a stand-alone server. If configured as a domain controller, additional resources are utilized and performance is reduced.
- Set the Application Response setting to optimize for background services. This setting allows background services to run at a higher priority than foreground applications and is accessed through the System icon in Control Panel. It will improve the performance of SQL Server.
- Turn off additional security auditing. This will reduce I/O activity.
- Set the size of PAGEFILE.SYS. Monitor the usage of this swap file used by SQL Server and size it slightly larger than your needs.
- Turn off unnecessary services. Review all services and determine which can be turned off.
- Turn off unnecessary network protocols.

For more information on tuning the test environment, refer to http://www.microsoft.com/windowsserver2003/evaluation/performance/tuning.mspx.

Bug Tracking Solution


An effective bug tracking system is needed to make sure that bugs are identified and that issues are not dropped before they have been completely resolved. Projects typically generate several hundred or even thousands of bugs, so it is imperative to have a robust bug tracking system in place to address them. The bug tracking software for the project was selected during the Planning Phase. Using this software from the beginning of testing allows all bugs identified through the life cycle of the solution to be tracked in one location. This testing history can also be a useful reference for future releases after the solution has been deployed.

Bug categorization is an important consideration when configuring the bug tracking solution. The variables required by the categorization may affect the configuration of new and existing bug tracking solutions. These variables include:
- Repeatability. A measure of how repeatable the issue or bug is: the percentage of the time the issue or bug manifests itself.
- Visibility. A measure of how obscure the situation or environment is that must be established before the issue or bug manifests itself. For example, if the bug occurs only when the user holds down the Shift key while right-clicking the mouse and viewing the File menu, the obscurity is likely to be high.
- Severity. A measure of how much impact the issue or bug has on the solution, the code, or the users. For example, a bug that causes the application to crash would rank higher than one from which the application can recover.

These three variables are estimated and assigned to each issue or bug by the project team and are used to derive the issue or bug priority.

Note The priority of a bug can be calculated by using the following formula: (repeatability + visibility) * severity = priority
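As a minimal sketch, the formula in the note above can be expressed directly in code. The 1-to-5 rating scales used in the example call are an assumption for illustration; teams choose their own ranges:

```python
# Illustrative implementation of the bug-priority formula quoted above.
# The 1-5 rating scales in the example call are assumed, not prescribed.

def bug_priority(repeatability, visibility, severity):
    """priority = (repeatability + visibility) * severity"""
    return (repeatability + visibility) * severity

# A crash (severity 5) that occurs every time (repeatability 5) in a
# fairly common scenario (visibility 4) ranks high:
print(bug_priority(5, 4, 5))  # prints 45
```

Sorting the active bug list by this value gives the team a simple, consistent triage order.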

Types of Testing
Often, the test environment is the first place that all of the separate components (unit tested in the Development Phase) are combined into a fully functional version of the solution. The first task is to ensure that the disparate components of the solution integrate properly. Next, ensure that the solution performs as expected. The next series of tests checks that the solution will work properly under heavy workloads. The final set of tests checks the operational aspects of the solution. The following types of testing are useful in an Oracle to SQL Server migration:
- Integration testing. Does the solution work as a cohesive unit?
- Performance testing. Does the solution meet the baseline requirements?
- Stress testing. How does the solution react to stresses and workloads?
- Scalability testing. How far will the solution scale? Can the system handle increased load by adding new hardware as required?
- Operational testing. Do the operational aspects of the system perform as expected?

Each of these testing types is described under the following headings.

Integration Testing
The first level of testing in the Stabilizing Phase is integration testing, an iterative process in which separate components are combined into larger assemblies until the system is complete. In integration testing, the focus is on the interfaces between components. Integration testing proves that all areas of the system interface with each other correctly and that there are no gaps in communication. The final integration test proves that the entire system works as an integrated unit. Integration testing will also reveal any issues with shared resources. For instance, if Pretty Good Privacy (PGP) encryption is used by more than one application, instead of having a separate key for each application, multiple applications could potentially share a single key. If server consolidation is a business goal for this solution, it has to be performed at this stage. For more information on server consolidation, refer to http://www.microsoft.com/downloads/details.aspx?FamilyId=0F70695E-5D0B-4781-8966-84BE43216F9E&displaylang=en.

Resolving Integration Issues


The major reasons issues arise during integration testing are incompatibilities or inconsistencies in the design or implementation of the interfaces between components. In a migration project, such issues are bound to occur because of interoperation between applications, changes in command usage, and differences in protocols between the two platforms. Such issues should be logged and forwarded to the development team for resolution. Another commonly encountered issue is resource shortage because several components are being assembled for the first time. Additional resources, such as processor, memory, and storage, should be made available to complete the testing.

Performance Testing
Performance testing involves evaluating how well the solution meets the expected criteria. Testing for performance can be further divided into two sub-types:
- Application performance testing
- Hardware utilization performance testing

Application performance testing in a migration focuses on comparing various speed and efficiency factors between the existing solution and the migrated solution. This ensures that the migrated solution complies with the expected level of performance. The key speed and efficiency factors include:
- Throughput. Database throughput measures the total number of transactions that the server can handle in a given time frame. Baseline figures from the existing solution are needed for comparison. Performance testing must be executed using a workload that represents the type of operations most frequently performed in the production environment.
  Note For a detailed discussion of baselines, see Appendix C, "Baselining."
- Response time. Response time measures the length of time required to return the first row of the result set.
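Both factors can be captured with simple harness code. The sketch below is a minimal illustration; `run_transaction` and `fetch_first_row` are hypothetical stand-ins for real database calls, not part of any driver API:

```python
import time

# Illustrative measurement harness for throughput and first-row response
# time. The workload callables are stand-ins for real database calls.

def measure_throughput(run_transaction, duration_seconds=1.0):
    """Throughput: transactions completed per second over a fixed window."""
    count = 0
    deadline = time.monotonic() + duration_seconds
    while time.monotonic() < deadline:
        run_transaction()
        count += 1
    return count / duration_seconds

def measure_response_time(fetch_first_row):
    """Response time: seconds until the first row of the result set returns."""
    start = time.monotonic()
    fetch_first_row()
    return time.monotonic() - start

# Example with trivial stand-in workloads:
tps = measure_throughput(lambda: None, duration_seconds=0.1)
seconds = measure_response_time(lambda: time.sleep(0.01))
print(tps > 0, seconds >= 0.01)
```

The same measurements, taken against both the existing and the migrated solution under a representative workload, produce the comparison figures this section calls for.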

Testing hardware performance in a cross-platform migration is recommended because additional adjustments may need to be made to the proposed solution. There are no reliable benchmarks that provide equivalency performance statistics between the UNIX and Windows platforms. The results of benchmarks from the popular Transaction Processing Performance Council (http://www.tpc.org/) can be used as a guideline. You should ask vendors for assistance in performing lab testing of your solution to get more accurate numbers for the proposed hardware. This testing validates the hardware requirements for the solution.

While conducting performance tests, capture data regarding the utilization of resources such as CPU, memory, disk I/O, and network bandwidth. This is important because if your testing environment is not at the same scale as the production load, bottlenecks in these resources may not be discovered until deployment. Resource utilization data, captured at various load levels, can be used to draw conclusions about resource requirements at production loads. The following resources can be used to conduct performance testing:
- Checklist: SQL Server Performance
  http://msdn.microsoft.com/library/default.asp?url=/library/en-us/dnpag/html/scalenetcheck08.asp
- Windows 2003 Performance and Scalability
  http://www.microsoft.com/windowsserver2003/evaluation/performance/perfscaling.mspx
- Testing tools
  http://msdn.microsoft.com/library/default.asp?url=/library/en-us/adminsql/ad_perfmon_1qgj.asp
- CPU Monitor
  http://www.sysinternals.com/ntw2k/freeware/cpumon.shtml
- Disk Monitor
  http://www.sysinternals.com/ntw2k/freeware/diskmon.shtml
- Network Monitor
  http://msdn.microsoft.com/library/default.asp?url=/library/en-us/netmon/netmon/about_network_monitor_2_0.asp
- File Monitor
  http://www.sysinternals.com/ntw2K/source/filemon.shtml

Resolving Performance Issues


Performance problems can occur for several reasons: application code, database implementation, hardware configuration, software configuration, or resource availability. The testing team should be able to solve configuration and resource problems themselves, while issues with the application and database should be logged and forwarded to the development team, along with any analysis and supporting evidence, for resolution. Additional resources may have to be procured to solve resource-related hardware issues. Performance tuning can be an ongoing series of refinements and improvements. A large amount of information regarding performance tuning is available. For more information, refer to the following resources:
- The Data Tier: An Approach to Database Optimization
  http://www.microsoft.com/resources/documentation/sql/2000/all/reskit/en-us/part10/c3361.mspx
- Improving SQL Server Performance (covers schemas, queries, indexes, transactions, stored procedures, execution plans, and tuning)
  http://msdn.microsoft.com/sql/default.aspx?pull=/library/en-us/dnpag/html/scalenetchapt14.asp
- How To: Optimize SQL Queries
  http://msdn.microsoft.com/sql/default.aspx?pull=/library/en-us/dnpag/html/scalenethowto04.asp
- How To: Optimize SQL Indexes
  http://msdn.microsoft.com/sql/default.aspx?pull=/library/en-us/dnpag/html/scalenethowto03.asp
- Microsoft SQL Server 2000 Index Defragmentation Best Practices
  http://www.microsoft.com/technet/prodtechnol/sql/2000/maintain/ss2kidbp.mspx
- Using Views with a View on Performance
  http://www.microsoft.com/resources/documentation/sql/2000/all/reskit/en-us/part10/c3661.mspx
- Microsoft Storage Solutions: The Right Storage and Productivity Solution
  http://www.microsoft.com/windowsserversystem/storage/solutions/rightsolution/rightsolution.mspx

Stress Testing
Stress testing is performed to determine the load at which performance becomes unacceptable or the system fails. This involves loading the system beyond the use for which it was designed and checking for issues. When performing stress tests, new bugs or issues may surface because of the high stress and load levels. At a minimum, stress testing should load the system as defined in the business goals. If the test environment is scaled down in relation to the production environment, the limitations of the testing environment should be considered when interpreting the results.

For example, with respect to applications, there may be a failure when the number of simultaneous connections hits 100. This could be caused by the limitation of some variable in the code segment related to connection handling. Such problems may be encountered only during stress testing because there are differences in resource consumption and in the low-level implementation of the same functions between the UNIX and Windows platforms.
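A stress harness for the connection example above can be sketched as follows. Everything here is illustrative: `fake_connect` stands in for a real connection call, and the limit of 100 is the hypothetical one from the example:

```python
# Sketch: ramp up connections until the system under test rejects one,
# recording the breaking point. All names are illustrative stand-ins.

def find_connection_limit(open_connection, max_attempts=1000):
    """Return how many connections were opened before the first failure."""
    opened = 0
    for _ in range(max_attempts):
        try:
            open_connection()
        except RuntimeError:
            break
        opened += 1
    return opened

# Stand-in that refuses connections once 100 are active:
state = {"count": 0}
def fake_connect():
    if state["count"] >= 100:
        raise RuntimeError("connection refused")
    state["count"] += 1

limit = find_connection_limit(fake_connect)
print(limit)  # prints 100
```

Against a real system, the breaking point and the accompanying error messages are the data that get logged with the bug report.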

Resolving Stress Issues


Stress testing should be performed only after all issues encountered during performance testing have been fixed. Bugs encountered during stress testing can stem from the application, the hardware configuration, or resource availability. Issues or bugs found in the application have to be sent to the development team with any pertinent information. If the issues relate directly to hardware or hardware resources, they can be solved through configuration or by adding resources. For example, by default SQL Server is not configured to take advantage of more than 2 GB of memory. If your hardware contains additional memory, this issue can be resolved by correctly configuring SQL Server to take advantage of the server's available memory. Microsoft offers two utilities, Read80Trace and OSTRESS, to assist in stress testing SQL Server. To learn more or to download these utilities, refer to http://support.microsoft.com/?kbid=887057.
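The memory example above reduces to simple arithmetic: decide how much physical memory to hand to SQL Server while leaving headroom for the operating system. The sketch below is illustrative only; the fixed reserve is an assumed figure, not official Microsoft guidance, and the actual setting would be applied by the DBA through SQL Server's configuration options:

```python
# Illustrative sizing helper: give SQL Server the physical memory that
# remains after a fixed operating-system reserve. The 1024 MB reserve
# is an assumption for illustration.

def suggested_sql_memory_mb(physical_mb, os_reserve_mb=1024):
    """Candidate memory allocation in MB, never below zero."""
    return max(physical_mb - os_reserve_mb, 0)

print(suggested_sql_memory_mb(8192))  # prints 7168
```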

Scalability Testing
Scalability testing is performed to determine whether the solution scales to handle an increasing workload. The workload may be increased size, as in Very Large Databases (VLDB), or increased activity, such as more transactions. Activity scalability is measured in, for example, the number of user connections, requests, or reports. The overall scalability also depends on the hardware and the application. How the application's throughput changes as the load increases is a measure of its scalability. The throughput may also be measured with increased resources along with the increased load.

Scalability testing, while similar to stress testing, provides additional information to assist in future solution expansion plans. It is conducted to record how well the migrated solution will scale, or increase throughput, as the user workload increases. It differs from stress testing in that scalability testing will generally load the solution far past the minimum load levels defined in the Planning Phase. If additional hardware is available, it is worthwhile to determine whether exceeding the limits of the current solution requires a simple addition of hardware or a complete redesign of the system. For detailed information on scalability, refer to the following resources:
- Scaling Out on SQL Server
  http://www.microsoft.com/resources/documentation/sql/2000/all/reskit/en-us/part10/c3861.mspx
- SQL Server Scalability FAQ
  http://www.microsoft.com/sql/techinfo/administration/2000/scalabilityfaq.asp
- Microsoft SQL Server 2000 Scalability Project: Server Consolidation
  http://msdn.microsoft.com/library/default.asp?url=/library/en-us/dnsql2k/html/sql_asphosting.asp
- Windows 2003 Performance and Scalability
  http://www.microsoft.com/windowsserver2003/evaluation/performance/perfscaling.mspx
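The throughput-versus-load relationship can be reduced to a single figure for reporting. The sketch below is illustrative; the measurement pairs are invented numbers:

```python
# Summarize scalability from (load, throughput) measurements: the ratio of
# throughput growth to load growth between the lightest and heaviest runs.
# A value near 1.0 means throughput keeps pace with load.

def scaling_efficiency(measurements):
    """measurements: list of (load, throughput) pairs, lightest load first."""
    (load0, tput0) = measurements[0]
    (load1, tput1) = measurements[-1]
    return (tput1 / tput0) / (load1 / load0)

# Doubling the load from 100 to 200 users raised throughput from 50 to 80
# transactions per second:
print(scaling_efficiency([(100, 50), (200, 80)]))  # prints 0.8
```

Efficiency well below 1.0 at higher loads is the signal to investigate whether added hardware or a redesign is needed.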

Resolving Scalability Issues


Issues of scaling should be documented as constraints of the system. If it is imperative that the entire system scale to a certain point based on business goals, but the solution does not meet these goals, then the resources (mostly hardware) have to be reevaluated by experts. In most cases, vendors can provide support and information in this area.

Operational Testing
Operational testing is required to ensure that day-to-day functionality and maintainability are developed and tested. This type of testing includes items such as:
- Backup routines
- Database maintenance tasks and schedules
- Documentation and processes developed for ongoing support
- Alert and monitoring processes
- Disaster recovery plans

If the solution is not going to be piloted, operational testing can be expanded to ensure that the operations team is comfortable with the processes and procedures needed to maintain the system.

Resolving Operational Issues


Issues during operational testing normally stem from incomplete documentation of the system requirements and configuration with respect to components that are found only in the production environment. In most cases, the issues will have to be handled by the operations staff, who may seek information and expertise from the project team.

Bug Tracking and Reporting


There are several important interim milestones in the iterative process of testing and refining the solution before release. The interim milestones guide the tracking and testing process. These milestones include:
- Bug convergence
- Zero bug bounce
- Release candidates
- Golden release

These milestones are discussed under the following headings.

Bug Convergence
Bug convergence is the point at which the team makes visible progress against the active bug count. It is the point at which the rate of bugs that are resolved exceeds the rate of bugs that are found. Figure 18.1 illustrates bug convergence.

Figure 18.1
Bug convergence graph

Because the bug rate will still vary even after it starts its overall decline, bug convergence usually manifests itself as a trend rather than a fixed point in time. After bug convergence, the number of active bugs should continue to decrease until the zero bug bounce.
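Because convergence is a trend rather than a single point, it can also be located programmatically from daily bug counts. The sketch below uses invented numbers and a deliberately strict definition (resolved bugs must outpace found bugs on every remaining day):

```python
# Sketch: find the first day from which resolved bugs outpace newly found
# bugs for the rest of the series. Counts are illustrative.

def convergence_day(found, resolved):
    """Return the first index where resolved >= found holds for every
    remaining day, or None if convergence has not yet occurred."""
    for day in range(len(found)):
        if all(r >= f for f, r in zip(found[day:], resolved[day:])):
            return day
    return None

found_per_day = [30, 25, 28, 18, 12, 9]
resolved_per_day = [10, 15, 20, 22, 20, 15]
print(convergence_day(found_per_day, resolved_per_day))  # prints 3
```

Plotting the two series, as in Figure 18.1, shows the same crossover visually.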

Zero Bug Bounce


Zero bug bounce (ZBB) is the point in the project when development resolves all the bugs raised by the Test role and there are no active bugs for the moment. Figure 18.2 illustrates ZBB.

Figure 18.2
Zero bug bounce

After ZBB, the bug peaks should become noticeably smaller and should continue to decrease until the product is stable enough to release. Careful bug prioritization is vital because every bug fix carries the risk of introducing a new bug or a regression. Achieving ZBB is a clear sign that the team is in the final stage as it progresses toward a stable product. Note New bugs will certainly be found after this milestone is reached, but it does mark the first time that the team can honestly report that there are no active bugs, even if only temporarily. This can help the team maintain focus on a zero-defect mindset.

Release Candidates
After zero bug bounce is first achieved, a series of release candidates is prepared for release to the pilot group. Each of these releases is marked as an interim milestone. The release candidates are made available to the pilot group so that they can test them. The users provide feedback to the project team, and the project team in turn continues to improve the product and resolve bugs that appear during the pilot. As each new release candidate is built, there should be fewer bugs to report, prioritize, and resolve. The pilot group is discussed in more detail in the "Piloting the Solution" section later in this chapter.

Golden Release
Golden release is the release of the product to production. Golden release is a milestone in the Stabilizing Phase that is identified by the combination of zero-defect and success criteria metrics. At golden release, the team must select the release candidate that they will release to production. The team uses the testing data that is measured against the zero-defect and success criteria metrics to make this selection.

User Acceptance Testing and Signoff


User acceptance testing (UAT) is an additional testing process to determine whether the solution meets the customer acceptance criteria. Because this is a migration project, the database and the existing solution have already passed these criteria; in most cases, only the solution's operating environment has changed. In migration projects, UAT should test whether the new solution produces the same results from use cases as the existing solution. Whenever possible, use acceptance tests from the existing solution as a base. UAT can also affect the database: part of testing should ensure that the SQL Server database can be accessed by the client applications.

User signoff is obtained when the users agree that the solution meets the needs of the end user. The user signoff is proof that the solution meets the user acceptance criteria in relation to their business requirements. It indicates that the solution conforms to the requirements of the end user (functionality) and the enterprise (performance), and that the solution is ready to be deployed into production.

Piloting the Solution


A pilot program is a test of the solution in the production environment, and a trial of the solution by installers, systems support staff, and end users. The primary purposes of a pilot are to demonstrate that the design works in the production environment as expected and that it meets the organization's business requirements. Pilot deployments are often characterized by a reduced but key feature set of the system or a smaller end-user group. The pilot is the last major step before a full-scale deployment.

Before the pilot, all testing must be completed. The pilot provides the opportunity to integrate and test pieces of the production environment that do not have any equivalent in the test environment. It also provides an opportunity for users to give feedback about how the solution works. This feedback must be used to resolve any issues or to create a contingency plan, and it can help the team determine the level of support likely to be needed after full deployment. Some of the feedback can also contribute to the next version of the application.

Note The success of the pilot contributes heavily to the deployment schedule. Issues discovered during the pilot can delay deployment until the problems are resolved.

Every migration situation is unique, and some scenarios may not require a pilot program. The pilot helps minimize the risks involved with the Deploying Phase. For instance, if the migration involves a Perl application that is ported to run natively on the Windows platform, the differences within the application could be minimal and, depending on other mitigating factors, a decision may be made not to pilot the solution. A pilot program is highly recommended in situations where any of the following apply:
- The deployment plan is highly complex, and the deployment team requires the experience of the pilot deployment.
- The solution is prominent or critical to the organization. If the rollout must go exactly as planned, a pilot will provide additional assurance.
- There is a large difference between the production and test environments.
- There are elements in the production environment that cannot be adequately verified in the test environment.

Preparing for the Pilot


A pilot deployment needs to be rehearsed to minimize the risk of disruption for the pilot group. At this stage, the development team is performing last-minute checks and ensuring that nothing has changed since pre-production testing. The following tasks need to be completed before starting the pilot:
- The development team and the pilot participants must clearly agree on the success criteria for the pilot. In a migration project, the main success criterion is that the new solution effectively replaces the existing solution.
- A support structure and issue resolution process must be in place. This process may require that the support staff be trained. The procedures used for resolution during a pilot can vary significantly from those used during deployment and in full production.
- To identify any issues and confirm that the deployment process will work, it is necessary to implement a trial run or rehearsal of all the elements of the deployment.
- It is necessary to obtain customer approval of the pilot plan. Work on the pilot plan starts early during the Planning Phase so that the communication channels are in place and the participants are prepared by the time the test team is ready to deploy the pilot.
- Ensure that the plan effectively mirrors the deployment process. For instance, if the migration solution is scheduled to be deployed in phases, the entire process should be replicated for the pilot.

Note It is important to remember that a pilot program tests and validates the deployment process as well as the solution.

A pilot plan should include the following:
- Scope and objectives
- Participating users, locations, and contact information
- Training plan for pilot users
- Support plan for the pilot
- Known risks and contingency plans
- Rollback plan
- Schedule for deploying and conducting the pilot

Conducting the Pilot


Conducting the pilot involves deploying the applications and databases that have been chosen to be part of the pilot. The golden release of the solution is used for pilot testing with an audience of actual users in real-world scenarios. When the pilot is conducted in a production environment, care has to be taken to ensure that the existing applications and databases are not jeopardized. Hence, adequate support has to be provided while conducting the pilot to monitor and fix any issues that arise. Conducting a pilot also includes testing the accuracy of supporting documentation, training, and other non-code elements, such as cutover and fallback procedures. Any changes made to these documents during the pilot have to be noted and the documentation updated accordingly.

Note Ultimately, the pilot leads to a decision to either proceed with a full deployment or to delay deployment so that issues can be resolved.

Evaluating the Pilot


At the end of the pilot, its success is evaluated to determine whether deployment should begin. The project team then needs to decide whether to continue the project beyond the pilot. It is important to obtain information about both the design process and the deployment process. Review what worked and what did not work so that it is possible to revise and refine the plan before deployment. Examples of information to be gathered include:
- Training required for using the solution
- Rollout process
- Support required for the solution
- Communications
- Problems encountered
- Suggestions for improvements

The feedback is used to validate that the delivered design meets the design specification and the business requirements. After the data is evaluated, the team must make a decision. The team can select one of the following strategies:
- Stagger forward. Prepare another release candidate and release it to the original pilot group, then to additional groups. The release to more than one group might have been part of the original plan or might have been a contingency triggered by an unacceptable first pilot.
- Roll back. Return the pilot group to their pre-pilot state.
- Suspend the pilot. Put the solution on hold or cancel it.
- Patch and continue. Fix the build that the pilot is running and continue.
- Proceed to the Deploying Phase. Move forward to deploy the pilot build to the full live production environment.

Finalizing the Release


The Stabilizing Phase culminates in the Release Readiness Approved Milestone. This milestone occurs when the team has addressed all outstanding issues and has released the solution and made it available for full deployment. This milestone is the opportunity for customers and users, operations and support personnel, and key project stakeholders to evaluate the solution and identify any remaining issues they need to address before beginning the transition to deployment and, ultimately, release. After all of the stabilization tasks are complete, the team must formally agree that the project has reached the milestone of release readiness. As the team progresses from the release milestone to the next phase of deploying, responsibility for on-going management and support of the solution officially transfers from the project team to the operations and support teams. By agreeing, team members signify that they are satisfied with the work that is performed in their areas of responsibility.

Project teams usually mark the completion of a milestone with a formal sign-off. Key stakeholders, typically representatives of each team role and any important customer representatives who are not on the project team, signal their approval of the milestone by signing or initialing a document stating that the milestone is complete. The sign-off document becomes a project deliverable and is archived for future reference.

19
Deploying Phase
Introduction and Goals
After envisioning, planning, developing, and stabilizing, the solution is ready to be deployed. The Deploying Phase implements the tested solution in the production environment. The migration project's Release Management team works with the operations team to successfully deploy and stabilize the solution. At the close of the Deploying Phase, customer approval for the migration is obtained, and the solution is transferred to the operations team. It is also possible that the operations team handles the deployment with the aid of the project team. Irrespective of which team does the deployment, the key goal of the Deploying Phase is to migrate the new solution into the production environment as smoothly as possible, with the least amount of disruption to the business environment and the end users.

The deployment rollout should be conducted as a series of activities. Though you can abbreviate or combine some of these activities, you cannot skip any of them completely without increasing risk to the project's success. Nearly all of these activities are performed jointly by the project team and the operations team. Coordination between the two teams is now a critical success factor, and all decisions should be agreed upon by both teams to minimize risk. Both teams are involved with completing the tasks and producing the deliverables shown in Table 19.1.

Table 19.1: Major Tasks and Deliverables for the Deploying Phase

Tasks                        Deliverables          Responsible Roles
Deploy the solution          Deployment Checklist  Release Management
Stabilize the deployment     Deployment Checklist  Release Management
Obtain the solution signoff  Customer Sign Off     Product Management
Transfer ownership           Project Documents     Program Management

Deploying the Solution


At the close of the Stabilizing Phase, the database and applications (server side and client side) are fully tested and are ready for deployment. The release versions of the source code, executables, scripts, and other installation information are complete and delivered to the deployment team.


While deployment is often discussed and planned as a single activity, there are actually three separate components and technologies that must be deployed into the production environment. These solution components are:
- Database server
- Server-side application(s)
- Client-side application(s)

Each of these components is discussed in this section. Each of these topics is divided into two subtopics: process and technology. The process subtopics provide an overview of the intricacies involved with the deployment of each of the solution components. The technology subtopics include information on software tools that can assist with the deployments of the solution components.

Deploying the Database Server


The database should be deployed first because the server side and client side applications are dependent on it. The process and technology required for setting up a database server and implementing databases are discussed under the following headings.

Process
During the Planning Phase of the migration, all hardware needs are evaluated and any required new equipment is acquired. During the Deploying Phase, the server must be set up and configured for the Windows Server 2003 environment. The following procedure describes the steps for deploying the database and its server:

1. The server should be built and delivered by your hardware vendor based on specifications defined during the Planning Phase. If the server will be built from existing equipment, ensure that the server is available and has the required capacity to host the solution.
2. Install any necessary hardware. Commonly, items such as additional memory, interface cards, additional storage, tape drives, or FDDI or gigabit switches will need to be installed to meet operational specifications.
3. Install the operating system, including any necessary service packs and patches. This step can be performed manually or by using imaging software. This topic is discussed in greater detail in the technology portion of this section.
4. Configure any additional hardware that was installed. For instance, if network interface cards (NICs) are installed, they must be configured to match the settings of the network switch they will connect to.
5. Install any additional applications needed for the server, such as monitoring applications, backup software, asset management software, and software delivery applications.
6. Select the deployment mechanism for the database. Some possible options include:
   - Automated processes, such as installation and configuration scripts, wherever possible.
   - Manual processes for some installation and configuration. These must have detailed, step-by-step instructions.
   - Microsoft SQL Server installation and configuration scripts.
   - Database administration tools and utilities.

7. Create the database deployment package by putting together the following:
   - Database administration scripts and a scripting language such as Perl.
   - User database migration scripts.
   - Database administration tasks and packages, such as a maintenance plan, jobs, and backup and restore procedures.
   - Windows domain user account configuration scripts.
   - Scripts for configuring ODBC data sources and ADO and OLE DB connection strings.
   - Scripts for configuring COM+ components for database connections.


8. Create rollback scripts to undo or remove changes introduced by the deployment. If a parallel cutover strategy is being employed, the scripts should disable the replication. Not every aspect of the fallback can be scripted; a checklist of tasks with detailed instructions should be prepared.
9. Add the deployment package to the configuration management database (CMDB) and maintain change control of the release. Where a CMDB does not exist, a simplified one should be created for release tracking. The purpose of updating the CMDB is to ensure that accurate knowledge is stored regarding all of the components or configuration items that make up the releases.
10. Execute the deployment package for the databases being migrated as per the phase-out strategy.
11. Execute the scripts to create the database and its objects. The serial cutover strategy will dictate the order in which the databases are migrated.
12. If a straight cutover strategy is used, shut down the Oracle databases. If a parallel cutover is used, the databases need not be shut down; however, a quiet period is required to enable the replication and ensure that the replication mechanism is working properly. Start migrating the data.
13. Monitor the data migration for errors. If the migration has to be aborted, the rollback scripts and other fallback steps should be performed for each of the databases affected by the failure (if the databases are related and have to be migrated together).

Note: If you must deviate from these scripts in the production environment, make sure to alter the scripts to reflect the change and enter a note in the script or documentation regarding what was altered, why, and whether you noticed any issues afterward.
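The rollback preparation in step 8 can be kept in sync with the deployment package from step 7 by pairing every deployment script with the script that undoes it. The following sketch illustrates the idea; all file names are hypothetical placeholders for the actual DDL, data migration, and configuration scripts:

```python
# Pair every deployment script with the script that undoes it, so an
# aborted migration (step 13) can be unwound in reverse order.
# All file names are hypothetical placeholders for the artifacts
# assembled in step 7.
DEPLOYMENT_STEPS = [
    ("create_database.sql",       "drop_database.sql"),
    ("create_schema_objects.sql", "drop_schema_objects.sql"),
    ("migrate_data.pl",           "truncate_migrated_data.sql"),
    ("configure_dsn.cmd",         "remove_dsn.cmd"),
]

def rollback_plan(completed_steps):
    """Return the rollback scripts for the steps already run, last first."""
    return [undo for _deploy, undo in reversed(completed_steps)]

# If the migration aborts after the first three steps have run:
for script in rollback_plan(DEPLOYMENT_STEPS[:3]):
    print(script)
```

Running the rollback scripts in reverse order of deployment mirrors the dependency order in which the objects were created, which is why the manifest records the pairs rather than two independent lists.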

Technology
Setting up and configuring servers can be time-consuming, especially if additional applications are needed for operations, monitoring, reporting, or other purposes. Based on the needs of the deployment, there are two ways to deploy the server: manual deployment and automated deployment using imaging tools. Imaging tools allow for applications, operating systems, configurations, or drivers to be installed in one package, instead of as individual installations. However, imaging can only be useful when setting up identical servers using identical hardware, as in a clustered set.

Manual Deployment
If a manual deployment is a consideration, a few guidelines should be noted. Often, hardware vendors will provide customized installation CDs that contain specific drivers and software needed for the configuration. Using these installation discs can reduce the time needed for installation.


For the detailed manual installation procedure and configuration of Windows 2000 Professional, Windows XP, Windows 2000 Advanced Server, and Windows Server 2003 Enterprise Edition, refer to the help documentation provided with the Windows operating system CD.

Note: Manual installations of the operating system can be partially automated using an answer file. This file provides the input that Setup needs to perform the installation and reduces the amount of user interaction needed during the installation process. For details about creating an answer file and the options available in one, refer to http://www.microsoft.com/resources/documentation/windows/2000/server/reskit/en-us/deploy/dghn_ans_wten.asp.

Another tool that can assist with manual deployments is Microsoft Remote Installation Services (RIS). RIS enables you to perform system installations across a network using the Active Directory service. For more information on RIS, refer to http://www.microsoft.com/windows2000/techinfo/planning/management/remotesteps.asp.

The RDBMS should be installed manually using the Microsoft SQL Server software distribution CD or a network installation tool such as RIS. For more information on the installation process for SQL Server 2000, refer to http://msdn.microsoft.com/library/default.asp?url=/library/en-us/instsql/in_overview_0v3l.asp.

Creation of the databases and database objects can be performed using Data Definition Language (DDL) scripts that have been generated from the test environment. These objects can also be recreated manually using the Enterprise Manager GUI, but in most instances the large number of objects that need to be created makes this effort too time-consuming. To create a database using the CREATE DATABASE command, refer to http://msdn.microsoft.com/library/default.asp?url=/library/en-us/tsqlref/ts_create_1up1.asp. To create a database using the Create Database Wizard in Enterprise Manager, refer to http://msdn.microsoft.com/library/default.asp?url=/library/en-us/howtosql/ht_7_design_4g8p.asp.

DDL scripts can be run using Query Analyzer, a utility installed as part of SQL Server; third-party SQL Server tools can also be used. For more information on Query Analyzer, refer to Appendix B.
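Because the number of objects usually rules out manual creation, the DDL is typically generated and kept under change control rather than typed by hand. As a hedged illustration, a small generator can stamp out CREATE DATABASE statements in SQL Server 2000 syntax; the database name, file paths, and sizes below are placeholders, not recommendations:

```python
def create_database_ddl(name, data_file, log_file, size_mb=100, log_size_mb=25):
    """Return a CREATE DATABASE statement in SQL Server 2000 syntax.

    The file locations and sizes here are placeholders; the real values
    come from the capacity plan produced during the Planning Phase.
    """
    return (
        f"CREATE DATABASE [{name}]\n"
        f"ON PRIMARY\n"
        f"  (NAME = {name}_dat, FILENAME = '{data_file}',\n"
        f"   SIZE = {size_mb}MB, FILEGROWTH = 10%)\n"
        f"LOG ON\n"
        f"  (NAME = {name}_log, FILENAME = '{log_file}',\n"
        f"   SIZE = {log_size_mb}MB, FILEGROWTH = 10%)\n"
    )

# Example: generate the script for a hypothetical Sales database with
# data and log files on separate drives.
print(create_database_ddl("Sales",
                          r"D:\MSSQL\Data\Sales_dat.mdf",
                          r"E:\MSSQL\Logs\Sales_log.ldf"))
```

The generated script would then be executed through Query Analyzer or a command-line utility as part of the deployment package, so the same DDL that was verified in the test environment is replayed in production.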

Automated Deployment
Imaging is an alternate method of deploying the operating system and all applications and configurations simultaneously. Imaging captures a snapshot of the entire system, including the operating system, applications, desktop settings, and user preferences. Imaging tools should be a serious consideration for deployments that consist of multiple servers that require the same configuration. Creating images can greatly reduce the time needed to prepare servers for deployment and also ensure consistent configuration. To create an image, a master system must be manually configured with all necessary software. After the server has been properly configured, the image is created using imaging software. Finally, the image can be deployed to as many servers as required; each will be a clone of the master system. The major benefits of using imaging tools include:
- Deployments can be performed simultaneously.
- All systems can be running in a short time.
- All applications are bundled into the image.


The major drawback of imaging is that each type of hardware configuration requires a separate image. You can only install images on target computers that have a hardware abstraction layer (HAL) compatible with the reference computer. For instance, if a server contains a different graphics card, it would require a separate image. Many tools and utilities are available that can record and deploy images, including:
- Symantec Ghost (also called Norton Ghost) captures a complete system image for redistribution. For detailed information on using Symantec Ghost, refer to http://service1.symantec.com/SUPPORT/ghost.nsf/pfdocs/.
- Automated Deployment Services (ADS) is designed to deploy Microsoft Windows 2000 Server, Advanced Server, and Datacenter Server, and Windows Server 2003 (all 32-bit editions). Other operating systems are not currently supported. For detailed information on how to use ADS, refer to http://www.microsoft.com/windowsserver2003/technologies/management/ads/default.mspx.

Deploying the Server Side Applications


The steps for building the servers described in the previous section can also be used to prepare the servers for the server-side applications. Detailed guidance for setting up the server-side applications is discussed under the following headings.

Process
Though the server will be set up to meet the needs of your specific applications, additional steps may be needed to prepare the hardware for deployment. To deploy server-side applications, follow these steps:
1. Install the operating system and third-party components on the server as required by the application.
2. Configure the server-side application components.
3. Install the server-side components on the server.
4. Add all configuration information into the configuration management database (CMDB) and maintain change control of the release. If a CMDB does not exist, a simplified one should be created for release tracking. Updating the CMDB ensures that accurate knowledge is stored regarding all of the components or configuration items that make up the releases.
5. Deploy the applications in conjunction with the deployment strategy being employed for the databases.
6. After the database migration has been confirmed, perform configuration changes to the environment to activate the application.
7. If a fallback is required, shut down the application and restore the original configuration that points all users back to the original application.
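Steps 6 and 7 are simplest when the database server name is externalized into a configuration file, so that activation and fallback amount to a single setting change. The following is a minimal sketch assuming a hypothetical key-value configuration format; the server names are examples only:

```python
# Hypothetical sketch of the activation switch in steps 6-7: the
# server-side application reads its database server name from a
# key=value configuration file, so cutover and fallback are each a
# one-line configuration change.
def set_database_server(config_text, server_name):
    """Rewrite the DatabaseServer setting in a key=value config."""
    lines = []
    for line in config_text.splitlines():
        if line.startswith("DatabaseServer="):
            line = f"DatabaseServer={server_name}"
        lines.append(line)
    return "\n".join(lines)

config = "AppName=OrderEntry\nDatabaseServer=ORAUNIX01\n"
activated = set_database_server(config, "SQLPROD01")        # cutover
rolled_back = set_database_server(activated, "ORAUNIX01")   # fallback
print(activated)
```

Keeping both the activated and the original configuration under change control (step 4) means the fallback in step 7 is a restore of a known artifact rather than an improvised edit.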


Technology
The server-side application components can be deployed either manually or by using imaging, as discussed in the "Automated Deployment" section.

Manual Deployment
If manual deployment is chosen, each of the components of the server-side application should be individually installed. First, install the language environments (such as Perl and PHP) and any third-party utilities using the vendor's software distribution. Then install the server-side application using the golden version of the source code from the project team.

Automated Deployment
The deployment of the server-side applications can be automated by bundling them into the image. The technology is the same as that discussed for deploying the database.

Deploying the Client Application


After the server-side applications and databases have been deployed to production and tested, the client application should be released. Often, this stage of the deployment can be complex because of the number of users and components involved. This is usually the point in the deployment process where the end users are affected. In many cases, this is the first time that they are involved in the migration process. While this guidance focuses on the application and database aspects of the migration, keep in mind that there are many other items to consider. For instance, documentation, training, and help desk support also need to be in place at the time of this deployment. It is quite likely that the client computers are already in use. If not, build the hardware for the client computers and deploy the required software.

Process
As with the other stages of deployment, the client application can be deployed manually or through the use of automated tools. Consider the following steps when deploying the client application:
1. Create the deployment package, which can include the following:
   - The complete application, including the binary installation files and any automation files.
   - Client configuration and support files.
   - Windows domain user account configuration scripts.
   - Scripts for configuring ODBC data sources and ADO or OLE DB connection strings.
   - Scripts for configuring COM+ components for database connections.
   - Acceptance test scripts and packages.

2. Create rollback scripts to remove changes introduced by the deployment. Rollback scripts provide an alternate option if an uncorrectable issue arises during the server, database, or application deployments.
3. Add the deployment package to the configuration management database (CMDB) and maintain change control of the release. Where a CMDB does not exist, a simplified one should be created for release tracking.

4. When the database and application migration are completed successfully, deploy the client application. In addition to the database and application deployment strategy, a horizontal phase-out strategy may be employed for client applications.


5. If a fallback is required, then the application should be rolled back to the previous one and the original configuration that points all clients back to the original application should be restored. If the fallback is due to an application issue, then a horizontal phase-out strategy reduces the rollback effort.

Technology
When deploying the client application, additional tools may be required to efficiently manage the deployment. Some common tools used for packaging applications and utilizing distribution channels include:
- MSI (or Windows Installer) is a Microsoft solution for application packages. This installer provides additional functionality and integration with the Windows environment, including elevated privileges for installation and deployment across an Active Directory. To download Windows Installer or learn more about it, refer to http://www.microsoft.com/downloads/details.aspx?FamilyID=5fbc5470-b259-4733-a914-a956122e08e8&DisplayLang=en.
- Visual Studio Installer creates setups based on Microsoft Windows Installer technology in the Visual Studio IDE. For detailed step-by-step instructions for Visual Studio Installer, refer to http://msdn.microsoft.com/library/default.asp?url=/library/en-us/vsinstal/html/vehowvisualstudioinstallerquickstart.asp.
- WinINSTALL LE creates packages by recording a snapshot of a system's environment before and after an application is installed, then recording all of the changes between snapshots to generate the installation package. For more information about WinINSTALL LE, refer to http://www.ondemandsoftware.com/FREELE2003/.
- Wise Package Studio is a tool for packaging and repackaging applications using Wise script or an MSI file. Scripts, checks, and conditions can be embedded into the package, which also helps in deploying or distributing it. For more information on Wise Package Studio, refer to http://www.wise.com/wps.asp?bhcp=1.
- InstallShield AdminStudio can also create a scripted package file or an MSI package. The repackaging methodology is the same as that of Wise Package Studio. For more information about InstallShield, refer to http://www.installshield.com/downloads/isas/evalguide/AdminStudio5_Evaluator_Guide.pdf.
- IntelliMirror uses a Windows Installer server to provide distributions to client computers. When an application is launched, IntelliMirror checks to ensure that the application is loaded and up to date; if not, the needed files are downloaded and installed on the system. Because IntelliMirror checks for updated versioning every time the application is launched, this software can be used to manage all future releases of the client application. For more details on IntelliMirror, refer to http://www.microsoft.com/resources/documentation/WindowsServ/2003/standard/proddocs/en-us/Default.asp?url=/resources/documentation/windowsserv/2003/standard/proddocs/en-us/sag_imirror_top_node.asp.
- Windows Installer automatically reads an MSI file and deploys the application packaged in it. The MSI file can be customized for the features, installation location, and so forth. Windows Installer can also perform a silent install of the MSI file when provided with that option. For an overview of Windows Installer, refer to http://www.microsoft.com/windows2000/en/advanced/help/default.asp?url=/windows2000/en/advanced/help/sag_WinInstall_Technology.htm?id=3991.
- Active Directory is a core feature of the Windows server operating system that not only handles security and privileges for the domain, but also has the capability to distribute MSI files. Group Policy settings can be configured in Active Directory to distribute the required MSI file, and distribution can be scheduled for off hours or weekends to avoid network congestion. For further information about Active Directory, refer to http://www.microsoft.com/windowsserver2003/technologies/directory/activedirectory/default.mspx.
- Systems Management Server (SMS) is a tool from Microsoft for distributing software packages. SMS has various features, such as packaging, distributing, and deploying applications, and monitoring. For step-by-step procedures for distributing a package using SMS, refer to http://www.microsoft.com/downloads/details.aspx?FamilyID=32f2bb4c-42f8-4b8d-844f-2553fd78049f&DisplayLang=en.
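If the client package is an MSI file, most of the distribution tools described here ultimately hand it to the Windows Installer executable, msiexec. The sketch below assembles an unattended msiexec command line; the /i, /qn, and /l*v switches are standard Windows Installer options, while the distribution share and log paths are hypothetical:

```python
# Sketch of assembling an unattended Windows Installer command line for
# the client package. The msiexec switches shown (/i install, /qn quiet
# no-UI, /l*v verbose log) are standard; the package and log paths are
# hypothetical examples.
def msiexec_command(msi_path, log_path, quiet=True):
    cmd = ["msiexec", "/i", msi_path]
    if quiet:
        cmd.append("/qn")          # silent install, no user interface
    cmd += ["/l*v", log_path]      # verbose log for troubleshooting
    return cmd

print(" ".join(msiexec_command(r"\\distsrv\packages\ClientApp.msi",
                               r"C:\Logs\ClientApp_install.log")))
```

A distribution tool such as SMS or a Group Policy software installation would run an equivalent command on each client; collecting the verbose logs centrally makes it much easier to stabilize the client rollout.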

Note The tools listed previously have not been tested in a lab environment as part of this solution.

Change Management
Change management is the process for controlling changes to a protected environment. In the Stabilizing Phase, changes are made to the test environment related to bug fixes and promoting the builds. Changes made to the solution in the production environment are handled more strictly because they can affect the end users of the application as well as other users of the production environment. Any proposed changes must go through the change approval process to ensure that the changes will not adversely affect the production environment. For detailed information on Change Management, refer to http://www.microsoft.com/technet/itsolutions/cits/mo/smf/smfchgmg.mspx.

Stabilizing the Deployment


After the server-side application components, database, and client-side application components are deployed, every aspect of the deployment must be validated to ensure that the migration has been performed successfully. Common items to check are included in the deployment checklist discussed in the following sections.

Deployment Checklist
The product manager and the operations team should ensure that the solution meets the business requirements and standards determined during the Envisioning Phase. The checklist developed during the Planning Phase should contain the acceptance criteria used to evaluate the performance of the solution. It should also provide a baseline for the customer to approve the solution. The following categories should be included in the checklist to test the solution before sign-off:
- Server
- Database
- Application (client and server side)
- Additional considerations


These categories are discussed under the following headings.

Server
The following are some server-related items that should be considered for the checklist:
- Have all relevant service packs been loaded on the production servers?
- Are the OS and installed service pack level well known and documented? You will need this information if the systems fail and manufacturer support is required.
- Have the settings from the performance tuning performed in the test environment during the Stabilizing Phase been transferred correctly? At the same time, ensure that any configurations unique to the testing environment have not been transferred to the production environment.
- Have backup power systems been fully tested to ensure proper operation?
- Are an appropriate number of replacement hard disks on hand in case of failure in an array?
- Are the default client connection network library settings established on the server?
- Are all database and application server installation settings documented (such as sort orders and default language)?
- Has communication between servers been checked to assure that DNS resolution and routing are functioning as expected?
- Are alerts defined for key problems and conditions? Who monitors the e-mail address to which these are sent? Are pagers used? Has the alert system been tested?
- Has the team removed guest accounts from the system and checked to ensure that this does not affect the application?
- Is the appropriate level of directory security set? Has security logging been implemented?
- Have all non-essential services and open ports on the server been identified and closed?
- Has the team tested production integration with third-party systems?
- Have any and all event log messages been identified and investigated?
- Are sufficient client access licenses and connections installed for the OS to handle the expected peak traffic loads? Is this being monitored to assure that it does not reach its limit?
- Are system configurations for all tiers backed up?
- Were scripts created to reset configurations in an emergency or to bring up a new server? Are they in a well-known location?
- Have server backups been tested? Are procedures in place for proper storage and retrieval?
- Have clustering software configuration and failover operations been tested?


Database
The following are some database-related items that should be considered for the checklist:
- Have all database creation scripts been executed properly and without errors?
- Were the database statistics updated before launch?
- Have the database backups been tested to ensure that backup mechanisms are operating properly?
- Have restoration procedures been tested on the solution server and on different hardware to test failover procedures?
- Are off-site backup procedures in place?
- Has database security been addressed and appropriate logon accounts created? Is the system administrator (sa) account password blank? Are applications using the sa account or other account(s)?
- Did the team define, test, schedule, and sign off on maintenance plans?
- Has a schedule been established for transaction log dumps that is sufficient to recover activity to the satisfaction of the business? Has it been tested?
- Are sufficient client access licenses installed for the database server? Are these being monitored to assure that they do not reach their limit?
- Has the latest build of the database from the development environment been migrated, installed, tested, and verified within the production environment? The user databases should be checked to ensure that all the data is transferred from the source solution to the target solution during deployment.
- Has the database been loaded in the production environment with clean data, and have initial inventory levels been set?
- Have feeds from other systems been verified under load to ensure that system availability is unaffected at the times they run?
- Is the database running on the default port of 1433? If so, can it be changed to another port to enhance the security of the application without affecting proper operation?
- Are non-essential SQL Server services running (such as MS Search or OLAP)? Stop them if they are not required.
- Are non-essential databases installed on the server? Remove databases such as pubs or Northwind, but be careful not to remove needed system databases such as model, tempdb, or master.
- Is the database correctly tuned, and does it have the proper memory usage settings applied?
- Is disk space sufficient for the expected size of the data 6 to 12 months ahead?
- Have clustering software configuration and failover operations been tested with the databases?
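On the port question: if the instance is moved off the default port 1433, SQL Server clients can be pointed at it with a "server,port" pair in the Data Source portion of the connection string. A sketch follows; the server name, database name, and port are examples only:

```python
# Sketch: building an OLE DB-style connection string for a SQL Server
# instance moved off the default port 1433. SQL Server clients accept a
# "server,port" pair in the Data Source; the names below are examples.
def connection_string(server, database, port=1433):
    data_source = server if port == 1433 else f"{server},{port}"
    return (f"Provider=SQLOLEDB;Data Source={data_source};"
            f"Initial Catalog={database};Integrated Security=SSPI;")

print(connection_string("PRODSQL01", "Sales", port=2433))
```

Centralizing this logic in the configuration scripts of the deployment package means a later port change touches one script rather than every client.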

Application (Client Side or Server Side)


The following list offers some client-side or server-side application-related items that should be considered for the checklist:
- Are the proper security settings for the applications set and documented?
- Has application logging been enabled or disabled, as required?
- Has the golden release of the applications and other components been given a release number, archived, and then deployed? Confirm that the golden release of the source code matches the version of the application that was released.
- Are all required environment settings for the applications set and documented?


- Are all third-party software or components required by the system installed and documented (including version numbers and vendor contact information)?
- Does the client application properly connect to the database? Perform cursory testing of the application that is not intrusive and does not change the production database. For example, running production reports is a good way to test database functionality without changing live data.
- Has connectivity between the middle-tier system and any back-end systems been tested under load? Has connection pooling been monitored to assure proper operation?
- Are the activation types for any COM+ applications set properly?
- Has the application been tested with the consoles logged on and logged off to assure the proper identity is used?
- Has the application integration been tested with third-party systems?
- Have clustering software configuration and failover operations been tested with the applications?

Additional Considerations
Here are some additional considerations that could apply to your environment. Some of the tasks listed here are related to operations; validating them during deployment ensures that operations is prepared to manage the solution and that SLAs can be met.
- Are scheduler jobs defined for common tasks? Jobs such as backups and health reporting are normally scheduled through a scheduler such as cron in UNIX; these may have been migrated over.
- Has a disaster recovery plan (DRP) been established that includes database procedures? Who is the DRP team leader? Is the leader's contact information well known, and has a contingency leader been appointed?
- Has the team enabled change-control procedures for the operations environment?
- Has the team finalized the operational processes and guidelines?
- Has the team developed, tested, and simulated disaster recovery procedures?
- Was an external connection, similar to what a customer will use, used when testing the application against the operations system?
- Was solution training delivered?
- Is the processing speed acceptable?
- Have the business objectives and requirements been met?
- Have operation tests been performed? Steps should be taken to make sure that the operations team can properly track the servers and services with their monitoring tools.

Quiet Period
The quiet period for the target environment begins after stabilizing the deployed solution and continues until the deployment is complete. It usually lasts 15 to 30 days. During this period, the operations team manages the deployed solution, and no significant changes should be made to it. However, a member of the project team may aid the operations team in managing and resolving problems that can affect the working of the deployed solution. This enables the organization to estimate the maintenance costs of the solution and prepare a budget. Any changes made to the solution during this period go through the Change Management process. During the quiet period, the efficiency of the solution can be ascertained by evaluating the following:
- The solution's stability
- The solution's downtime
- The maintenance required by the solution
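The downtime logged during the quiet period translates directly into an availability figure for these evaluations. A worked example with illustrative numbers:

```python
# Worked example with illustrative numbers: converting the downtime the
# operations team logs during a 30-day quiet period into an availability
# percentage for the stability evaluation.
QUIET_PERIOD_DAYS = 30
downtime_hours = 2.5                  # hypothetical logged downtime

total_hours = QUIET_PERIOD_DAYS * 24  # 720 hours in the quiet period
availability = 100.0 * (total_hours - downtime_hours) / total_hours
print(f"Availability during quiet period: {availability:.2f}%")
```

Comparing this figure against the availability targets in the SLA gives an objective check on whether the deployed solution is stable enough to close out the deployment.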

Transferring Ownership to Operations


The final transition occurs when the project team completes its tasks and transfers the infrastructure, documentation, and operations, along with ownership of the solution, to the operations team. The operations team has to understand the functioning of the solution in order to manage it, and the solution's documentation should supply the information required to do so. The operations team will require the following documents to handle the solution after migration:
- User manual. The user manual details the procedures for working with the solution. It also details the procedures for installation and solution maintenance.
- Hardware specifications. This document describes the hardware used in the production environment.
- Software specifications. This document covers the different applications, such as third-party software used by the solution itself or used to stabilize the deployment. The software specifications also detail the different configurations and settings applied to the solution during or after installation and deployment.
- Support policies and procedures. This document details the updated business policies and procedures that need to be followed after the migration is complete.

Project Team Tasks


The project team will examine the deployment to ensure that all areas of the deployment are successfully completed and functioning as required. A final run of the tests that previously produced discrepancies can be used to check that they have been fixed. The project team ensures that:
- All the procedures of the migration have been followed.
- Backups are performed as required.
- All security measures (hardware and software) are in place and operate without issues.
- All errors and bugs are fixed.
- Adequate training is provided to operations to manage the solution.
- The different user accounts have been created and checked for their functionality.

Solution Guide for Migrating Oracle on UNIX to SQL Server on Windows

325

Operations Team Tasks


The operations team will check the solution to ensure that it conforms to the business objectives. The operations team will:
- Conduct a final run-through of the solution to check its performance and stability.
- Ensure that the project team has completed all the tasks needed to complete the deployment.
- Check that the tools to perform routine tasks are set up.
- Review the documentation.
- Decide the frequency for performing routine tasks.

Conducting Project Review and Closure


After the deployment of the solution is complete, the team conducts a review meeting to assess the migration project. This review covers all the phases of the migration project, including the Envisioning, Planning, Developing, Stabilizing, and Deploying Phases. By taking the time to discuss the entire migration project, important lessons can be learned and applied to future endeavors. The review will also highlight the positive actions taken to successfully migrate the solution, as well as the less positive decisions or actions that delayed or hindered the migration. The project review can also compare the estimated outcome of the migration project with the actual targets achieved. The customer sign-off signifies the end of the Deploying Phase. The solution's key stakeholders review the migrated solution and documentation and confirm that the needs of the project have been met. After this sign-off has been received, the project team can be disengaged. The exact terms of this sign-off will depend on the migration requirements for the project.


20
Operations
Introduction and Goals
After the solution has been deployed, the operations staff inherits ownership of the solution and is required to manage and maintain it. The operations team has been prepared for this task by its involvement in the Deploying and Stabilizing Phases and by reviewing the documentation prepared by the project team. Even if the Windows environment existed before the migration, this project will have a great impact on operational tasks and responsibilities. This chapter provides additional links to guidance on how to operate the production infrastructure containing Microsoft SQL Server and Windows technologies.

Operational Framework
With the penetration of technology into every aspect of modern-day business, success is based not only on the technology but also on the people and processes that control the technology. The migration project has only contributed the technology, not the operational aspects of the deployed solution. Microsoft Operations Framework (MOF) is a collection of best practices, principles, and models. It provides comprehensive technical guidance for achieving mission-critical production system reliability, availability, supportability, and manageability for solutions and services built on Microsoft products and technologies. The following links provide information about the operational framework:
- Microsoft Solutions for Infrastructure and Management (MSIM) provides in-depth technical procedural guidance on the management and operation of Windows-based servers and computer data centers. More information is available at http://www.microsoft.com/technet/itsolutions/cits/mo/default.mspx.
- A brief overview of the MOF process model and concepts is provided in "Appendix A" of the UNIX Migration Project Guide (UMPG): http://go.microsoft.com/fwlink/?LinkId=19832.
- A complete discussion of MOF, its Service Management Functions (SMFs), and all the resources that support the framework is available at http://www.microsoft.com/technet/itsolutions/techguide/mof/mofpm.mspx.


Windows Environment Operations


This section provides information about various aspects of a Windows Server 2003 deployment in an enterprise environment.

System Administration
- For common administrative tasks, refer to http://www.microsoft.com/technet/treeview/default.asp?url=/technet/prodtechnol/windowsserver2003/proddocs/entserver/ctasks_topnode.asp.
- For tools, refer to http://www.microsoft.com/windowsserver2003/downloads/tools/default.mspx.
- The Active Directory Operations Guide is available at http://www.microsoft.com/downloads/details.aspx?familyid=84dfe61e-fb7b-4673-89b8-55bcc801b431&displaylang=en.
- A technical overview of network load balancing is available at http://www.microsoft.com/technet/prodtechnol/acs/reskit/acrkappb.mspx.

Security Administration
- The Windows Server 2003 Security Guide is available at http://www.microsoft.com/technet/security/prodtech/win2003/w2003hg/sgch00.mspx.
- A source containing a wide range of security topics and technologies is available at http://www.microsoft.com/security/guidance/default.mspx.
- For best practices for implementing a Microsoft Windows Server 2003 Public Key Infrastructure, refer to http://www.microsoft.com/technet/prodtechnol/windowsserver2003/technologies/security/ws3pkibp.mspx.
- Information on account passwords and policies is available at http://www.microsoft.com/technet/prodtechnol/windowsserver2003/technologies/security/bpactlck.mspx.
- Information on certificate templates is available at http://www.microsoft.com/technet/prodtechnol/windowsserver2003/technologies/security/ws03crtm.mspx.
- The Security Services FAQ is available at http://www.microsoft.com/resources/documentation/WindowsServ/2003/standard/proddocs/en-us/Default.asp?url=/resources/documentation/windowsserv/2003/standard/proddocs/en-us/safer_howto.asp.
- Security checklists for securing Windows Server 2003 are available at http://www.microsoft.com/security/guidance/checklists/default.mspx.


Monitoring
- Information on the tools available for monitoring is available at http://www.microsoft.com/resources/documentation/WindowsServ/2003/enterprise/proddocs/en-us/Default.asp?url=/resources/documentation/WindowsServ/2003/enterprise/proddocs/en-us/tools_monitoring_status.asp.
- For monitoring of server performance, refer to http://www.microsoft.com/technet/treeview/default.asp?url=/technet/prodtechnol/windowsserver2003/proddocs/entserver/ctasks019.asp.
- For monitoring of security-related events, refer to http://www.microsoft.com/technet/treeview/default.asp?url=/technet/prodtechnol/windowsserver2003/proddocs/entserver/ctasks018.asp.

Additional Links
- Information on Windows Server 2003 performance and scalability is available at http://www.microsoft.com/windowsserver2003/evaluation/performance/perfscaling.mspx.
- Information on account management for Windows Server 2003 is available at http://www.microsoft.com/business/reducecosts/efficiency/manageability/account.mspx.
- Best practices for managing Windows Server 2003 are available at http://www.microsoft.com/technet/community/events/windows2003srv/tnt1106.mspx.

SQL Server Environment Operations


This section provides information on various aspects of a SQL Server 2000 deployment that can be used to ensure optimal performance in an enterprise environment. Performance is also critical to scalability.

Administration
- For detailed information on SQL Server administration, refer to http://msdn.microsoft.com/library/default.asp?url=/library/en-us/adminsql/ad_adminovw_7f3m.asp.
- Information about administration subjects is available at http://www.microsoft.com/sql/techinfo/administration/2000/default.asp.
- For detailed discussions of topics such as indexes, statistics, automation, and memory management, refer to SQL Server 2000 Operations Guide: System Administration at http://www.microsoft.com/technet/prodtechnol/sql/2000/maintain/sqlops4.mspx.
- Information about backup and recovery is available at http://msdn.microsoft.com/library/default.asp?url=/library/en-us/adminsql/ad_bkprst_9zcj.asp.
- A discussion of recovery models is available at http://msdn.microsoft.com/library/default.asp?url=/library/en-us/dnsqlmag2k/html/dbRecovery.asp.
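To make the backup options in these references concrete for an Oracle DBA, the following T-SQL sketch shows a full database backup followed by a transaction log backup, roughly playing the role of an RMAN full backup plus archived redo. The database name and the backup file paths are illustrative assumptions only.

```sql
-- Full backup of a hypothetical user database (name and paths are examples)
BACKUP DATABASE Sales
    TO DISK = 'E:\Backups\Sales_full.bak'
    WITH INIT, NAME = 'Sales full backup'

-- Subsequent transaction log backup; requires the Full or Bulk-Logged
-- recovery model, and extends the log chain for point-in-time recovery
BACKUP LOG Sales
    TO DISK = 'E:\Backups\Sales_log.trn'
    WITH NOINIT, NAME = 'Sales log backup'
```

In practice such statements are usually wrapped in SQL Agent jobs so that full and log backups run on a schedule, as the operations links above describe.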


Security
For a detailed discussion of managing security, refer to http://msdn.microsoft.com/library/default.asp?url=/library/en-us/adminsql/ad_security_05bt.asp.

Monitoring
For monitoring of system performance and activity, refer to http://msdn.microsoft.com/library/default.asp?url=/library/en-us/adminsql/ad_mon_perf_00mr.asp.

Additional Links
The following links provide information on SQL Server 2000 environment operations:
- Information about security patch management for SQL Server is available at http://www.microsoft.com/business/reducecosts/efficiency/manageability/patch.mspx.
- The SQL Server 2000 Operations Guide is available at http://www.microsoft.com/technet/prodtechnol/sql/2000/maintain/sqlops0.mspx.

Also refer to Appendix B, "Getting the Best out of SQL Server 2000 and Windows," for links to additional information about performance, scalability, and high availability.


APPENDICES
Appendix A: SQL Server for Oracle Professionals
One of the assumptions of this guide has been that both Oracle and Microsoft SQL Server database administration experience is available in the project team. The success of the migration depends on how well the requirements of the current environment are translated into the SQL Server environment. The knowledge and involvement of the custodians of the current environment, the Oracle DBAs, is very important. Using separate sets of Oracle and SQL Server DBAs on the migration has the disadvantages of possible communication problems and cost. Hence, training the Oracle DBAs in SQL Server serves three purposes: performing the migration, retaining DBAs with valuable knowledge of the business and the databases, and preparing them to manage the new SQL Server environment. The purpose of this appendix is to provide a primer for Oracle DBAs on the workings of SQL Server and its administration. The transition from Oracle to SQL Server is eased by the several similarities that exist between the two RDBMSs. Some of the key commonalities include:
- Relational engine. Both Oracle and SQL Server use an optimizer to generate an optimal execution plan from alternative solutions using statistics and access paths. The execution plan can be influenced by optimizer hints.
- Process architecture. Both Oracle and SQL Server have specialized processes for user connections (shared) and dedicated database functions. SQL Server uses threads and provides CPU affinity, features that are found in Oracle on Microsoft Windows.
- Memory architecture. In both Oracle and SQL Server, memory is broken up into buffers or caches with separate memory areas for SQL, procedural SQL, the data dictionary, and sessions. Database buffers or caches are manipulated in terms of pages/blocks. Both have similar buffer replacement policies (Least Recently Used).
- Storage architecture. In both Oracle and SQL Server, the physical database is structured as data files, system files, transaction logs, and control files. Logical structures that complement the physical structures are hierarchical in nature.
- Backup options. Oracle and SQL Server provide various options for backing up databases, such as online backups, full and partial backups, and transaction log backups.
- Recovery model. Both Oracle and SQL Server use transaction logs (redo) and rollback. Recovery is possible using single file backups, transaction logs, and so on.
- Tools. Both Oracle and SQL Server employ an Enterprise Manager and SQL client tools.

Architecture
An understanding of SQL Server architecture and how it compares and contrasts with Oracle is fundamental in shaping the migration as well as extracting the optimal performance out of the SQL Server platform.


Oracle and SQL Server are very similar in their architecture and internal workings. However, the same terms have different meanings in the two environments. For example, in Oracle the term instance refers to the memory and processes that support an Oracle database. In SQL Server, however, the term instance encompasses the memory, the processes, and also the user databases. Irrespective of the terminology used, SQL Server uses memory and process components in a manner similar to Oracle. In this discussion, SQL Server is presented to the Oracle DBA using an Oracle-like view.

Database and Instance


A database, by definition, is the repository for data and metadata (the data dictionary). This definition is universal in nature. In Oracle, the term database is used to refer specifically to the files that store the database's data and metadata. The term instance is used for the memory structures (the System Global Area is the main component) and processes that are required to access and perform work against the database. In SQL Server, the term instance is used collectively for the data files, memory, and processes. Figure A.1 illustrates the similarities between instances of Oracle and SQL Server with respect to memory and processes. Only the important components of an instance (the SGA and the processes) are covered in the figure. Details of the SQL Server architecture can be found at http://msdn.microsoft.com/SQL/sqlarchitecture/default.aspx.

Figure A.1
Oracle and SQL Server 2000: high-level comparison of instance and database, showing architectural similarities

Apart from the defined meaning of the terms instance and database in Oracle and SQL Server, the terms are used very loosely to mean an occurrence of the database. The phrase multiple instances or multiple databases running on a single database server is a typical usage of this terminology. Database administrators should be able to infer the meaning from the context.


What can be confusing with SQL Server is the presence of several system and user-created databases within a single SQL Server installation. Hence the term database system has been coined here to mean an occurrence of Oracle or SQL Server.

Multiple Database Systems (Instances)

In Oracle, multiple database systems can be created on a single server, and the creation is independent of the software installation (using the CREATE DATABASE SQL command). The same is not true with SQL Server. The initial database system (default or named) is created as part of the software installation, and the software distribution is also needed to create additional database systems. For information on working with named and multiple instances of SQL Server 2000, refer to http://msdn.microsoft.com/library/default.asp?url=/library/en-us/instsql/in_runsetup_2xmb.asp. The following components are shared between all of the instances running on the same computer:
- There is only one SQL Server 2000 program group (Microsoft SQL Server) on the computer, and only one copy of the utility represented by each icon in the program group. There is only one copy of SQL Server Books Online. The versions of the utilities in the program group are from the first version of SQL Server installed on the computer.
- There is only one copy of the MSSearchService that manages full-text searches against all of the instances of SQL Server on the computer.
- There is only one copy each of the English Query and Microsoft SQL Server 2000 Analysis Services servers.
- The registry keys associated with the client software are not duplicated between instances.
- There is only one copy of the SQL Server development libraries (include and .lib files) and sample applications.

Extending out to the client tier, the Oracle clients and SQL Server clients connect to databases in similar, albeit proprietary, protocols: Transparent Network Substrate (TNS) for Oracle and Tabular Data Stream (TDS) for SQL Server. Figure A.2 compares the user-instance-database interaction paths.


Figure A.2
Oracle and SQL Server 2000: user-instance-database interaction compared

Unlike Oracle, SQL Server does not store client configuration information in an operating system file. SQL Server uses registry keys to store such information. By default, SQL Server is configured to listen on the TCP/IP network protocol, which should suffice for most installations migrating from Oracle. The server network utility (part of the server installation) can be used to configure SQL Server to listen on named pipes, multiprotocol, NWLink IPX/SPX, Banyan VINES, and AppleTalk protocols. On the client side, the client network utility (part of the client installation) can be used to set up alias names for SQL Servers. The alias names can be mapped to an IP address or a named pipe address. The naming varies based on the type of installation. Clients can connect to:
- A default instance by specifying the servername.
- A named instance by specifying servername\instancename.

The proper instance name must be used when defining Data Source Names (DSNs) and in the connection strings of ODBC, ADO, DBI::DBD, Enterprise Manager, Query Analyzer, and the isql utility.


Database Architecture
A database has one or more physical data files that contain all the database data. This fact has not changed from the early days of data repositories, such as Sequential Access Method (SAM), Indexed Sequential Access Method (ISAM), and Virtual Storage Access Method (VSAM), to the modern-day relational database management systems (RDBMSs). Though hardware throughput has improved in the past several years, the improvement in data access rates is not purely hardware-related, but is also due in part to the evolution of database storage architectures.

Physical Storage Architecture


The physical architecture is made up of files that contain the system (or catalog) data and the application data. As with Oracle, SQL Server also has support for raw devices. The physical architecture is used to provide separation of data based on its type: metadata from user data, heap data from index data, user data from DBMS data (including transaction logs), and permanent data from temporary data. The Oracle physical architecture is made up of data files, redo log files, and control files. SQL Server database systems have four system databases (master, model, tempdb, and msdb) and one or more user databases. Each of these databases has one or more files. Each database has its own transaction log files, which are separate from the data files.
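An Oracle DBA accustomed to querying V$DATAFILE and V$LOGFILE can inspect this file layout with two system stored procedures; a minimal sketch follows (the database name Sales is an example, not something assumed to exist):

```sql
-- List all databases on the server with their sizes and primary files
-- (a rough analog of querying V$DATABASE across the server)
EXEC sp_helpdb

-- List the data and log files of one database, with filegroup, size,
-- and growth settings (a rough analog of V$DATAFILE and V$LOGFILE)
USE Sales
EXEC sp_helpfile
```

The output of sp_helpfile makes the separation between data files and transaction log files described above directly visible.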

Logical Storage Architecture


For the convenience of administration and efficiency of use, the physical storage is broken down into smaller logical structures. By dividing each physical data file into several logical structures, and allocating space to each database object in increments of these smaller logical structures, access to the database objects can be insulated from the physical file storage on the operating system. SQL Server has a storage hierarchy that is similar to Oracle's block-extent-segment-tablespace implementation. This enables smaller chunks of data to be loaded into memory for faster data access, and also makes it possible to move the physical location of a data file in the file system transparently to the database objects or the applications that access them. Figure A.3 shows the hierarchy of storage structures available in Oracle and SQL Server.

Figure A.3
Physical and Logical storage structure hierarchies in Oracle and SQL Server


In SQL Server, the term page is used instead of block. The data files are formatted into pages of the same size (8 KB). The unit of transfer for data between storage and database memory is a block or page. The composition of the SQL Server page is similar to that of the Oracle block: it is made up of a page header, data rows, and a row offset array (row directory). Free and used space is also tracked and managed similarly. SQL Server does not allow rows larger than 8060 bytes. This restriction, however, does not apply to rows containing large data types such as text, image, and so on, which can be stored separately. Although the data is stored in pages, the page is too small a unit for allocation to database objects. A bigger unit called an extent, which corresponds to a specific number of contiguous pages, is used for this purpose. SQL Server only supports fixed-size extents of 64 KB (8 pages). For more details on the two types of extents and how they are used, refer to the Pages and Extents article at http://msdn.microsoft.com/library/default.asp?url=/library/en-us/architec/8_ar_da2_4iur.asp. Also refer to the Managing Extent Allocations and Free Space article at http://msdn.microsoft.com/library/default.asp?url=/library/en-us/architec/8_ar_da2_4lgl.asp. The SQL Server equivalent of the Oracle tablespace is called the filegroup. Each SQL Server database is created with a primary file belonging to the default primary filegroup. Optionally, secondary data files can be added to the primary filegroup, or additional filegroups can be created. Files and filegroups in SQL Server are implemented along the same lines as datafiles and tablespaces in Oracle.
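The tablespace-to-filegroup correspondence can be made concrete with a short sketch. The database name, filegroup name, file paths, and sizes below are illustrative assumptions, not recommendations:

```sql
-- Create a database with the mandatory primary filegroup plus a user
-- filegroup, much as an Oracle DBA would create a SYSTEM tablespace
-- and a separate tablespace for application data
CREATE DATABASE Sales
ON PRIMARY
    (NAME = Sales_sys,  FILENAME = 'E:\Data\Sales_sys.mdf',  SIZE = 100MB),
FILEGROUP AppData
    (NAME = Sales_app1, FILENAME = 'F:\Data\Sales_app1.ndf', SIZE = 500MB)
LOG ON
    (NAME = Sales_log,  FILENAME = 'G:\Log\Sales_log.ldf',   SIZE = 200MB)
GO

-- Place a table on the user filegroup, as the TABLESPACE clause
-- of CREATE TABLE would in Oracle
USE Sales
CREATE TABLE dbo.Orders (OrderID int NOT NULL) ON AppData
```

Separating application objects into their own filegroup also enables filegroup-level backups, similar in spirit to tablespace-level backups in Oracle.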

Instance Architecture
This section covers the two components that make up an Oracle instance, memory and processes, and their SQL Server equivalents.

Memory Architecture
The design of the database memory architecture in both Oracle and SQL Server is based on the same objective: to acquire memory from the system and make it available to the RDBMS to perform its work. Because the available memory is a very small percentage of the database size, the configuration of memory is very important to the performance of the database system. Memory performance has to be optimized not only for application data, but also for the data dictionary and the needs of the relational engine. For SQL this includes procedures, execution plans, cursors, temporary objects, and sorting.


The SQL Server memory address space is illustrated in Figure A.4:

Figure A.4
SQL Server Address Space

A 32-bit process is normally limited to addressing 2 GB of memory, or 3 GB if the system was booted using the /3GB switch in boot.ini, even if more physical memory is available. However, both Oracle and SQL Server can use the Microsoft Windows 2000 Address Windowing Extensions (AWE) API to support up to a maximum of 64 GB of physical memory. The specific amount of memory is dependent on the hardware and operating system version. The awe enabled server configuration parameter is available in SQL Server for this purpose. The Microsoft Knowledge Base article 274750, "How to configure memory for more than 2 GB in SQL Server," is available at http://support.microsoft.com/default.aspx?scid=kb;en-us;274750. It provides information on the maximum amount of memory that various Microsoft Windows operating system versions can support and how to configure memory options. An overview of the internals of the memory management facilities of SQL Server 2000 is available at http://msdn.microsoft.com/sql/default.aspx?pull=/library/en-us/dnsqldev/html/sqldev_01262004.asp. The SQL Server 2000 memory manager uses different algorithms to manage memory on different versions of Windows. While memory allocation to the RDBMS is rigidly controlled in Oracle by the configuration (initialization) parameters, the sizes of the components in the SQL Server 2000 address space are auto-tuned dynamically in cooperation between the RDBMS and the operating system. All memory areas within the memory pool are also dynamically adjusted by the SQL Server code to optimize performance and do not need any administrator input. The memory pool is the SQL Server equivalent of the SGA. The composition of the memory pool can be found at http://msdn.microsoft.com/library/default.asp?url=/library/en-us/architec/8_ar_sa_1zu4.asp.
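Enabling AWE and bounding the memory pool is done through sp_configure, roughly where an Oracle DBA would adjust SGA initialization parameters. The sketch below assumes a server dedicated to SQL Server with 8 GB of RAM; the values are illustrative only:

```sql
-- 'awe enabled' is an advanced option, so expose advanced options first
EXEC sp_configure 'show advanced options', 1
RECONFIGURE

-- Allow SQL Server to map physical memory above the 32-bit limit via AWE
EXEC sp_configure 'awe enabled', 1

-- With AWE on, memory is not released dynamically, so cap it explicitly
-- (value is in MB; 7168 MB leaves headroom for the OS on an 8 GB box)
EXEC sp_configure 'max server memory', 7168
RECONFIGURE
```

A restart of the SQL Server service is required before the awe enabled setting takes effect.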


Process/Thread Architecture
The process architecture identifies the various database-related processes and their functionality. As is true with Oracle on Windows, SQL Server also uses a thread-based architecture. SQL Server does not differ significantly from Oracle in its use of pools of processes (threads) for system functions and user requests. Table A.1 compares SQL Server functionality with respect to Oracle background processes.

Table A.1: Mapping of Oracle and SQL Server Background Processes

Oracle Process (Required Status)*    Oracle Identifier   Min/Max    SQL Server Process (Required Status)*   Min/Max
Process Monitor (M)                  PMON                1/1        Open Data Services (M)                  1/1
System Monitor (M)                   SMON                1/1        Database cleanup / shrinking (M)        1/1
Database Writers (M)                 DBWn                1/20       Lazywriter (M)                          1/1
Checkpoint Process (M)               CKPT                1/1        Database checkpoint (M)                 1/1
Recoverer (O)                        RECO                0/1        MS DTC (O)                              0/1
Log Writer (M)                       LGWR                1/1        Logwriter (M)                           1/1
Archive Processes (O)                ARCn                0/10       N/A                                     N/A
Job Queue Processes (O)              Jnnn                0/1000     SQL Agent (O)                           0/1
Job Queue Coordinators (O)           CJQn                0/1        SQL Agent (O)                           0/1
Queue Monitor Processes (O)          QMNn                0/10       SQL Agent (O)                           0/1
Parallel Query Slave Processes (O)   Pnnn                0/3600     Worker threads (M)                      32/32767
Dispatcher (O)                       Dnnn                0/5        Network thread (M)                      1/1
Shared Servers (O)                   Snnn                0/OS       Worker threads (M)                      32/32767

* (M) = mandatory process, (O) = optional process

SQL Server employs a sophisticated shared server architecture. On startup, SQL Server creates a User Mode Scheduler (UMS) object for each processor, using the affinity mask setting. A pool of worker threads is created by Open Data Services (ODS) to handle user commands, and their control is distributed among the UMS schedulers. This architecture mimics the shared server-dispatcher concept in Oracle. While in Oracle the shared server processes are scheduled by the operating system, SQL Server uses the UMS to schedule worker threads. The internal workings of the User Mode Scheduler are described at http://msdn.microsoft.com/SQL/sqlarchitecture/default.aspx?pull=/library/en-us/dnsqldev/html/sqldev_02252004.asp?frame=true. Additional references on topics related to process architecture include:
- Server Memory Options (Administering SQL Server): http://msdn.microsoft.com/library/default.asp?url=/library/en-us/adminsql/ad_config_9zfy.asp.
- SQL Server Memory Usage: http://support.microsoft.com/default.aspx?scid=kb;en-us;321363.
- 64-bit Overview: http://www.microsoft.com/sql/64bit/productinfo/overview.asp.
- I/O Affinity: http://support.microsoft.com/default.aspx?scid=kb;[LN];298402.
- Allocating threads to a CPU: http://msdn.microsoft.com/library/default.asp?url=/library/en-us/architec/8_ar_sa_8agl.asp.

Relational Engine Architecture


The relational engine is the part of the RDBMS that is responsible for parsing, optimizing, and executing the SQL statements received from end users and returning results (also known as fetching) to the end users. Figure A.5 illustrates the components of the SQL Server relational engine.

Figure A.5
Components of SQL Server Relational Engine

Figure A.5 illustrates the main components of the relational engine portion of SQL Server. The illustrated components can be organized into three groupings of subsystems: compilation, execution, and the SQL Manager. The parser, T-SQL compiler, normalizer, and query optimizer components belong to the compilation subsystem, which processes the SQL statements. These statements typically come in as TDS messages. The SQL Manager, in the middle of the figure, forms the second subsystem, which controls the flow of everything inside SQL Server. Remote Procedure Call (RPC) messages are handled directly by the SQL Manager. T-SQL execution, query execution, and the expression service form the execution subsystem. The query results come out of the expression service and are sent back out by ODS, after formatting the results into TDS messages. The expression services library performs data conversion, predicate evaluation or filtering, and arithmetic calculations. The catalog services component handles data definition language (DDL) statements. The UMS is a scheduler internal to SQL Server that handles the threads and fibers. The system stored procedures are self-evident. For a more detailed discussion of the relational engine, refer to http://msdn.microsoft.com/library/default.asp?url=/library/en-us/dnsql7/html/sqlquerproc.asp. The concepts of execution plan, cost-based optimization, and hints are common to Oracle and SQL Server. A white paper on the query optimizer and statistics is available at http://msdn.microsoft.com/library/default.asp?url=/library/en-us/dnsql2k/html/statquery.asp.
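Where an Oracle DBA would use EXPLAIN PLAN and query PLAN_TABLE, the plan chosen by the SQL Server query optimizer can be inspected from Query Analyzer as sketched below (the table and column names are examples):

```sql
-- Display the estimated execution plan without running the statement
SET SHOWPLAN_TEXT ON
GO
SELECT OrderID FROM dbo.Orders WHERE OrderID = 42
GO
SET SHOWPLAN_TEXT OFF
GO

-- Alternatively, execute the statement and return actual per-operator
-- row counts alongside the plan:
-- SET STATISTICS PROFILE ON
```

Query Analyzer can also render the same plan graphically, which is often the easier starting point when tuning migrated SQL.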


Transaction Architecture
Users and applications interact with the database using transactions. Both Oracle and SQL Server offer both optimistic and pessimistic concurrency control. Pessimistic locking is the default for SQL Server. Microsoft SQL Server supports all four levels of isolation: read uncommitted, read committed, repeatable read, and serializable. Read committed is the default isolation level for SQL Server. Like most RDBMSs, Oracle and SQL Server achieve isolation by controlling concurrent access to shared resources (such as schema objects), their subcomponents (such as data rows), and internal database structures using locks. Transactions acquire locks at different levels of granularity. In Oracle, the granularities are row and table, while in SQL Server the granularities are row (RID), key (row lock within an index), page, extent, table, and database. A major difference between Oracle and SQL Server in their use of locks is that Oracle does not escalate locks, while SQL Server escalates locks to reduce the system overhead of maintaining a large number of finer-grained locks. SQL Server automatically escalates row locks and page locks into table locks when a transaction exceeds its escalation threshold. Table A.2 compares the types of locking available in Oracle and SQL Server.

Table A.2: Comparison of the Modes of Locking in Oracle and SQL Server

Oracle                      SQL Server                            Purpose
Share (S)                   Shared (S)                            Used for operations that do not change data, such as SELECT statements
Row Share (RS)              Update (U)                            Used on resources that can be updated, such as SELECT FOR UPDATE in Oracle and SELECT statements with the UPDLOCK lock hint in SQL Server
Row Exclusive (RX)          Exclusive (X)                         Used for data modification operations, such as INSERT, UPDATE, DELETE
Share Row Exclusive (SRX)   N/A                                   Only allows non-update S and RS locks
Exclusive (X)               Exclusive (X)                         Disables all other updates
N/A                         Intent Shared (IS)                    Indicates intention to read some resources lower in the hierarchy
N/A                         Intent Exclusive (IX)                 Indicates intention to modify some resources lower in the hierarchy
N/A                         Shared with Intent Exclusive (SIX)    Indicates intention to read all resources at a lower level and modify some of them using IX locks
Exclusive DDL               Schema Modification (Sch-M)           For performing DDL
Breakable Parse             Schema Stability (Sch-S)              For compiling queries
N/A                         Bulk Update (BU)                      For bulk copying data into a table

SQL Server Books Online has very useful information on lock modes, lock hints, lock compatibility, and deadlocks. More information is available at http://msdn.microsoft.com/library/default.asp?url=/library/en-us/architec/8_ar_sa2_2sit.asp.
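The isolation levels and the UPDLOCK hint map directly onto T-SQL syntax. The following sketch, using an assumed table name, shows the SQL Server counterpart of Oracle's SELECT ... FOR UPDATE pattern:

```sql
-- Raise the session isolation level (read committed is the default)
SET TRANSACTION ISOLATION LEVEL REPEATABLE READ

BEGIN TRANSACTION
    -- UPDLOCK takes an update (U) lock on the selected rows, much as
    -- SELECT ... FOR UPDATE takes row locks in Oracle, so the rows
    -- cannot be modified by another transaction before the UPDATE
    SELECT OrderID
    FROM dbo.Orders WITH (UPDLOCK)
    WHERE OrderID = 42

    UPDATE dbo.Orders
    SET OrderID = OrderID   -- placeholder modification for illustration
    WHERE OrderID = 42
COMMIT TRANSACTION
```

Note that, unlike Oracle, SQL Server may escalate these row locks to a table lock if the transaction touches enough rows to cross the escalation threshold.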


Security Architecture
The database has security mechanisms such as logins, privileges, and roles to provide control over privileges to connect to the database, access schema objects, and manipulate their structure and data. Both Oracle and SQL Server utilize a layered approach to security, from logins to roles to system (statement) and object privileges.

Logins
Both Oracle and SQL Server provide logins for authorized users to connect to the database. In Oracle, the login is called user or username, and in SQL Server, it is called login identifier or simply login. Any operation the user can perform is controlled by the privileges granted to the login.

Authentication
Both Oracle and SQL Server allow authentication by the operating system or by the database (server). In SQL Server, the operating system mode is called Windows Authentication Mode and the database mode is called SQL Server Authentication Mode. SQL Server can operate either in Windows Authentication Mode alone or in a mixed mode that permits both Windows and SQL Server authentication.
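Creating logins under each authentication mode looks like the following in SQL Server 2000; the login names, password, domain, and database are placeholders for illustration:

```sql
-- SQL Server authenticated login (name and password stored by SQL Server),
-- comparable to CREATE USER ... IDENTIFIED BY in Oracle
EXEC sp_addlogin 'app_user', 'StrongP@ssw0rd', 'Sales'

-- Windows authenticated login; the password policy is enforced by Windows
EXEC sp_grantlogin 'CONTOSO\jsmith'

-- Map the login to a user in a particular database, somewhat like
-- creating the schema user in Oracle
USE Sales
EXEC sp_grantdbaccess 'app_user'
```

The separation between a server-level login and a database-level user has no direct Oracle equivalent and often surprises Oracle DBAs at first.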

Passwords
Password features such as complexity rules, aging, and lockout, which exist for Oracle logins, are available only for Windows-authenticated logins, not for SQL Server-authenticated logins.

Privileges
Oracle and SQL Server have a similar model for securing schema objects and application data, as well as system objects and metadata, from unauthorized users. This is achieved through two sets of privileges: system (statement) privileges (called permissions in SQL Server) and object privileges (permissions). Privileges can be assigned to users and roles using the GRANT statement and removed using the REVOKE statement. Roles, which grant privileges to users indirectly, are discussed next. For more information on the privileges available in SQL Server and their management, refer to http://msdn.microsoft.com/library/default.asp?url=/library/en-us/adminsql/ad_security_94dv.asp.
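A minimal sketch of both kinds of permission in Transact-SQL; the user and table names are hypothetical:

```sql
-- Statement (system) permission: allows the grantee to create tables.
GRANT CREATE TABLE TO AppUser;

-- Object permissions: allow the grantee to read and insert into one table.
GRANT SELECT, INSERT ON dbo.Orders TO AppUser;

-- Permissions are withdrawn with REVOKE.
REVOKE INSERT ON dbo.Orders FROM AppUser;
```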

Roles
Oracle provides predefined roles, the most familiar being CONNECT, RESOURCE, DBA, and so on. Similarly, SQL Server has several predefined roles with specific permissions. There are two types of predefined roles: fixed server roles and fixed database roles. Both Oracle and SQL Server also offer user-defined roles. For information on creating user-defined roles in SQL Server, refer to http://msdn.microsoft.com/library/default.asp?url=/library/en-us/adminsql/ad_security_6x5x.asp. "Microsoft SQL Server 2000 SP3 Security Features and Best Practices," available at http://www.microsoft.com/sql/techinfo/administration/2000/security/securityWP.asp, provides a detailed account of the SQL Server security model and best practices.
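In SQL Server 2000, a user-defined database role is created and populated with system stored procedures. A sketch with hypothetical role, table, and user names:

```sql
-- Create a database role, grant it a permission, and add a member.
EXEC sp_addrole 'report_readers';
GRANT SELECT ON dbo.Orders TO report_readers;
EXEC sp_addrolemember 'report_readers', 'AppUser';

-- Fixed roles are assigned similarly, for example:
-- EXEC sp_addsrvrolemember 'AppAdmin', 'sysadmin'   -- fixed server role
-- EXEC sp_addrolemember 'db_datareader', 'AppUser'  -- fixed database role
```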


Appendices

Data Dictionary
The data dictionary, referred to in SQL Server as the system catalog, is broken up into a system-level component in the master database and individual database-level components in each of the other databases. It has the following characteristics:
- Each database contains tables to maintain its own database objects (tables, indexes, constraints, users, privileges, replication definitions) and other system structures (filegroups, files).
- The centralized system catalog in the master database contains information that combines the roles of Oracle's control files and data dictionary: individual database names and primary file locations, server-level logins, system messages, configuration (initialization parameter) values, remote servers, linked servers, system procedures (comparable to the DBMS_ Oracle stored programs), and so on.

The features of the data dictionary that an Oracle DBA is familiar with (system tables, views, functions, and procedures) can also be found in SQL Server in the following forms:
System tables. These serve the same function as Oracle's data dictionary tables and should be used for information only. System tables in the master database store server-level system information. For example:
- master..syslogins: available login accounts
- master..sysdatabases: available databases

The following tables store database-level system information for the respective database:
- sysusers: available user accounts with privileges on the database
- sysobjects: available objects in the database
- sysindexes: available indexes in the database

For a complete listing of system tables and their use, refer to http://msdn.microsoft.com/library/default.asp?url=/library/en-us/tsqlref/ts_sys_00_690z.asp.
Information schema views. SQL Server offers information schema views, which are equivalent to the ALL_ views found in Oracle. Information is retrieved by querying the corresponding INFORMATION_SCHEMA.view_name; for example, the views visible to a user can be listed through INFORMATION_SCHEMA.VIEWS. Common information schema views include:
- information_schema.tables: available tables in a database
- information_schema.columns: available columns in a database
- information_schema.table_privileges: available privileges on tables in a database
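For example, the following query lists the base tables visible to the current user, much as ALL_TABLES does in Oracle:

```sql
-- Enumerate user tables through the ANSI-standard information schema views.
SELECT TABLE_SCHEMA, TABLE_NAME
FROM INFORMATION_SCHEMA.TABLES
WHERE TABLE_TYPE = 'BASE TABLE'
ORDER BY TABLE_SCHEMA, TABLE_NAME;
```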

The information schema views topic is discussed at http://msdn.microsoft.com/library/default.asp?url=/library/en-us/tsqlref/ts_iaiz_4pbn.asp.
System functions.


Equivalents for built-in Oracle SQL functions can be found in SQL Server under the System Functions heading. Some commonly used functions include:
- USER_NAME(id)
- GETDATE()
- SYSTEM_USER

For a list of all available SQL Server system functions, refer to http://msdn.microsoft.com/library/default.asp?url=/library/en-us/tsqlref/ts_fafz_79f7.asp.
System stored procedures. The system stored procedures aid the administrator in performing common functions by supplementing the DDL. They also provide information from system tables, prepackaged to save the administrator from writing his or her own queries and views. The system stored procedures can be considered the equivalent of Oracle's DBMS and UTL packages. These procedures are designed to be comprehensive and remove the burden of having to remember DDL syntax and system table names. For a complete listing of system stored procedures, refer to http://msdn.microsoft.com/library/default.asp?url=/library/en-us/tsqlref/ts_sp_00_519s.asp.
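A few illustrative calls; the object name passed to sp_help is hypothetical:

```sql
-- System functions (cf. SYSDATE and USER in Oracle).
SELECT GETDATE()   AS server_time,
       SYSTEM_USER AS login_name,
       USER_NAME() AS database_user;

-- System stored procedures.
EXEC sp_helpdb;             -- list databases on the instance
EXEC sp_help 'dbo.Orders';  -- describe an object (cf. DESCRIBE in SQL*Plus)
EXEC sp_who;                -- current sessions (cf. V$SESSION)
```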

Administration
This section provides brief introductions to topics such as export, import, backup, recovery, and monitoring.

Export/Import
In SQL Server, data can be imported and exported using the Data Transformation Services (DTS) tool, Transact-SQL statements (INSERT INTO and BULK INSERT), and Bulk Copy utility (bcp), which provides the same functionality as SQL*Loader.

Export
SQL Server does not have an equivalent of the Oracle Export utility, which moves data into a binary format file. Individual schema objects can instead be backed up to text-based flat files in any of several available file formats, or exported to any of several OLE DB destinations, and restored using tools and utilities. One of the following methods can be used to extract or spool data into flat files:
- osql.exe, a command-line tool similar to SQL*Plus in that users can run commands at a prompt
- bcp, the bulk copy utility
- The Data Transformation Services (DTS) tool

Import
SQL Server likewise does not have an equivalent of the Oracle Import utility but, as mentioned in the previous section, individual schema objects that have been exported to flat files in any of the several supported file formats, or to any OLE DB destination, can be imported into a database using one of the many tools and utilities. Three ways that data can be imported into SQL Server include:


- The BULK INSERT command, which acts as an interface to the bcp utility. The structure of the BULK INSERT command is similar to the structure of the control file used in SQL*Loader.
- bcp, the bulk copy utility.
- The Data Transformation Services (DTS) tool.
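As a sketch, the WITH clause of BULK INSERT carries the kind of information that would appear in a SQL*Loader control file. The table, file path, and delimiters here are hypothetical:

```sql
-- Load a pipe-delimited flat file into an existing table.
BULK INSERT dbo.Orders
FROM 'C:\load\orders.dat'
WITH (
    FIELDTERMINATOR = '|',    -- column delimiter
    ROWTERMINATOR   = '\n',   -- row delimiter
    TABLOCK                   -- table lock allows minimally logged loads
);
```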

The functionality and use of bcp, BULK INSERT, and DTS for moving data from Oracle into SQL Server are demonstrated in Chapter 8. DTS is also discussed in more detail in Appendix B, "Getting the Best out of SQL Server 2000 and Windows." Some additional references on these utilities are:
- The "DTS Package Development, Deployment and Performance" article, available at http://support.microsoft.com/default.aspx?scid=kb;en-us;242391&sd=tech.
- For information on the different switches available with BULK INSERT, refer to http://msdn.microsoft.com/library/default.asp?url=/library/en-us/tsqlref/ts_babz_4fec.asp.
- For tips on optimizing bulk copy performance, refer to http://www.databasejournal.com/features/mssql/article.php/3095511.

Backup
In Oracle, backup methods can be categorized at a high level as physical and logical backups. Comparable methods for backing up the database can be found in SQL Server. Table A.3 provides a comparison of the available methods:
Table A.3: Backup Methods in Oracle and SQL Server
Backup Method | Oracle | SQL Server
Logical | Export | bcp or DTS
Physical | Cold | Cold
Physical | Online | Full
Physical | Incremental | Transaction log or Differential
Physical | Archive log | Transaction log

Logical Backups
The goal of a logical backup is to be able to recover at the individual schema object level. Although Oracle's export and import utilities are designed for moving Oracle data, they can be used as a supplemental method of protecting data in an Oracle database. It is not recommended to use logical backups as the sole method of data backup. SQL Server does not support logical backups to proprietary binary format files. Individual schema objects, however, can be backed up to flat files in any of the several supported file formats and restored using tools such as the bcp utility and DTS tools.

Physical Backups
In Oracle, a physical backup involves making copies of database files, including datafiles, control files, and, if the database is in ARCHIVELOG mode, archived redo log files. The same is true in SQL Server, though a backup is viewed at the database level. Larger databases can use filegroup backups to back up sections of a database. The available physical backups are:
- Cold (offline) backups. A cold backup, or closed backup, is a backup of one or more database files taken while the database is closed and not available for user access. Even though the term cold backup is not mentioned in the documentation, the method can be applied for performing backups in SQL Server.
- Online backups. A backup is termed an online backup, or hot backup, if it is taken while the database is open and accessible to users. A SQL Server full backup backs up the complete database and includes transaction log entries. File and filegroup backups can be made using the BACKUP DATABASE statement or through Enterprise Manager -> Backup Database -> File and filegroup. Transaction logs can be backed up separately as well.
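For example, a full database backup followed by a transaction log backup might look like the following; the database name and device paths are hypothetical:

```sql
-- Full (online) backup of the whole database, overwriting the device.
BACKUP DATABASE Sales
TO DISK = 'E:\backup\sales_full.bak'
WITH INIT;

-- Separate backup of the transaction log.
BACKUP LOG Sales
TO DISK = 'E:\backup\sales_log.trn';
```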


Incremental Backups
Physical incremental backups capture only the changed blocks, thereby reducing the time and space needed for backups. Incremental backups are performed after an initial complete backup. In SQL Server, differential backups contain only the data that has changed since the last full backup. Differential database backups can be made using the BACKUP DATABASE statement or through Enterprise Manager -> Backup Database -> Database differential.
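For example (hypothetical database name and path):

```sql
-- Back up only the extents changed since the last full backup.
BACKUP DATABASE Sales
TO DISK = 'E:\backup\sales_diff.bak'
WITH DIFFERENTIAL;
```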

Recovery
The three recovery models offered by SQL Server (Full, Bulk-logged, and Simple) are discussed below.
- Full recovery model. Used when the data is critical and must be recoverable to the point of failure. All recovery options are available in this model. It is equivalent to Oracle's ARCHIVELOG mode (when the NOLOGGING option is not specified at any level), where all transactions are logged and logs are archived for full recoverability.
- Bulk-logged recovery model. The mid-level recovery model, intended for bulk operations such as bulk copy, SELECT INTO, and text processing. This model does not provide point-in-time recovery past the beginning of any bulk operation. It is similar to setting the NOLOGGING option at the tablespace or object level, or for individual commands, to avoid logging of bulk operations.
- Simple recovery model. Used when it is not important to completely recover the database, or when lost data can be recreated. This model has the lowest logging overhead and is equivalent to running the database in NOARCHIVELOG mode.
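In SQL Server 2000 the recovery model is set per database with ALTER DATABASE; a sketch with a hypothetical database name:

```sql
-- Choose a recovery model (FULL, BULK_LOGGED, or SIMPLE).
ALTER DATABASE Sales SET RECOVERY FULL;

-- Verify the current setting.
SELECT DATABASEPROPERTYEX('Sales', 'Recovery');
```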

For more information on the backup and recovery architecture of SQL Server, refer to http://msdn.microsoft.com/library/en-us/architec/8_ar_aa_9iw5.asp. A detailed account of the backup and recovery options and techniques, as well as guidance for performing these administrative tasks, is available at http://www.microsoft.com/technet/prodtechnol/sql/2000/maintain/sqlops4.mspx.

Monitoring
Monitoring should be performed for availability, errors, and performance.


Availability
Availability monitoring should cover the server(s), node(s), database services, and the database itself.
- Server. Monitoring of the server should also include the network access path from the application or client to the server. The most common method is a ping test.
- Database services. A common way to monitor Oracle databases is to check instance-specific processes such as SMON or PMON. Because SQL Server uses a thread architecture, only the main process (sqlservr.exe) can be monitored.
- Database. Even when the services are running, connecting to a database may fail due to errors. A second level of monitoring, involving connecting to the database and performing some simple tasks, should expose such errors.

The command line utility scm (Service Control Manager) can be used to check the health of a SQL Server instance. Scripts can also be executed using SQL Server command line utilities such as isql or osql.

Errors
In a database environment, errors, failures, or faults can occur in a number of areas, and they have to be monitored by viewing (or mining) the error logs. The server event logs and the database instance logs are a good source for errors and violations. The event log, which contains entries for all database server instances and other applications running on the server, can be accessed through the Microsoft Windows Event Viewer utility (Start -> Programs -> Administrative Tools -> Event Viewer). The events in the event log can be filtered by type, source, date range, and so on. SQL Server has predefined error codes and messages (see master..sysmessages for a complete list) that give the unique error number, severity level, error state number, and error message for all errors. The SQL Server error logs provide complete information on all events, auditing messages, errors, and so on that have occurred against an instance. The error logs can be viewed using a text editor or through Enterprise Manager (SQL Server Group -> Server Instance -> Management -> SQL Server Logs).
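For example, the predefined message for a given error number can be looked up directly (1205 is the deadlock victim error):

```sql
-- Look up a predefined error in the system catalog.
SELECT error, severity, description
FROM master..sysmessages
WHERE error = 1205;
```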

Performance
Performance Monitor and Task Manager are two of several tools that can be used to monitor resource usage at the server level as well as the SQL Server instance level. Various server and database resources can be monitored as follows:
- CPU. Task Manager: Performance; Performance Monitor: Processor
- Memory. Task Manager: Performance; Performance Monitor: Memory
- Process. Task Manager: Processes; Performance Monitor: Process
- Virtual memory. Task Manager: Performance; Performance Monitor: Paging File
- Network. Task Manager: Networking; Performance Monitor: Network Interface
- I/O. Task Manager: Processes; Performance Monitor: LogicalDisk and PhysicalDisk
- Storage. My Computer; Windows Explorer
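The SQL Server-specific counters can also be sampled from Transact-SQL through the master..sysperfinfo system table, which exposes the same data that Performance Monitor reads; for example:

```sql
-- Read a SQL Server Performance Monitor counter from within the server.
SELECT object_name, counter_name, cntr_value
FROM master..sysperfinfo
WHERE counter_name = 'Buffer cache hit ratio';
```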

Some references on monitoring SQL Server performance are:

- "Job to Monitor SQL Server 2000 Performance Activity" is available at http://support.microsoft.com/default.aspx?scid=kb;EN-US;283696.
- "How to monitor SQL Server 2000 blocking" is available at http://support.microsoft.com/default.aspx?scid=kb;en-us;271509&Product=sql2k.
- "How to View SQL Server 2000 Activity Data" is available at http://support.microsoft.com/default.aspx?scid=kb;en-us;283784&Product=sql2k.
- "How to View SQL Server 2000 Blocking Data" is available at http://support.microsoft.com/default.aspx?scid=kb;en-us;283725&Product=sql2k.
- "How To: View SQL Server 2000 Performance Data" is available at http://support.microsoft.com/default.aspx?scid=kb;en-us;283886&Product=sql2k.
- Tools and functions to automate administration are described at http://www.microsoft.com/technet/prodtechnol/sql/2000/maintain/sqlops4.mspx.



Appendix B: Getting the Best Out of SQL Server 2000 and Windows
This appendix provides references on various aspects of a SQL Server 2000 deployment that can be exploited to ensure optimal performance in an enterprise environment. Performance is also critical to scalability. Several of the links provided here have been referenced elsewhere in this guidance, but they are repeated here for your convenience.

Performance
The performance of a SQL Server installation is dependent on several factors covering database design, application design, query design, access methods (indexing schemes and views), and hardware and software resources. The following references cover the breadth of topics that are critical to getting the best performance out of SQL Server:
- "The Data Tier: An Approach to Database Optimization" is available at http://www.microsoft.com/resources/documentation/sql/2000/all/reskit/en-us/part10/c3361.mspx.
- "Improving SQL Server Performance" covers schemas, queries, indexes, transactions, stored procedures, execution plans, and tuning topics: http://msdn.microsoft.com/sql/default.aspx?pull=/library/en-us/dnpag/html/scalenetchapt14.asp.
- "How To: Optimize SQL Queries" is available at http://msdn.microsoft.com/sql/default.aspx?pull=/library/en-us/dnpag/html/scalenethowto04.asp.
- "How To: Optimize SQL Indexes" is available at http://msdn.microsoft.com/sql/default.aspx?pull=/library/en-us/dnpag/html/scalenethowto03.asp.
- "Microsoft SQL Server 2000 Index Defragmentation Best Practices" is available at http://www.microsoft.com/technet/prodtechnol/sql/2000/maintain/ss2kidbp.mspx.
- "Using Views with a View on Performance" is available at http://www.microsoft.com/resources/documentation/sql/2000/all/reskit/en-us/part10/c3661.mspx.
- "Checklist: SQL Server Performance" is available at http://msdn.microsoft.com/library/default.asp?url=/library/en-us/dnpag/html/scalenetcheck08.asp.
- SQL Server 2000 Operations Guide: System Administration has very detailed discussions of topics such as indexes, statistics, automation, and memory management: http://www.microsoft.com/technet/prodtechnol/sql/2000/maintain/sqlops4.mspx.
- "Microsoft Storage Solutions: The Right Storage and Productivity Solution" is available at http://www.microsoft.com/windowsserversystem/storage/solutions/rightsolution/rightsolution.mspx.
- "Windows 2003 Performance and Scalability" is available at http://www.microsoft.com/windowsserver2003/evaluation/performance/perfscaling.mspx.


Scalability
Scalability is the capability to handle the increased data volume of Very Large Databases (VLDBs) and increased activity. The size of a VLDB may be due to a few large tables, a large number of smaller tables, or a combination of both. Activity scalability is measured in the number of user connections, response time, throughput, and so on. Both Oracle and SQL Server continually add features and functionality directed at meeting these demands. While some of these features, such as clustering, replication, and parallelism, are highlighted in specification sheets and documentation, much of the scalability is built in at the lower levels of the architecture, such as the use of bitmaps instead of lists to represent storage, in-row versus out-of-row data storage, and so on. Overall scalability, however, depends not only on the RDBMS, but also on the hardware and the application. The following references answer questions about scaling out, scaling up, data partitioning, 64-bit architecture, storage technology, the operating system, and other related topics:
- "Scaling Out on SQL Server" is available at http://www.microsoft.com/resources/documentation/sql/2000/all/reskit/en-us/part10/c3861.mspx.
- "SQL Server Scalability FAQ" is available at http://www.microsoft.com/sql/techinfo/administration/2000/scalabilityfaq.asp.
- "Microsoft SQL Server 2000 Scalability Project Server Consolidation" is available at http://msdn.microsoft.com/library/default.asp?url=/library/en-us/dnsql2k/html/sql_asphosting.asp.

High Availability
Availability refers to the ability to connect to and perform actions against the database server, the database, and the data. The features in SQL Server that contribute to availability are discussed here. The "SQL Server 2000 High Availability Series" provides the most complete set of white papers on planning and deploying a highly available environment containing SQL Server. The series is available at http://www.microsoft.com/technet/prodtechnol/sql/2000/deploy/harag01.mspx. Also refer to Microsoft SQL Server 2000 High Availability (Microsoft Press, 2003). Apart from the hardware technologies, the three SQL Server technologies covered in this series are:
- Clustering
- Standby database or log shipping
- Replication

These topics are discussed under the following headings.

Clustering
Both Oracle and SQL Server offer high availability support through the use of clusters of servers. The two DBMSs depend on the underlying hardware and system software to provide cluster management to detect and manage failures. SQL Server can be run in an Active-Passive or Active-Active configuration using the Microsoft Cluster Services (MSCS).


Information on Windows clustering and SQL Server failover clustering is available at http://www.microsoft.com/technet/prodtechnol/sql/2000/maintain/failclus.mspx.

Standby Database or Log Shipping


Both Oracle and SQL Server offer a standby database capability, in which a copy (secondary) of the entire (primary) database is maintained in a separate location to provide recovery from server node and database failures as well as catastrophic disasters. Changes made to the primary are captured in redo or transaction logs, shipped to the secondary site, and applied to the standby database. The application of the logs to the standby database can be controlled to be near-synchronous, or to lag behind the primary to suit recovery needs. For more information about SQL Server log shipping, refer to:
- The overview of log shipping available at http://msdn.microsoft.com/library/default.asp?url=/library/en-us/adminsql/ad_1_server_8elj.asp.
- The FAQ available at http://support.microsoft.com/default.aspx?scid=kb;en-us;314515&sd=tech.
- "Microsoft SQL Server 2000 How to Setup Log Shipping," available at http://support.microsoft.com/default.aspx?scid=%2fsupport%2fsql%2fcontent%2f2000papers%2fLogShippingFinal.asp.

Replication
Replication is a set of technologies for copying and distributing data and database objects from one database to another and synchronizing the data between them for consistency. In both Oracle and SQL Server, replication is based on the master-slave technique (called publisher-subscriber in SQL Server). Replication is popularly used to provide high availability of shared data over a WAN. SQL Server offers the following forms of replication:
- Snapshot replication. A materialized view (indexed view in SQL Server) containing a snapshot of data at a particular point in time.
- Transactional replication. A progression from the snapshot, with changes sent to the subscriber at the transaction level. This enables data modifications made at the publisher to be propagated to the subscribers, and also enables subscribers to make occasional updates back to the publisher.
- Merge replication. Similar to the multimaster replication technique in Oracle; it allows several instances of an object to be available at several SQL Server sites.

For an overview, planning, tools, implementation, and other details of replication, refer to http://msdn.microsoft.com/library/default.asp?url=/library/en-us/replsql/replover_694n.asp.


Features and Tools


A brief overview of some of the tools that are very valuable in managing a SQL Server environment is provided under the following headings.

Performance Monitor
Performance Monitor is the most important tool for monitoring SQL Server performance. When SQL Server is installed on a server, it adds a set of measuring and monitoring counters for SQL Server instances. In addition to visually monitoring performance counters, the performance of SQL Server can be logged using counter logs; all counters available for monitoring are also available for logging, which is set up separately from monitoring. Similar to Oracle Enterprise Manager, where events can be set up to track space, resources, availability, performance, and faults, alerts can be set up to track SQL Server activity using Performance Monitor alerts and SQL Server Agent. The "Monitoring SQL Server Performance" chapter in SQL Server 2000 Administrator's Pocket Consultant (Stanek, 2000) is a good reference for using Performance Monitor. It is available at http://www.microsoft.com/technet/prodtechnol/sql/2000/books/c10ppcsq.mspx.

Profiler
The SQL Profiler utility can be used to monitor the performance of instance components, Transact-SQL statements, stored procedures, and auditing activity. The utility provides a robust set of functionality that lets you selectively include a variety of events, report information (data columns), and filtering capabilities. The article "How To: Use SQL Profiler" provides information on how to track long-running queries and heavily used stored procedures. It is available at http://msdn.microsoft.com/sql/default.aspx?pull=/library/en-us/dnpag/html/scalenethowto15.asp.

SQL Server Health and History Tool (SQLH2)


This tool collects data from a running instance of SQL Server, which can then be reviewed and reported on. It is especially useful for measuring service uptime and performance trends. For more information on SQLH2, refer to http://www.microsoft.com/downloads/details.aspx?FamilyID=eedd10d6-75f7-4763-86de-d2347b8b5f89&DisplayLang=en.

DTS
Data Transformation Services (DTS) offer import and export of data by providing a set of tools that let you extract, transform, and consolidate data from disparate sources into single or multiple destinations supported by DTS connectivity. By using DTS tools to graphically build DTS packages, or by programming a package with the DTS object model, you can create custom data movement solutions tailored to the specialized data transfer needs. For more information on DTS, refer to the "Data Transformation Services (DTS) in Microsoft SQL Server" white paper available at http://msdn.microsoft.com/SQL/sqlwarehouse/DTS/default.aspx.


Query Analyzer
SQL Query Analyzer is a graphical tool that can be used to execute queries directly against the database. It also includes functionality for debugging query performance problems, such as Show Execution Plan, Show Server Trace, Show Client Statistics, and the Index Tuning Wizard. An overview of Query Analyzer is available at http://msdn.microsoft.com/library/default.asp?url=/library/en-us/qryanlzr/qryanlzr_1zqq.asp. For an overview of some of the important features of Query Analyzer, refer to http://www.sql-server-performance.com/query_analyzer_tips.asp.

Index Tuning Wizard


The Index Tuning Wizard allows you to select and create an optimal set of indexes and statistics for a SQL Server database, based on input from Query Analyzer or a SQL Profiler log, without requiring an expert understanding of the structure of the database, the workload, or the internals of SQL Server. For information on the features and use of the Index Tuning Wizard, refer to "Index Tuning Wizard for Microsoft SQL Server 2000" at http://msdn.microsoft.com/library/default.asp?url=/library/en-us/dnsql2k/html/itwforsql.asp.


Appendix C: Baselining
This appendix is concerned with creating baselines, capturing statistics, and comparing and reporting results.

Creating Baselines
Baselines are a collection of facts and figures that reflect the state of a system. Most commonly, the term baseline is used with respect to performance; in the Stabilizing Phase, it refers to a snapshot of the entire system specification. Unlike a new development project, where there is no reference point, a migration project can capture the state of the current solution and use it as a baseline for comparison with the migrated solution being tested.

After every successful cycle of testing, it is recommended that all elements, such as scripts, applications, application configuration, database configuration, and databases, are backed up and baselined with appropriate tags for identification. Baselines from each cycle of testing can be compared with each other to see the progression toward a solution that can be packaged for production.

You should create a baseline during the Stabilizing Phase. These baseline values can be used later when troubleshooting performance problems and for proactive monitoring. Performance Monitor, also known as the System Monitor tool, can be used to monitor key system resources (CPU, memory, disk I/O, and network, as well as SQL Server counters) while you run the Stabilizing Phase tests. The following tables (A.4 through A.8) list some of the Performance Monitor counters that you can use for creating a baseline. These counters can also be used for general monitoring of the solution after deployment.

Table A.4: Memory Management Performance Monitor Counters
Counter | Description
\Memory\Available MBytes | The amount of physical memory, in megabytes, available to processes running on the computer. Lack of memory puts pressure on the CPU and disk I/O.
\Memory\Pages/sec | The rate at which pages are read from or written to disk to resolve hard page faults. Hard page faults occur when a process requires code or data that is not in its working set or elsewhere in physical memory and must be retrieved from disk. Some paging will usually be present because of the way the operating system works, but the optimal average Pages/sec value is close to 0.


Table A.5: Network Analysis Performance Monitor Counters
Counter | Description
\Network Interface (Network card)\Bytes Total/sec | The rate at which bytes are sent and received over each network adapter, including framing characters. The average value should be less than 50% of NIC capacity.
\Network Segment\% Net Utilization | The percentage of network bandwidth in use on a network segment. The Network Monitor Driver can be installed and used to monitor network utilization. The threshold value varies based on the network configuration.

Table A.6: CPU Monitoring Performance Monitor Counters


\Processor\% Processor Time
This counter is the primary indicator of processor activity, and it displays the average percentage of busy time observed during the sample interval. It is calculated by monitoring the time that the service is inactive and subtracting that value from 100%.

\Processor\% Privileged Time and \Processor\% User Time
% Privileged Time is the percentage of elapsed time that the process threads spent executing code in privileged mode. Average values above 10% indicate possible CPU pressure. % User Time is the percentage of elapsed time the processor spends in user mode. % Privileged Time should be about 15% less than the total % User Time.

\System\Context Switches/sec
This counter indicates the combined rate at which all the processors are switched from one thread to another. Pressure on memory can cause page faults, which can cause an increase in the Context Switches/sec value and a decrease in overall performance.

\System\Processor Queue Length
This counter indicates the number of threads in the processor queue. There is a single queue for processor time, even on computers with multiple processors. A sustained processor queue length of less than 10 threads per processor is normally acceptable, depending on the workload; above that, there are more threads ready to run than the current number of processors can service in an optimal way.
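The CPU thresholds above (privileged time over 10%, roughly 10 queued threads per processor) can be combined into a simple check. This Python sketch is illustrative; the helper name is hypothetical and the thresholds come from the table text, not from any monitoring tool:

```python
def cpu_pressure_flags(privileged_time_pct, queue_length, processor_count):
    """Evaluate sampled CPU counters against the thresholds quoted above."""
    flags = []
    if privileged_time_pct > 10:              # sustained % Privileged Time over 10%
        flags.append("possible CPU pressure")
    if queue_length / processor_count > 10:   # more than ~10 queued threads per CPU
        flags.append("processor queue backlog")
    return flags

print(cpu_pressure_flags(5, 8, 4))    # -> []
print(cpu_pressure_flags(25, 48, 4))  # -> both flags raised
```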


Table A.7: Disk I/O Monitoring Performance Monitor Counters


\PhysicalDisk\Avg. Disk Queue Length and \PhysicalDisk\Current Disk Queue Length
Avg. Disk Queue Length is the average number of requests (both read and write) that were queued for the selected disk during the sample interval. Current Disk Queue Length is the number of requests outstanding on the disk at the time the performance data is collected. These numbers should be less than two per usable physical disk in the RAID array, and may be higher on SAN systems.

\PhysicalDisk\Avg. Disk sec/Read and \PhysicalDisk\Avg. Disk sec/Write
Average time, in seconds, to read data from or write data to the disk.
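The per-disk queue guideline translates directly into arithmetic. A minimal sketch, assuming you know how many usable spindles sit behind the logical disk (the helper name is hypothetical):

```python
def disk_queue_ok(avg_queue_length, spindle_count, per_disk_limit=2):
    """True when the average queue stays under the guideline of fewer
    than two outstanding requests per usable physical disk."""
    return avg_queue_length < per_disk_limit * spindle_count

print(disk_queue_ok(6, 4))   # True: 6 queued requests against a 4-disk array
print(disk_queue_ok(10, 4))  # False: a sustained queue suggests an I/O bottleneck
```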

Table A.8: SQL Server Monitoring Performance Monitor Counters

\SQLServer:Access Methods\Free Space Scans/sec
Number of scans initiated to search for free space in which to insert a new record fragment. Inserts into a heap (a table with no clustered index) require these scans, so their performance differs from inserts into a table with a clustered index.

\SQLServer:Access Methods\Full Scans/sec
Tables with missing indexes, or queries that request too many rows, end up with unrestricted full scans of the base table or indexes. This baseline counter can also indicate an increase in the use of temporary tables, because they tend not to have indexes.

\SQLServer:Latches\Total Latch Wait Time (ms)
SQL Server uses latches to protect the integrity of internal structures. This counter monitors the total wait time for latch requests that had to wait in the last second.

\SQLServer:Locks(Total)\Lock Timeouts/sec and Lock Wait Time (ms)
Lock Timeouts/sec indicates the number of lock requests that timed out, including internal requests for NOWAIT locks. Lock Wait Time (ms) indicates the total wait time (in milliseconds) for locks in the last second. Lock Timeouts should be low and Lock Wait Time should be zero.

\SQLServer:Locks(Total)\Number of Deadlocks/sec
Number of lock requests that resulted in a deadlock. Should be zero.

\SQLServer:Databases\Transactions/sec
Number of transactions started for the database. It is an excellent indicator of growth related to transaction load.

\SQLServer:Buffer Manager\Buffer cache hit ratio
Percentage of pages that were found in the buffer pool without having to incur a read from disk. The higher the buffer cache hit ratio, the better. In a transaction processing system where a set user base is operating against the database, this value may reach 98-99 percent. If this value is small, adding memory to the SQL Server may help. It should be examined along with table scan frequency, and might indicate that indexes or partitioning would improve performance.

\SQLServer:SQL Statistics\SQL Re-Compilations/sec
Recompilation adds overhead on the processor. Consistently high values for this counter should be investigated.

\SQLServer:SQL Statistics\Batch Requests/sec and \SQLServer:General Statistics\User Connections
These counters, along with Transactions/sec, are a good indicator of the load on the server.

\SQLServer:Memory Manager\Memory Grants Pending
Current number of processes waiting for a workspace memory grant. This value should remain around 0.

\SQLServer:Memory Manager\Target Server Memory (KB)
The total amount of dynamic memory the server is willing to consume. On a dedicated server, this value should remain close to the actual physical memory size.

\SQLServer:Memory Manager\Total Server Memory (KB)
The total amount of dynamic memory the server is currently consuming. On a dedicated server, this value should remain close to the actual physical memory size.
You must determine which server you will use to monitor SQL Server. If you run Performance Monitor on the same machine as SQL Server, it should not add major overhead beyond some disk I/O and the disk space required for the performance log files, depending on the counters you monitor and the sampling interval. A large number of counters with a short sampling interval might require a lot of disk space; however, longer intervals produce less accurate results. If you run Performance Monitor on a different machine, be aware that it might congest traffic on your network. For the purpose of creating the baseline, if you have disk space available on the SQL Server machine, it is recommended that you run Performance Monitor on the same machine, add as many counters as desired, and use a short sampling interval. Once you have established the baseline, for proactive monitoring you can monitor just the counters that are absolutely required and increase the interval.

To create the baseline chart, follow these steps:
1. Click Start, then All Programs, then Administrative Tools, and then Performance.
2. Expand the Performance Logs and Alerts node in the tree in the left pane.
3. Right-click Counter Logs and select New Log Settings.
4. Provide a name, click OK, add the counters, verify the details on the Log Files and Schedule tabs, and click OK.
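The disk-space trade-off described above can be estimated before starting the log. The per-sample cost below is an assumption (actual size varies with the log format and counter types), so treat this Python sketch as a rough sizing aid only:

```python
BYTES_PER_COUNTER_SAMPLE = 100  # assumed average cost per counter per sample

def log_size_mb(counter_count, interval_sec, duration_hours):
    """Rough size, in MB, of a counter log for the given schedule."""
    samples = (duration_hours * 3600) // interval_sec
    return counter_count * samples * BYTES_PER_COUNTER_SAMPLE / (1024 * 1024)

# 50 counters sampled every 15 seconds for a 24-hour baseline:
print(round(log_size_mb(50, 15, 24), 1))  # roughly 27.5 MB
```

Doubling the interval roughly halves the file size, at the cost of coarser data.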


Capturing Statistics
Some of the points to be considered for capturing statistics are:
Types of data. The type of data being captured: qualitative or quantitative.
Characteristic. The statistic must be relevant to the test being performed. It must suit the comparisons being made or the analysis to be performed. CPU usage measurement is an example of a statistic that is relevant to testing.
Unit of measurement. The unit of measurement depends on the magnitude of the measured characteristic. An example is measuring and comparing memory: the unit to be used is bytes or megabytes.
Frequency. The frequency of measurement of a statistic can be a critical decision. Too much data can be cumbersome, and too little can be harmful.

Some examples of resources on which statistics can be captured are:
Memory usage of the system
Memory usage of the database
CPU usage
Disk I/O rate
Number of user connections to the database
Procedure cache usage
Number of transactions executed per second
Data cache usage
Network bandwidth usage

Comparing and Reporting Results


Test results have only as much value as the inferences that can be drawn from them. Test results can have either a qualitative value or a quantitative value. In a development project, the expected behavior of the solution is based solely on business requirements. For a migration, measurements from the existing solution form the expected results. During testing, the data from the migrated solution is compared with the expected results. Deviations from expected results cannot automatically be construed as defects. As a result, analyzing and reporting results is a topic of serious consequence. A thorough analysis of the results has to be made and conclusions drawn based on several measurable and non-measurable factors (assumptions). The capabilities of the systems available for performing stabilization tests may be one such factor.
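Quantitative comparison against the baseline usually reduces to percentage deviation. A minimal sketch; the threshold at which a deviation becomes a defect is a project decision, not something this guide prescribes:

```python
def deviation_pct(measured, baseline):
    """Percentage deviation of a migrated-system measurement from its baseline."""
    return 100.0 * (measured - baseline) / baseline

# CPU usage rose from a 40% baseline to 46% on the migrated system:
print(deviation_pct(46, 40))  # 15.0, i.e. a 15% increase over the baseline
```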


Appendix D: Installing Common Drivers and Applications


This appendix contains installation information for FreeTDS, unixODBC, and ActiveState Perl:
FreeTDS provides an implementation of the Tabular Data Stream (TDS) protocol, emulating the different versions of TDS that are used by Sybase and SQL Server.
unixODBC is an implementation of the ODBC protocol and API for UNIX; the unixODBC library allows applications running under UNIX to communicate with database servers, such as SQL Server, using ODBC. Applications developed using unixODBC can execute under UNIX and SFU.
ActiveState Perl is an industry-standard Perl distribution. This Perl distribution can be used from within the Windows environment.

Installing FreeTDS
A precompiled and directly installable version of FreeTDS (currently version 0.62.3) is available at http://interopsystems.com/tools. The package can be downloaded from that site and installed under SFU 3.5 using the pkg_add command.

Configuring FreeTDS
When FreeTDS is installed, you should configure it to connect to your SQL Server databases.

To configure FreeTDS connectivity, follow these steps:
1. Move to the /usr/local/etc directory and edit the freetds.conf file. This file contains information about the Sybase and SQL Server database servers FreeTDS can access and the versions of TDS to use.
2. Add the following entries to the end of the file. Replace aa.aa.aa.aa with the IP address or DNS name of the computer running SQL Server (DNS name is recommended), and bbbb with the port that SQL Server is listening on (SQL Server usually listens on port 1433).

[SQLServer]
host = aa.aa.aa.aa
port = bbbb
tds version = 8.0

Note The name SQLServer does not have to be the same as the name of the computer running SQL Server; it is an identifier used by the FreeTDS library functions to locate the entry in this file. It is a name that uniquely identifies this entry in the configuration file, and is used with the -S option for most of the scripts.
3. Save the file.
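Because freetds.conf uses a simple INI-style layout, an entry like the one above can be sanity-checked from a script before running tsql. This Python sketch parses a sample entry; the host name shown is a placeholder, and the script is not part of FreeTDS itself:

```python
import configparser

# A sample freetds.conf entry; the host value is a hypothetical placeholder.
SAMPLE_ENTRY = """\
[SQLServer]
host = dbserver.example.com
port = 1433
tds version = 8.0
"""

cfg = configparser.ConfigParser()
cfg.read_string(SAMPLE_ENTRY)

section = cfg["SQLServer"]
print(section["host"])         # dbserver.example.com
print(section["port"])         # 1433
print(section["tds version"])  # 8.0
```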

Testing the FreeTDS Configuration


You can test the FreeTDS configuration using tsql, a tool provided with FreeTDS, by following this procedure:


To test the installation of FreeTDS, follow these steps:
1. At a shell prompt, type the following command. Replace server with the entry in freetds.conf that you want to test (SQLServer), and password with the sa password for the selected server:

tsql -S server -U sa -P password

Note Windows NT Authentication, called "integrated security," is not supported by FreeTDS.
2. At the 1> prompt, type the following commands. The result should be a list of databases available on the selected server:

SELECT name FROM sysdatabases
Go

3. At the 1> prompt, type the following commands. The result should be a list of connections currently established to the selected server:
EXEC sp_who
Go

4. At the 1> prompt, type the following command to leave tsql.


exit
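If tsql cannot connect, it is worth confirming basic TCP reachability of the SQL Server port before suspecting the FreeTDS configuration. A small Python sketch of such a pre-check (any equivalent tool, such as telnet to port 1433, works just as well; the host name in the example is a placeholder):

```python
import socket

def port_reachable(host, port, timeout=3.0):
    """Return True if a TCP connection to host:port can be established."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Example (adjust the host name for your environment):
# print(port_reachable("dbserver.example.com", 1433))
```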

Installing unixODBC
unixODBC is available in source form (in the file unixODBC-2.2.6.tar.gz) from the unixODBC Web site at http://www.unixodbc.com. This guide uses unixODBC version 2.2.6. The procedures assume you have already downloaded the file unixODBC-2.2.6.tar.gz.

To install unixODBC in UNIX, follow these steps:
1. Log in to UNIX as the root user.
2. At a shell prompt, access the directory where the downloaded unixODBC source code is located (the default is /usr/local/unixODBC), and then type the following command to unzip the file unixODBC-2.2.6.tar.gz to the file unixODBC-2.2.6.tar:
gunzip unixODBC-2.2.6.tar.gz

3. Type the following command to unpack the file unixODBC-2.2.6.tar into the directory unixODBC-2.2.6:
tar xvf unixODBC-2.2.6.tar

4. Move to the unixODBC-2.2.6 directory and type the following command to generate the files needed to build unixODBC. This command will generate a number of messages on the screen as it examines your UNIX configuration and generates the appropriate make files.
./configure

Note The configure script uses a number of well-known tricks to ascertain which tools and libraries are available that it can use to compile unixODBC. If the configure script fails, it is usually because the script cannot find a particular tool, file, or library. You can run the script supplying parameters to help it analyze your environment. Execute ./configure -h for more details. 5. Type the following command to build unixODBC. As before, you will see a large number of messages as the build process progresses:
make

6. Type the following command to install unixODBC:


make install

7. Type the following command to check that unixODBC was installed successfully:
/usr/local/bin/isql

If unixODBC is installed correctly, you will see the following message:

**********************************************
* unixODBC - isql                            *
*                                            *
* Syntax                                     *
*                                            *
*   isql DSN [UID [PWD]] [options]           *
*                                            *
* Options                                    *
*                                            *
*   -b         batch.(no prompting etc)      *
*   -dx        delimit columns with x        *
*   -x0xXX     delimit columns with XX,      *
*              where x is in hex,            *
*              ie 0x09 is tab                *
*   -w         wrap results in an HTML table *
*   -c         column names on first row.    *
*              (only used when -d)           *
*   -mn        limit column display width    *
*              to n                          *
*   -v         verbose.                      *
*   --version  version                       *
*                                            *
* Notes                                      *
*                                            *
*   isql supports redirection and piping     *
*   for batch processing.                    *
*                                            *
* Examples                                   *
*                                            *
*   cat My.sql | isql WebDB MyID MyPWD -w    *
*                                            *
*   Each line in My.sql must contain         *
*   exactly 1 SQL command except for the     *
*   last line which must be blank.           *
*                                            *
* Please visit;                              *
*                                            *
*   http://www.unixodbc.org                  *
*   nick@easysoft.com                        *
*   pharvey@codebydesign.com                 *
**********************************************

Installing ActiveState Perl


ActiveState Perl can be downloaded from the ActiveState Web site at http://www.activestate.com. ActiveState Perl is available in prebuilt binary form for Linux, Solaris, and Windows, but the source code is also available and can be used to build ActiveState Perl on other UNIX platforms.


This section describes how to install ActiveState Perl using the prebuilt Microsoft Installer (MSI) package ActivePerl-5.6.1.635-MSWin32-x86.msi.

To install ActiveState Perl under Windows, follow these steps:
1. Execute the file ActivePerl-5.6.1.635-MSWin32-x86.msi. Windows Installer starts the installation.
2. In the ActivePerl 5.6.1 Build 635 Setup screen, click Next.
3. In the ActivePerl 5.6.1 Build 635 License Agreement screen, select I accept the terms in the License Agreement and click Next.
4. In the Custom Setup screen, make sure all features are selected to be installed (the default) and click Next.
5. In the New features in PPM screen, clear the checkbox for Enable PPM3 to send profile info to ASPN and click Next.
6. In the Choose Setup Options screen, check all four options and click Next.
7. In the Ready to Install screen, click Install. The Installing ActivePerl 5.6.1 Build 635 screen appears and indicates the progress of the installation.
8. In the Completing the ActivePerl 5.6.1 Build 635 Setup Wizard screen, clear the Display the release notes check box and click Finish.


Appendix E: Reference Resources


For detailed discussion of the concepts and practices presented in this guide, you should refer to the following resources.
For additional information on migration, refer to the UNIX Migration Project Guide (UMPG) at http://www.microsoft.com/technet/itsolutions/migration/unix/umpg/default.mspx.

Details about Windows Services for UNIX (SFU) are available at http://www.microsoft.com/windows/sfu/productinfo/overview/default.asp.

Microsoft Solutions Framework (MSF) models and disciplines are available at http://www.microsoft.com/msf.

For guidance on the structure of operations, refer to the Microsoft Operations Framework (MOF), available at http://www.microsoft.com/mof.

For additional information on migrating UNIX applications to Microsoft Windows, see the UNIX Application Migration Guide at http://go.microsoft.com/fwlink/?LinkId=30832.

Specific guidance on the Microsoft Operations Framework (MOF) quadrants is available in the "Process Model for Operations" white paper at http://www.microsoft.com/technet/itsolutions/techguide/mof/mofpm.mspx.

For specific guidance on operating the solution, download the Migrating High Performance Computing (HPC) Applications from UNIX to Windows guide at http://go.microsoft.com/fwlink/?LinkID=23112.

For monitoring the deployed solution, refer to http://www.microsoft.com/business/reducecosts/efficiency/manageability/default.mspx.

For best practices regarding monitoring, refer to http://www.microsoft.com/business/reducecosts/efficiency/manageability/bestpractices.mspx.

For planning a SQL Server 2000 installation, refer to the Microsoft Web site at http://www.microsoft.com/sql/techinfo/planning/default.asp.

For programming SQL Server 2000, refer to the developer topics at http://www.microsoft.com/sql/techinfo/development/2000/default.asp.

SQL Server Books Online is available at http://msdn.microsoft.com/library/en-us/startsql/getstart_4fht.asp.

For SQL Server 2000 Resource Kit information, refer to the Microsoft Web site at http://www.microsoft.com/sql/techinfo/reskit/default.asp.

For managing and maintaining SQL Server 2000, refer to the SQL Server 2000 Operations Guide at http://www.microsoft.com/technet/treeview/default.asp?url=/technet/prodtechnol/sql/maintain/operate/opsguide/default.asp.
