Course Notes
SAS Data Integration Studio: Fast Track Course Notes was developed by Linda Jolley, Kari Richardson,
Eric Rossland, and Christine Vitron. Editing and production support was provided by the Curriculum
Development and Support Department.
SAS and all other SAS Institute Inc. product or service names are registered trademarks or trademarks of
SAS Institute Inc. in the USA and other countries. ® indicates USA registration. Other brand and product
names are trademarks of their respective companies.
SAS Data Integration Studio: Fast Track Course Notes
Copyright © 2009 SAS Institute Inc., Cary, NC, USA. All rights reserved. Printed in the United States of
America. No part of this publication may be reproduced, stored in a retrieval system, or transmitted, in
any form or by any means, electronic, mechanical, photocopying, or otherwise, without the prior written
permission of the publisher, SAS Institute Inc.
Book code E1477, course code DIFT, prepared date 31Jul2009.
DIFT_001
ISBN 978-1-60764-048-6
Table of Contents
Course Description .................................................................................................................... viii
Prerequisites ................................................................................................................................ ix
Chapter 1
1.1
1.2
1.3
Chapter 2
2.1
2.2
2.3
Chapter 3
3.1
3.2
Exercises.................................................................................................................. 3-82
3.3
Chapter 4
4.1
4.2
4.3
Chapter 5
5.1
5.2
5.3
5.4
Chapter 6
6.1
6.2
Chapter 7
7.1
7.2
7.3
7.4
7.5
7.6
Basic Standardization with the Apply Lookup Standardization Transformation ......... 7-123
Demonstration: Using the Apply Lookup Standardization Transformation .......... 7-125
Exercises................................................................................................................ 7-138
7.7
Chapter 8
Working with Tables and the Table Loader Transformation ............. 8-1
8.1
8.2
8.3
Table Properties and Load Techniques of the Table Loader Transformation ................. 8-14
Chapter 9
9.1
9.2
Using the SCD Type 2 Loader and Lookup Transformations ........................................ 9-15
Demonstration: Populate Star Schema Tables Using the SCD Type 2 Loader
with the Surrogate Key Method .................................................... 9-29
9.3
Chapter 10
Chapter 12
Course Description
This intensive training course provides accelerated learning for those students who will register sources
and targets; create and deploy jobs; work with transformations; set up change management; work with
slowly changing dimensions; and understand status handling and change data capture. This course is for
individuals who are comfortable with learning large amounts of information in a short period of time. The
&di1 and &di2 courses are available to provide the same type of information in a much more detailed
approach over a longer period of time.
To learn more
For information on other courses in the curriculum, contact the SAS Education
Division at 1-800-333-7660, or send e-mail to training@sas.com. You can also
find this information on the Web at support.sas.com/training/ as well as in the
Training Course Catalog.
For a list of other SAS books that relate to the topics covered in this
Course Notes, USA customers can contact our SAS Publishing Department at
1-800-727-3228 or send e-mail to sasbook@sas.com. Customers outside the
USA, please contact your local SAS office.
Also, see the Publications Catalog on the Web at support.sas.com/pubs for a
complete list of books and a convenient order form.
Prerequisites
Experience with SAS programming, SQL processing, and the SAS macro facility is required. This
experience can be gained by completing the SAS Programming 1: Essentials, SAS SQL 1: Essentials,
and SAS Macro Language 1: Essentials courses.
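The assumed experience can be illustrated with a short Base SAS sketch. This is only an example of the skill level expected; the ORION libref is an assumption, although the ORDER_ITEM table and its columns follow the course data.

```sas
/* Sketch of the assumed skills: a macro variable, a PROC SQL
   query, and reporting. The ORION libref is hypothetical. */
%let year = 2007;
title "Large order items, data through &year";

proc sql;
   create table work.big_orders as
      select Order_ID, Total_Retail_Price
         from orion.order_item
         where Total_Retail_Price > 100;
quit;

proc print data=work.big_orders(obs=5);
run;
title;
```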
Chapter 1 Introduction
1.1
1.2
1.3
Objectives
The platform for SAS Business Analytics is also known as the SAS Enterprise Intelligence
Platform and the SAS Intelligence Platform.
SAS platform applications cannot execute SAS code on their own. They must request code
submission and other services from a SAS server.
SAS AppDev
Studio Eclipse
Plug-Ins
SAS BI Dashboard
SAS Data
Integration Studio
SAS Enterprise
Guide
SAS Information
Delivery Portal
The SAS Information Delivery Portal is a Web application that can surface
the different types of business analytic content such as information maps,
stored processes, and reports.
SAS Information
Map Studio
SAS Management
Console
SAS OLAP Cube Studio is used to create OLAP cubes, which are
multidimensional structures of summarized data. The Cube Designer
provides a point-and-click interface for cube creation.
SAS Visual BI
(JMP)
The SAS Web OLAP Viewer provides a Web interface for viewing and
exploring OLAP data. It enables business users to look at data from
multiple angles, view increasing levels of detail, and add linked graphs.
SAS Web Report Studio provides intuitive and efficient access to query and
reporting capabilities on the Web.
dfPower Studio
dfPower Studio from DataFlux (a SAS company) combines advanced data-profiling capabilities with
proven data quality, integration, and augmentation tools for incorporating data quality into a data
collection and management process.
Cubes
Dashboards
Data explorations
Folders
Information maps
Jobs
Libraries
OLAP schema
Prompts
Reports
Stored processes
Tables
Table columns
SAS Folders
is the root folder for the folder structure. This folder cannot be renamed, moved, or
deleted. It can contain other folders, but it cannot contain individual objects.
My Folder
is a shortcut to the personal folder of the user who is currently logged on.
Products
contains folders for individual SAS products. These folders contain content that is
installed along with the product. For example, some products have a set of initial
jobs, transformations, stored processes, or reports which users can modify for their
own purposes. Other products include sample content (for example, sample stored
processes) to demonstrate product capabilities. Where applicable, the content is
stored under the product's folder in subfolders that indicate the release number for
the product.
Shared Data
is provided for you to store user-created content that is shared among multiple users.
Under this folder, you can create any number of subfolders, each with the
appropriate permissions, to further organize this content.
You can also create additional folders under SAS Folders in which to store
shared content.
Objectives
Status Bar
The title bar shows the current version of SAS Data Integration Studio, as well as the name of the current
connection profile.
The menu bar provides access to the drop-down menus. The list of active options varies according to the
current work area and the kind of object that you select. Inactive options are disabled or hidden.
The toolbar provides access to shortcuts for items on the menu bar. The list of active options varies
according to the current work area and the kind of object that you select. Inactive options are disabled or
hidden.
The status bar displays the name of the currently selected object, the name of the default SAS Application
Server if one has been selected, the login ID and metadata identity of the current user, and the name of the
current SAS Metadata Server. To select a different SAS Application Server, double-click the name of that
server to display a dialog box. If the name of the SAS Metadata Server turns red, the connection is
broken. In that case, you can double-click the name of the metadata server to display a dialog box that
enables you to reconnect.
Tree View
The tree view provides access to the Basic Properties pane, Folders tree, Inventory tree, Transformations
tree, and Checkouts tree.
The Basic Properties pane displays the basic properties of an object selected in a tree view. To surface
this pane, select View  Basic Properties from the desktop.
The Folders tree organizes metadata into folders that are shared across a number of SAS applications.
The Inventory tree displays metadata for objects that are registered on the current metadata server, such as
tables and libraries. Metadata can be accessed in folders that group metadata by type, such as Table,
Library, and so on.
The Transformations tree displays transformations that can be dragged and dropped into SAS Data
Integration Studio jobs.
The Checkouts tree displays metadata that has been checked out for update, as well as any new metadata
that has not been checked in. The Checkouts tree is not displayed in the view of SAS Data Integration
Studio shown above. The Checkouts tree displays automatically when you are working under change management.
Job Editor
Job Editor
The Job Editor window enables you to create, maintain, and troubleshoot SAS Data Integration Studio
jobs.
The Diagram tab is used to build and update the process flow for a job.
The Code tab is used to review or update code for a job.
The Log tab is used to review the log for a submitted job.
The Output tab is used to review the output of a submitted job.
The Details pane is used to monitor and debug a job in the Job Editor.
dfPower Profile
dfPower Customize
dfPower Architect
b. Click
c. Type Bruno as the value for the User ID field and Student1 as the value for the
Password field.
d. Click
Some folders in the Folders tree are provided by default, such as My Folder, Products,
Shared Data, System, and Users.
Three folders (and subfolders) were added by an administrator: Chocolate Enterprises, Data
Mart Development, and Orion Star.
4. Click
5. Click
The DIFT Demo folder contains seven metadata objects: two library objects, four table objects, and
one job object.
Each metadata object has its own type of properties.
6. Single-click on the DIFT Test Table ORDER_ITEM table object. The Basic Properties pane
displays basic information for this table object.
7. Single-click on the DIFT Test Source Library library object. The Basic Properties pane displays
basic information for this library object.
8. Single-click on the DIFT Test Job OrderFact Table Plus job object. The Basic Properties pane
displays basic information for this job object.
The name of the metadata table object is shown on the General tab, as well as the metadata folder
location.
The Columns tab displays the column attributes of the physical table. Note that all columns are
numeric.
The Physical Storage tab displays the type of table, the library object name, and the name of the
physical table.
d. Click
10. Right-click on DIFT Test Table ORDER_ITEM and select Open. The View Data window opens
and displays the data for this table.
The functions of the View Data window are controlled by the View Data toolbar. Its buttons provide the
following functions:
Specifies the number of the first row that is displayed in the table.
Positions the data with the Go-to row as the first data line displayed.
Navigates to the first record of data in the View Data window.
Navigates to the last page of data in the View Data window.
Switches to browse mode.
Switches to edit mode.
Displays the Search area.
Refreshes view of the data.
Displays the Sort By Column tab in the View Data Options window.
Displays the Filter tab in the View Data Options window.
Displays the Columns tab in the View Data Options window.
Displays physical column names in the column headers.
You can display any combination of column metadata,
physical column names, and descriptions in the column headers.
Displays optional descriptions in the column headers.
Displays optional column metadata in the column headers. This metadata
can be entered in some SAS Intelligence Platform applications, such as
SAS Information Map Studio.
Toggles between showing formatted and unformatted data in the View
Data window.
11. To close the View Data window, select File  Close (or click the window's Close button).
The name of the metadata table object is shown on the General tab, as well as the metadata folder
location.
The Options tab displays the library reference and the location of the physical path of this library.
c. Click
13. Display the generated LIBNAME statement for this library object by right-clicking on
DIFT Test Source Library and selecting View Libname.
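For reference, a library object's metadata (engine, libref, and path) is rendered as an ordinary LIBNAME statement. A sketch of the kind of statement the window displays; the libref and path below are illustrative assumptions, since the actual values come from the registered metadata:

```sas
/* Hypothetical generated LIBNAME statement -- the libref
   (difttsrc) and the path are illustrative, not the values
   registered for DIFT Test Source Library. */
libname difttsrc base "S:\Workshop\data\sources";
```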
14. Click
15. Access the Job Editor window to examine the properties of the job objects in more detail.
a. Right-click on DIFT Test Job OrderFact Table Plus and select Open.
This job joins two source tables and then loads the result into a target table. The target table is then
used as the source for the Rank transformation; the result of the ranking is loaded into a target table,
sorted, and then a report is generated based on the rankings.
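Conceptually, the code that SAS Data Integration Studio generates for this process flow resembles the following Base SAS steps. This is a sketch only: the librefs, WORK table names, and selected columns are assumptions, and the actual generated code is considerably longer.

```sas
/* Illustrative equivalent of the job's process flow.
   Librefs and WORK table names are assumptions. */

/* SQL Join: combine the two source tables */
proc sql;
   create table work.order_fact as
      select o.Order_ID, o.Customer_ID, o.Order_Date,
             i.Product_ID, i.Quantity, i.Total_Retail_Price
         from src.orders as o
              inner join
              src.order_item as i
              on o.Order_ID = i.Order_ID;
quit;

/* Rank: rank the orders by total retail price */
proc rank data=work.order_fact out=work.ranked descending;
   var Total_Retail_Price;
   ranks Price_Rank;
run;

/* Sort: order the ranked result */
proc sort data=work.ranked;
   by Price_Rank;
run;

/* List Data: generate a report based on the rankings */
proc print data=work.ranked(obs=10) label;
run;
```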
b. Click the DIFT Test Table ORDERS table object. Note that the Details area now has a
Columns tab.
The Columns tab in the Details area displays column attributes for the selected table object.
These attributes are fully editable in this location.
Similarly, selecting any of the table objects in the process flow diagram (DIFT Test Table
ORDERS, DIFT Test Table ORDER_ITEM, DIFT Test Target Order
Fact Table (in diagram twice), DIFT Test Target Ranked Order Fact)
displays a Columns tab for that table object.
c. Click the SQL Join transformation. Note that the Details area now has a Mappings tab.
The full functionality of the Mappings tab from the SQL Join Designer window is found on this
Mappings tab.
Similarly, selecting any of the transformations in the process flow diagram (SQL Join, Table
Loader, Rank, Sort, List Data) displays a Mappings tab for that transformation.
As each transformation finishes, the icon is decorated with a symbol to denote success or failure.
Those transformations that had errors are also outlined in red.
Also, the Status tab in the Details area provides the status for each part of the job that executed.
e. Double-click the word Error under Status for the Table Loader.
The Details area moves focus to the Warnings and Errors tab. The error indicates that the physical
location for the target library does not exist.
f. Select DIFT Test Target Library, found in the Data Mart Development  DIFT Demo folder
on the Folders tab.
The Basic Properties pane displays a variety of information, including the physical path location.
i. The Details area shows that all but the List Data transformation completed successfully.
Double-click the word Error under Status for the List Data transformation.
The Details area moves focus to the Warnings and Errors tab. The error indicates that the physical file
does not exist. However, because the file is to be created by the transformation, it is more likely
that the location for the file does not exist.
j. The Status tab of the Details pane shows that the transformation completed successfully.
l. Select File  Close (or click the window's Close button) to close the Job Editor window. If any
changes were made while viewing the job, the following window opens:
m. If necessary, click
16. Investigate some of the options available for SAS Data Integration Studio by selecting
Tools  Options.
17. Examine the Show advanced property tabs option (this option is on the General tab of the Options
window).
a. If Show advanced property tabs is deselected, then tabs such as Extended Attributes and
Authorization do not appear in the properties window for a specified object.
18. Examine the Enable row count on basic properties for tables option (this option is on the General
tab of the Options window).
a. If Enable row count on basic properties for tables is deselected, then the Number of Rows
field displays Row count is disabled for a selected table object.
b. If Enable row count on basic properties for tables is selected, then the Number of Rows
field displays the number of rows found for the selected table object.
a. Click
to establish and/or test the application server connection for SAS Data
Integration Studio. An information window opens verifying a successful connection:
b. Click
The resultant objects in the Diagram area are then drawn as the following:
b. Verify that the default selection in the layout area is Left To Right.
This results in process flow diagrams going horizontally, such as the following:
The options on this tab affect how data are displayed in the View Data window.
a. Verify the default selection for the Column headers area is Show column name in column
header.
If Show column description in column header is selected in the Column headers area
If both Show column name in column header and Show column description in column
header are selected in the Column headers area
23. Select the Tools menu. Note that there is an item, dfPower Tool, that provides direct access to many
of the DataFlux dfPower Studio applications.
If the DIFT Repository has not been created, then follow these steps to create and
initialize it:
From SAS Data Integration Studio, select dfPower Tool  dfPower Studio.
In the Navigation area, right-click on Repositories and select New Repository.
Click
Click
Click
a. Click
1) Type DIFT Orion Detail as the value for the Description field.
2) Click
next to the Directory field. The Browse for Folder window opens.
3) Navigate to S:\Workshop\OrionStar\ordetail.
4) Click
to close the Browse for Folder window. The Directory field displays
the selected path.
5) Click
d. Click
h. Click
i. Type DIFT Orion Detail Project as the value for the Project name field.
j. Type DIFT Orion Detail Project as the value for the Description field.
k. Click
The results are displayed in dfPower Explorer. Four tables were analyzed, containing a total of
thirty columns.
5. Click the ORDER_ITEM table in the Matching Tables area. Having both tables selected
displays the relationship between the two tables.
7. Click PRODUCT_LIST in the Matching Tables area. The Product_ID column could
potentially link these tables.
The table and all of its columns are added to the Profile Job Definition & Notes area.
2. From dfPower Explorer, right-click on the ORDER_ITEM table in the Database area and select
Add Table to Profile Task.
3. From dfPower Explorer, right-click on the PRODUCT_LIST table in the Database area and select
Add Table to Profile Task.
4. Collapse the listing of columns for each of the tables in the Profile Job Definition &
Notes area.
6. Type DIFT Orion Detail Information as the value for the Name field.
7. Click
8. Click
10. From the SAS Data Integration Studio session, select Tools  dfPower Tool 
dfPower Profile (Configurator).
13. Click
If a dfPower Profile job is not available (for instance, one was not created using dfPower
Explorer), SAS data can be added by using the following steps:
Select Insert  SAS Data Set Directory.
Type DIFT Orion Detail Data as the value for the Description field.
Click
next to the Directory field. The Browse for Folder window opens.
Navigate to S:\Workshop\OrionStar\ordetail.
Click
The link to the SAS Data Set Directory appears in the database listing.
14. Expand the DIFT Orion Detail data source. A list of available SAS tables is displayed. The ones
selected are the ones added from dfPower Explorer.
18. Click
If you did not open an existing job in dfPower Profile (Configurator) and you attempt to run a
job, a warning window opens.
Clicking
opens the Save As window. Typing a valid name and then clicking
displays the Run Job window as above.
24. Click
to close the Run Job window. The Executor executes the job.
The columns from the CUSTOMER table are listed in the Tables area with a tabular view of each
column and its calculated statistics.
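The column-level statistics that dfPower Profile computes (counts, null counts, minimums, maximums, distinct-value counts) have rough Base SAS analogues. A sketch for two CUSTOMER columns; the ORION libref is an assumption:

```sas
/* Rough Base SAS analogue of column profiling for the
   CUSTOMER table (the ORION libref is hypothetical). */
proc means data=orion.customer n nmiss min max;
   var Customer_ID;
run;

/* NLEVELS reports the number of distinct values per column */
proc freq data=orion.customer nlevels;
   tables Gender Country;
run;
```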
34. Click
35. Select File  Exit to close the dfPower Profile (Viewer) window.
36. Select File  Exit to close the dfPower Profile (Configurator) window.
Objectives
Change Management
The Change Management facility in SAS Data Integration
Studio enables multiple SAS Data Integration Studio
users to work with the same metadata repository at the
same time without overwriting each other's changes.
Checkouts Tree
If you are authorized to work with a project repository, a
Checkouts tree is added to the desktop of SAS Data
Integration Studio.
The Checkouts tree displays metadata in your project
repository, which is an individual work area or playpen.
b. Click
c. Type Barbara as the value for the User ID field and Student1 as the value for the
Password field.
d. Click
3. Double-click on the application server area of the status bar to open the Default Application Server
window.
4. Verify that SASApp is selected as the value for the Server field.
5. Click
6. Click
7. Click
following:
8. Verify that the tree view area now has a Checkouts tab.
This tab displays metadata objects checked out of the parent repository, as well as any new objects
that Barbara creates.
9. If necessary, click the Folders tab.
10. Expand the Data Mart Development  DIFT Demo folders.
11. Select the DIFT Test Job OrderFact Table Plus job, hold down the CTRL key, and select both
DIFT Test Source Library and DIFT Test Table ORDER_ITEM.
12. Right-click on one of the selected items and select Check Out.
The icons for the three objects are decorated with a check mark.
16. Click
17. Right-click on DIFT Test Table ORDER_ITEM and select Check In (optionally, select
Check Outs  Check In with the table object selected).
The Check In Wizard opens.
18. Type Testing out Change Management as the value for the Title field.
19. Type Showing off features of Change Management simply added a
description to the table object as the value for the Description field.
20. Click
22. Click
24. Click
b. Click
c. Type Ole as the value for the User ID field and Student1 as the value for the Password
field.
d. Click
4. Verify that the tree view area now has a Checkouts tab.
7. Right-clicking on DIFT Test Source Library (or on DIFT Test Job OrderFact Table Plus) shows
that the Check Out option is not available for this checked-out object.
Ole can tell that Barbara has the object checked out.
9. Select File  Close (or click the window's Close button).
Ole can tell that Barbara had this object checked out and that it was checked back in. The title and
description information filled in by Barbara in the Check In Wizard can give Ole an idea on what
updates Barbara made to this metadata object.
11. Select File  Close (or click the window's Close button).
12. Right-click on DIFT Test Table ORDER_ITEM and select Check Out.
13. Click the Checkouts tab and verify that the table object is available for editing.
16. Click
to close the Connection Profile window and open the Log On window.
c. Type Ahmed as the value for the User ID field and Student1 as the value for the
Password field.
d. Click
Clearing a project repository unlocks checked out objects (any changes made to these checked out
objects will not be saved) and deletes any new objects that may have been created in the project
repository.
6. Select both repositories (that is, select Barbara's Work Repository, hold down the CTRL key, and
select Ole's Work Repository).
7. Click
8. Verify that the checked out objects are no longer checked out.
11. Click
12. Select File  Exit to close Ahmed's SAS Data Integration Studio session.
14. Select View  Refresh. The Checkouts tab was active, so the metadata for the project repository is
refreshed.
17. Select File  Exit to close Ole's SAS Data Integration Studio session.
21. Select File  Exit to close Barbara's SAS Data Integration Studio session.
2.2
2.3
Objectives
Classroom Environment
During this course, you will use a classroom machine on
which the SAS platform has been installed and configured
in a single machine environment.
The single machine environment provides an easy way
for each student to learn how to interact with the SAS
platform without impacting each other.
The classroom environment includes the following
predefined elements in the SAS metadata:
users for the various job roles
groups
metadata folders with data and report objects
a basic security model
Course Data
The data used in the course is from a fictitious global
sports and outdoors retailer named Orion Star Sports &
Outdoors.
Course Data
The Orion Star data used in the course consists of the
following:
data ranging from 2003 through 2007
employee information for the employees located in
many countries as well as in the United States
headquarters
approximately 5,500 different sports and outdoor
products
approximately 90,000 customers worldwide
approximately 750,000 orders
64 suppliers
Course Scenario
During this course, you will have the opportunity to learn
about SAS Data Integration Studio as a data integration
developer.
The course consists of follow-along demonstrations,
exercises to reinforce the demonstrations, and a case
study to practice what you have learned.
Objectives
Course Tasks
There are several main steps you will accomplish during
this class.
Step 1:
Step 2:
Step 3:
Step 4:
Step 5:
Step 6:
Step 7:
Step 8:
Deploy jobs.
Star schema tables: Customer Dimension, Order Fact Table, Product Dimension, Time Dimension.
Exercises
Product Table

Place an X in the column to indicate whether the data item will be used to classify data or as an
analysis variable. Add any additional data items that you think are needed.

Data Column           Categorical Data?   Analysis Data?
Product_ID
Product_Name
Product_Group
Product_Category
Product_Line
Supplier_ID
Supplier_Name
Supplier_Country
Discount
Total_Retail_Price
CostPrice_Per_Unit
The following table contains a data dictionary for the columns of the source tables.

Column               Table           Type Length  Format       Label
Birth_Date           CUSTOMER        Num 4        DATE9.       Customer Birth Date
Birth_Date           STAFF           Num 4        DATE9.       Employee Birth Date
City_ID              CITY            Num 8                     City ID
City_ID              POSTAL_CODE     Num 8                     City ID
City_ID              STREET_CODE     Num 8                     City ID
City_Name            CITY            char 30                   City Name
City_Name            POSTAL_CODE     char 30                   City Name
City_Name            STREET_CODE     char 30                   City Name
Continent_ID         CONTINENT       Num 4                     Continent ID
Continent_ID         COUNTRY         Num 4                     Numeric Rep. for Continent
Continent_Name       CONTINENT       char 30                   Continent Name
CostPrice_Per_Unit   ORDER_ITEM      Num 8        DOLLAR13.2
Country              CITY            char 2                    Country
Country              COUNTRY         char 2       $COUNTRY.    Country Abbreviation
Country              CUSTOMER        char 2       $COUNTRY.    Customer Country
Country              GEO_TYPE        char 2                    Country Abbreviation
Country              HOLIDAY         char 2                    Country's Holidays
Country              ORGANIZATION    char 2       $COUNTRY.    Country Abbreviation
Country              STATE           char 2       $COUNTRY.    Abbreviated Country
Country              STREET_CODE     char 2       $COUNTRY.    Abbreviated Country
Country              SUPPLIER        char 2       $COUNTRY.    Country
Country_Former_Name  COUNTRY         char 30                   Former Name of Country
Country_ID           COUNTRY         Num 4                     Country ID
Country_Name         COUNTRY         char 30                   Current Name of Country
County_ID            COUNTY          Num 8                     County ID
County_ID            STREET_CODE     Num 8                     County ID
County_Name          COUNTY          char 60                   County Name
County_Type          COUNTY          Num 4        GEO_TYPE.    County Type
Customer_Address     CUSTOMER        char 45                   Customer Address
Customer_FirstName   CUSTOMER        char 20                   Customer First Name
Customer_Group       CUSTOMER_TYPE   char 40                   Customer Group Name
Customer_Group_ID    CUSTOMER_TYPE   Num 3                     Customer Group ID
Customer_ID          CUSTOMER        Num 8                     Customer ID
Customer_ID          ORDERS          Num 8                     Customer ID
Customer_LastName    CUSTOMER        char 30                   Customer Last Name
Customer_Name        CUSTOMER        char 40                   Customer Name
Customer_Type        CUSTOMER_TYPE   char 40                   Customer Type Name
Customer_Type_ID     CUSTOMER        Num 3                     Customer Type ID
Customer_Type_ID     CUSTOMER_TYPE   Num 3                     Customer Type ID
Delivery_Date        ORDERS          Num 4        DATE9.
Discount             DISCOUNT        Num 8        PERCENT.     Discount as Percent of Normal Retail Sales Price
Discount             ORDER_ITEM      Num 8        PERCENT.     Discount in percent of Normal Total Retail Price
Emp_Hire_Date        STAFF           Num 4        DATE9.       Employee Hire Date
Emp_Term_Date        STAFF           Num 4        DATE9.       Employee Termination Date
Employee_ID          ORDERS          Num 5                     Employee ID
Employee_ID          ORGANIZATION    Num 8                     Employee ID
Employee_ID          STAFF           Num 8                     Employee ID
End_Date             DISCOUNT        Num 5        DATE9.       End Date
End_Date             ORGANIZATION    Num 5        DATE9.       End Date
End_Date             PRICE_LIST      Num 5        DATE9.       End Date
End_Date             STAFF           Num 5        DATE9.       End Date
Factor               PRICE_LIST      Num 8                     Yearly increase in Price
Fmtname              HOLIDAY         char 7                    Format Name
From_Street_Num      STREET_CODE     Num 4                     From Street Number
Gender               CUSTOMER        char 1       $GENDER.     Customer Gender
Gender               STAFF           char 1       $GENDER.     Employee Gender
Geo_Type_ID          GEO_TYPE        Num 4                     Geographical Type ID
Geo_Type_Name        GEO_TYPE        char 20                   Geographical Type Name
HLO                  HOLIDAY         char 1
Job_Title            STAFF           char 25
Label                HOLIDAY         char 40                   Holiday Name
Manager_ID           STAFF           Num 8                     Manager for Employee
Order_Date           ORDERS          Num 4        DATE9.
Order_ID             ORDERS          Num 8                     Order ID
Order_ID             ORDER_ITEM      Num 8                     Order ID
Order_Item_Num       ORDER_ITEM      Num 3
Order_Type           ORDERS          Num 3
Org_Level            ORGANIZATION    Num 3                     Organization Level Number
Org_Level            ORG_LEVEL       Num 3                     Organization Level Number
Org_Level_Name       ORG_LEVEL       char 40                   Organization Level Name
Org_Name             ORGANIZATION    char 40                   Organization Name
Org_Ref_ID           ORGANIZATION    Num 8                     Organization Reference ID
Personal_ID          CUSTOMER        char 15                   Personal ID
Population           COUNTRY         Num 8        COMMA12.     Population (approx.)
Postal_Code          POSTAL_CODE     char 10                   Postal Code
Postal_Code          STREET_CODE     char 10                   Postal Code
Postal_Code_ID       POSTAL_CODE     Num 8                     Postal Code ID
Postal_Code_ID       STREET_CODE     Num 8                     Postal Code ID
Product_ID           DISCOUNT        Num 8                     Product ID
Product_ID           ORDER_ITEM      Num 8                     Product ID
Product_ID           PRICE_LIST      Num 8                     Product ID
Product_ID           PRODUCT         Num 8                     Product ID
Product_ID           PRODUCT_LIST    Num 8                     Product ID
Product_Level        PRODUCT         Num 3                     Product Level
Product_Level        PRODUCT_LEVEL   Num 3                     Product Level
Product_Level        PRODUCT_LIST    Num 3                     Product Level
Product_Level_Name   PRODUCT_LEVEL   char 30                   Product Level Name
Product_Name         PRODUCT         char 45                   Product Name
Product_Name         PRODUCT_LIST    char 45                   Product Name
Product_Ref_ID       PRODUCT         Num 8                     Product Reference ID
Product_Ref_ID       PRODUCT_LIST    Num 8                     Product Reference ID
Quantity             ORDER_ITEM      Num 3                     Quantity Ordered
Salary               STAFF           Num 8        DOLLAR12.    Employee Annual Salary
Start                HOLIDAY         Num 8        DATE9.
Start_Date           DISCOUNT        Num 4        DATE9.       Start Date
Start_Date           ORGANIZATION    Num 4        DATE9.       Start Date
Start_Date           PRICE_LIST      Num 4        DATE9.       Start Date
Start_Date           STAFF           Num 4        DATE9.       Start Date
State_Code           STATE           char 2                    State Code
State_ID             STATE           Num 8                     State ID
State_Name           STATE           char 30                   State Name
State_Type           STATE           Num 4        GEO_TYPE.    State Type
Street_ID            CUSTOMER        Num 8                     Street ID
Street_ID            STREET_CODE     Num 8                     Street ID
Street_ID            SUPPLIER        Num 8                     Street ID
Street_Name          STREET_CODE     char 40                   Street Name
Street_Number        CUSTOMER        char 8                    Street Number
Sup_Street_Number    SUPPLIER        char 8                    Supplier Street Number
Supplier_Address     SUPPLIER        char 45                   Supplier Address
Supplier_ID          PRODUCT         Num 4                     Supplier ID
Supplier_ID          PRODUCT_LIST    Num 4                     Supplier ID
Supplier_ID          SUPPLIER        Num 4                     Supplier ID
Supplier_Name        SUPPLIER        char 30                   Supplier Name
To_Street_Num        STREET_CODE     Num 4                     To Street Number
Total_Retail_Price   ORDER_ITEM      Num 8        DOLLAR13.2
Type                 HOLIDAY         char 1
Unit_Cost_Price      PRICE_LIST      Num 8        DOLLAR13.2
Unit_Sales_Price     DISCOUNT        Num 8        DOLLAR13.2   Discount Retail Sales Price per Unit
Unit_Sales_Price     PRICE_LIST      Num 8        DOLLAR13.2
2-21
Column | Type | Length | Format | Label
Product_ID | num | | | Product ID
Product_Category | char | 25 | | Product Category
Product_Group | char | 25 | | Product Group
Product_Line | char | 20 | | Product Line
Product_Name | char | 45 | | Product Name
Supplier_Country | char | | $COUNTRY. | Supplier Country
Supplier_ID | num | | | Supplier ID
Supplier_Name | char | 30 | | Supplier Name
a. Complete the table by listing the source tables and the columns in those tables that are involved in
determining the values that will be loaded in the ProdDim table.
Target Column | Source Table | Source Column | Computed Column? (X)
Product_ID | | |
Product_Category | | |
Product_Group | | |
Product_Line | | |
Product_Name | | |
Supplier_Country | | |
Supplier_ID | | |
Supplier_Name | | |
2-22
b. Sketch the diagram for the product dimension table. Show the input data source(s) as well as the
desired calculated columns, and the target table (product dimension table).
Diagram for the Product Dimension Table:
ProdDim
Computed Columns: Product_Category, Product_Group, Product_Line
OrderFact
Computed Columns: NONE!
Source: Order_Item
2-23
2-24
CustDim
Computed Columns: Customer_Age, Customer_Age_Group
Source: Customer Types
OrgDim
Computed Columns: Company, Department, Group, Section, Staff
TimeDim
Computed Columns: All Columns!!
(User-written code generates the TimeDim table.)
2-25
2-26
2.3
Solutions to Exercises
Categorical Data?
Product_ID
Product_Name
Product_Group
Product_Category
Product_Line
Supplier_ID
Supplier_Name
Supplier_Country
Analysis Data?
Discount
Total_Retail_Price
CostPrice_Per_Unit
Target Column | Source Table | Source Column | Computed Column
Product_ID | PRODUCT_LIST | Product_ID |
Product_Category | | | X
Product_Group | | | X
Product_Line | | | X
Product_Name | PRODUCT_LIST | Product_Name |
Supplier_Country | SUPPLIER | Country |
Supplier_ID | SUPPLIER | Supplier_ID |
Supplier_Name | SUPPLIER | Supplier_Name |
b.
Diagram for the Product Dimension Table:
2-27
2-28
3.2
3.3
3-2
Objectives
3-3
3-4
Custom Folders
The Folders tree is one of the tree views in the left panel
of the desktop. Like the Inventory tree, the Folders tree
displays metadata for objects that are registered on the
current metadata server, such as tables and libraries. The
Inventory tree, however, organizes metadata by type and
does not enable you to add custom folders, whereas the
Folders tree does.
In general, an administrator sets up the custom folder
structure in the Folders tree and sets permissions on
those folders. Users simply save metadata to the
appropriate folders in that structure.
3-5
1. Select Start ⇒ All Programs ⇒ SAS ⇒ SAS Data Integration Studio 4.2.
2. Log on using Ahmed's credentials to access the Foundation repository.
a. Select My Server as the connection profile.
b. Click
c. Type Ahmed as the value for the User ID field and Student1 as the value for the Password
field.
d. Click
3-6
4. Right-click on the Data Mart Development folder and select New Folder.
3-7
3-8
Libraries
In SAS software, a library is a collection of one or more
files that are recognized by SAS and that are referenced
and stored as a unit.
Libraries are critical to SAS Data Integration Studio.
Metadata for sources, targets, or jobs cannot be finalized
until the appropriate libraries have been registered in a
metadata repository.
Accordingly, one of the first tasks in a SAS Data
Integration Studio project is to specify metadata for the
libraries that contain or will contain sources, targets, or
other resources. At some sites, an administrator adds and
maintains most of the libraries that are needed, and the
administrator tells SAS Data Integration Studio users
which libraries to use.
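Behind this metadata, a registered SAS library ultimately resolves to an ordinary LIBNAME statement when generated job code runs. A minimal sketch, using the DIFTSAS libref and path that appear later in this chapter:

```sas
/* A SAS library assigns a libref to a storage location. */
libname diftsas 'S:\Workshop\dift\data';

/* Tables in the library are then referenced as libref.table-name. */
proc contents data=diftsas._all_ nods;
run;
```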
8
3-9
3-10
This demonstration illustrates defining metadata for a SAS library, a location that contains some of the SAS
source tables to be used throughout the rest of the course.
to close the Connection Profile window and open the Log On window.
c. Type Barbara as the value for the User ID field and Student1 as the value for the
Password field.
d. Click
3-11
3-12
6. Click
7. Type DIFT Orion Source Tables Library as the value for the Name field.
8.
9. Verify that the location is set to /Data Mart Development/Orion Source Data.
The final specifications for the name and location window should be as follows:
10. Click
3-13
3-14
13. Click
3-15
The final settings for the library options window are shown here.
If the desired path does not exist in the Available items pane, click
. In the New Path Specification window, click
next to Paths. In the Browse window, navigate to the desired path. Click
to close the Browse window. Click
to close the New Path Specification window.
17. Click
3-16
3-17
Exercises
For this set of exercises, use Barbara's project repository to create the library object(s).
1. Specifying Folder Structure
If you did not follow along with the steps of the demonstration, complete steps 1-13 starting on page
3-5.
2. Specifying Orion Source Tables Library
If you did not follow along with the steps of the demonstration, complete steps 1-17 starting on page
3-10.
3. Specifying a Library for Additional SAS Tables
There are additional SAS tables that are needed for the course workshops. Therefore, a new library
object must be registered to access these tables. The specifics for the library are shown below:
Name:
Folder Location:
SAS Server:
SASApp
Libref:
DIFTSAS
Path Specification:
S:\Workshop\dift\data
Description:
3-18
Objectives
14
Source Data
Tables are the inputs and outputs of many SAS Data
Integration Studio jobs. The tables can be SAS tables or
tables created by the database management systems that
are supported by SAS/ACCESS software.
In this class, you will use source data from three different
types of data sources:
- SAS tables
- a Microsoft Access database using ODBC
- external files
3-19
3-20
to close the Connection Profile window and open the Log On window.
d. Type Barbara as the value for the User ID field and Student1 as the value for the
Password field.
e. Click
3-21
When the Register Tables wizard opens, only those data formats that are licensed for your
site are available for use.
The procedure for registering a table typically begins with a page that asks you to "Select the
type of tables that you want to import information about". This window is skipped when you
register a table through a library.
3-22
4. Click
5. Click
6. Click
next to the SAS Library field and then select DIFT Orion Source Tables Library.
3-23
3-24
9. Click
11. Click
The metadata object for the table is found in the Checkouts tree.
3-25
3-26
12. Right-click the PRODUCT_LIST metadata table object and select Properties.
13. Type DIFT at the beginning of the default name.
14. Remove the description.
15. Click the Columns tab to view some of the defined information.
3-27
3-28
3-29
3-30
22. Right-click the DIFT PRODUCT_LIST metadata table object and select Open. The View Data
window opens.
3-31
3-32
23. Click
28. Click
3-33
3-34
The data returned to the View Data window are filtered based on the query specified.
3-35
3-36
This demonstration uses the Control Panel's Administrative Tools to access the ODBC Data Source
Administrator. A Microsoft Access database will be defined as an ODBC data source to the operating
system.
To register the desired tables from the Microsoft Access database via ODBC connection, a library object
(metadata object) is needed, and this library object requires a server definition. This server definition
points to the newly defined ODBC system resources. On this classroom image, Barbara does not have the
appropriate authority to create metadata about a server, so Ahmed creates this server definition for her
using SAS Management Console.
Finally, Barbara can use the Register Tables wizard to complete the registration of the desired table.
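When code that uses this library runs, the ODBC library object resolves to a SAS/ACCESS LIBNAME statement along these lines (a sketch; the libref odbclib is illustrative):

```sas
/* ODBC engine library; DATASRC= names the system DSN defined above.          */
/* Quotation marks are required because the data source name contains spaces. */
libname odbclib odbc datasrc="DIFT Course Data";
```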
3-37
3. In the Administrative Tools window, double-click Data Sources (ODBC) to open the ODBC Data
Source Administrator window.
4. In the ODBC Data Source Administrator window, click the System DSN tab.
3-38
5. Click
7. Click
8. Type DIFT Course Data as the value for the Data Source Name field.
9. Click
3-39
3-40
13. Click
The path and database name are now specified in the Database area as shown here:
14. Click
3-41
The System DSN tab in the ODBC Data Source Administrator now has the newly defined ODBC data
source.
15. Click
3-42
c. Click
to close the Connection Profile window and open the Log On window.
d. Type Ahmed as the value for the User ID field and Student1 as the value for the Password
field.
e. Click
3-43
3-44
4. Click
5. Type DIFT Course Microsoft Access Database Server as the value for the Name
field.
6. Click
3-45
3-46
7. Select ODBC Microsoft Access as the value for the Data Source Type field.
8. Click
9. Click Datasrc.
10. Type "DIFT Course Data" (the quotes are necessary since the ODBC data source name has
spaces).
11. Click
3-47
3-48
12. Click
With the ODBC server defined, Barbara can now define the metadata object referencing a table in the
Microsoft Access database.
1. If necessary, access SAS Data Integration Studio using Barbara's credentials.
a. Select Start ⇒ All Programs ⇒ SAS ⇒ SAS Data Integration Studio 4.2.
b. Select Barbara's Work Repository as the connection profile.
c. Click
to close the Connection Profile window and open the Log On window.
d. Type Barbara as the value for the User ID field and Student1 as the value for the
Password field.
e. Click
3-49
3-50
7. Select ODBC Microsoft Access as the type of table to import information about.
8. Click
3-51
There are no library metadata objects defined with an ODBC engine so none appear in the selection
list.
9. Click
10. Type DIFT Course Microsoft Access Database as the value for the Name field.
3-52
11. Verify that the location is set to /Data Mart Development/Orion Source Data.
The final specifications for the name and location window should be as follows:
12. Click
14. Click
3-53
3-54
16. Click
17. Verify that DIFT Course Microsoft Access Database Server is the value for the Database
Server field.
18. Click
3-55
3-56
19. Click
20. Click
3-57
3-58
22. Click
3-59
23. Click
The metadata object for the ODBC data source, as well as the newly defined library object, can be found
in the Checkouts tree.
24. Right-click the CustType metadata table object and select Properties.
3-60
25. Type DIFT Customer Types as the new value for the Name field.
26. Click the Columns tab to view some of the defined information.
3-61
3-62
29. Right-click the DIFT Customer Types metadata table object and select Open. The View Data
window opens.
3-63
3-64
to close the Connection Profile window and open the Log On window.
d. Type Barbara as the value for the User ID field and Student1 as the value for the
Password field.
e. Click
3-65
3-66
6. Type DIFT Supplier Information as the value for the Name field.
7. Verify that the location is set to /Data Mart Development/Orion Source Data.
8. Click
3-67
3-68
9. Click
13. Click
Previewing the file shows that the first record contains column names and that the values are comma-delimited, not space-delimited.
15. Click
3-69
3-70
The final settings for the External File Location window are shown here:
16. Click
19. Click
3-71
3-72
20. Click
3-73
21. Type 2 (the number two) as the value for the Start record field.
22. Click
to close the Auto Fill Columns window. The top portion of the Column Definitions
window populates with 6 columns: 3 numeric and 3 character.
23. Click
3-74
24. Select Get the column names from column headings in this file.
25. Verify that 1 is set as the value for The column headings are in file record field.
26. Click
Description
Supplier_ID
Supplier ID
Supplier_Name
Supplier Name
Street_ID
Supplier Street ID
Supplier_Address
Supplier Address
Supplier Country
29. Click the Data tab in the bottom part of the Column Definitions window.
30. Click
3-75
3-76
31. Click
32. Click
The metadata object for the external file is found on the Checkouts tab.
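For reference, the external file metadata defined here corresponds to a generated DATA step roughly like the following sketch. The file name supplier.csv and the informat widths are assumptions for illustration; the character lengths follow the SUPPLIER column attributes listed earlier.

```sas
/* Comma-delimited read; record 1 holds column names, so start at record 2. */
data work.supplier_information;
   infile 'S:\Workshop\dift\data\supplier.csv' dlm=',' dsd firstobs=2;
   input Supplier_ID
         Supplier_Name    :$30.
         Street_ID
         Supplier_Address :$45.
         Country          :$30.;   /* length is an assumption */
run;
```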
3-77
3-78
5. Click
7. Click
3-79
3-80
8. Click
3-81
3-82
Exercises
STAFF
DIFT STAFF
c. Register the five tables (SAS tables) found in the DIFT SAS Library. Change the default
metadata names to DIFT <SAS-table-name>.
d. Check in all SAS tables.
6. Defining Metadata for an Existing ODBC Data Source
Two tables, Contacts and NewProducts, are located in the Microsoft Access database
DIFT.mdb located in the directory S:\Workshop\dift\data.
a. Place the metadata table objects in the Data Mart Development ⇒ Orion Source Data folder.
Change the default metadata names to DIFT <table-name>.
b. Check in all tables.
Hint: The ODBC library and ODBC server are defined in the previous demonstration. If necessary,
replicate them with the following steps:
ODBC Server: pg 3-42, steps 1-13
ODBC Library: pg 3-51, steps 9-19
3-83
Column Name | Length | Type | Informat | Format | Begin Position | End Position
Company | 22 | Char | $22. | | | 22
YYMM | | Num | | YYMM5. | 24 | 28
Sales | | Num | | DOLLAR13. | 30 | 43
Cost | | Num | | DOLLAR13. | 45 | 58
Salaries | | Num | | DOLLAR13. | 61 | 84
Profit | | Num | | DOLLAR13. | 87 | 100
Name the metadata object representing the external file DIFT Profit Information and
place the metadata object in the Data Mart Development ⇒ Orion Source Data folder.
After the external file metadata is defined in the project repository, be sure to check it in.
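The column definitions above translate to formatted input with column pointers. A sketch of the corresponding DATA step (informats are assumptions; profit.txt is the file name used in the solutions):

```sas
/* Fixed-column read matching the begin and end positions listed above. */
data work.profit_information;
   infile 'S:\Workshop\dift\data\profit.txt';
   input @1  Company  $22.
         @24 YYMM     5.
         @30 Sales    comma14.
         @45 Cost     comma14.
         @61 Salaries comma24.
         @87 Profit   comma14.;
   format Sales Cost Salaries Profit dollar13.;
run;
```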
8. Defining Metadata for an ODBC Data Source
Orion Star is considering purchasing a small store that sells various products over the internet and
through catalog orders. Some data from the store was made available for analysis in the form of a
Microsoft Access database DIFTExerciseData.mdb located in the directory S:\Workshop\dift\data.
Register all three tables in metadata within this database.
The ODBC system data source needs to be created. Name the data source
DIFT Workshop Data.
A library metadata object and a server metadata object need to be created. Name these as follows:
Library name:
Server name:
Place the metadata table objects in the Data Mart Development ⇒ Orion Source Data folder.
Change the default metadata names to DIFT <table-name>.
Check in all objects.
3-84
f. Type DIFT SAS Library as the value for the Name field.
g. Verify that the location is set to \Data Mart Development\Orion Source Data.
h. Click
next to Paths.
q. Click
s. Click
t. Verify that the information is correct in the review window and then click
The new library metadata object is found in the Checkouts tree.
3-85
3-86
d. Click
f. Click
3-87
3-88
h. Click
3-89
6) Select the STAFF table, hold down the CTRL key, and select ORDER_ITEM and ORDERS.
7) Verify that /Data Mart Development/Orion Source Data is the folder listed for the
Location field.
8) Click
10) Right-click the STAFF metadata table object and select Properties.
11) Type DIFT at the beginning of the default name.
12) Click
13) Right-click the ORDER_ITEM metadata table object and select Properties.
14) Type DIFT at the beginning of the default name.
15) Remove the default description.
16) Click
17) Right-click the ORDERS metadata table object and select Properties.
18) Type DIFT at the beginning of the default name.
3-90
19) Click
c. Register the five tables (SAS tables) found in the DIFT SAS Library. Change the default
metadata names to DIFT <SAS-table-name>.
1) Select File ⇒ Register Tables. The Register Tables wizard opens.
2) Select SAS as the type of table to import information about.
3) Click
4) Click
5) Click
6) Click
7) Verify that /Data Mart Development/Orion Source Data is the folder listed for the
Location field.
8) Click
The metadata objects for the table are found in the Checkouts tree.
10) Right-click the CUSTOMER_TRANS metadata table object and select Properties.
11) Type DIFT at the beginning of the default name.
12) Click
13) Right-click the CUSTOMER_TRANS_OCT metadata table object and select Properties.
15) Click
16) Right-click the NEWORDERTRANS metadata table object and select Properties.
17) Type DIFT at the beginning of the default name.
20) Right-click the STAFF_PARTIAL metadata table object and select Properties.
21) Type DIFT at the beginning of the default name.
22) Click
The metadata objects for the table are found in the Checkouts tree.
3-91
3-92
5) Click
6) Click
3-93
9) Select Contacts, hold down the CTRL key, and select NewProducts.
10) Click
11) Click
12) Right-click the Contacts metadata table object and select Properties.
13) Type DIFT at the beginning of the default name.
14) Click
15) Right-click the NewProducts metadata table object and select Properties.
3-94
17) Click
4) Click
5) Click
8) Click
9) Navigate to S:\Workshop\dift\data.
10) Select profit.txt.
11) Click
13) Click
14) Click
17) Click
Company: Length 22, Type Char, Informat $22., End Position 22
18) Click
YYMM: Length 8, Type Num, Format YYMM5., Begin Position 24, End Position 28
19) Click
Sales: Length 8, Type Num, Format DOLLAR13., Begin Position 30, End Position 43
3-95
3-96
20) Click
Cost: Length 8, Type Num, Format DOLLAR13., Begin Position 45, End Position 58
21) Click
Salaries: Length 8, Type Num, Format DOLLAR13., Begin Position 61, End Position 84
22) Click
Profit: Length 8, Type Num, Format DOLLAR13., Begin Position 87, End Position 100
25) Click
. The review window displays general information for the external file.
26) Click
3-97
. The metadata object for the external file is found in the Checkouts tree.
b. After the external file metadata is defined in the project repository, be sure to check it in.
1) Select Check Outs ⇒ Check In All.
2) Type Adding metadata for profit information external file as the
value for the Title field.
3) Click
4) Click
5) Click
8) Type DIFT Workshop Data as the value for the Data Source Name field.
9) Click
3-98
14) Click
15) Click
to close the Connection Profile window and access the Log On window.
4) Type Ahmed as the value for the User ID field and Student1 as the value for the
Password field.
5) Click
4) Type DIFT Workshop Microsoft Access Database Server as the value for
the Name field.
5) Click
6) Select ODBC Microsoft Access as the value for the Data Source Type field.
7) Click
8) Select Datasrc.
9) Type "DIFT Workshop Data" (the quotes are necessary since the ODBC data source
name has spaces).
10) Click
11) Click
to close the Connection Profile window and access the Log On window.
4) Type Barbara as the value for the User ID field and Student1 as the value for the
Password field.
5) Click
.
to define a new library object using the ODBC engine.
a) Type DIFT Workshop Microsoft Access Database as the value for the
Name field.
b) Verify that the location is set to /Data Mart Development/Orion Source Data.
c) Click
g) Click
h) Verify that DIFT Workshop Microsoft Access Database Server is the value for the
Database Server field.
i) Click
j) Click
9) Click
3-99
3-100
10) Select Catalog_Orders, hold down the CTRL key, and select PRODUCTS and then
Web_Orders.
11) Click
12) Click
. The metadata objects for the three tables, as well as the newly defined
library object, are found in the Checkouts tree.
g. Update the metadata for Catalog_Orders.
1) Right-click the Catalog_Orders metadata table object and select Properties.
2) Type DIFT Catalog_Orders as the new value for the Name field.
3) Click
3-101
7) Click
3-102
4.2
4.3
4-2
Objectives
4-3
4-4
Star schema tables: Customer Dimension, Organization Dimension, Order Fact Table, Product Dimension, Time Dimension
Organization Dimension: Case Study
Order Fact Table: Exercise
Product Dimension: Demo
Time Dimension: Case Study
ProdDim
Computed Columns: Product_Category, Product_Group, Product_Line
Sources: Product_List, Supplier
4-5
4-6
This demonstration defines a metadata object for a single target table: a SAS data set named ProdDim
stored in the DIFT Orion Target Tables Library (the library object must be created as well).
1. Select Start ⇒ All Programs ⇒ SAS ⇒ SAS Data Integration Studio 4.2.
2. Log on using Barbara's credentials to access her project repository.
a. Select Barbara's Work Repository as the connection profile.
b. Click
to close the Connection Profile window and open the Log On window.
c. Type Barbara as the value for the User ID field and Student1 as the value for the
Password field.
d. Click
4-7
4-8
7. Type DIFT Product Dimension as the value for the Name field.
8. Verify that the location is set to /Data Mart Development/Orion Target Data.
The final specifications for the name and location window should be as follows:
9. Click
next to the Library field. The target tables library is not yet defined.
a. Type DIFT Orion Target Tables Library as the value for the Name field.
b. Verify that the location is set to /Data Mart Development/Orion Target Data.
The final specifications for the name and location window are as follows:
c. Click
4-9
4-10
f.
Click
h. Click
i.
j.
k. Click
l.
next to Paths.
o. Click
p. Verify that the newly specified path is found in the Selected items pane.
The final settings for the library options window are shown here:
q. Click
4-11
4-12
r.
The new library metadata object can be found in the Library field.
13. Click
4-13
4-14
14. Expand the Data Mart Development ⇒ Orion Source Data folder on the Folders tab.
15. From the Orion Source Data folder, expand the DIFT PRODUCT_LIST table object.
16. Select the following columns from DIFT PRODUCT_LIST and click
to move them to the Selected pane:
Product_ID
Product_Name
Supplier_ID
4-15
20. Click
4-16
a. Click
Description
Product_Category
c. Click
Length
Product Category
25
Type
Character
Description
Product_Group
e. Click
f.
Product Group
Length
25
Type
Character
Description
Product Line
Length
25
Type
Character
4-17
4-18
4-19
. Define two simple indexes: one for Product_ID and one for Product_Group.
Neglecting to press ENTER results in the name of the index not being saved, which
produces an error when the table is generated because the name of the index and the
column being indexed do not match.
d. Select the Product_ID column and move it to the Indexes panel by clicking
e. Click
f.
g. Select the Product_Group column and move it to the Indexes panel by clicking
. The two requested indexes are defined in the Define Indexes window.
h. Click
to close the Define Indexes window and return to the Target Table Designer.
25. Click
4-20
27. Click
The new table object and new library object appear on the Checkouts tab.
4-21
d. Click
e. Click
The objects should appear in the Data Mart Development ⇒ Orion Target Data folder.
4-22
Exercises
Objectives
13
Types of Metadata
SAS Data Integration Studio enables you to import and
export metadata for individual objects or sets of related
objects. You can work with two kinds of metadata:
- SAS metadata in SAS Package format
- relational metadata (metadata for libraries, tables,
columns, indexes, and keys) in formats that can be
accessed with a SAS Metadata Bridge
14
4-23
4-24
15
Relational Metadata
By importing and exporting relational metadata in external
formats, you can reuse metadata from third-party
applications, and you can reuse SAS metadata in those
applications as well. For example, you can use third-party
data modeling software to specify a star schema for a set
of tables. The model can be exported in Common
Warehouse Metamodel (CWM) format. You can then use
a SAS Metadata Bridge to import that model into SAS
Data Integration Studio.
16
17
Relational Metadata
You can import and export relational metadata in any
format that is accessible with a SAS Metadata Bridge.
Relational metadata includes the metadata for the
following objects:
- data libraries
- tables
- columns
- indexes
- keys (including primary keys and foreign keys)
18
4-25
4-26
This demonstration illustrates importing metadata that was exported in CWM format from Oracle
Designer.
1. Select Start ⇒ All Programs ⇒ SAS ⇒ SAS Data Integration Studio 4.2.
2. Log on using Barbara's credentials to access her project repository.
a. Select Barbara's Work Repository as the connection profile.
b. Click
to close the Connection Profile window and access the Log On window.
c. Type Barbara as the value for the User ID field and Student1 as the value for the
Password field.
d. Click
6. Select File ⇒ Import Metadata. The Metadata Importer wizard initializes and displays the
window to enable the user to select an import format.
7. Select Oracle Designer.
8. Click
4-27
4-28
9. Click
next to the File name field to open the Select a file window.
13. Click
4-29
14. Verify that the folder location is set to /Data Mart Development/Orion Target Data.
The final settings for the File Location window of the Metadata Importer wizard should be as shown:
15. Click
4-30
17. Click
19. Click
4-31
4-32
20. Click
21. On the Folders tab, navigate to Data Mart Development ⇒ Orion Target Data.
22. Select DIFT Orion Target Tables Library.
23. Click
25. Click
4-33
4-34
26. The finish window displays the final settings. Review and accept the settings.
27. Click
29. Click
30. The Checkouts tree displays two new metadata table objects.
31. Right-click the Current Staff metadata table object and select Properties.
32. Type DIFT at the beginning of the default name.
4-35
4-36
33. Click the Columns tab to view some of the defined information.
34. Update the formats for each of the columns.
Column Name | Column Format
Employee_ID | 12.
Start_Date | Date9.
End_Date | Date9.
Job_Title | <none>
Salary | Dollar12.
Gender | $Gender6.
Birth_Date | Date9.
Emp_Hire_Date | Date9.
Emp_Term_Date | Date9.
Manager_ID | 12.
35. Click
36. Right-click the Terminated Staff metadata table object and select Properties.
37. Type DIFT at the beginning of the default name.
4-37
4-38
38. Click the Columns tab to view some of the defined information.
39. Update the formats for each of the columns.
Column Name | Column Format
Employee_ID | 12.
Start_Date | Date9.
End_Date | Date9.
Job_Title | <none>
Salary | Dollar12.
Gender | $Gender6.
Birth_Date | Date9.
Emp_Hire_Date | Date9.
Emp_Term_Date | Date9.
Manager_ID | 12.
40. Click
4-39
c. Click
d. Click
4-40
k. Expand the Data Mart Development ⇒ Orion Source Data folder on the Folders tab.
l. From the Orion Source Data folder, click the DIFT ORDER_ITEM table object.
m. Select all columns from DIFT ORDER_ITEM by clicking
to move them to the Selected pane.
. The new table object
x. Click
y. Click
4-41
11) Expand the Data Mart Development ⇒ Orion Target Data folder on the Folders tab.
12) From the Orion Target Data folder, locate the DIFT Order Fact table object.
13) Select the DIFT Order Fact table object and click
Selected pane.
14) Click
15) Accept the default attributes of the columns and then click
.
16) Review the metadata listed in the finish window and then click
. The new table object appears on the Checkouts tab.
4-42
6) Click
10) Click
11) Expand the Data Mart Development ⇒ Orion Target Data folder on the Folders tab.
12) From the Orion Target Data folder, locate the DIFT Order Fact table object.
13) Select the DIFT Order Fact table object and click
Selected pane.
14) Click
15) Accept the default attributes of the columns and then click
.
16) Review the metadata listed in the finish window and then click
. The new table object appears on the Checkouts tab.
11) Expand the Data Mart Development ⇒ Orion Source Data folder on the Folders tab.
12) From the Orion Source Data folder, locate the DIFT Supplier Information table object.
13) Select the DIFT Supplier Information table object and click
Selected pane.
14) Click
15) Accept the default attributes of the columns and then click
4-43
16) Review the metadata listed in the finish window and then click
. The new table object appears on the Checkouts tab.
5) Click
6) Click
The metadata in the Orion Target Data folder should now resemble the following:
4-44
5.2
5.3
5.4
5-2
Objectives
Overview
At this point, metadata is defined for the following:
- various types of source tables
- desired target tables
The next step is to load the targets in the data mart.
5-3
5-4
What Is a Job?
A job is a collection of SAS tasks that creates output. SAS
Data Integration Studio uses the metadata for each job to
generate SAS code that reads sources and creates
targets in physical storage.
A Quick Example
Before you proceed to further discussions on jobs and
the Process Designer window, look at the creation of a
simple job. The job creates two SAS data sets, one
containing the current employees and the other
containing the terminated employees.
Splitter Transformation
The Splitter transformation can be used to create
one or more subsets of a source.
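Conceptually, the Splitter generates a single pass over the source with multiple output data sets, as in this sketch (the librefs and the Emp_Term_Date condition are assumptions used only to illustrate the current/terminated split):

```sas
/* One read of the source, two subset outputs --   */
/* the effect the Splitter transformation achieves. */
data work.current_staff work.term_staff;
   set work.staff;
   if missing(Emp_Term_Date) then output work.current_staff;
   else output work.term_staff;
run;
```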
5-5
5-6
This demonstration shows the building of a job that uses the Splitter transformation.
The final process flow diagram will look like the following:
to close the Connection Profile window and access the Log On window.
d. Type Barbara as the value for the User ID field and Student1 as the value for the
Password field.
e. Click
5-7
5-8
6. Type DIFT Populate Current and Terminated Staff Tables as the value for the
Name field.
7. Verify that the Location is set to /Data Mart Development/Orion Jobs.
8. Click
5-9
5-10
When a job window is active, objects can be added to the diagram by right-clicking and
selecting Add to Diagram.
10. Select File ⇒ Save to save the diagram and job metadata to this point.
12. Select File ⇒ Save to save the diagram and job metadata to this point.
5-11
5-12
b. To connect the DIFT STAFF table object to the Splitter transformation, place your cursor over
the connection selector until a pencil icon appears.
14. Select File ⇒ Save to save the diagram and job metadata to this point.
5-13
d. Drag the two objects to the Diagram tab of the Job Editor.
When a job window is active, objects can be added to the diagram by right-clicking and
selecting Add to Diagram.
5-14
16. Select File ⇒ Save to save the diagram and job metadata to this point.
5-15
5-16
b. Right-click on the second temporary table object of the Splitter transformation and select Delete.
18. Connect the Splitter transformation to each of the target table objects.
a. Place your cursor over the Splitter transformation until a pencil icon appears.
b. When the pencil icon appears, click and drag the cursor to the first output table,
DIFT Current Staff.
5-17
c. Again, place your cursor over the Splitter transformation until a pencil icon appears, and click
and drag the cursor to the second output table, DIFT Terminated Staff.
19. Select File ⇒ Save to save the diagram and job metadata to this point.
5-18
5-19
5-20
8) Click
10) Click
5-21
5-22
d. Specify the subsetting criteria for the DIFT Terminated Staff table object.
1) Verify that the DIFT Terminated Staff table object is selected.
2) Select Row Selection Conditions as the value for the Row Selection Type field.
3) Click
below the Selection Conditions area. The Expression window opens.
.
in the operators area.
f.
5-23
Verify that all Target Table columns have an arrow coming into them (that is, all target columns
will receive data from a source column).
g. Click
21. Select File ⇒ Save to save the diagram and job metadata to this point.
5-24
23. Click
5-25
5-26
26. Scroll to view the note about the creation of the DIFTTGT.TERM_STAFF table:
27. View the data for the DIFT Current Staff table object.
a. Right-click on the DIFT Current Staff table object and select Open.
b. When finished viewing the data, select File ⇒ Close to close the View Data window.
5-27
5-28
28. View the data for the DIFT Terminated Staff table object.
a. Right-click on the DIFT Terminated Staff table object and select Open.
b. When finished viewing the data, select File ⇒ Close to close the View Data window.
29. Select File ⇒ Close to close the Job Editor. If necessary, save changes to the job. The new job object
appears on the Checkouts tab.
30. Select Check Outs ⇒ Check In All.
a. Type Adding job that populates current & terminated staff tables as
the value for the Title field.
b. Click
c. Click
d. Click
New Jobs
New jobs are initialized by the New Job wizard, which
specifies a name and metadata folder location for the job.
(A description can optionally be specified.) Selecting
creates an empty job.
10
Job Editor
The Job Editor window enables you to create, maintain,
and troubleshoot SAS Data Integration Studio jobs. To
display this window for an existing job, right-click a job in
the tree view and select Open.
The Job Editor window provides the Diagram, Code, Log, and Output tabs.
Introduction to Transformations
A transformation is a metadata object that specifies how
to extract data, transform data, or load data into data
stores. Each transformation that you specify in a process
flow diagram generates or retrieves SAS code. You can
also specify user-written code in the metadata for any
transformation in a process flow diagram.
Transformations Tree
The Transformations tree organizes transformations into a
set of folders. You can drag a transformation from the
Transformations tree to the Job Editor, where you can
connect it to source and target tables and update its
default metadata. By updating a transformation with the
metadata for actual sources, targets, and transformations,
you can quickly create process flow
diagrams for common scenarios.
The display shows the standard
Transformations tree.
Objectives
The Designer window for the SQL Join transformation comprises the Navigate pane, the
SQL Clauses pane, and the Properties pane.
The Tables pane displays when a table object is selected in the Navigate pane, and when the Select
keyword is selected in the Navigate pane. The Tables pane might also display when other aspects of
particular joins are requested (for instance, the surfacing of Having, Group by, and Order by
information). The Tables pane is displayed in the same location as the SQL Clauses pane.
5-37
5-38
Calculating Product_Category
Product_Category values are calculated by performing a grouping using Product_ID:
  210100100000 (210100x00000) groups to 210100000000
  210200100000 (210200x00000) groups to 210200000000
  220100100000 (220100x00000) groups to 220100000000
  220200100000 (220200x00000) groups to 220200000000
Calculating Product_Line
Product_Line values are calculated by performing a grouping using Product_ID:
  210100000000 (210x00000000) groups to 210000000000
  220100000000 (220x00000000) groups to 220000000000
  230100000000 (230x00000000) groups to 230000000000
  240100000000 (240x00000000) groups to 240000000000
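The digit-zeroing rule shown above can be sketched in Python. In the actual job these values are computed with expressions on the SQL Join's Select tab, so this block only illustrates the rule itself: Product_Category keeps the first six digits of Product_ID and zero-fills the rest, while Product_Line keeps the first three.

```python
def product_category(product_id: str) -> str:
    # Keep the first 6 digits, zero-fill the remainder:
    # 210100100000 -> 210100000000
    return product_id[:6] + "0" * (len(product_id) - 6)

def product_line(product_id: str) -> str:
    # Keep the first 3 digits, zero-fill the remainder:
    # 210100000000 -> 210000000000
    return product_id[:3] + "0" * (len(product_id) - 3)

print(product_category("210100100000"))  # 210100000000
print(product_line("210100000000"))      # 210000000000
```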
In this demonstration, you use the SQL Join transformation to join the DIFT
Product_List and DIFT Supplier Information source tables to create the target table
DIFT Product Dimension.
In addition, three calculated columns are to be constructed.
The final process flow diagram will look like the following:
c. Click OK to close the Connection Profile window.
d. Type Barbara as the value for the User ID field and Student1 as the value for the
Password field.
e. Click OK to close the Log On window.
7. Select File Save to save diagram and job metadata to this point.
f.
9. Rename the temporary table object associated with the File Reader transformation.
a. Right-click on the green temporary table object and select Properties.
d. Click
10. Select File Save to save diagram and job metadata to this point.
11. Add the SQL Join transformation to the diagram.
a. In the tree view, click the Transformations tab.
b. Expand the Data grouping.
c. Select the SQL Join transformation.
12. Select File Save to save diagram and job metadata to this point.
13. Add inputs to the SQL Join transformation.
a. Place your cursor over the SQL Join transformation in the diagram to reveal the two default
ports.
b. Connect the DIFT PRODUCT_LIST table object to one of the input ports for the SQL Join.
c. Connect the File Reader transformation (click on the temporary table icon associated with
the File Reader and drag) to the other port of the SQL Join transformation.
14. Select File Save to save diagram and job metadata to this point.
15. Add the DIFT Product Dimension table object as the output for the SQL Join.
a. Right-click on the temporary table object associated with the SQL Join transformation and select
Replace.
b. In the Table Selector window, expand the Data Mart Development Orion Target Data
folders.
c. Select the DIFT Product Dimension table object.
d. Select
16. Select File Save to save diagram and job metadata to this point.
17. Review properties of the File Reader transformation.
a. Right-click on the File Reader transformation and select Properties.
b. Click the Mappings tab.
c. Verify that all target columns have a column mapping.
d. Click
18. Select File Save to save diagram and job metadata to this point.
b. Select the Join item on the Diagram tab. Verify that the Join is an Inner join from the Properties
pane.
The type of join can also be verified or changed by right-clicking on the Join
item in the process flow of the SQL Join clauses. A pop-up menu appears that has
the type of Join checked, but also enables selection of another type of join.
c. Select the Where keyword in the Navigate pane to surface the Where tab.
d. Verify that the Inner join will be executed based on the values of Supplier_ID columns from
the sources being equal.
2) Select Choose column(s) from the drop-down list under the first Operand column.
4) Click
5) Type 1 (numeral one) in the field for the second Operand column and press ENTER.
f. Select the Select keyword in the Navigate pane to surface the Select tab.
g. Map the Country column to Supplier_Country by clicking on the Country column and
dragging to the Supplier_Country.
The Expression field needs to be filled in for the three computed columns.
i. Re-order the columns so that Product_Group is first, then Product_Category, and then
Product_Line.
j.
5) Click
6) Click
7) Click OK to close the Expression window.
8) Click
l. Click OK to close the Expression window.
m. Click
o. Click
20. Run the job.
A warning occurs in the execution of the SQL Join. You see a change in the coloring of the
transformation in the process flow and in the symbol overlay.
c. Double-click the Warning for the SQL Join. The Warnings and Errors tab comes forward with
the warning message.
21. Edit the SQL Join transformation to fix the column mappings.
a. Right-click on the SQL Join transformation and select Open.
b. Click the Select keyword on the Navigate pane to surface the Select tab. Note the warning
symbol.
c. Map the Product_ID column to the Product_Group column (click on Product_ID in the
Source table side and drag to Product_Group in the Target table side).
f.
g. Click
22. Run the job by right-clicking in background of the job and selecting Run.
23. Click
24. View the log for the executed job by selecting the Log tab.
25. Scroll to view the note about the creation of the DIFTTGT.PRODDIM table:
c. Click
d. Click
Exercises
Objectives
Automatic Mappings
By default, SAS Data Integration Studio automatically
creates a mapping when a source column and a target
column have the same column name, data type, and
length.
Events that trigger automatic mapping include:
• connecting a source and a target to the transformation
  on the Diagram tab
• clicking Propagate on the toolbar or in the pop-up
  menu in the Job Editor window
• clicking Propagate on the Mappings tab toolbar and
  selecting a propagation option
• clicking Map all columns on the Mappings tab toolbar
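The matching rule described above (same name, data type, and length) can be sketched as a short function. The column-attribute dictionaries used here are a hypothetical representation, not SAS Data Integration Studio's internal metadata model:

```python
def automatic_mappings(source_cols, target_cols):
    """Return the names of source columns that would map automatically:
    a target column exists with the same name, type, and length."""
    targets = {(c["name"], c["type"], c["length"]) for c in target_cols}
    return [c["name"] for c in source_cols
            if (c["name"], c["type"], c["length"]) in targets]

source = [{"name": "Customer_ID", "type": "num", "length": 8},
          {"name": "Country", "type": "char", "length": 2}]
target = [{"name": "Customer_ID", "type": "num", "length": 8},
          {"name": "Customer_Country", "type": "char", "length": 2}]
# Country is not auto-mapped because the target column was renamed.
print(automatic_mappings(source, target))  # ['Customer_ID']
```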
Automatic Propagation
Automatic propagation sends column changes to tables
when process flows are created. If you disable automatic
propagation and refrain from using manual propagation,
you can propagate column changes on the Mappings tab
for a transformation that are restricted to the target tables
for that transformation. Automatic propagation can be
controlled at various levels:
• Global
• Job
• Process flow
• Transformation
Level            Control
Global           Automatically propagate columns in the Automatic Settings group box
                 on the Job Editor tab in the Options window. (Click Options in the
                 Tools menu to display the window.) This option controls automatic
                 propagation of column changes in all new jobs.
Job              Automatically Propagate Job in the drop-down menu that displays when
                 you click Settings in the toolbar on the Diagram tab in the Job
                 Editor window. This option controls automatic propagation of column
                 changes in the currently opened job.
Process flow     Not applicable
Transformation   Not applicable
5-72
This demonstration investigates automatic and manual propagation and mappings. The propagation is
investigated from sources to targets only. Propagation can also be done from targets to sources.
1. If necessary, access SAS Data Integration Studio using Bruno's credentials.
a. Select Start ⇒ All Programs ⇒ SAS ⇒ SAS Data Integration Studio 4.2.
b. Verify that the connection profile is My Server.
c. Click OK to close the Connection Profile window and access the Log On window.
d. Type Bruno as the value for the User ID field and Student1 as the value for the
Password field.
e. Click
Note that four columns are character and two are numeric.
d. Click
5. Create a target table with the same attributes as the DIFT PRODUCTS (Copy) table.
a. Right-click the DIFT PRODUCTS (Copy) table object and select Copy.
b. Right-click the DIFT Additional Examples folder and select Paste.
c. Right-click the Copy of DIFT PRODUCTS (Copy) table object and select Properties.
d. Type DIFT PRODUCTS Information as the value for the Name field (on the General tab).
Click
j. Click
k. Verify that the DBMS field appropriately updated to SAS with this new library selection.
l.
m. Click
The new table object appears under the DIFT Additional Examples folder.
6. Create a target table with different attributes than the DIFT PRODUCTS (Copy) table.
a. Right-click the DIFT PRODUCTS (Copy) table object and select Copy.
b. Right-click the DIFT Additional Examples folder and select Paste.
c. Right-click the Copy of DIFT PRODUCTS (Copy) table object and select Properties.
d. Type DIFT PRODUCTS Profit Information as the value for the Name field (on the
General tab).
j. Click
k. Click
m. Verify that the DBMS field appropriately updated to SAS with this new library selection.
n. Type PRODUCTSProfitInfo as the value for the Name field.
o. Click
The new table object appears under the DIFT Additional Examples folder.
c. Click
d. Select DIFT Orion Target Tables Library as the value for the Library field.
e. Type PRODUCTSProfitInfo2 as the value for the Name field.
f. Click
i. Click
j. Click
The new table object appears under the DIFT Additional Examples folder.
d. Click
d. Click
10. Add table objects and transformations to the Diagram tab of the Job Editor.
a. Click and drag the DIFT PRODUCTS (Copy) table object to the Diagram tab of the Job Editor.
b. Click and drag the DIFT PRODUCTS Information table object to the Diagram tab of the Job
Editor.
c. Click the Transformations tab.
d. Expand the Data grouping of transformations.
e. Click and drag the Extract transformation to the Diagram tab of the Job Editor.
f.
g. Click and drag the Table Loader transformation to the Diagram tab of the Job Editor.
i. Click OK to close the message.
A similar message regarding no mappings defined can be found for the Table Loader
transformation.
11. Connect the objects in the process flow diagram and investigate the mappings.
a. Connect the DIFT PRODUCTS (Copy) table object to the Extract transformation.
b. Connect the Extract transformation to the Table Loader transformation.
c. Connect the Table Loader transformation to the DIFT PRODUCTS Information table object.
The process flow diagram updates to the following:
Mappings can be investigated by opening the Properties window for each transformation.
Alternatively, the Details section displays defined mappings for the selected transformation.
d. If necessary, select View Details to display the Details area within the Job Editor window.
(Optionally, the Details section can be displayed by clicking the corresponding tool.)
Automatic mappings occur between a source column and a target column when these columns
have the same column name, data type, and length.
g. Click the Table Loader transformation on the Diagram tab.
h. View the mappings in the Details section.
All mappings were automatically established between the source columns and the target columns.
12. Add additional table objects and transformations to the Diagram tab of the Job Editor.
a. Click and drag the DIFT PRODUCTS (Copy) table object to the Diagram tab of the Job Editor.
b. Click and drag the DIFT PRODUCTS Profit Information table object to the Diagram tab of
the Job Editor.
c. Click the Transformations tab.
d. Expand the Data grouping of transformations.
e. Click and drag the Extract transformation to the Diagram tab of the Job Editor.
f.
g. Click and drag the Table Loader transformation to the Diagram tab of the Job Editor.
13. Connect the new objects in the process flow diagram and investigate the mappings.
a. Connect the DIFT PRODUCTS (Copy) table object to the Extract transformation.
b. Connect the Extract transformation to the Table Loader transformation.
c. Connect the Table Loader transformation to the DIFT PRODUCTS Profit Information table
object.
The process flow diagram updates to the following:
f.
Manually map the TYPE columns. A warning appears. An expression could be written to
avoid this warning; otherwise, the Log for the job will contain a WARNING message as well.
i. Scroll in the target table side of the Mappings tab to view the automatic expression for the Sex
column.
15. Turn off automatic mappings for the job and rebuild the connections for the first part of this process
flow.
a. Right-click in the background of the Job Editor and select Settings Automatically Map
Columns.
b. Click on each of the connections for the first flow, right-click and select Delete.
c. Re-connect the DIFT PRODUCTS (Copy) table object to the Extract transformation.
e. Click
f.
g. Click the Mappings tab in the Details section. Note that there are no mappings defined.
h. Click
i. Click the tool on the Mappings tab tool set. All columns are no longer mapped.
j. Click
k. Click
l. Click the tool on the Mappings tab tool set, and verify that Include Selected Columns in
Mapping is selected.
m. Click the tool on the Mappings tab tool set. The selected column is manually mapped.
b. Click
d. Click
19. Add table objects and transformations to the Diagram tab of the Job Editor.
a. Click and drag the DIFT PRODUCTS (Copy) table object to the Diagram tab of the Job Editor.
b. Click and drag the DIFT PRODUCTS Profit Information (2) table object to the Diagram tab
of the Job Editor.
c. Click the Transformations tab.
d. Expand the Data grouping of transformations.
e. Click and drag the Extract transformation to the Diagram tab of the Job Editor.
f.
Click and drag the Sort transformation to the Diagram tab of the Job Editor.
20. Connect the source to the first transformation and investigate the propagation.
a. Connect the DIFT PRODUCTS (Copy) table object to the Extract transformation.
b. Click the Extract transformation on the Diagram tab.
c. Click the Mappings tab in the Details section.
All columns from the source are propagated and mapped to the target.
21. Turn off automatic propagation and mappings for the job.
a. Right-click in the background of the job and select Settings Automatically Propagate
Columns.
b. Right-click in the background of the job and select Settings Automatically Map Columns.
These selections can also be made from the Job Editor's tool set.
22. Break the connection between the DIFT PRODUCTS (Copy) table object and the Extract
transformation (click on the connection, right-click, and select Delete).
23. Remove the already propagated columns from the output table of the Extract transformation.
a. Right-click on the output table object associated with the Extract transformation and select
Properties.
24. Reconnect the source to the Extract transformation and investigate the propagation.
a. Connect the DIFT PRODUCTS (Copy) table object to the Extract transformation.
b. Click the Extract transformation on the Diagram tab.
c. Click the Mappings tab in the Details section.
All columns from the source are NOT propagated and mapped to the target.
25. Manually propagate all columns.
a. Click
b. Click the tool on the Mappings tab tool set. All columns are removed from the target table side.
g. Click
h. Click the Mappings tab in the Details section. Verify that only the three selected columns were
propagated (and mapped).
j.
k. Click the Mappings tab in the Details section. The propagation does not reach the target
table. This needs to be done manually.
l. Click
Chaining Jobs
Existing jobs can be added to the Diagram tab of the Job
Editor window. These jobs are added to the control flow in
the order that they are added to the job. This sequencing is
useful for jobs that are closely related; however, the jobs
do not have to be related. You can always change the
order of execution for the added jobs on the Control Flow
tab of the Details pane.
Chaining Jobs
c. Click OK to close the Connection Profile window and access the Log On window.
d. Type Bruno as the value for the User ID field and Student1 as the value for the
Password field.
e. Click
c. Click
6. Return to the Folders tab, and expand the folders to Data Mart Development
Orion Jobs.
7. Select the DIFT Populate Old and Recent Orders Tables job and drag it to the Diagram tab of the
Job Editor window.
8. Select the DIFT Populate Order Fact Table job and drag it to the Diagram tab of the Job Editor
window.
The first job connects automatically to the second job.
The OrderFact table, created in the DIFT Populate Order Fact Table job, is the source table for
the DIFT Populate Old and Recent Orders Tables job. Therefore, the DIFT Populate Order Fact
Table job should run first, followed by the DIFT Populate Old and Recent Orders Tables job.
12. Select View Layout Left to Right. The diagram updates with the correct ordering and in the
horizontal view.
14. Click
15. Click the Status tab in the Details pane and verify that both jobs ran successfully.
c. Click OK to close the Connection Profile window and access the Log On window.
d. Type Bruno as the value for the User ID field and Student1 as the value for the
Password field.
e. Click
8. Click
The Collect Table Statistics choice populates the Records field; otherwise, a zero is listed.
10. Scroll to the right in the table of statistics.
11. Click
14. Click
15. Access a Windows Explorer window by selecting Start ⇒ All Programs ⇒ Accessories ⇒
Windows Explorer.
16. Navigate to S:\Workshop\dift\reports.
17. Double-click DIFTTestJobStatsRun1.csv. Microsoft Excel opens and displays the saved statistics.
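The exported statistics can also be processed outside Excel. A minimal sketch for reading the CSV; the column headers used in the commented usage (Transformation, Records) are assumptions, so check the exported file's actual header row:

```python
import csv

def load_stats(path):
    """Read an exported job-statistics CSV into a list of dicts,
    one dict per row, keyed by the file's header columns."""
    with open(path, newline="") as f:
        return list(csv.DictReader(f))

# Hypothetical usage against the file saved in the demonstration:
# for row in load_stats(r"S:\Workshop\dift\reports\DIFTTestJobStatsRun1.csv"):
#     print(row)
```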
18. Click
19. Click the graph tool on the Statistics tab toolbar. The table view changes to a graphical view,
which is a line graph by default.
20. Click to toggle between Line Graph and Bar Chart. All of the reported statistics are selected by
default.
22. Click the check boxes for Current Memory, Operating System Memory, Current I/O, and
Operating System I/O. The line graph updates to the following view of the requested statistics:
25. Click the check boxes for Records, Real Time, CPU Time, Current Memory, and Operating
System Memory. The line graph updates to the following view of the requested statistics:
28. Click the check boxes for Current Memory, Operating System Memory, Current I/O, and
Operating System I/O. The line graph updates to the following view of the requested statistics:
29. Click Bar Chart on the Statistics tab toolbar. The table view changes to a bar chart view.
The bar chart quickly shows that the Table Loader transformation took almost three times as long as
the Rank transformation. The SQL Join transformation ran very quickly compared to the other
transformations.
30. Place your cursor over the bar for the SQL Join transformation. Tooltip text appears with summarized
information about the processing of the SQL Join transformation.
31. Place your cursor over the bar for the Table Loader transformation. Tooltip text appears with
summarized information about the processing of the Table Loader transformation.
32. Click
The bar chart updates to a single bar for just the Table Loader transformation. The scaling for times is
easier to read for this selected transformation.
33. Click
34. Click
The graph can be written to a file, and then printed from the file.
35. Click
36. Click
39. Click
About Reports
The reports featured in SAS Data Integration Studio can
be used to generate reports; metadata for tables and
jobs can be reviewed in a convenient format.
Reports enable you to:
• find information about a table or job quickly
• compare information between different tables or jobs
• obtain a single file that contains summary information
  of all tables or jobs in HTML, RTF, or PDF format
• perform custom behaviors that are defined by user-created
  plug-in SAS code, Java code, or both
c. Click OK to close the Connection Profile window and access the Log On window.
d. Type Bruno as the value for the User ID field and Student1 as the value for the
Password field.
e. Click
2. Click
4. Click
from the Reports window's toolbar. The Report Options window is displayed.
Verify that the default report format is set to HTML. A valid CSS file can be specified as well as
additional ODS HTML statement options.
5. Click
b. Navigate to S:\Workshop\dift\reports.
c. Click
d. Type JobsReport as the name of the new folder and press ENTER.
e. Navigate to the new JobsReport folder.
f. Click
11. Click
12. When done viewing the report, select File Close from the browser window.
13. To create a document object, click Job Documentation and then select
16. Click
k. Select File Save to save diagram and job metadata to this point.
l. Review the properties of the SQL Join transformation.
1) Right-click on the SQL Join transformation and select Open. The Designer window opens.
2) Select the Join item on the Diagram tab. Verify that the Join is an Inner Join from the
Properties pane.
3) Verify that the Inner join will be executed based on the values of Order_ID columns from
the sources being equal.
4) Select the Select keyword on the Navigate pane to surface the Select tab.
5) Verify that all target columns are mapped.
m. Click
3) Click
4) Click
6) Right-click on the other temporary output table for the Splitter and select Replace.
7) Verify that the Folders tab is selected.
8) Expand the Data Mart Development Orion Target Data folder.
9) Select DIFT Recent Orders.
10) Click
11) If necessary, separate the two target table objects. The process flow diagram should
resemble the following:
l. Select File Save to save diagram and job metadata to this point.
h) Click the desired operator in the operators area.
i) Type '01jan2005'd.
j) Click
k) Click
l) Click OK to close the Expression Builder window.
The Selection Conditions area on the Row Selection tab updates to the following:
4) Specify the subsetting criteria for the DIFT Old Orders table object.
a) Verify that the DIFT Old Orders table object is selected.
b) Select Row Selection Conditions as the value for the Row Selection Type field.
c) Click the button below the Selection Conditions area. The Expression window opens.
h) Click the desired operator in the operators area.
i) Type '01jan2005'd.
j) Click
k) Click
l) Click OK to close the Expression Builder window.
The Selection Conditions area on the Row Selection tab updates to the following:
n. Select File Save to save the diagram and job metadata to this point.
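The two row-selection conditions above partition the OrderFact rows around the '01jan2005'd cutoff. A Python sketch of the same partitioning; the assumption that Recent means on or after the cutoff and Old means before it may differ from the exact comparison operators used in the exercise:

```python
from datetime import date

CUTOFF = date(2005, 1, 1)  # the '01jan2005'd SAS date literal

def split_orders(orders):
    """Route each order row to Recent or Old based on Order_Date,
    mirroring the Splitter's two row-selection conditions."""
    recent = [o for o in orders if o["Order_Date"] >= CUTOFF]
    old = [o for o in orders if o["Order_Date"] < CUTOFF]
    return recent, old

orders = [
    {"Order_ID": 1, "Order_Date": date(2004, 6, 30)},
    {"Order_ID": 2, "Order_Date": date(2006, 2, 1)},
]
recent, old = split_orders(orders)
print([o["Order_ID"] for o in recent], [o["Order_ID"] for o in old])
```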
6) View the data for the DIFT Recent Orders table object.
a) Right-click on the DIFT Recent Orders table object and select Open.
b) When finished viewing the data, select File Close to close the View Data window.
7) View the data for the DIFT Old Orders table object.
a) Right-click on the DIFT Old Orders table object and select Open.
b) When finished viewing the data, select File Close to close the View Data window.
p. Select File Close to close the Job Editor. The new job object appears on the Checkouts tab.
3) Click
4) Click
6.1 Exercises
Figure: the target tables for the exercises are the Customer Dimension, Order Fact Table,
Product Dimension, and Time Dimension. The OrderFact table (built from Order_Item)
requires no computed columns.
Figure: computed columns by target table. ProdDim requires Product_Category,
Product_Group, and Product_Line. CustDim requires columns derived from the
Customer Types table.
Figure: OrgDim (sources Organization and Staff) requires the computed columns Company,
Department, Group, and Section. TimeDim requires all columns to be computed.
(User-written code generates the TimeDim table.)
In this exercise set you will define and load the three remaining dimension tables. For each of these
target tables, you will need to define:
• metadata for the source tables
• metadata for the target table
• metadata for the process flow to move the data from the source(s) to the target
In addition, metadata objects for data sources needed for showing features of the software will be defined.
ORIGINAL COLUMN      RENAME TO
Customer_ID
Country              Customer_Country
Gender               Customer_Gender
Customer_Name
Customer_FirstName
Customer_LastName
Birth_Date           Customer_Birth_Date
From the Customer Types table, import the following column metadata:
ORIGINAL COLUMN (in Customer Types): Customer_Type, Customer_Group
6.1 Exercises
NAME                DESCRIPTION   LENGTH  TYPE
Customer_Age        Customer Age  8       Numeric
Customer_Age_Group                12      Character

Expression for Customer_Age:
Floor(Yrdif(customer.birth_date, today(), 'actual'))
The metadata for the columns can be imported from the DIFT Customer Dimension
metadata object and the text for the expressions can be found in HelperFile.txt.
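The Customer_Age expression above can be mirrored in Python. YRDIF with the 'actual' basis computes fractional years from actual day counts; taking the floor generally matches counting completed birthday anniversaries, which is the approximation this sketch uses:

```python
from datetime import date

def customer_age(birth_date: date, as_of: date) -> int:
    """Approximate Floor(Yrdif(birth_date, as_of, 'actual')):
    whole years elapsed between the two dates."""
    years = as_of.year - birth_date.year
    # Subtract one if this year's birthday has not yet occurred.
    if (as_of.month, as_of.day) < (birth_date.month, birth_date.day):
        years -= 1
    return years

print(customer_age(date(1980, 6, 15), date(2009, 7, 31)))  # 29
```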
ORIGINAL COLUMN   RENAME TO
Employee_ID
Org_Name          Employee_Name
Country           Employee_Country

ORIGINAL COLUMN   RENAME TO
Job_Title
Salary
Gender            Employee_Gender
Birth_Date        Employee_Birth_Date
Emp_Hire_Date     Employee_Hire_Date
Emp_Term_Date     Employee_Term_Date
NAME        LENGTH  TYPE
Group       40      Character
Section     40      Character
Department  40      Character
Company     30      Character
6.1 Exercises
NAME         LENGTH  TYPE     EXPRESSION
_Group       8       Numeric  input(put(organization.employee_id,orgdim.),12.)
_Section     8       Numeric
_Department  8       Numeric
_Company     8       Numeric
The calculations for the desired computed columns are shown below:
NAME      EXPRESSION
Group
Section
The metadata for the columns can be imported from the DIFT Organization Dimension
metadata object and the text for the expressions can be found in HelperFile.txt.
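The _Group expression above uses a user-defined format (orgdim.) as a lookup table: PUT formats the employee ID through it, and INPUT reads the result back as a number. A dict models the same lookup in Python; the ID pairs below are purely hypothetical:

```python
# A SAS user-defined format behaves like a lookup table; a dict
# models it. These employee-ID/group-ID pairs are made up.
orgdim = {120101: 120100, 120102: 120100, 120261: 120260}

def lookup_group(employee_id: int) -> int:
    """Mirror input(put(employee_id, orgdim.), 12.): format the ID
    through the lookup, then read it back as a number. IDs absent
    from the lookup fall through unchanged."""
    return int(orgdim.get(employee_id, employee_id))

print(lookup_group(120102))  # 120100
```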
Verify that all columns for the Table Loader's target table have a defined mapping.
The warning message regarding a compressed data set occurs because the DIFT Organization
table has a COMPRESS=YES property set. Edit the table properties to set this to NO.
6.1 Exercises
Method 1:
Use the New Table wizard and register all the columns manually.
Method 2:
Run a modified version of the program that will be used to generate the table. (This
generates a temporary version of the table.) When the table exists, the Register Tables
wizard can be used to register the table.
Choose one of the methods and create a metadata table object for DIFT Time Dimension.
For Method 1:
Use the New Table wizard to create a metadata table object named DIFT Time Dimension.
Store the metadata object in the /Data Mart Development/Orion Target Data folder.
The following columns must be entered manually:
NAME              LENGTH  TYPE       FORMAT
Date_ID                   Numeric    Date9.
WeekDay_Num               Numeric
WeekDay_Name              Character
Month_Num                 Numeric
Year_ID                   Character
Month_Name                Character
Quarter                   Character
Holiday_US        26      Character
Fiscal_Year               Character
Fiscal_Month_Num  8       Numeric
Fiscal_Quarter            Character
Name the physical table, a SAS table, TimeDim, and store it in the DIFT Orion Target Tables
Library.
For Method 2:
In SAS Data Integration Studio, select Tools Code Editor.
In the Enhanced Editor window, include the TimeDim.sas program from the
S:\Workshop\dift\SASCode directory.
Submit the program and verify that no errors were generated in the Log window.
In SAS Data Integration Studio, invoke the Register Tables wizard.
Store the metadata object in the \Data Mart Development\Orion Target Data folder.
Select SAS as the source type, and select DIFT Orion Target Tables Library.
The TimeDim table should be available.
Set the name of the metadata table object to DIFT Time Dimension.
Verify (update if necessary) that the length of Date_ID is 4.
In the Code Editor window, uncomment the PROC DATASETS step and run just that step. Verify
that the TimeDim table is deleted (check the Log). You will re-create it via a SAS Data Integration
Studio job.
Close the Code Editor window. (Select File Close.) Do not save any changes.
10. Loading the Time Dimension Target Table
Some specifics for creating the job to load the DIFT Time Dimension table are shown below:
Name the job DIFT Populate Time Dimension Table.
Store the metadata object in the /Data Mart Development/Orion Jobs folder.
Use the User Written Code transformation to specify the code to load this table.
Add the Table Loader transformation to the process flow for visual effect but specify to exclude
this transformation from running.
k. Expand the Data Mart Development Orion Source Data folder on the Folders tab.
l. From the Data folder, expand DIFT Customer Types table object.
m. Select the Customer_Type and Customer_Group columns from DIFT Customer Types and
click the right arrow to move the columns to the Selected pane.
n. Select the Checkouts tab.
o. Expand DIFT CUSTOMER table object.
p. Select the following columns from DIFT CUSTOMER and click the right arrow to move them to
the Selected pane:
Customer_ID
Country
Gender
Customer_Name
Customer_FirstName
Customer_LastName
Birth_Date
q. Click
ORIGINAL COLUMN  RENAME TO
Country          Customer_Country
Gender           Customer_Gender
2) Click
   Name: Customer_Age   Description: Customer Age   Length: 8   Type: Numeric
4) Click
   Name: Customer_Age_Group   Length: 12   Type: Character
1) Click the button to add the index.
2) Click
v. Review the metadata listed in the finish window and then click Finish. The new table object
appears on the Checkouts tab.
k. Select File Save to save diagram and job metadata to this point.
l. Review the properties of the SQL Join transformation.
1) Right-click on the SQL Join transformation and select Open. The Designer window opens.
2) Select the Join item on the Diagram tab.
3) In the Properties pane, set the Join to Left.
4) Establish the join criteria of the Customer_Type_ID columns from the sources being
equal.
a) Double-click the Left icon in the process flow diagram for SQL clauses.
b) Click
g) Under the second Operand field, click, type in 0, and then press ENTER.
6) Verify mappings and establish new ones if necessary.
a) Select the Select keyword on the Navigate pane to surface the Select tab.
b) Manually map the Country source column to the Customer_Country column.
c) Manually map the Gender source column to the Customer_Gender column.
d) Click
e. Click
f. Click
k. Expand the Data Mart Development Orion Source Data folder on the Folders tab.
l. From the Data folder, expand DIFT STAFF table object.
m. Select the following columns from DIFT STAFF and click the right arrow to move them to the
Selected pane.
Job_Title
Salary
Gender
Birth_Date
Emp_Hire_Date
Emp_Term_Date
n. Select the Checkouts tab.
o. Expand DIFT ORGANIZATION table object.
p. Select the following columns from DIFT ORGANIZATION and click the right arrow to move them
to the Selected pane.
Employee_ID
Org_Name
Country
q. Click
ORIGINAL COLUMN  RENAME TO
Org_Name         Employee_Name
Country          Employee_Country
Gender           Employee_Gender
Birth_Date       Employee_Birth_Date
Emp_Hire_Date    Employee_Hire_Date
Emp_Term_Date    Employee_Term_Date

   Name: Group   Length: 40   Type: Character
4) Click
   Name: Section   Length: 40   Type: Character
6) Click
   Name: Department   Length: 40   Type: Character
8) Click
   Name: Company   Length: 30   Type: Character
1) Click the button to add the index.
2) Click
v. Review the metadata listed in the finish window and then click Finish. The new table object
appears on the Checkouts tab.
4) To connect the DIFT ORGANIZATION table object to the SQL Join, place your cursor over
the connection selector until a pencil icon appears.
5) Click on this connection selector and drag to one of the input ports for the SQL Join
transformation.
h. Select File Save to save diagram and job metadata to this point.
i. Add the Table Loader transformation to the diagram.
1) In the tree view, select the Transformations tab.
2) Expand the Access grouping.
3) Select the Table Loader transformation.
4) Drag the Table Loader transformation to the diagram.
5) Center the Table Loader so that it is to the right of the SQL Join transformation.
j. Connect the SQL Join transformation to the Table Loader transformation (click on the temporary table icon of the SQL Join and drag).
k. Select File Save to save diagram and job metadata to this point.
l. Add the DIFT Organization Dimension table object to the process flow.
1) Click the Checkouts tab.
2) Drag the DIFT Organization Dimension table object to the diagram.
3) Center the table object so that it is to the right of the Table Loader transformation.
m. Connect the Table Loader transformation to the DIFT Organization Dimension table
object.
n. Select File Save to save diagram and job metadata to this point.
o. Rename the temporary table object associated with the SQL Join transformation.
1) Right-click on the temporary table element attached to the SQL Join transformation and select
Properties.
2) Select the Physical Storage tab.
3) Type SQLJoin as the value for the Name field.
4) Click
p. Select File Save to save diagram and job metadata to this point.
q. Specify the properties of the SQL Join transformation.
1) Right-click on the SQL Join transformation and select Open. The Designer window opens.
2) Right-click the Join item on the Diagram tab and change the join to a Left join.
3) Double-click the Where keyword on the SQL Clauses pane.
4) Select the Where keyword on the Navigate pane to surface the Where tab.
a) Click
b) Select Choose column(s) from the drop-down list under the first Operand column.
c) Expand the DIFT ORGANIZATION table and select Org_Level.
d) Click
h) Click
i) Map columns.
(1) Right-click in the panel between source and target columns, and select Map All.
Three of the target columns map.
(2) Map the Country column to Employee_Country by clicking on the Country
column and dragging to the Employee_Country.
(3) Map the Gender column to Employee_Gender by clicking on the Gender
column and dragging to the Employee_Gender.
(4) Map the Org_Name column to Employee_Name by clicking on the Org_Name
column and dragging to the Employee_Name.
(5) Map the Birth_Date column to Employee_Birth_Date by clicking on the
Birth_Date column and dragging to the Employee_Birth_Date.
(6) Map the Emp_Hire_Date column to Employee_Hire_Date by clicking on the
Emp_Hire_Date column and dragging to the Employee_Hire_Date.
on the toolbar.
on the toolbar.
on the toolbar.
(14) Locate the _Section column. In the Expression column, select Advanced
from the drop-down list. The Expression window opens as displayed.
(22) Locate the _Company column. In the Expression column, select Advanced
from the drop-down list. The Expression window opens as displayed:
(23) Copy the expression for _Company from HelperFile.txt.
Input(Put(calculated _department,orgdim.),12.)
(24) Paste the copied code in the Expression Text area.
(25) Select
(26) Locate the Group column. In the Expression column, select Advanced from
the drop-down list. The Expression window opens as displayed.
(27) Copy the expression for Group from HelperFile.txt.
Put(calculated _group,org.)
(28) Paste the copied code in the Expression Text area.
(29) Select
(30) Locate the Section column. In the Expression column, select Advanced
from the drop-down list. The Expression window opens as displayed:
(31) Copy the expression for Section from HelperFile.txt.
Put(calculated _section,org.)
(32) Paste the copied code in the Expression Text area.
(33) Select
(38) Locate the Company column. In the Expression column, select Advanced
from the drop-down list. The Expression window opens as displayed.
(39) Copy the expression for Company from HelperFile.txt.
Put(calculated _company,org.)
(40) Paste the copied code in the Expression Text area.
(41) Select
k) Click
s. Run the job by right-clicking in the background of the job and selecting Run. The job runs without errors or warnings.
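The Advanced expressions entered above lean on two SAS functions: PUT writes a value through a (user-defined) format, and INPUT reads the resulting text back as a number. A standalone sketch of the pattern; the orgdept. and orggrp. formats below are invented stand-ins for the course's orgdim. and org. formats:

```sas
/* Illustrative user-defined formats (assumed; the real orgdim. and
   org. formats are supplied with the course data) */
proc format;
   value orgdept          /* numeric department code -> company code text */
      1-99  = '1'
      other = '2';
   value orggrp           /* numeric code -> descriptive label */
      1 = 'Orion USA'
      2 = 'Orion International';
run;

data _null_;
   _department = 42;
   /* PUT applies a format and yields a character string */
   _company_text = put(_department, orgdept.);
   /* INPUT reads the string back as a number with the 12. informat */
   _company = input(_company_text, 12.);
   group = put(_company, orggrp.);
   put _company= group=;
run;
```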
t. Click
u. View the Log for the executed Job by selecting the Log tab.
v. Scroll to view the note about the creation of the DIFTTGT.ORGDIM table:
d. Click
e. Click
f. Click
k. Click
Date_ID: Length 4, Type Numeric, Format DATE9.
3) Click
Length 8, Type Numeric
5) Click
WeekDay_Name: Length 9, Type Character
7) Click
Month_Num: Length 8, Type Numeric
9) Click
Year_ID: Length 4, Type Character
11) Click
Month_Name: Length 9, Type Character
13) Click
Quarter: Length 6, Type Character
15) Click
Holiday_US: Length 26, Type Character
17) Click
Fiscal_Year: Length 4, Type Character
19) Click
Fiscal_Month_Num: Length 8, Type Numeric
21) Click
Fiscal_Quarter: Length 6, Type Character
m. Click
1) Click
4) Click
5) Click
6) Navigate to S:\Workshop\dift\SASCode.
7) Select TimeDim.sas.
8) Click. The path and filename of the code file are now listed in the Open window.
9) Click
10) Click
3) Click (propagate from target to sources tool) from the tool set on the Mappings tab.
4) Verify that all columns are mapped. If not, right-click in the panel between the source
columns and the target columns and select Map All. The mappings will be updated.
5) Click the Code tab.
6) Click Exclude transformation from run.
7) Click
n. Run the job by right-clicking in the background of the job and selecting Run. The job runs without errors or warnings.
o. View the Log for the executed Job by selecting the Log tab.
p. Select File Save to save diagram and job metadata to this point.
q. Select File Close to close the Job Editor.
11. Check In the Metadata Objects for the Time Dimension
a. Select Check Outs Check In All.
b. Type Adding target table & job for Time Dimension as the value for the
Title field.
c. Click
d. Click
e. Click
7.1 Introduction
Objectives
Transformations Tree
The Transformations tree organizes transformations into a
set of folders. You can drag a transformation from the
Transformations tree to the Job Editor, where you can
connect it to source and target tables and update its
default metadata. By updating a transformation with the
metadata for actual sources, targets,
and transformations, you can quickly
create process flow diagrams for
common scenarios.
Transformation Examples
This chapter has examples that use a number of
transformations available in SAS Data Integration Studio.
Here is a partial listing of those transformations:
• Sort
• Rank
• Transpose
• Data Validation
• Extract
• Append
• Summary Statistics
• One-Way Frequency
This demonstration creates a series of subfolders under the Orion Reports folder. The new folders will be
used to organize the various metadata objects created and used in the subsequent sections of this chapter.
1. If necessary, access SAS Data Integration Studio using Bruno's credentials.
a. Select Start All Programs SAS SAS Data Integration Studio 4.2.
b. Verify that the connection profile is My Server.
c. Click to close the Connection Profile window and open the Log On window.
d. Type Bruno as the value for the User ID field and Student1 as the value for the Password
field.
e. Click
19. Right-click on the Orion Reports folder and select New Folder.
20. Type Status Handling as the name for the new folder.
21. Press ENTER.
The final set of folders for Orion Reports should resemble the following:
Objectives
Extract Transformation
The Extract transformation is
typically used to create a subset
from a source. It can also be used to
create columns in a target that are
derived from columns in a source.
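Under the covers, an Extract step amounts to a subsetting query. A hypothetical sketch of the kind of code it generates (table and column names are illustrative, not the exact code SAS Data Integration Studio emits):

```sas
/* Hypothetical equivalent of an Extract transformation:
   subset rows with a WHERE clause and derive a new column */
proc sql;
   create table work.extracted as
      select Customer_ID,
             Customer_Country,
             /* derived column, as the slide describes */
             year(Order_Date) as Order_Year
      from difttgt.customerorderinfo
      where Customer_Country = 'US';
quit;
```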
This demonstration creates a report on customer order information for customers from the United States
who placed orders in 2007. The customer dimension information first needs to be joined to the order fact table and then subset, which is done in a separate job. A second job is created that will extract the desired
rows, and then a summary statistics report will be created from this extracted data.
1. If necessary, access SAS Data Integration Studio using Bruno's credentials.
a. Select Start All Programs SAS SAS Data Integration Studio 4.2.
b. Verify that the connection profile is My Server.
c. Click to close the Connection Profile window and open the Log On window.
d. Type Bruno as the value for the User ID field and Student1 as the value for the Password
field.
e. Click
g. Click
3. Add source table metadata to the diagram for the process flow.
a. Select the Data Mart Development Orion Target Data folder on the Folders tab.
b. Drag the DIFT Customer Dimension table object to the Diagram tab of the Job Editor.
c. Drag the DIFT Order Fact table object to the Diagram tab of the Job Editor.
d. Connect the DIFT Customer Dimension table object to one input port for the SQL Join
transformation.
e. Connect the DIFT Order Fact table object to the second input port for the SQL Join
transformation.
h. Click
i.
j.
Click
6. Select File Save to save diagram and job metadata to this point.
3) On the Functions tab, click the Date and Time folder under the Categories list.
4) On the Functions tab, under the Functions list, click YEAR(date).
5) Click
f.
g. Verify that all 22 target columns will be mapped one-to-one using a source column.
h. Select
8. Select File Save to save diagram and job metadata to this point.
9. Run the job.
a. Right-click in the background of the job and select Run.
b. Click the Status tab in the Details area. Note that all processes completed successfully.
c. Click
d. View the Log for the executed Job. Scroll to view the note about the creation of the
DIFTTGT.CUSTOMERORDERINFO:
Right-click on the DIFT Customer Order Information table and select Properties.
1) Click the Columns tab.
2) Update the Format field for the Customer_Country column to $COUNTRY20..
3) Update the Format field for the Customer_Gender column to $GENDER6..
4) Click
g. Right-click on the DIFT Customer Order Information table and select Open.
10. When you are finished viewing the DIFT Customer Order Information table, close the
View Data window by selecting File Close.
11. Save and close the Job Editor window.
b. Select File Save to save diagram and job metadata to this point.
c. Select File Close to close the Job Editor window.
g. Click
2. Add source table metadata to the diagram for the process flow.
a. If necessary, click the Folders tab.
b. If necessary, expand Data Mart Development Orion Reports Extract and Summary.
c. Drag the DIFT Customer Order Information table object to the Diagram tab of the Job Editor.
3. Add the Extract transformation to the process flow.
a. Click the Transformations tab.
b. Expand the Data folder and locate the Extract transformation template.
c. Drag the Extract transformation to the Diagram tab of the Job Editor. Place the transformation
next to the table object.
d. Connect the DIFT Customer Order Information table object to the Extract transformation.
5. Select File Save to save diagram and job metadata to this point.
6. Specify properties for the Extract transformation.
a. Right-click on the Extract transformation and select Properties.
b. Click the Where tab.
c. In the bottom portion of the Where tab, click the Data Sources tab.
d. Expand the CustomerOrderInfo table.
e. Select Customer_Country.
f. Click
h. Click
7. Select File Save to save diagram and job metadata to this point.
8. Specify properties for the Summary Statistics transformation.
a. Right-click on the Summary Statistics transformation and select Properties.
b. On the General tab, remove the default description.
3) Click to close the Select Data Source Items window. The Select analysis columns area updates as displayed:
4) Click in the Select columns to subgroup data area to open the Select Data Source Items window.
5) Select Customer Gender, hold down the CTRL key and select Customer Age Group, and then click.
6) Click to close the Select Data Source Items window. The Select columns to subgroup data area updates as displayed:
f.
3) Navigate to S:\Workshop\dift\reports.
4) Type UnitedStatesCustomerInfo.html in the Name field.
5) Click
i. Click
9. Select File Save to save diagram and job metadata to this point.
10. Run the job.
a. Right-click in the background of the job and select Run.
b. Click the Status tab in the Details area. Note that all processes completed successfully.
c. Click
e. Click
f. Click to close the information bar in the Internet Explorer window.
g. When done viewing the report, select File Close to close Internet Explorer.
12. Select File Save to save diagram and job metadata to this point.
13. Select File Close to close the Job Editor window. The Extract and Summary folder displays the two jobs and a target table:
Control Table
The control table can be any table that contains rows of data that can be fed into an iteration. The creation of this table can be an independent job or part of the job flow containing the Loop transformations.
In the previous example, the value of country was used in three places:
• The unformatted value was used to subset data in the Extract transformation.
• The formatted value was used as a title for the report.
• The compressed formatted value was used to build the name of the HTML file created.
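Carried in a macro variable, those three uses look roughly like the following sketch (variable and format names follow the demonstration; the &CtryValue parameter is introduced in the next demonstration, and the $country. format is assumed to be available from the course data):

```sas
/* Sketch: one country value, three uses */
%let CtryValue = US;

/* 1. The unformatted value subsets the data */
data work.subset;
   set difttgt.customerorderinfo;
   where Customer_Country = "&CtryValue";
run;

/* 2. The formatted value becomes the report title */
title2 "Report for %sysfunc(putc(&CtryValue, $country.))";

/* 3. The compressed formatted value builds the HTML file name */
%let fname = %sysfunc(compress(%sysfunc(putc(&CtryValue, $country.))));
ods html file = "S:\Workshop\dift\reports\&fname..html";
```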
This demonstration uses the Loop transformations to iterate through the distinct customer country values
and create a separate summary report for each of the countries. Three basic steps will be accomplished:
Step 1: Create the control table.
Step 2: Create the parameterized job.
Step 3: Create the iterative job.
g. Click
Select DIFT Orion Target Tables Library as the value for the Library field.
j.
k. Click
l.
1) Click
6) Click
11) Click
n. Click
g. Click
d. Connect the DIFT Customer Order Information table object to the SQL Join transformation.
By default, the SQL Join expects at least two input tables. However, for this instance, we need
just one input.
e. Click the status indicator on the SQL Join transformation to discover a source table is missing.
f. Click
g. Right-click on the SQL Join transformation and select Ports Delete Input Port. The status
indicator now shows no errors.
Again, the status indicator for the SQL Join shows that there is a problem.
e. Click the status indicator on the SQL Join transformation to discover that mappings are needed.
f. Click
6. Select File Save to save diagram and job metadata to this point.
d. On the Select tab, specify the following Expression information for the three target columns.

Column Name   Expression
CountryName   put(customer_country,$country.)
CCountryName  compress(put(customer_country,$country.))
CountryValue  put(customer_country,$2.)
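The control-table job therefore boils down to a SELECT DISTINCT with those three computed columns. A rough PROC SQL equivalent (a sketch, assuming the $country. format is available; not the exact generated code):

```sas
proc sql;
   create table difttgt.distinctcountries as
      select distinct
             put(Customer_Country, $country.)           as CountryName,
             compress(put(Customer_Country, $country.)) as CCountryName,
             put(Customer_Country, $2.)                 as CountryValue
      from difttgt.customerorderinfo;
quit;
```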
f. Click to return to the Job Editor. Note that the status indicator associated with the SQL Join transformation now shows no errors.
8. Select File Save to save diagram and job metadata to this point.
9. Run the job to generate the control table.
a. Right-click in the background of the job and select Run.
b. Verify that the job runs successfully.
c. Click the Log tab and verify that DIFTTGT.DISTINCTCOUNTRIES is created with 45 observations and 3 variables.
f.
Type DIFT Parameterized Job for Country Reports as the value for the Name
field.
2. Double-click the job DIFT Parameterized Job for Country Reports and it opens in the Job Editor
window.
3. Edit the Extract transformation.
a. Right-click on the Extract transformation and select Properties.
b. Click the Where tab.
c. Type &CtryValue in place of US (in the Expression Text area); be sure that double quotation marks are used.
d. Click
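The double quotation marks matter here: the macro processor resolves &CtryValue only inside double quotes, so the generated WHERE clause changes with each parameter value. For example:

```sas
%let CtryValue = US;

data work.us_only;
   set difttgt.customerorderinfo;
   /* Resolves to: where Customer_Country = "US"
      Single quotes would pass the literal text &CtryValue instead */
   where Customer_Country = "&CtryValue";
run;
```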
Be sure to type the period that separates the parameter name from the rest of the text.
g. Click to close the Summary Statistics Properties window.
5. Select File Save to save diagram and job metadata to this point.
5) Click
d. Click
e. Click
f. Click
g. Click
h. Click to close the DIFT Parameterized Job for Country Reports Properties window.
7. Select File Save to save diagram and job metadata to this point.
The icon for the job object in the Loop Transforms folder is now decorated with an ampersand to
denote that the job is parameterized.
Parameterized jobs can be tested only if all parameters are supplied with default values.
f. Scroll toward the end of the Summary Statistics code area and verify that the correct HTML file name is being generated, as well as the correct title2 text.
g. Click
2. Add control table metadata to the diagram for the process flow.
a. Click the Folders tab.
b. If necessary, expand Data Mart Development Orion Reports Loop Transforms.
c. Drag the DIFT Control Table - Countries table object to the Diagram tab of the Job Editor.
3. Add the Loop transformation to the process flow.
a. Click the Transformations tab.
b. Expand the Control folder and locate the Loop transformation template.
c. Drag the Loop transformation to the Diagram tab of the Job Editor.
d. Connect the DIFT Control Table - Countries table object as input to the Loop transformation.
c. For the Country Name parameter, select CountryName as the value for the Mapped Source
Column.
d. For the Compressed Country Name parameter, select CCountryName as the value for the
Mapped Source Column.
e. For the Country Value parameter, select CountryValue as the value for the Mapped Source
Column.
f. Click
7. Select File Save to save diagram and job metadata to this point.
8. Run the job.
a. Right-click in the background of the job and select Run.
b. Click the Status tab in the Details area. Note that all processes completed successfully.
c. Click
and so on.
For each of these parameter sets, the inner job is executed and this execution results in an
HTML file.
Exercises
The Marketing Department has been asked to examine buying habits of various age groups across the
genders. The same kind of marketing analysis will be applied to each distinct gender/age group
combination. To make this task easier, a request has been made to create a separate SAS table for each of
the distinct gender/age group combinations. You first use the Extract transformation to create one of the
needed tables. This job can then be parameterized and used with the Loop transformations to create the
series of desired tables.
1. Using the Extract Transformation to Create Table for Female Customers Aged 15-30 Years
Create a job that uses the Customer Dimension table to load a new table to contain just the female
customers aged 15-30 years.
Place the job in the Data Mart Development Orion Reports Extract and Summary folder.
Name the job DIFT Populate Female15To30Years Table.
Use the Customer Dimension table as the source table for the job (the metadata for this table can be
found in Data Mart Development Orion Target Data).
Add the Extract transformation to the job and build the following WHERE clause:
Customer_Gender = 'F' and Customer_Age_Group = '15-30 years'
Register the output table from the Extract transformation with the following attributes:
Tab               Attribute   Value
General           Name:
                  Location:
Physical Storage  Library:
                  Name:       Female15To30Years
Run the job and verify that the new table has 12,465 observations and 11 variables.
The final job flow should resemble the following:
New columns for the control table:

Name       Length  Type       To be used for:
GenVal             Character
AgeGroup   12      Character
GdrAgeGrp  20      Character

Column     Expression
GenVal     put(customer_gender,$1.)
AgeGroup
GdrAgeGrp  compress(put(customer_gender,$gender.)||tranwrd(customer_age_group,"-","To"))
Run the job; the control table should have 8 observations and 3 columns. Verify that the data looks appropriate.
Create a table template to be used in the parameterized job.
Place the table in the Data Mart Development Orion Reports Loop Transforms folder.
Name the table object DIFT Table Template for Gender Age Group Table.
Name the physical table &GdrAgeGrp._Customers and have the table created in the DIFT
Orion Target Tables Library as a SAS table.
The table template needs to have the same column specifications as the DIFT Customer
Dimension table.
Table                        Number of Observations  Number of Columns
Female15To30Years_Customers  12465                   11
Female31To45Years_Customers  9263                    11
Female46To60Years_Customers  9295                    11
Female61To75Years_Customers  9266                    11
Male15To30Years_Customers    15261                   11
Male31To45Years_Customers    11434                   11
Male46To60Years_Customers    11502                   11
Male61To75Years_Customers    11468                   11
Objectives
Return Codes
When a job is executed in SAS Data Integration Studio,
a return code for each transformation in the job is
captured in a macro variable. The return code for the job
is set according to the least successful transformation in
the job.
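Conceptually, each generated step tests a return-code macro variable and acts on it. A simplified sketch (the trans_rc name is illustrative, not the exact macro variable SAS Data Integration Studio generates; &syserr is the standard SAS step return code):

```sas
/* Sketch of return-code checking between generated steps */
%let trans_rc = 0;

proc sort data=sashelp.class out=work.sorted;
   by name;
run;
%let trans_rc = &syserr;   /* capture the step's return code */

%macro check_rc;
   %if &trans_rc > 4 %then %do;
      %put ERROR: previous transformation failed (rc=&trans_rc);
      /* a real job could abort, send e-mail, or log status here */
   %end;
%mend check_rc;
%check_rc
```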
Example Actions
The return code can be associated with an action that
performs one or more of these tasks:
• terminate the job or transformation
• call a user-defined SAS macro
• send a status message to a person, a file, or an event broker
• capture job statistics
(Slide example: the Successful, Errors, and Warnings conditions can each trigger an action such as Send Email.)
This demonstration illustrates establishing Status Handling for an SQL Join transformation and for a job.
Also illustrated is the use of the Return Code Check transformation.
1. If necessary, access SAS Data Integration Studio using Bruno's credentials.
a. Select Start All Programs SAS SAS Data Integration Studio 4.2.
b. Verify that the connection profile is My Server.
c. Click to close the Connection Profile window and access the Log On window.
d. Type Bruno as the value for the User ID field and Student1 as the value for the
Password field.
e. Click
5. Establish two successful status handling conditions for the SQL Join transformation.
a. Click the Diagram tab.
b. Right-click on the SQL Join transformation and select Properties.
c. Click the Status Handling tab.
d. Click to add a new condition. By default, a Successful Condition is added with an Action of None.
There are two other conditions that can be tested for: Warnings and Errors.
e. Click
f.
The conditions of Warnings and Errors produce the same list of actions. The Errors condition has one additional option associated with it: Abort.
g. Specify S:\Workshop\dift\reports\SHforSQLJoin.txt for the File Name field.
h. Specify Successful running of SQL Join in Job &jobid for the Message
field.
i. Click
j. Click to add a new condition. A second Successful Condition is added with an Action of None.
k. Click
l.
p. Click
q. Click
6. Select File Save to save diagram and job metadata to this point.
7. Re-run the job.
a. Right-click in the background of the job and select Run.
b. Click the Status tab in the Details area. Note that all processes completed successfully.
c. Click
The notes pertaining to the text file for a successful condition should resemble:
The notes pertaining to the data set for a successful condition should resemble the following:
c. Double-click SHforSQLJoin.txt.
The SAS data set received a new observation each time the job was run.
c. Select File Exit to close SAS Enterprise Guide and do not save any changes.
11. Establish two successful status handling conditions for the job.
a. Right-click in the background of the job and select Properties.
b. On the General tab, change the name to DIFT Pop Cust Dim Table (SH).
Select Send Job Status in the Action area. The Action Options window opens.
i.
Click
j. Click to close the DIFT Pop Cust Dim Table (SH) Properties window.
12. Select File Save to save diagram and job metadata to this point.
13. Re-run the job.
a. Right-click in the background of the job and select Run.
b. Click the Status tab in the Details area. Note that all processes completed successfully.
c. Click
d. View the Log for the executed Job. The new job status data set is created with one observation.
The SAS data set received a new observation each time the job was run.
The SAS data set can be used to gather some total time processing statistics for this job.
c. Select File Exit to close SAS Enterprise Guide and do not save any changes.
17. Close the DIFT Pop Cust Dim Table (SH) job and save any changes.
This demonstration shows the use of the Return Code Check transformation for transformations such as
the Extract transformation that do not have a Status Handling tab.
1. Locate and open the Report on US Customer Order Information job.
a. Click the Folders tab.
b. Expand Data Mart Development Orion Reports Extract and Summary.
c. Right-click DIFT Create Report for US Customer Order Information and select Copy.
d. Expand Data Mart Development Orion Reports Status Handling.
e. Right-click the Status Handling folder and select Paste.
f. Right-click the pasted job DIFT Create Report on US Customer Order Information and select Properties.
g. On the General tab, change the name of the job to DIFT Create Report on US Cust
Info (SH).
h. Click
i. Right-click DIFT Create Report for US Cust Order Info (SH) and select Open.
2. Right-click on the Extract transformation and select Properties. Note that the properties for this
transformation do not have a Status Handling tab.
The Return Code Check transformation can be used to take advantage of the status handling features
for those transformations that have no Status Handling tab. The Return Code Check transformation
captures the status of the previous transformation in the process flow, in this case, the Extract
transformation.
3. Click
4. Add the Return Code Check transformation to the process flow diagram.
a. Click the Transformations tab.
b. Expand the Control group.
c. Locate the Return Code Check transformation.
d. Drag the Return Code Check transformation into the process flow diagram.
e.
f.
h. Click to close the Action Options window. The Return Code Check Properties window shows this one condition.
i. Click
Objectives
c. Click to close the Connection Profile window and access the Log On window.
d. Type Bruno as the value for the User ID field and Student1 as the value for the
Password field.
e. Click
g. Click
Select DIFT Orion Target Tables Library as the value for the Library field.
j.
k. Click
l.
Expand the Data Mart Development Orion Source Data folder on the Folders tab.
m. From the Orion Source Data folder, select DIFT NEWORDERTRANS table object.
n. Click
o. Click
g. Click
Select DIFT Orion Target Tables Library as the value for the Library field.
j.
k. Click
l.
Expand the Data Mart Development Orion Source Data folder on the Folders tab.
m. From the Orion Source Data folder, select DIFT NEWORDERTRANS table object.
n. Click
o. Click
g. Click
5. Add source table metadata to the diagram for the process flow.
a. Click the Folders tab.
b. Expand Data Mart Development Orion Source Data.
c. Drag the DIFT NEWORDERTRANS table to the Diagram tab of the Job Editor.
6. Add the Data Validation transformation to the process flow.
a. Click the Transformations tab.
b. Expand the Data folder and locate the Data Validation transformation template.
c. Drag the Data Validation transformation to the Diagram tab of the Job Editor.
d. Connect the DIFT NEWORDERTRANS table object to the Data Validation transformation.
e. Right-click DIFT NEWORDERTRANS and select Properties.
f.
g. Click
5) Click
6) Click
10) Click
11) Click
16) Click
f.
g. Click
Click
j.
k. Click
l.
Click
m. Keep the default Action if invalid value, Move row to error table.
n. Click to close the Invalid Values window. The Invalid Values tab shows the following:
s. Click to close the Missing Values window. The Missing Values tab shows the following:
t. Click
9. Add a sticky note to the job. (A sticky note is a way to visually document a job.)
a. Click
b. Drag the sticky note and place it under the Data Validation transformation.
c. Double-click the sticky note to expand it and add some text.
d. Type The Invalid_Products table is populated through the execution
of the Data Validation transformation. as the text for the sticky note.
10. Select File Save to save diagram and job metadata to this point.
c. When you are finished viewing the DIFT Valid Products data set, close the View Data
window by selecting File Close.
13. View the DIFT Invalid Products table.
a. Click the Folders tab.
b. Expand Data Mart Development Orion Reports Data Validation.
c. Right-click the DIFT Invalid Products table object and select Open.
d. When you are finished viewing the Invalid Products data set, close the View Data window
by selecting File Close.
c. When you are finished viewing the file, select File Exit to close it.
15. View ValidInvalidProdEntryToFile.txt.
a. Open Windows Explorer and navigate to S:\Workshop\dift\reports.
b. Open the file ValidInvalidProdEntryToFile.txt to view the generated exception report.
c. When you are finished viewing the file, select File Exit to close it.
16. View the data set difttgt.proddataexcept.
a. Copy the LIBNAME statement for the DIFT Orion Target Tables Library.
1) Click the Folders tab.
2) Expand Data Mart Development Orion Target Data.
3) Right-click on the DIFT Orion Target Tables Library object and select View Libname.
4) Right-click in the background of the Display Libname window and select Select All.
5) Right-click in the background of the Display Libname window and select Copy.
6) Click
4) Click the Output tab to view the information in the data set.
Exercises
Create metadata for the target table named DIFT Valid Customers.
The target table should be physically stored in the DIFT Orion Target Tables Library with a
name of Valid_Customers.
The table object should contain the exact same columns as the DIFT Customer
Dimension table object found in the Data Mart Development Orion Target Data
folder.
The target table object should end up in the Data Mart Development Orion Reports
Data Validation folder
Create metadata for the target table named DIFT Invalid Customers.
The target table should be physically stored in the DIFT Orion Target Tables Library with a
name of Invalid_Customers.
The table object should contain the exact same columns as the DIFT Customer
Dimension table object found in the Data Mart Development Orion Target Data
folder.
The target table object should end up in the Data Mart Development Orion Reports
Data Validation folder
Create the job that will load Valid Customers from the DIFT Customer Dimension table using the
Data Validation transformation.
How many rows were moved to the error table because the value for Customer_Type was invalid?
Were missing values found for Customer ID?
Were there any duplicate values found for Customer_Name and Customer_Birth_Date?
Objectives
Transpose Transformation
The Transpose transformation
creates an output data set by
restructuring the values in a SAS
data set and transposing selected
variables into observations.
The Transpose transformation is an
interface to the TRANSPOSE
procedure.
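Since the transformation wraps PROC TRANSPOSE, its output can be reasoned about with a plain example (the data set and variable names below are illustrative):

```sas
/* Rotate one measure into a wide layout, one column per quarter */
proc transpose data=work.sales_long
               out=work.sales_wide (drop=_name_)
               prefix=Qtr;
   by Employee_ID;       /* one output row per employee (input must be sorted) */
   id Quarter;           /* column names come from Quarter values */
   var Sales_Amount;     /* values being transposed */
run;
```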
Sort Transformation
The Sort transformation provides an
interface for the SORT procedure.
The transformation can be used to
read data from a source, sort it, and
write the sorted data to a target in a
SAS Data Integration Studio job.
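The generated step is essentially a PROC SORT call. A sketch with illustrative data set names:

```sas
/* Equivalent of a Sort transformation: order rows, drop duplicate keys */
proc sort data=difttgt.customerorderinfo
          out=work.sorted_orders
          nodupkey;              /* optional: keep first row per key */
   by Customer_ID descending Order_Date;
run;
```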
Append Transformation
The Append transformation can be
used to create a single target by
appending or concatenating two or
more sources.
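In generated code this concatenation is typically a SET statement listing the sources (or PROC APPEND for two tables); a sketch with illustrative names:

```sas
/* Concatenating sources, as the Append transformation does */
data work.all_staff;
   set work.staff_partial work.addl_staff;   /* rows of the first, then the second */
run;
```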
Rank Transformation
The Rank transformation uses the
RANK procedure to rank one or
more numeric variables in the source
and store the ranks in the target.
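Because the transformation is an interface to PROC RANK, a plain example conveys the idea (data set and variable names are illustrative):

```sas
/* Rank salaries within each gender; ties get the average rank */
proc rank data=work.staff out=work.staff_ranked ties=mean;
   by Gender;               /* requires input sorted by Gender */
   var Salary;              /* numeric variable to rank */
   ranks Salary_Rank;       /* new column holding the ranks */
run;
```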
c. Click to close the Connection Profile window and access the Log On window.
d. Type Bruno as the value for the User ID field and Student1 as the value for the
Password field.
e. Click
2. Verify that the DIFT STAFF_PARTIAL metadata table object exists and has data loaded.
a. Click the Folders tab.
b. Navigate to the Data Mart Development Orion Source Data folder.
c. Right-click DIFT STAFF_PARTIAL and select Open.
f.
Click
1) Navigate to S:\Workshop\dift\data.
2) Change the Type field to Delimited Files (*.csv).
3) Select AddlStaff.csv.
4) Click
Previewing the file shows that the first record contains column names and that the values are comma-delimited and not space-delimited.
h. Click
i.
Click
1) Clear Blank.
2) Click Comma.
j.
Click
1) Click . The Auto Fill Columns window opens.
2) Type 2 (the number two) as the value for the Start record field.
3) Click
to close the Auto Fill Columns window. The top portion of the Column
Definitions window populates with three columns, one of them numeric and two of them
character.
4) Click
5) Select Get the column names from column headings in this file.
6) Verify that 1 is set as the value for The column headings are in the file
record field.
7) Click
k. Change the length, informat, and format for the Job_Title column.
1) In the top portion of the Column Definitions window, locate the Job_Title column.
2) Update the length to 25.
3) Update the informat to $25..
4) Update the format to $25..
l.
m. Click
n. Click
. The metadata object for the external file is found on the Checkouts tab.
4. Create a target table object that is a duplicate of metadata from DIFT STAFF_PARTIAL.
a. Click the Folders tab.
b. Navigate to Data Mart Development Orion Source Data.
c. Right-click DIFT STAFF_PARTIAL and select Copy to Folder.
d. Navigate to Data Mart Development Orion Reports Transpose and Rank.
e. Click
f.
Click
g. Click
7. Add source table metadata to the diagram for the process flow.
a. Click the Folders tab.
b. Expand Data Mart Development Orion Source Data.
c. Drag the DIFT STAFF_PARTIAL table object to the Diagram tab of the Job Editor.
d. Collapse Orion Source Data.
e. Expand Data Mart Development Orion Reports Transpose and Rank.
f.
Drag the DIFT Additional Staff Information external file object to the Diagram tab of the Job
Editor.
18. Select File Save to save diagram and job metadata to this point.
19. Add the Rank transformation to the process flow.
a. Click the Transformation tab.
b. Expand the Data folder and locate the Rank transformation template.
c. Drag the Rank transformation to the Diagram tab of the Job Editor.
20. Connect the DIFT Full Staff table object to the Rank transformation.
21. Select File Save to save diagram and job metadata to this point.
The process flow diagram should resemble the following:
d. Click
23. Change the name of the work table that is output for the File Reader transformation.
a. Right-click on the green temporary table object associated with the File Reader transformation
and select Properties.
b. Click the Physical Storage tab.
c. Type FileReader as the value for the Name field.
d. Click
e. Click
25. Change the name of the work table that is output for the first Sort transformation.
a. Right-click on the green temporary table object associated with the Sort transformation and select
Properties.
b. Click the Physical Storage tab.
c. Type SortedExtFile as the value for the Name field.
d. Click
26. Change the name of the work table that is output for the Transpose transformation.
a. Right-click on the green temporary table object associated with the Transpose transformation and
select Properties.
b. On the General tab, remove the description.
c. Click the Physical Storage tab.
d. Type TransposedData as the value for the Name field.
e. Click
e) Click to move all columns from the DIFT Full Staff table to the Selected area.
3) Map Job_Title from Source table area to Job_Title in the Target table
area.
3) Establish ColNames for the Select a column for output column names area.
a) Click in the Select a column for output column names area to open the Select a Data Source Item window.
b) Select ColNames.
4) Establish Job_Title for the Select columns whose values define groups
of records to transpose area.
a) Click in the Select columns whose values define groups of records to transpose area to open the Select Data Source Items window.
b) Select Job_Title and then click
c) Click to close the Select Data Source Items window. The Select columns whose values define groups of records to transpose area updates as displayed:
d. Click
29. Change the name of the work table that is output for the Rank transformation.
a. Right-click on the green temporary table object associated with the Rank transformation and
select Properties.
b. Click the Physical Storage tab.
c. Type RankedData as the value for the Name field.
d. Click
6) Reorder the columns so that Salary and RankedSalary are the last two columns (click on the row number and drag a column to the desired position).
7) Verify that all columns are mapped properly (the RankedSalary column will not have a one-to-one column mapping).
8) Right-click on the RankedSalary column and select Propagate From Targets To End.
2) Select RankSalary in the Available target columns area, and then click to move RankSalary to the Selected target columns area.
d. Click
31. Right-click in the background of the job and select Settings Automatically Propagate Columns (effectively disabling automatic column propagation for this job from this point on).
32. Add the Sort transformation to the process flow.
a. Click the Transformation tab.
b. Expand the Data folder and locate the Sort transformation template.
c. Drag the Sort transformation to the Diagram tab of the Job Editor.
f.
Click
c) Click to close the Select Data Source Items window. The Select other columns to print area updates as displayed:
e. Click
41. Select File Save to save diagram and job metadata to this point.
42. Run the job.
a. Right-click in background of the job and select Run.
b. Click the Status tab in the Details area. Note that all processes completed successfully.
c. Click
A table of potential customers was defined in metadata (DIFT Contacts). This demonstration creates
a job that initially reports on this data by creating one-way frequency reports for two of the columns.
The job is then updated by adding the Apply Lookup Standardization transformation to apply two
predefined standardization schemes. The final step is reporting on the newly transformed data. The final
process flow should resemble the following:
to close the Connection Profile window and access the Log On window.
d. Type Bruno as the value for the User ID field and Student1 as the value for the
Password field.
e. Click
2. Verify that the DIFT Contacts metadata table object exists and has data available.
a. Click the Folders tab.
b. Navigate to the Data Mart Development Orion Source Data folder.
c. Right-click on DIFT Contacts and select Open.
g. Click
4. Add source table metadata to the diagram for the process flow.
a. Click the Folders tab.
b. Navigate to the Data Mart Development Orion Source Data folder.
c. Drag the DIFT Contacts table object to the Diagram tab of the Job Editor.
5. Add the One-Way Frequency transformation to the process flow.
a. Click the Transformation tab.
b. Expand the Analysis folder and locate the One-Way Frequency transformation template.
c. Drag the One-Way Frequency transformation to the Diagram tab of the Job Editor.
6. Connect the DIFT Contacts table object to the One-Way Frequency transformation.
7. Select File Save to save diagram and job metadata to this point.
c) Click to close the Select Data Source Items window. The Select columns to perform a one-way frequency distribution on area updates as displayed:
4) Type nocum nopercent as the value for the Specify other options for TABLES statement area.
5) Select Titles and footnotes in the selection pane.
6) Type OS and Database Frequency Counts as the value for the Heading 1.
c. Click
9. Select File Save to save diagram and job metadata to this point.
10. Run the job.
a. Right-click in background of the job and select Run.
b. Click the Status tab in the Details area. Note that all processes completed successfully.
c. Click
A quick glance verifies the initial suspicion that the OS and Database columns have not had
any standards imposed for data values.
11. Select File Close to close the job editor.
Two schemes have been pre-built for these types of column data. The next steps will
- establish the necessary options to access these schemes
- add the Apply Lookup Standardization transformation
- re-run the One-Way Frequency task against the standardized table.
1. Select Tools Options.
2. Select the Data Quality tab.
3. Verify that the following fields are set appropriately in the Data Quality area:
Default Locale: ENUSA
DQ Setup Location:
C:\Program Files\SAS\SASFoundation\9.2\dquality\sasmisc\dqsetup.txt
Scheme Repository Type:
Scheme Repository: C:\Program Files\DataFlux\QltyKB\CI\2008A\scheme
4. Click
11. Select File Save to save diagram and job metadata to this point.
For the OS column, select Phrase as the value for the Apply Mode field.
g. Click
13. Select File Save to save diagram and job metadata to this point.
c) Click to close the Select Data Source Items window. The Select columns to perform a one-way frequency distribution on area updates as displayed:
15. Select File Save to save diagram and job metadata to this point.
16. Run the job.
a. Right-click in background of the job and select Run.
b. Click the Status tab in the Details area. Note that all processes completed successfully.
c. Click
Exercises
b. Add source table metadata to the diagram for the process flow.
1) Select the Data Mart Development Orion Target Data folder.
2) Drag the DIFT Customer Dimension table object to the Diagram tab of the Job Editor.
c. Add the Extract transformation to the process flow.
1) Click the Transformation tab.
2) Expand the Data folder and locate the Extract transformation template.
3) Drag the Extract transformation to the Diagram tab of the Job Editor. Place it next to the
DIFT Customer Dimension table object.
4) Connect the DIFT Customer Dimension table object to the Extract transformation.
d. Add the target table to the process flow.
1) Right-click on the green temporary table object associated with the Extract transformation
and select Register Table.
2) Type DIFT Customers - Females 15-30 Years as the value for the Name field.
3) Verify that the Location is set to
/Data Mart Development/ Orion Reports/Extract and Summary.
4) Click the Physical Storage tab.
5) Click
8) Click
e. Select File Save to save diagram and job metadata to this point.
f. Specify properties for the Extract transformation.
1) Right-click on the Extract transformation and select Properties.
2) Click the Where tab.
3) Construct the following expression:
12) No column metadata will be selected from existing metadata objects. Click
j)
k) Click
l)
14) Click
15) Review the metadata listed in the finish window and click
c. Add source table metadata to the diagram for the process flow.
1) Expand Data Mart Development Orion Target Data.
2) Drag the DIFT Customer Dimension table object to the Diagram tab of the Job Editor.
d. Add the SQL Join transformation to the process flow.
1) Click the Transformation tab.
2) Expand the Data folder and locate the SQL Join transformation template.
3) Drag the SQL Join transformation to the Diagram tab of the Job Editor. Place it next to the
DIFT Customer Dimension table object.
4) Connect the DIFT Customer Dimension table object to the SQL Join transformation.
5) Right-click on the SQL Join transformation and select Ports Delete Input Port. The status
indicator now shows no errors.
e. Add the target table to the process flow.
1) Right-click on the green icon (output table icon) for the SQL Join transformation and select
Replace.
2) Expand Data Mart Development Orion Reports Loop Transforms.
3) Click the DIFT Control Table Gender Age Groups table object.
4) Click
put(customer_gender,$1.)
AgeGroup
GdrAgeGrp
compress(put(customer_gender,$gender.)||
tranwrd(customer_age_group,"-","To"))
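Read together, the expressions above amount to a query of this general shape. The source table name and the first column alias are illustrative assumptions; only the expressions themselves come from the steps above:

```sas
/* Illustrative sketch of the SQL behind the control table:
   one row per distinct gender / age-group combination.     */
proc sql;
   create table difttgt.gender_age_groups as
   select distinct
      put(Customer_Gender, $1.)                   as GenVal,
      Customer_Age_Group                          as AgeGroup,
      compress(put(Customer_Gender, $gender.) ||
               tranwrd(Customer_Age_Group, "-", "To")) as GdrAgeGrp
   from difttgt.customer_dim;
quit;
```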
g. Select File Save to save diagram and job metadata to this point.
h. Run the job to generate the control table.
1) Right-click in background of the job and select Run.
2) Verify that the job runs successfully.
3) Click the Log tab and verify that DIFTTGT.DISTINCTCOUNTRIES is created with 45
observations and three variables.
i. Create a table object template that will be used to generate the individual gender-age group tables.
1) Click the Folders tab.
2) Expand Data Mart Development Orion Reports Loop Transforms.
3) Verify that the Loop Transforms folder is selected.
4) Select File New Table.
5) Type DIFT Table Template for Gender Age Group Table as the value for the
Name field.
6) Verify that the Location is set to
/Data Mart Development/ Orion Reports/ Loop Transforms.
7) Click
16) Click
17) Review the metadata listed in the finish window and click
j. Define the parameterized job metadata object to load the holding table.
1) Click the Folders tab.
2) Expand Data Mart Development Orion Reports Loop Transforms.
3) Verify that the Loop Transforms folder is selected.
4) Select File New Job. The New Job window opens.
5) Type DIFT Parameterized Job for Gender Age Group Tables as the value
for the Name field.
6) Verify that the Location is set to
/Data Mart Development/ Orion Reports/ Loop Transforms.
7) Click
8) Add source table metadata to the diagram for the process flow.
a) Expand Data Mart Development Orion Target Data.
b) Drag the DIFT Customer Dimension table object to the Diagram tab of the Job Editor.
9) Add the Extract transformation to the process flow.
a) Click the Transformation tab.
b) Expand the Data folder and locate the Extract transformation template.
c) Drag the Extract transformation to the Diagram tab of the Job Editor. Place it next to the
DIFT Customer Dimension table object.
d) Connect the DIFT Customer Dimension table object to the Extract transformation.
10) Add the target table to the process flow.
a) Expand Data Mart Development Orion Reports Loop Transforms.
b) Drag DIFT Table Template for Gender Age Group Tables table object to the Diagram
tab of the Job Editor.
c) Right-click on the output table object (green icon) for the Extract transformation and
select Delete.
d) Connect the Extract transformation to the DIFT Table Template for Gender Age Group
Tables table object.
11) Select File Save to save diagram and job metadata to this point.
12) Specify properties for the Extract transformation.
a) Right-click on the Extract transformation and select Properties.
b) Select the Where tab.
c) In the bottom portion of the Where tab, click the Data Sources tab.
d) Expand the CustDim table.
e) Select Customer_Gender.
f) Click
g) In Expression Text area, type ="&genval" AND (that is, an equals sign, the text
&genval, a space, the text AND, and another space).
h) On the Data Sources tab, double-click Customer_Age_Group to add this to the
Expression Text area.
i) In the Expression Text area, type ="&AgeGroup" following the
Customer_Age_Group column.
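At run time, the expression built in the steps above resolves to a WHERE clause of this general form, with the macro variable values supplied by the job parameters:

```sas
/* Sketch of the resolved subsetting condition */
where Customer_Gender = "&genval" and Customer_Age_Group = "&AgeGroup";
```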
j) Click
13) Select File Save to save diagram and job metadata to this point.
14) Define job parameters.
a) Right-click in the background of the job and select Properties.
b) Click the Parameters tab.
c) Click
u) Click
v) Click
w) Click
15) Select File Save to save diagram and job metadata to this point.
16) Run the job.
a) Right-click in background of the job and select Run.
b) Click the Status tab in the Details area. Note that all processes completed successfully.
c) Click
d) View the Log for the executed Job. Specifically, locate the note about the
FEMALE15TO30YEARS_CUSTOMERS table.
l. Add control table metadata to the diagram for the process flow.
1) Click the Folders tab.
2) Expand Data Mart Development Orion Reports Loop Transforms.
3) Drag the DIFT Control Table Gender Age Groups table object to the Diagram tab of the
Job Editor.
m. Add the Loop transformation to the process flow.
1) Click the Transformations tab.
2) Expand the Control folder and locate the Loop transformation template.
3) Drag the Loop transformation to the Diagram tab of the Job Editor.
4) Connect the DIFT Control Table - Gender-Age Groups table object as input to the Loop
transformation.
n. Add the parameterized job to the process flow.
1) Click the Folders tab.
2) Expand Data Mart Development Orion Reports Loop Transforms.
3) Drag the DIFT Parameterized Job for Gender Age Group Tables job to the Diagram tab of
the Job Editor.
6) Click
q. Select File Save to save diagram and job metadata to this point.
r. Run the job.
1) Right-click in background of the job and select Run.
2) Click the Status tab in the Details area. Note that all processes completed successfully.
3) Click
12) Expand the Data Mart Development Orion Source Data folder on the Folders tab.
13) From the Orion Source Data folder, select DIFT Customer Dimension table object.
14) Click
15) Click
17) Review the metadata listed in the finish window and click
12) Expand the Data Mart Development Orion Source Data folder on the Folders tab.
13) From the Orion Source Data folder, select DIFT Customer Dimension table object.
14) Click
15) Click
17) Review the metadata listed in the finish window and click
d. Add source table metadata to the diagram for the process flow.
1) On the Folders tab, navigate to the Data Mart Development Orion Target Data folder.
2) Drag the DIFT Customer Dimension table to the Diagram tab of the Job Editor.
e. Add the Data Validation transformation to the process flow.
1) Click the Transformation tab.
2) Expand the Data folder and locate the Data Validation transformation template.
3) Drag the Data Validation transformation to the Diagram tab of the Job Editor.
4) Connect the DIFT Customer Dimension table object to the Data Validation transformation.
d) Expand the DIFT Customer Types table in the Data Mart Development
Data folder and select Customer_Type.
e) Click
f) Select Change value to as the value for the Action if invalid field.
g) Type Unknown Customer Type as the value for the New Value field.
The following is populated in the Invalid Values window:
h) Click
c) Click
7) Click
h. Add a sticky note to the job. (A sticky note is a way to visually document within a job.)
1) Click
2) Drag the sticky note and place it under the Data Validation transformation.
3) Double-click the sticky note to expand it and add some text.
4) Type The Invalid_Customers table is populated through the
execution of the Data Validation transformation. as the text for the
sticky note.
3) When you are finished viewing the file, select File Exit to close it.
l. View the DIFT Valid Customers table.
1) Click the Diagram tab of the Job Editor window.
2) Right-click on the DIFT Valid Customers table and select Open. Identify the duplicate
observations.
3) When you are finished viewing the DIFT Valid Customers data set, close the View
Data window by selecting File Close.
m. Select File Save to save the job metadata.
n. Select File Close to close the job editor.
Questions:
How many rows were moved to the error table because the value for Customer_Type was invalid?
None
Were missing values found for Customer ID?
No
Were there any duplicate values found for Customer_Name and Customer_Birth_Date?
Yes, four pairs.
b. Add source table metadata to the diagram for the process flow.
1) Click the Folders tab.
2) Navigate to the Data Mart Development Orion Source Data folder.
3) Drag the DIFT Catalog_Orders table object to the Diagram tab of the Job Editor.
c. Add the One-Way Frequency transformation to the process flow.
1) Click the Transformation tab.
2) Expand the Analysis folder and locate the One-Way Frequency transformation template.
3) Drag the One-Way Frequency transformation to the Diagram tab of the Job Editor.
d. Connect the DIFT Catalog_Orders table object to the One-Way Frequency transformation.
e. Select File Save to save diagram and job metadata to this point.
f. Specify properties for the One-Way Frequency transformation.
1) Right-click on the One-Way Frequency transformation and select Properties.
2) Click the Options tab.
3) Verify that Assign columns is selected in the selection pane.
4) Establish CATALOG in the Select columns to perform a one-way frequency
distribution on area.
a) Click in the Select columns to perform a one-way frequency distribution on area to open the Select Data Source Items window.
b) Select CATALOG and then click
c) Click
g. Select File Save to save diagram and job metadata to this point.
h. Run the job.
1) Right-click in background of the job and select Run.
2) Click the Status tab in the Details area. Note that all processes completed successfully.
3) Click
n. Select File Save to save diagram and job metadata to this point.
o. Specify properties for the Apply Lookup Standardization transformation.
1) Right-click on the Apply Lookup Standardization transformation and select Properties.
2) Click the Standardizations tab.
3) For the CATALOG column, select DIFT Catalog Orders.sch.qkb as the value for the
Scheme field.
4) For the CATALOG column, select Phrase as the value for the Apply Mode field.
5) For the OS column, select Phrase as the value for the Apply Mode field.
6) Click
p. Select File Save to save diagram and job metadata to this point.
q. Specify properties for the One-Way Frequency transformation.
1) Right-click on the One-Way Frequency transformation and select Properties.
2) Click the Options tab.
3) Verify that Assign columns is selected in the selection pane.
r. Select File Save to save diagram and job metadata to this point.
s. Run the job.
1) Right-click in background of the job and select Run.
2) Click the Status tab in the Details area. Note that all processes completed successfully.
3) Click
Loader Transformations
SAS Data Integration Studio provides three specific transformations to load data.
These Loader transformations are designed to output to permanent, registered
tables (that is, tables that are available in the Folder or Inventory Tree).
Loaders can do the following:
- create and replace tables
- maintain indexes
- do updates and appends
- be used to maintain constraints
No Table Loader?
SAS Data Integration Studio data transformations can perform a simple load of
that transformation's output table. The transformation will drop and then
replace the table.
Important Step
An important step in an ETL process usually involves loading data into a
permanent physical table that is structured to match your data model. The
designer or builder of an ETL process flow must identify the type of load that
the process requires in order to
- append all source data to any previously loaded data
- replace all previously loaded data with the source data
- use the source data to update and add to the previously loaded data based on specific key column(s)
Load Style
In SAS Data Integration Studio, the Table Loader
transformation can be used to perform any of the three
load types (the Load style field on the Load Technique
tab).
The APPEND procedure with the FORCE option is the default. If the source is a large table and the target
is in a database that supports bulk load, PROC APPEND can take advantage of the bulk-load feature.
Consider bulk loading the data into database tables by using the optimized SAS/ACCESS engine bulk
loaders. It is recommended that you use native SAS/ACCESS engine libraries instead of ODBC libraries
or OLE DB libraries for relational database data. SAS/ACCESS engines have native access to the
databases and have superior bulk-loading capabilities.
PROC SQL with the INSERT statement performs well when the source table is small (because the
overhead needed to set up bulk loading is not incurred). PROC SQL with the INSERT statement adds
one row at a time to the database.
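As a minimal sketch of the two techniques (library and table names are assumptions):

```sas
/* Default append technique: PROC APPEND with FORCE. When the
   target is a database table, the SAS/ACCESS engine can use
   its bulk-load facility.                                   */
proc append base=dblib.target data=work.big_source force;
run;

/* Small-source alternative: PROC SQL INSERT adds rows
   one at a time, with no bulk-load setup overhead.     */
proc sql;
   insert into dblib.target
      select * from work.small_source;
quit;
```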
The Replace load style offers these choices:
- Entire table
- All rows using delete (simulating truncate)
- All rows using truncate: uses PROC SQL with TRUNCATE to remove all rows (only available for some databases)
When Entire table is selected, the table is removed and disk space is freed. Then the table is re-created
with 0 rows. Consider using this option unless your security requirements restrict table deletion
permissions (a restriction that is commonly imposed by a database administrator on database tables).
Also, avoid this method if the table has any indexes or constraints that SAS Data Integration Studio
cannot re-create from metadata (for example, check constraints).
If available, consider using All rows using truncate. Both All rows using selections enable you to
keep all indexes and constraints intact during the load. By design, using TRUNCATE is the quickest way
to remove all rows. The DELETE * syntax also removes all rows; however, based on the database and
table settings, this choice can incur overhead that will degrade performance. The database administrator
or database documentation should be consulted for a comparison of the two techniques.
Caution: When using All rows using delete repeatedly to clear a SAS table, the size of that
table should be monitored over time. All rows using delete only performs logical deletes for SAS
tables; therefore, a table's physical size will grow and the increased size can negatively affect
performance.
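The two All rows using techniques correspond to SQL statements such as these (the table name is an assumption):

```sas
proc sql;
   /* All rows using truncate: the quickest way to remove
      every row (supported only for some database tables) */
   truncate table dblib.sales;

   /* All rows using delete: removes every row; for SAS tables
      this is a logical delete, so physical size can grow     */
   delete from dblib.sales;
quit;
```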
When Modify by Column(s) is selected, the Match by Column(s) group box, which
enables you to select columns, is enabled.
When Modify Using Index is selected, the Modify Using Index group box, which enables
you to select an index, is enabled. Also, the Modify Using Index group box has a check box
enabled, Return to the top of the index for duplicate values coming
from the input data.
The options Modify by Columns and Modify using Index have the added benefit of being able to take
unmatched records and add them to the target table during the same single pass through the source table.
Of these three choices, the DATA step MODIFY with KEY= method often outperforms the other update
methods in tests conducted on loading SAS tables. The DATA step MODIFY with KEY= method can also
perform adequately for database tables when indexes are used.
When the SQL procedure with the WHERE or SET statements is used, performance varies. Neither of
these statements in PROC SQL requires data to be indexed or sorted, but indexing on the key column(s)
can greatly improve performance. Both of these statements use WHERE processing to match each row of
the source table with a row in the target table.
The update technique chosen should depend on the percentage of rows being updated. If the majority of
target records are being updated, the DATA step with MERGE (or UPDATE) might perform better than
the DATA step with MODIFY BY or MODIFY KEY= or PROC SQL because MERGE makes full use of
record buffers.
Performance results can be hardware and operating environment dependent, so you should consider
testing more than one technique.
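A minimal sketch of the DATA step MODIFY with KEY= technique. Table, index, and variable names are assumptions; the master table must have an index on the key variable:

```sas
/* Illustrative sketch: apply transaction rows to a master table.
   Rows with matching keys are updated in place; unmatched
   transaction rows are added in the same pass.               */
data diftlib.master;
   set work.trans;                    /* read each transaction row    */
   modify diftlib.master key=Emp_ID;  /* look up matching master row  */
   if _iorc_ = 0 then
      replace;                        /* key found: update the row    */
   else do;
      _error_ = 0;                    /* clear the failed-lookup flag */
      output;                         /* key not found: append row    */
   end;
run;
```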
Keys
Several transformations available in SAS Data Integration Studio, including the
Table Loader transformation, can take advantage of different types of keys that
can be defined for tables. This section simply defines:
- Primary Keys
- Foreign Keys
- Unique Keys
- Surrogate Keys
8.3 Table Properties and Load Techniques of the Table Loader Transformation
Integrity constraints preserve the consistency and correctness of stored data. They restrict the data values
that can be updated or inserted into a table. Integrity constraints can be specified at table creation time or
after data already exists in the table. In the latter situation, all data are checked to verify that they satisfy
the candidate constraint before the constraint is added to the table. Integrity constraints are enforced
automatically by the SAS System for each add, update, and delete of data to the table containing the
constraint(s). Specifying constraints is the user's responsibility.
There are five basic types of integrity constraints:
Not Null (Required Data)
Check (Validity Checking)
Unique (Uniqueness)
Primary Key (Unique and Not Null)
Foreign Key (Referential)
The first four types of constraints are referred to as "general constraints" in this document. Foreign keys
and primary keys that are referenced by one or more foreign keys are referred to as "referential
constraints". Note that a primary key alone is insufficient for referential integrity. Referential integrity
requires a primary key and a foreign key.
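Integrity constraints can be added to an existing SAS table with PROC DATASETS, for example (library, table, column, and constraint names here are illustrative):

```sas
/* Illustrative sketch: add a primary key and a NOT NULL
   constraint to an existing table. Existing rows are checked
   against each constraint before it is added.               */
proc datasets lib=diftlib nolist;
   modify customer_dim;
   ic create pk_cust = primary key (Customer_ID);
   ic create nn_name = not null (Customer_Name);
quit;
```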
Indexes
An index is an optional file that you can create for a SAS data file that does the following:
- points to observations based on the values of one or more key variables
- provides direct access to specific observations
An index locates an observation by value.
Business Scenario
The SAS data set orion.sales_history is often queried with a WHERE statement.
(Figure: partial listing of orion.sales_history. Columns include Customer_ID,
Order_ID, Order_Type, Product_ID, and Product_Group; Product_Group values
include Eclipse Clothing, Bathing Suits, Darts, Running Clothes, Shoes,
Knitwear, Golf, Tents, and Shorts.)
Business Scenario
You need to create three indexes on the most frequently used subsetting columns.

Index Name      Index Variables
Customer_ID     Customer_ID
Product_Group   Product_Group
SaleID          Order_ID, Product_ID
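Outside of the Table Loader, these three indexes could be created with PROC DATASETS. A simple index takes the name of its variable; the composite index SaleID names two variables:

```sas
proc datasets lib=orion nolist;
   modify sales_history;
   index create Customer_ID;                     /* simple index    */
   index create Product_Group;                   /* simple index    */
   index create SaleID = (Order_ID Product_ID);  /* composite index */
quit;
```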
Partial listing of orion.sales_history:

Customer_ID  Employee_ID
14958        121031
14844        121042
14864        99999999
14909        120436
14862        120481
14853        120454
14838        121039
14842        121051
14815        99999999
14797        120604
...
Customer_ID Simplified Index

Key      Value
4006     17(85)
4021     17(89)
4059     17(90)
4063     17(80, 86)
...
14958    1(1, 24)
14972    1(14)
Buffers
(Figures: without an index, all data pages must be loaded into the input SAS
data buffers to satisfy a WHERE condition on Employee_ID; with an index on
Employee_ID, only the necessary pages are loaded.)
Condition Settings
The Constraint Condition and Index Condition options that are available depend
on the load technique specified. The choices translate to three different tasks:
- Put on
- Take off
- Leave as is
Business
Key
Primary
Key
Foreign
Key
9-5
9-6
Business Keys
Often the business key in a dimension table can function as the primary key of that table.
(Slides: dimension attributes, such as a customer's country name, can change over time; preserving those changes supports historical reporting.)

Type 1: No History

The existing row for the employee is simply overwritten. Before the update:

Emp_ID (PK)   Emp_Name   Year   Salary
1649          K Munch    2004   $42,000
After the Type 1 update, only the new values remain:

Emp_ID (PK)   Emp_Name   Year   Salary
1649          K Munch    2005   $45,000
Keeping both rows preserves history, but Emp_ID alone can no longer serve as the primary key:

Emp_ID   Emp_Name   Year   Salary
1649     K Munch    2004   $42,000
1649     K Munch    2005   $45,000
Adding an effective datetime to the primary key makes each historical row unique, and the superseded row is closed out with an end datetime:

Emp_ID   Effective DateTime   Emp_Name   Year   Salary    End DateTime
(PK)     (PK)
1649     01Jan2004 12:00PM    K Munch    2004   $42,000   31Dec2004 11:59PM
1649     01Jan2005 12:00PM    K Munch    2005   $45,000
Alternatively, the current and previous values can be kept in separate columns of a single row:

Emp_Name   Year   Salary    Previous Year   Previous Salary
K Munch    2005   $45,000   2004            $42,000
K Munch    2007   $58,000   2006            $50,000
Business Key
The business key consists of one or more columns that identify a business entity, such as a customer, a product, or an employee.
The Business Key tab is used to specify one or more columns in a target dimension table that represent the business key.
Change Detection
The business key is used as the basis for change detection: the business keys in source rows are compared to the business keys in the target.
The Detect Changes tab is used to specify one or more columns in a dimension table that are monitored for changes.
Change Detection
By default, all columns are included in change detection except the columns that are specified on the Change Tracking, Business Key, and Generated Key tabs.
Change Tracking
The SCD Type 2 Loader provides three methods for tracking historical records:
Beginning and end date (or datetime) values
Version number
Current-record indicator
Change Tracking
The Change Tracking tab is used to specify one or more
methods and associated columns in a target dimension
table to be used for tracking historical records.
Multiple methods can be selected.
Generated Key
The SCD Type 2 Loader generates values for a generated key column in the target dimension table.
The Generated Key tab is used to specify a column for the generated key values, as well as a method for generating the key values.
Generated Key
The generated key:
eliminates dependencies on the source data, because the business key may be subject to redefinition, reuse, or recycling
can be used as a primary key or as part of a composite primary key
is generated at run time for each new row that is added to the target
Generated Key
The generated key can be specified as a surrogate key or a retained key.

(Diagram: surrogate key versus retained key.)
Surrogate Key Method

An incoming source row:

Cust ID   Customer Name   Annual Income
0001      Sam Cook        $45,000

The target dimension table after the load:

Cust ID   GenKey   Begin Date   Customer Name   Annual Income   End Date   Current
0001      1        01Jan05      Sam Cook        $40,000         31Dec05    0
0002      2        01Jan05      B Diddley       $42,000                    1
0003      3        01Jan05      John Doe        $25,000                    1
0001      4        01Jan06      Sam Cook        $45,000                    1

Cust ID is the business key in this example. The surrogate key method generates a new key value for each added row.
Retained Key Method

The same incoming source row:

Cust ID   Customer Name   Annual Income
0001      Sam Cook        $45,000

The target dimension table after the load:

Cust ID   Begin Date   Customer Name   Annual Income   End Date   Current
0001      01Jan05      Sam Cook        $40,000         31Dec05
0002      01Jan05      B Diddley       $42,000
0003      01Jan05      John Doe        $25,000
0001      01Jan06      Sam Cook        $45,000

With the retained key method, the generated key value for a business key is retained: the new row for Cust ID 0001 reuses the existing key value rather than receiving a new one.
Because a retained key value repeats across the historical rows for an entry, the primary key is composite — the key column (Cust GenKey or Cust ID) plus the Begin Date column:

---Primary Key---
Cust ID   Begin Date
0001      01Jan05
0002      01Jan05
0003      01Jan05
0001      01Jan06
An entry consists of all the rows (the current row and the historical rows) for a business entity represented
by a business key value.
If a source row has a business key that does not exist in the target, then that row represents a new entry.
The new row is added to the target with appropriate change-tracking values.
If a source row has the same business key as a current row in the target, and a value in a column identified
as a Change Detection column differs, then that row represents an update to an existing entry. The source
row is added to the target. The source row becomes the new current row for that entry; it receives
appropriate change-tracking values. The superseded target row can also receive new change-tracking
values (closed out).
If a source row has the same business key and content as a current row in the target, it might indicate that
the entry is being closed out. The entry is closed out if change tracking is implemented with begin and
end datetime values, and if the end datetime value in the source is older than the same value in the target.
When this is the case, the new end date is written into the target to close out the entry.
If a source row has the same business key and the same content as a current row in the target, then that
source row is ignored.
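The rules above can be sketched as simplified DATA step logic — a sketch only, assuming a business key Cust_ID, a single change-detection column Income, and begin/end datetime change tracking (the SCD Type 2 Loader generates considerably more complete code):

```sas
proc sort data=work.src; by Cust_ID; run;
proc sort data=work.custdim(where=(Current=1)) out=work.curr;
   by Cust_ID;
run;

data work.additions;
   merge work.src(in=insrc)
         work.curr(in=intgt rename=(Income=_tgtIncome));
   by Cust_ID;
   if not insrc then delete;
   if not intgt or Income ne _tgtIncome then do;
      /* new entry, or an update to an existing entry:       */
      /* add a new current row with change-tracking values   */
      BeginDT = datetime();
      EndDT   = '01JAN5999:00:00:00'dt;
      Current = 1;
      output;
      /* a superseded target row would also be closed out    */
   end;
   /* a row with an unchanged business key and content is ignored */
run;
```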
The digest column contains an encrypted concatenation of values from selected columns other than the key columns. It is a character column with a length of 32, named DIGEST_VALUE. The concatenation is encrypted with the MD5 algorithm.
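Such a digest can be computed with the MD5 function; a sketch with illustrative table and column names:

```sas
data work.withdigest;
   set work.custdim;
   length digest_value $32;
   /* hash the concatenated non-key column values and
      store the result as 32 hexadecimal characters */
   digest_value = put(md5(catx('|', Customer_Name, Annual_Income)),
                      $hex32.);
run;
```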
Lookup Transformation
The Lookup transformation can be used to load a target
table with columns taken from a source and from a
number of lookup tables.
When a job containing a Lookup transformation is run, each source row is matched against each lookup table, and values from the matching lookup rows are mapped to columns in the target row.
Lookups Tab
The Lookups tab in the Lookup transformation is used to specify lookup properties:
Source-to-lookup and lookup-to-target mappings
Where expression
Exceptions
A set of lookup properties must be specified for each lookup table.
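The generated lookup code is based on the DATA step hash object. A minimal sketch of the idea, with illustrative table and column names:

```sas
data work.orderfact;
   if _n_ = 1 then do;
      if 0 then set work.custdim;   /* define lookup columns in the PDV */
      declare hash h(dataset: "work.custdim(where=(Current=1))");
      h.defineKey('Customer_ID');
      h.defineData('GenKeyCust', 'BeginDateTimeCust');
      h.defineDone();
   end;
   set work.src;
   /* rc ne 0 means the lookup failed; the row can then be
      handled by the exception settings (for example, Abort) */
   rc = h.find();
run;
```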
(Slide: available actions for handling lookup exceptions, including Abort.)
Error Tables
The Errors tab in the properties of the Lookup transformation is used to specify error and exception tables for rows that fail a lookup.
to close the Connection Profile window and access the Log On window.
d. Type Barbara as the value for the User ID field and Student1 as the value for the
Password field.
e. Click
9-30
f.
Select the DIFT SAS Library from the SAS Library drop-down list and click
i.
j.
Type DIFT SCD Source Data Template in the Name field and click
9-33
e. Type DIFT SCD Source Data as the value for the Name field.
f.
9-34
g. Click
1) Click
2) Navigate to S:\Workshop\dift\data.
3) Change the Type field to Delimited Files (*.csv).
4) Select OrionSourceDataM01.csv.
5) Click
i.
9-35
The file has 7 records. Note that the first record has column names and that the data fields are
comma-delimited.
j.
Click
k. Click
2) Click
4) Click
l.
. The Column Definitions window is displayed. Increase the size of the window
Click
by dragging the corners.
m. Click
9-36
2) Select Get the column definitions from other existing tables or external files.
3) Click
9-37
7) 29 columns from the DIFT SCD Source Data Template table are selected.
8) Click
9) Click
p. Definitions for 29 columns are imported. These column definitions will be forward propagated
through the process flow to the target tables.
9-38
r.
Click
s.
e. Type DIFT SCD Customer Dimension as the value for the Name field.
f.
Verify that the location is set to /Data Mart Development/ Orion SCD.
g. Click
i.
2) Type DIFT SCD Target Tables Library as the value for the Name field.
3) Verify that the location is set to /Data Mart Development/ Orion SCD.
4) Click
6) Click
7) Click
b) Click
9-43
g) Click
h) Click
i)
10) Click
Change the default name of the new table. Type SCDCustDim in the Name field.
9-44
k. Click
l.
Do not select columns at this time. You propagate columns from sources to the targets in a later
.
step. Click
n. Click
.A
f.
9-45
Verify that the location is set to /Data Mart Development/ Orion SCD.
.
g. Click
Select DIFT SCD Target Tables Library as the value for the Library field.
j.
k. Click
l.
Do not select columns at this time. You propagate columns from the source to the targets in a later
step.
m. Click
n. Click
o. Click
Verify that the location is set to /Data Mart Development/ Orion SCD.
g. Click
9-46
i.
Select DIFT SCD Target Tables Library as the value for the Library field.
j.
k. Click
l.
Do not select columns at this time. You propagate columns from the source to the targets in a later
step.
m. Click
n. Click
o. Click
e. Type Populate Star Schema with SCD-RK Processing as the value for the Name
field.
f.
g. Click
4. Add the source table to the diagram for the process flow.
a. Select the Checkouts tab.
b. Drag the DIFT SCD Source Data external file to the Diagram tab of the Job Editor.
5. Add a File Reader transformation to the process flow.
a. Select the Transformations tab.
b.
Expand the Access group and locate the File Reader transformation template.
c. Drag the File Reader transformation to the Diagram tab of the Job Editor. Place it under the
DIFT SCD Orion Detail Information external file object.
d. Connect the DIFT SCD Orion Detail Information external file object to the File Reader
transformation.
9-49
Connect the second temporary table output from the Splitter to the other Sort transformation.
9-50
8. Select File Save to save diagram and job metadata to this point.
9. Add two SCD Type 2 Loader transformations to the process flow.
a. Select the Transformations tab.
b. Expand the Data group and locate the SCD Type 2 Loader transformation template.
c. Drag the SCD Type 2 Loader transformation to the Diagram tab of the Job Editor.
d. Connect the temporary table output from one Sort transformation to one of the SCD Type 2
Loader transformations.
e. Drag a second SCD Type 2 Loader transformation to the Diagram tab of the Job Editor.
f.
Connect the temporary table output from the second Sort transformation to the other SCD Type 2
Loader transformation.
9-51
e. Connect the SCD Type 2 Loader transformation to the DIFT SCD Product Dimension table
object.
11. Select File Save to save diagram and job metadata to this point.
12. Add a third output table to the Splitter by right-clicking on the Splitter transformation and selecting
Add Work Table.
e. Next connect the DIFT SCD Customer Dimension table object to the Lookup transformation.
9-52
f.
Add a third input port to the Lookup transformation by right-clicking on the Lookup
transformation and selecting Ports Add Input Port.
g. Connect the DIFT SCD Product Dimension table object to the Lookup transformation.
15. Add the DIFT SCD Order Fact table as the final output for this process flow.
a. Select the Checkouts tab.
b. Drag the DIFT SCD Order Fact table object to the Diagram tab of the Job Editor, placing it
under the Table Loader transformations.
c. Connect the Table Loader transformation to the DIFT SCD Order Fact table object.
9-55
All 29 columns are propagated forward from the DIFT SCD Source Data table to the
temporary output table of the File Reader transformation.
c. Click
9-56
d. Click
e. Right-click on the second temporary output table (leading to the DIFT SCD Product
Dimension table) from the Splitter and select Properties.
f.
Click
h. Right-click on the first temporary output table (leading to the Lookup transformation and the
DIFT SCD Order Fact table) from the Splitter and select Properties.
i.
j.
Click
4. Define columns for the temporary output tables from the Splitter.
a. Right-click on the Splitter and select Properties.
b. Select the General tab.
c. Type Column Splitter in the Name field.
9-57
d. This is a reminder that the splitter is used to direct different columns from the data source to the
three target tables.
e. Select the Row Selection tab.
f.
Verify that All Rows are selected for each of the three output tables.
g. Select the Mappings tab. The 29 columns in the temporary output table from the File Reader are
listed in the Source table pane on the left. Propagate only the necessary columns to each output
table.
h. Select the Splitter 0 (TempForCustDim) table from the Target table drop-down list.
i.
Select columns 1 through 11 in the Source table pane. Use the Shift key to make the selection.
The selected columns should include only:
Customer_ID
Customer_Country
Customer_Gender
Customer_Name
Customer_FirstName
Customer_LastName
Customer_Birth_Date
Customer_Type
Customer_Group
Customer_Age
Customer_Age_Group
9-58
j.
Click
l.
Select columns 12 through 19 in the Source table pane. Use the Shift key to make the selection.
The following 8 columns should be selected.
Product_ID
Product_Name
Supplier_ID
Supplier_Name
Supplier_Country
Product_Group
Product_Category
Product_Line
m. Click
9-59
o. Select columns 20 through 29 in the Source table pane. Use the Shift key to make the selection.
The 10 selected columns are these:
Order_ID
Order_Item_Num
Quantity
Total_Retail_Price
CostPrice_Per_Unit
Discount
Order_Type
Employee_ID
Order_Date
Delivery_Date
p. Click
q. Select Customer_Id (column 1) and while holding the CTRL key, select Product_Id (column 12).
The two selected columns are these:
Customer_ID
Product_ID
9-60
r.
Click
Click
5. Define columns for the temporary output tables from the Sort transformations.
a. Right-click on the first Sort transformation and select Properties.
b. Select the General tab.
This is a reminder that the sort transformation will be used to remove records with duplicate
Customer Id values.
d. Select the Mappings tab.
e. Click
Click
i.
This is a reminder that this sort transformation will be used to remove records with duplicate
Product Id values.
j.
k. Click
Click
e. Right-click on the DIFT SCD Customer Dimension table and select Properties.
f.
Select the Columns tab and verify that it has the 11 propagated columns.
i.
Click
j.
Click
m. Click
n. Right-click on the DIFT SCD Product Dimension table and select Properties.
o. Select the Columns tab and verify that it has the 8 propagated columns.
7. Add metadata for 4 new columns.
a. Select the Product_ID column.
b. Click
d. Click
8. Define columns for the temporary output table from the Lookup transformation.
a. Right-click on the Lookup transformation and select Properties.
b. Select the Mappings tab.
c. Click
6) Click
16 columns are defined for the temporary output table from the Lookup transformation.
e. Click
f.
Select the Columns tab and verify that it has the 16 propagated columns.
g. Click
f.
Click
i.
j.
In the Remove duplicate records area select Remove rows with duplicate keys (NODUPKEY).
m. Click
4. Update the properties of the first SCD Type 2 Loader transformation that will populate DIFT SCD
Customer Dimension. Apply the Retained Key method.
a. Right-click on the SCD Type 2 transformation (for DIFT SCD Customer Dimension
table) and select Properties.
b. Click the Change Tracking tab. Use beginning and end dates to track changes for each
Customer ID and use a current record indicator to keep track of the current record for each
Customer ID.
1) Check Use beginning and end dates.
2) Select BeginDateTimeCust as the Column Name for Beginning Date.
3) Select EndDateTimeCust as the Column Name for the End Date.
4) Check Use current indicator.
9-69
SAS Data Integration Studio provides default expressions for the Change Tracking columns. The DATETIME function is used to generate the beginning datetime value. A datetime constant with a future date is used to specify an open-ended value for the ending datetime. You can click in the Expression field to specify a custom expression.
Use Version Number and Use Current Indicator are provided as alternative methods to track the current record.
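The defaults resemble the following expressions; the exact far-future constant is release-specific, so treat this as a sketch:

```sas
/* beginning datetime: the moment the row is loaded */
BeginDateTimeCust = datetime();
/* ending datetime: a far-future constant marks the open-ended current row */
EndDateTimeCust = '01JAN5999:00:00:00'dt;
```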
c. Click the Business Key tab. Specify Customer_ID as the business key in the Customer
Dimension.
1) Click
2) Select Customer_ID.
9-70
3) Click
d. Click the Generated Key tab. Specify GenKeyCust as the retained key column.
1) Select GenKeyCust as the column to contain the generated key values.
2) Check Generate retained key to implement the retained key method.
A default expression is provided to generate retained key values in the New record
.
field. To specify a custom expression, click
9-71
To implement the surrogate key method, uncheck Generate retained key. The
surrogate key method provides default expressions for new and changed records.
e. Click the Detect Changes tab. Specify Customer_Name as the column on which changes are
based.
1) Select Customer_Name and move it from Available columns to Selected columns.
If no columns are selected on the Detect Changes tab, then all columns are used to
detect changes, except those used for Change Tracking, Business Key, Generated
Key, and Type 1 columns.
9-72
f.
j.
Click
5. Update the properties for the second SCD Type 2 Loader transformation that will populate DIFT
SCD Product Dimension.
a. Right-click on the SCD Type 2 Loader transformation (for DIFT SCD Product
Dimension table) and select Properties.
b. Click the Change Tracking tab.
1) Verify that Use beginning and end dates is checked.
2) Select BeginDateTimeProd as the Column Name for Beginning Date.
3) Select EndDateTimeProd as the Column Name for the End Date.
4)
2) Select Product_ID.
3) Click
9-75
f.
9-76
h. Check Postcode.
i.
j.
Click
6. Select File Save to save diagram and job metadata to this point.
7. Specify properties for the Lookup transformation that will populate DIFT SCD Order Fact.
a. Right-click on the Lookup transformation and select Properties.
b. Select the Lookups tab. Specify lookup mappings to the DIFT Customer Dimension table.
1) Select the row for DIFT SCD Customer Dimension and click Lookup Properties.
2) In the Lookup Properties - DIFT SCD Customer Dimension window, select the Source to
Lookup Mapping tab.
3) Click on the Customer_ID column in the Source table pane to select it and click on the
Customer_ID column in the Lookup table pane.
4) Click
9-77
8) Click on the BeginDateTimeCust column in the Lookup table pane to select it, and click on
the BeginDateTimeCust column in the Target table pane.
9-78
9) Click
The Lookup will retrieve the GenKeyCust and BeginDateTimeCust values from the
Customer Dimension table and assign them to the GenKeyCust and
BeginDateTimeCust columns in the target table. This links the transaction in the target
table to the current record for the customer ID in the Customer Dimension table.
10) In the Lookup Properties - DIFT SCD Customer Dimension window, select the Where tab.
11) Click the Data Sources tab.
12) Expand the SCDCustDim table.
13) Select the CurrRecCust column and click
Expression Text pane.
9-79
15) In the Lookup Properties - DIFT SCD Customer Dimension window, select the Exceptions
tab.
9-80
17) Click
2) In the Lookup Properties - DIFT SCD Product Dimension window, select the Source to
Lookup Mapping tab.
3) Click on the Product_ID column in the Source table pane to select it, and click on the
Product_ID column in the Lookup table pane.
4) Click
9-81
The Lookup transformation will use the Product_ID value in an incoming transaction to
do a lookup into the Product Dimension table to find a matching Product_ID value.
5) In the Lookup Properties - DIFT SCD Product Dimension window, select the Lookup to
Target Mapping tab.
6) Click on the GenKeyProd column in the Lookup table pane to select it, and click on the
GenKeyProd column in the Target table pane.
7) Click
8) Click on the BeginDateTimeProd column in the Lookup table pane to select it, and click on
the BeginDateTimeProd column in the Target table pane.
9-82
9) Click
The lookup will retrieve the GenKeyProd and BeginDateTimeProd values from the
Product Dimension table and assign them to the GenKeyProd and
BeginDateTimeProd columns in the target table. This links the transaction in the target
table to the current record for the product ID in the Product Dimension table.
10) In the Lookup Properties - DIFT SCD Product Dimension window, select the Where tab.
11) Click the Data Sources tab.
12) Expand the SCDProdDim table.
13) Select the CurrRecProd column and click
Expression Text pane.
9-83
15) In the Lookup Properties - DIFT SCD Product Dimension window, select the Exceptions
tab.
9-84
16) Click
window.
17) Click
d.
2)
9-85
All source columns and a generated column are selected by default. Remove all columns
except Source Row Number, Order_ID, Customer_ID, and Product_ID from the
Selected columns pane.
3) Click
9-86
5) Four generated columns and no source columns are selected by default. Accept the default
column selection.
6) Click
These mappings for the Lookup transformation were established in an earlier step.
f.
Click
8. Select File Save to save diagram and job metadata to this point.
9. Specify properties for the Table Loader transformation that will populate DIFT SCD Order
Fact.
a. Right-click on the Table Loader transformation and select Properties.
b. Select the Load Technique tab.
d. Click
10. Select File Save to save diagram and job metadata to this point.
Update the Processing Order, Run the Job, and view the Results
1. If necessary, select View Detail to open the Details panel. It opens below the Diagram Editor.
2. In the Details panel, select the Control Flow tab.
a. Use the
and
b. Verify the order of processing on the Diagram tab in the Job Editor window.
The number in the upper-left corner of the transformation indicates the order of processing.
3. Run the job.
a. Select the Status tab in the Details pane to monitor the execution of the job.
b. Click
e. Select the Status tab in the Details pane to monitor the execution of the job.
f.
Click
9-91
a. Right-click on the DIFT SCD Order Fact table and select Open. The data is displayed in the
View Table window. Scroll to see the right-most columns:
(The customer name is changed to Ines Muller.)
e. Select the Status tab in the Details pane to monitor the execution of the job.
f.
Click
9-92
b. Right-click on the DIFT SCD Order Fact table and select Open. The data is displayed in the
View Table window. Scroll to see the right-most columns:
(The customer name is changed to Ines Deisser.)
CDC Transformations
SAS Data Integration Studio provides four CDC transformations:
Attunity CDC
DB2 CDC
Oracle CDC
General CDC
The Attunity, DB2, and Oracle transformations work directly with changed data tables that are in native database format. The General CDC transformation can be used to load change data from other vendors and from custom applications.
58
The separately licensed Attunity software enables you to generate source change tables from a variety of
relational databases running in a variety of operational environments.
Transformation Templates
The Process Library tree contains two kinds of transformation templates:
Java plug-in transformation templates
SAS code transformation templates
A SAS code transformation template can surface its options as macro variables, for example:

&classvar1
&classvar2
&analysisvar
&title

When the transformation is used in a job, values are assigned to these macro variables:

syslast=yy.xx;
options=;
classvar1=Customer_Age_Group;
classvar2=Customer_Gender;
analysisvar=Quantity;
title=Sum of Quantity across Gender and Age Group;
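A template body that consumes these macro variables might look like the following sketch (illustrative only; the actual source is whatever is entered in the New Transformation wizard's SAS Code window):

```sas
title "&title";
proc means data=&syslast sum;
   class &classvar1 &classvar2;
   var &analysisvar;
run;
```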
Options Window
The New Transformation wizard's Options window is the facility for creating the options to be used for the new transformation.
This demonstration creates a report on customer order information. The HTML report must have a text-based output with summary statistics as well as a bar chart graphic. A transformation with this type of output does not currently exist, so a new SAS code transformation is created and then used in a job.
1. If necessary, access SAS Data Integration Studio using Barbara's credentials.
a. Select Start All Programs SAS SAS Data Integration Studio 4.2.
b. Select Barbara's Work Repository as the connection profile.
c. Click
to close the Connection Profile window and access the Log On window.
d. Type Barbara as the value for the User ID field and Student1 as the value for the
Password field.
e. Click
g. Type User Defined as the value for the Transformation Category field.
10-12
The General information for the new transformation should resemble the following:
h. Click
i.
j.
Click
k. Click
The Options window is available to define the options to be used in this transformation.
l.
2) Type Data Items as the value for the Displayed text field.
3) Click
4) Click
7) Click
8) Type Other Options as the value for the Displayed text field.
9) Click
Verify that Select from source is selected in the Columns to select from area.
j)
n) Click
10-18
e) Type The column selected for this option will be the grouping
column for GCHART and a classification column in the row
dimension for TABULATE. as the value for the Description field.
f) Click Requires a non-blank value in the Options area.
g) Select the Prompt Type and Values tab.
h) Select Data source column as the value for the Prompt type field.
i)
Verify that Select from source is selected in the Columns to select from area.
10-19
The three options in the Data Items group should resemble the following:
Click
Click
10-20
The two options in the Titles group should resemble the following:
Click
Click
The two options in the Other Options group should resemble the following:
Verify that the three items in the Data Items group are all required (note the *).
The descriptions entered for each of the parameters are displayed.
Clicking
opens a dialog window to navigate the SAS Folders to a data source
from which a column can be selected.
10-23
2) Click Titles in the selection pane. The two options in the Titles group are displayed:
3) Click Other Options in the selection pane. The two options in the Other Options group are
displayed:
10-24
4) Click
q. Click
The Inputs group box values add a specified number of inputs to the transformation when
it is used in a job. If you later update the transformation to increase this minimum number
of inputs value, any jobs that have been submitted and saved use the original value. The
increased minimum number of inputs is enforced only for subsequent jobs. Therefore, you
can increase the minimum number of inputs without breaking existing jobs. The Maximum
number of inputs field is used to allow you to connect additional inputs into the input port.
For example, a setting of 3 allows you to have up to three inputs. The rules for inputs also
apply to outputs.
s. Click
t. Click
10-25
g. Click
10-26
3. Add source table metadata to the diagram for the process flow.
a. Select the Folders tab.
b. Navigate to the Data Mart Development Data folder.
c. Drag the DIFT Customer Order Information table object to the Diagram tab of the Job Editor.
4. Add the Summary Table and Vertical Bar Chart transformation to the process flow.
a. Select the Checkouts tab.
b. Drag the Summary Table and Vertical Bar Chart transformation to the Diagram tab of the Job
Editor.
5. Connect the DIFT Customer Order Information table object to the Summary Table and Vertical
Bar Chart transformation.
6. Select File Save to save diagram and job metadata to this point.
10-27
7. Specify properties for the Summary Table and Vertical Bar Chart transformation.
a. Right-click on the Summary Table and Vertical Bar Chart transformation and select
Properties.
b. Select the Options tab.
c. Verify that Data Items is selected in the selection pane.
1) Click
2) Select Customer Age Group in the Select a Data Source Item window.
3) Click
4) Click
10-28
7) Click
option.
1) Type NODATE NONUMBER LS=80 as the value for the Specify SAS system
options field.
2) Type CustomerOrderReport as the value for the Name of HTML file to be
created field.
f.
Click
8. Select File Save to save diagram and job metadata to this point.
d. Click
e. Click
f.
Click
10-35
Exercises

Name: Idvariable
Displayed text: ID Variable
Description:
Required: Yes
Prompt type:
Other information:

Name: Fcastvariable
Displayed text: Column to Forecast
Description:
Required: Yes
Prompt type:
Other information:

Name: Alpha
Displayed text: Significance Level
Description:
Required: No
Prompt type: Numeric
Other information:

Name: Lead
Displayed text:
Description:
Required: No
Prompt type: Numeric
Other information:

Name: Method
Displayed text:
Description:
Required: No
Prompt type: Text
Other information:

Add the following options for the Titles and Other Options group:

Name: Title
Displayed text:
Description:
Required: No
Prompt type: Text

Name: Options
Displayed text:
Description:
Required: No
Prompt type: Text

Name: File
Displayed text:
Description:
Required: No
Prompt type: Text
Use the YYMM column as the ID column and the Profit column as the column to forecast.
View the HTML file. The output should resemble the following:
3. Check In Objects
Check in the transformation and job objects.
10-39
The inner job should be a copy of the job from Exercise 3, but modified with parameters.
10-40
symbol1 i=spline c=green;
symbol2 i=spline c=blue l=3;
symbol3 i=spline c=black;
symbol4 i=spline c=red;
symbol5 i=spline c=red;
symbol6 i=spline c=black;
legend1 down=2 across=3 label=('Legend:')
        position=(bottom center outside);
goptions dev=png;
%if (%quote(&file) ne) %then %do;
   ods html path="S:\Workshop\dift\reports"
            gpath="S:\Workshop\dift\reports"
            file="&file..html";
%end;
%if (%quote(&title) ne) %then %do;
   title &title;
%end;
proc gplot data=fcast;
   plot &fcastvariable * &idvariable = _type_ /
        legend=legend1 autovref href='01jan2003'd;
run;
quit;
ods html close;
%mend ForeCastGraph;
%ForeCastGraph;
g. Click
1) Click
2) Type Data Items as the value for the Displayed text field.
3) Click
4) Click
5) Type Forecast Options as the value for the Displayed text field.
6) Click
7) Click
8) Type Titles and Other Options as the value for the Displayed text field.
9) Click
b) Click
3) Define metadata for the method to model the series forecast option.
a) Select the Forecast Options group.
b) Click
10-45
o) Click
k. Define metadata for the options in the Titles and Other Options group.
1) Define metadata for the title to be used with forecast graphic.
a) Select the Titles and Other Options group.
b) Click
10-46
The two options in the Other Options group should resemble the following:
2) Click Forecast Options in the selection pane. The three options in the Forecast Options
group are displayed.
3) Verify that all fields have default values and that four values are available for selection for the
Method to Model the Series.
10-49
4) Click Titles and Other Options in the selection pane. The three options in the Titles and
Other Options group are displayed.
5) Verify that Name of HTML file to be created is a required field.
6) Click
m. Click
p. Click
10-50
h. Select File Save to save diagram and job metadata to this point.
i. Specify properties for the Extract transformation.
1) Right-click on the Extract transformation and select Properties.
2) Select the Where tab.
3) Type Company = "Orion Australia" as the value for the Expression Text area.
4) Click
j. Select File Save to save diagram and job metadata to this point.
10-51
e. Click
f. Click
10-52
Cleansing data will result in accurate reports.
(Diagram: DataFlux products. A local process group of 18 functions reads definitions from the Quality Knowledge Base, which is also read by dfPower Studio. A server process group — PROC DQSRVADM, PROC DQSRVSVC, and 8 functions — runs Profile jobs and Architect jobs on a DataFlux Integration Server.)
The language elements in the SAS Data Quality Server software can be separated into two functional
groups. As shown in the previous diagram, one group cleanses data in SAS, and the other group runs data
cleansing jobs and services on Integration Servers from DataFlux (a SAS company).
The language elements in the Local Process group read data definitions out of the Quality Knowledge
Base to, for example, create match codes, apply schemes, or parse text. The language elements in the
Server Process group start and stop jobs and services and manage log entries on DataFlux Integration
Servers.
The DataFlux Integration Servers and the related dfPower Profile and dfPower Architect applications are
made available with the SAS Data Quality Server software in various software bundles.
Original Data:

Name         City      State
Glenn Gray   Ft Knox   ME
Ric Rogers   DANBURY   CT
...

Updated Data:

Name         City        State
Glenn Gray   Fort Knox   ME
Ric Rogers   Danbury     CT
...
Before match codes are created:

Name          Address    City
Jen Barker    PO Box 15  Bath
Jenny Barker  Box 15     Bathe

After match codes are created:

Name          Address    City   MatchCode
Jen Barker    PO Box 15  Bath   MY3Y$$Z5$$M~2
Jenny Barker  Box 15     Bathe  MY3Y$$Z5$$M~2

Both records produce the same match code, so they can be identified as probable duplicates.
All DataFlux jobs and real-time services run on DataFlux Integration Servers. To execute
DataFlux jobs and services from SAS Data Integration Studio jobs, you must first install a
DataFlux Integration Server and register that server in SAS metadata.
Scheme Repository
Objectives
11-11
Batch jobs can be run on a server-grade machine, meaning the process is more scalable to larger data
sources. Server-class machines supported by DataFlux Integration Server include:
Windows
UNIX (AIX, HP-UX, Solaris, Linux)
The data cleansing processes, available as real-time services via Service Oriented Architecture (SOA), are
available to any Web-based application that can consume services (Web applications, ERP systems,
operational systems, SAS, and more).
The data cleansing jobs and services registered to the DataFlux Integration Server are available (via
procedures and functions) from within SAS. This gives the user the full power of dfPower Architect and
dfPower Profile functionality from within SAS.
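For example, a data cleansing service registered on the DataFlux Integration Server can be invoked from SAS with the DQSRVSVC procedure. The sketch below is illustrative only: the service name, host, and port are hypothetical, and option names should be confirmed against the SAS Data Quality Server documentation.

```sas
/* Run a registered real-time service against a SAS table (sketch) */
proc dqsrvsvc
   service='EDP Basic Stats'       /* hypothetical service name      */
   host='difserver' port=21036     /* hypothetical server and port   */
   data=work.contacts              /* input rows sent to the service */
   out=work.contacts_out;          /* rows returned by the service   */
run;
```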
Real-Time Services
In addition, existing batch jobs can be converted to real-time services that can be invoked by any application that is Web service enabled. This enables users to reuse the business logic developed when building batch jobs for data migration or loading a data warehouse, and to apply it at the point of data entry to ensure consistent, accurate, and reliable data across the enterprise.
1. Select Start ⇒ All Programs ⇒ DataFlux Integration Server 8.1 ⇒ Integration Server Manager.
2. Click
6. Click to save the changes and close the Options window.
To configure a remote DataFlux Integration Server, specify a valid Server name and Server port for the remote server.
The DataFlux Integration Server Manager window now displays the machine/port specified.
This demonstration illustrates the creation of a dfPower Profile job, how to run the job, and how to review
the generated metrics.
3. Click the Contacts table. The right side of the window populates with a listing of columns found in
the Contacts table.
4. Click to the left of the Contacts table. This selects all the fields of the table to be part of the Profile job.
b. Click to the left of Select/unselect all to select all the Column profiling metrics.
c. Click
d. Click
e. Click
under the M field to identify that this column has metric overrides
d. Click
e. Click
d. Click
e. Click
d. Click
e. Click
d. Click
e. Click
Once the desired metrics are specified, the profile job is ready to run and produce the profile
report.
d. Click
The name is now displayed on the title bar for dfPower Profile (Configurator).
b. Click
c. Click the Data Source node, and then select Insert Node ⇒ On Page. (Alternatively, you can double-click the node and it will be automatically appended to the job flow, or you can drag and drop the node onto the job flow and perform a manual connection.)
The node is added to the job flow and a Data Source Properties window is opened.
g. Click
The Contacts table is now listed as the value for the Input table field. The fields found in
the Contacts table are now listed as Available.
h. Click
i.
Click
3. Add a Standardization node to the job flow, and specify appropriate properties for it.
a. From the Nodes tab in the Toolbox panel, click
b. Click
c. Click the Standardization node, and then select Insert Node ⇒ Auto Append. (Alternatively, you can double-click the node and it will be automatically appended to the job flow, or you can drag and drop the node onto the job flow and perform a manual connection.)
d. In the Standardization Properties window, move the ADDRESS and STATE fields from Available
to Selected.
1) In the Standardization fields area, click ADDRESS field in the Available list.
2) Click
e. Specify the appropriate Definition and/or Scheme for the two selected columns.
1) Click in the Definition field for the ADDRESS field to allow selection of a valid standardization definition.
3) Click in the Definition field for the STATE field to allow selection of a valid standardization definition.
4) Select State/Province (Abbreviation) as the value for the Scheme field.
5) Verify that the default names given to the output fields are ADDRESS_Stnd and STATE_Stnd.
f. Click to move all columns from Available fields to Output fields.
3) Remove DATE, MATCH_CD, and DELETE_FLG from the Output fields list.
a) Click DATE field.
b) Hold down the CTRL key and click MATCH_CD field.
c) Hold down the CTRL key and click DELETE_FLG field.
d) Click
4) Click
g. Click
c. Verify that the ADDRESS and STATE field values are standardized by checking for the following:
The state values for the first two records originally were state names spelled out; the STATE_Stnd field now has these values as abbreviations.
The address value for the first record has the word Street; the ADDRESS_Stnd field has St.
The address value for the second record has the word Road; the ADDRESS_Stnd field has Rd.
Some of the original address values are all uppercase; the ADDRESS_Stnd field has these values proper-cased.
5. Add an Identification Analysis node to the job flow, and specify appropriate properties for it.
a. From the Nodes tab in the Toolbox panel, click
b. Click
c. Click the Identification Analysis node, and then select Insert Node ⇒ Auto Append. (Alternatively, you can double-click the node and it will be automatically appended to the job flow, or you can drag and drop the node onto the job flow and perform a manual connection.)
d. In the Identification Analysis Properties window, move the CONTACT field from Available to
Selected.
1) In the Identification analysis fields area, click CONTACT field in the
Available list.
2) Click
e. Click
f.
g. Verify that the default name given to the output column is CONTACT_Identity.
h. Click
i. Click
j. Click
k. Click to move all columns from Available fields to Output fields.
For the first set of records in the Contacts table, the CONTACT field values are identified as
INDIVIDUAL.
7. Add a Branch node to the job flow, and specify appropriate properties for it.
a. From the Nodes tab in the Toolbox panel, click
b. Click
c. Click the Branch node, and then select Insert Node ⇒ Auto Append. The Branch Properties window is displayed.
d. Click to accept the default settings and close the Branch Properties window.
8. Add a Data Validation node to the job flow, and specify appropriate properties for it.
a. From the Nodes tab in the Toolbox panel, click
b. Click
c. Click the Data Validation node, and then select Insert Node Auto Append.
d. Click
e. Click
f.
g. Click
h. Click
9. Add a Gender Analysis node to the job flow, and specify appropriate properties for it.
a. From the Nodes tab in the Toolbox panel, click
b. Click
c. Click the Gender Analysis node, and then select Insert Node Auto Append.
d. In the Gender Analysis Properties window, move the CONTACT field from Available to Selected.
1) In the Gender analysis fields area, click CONTACT field in the Available list.
2) Click
e. Click
f.
g. Verify that the default name given to the output column is CONTACT_Gender.
h. Click
i. Click
j. Click
k. Click to move all columns from Available fields to Output fields.
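The analysis that the Gender Analysis node performs is also available directly in SAS through the SAS Data Quality Server dqGender function. A minimal sketch follows; the input table and definition name are assumptions tied to the installed Quality Knowledge Base, and an appropriate locale must already be loaded.

```sas
data work.contacts_gender;
   set work.contacts;                       /* hypothetical input table */
   length contact_gender $ 1;
   /* Typically returns M, F, or U (unknown) per the gender definition */
   contact_gender = dqGender(contact, 'Gender');
run;
```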
c. Click the Frequency Distribution node, and then select Insert Node Auto Append.
d. In the Frequency Distribution Properties window, move the CONTACT_Gender field from
Available to Selected.
1) In the Frequency distribution fields area, click CONTACT_Gender field in
the Available list.
2) Click
e. Click
13. Add a second Data Validation node to the job flow, and specify appropriate properties for it.
a. From the Nodes tab in the Toolbox panel, verify that the Profiling category of nodes is expanded.
b. Click the Data Validation node, and then select Insert Node Insert On Page.
c. Click
d. Click on the new Data Validation node and drag it so that it is next to the first Data Validation
node.
e. Click the Branch node and (without releasing the mouse button) drag the cursor to the second Data Validation node. Release the mouse button.
f.
g. Click
h. Click
i.
j.
Click
k. Click
15. Add a Text File Output node to the job flow, and specify appropriate properties for it.
a. From the Nodes tab in the Toolbox panel, click
b. Click
f.
h. Click
a) Type Contacts Gender Frequencies as the value for the Name field.
b) Click
f.
Click
This demonstration illustrates the creation of a dfPower Architect job using the External Data Provider.
This job will be uploaded to the DataFlux Integration Server and then processed with the DataFlux IS
Service transformation in SAS Data Integration Studio.
1. From within SAS Data Integration Studio, select Tools ⇒ dfPower Tool ⇒ dfPower Architect.
2. Add an External Data Provider node to the job flow.
a. Locate the Nodes tab in the Toolbox panel.
b. Click
c. Click the External Data Provider node, and then select Insert Node ⇒ On Page. (Alternatively, you can double-click the node and it will be automatically added to the job flow.)
The node is added to the job flow and an External Data Provider window is opened.
b. Click
c. Click
d. Click
e. Select the first generic field (Field) and change the value of the Field Name field to
Field_1.
f. Type 20 as the value for the Field Length field for Field_3.
g. Type 25 as the value for the Field Length field for Field_4.
h. Click
4. Add a Basic Statistics node to the job flow, and specify appropriate properties for it.
a. From the Nodes tab in the Toolbox panel, click
b. Click
c. Click the Basic Statistics node, and then select Insert Node Auto Append.
e. Click
1. Select Start ⇒ All Programs ⇒ DataFlux Integration Server 8.1 ⇒ Integration Server Manager.
6. Click
8. (Optional) Select Actions ⇒ Upload. The Upload Architect Jobs window is displayed.
13. Select Actions ⇒ Upload. The Upload Real-time Services window is displayed.
16. Click to move the LWDIWN EDP Basic Stats job to the Selected list.
b. Click
c. Type Ahmed as the value for the User ID field and Student1 as the value for the
Password field.
d. Click
7. Click
9. Click
3) Click
d. Click
11. Click
b. Select DefaultAuth.
d. Verify the value for the Port number field is set to 21036.
The final settings for connection properties should resemble the following:
13. Click
15. Click
c. Click to close the Connection Profile window and access the Log On window.
d. Type Bruno as the value for the User ID field and Student1 as the value for the Password
field.
e. Click
3. Create initial job metadata that will use the DataFlux IS Job transformation.
a. Right-click on DataFlux IS Examples folder and select New Job.
b. Type LWDIWN - Run Profile and Architect Jobs as the value for the Name field.
c. Verify that /Data Mart Development/DataFlux IS Examples is the value for the Location
field.
d. Click
c. Drag the DataFlux IS Job transformation to the Diagram tab of the Job Editor.
f.
g. Click
The Contact Profile.pfi job file should appear in the Job field.
h. Click
i.
j.
k. Click
The diagram tab of the Job Editor window now displays the following:
c. (Optional) Drag a second DataFlux IS Job transformation to the Diagram tab of the Job Editor
and drop it next to the first DataFlux IS Job transformation.
d. (Optional) Right-click on the second DataFlux IS Job transformation and select Properties.
e. (Optional) Type Architect Reports at the end of the default value for the Name field.
f.
g. (Optional) Verify that Architect is the value for the Job type field.
h. (Optional) Verify that Contacts Table Analysis.dmc is the value for the Job field.
i.
(Optional) Click
The diagram tab of the Job Editor window now displays the following:
For DataFlux IS Job transformations, completed successfully simply means that the process was
passed off successfully to the DataFlux Integration Server.
8. Select File Close to close the Job Editor window.
9. Access DataFlux Integration Server Manager and verify the jobs ran successfully.
a. Select Start ⇒ All Programs ⇒ DataFlux Integration Server 8.1 ⇒ Integration Server Manager.
b. Verify that both the Profile job and the Architect job completed. The bottom portion of the
DataFlux Integration Server Manager displays this information on the Status of All Jobs tab.
d. Click
f. Click
g. Navigate to S:\Workshop\lwdiwn.
h. Select All Files (*.*) as the value for the Files of type field.
i.
j.
Click
k. Click
The Contacts Profile report is imported to the DataFlux Default management resources location.
l. Double-click the Contacts Profile report. dfPower Profile (Viewer) is invoked with the profile report.
c. (Optional) When done viewing, select File ⇒ Close to close the browser.
12. (Optional) View the generated text file output.
a. (Optional) Open Windows Explorer by selecting Start ⇒ All Programs ⇒ Accessories ⇒ Windows Explorer.
b. (Optional) Navigate to S:\Workshop\lwdiwn.
c. (Optional) Double-click the Unknown_Identities.txt file. The file opens in a Notepad window as
displayed.
d. (Optional) When done viewing, select File ⇒ Exit to close the Notepad window.
The job being created will use the previously registered DataFlux Integration Server service. Before taking advantage of the DataFlux IS Service transformation, a table needs to be defined in metadata (this table will be the source data for the DataFlux IS Service transformation).
c. Click to close the Connection Profile window and access the Log On window.
d. Type Bruno as the value for the User ID field and Student1 as the value for the
Password field.
e. Click
d. Click
1) Type DIWN DataFlux Sample Database as the value for the Name field.
2) Click
3) Double-click SASApp to move it from the Available servers list to the Selected
servers list.
4) Click
6) Click
a) Type DIWN DataFlux Sample Database Server as the value for the Name
field.
b) Click
c) Select ODBC Microsoft Access as the value for the Data Source Type field.
d) Click
e) Click Datasrc.
f) Type "DataFlux Sample" as the value for the Datasrc field.
g) Click
i)
Click
8) Verify that the newly defined database server (DIWN DataFlux Sample Database Server)
appears in the Database Server field.
9) Click
10) Review the final settings for the New Library Wizard.
11) Click
f. Verify that the newly defined SAS library (DIWN DataFlux Sample Database) appears in the SAS Library field. Also, the value specified for the Datasrc (DataFlux Sample) information for the library server should appear in the Data Source field.
g. Click
i.
Click
j.
k. Click
Click
d. Click
d. Drag the Extract transformation to the Diagram tab of the Job Editor window, and place it next
to the DIWN Contacts table.
e. Connect the DIWN Contacts table object to the Extract transformation.
f. On the target table side, change the name of the PHONE column to Field_2.
j. On the target table side, change the name of the OS column to Field_3.
k. On the target table side, change the name of the DATABASE column to Field_4.
l. Remove the remaining target columns from the target table side.
1) Click the ID column, hold down the SHIFT key, and click the CITY column.
2) Click Delete Target Columns from the tool set of the Mappings tab.
m. Click
p. Drag the DataFlux IS Service transformation to the Diagram tab of the Job Editor, placing this
transformation next to the Extract transformation.
q. Connect the temporary table from the Extract transformation to the DataFlux IS Service
transformation.
r.
s.
t.
u. Click
c. Right-click on the temporary table object associated with the DataFlux IS Service transformation
and select Open.
A Warning window opens:
d. Click
The profiling metrics requested in the Architect job are displayed in the View Data window.
Exercises
Objectives
Deployment Techniques
You can also deploy a job in order to accomplish the
following tasks:
Divide a complex process flow into a set of smaller
flows that are joined together and can be executed
in a particular sequence.
Execute a job on a remote host.
Objectives
Scheduling Requirements
The SAS scheduling tools enable you to automate the
scheduling and execution of SAS jobs across your
enterprise computing environment. Scheduling requires
four main components:
SAS Application
Schedule Manager
Scheduling Server
Batch Server
[Diagram: the scheduling process. In Step 1, SAS Data Integration Studio places the deployed job in a deployment directory. In Step 2, the Schedule Manager plug-in in SAS Management Console combines Job_A, Job_B, and Job_C with events into Flow_ABC. In Step 3, the scheduling server evaluates the events in the flow. In Step 4, the batch servers (batch server 1 and batch server 2) run the jobs.]
Step 1:
A SAS application (such as SAS Data Integration Studio) creates a job that needs to be
scheduled. If the job was created by SAS Data Integration Studio, the job is placed in a
deployment directory.
Step 2:
A user set up to administer scheduling can use the Schedule Manager plug-in in SAS
Management Console to prepare the job for scheduling, or users can schedule jobs directly
from other SAS applications. The job is added to a flow, which can include other jobs and
events that must be met (such as the passage of a specific amount of time or the creation of a
specified file). The Schedule Manager also specifies which scheduling server should be used
to evaluate the conditions in the flow and which batch server should provide the command to
run each job. The type of events you can define depends on the type of scheduling server you
choose. When the Schedule Manager has defined all the conditions for the flow, the flow is
sent to the scheduling server, which retrieves the command that is needed to run each job from
the designated batch server.
Step 3:
The scheduling server evaluates the conditions that are specified in the flow to determine
when to run a job. When the events specified in the flow for a job are met, the scheduling
server uses the command obtained from the appropriate batch server to run the job. If you
have set up a recurring scheduled flow, the flow remains on the scheduling server and the
events continue to be evaluated.
Step 4:
The scheduling server uses the specified command to run the job in the batch server, and then
the results are sent back to the scheduling server.
Scheduling Servers
SAS supports scheduling through three types
of scheduling servers:
Platform Process Manager server
Operating system scheduling server
In-Process scheduling server
The type you choose depends on the level of scheduling
functions your organization requires, the budget and
resources available to support a scheduling server, and
the applications that will be creating jobs to be scheduled.
When you create a flow in SAS Management Console's Schedule Manager, you specify which scheduling server the flow is to be associated with. Schedule Manager then passes the flow information to the appropriate scheduling server.
You can create a definition for a scheduling server by using the Server Manager plug-in in SAS
Management Console or an application that directly schedules jobs.
Schedule Manager
The Schedule Manager plug-in for SAS Management
Console is a user interface that enables you to create
flows, which consist of one or more SAS jobs. Each job
within a flow can be triggered to run based on criteria
such as a date and time, the state of a file on the file
system, or the status of another job within the flow.
The available scheduling criteria depend on the type
of scheduling server used.
Schedule Manager is designed as a scheduler-neutral interface. When you create a flow, you specify
which scheduling server that the flow is to be associated with. Schedule Manager converts the flow
information to the appropriate format and submits it to the scheduling server (the Platform Computing
server, an operating system scheduling server, or an in-process scheduling server).
Batch Servers
Batch servers provide the command needed to run the
programs that have been submitted for scheduling.
Several batch server types are supported, each
of which provides the command to run a scheduled
SAS job from a specific application in a specific
environment.
The command is included in the metadata definition
for each server.
The batch servers' commands are independent of the type of scheduling server used.
Batch server metadata objects are components of the
SAS Application Server (for example, SASApp), and
can be created by using the Server Manager plug-in in
SAS Management Console.
SAS Java Batch Server: Used for scheduling Java applications that are supplied by SAS and that are invoked and launched by the standard java.exe shell. This server definition is used for reports that are created by applications such as SAS Web Report Studio and campaigns created by SAS Marketing Automation.

SAS DATA Step Batch Server: Used for scheduling SAS programs that contain DATA step and procedure statements. These programs include jobs that are created in and deployed from SAS Data Integration Studio. The metadata definition for a SAS DATA Step Batch server must include a SAS start command that will run in batch mode.

SAS Generic Batch Server: Used for scheduling jobs from Java-based SAS applications that are not specifically supported by the SAS Java Batch server types. If a SAS application adds support for scheduling jobs before a new type of SAS Java Batch server has been created, you can use the SAS Generic Batch server to provide a command to run jobs from the application.
Job Metadata
Job metadata becomes available to the Schedule
Manager when you use a SAS application such as
SAS Data Integration Studio to schedule a job.
The job metadata includes the following information:
the command that is to be used to execute the job
the name of the SAS batch server that is associated
with the job
the deployment directory that is associated with the
job (required only for SAS DATA Step Batch servers)
the filename of the executable program (required only
for SAS DATA Step batch servers)
Flow Metadata
Flow metadata is created when you use Schedule
Manager to create a flow. The flow metadata includes
the following information:
the name of the scheduling server that is to execute
the jobs in the flow
the triggers and dependencies that are associated
with the jobs in the flow
Depending on the scheduling server that the user
specifies, Schedule Manager converts the flow metadata
to the appropriate format and submits it to the scheduling
server.
c. Click to close the Connection Profile window and access the Log On window.
d. Type Bruno as the value for the User ID field and Student1 as the value for the
Password field.
e. Click
c. Click
1) Type Orion Star Jobs as the value for the Name field.
2) Click next to the Directory field.
7) Click
8) Click
d. Accept the default value for the Deployed Job Name field.
e. Verify that /Data Mart Development/Orion Jobs is the value for the Location field.
f. Click to save the information and close the Deploy a job for scheduling window.
g. Click
The Orion Jobs folder shows that the DIFT Populate Order Fact Table job icon has been decorated to signify scheduling. Also, a new object appears in the same folder, DIFT_Populate_Order_Fact_Table.
i.
j.
k. Click
l. Right-click the job object, DIFT Populate Order Fact Table. More options are available:
4. Deploy DIFT Populate Old and Recent Orders Tables for scheduling.
a. Right-click DIFT Populate Old and Recent Orders Tables and select Scheduling Deploy.
b. Verify that SASApp SAS DATA Step Batch Server is selected as the value for the
Batch Server field.
c. Click
next to the Deployment Directory field and select Orion Star Jobs.
d. Accept the default value for the Deployed Job Name field.
e. Verify /Data Mart Development/Orion Jobs is the value for the Location field.
The final settings should resemble the following:
f. Click to save the information and close the Deploy a job for scheduling window. An information message appears.
g. Click
(Steps a through g are repeated in the same way for each of the remaining jobs to be deployed for scheduling.)
9. Access Windows Explorer and verify the creation of the code files.
a. Select Start ⇒ All Programs ⇒ Windows Explorer.
b. Navigate to S:\Workshop\dift\OrionStarJobs.
c. Verify that .sas files were created for each of the deployed jobs.
c. Click to close the Connection Profile window and access the Log On window.
d. Type Ahmed as the value for the User ID field and Student1 as the value for the
Password field.
e. Click
The deployed jobs are displayed and available to be part of the new flow.
Click next to the Scheduling Server field and select Platform Process Manager.
f. Click to move all items from the available list to the selected list.
g. Click to verify that all six deployed jobs are found in the visual flow editor.
e. Click
f.
Click
g. Click
Click Run the deployed job only when all of the conditions occur.
j.
Click
k. Drag the gate node so that it is to the right of and in between the two jobs
DIFT_Populate_Organization_Dimension_Table and
DIFT_Populate_Time_Dimension_Table.
l.
m. Accept Completes successfully as the value for the Event type field.
n. Click
p. Accept Completes successfully as the value for the Event type field.
q. Click
r.
s. Click
t. Click
13. Verify the dependencies established using visual flow editor with the standard interface.
a. Locate the deployed job DIFT_Populate_Time_Dimension_Table.
b. Right-click and select Manage Dependencies.
c. Click
f.
Click
d. Click
e. Click
f. Click next to the Trigger field and select Manually in Scheduling Server.
A message is displayed confirming the successful scheduling.
g. Click
d. Click
f.
i.
j.
Objectives
SAS program as a stored process

[Diagrams: a stored process is a SAS program whose metadata is registered in the metadata repository on the SAS Metadata Server. When executed, a SAS Stored Process can read external files and SAS data sources, and it can produce SAS ODS output or a results package (which can include SAS catalog entries and external files). Stored processes can be run from client applications such as the SAS Information Delivery Portal, SAS Enterprise Guide, SAS Web Report Studio, SAS Information Map Studio, and SAS Visual BI (JMP software).]
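A stored process is an ordinary SAS program registered in metadata. A minimal sketch of such a program follows; the library, table, and analysis variable are hypothetical, and the %STPBEGIN/%STPEND macros initialize and finalize the ODS output that is returned to the client.

```sas
*ProcessBody;                    /* marks the start of the executable code */
%stpbegin;                       /* initialize ODS output for the client   */

title 'Order Totals';
proc means data=orion.order_fact sum;    /* hypothetical library.table */
   var total_retail_price;               /* hypothetical column        */
run;

%stpend;                         /* finalize and deliver the ODS output   */
```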
c. Click to close the Connection Profile window and access the Log On window.
d. Type Bruno as the value for the User ID field and Student1 as the value for the
Password field.
e. Click
f.
Click
c. Click
d. Click next to the SAS server field and select SASApp - Logical Stored Process Server.
e. Click next to the Source code repository field. The Manage Source Code Repositories window is displayed.
1) Click
3) Click
g. Click Package.
h. Click
i.
j.
A new metadata object, a stored process, should now appear in the Extract and Summary folder.
5. Execute the SAS Stored Process using SAS Add-In for Microsoft Office.
a. Select Start ⇒ All Programs ⇒ Microsoft Office ⇒ Microsoft Office Excel 2007.
b. Click the SAS tab.
c. Click
g. Click
i.
Click
6. When finished viewing, select the Office button and then Exit Excel (do not save any changes).
Objectives
Education
Comprehensive training to deliver greater value to your
organization
http://support.sas.com/training/
SAS Publishing
SAS offers a complete selection of publications to help
customers use SAS software to its fullest potential:
http://support.sas.com/publishing/
Computer-based certification exams, typically 60-70 questions and 2-3 hours in length
Preparation materials and practice exams available
Worldwide directory of SAS Certified Professionals
http://support.sas.com/certify/
Support
SAS provides a variety of self-help and assisted-help
resources.
http://support.sas.com/techsup/
User Groups
SAS supports many local, regional, international,
and special-interest SAS user groups.
support.sas.com/usergroups/