Index
SAP HANA - Overview
SAP HANA - Core Architecture
SAP HANA - Data Types
SAP HANA - SQL Operators
SAP HANA - SQL Functions
SAP HANA - SQL Expressions
SAP HANA - SQL Sequences
SAP HANA - SQL Triggers
SAP HANA - SQL Synonym
Working with Tables in SAP HANA
Overview of Row Data Storage and Column Data Storage
SQLScript - A New Avatar of SQL in HANA
SAP HANA Table Types
SAP HANA Procedure - Old Wine in New Bottle!
Calling HANA Procedure in ABAP Report Program
Variable Scope Nesting in Procedure
IF-ELSE Logic in Procedure
For Loop in Procedure
Cursor in HANA
Use of Array in Procedure
SAP HANA Modeling:
- Attribute View
- Analytic View
o Level Hierarchy in SAP HANA
- Calculation View
- Decision Tables
How are the HANA models processed?
Referential Join
Join Types (Inner/Referential/Left Outer/Right Outer/Full Join/Union)
SAP HANA - Excel Integration
SAP HANA - MDX Provider
SAP HANA - Monitoring & Alerting
SAP HANA Interview Questions
SAP HANA Upgrade
Remaining Topics
Decision Tables
External View
HANA Upgrade
SAP HANA is a combination of HANA Database, Data Modeling, HANA Administration and Data
Provisioning in one single suite. In SAP HANA, HANA stands for High-Performance Analytic Appliance.
According to former SAP executive Dr. Vishal Sikka, HANA stands for Hasso's New Architecture. HANA
gained interest by mid-2011, and various Fortune 500 companies then started considering it as an option to
maintain their Business Warehouse needs.
SAP HANA is a combination of software and hardware innovation to process huge amounts of real-time data.
It is based on a multi-core architecture in a distributed system environment.
It is based on row and column types of data storage in the database.
It uses the In-Memory Computing Engine (IMCE) to process and analyze massive amounts of real-time data.
It reduces cost of ownership, increases application performance, and enables new applications to run in a
real-time environment that were not possible before.
It is written in C++ and supports and runs on only one operating system: SUSE Linux Enterprise Server
11 SP1/SP2.
Today, most successful companies respond quickly to market changes and new opportunities. A key to this
is the effective and efficient use of data and information by analysts and managers.
Due to the increase in data volume, it is a challenge for companies to provide access to real-time
data for analysis and business use.
Storing and maintaining large data volumes involves high maintenance costs for IT companies.
Due to the unavailability of real-time data, analysis and processing results are delayed.
SAP HANA was initially developed in Java and C++ and designed to run on only one operating system, SUSE
Linux Enterprise Server 11. The SAP HANA system consists of multiple components that together provide
the computing power of the HANA system.
The most important component of the SAP HANA system is the Index Server, which contains the SQL/MDX
processor to handle query statements for the database.
The HANA system also contains the Name Server, Preprocessor Server, Statistics Server and XS Engine, which
is used to communicate with and host small web applications, plus various other components.
Index Server
The Index Server is the heart of the SAP HANA database system. It contains the actual data and the engines for
processing that data. When SQL or MDX is fired at the SAP HANA system, the Index Server takes care of these
requests and processes them. All HANA processing takes place in the Index Server.
The Index Server contains data engines to handle all SQL/MDX statements that come to the HANA database
system. It also has the Persistence Layer, which is responsible for the durability of the HANA system and
ensures that the HANA system is restored to the most recent state when the system restarts after a failure.
The Index Server also has the Session and Transaction Manager, which manages transactions and keeps track
of all running and closed transactions.
SQL/MDX Processor
It is responsible for processing SQL/MDX transactions with the data engines responsible for running queries.
It segments all query requests and directs them to the correct engine for performance optimization.
It also ensures that all SQL/MDX requests are authorized and provides error handling for efficient processing
of these statements. It contains several engines and processors for query execution −
MDX (Multi Dimension Expression) is query language for OLAP systems like SQL is used for Relational database.
MDX Engine is responsible to handle queries and manipulates multidimensional data stored in OLAP cubes.
Planning Engine is responsible to run planning operations within SAP HANA database.
Calculation Engine converts data into Calculation models to create logical execution plan to support parallel processing
of statements.
Stored Procedure processor executes procedure calls for optimized processing; it converts OLAP cubes to HANA
optimized cubes.
The Transaction Manager coordinates all database transactions and keeps track of all running and closed
transactions. When a transaction is executed or fails, the Transaction Manager notifies the relevant data
engine to take the necessary actions.
The session management component is responsible for initializing and managing sessions and connections for
the SAP HANA system using predefined session parameters.
Persistence Layer
It is responsible for durability and atomicity of transactions in HANA system. Persistence layer provides built in disaster recovery
system for HANA database.
It ensures the database is restored to the most recent state and that all transactions are completed or undone
in case of a system failure or restart.
It also manages data and transaction logs, and contains the data backup, log backup and configuration backup
of the HANA system. Backups are stored as savepoints in the data volumes via a savepoint coordinator, which
is normally set to take a savepoint every 5-10 minutes.
Preprocessor Server
Preprocessor Server in SAP HANA system is used for text data analysis.
Index Server uses preprocessor server for analyzing text data and extracting the information from text data when text search
capabilities are used.
Name Server
The Name Server contains the system landscape information of the HANA system. In a distributed environment
there are multiple nodes, each with multiple CPUs. The Name Server holds the topology of the HANA system
and has information about all the running components and how this information is spread across them.
Statistics Server
This server checks and analyzes the health of all components in the HANA system. The Statistics Server is
responsible for collecting data related to system resources, their allocation and consumption, and the
overall performance of the HANA system.
It also provides historical data related to system performance for analysis purposes, to check and fix
performance-related issues in the HANA system.
XS Engine
The XS Engine helps external Java and HTML-based applications to access the HANA system with the help of an
XS client. The SAP HANA system contains a web server which can be used to host small Java/HTML-based
applications.
The XS Engine transforms the persistence model stored in the database into a consumption model for clients,
exposed via HTTP/HTTPS.
SAP Host Agent
The SAP Host Agent should be installed on all machines that are part of the SAP HANA system landscape. It
is used by the Software Update Manager (SUM) to install automatic updates to all components of the HANA
system in a distributed environment.
LM Structure
LM structure of SAP HANA system contains information about current installation details. This information is used by Software
Update Manager to install automatic updates on HANA system components.
Solution Manager Diagnostic Agent
This diagnostic agent provides all data to SAP Solution Manager to monitor the SAP HANA system. It provides
all the information about the HANA database, including its current state and general information.
It provides configuration details of the HANA system when SAP Solution Manager (SOLMAN) is integrated with
the SAP HANA system.
SAP HANA Studio Repository
The SAP HANA studio repository helps HANA developers update the current version of HANA studio to the
latest version. The studio repository holds the code which performs this update.
Software Update Manager
SAP Marketplace is used to install updates for SAP systems. The Software Update Manager for the HANA system
helps in updating the HANA system from SAP Marketplace.
It is used for software downloads, customer messages, SAP Notes and requesting license keys for the HANA
system. It is also used to distribute HANA studio to end users' systems.
When you create a table, you also need to define the attributes inside it.
SAP HANA supports 7 categories of SQL data types; the one to use depends on the type of data you have to
store in a column.
Numeric
Character/ String
Boolean
Date Time
Binary
Large Objects
Multi-Valued
The following sections list the data types in each category −
Date Time
These data types are used to store date and time in a table in HANA database.
DATE − data type consists of year, month and day information to represent a date value in a column.
Default format for the DATE data type is YYYY-MM-DD.
TIME − data type consists of hours, minutes, and seconds values in a table in the HANA database.
Default format for the TIME data type is HH24:MI:SS.
SECONDDATE − data type consists of year, month, day, hour, minute and second values in a table in the
HANA database. Default format for the SECONDDATE data type is YYYY-MM-DD HH24:MI:SS.
TIMESTAMP − data type consists of date and time information in a table in the HANA database.
Default format for the TIMESTAMP data type is YYYY-MM-DD HH24:MI:SS.FFn, where FFn
represents fractions of a second.
Numeric
TINYINT − stores 8-bit unsigned integer. Min value: 0, max value: 255
SMALLINT − stores 16-bit signed integer. Min value: -32,768, max value: 32,767
INTEGER − stores 32-bit signed integer. Min value: -2,147,483,648, max value: 2,147,483,647
BIGINT − stores 64-bit signed integer. Min value: -9,223,372,036,854,775,808, max value:
9,223,372,036,854,775,807
SMALLDECIMAL and DECIMAL − Min value: -10^38 + 1, max value: 10^38 - 1
REAL − stores 32-bit floating-point number. Min value: -3.40E+38, max value: 3.40E+38
DOUBLE − stores 64-bit floating-point number. Min value: -1.7976931348623157E308, max
value: 1.7976931348623157E308
Boolean
The Boolean data type stores the Boolean values TRUE and FALSE.
Character
Character string types such as VARCHAR, NVARCHAR, ALPHANUM and SHORTTEXT store variable-length character data.
Binary
VARBINARY − stores binary data in bytes. Maximum length is between 1 and 5000 bytes.
Large Objects
Large object types (BLOB, CLOB, NCLOB) are used to store large amounts of data such as text documents and images.
Multivalued
Multivalued data types are used to store a collection of values with the same data type.
Array
Arrays store collections of values with the same data type. They can also contain null values.
An operator is a special character used primarily in a SQL statement's WHERE clause to perform operations,
such as comparisons and arithmetic operations. Operators are used to pass conditions in a SQL query.
Arithmetic Operators
Comparison/Relational Operators
Logical Operators
Set Operators
Arithmetic Operators
Arithmetic operators are used to perform simple calculation functions like addition, subtraction,
multiplication, division and percentage.
Operator Description
+ Addition − Adds values on either side of the operator
- Subtraction − Subtracts right hand operand from left hand operand
* Multiplication − Multiplies values on either side of the operator
/ Division − Divides left hand operand by right hand operand
% Modulus − Divides left hand operand by right hand operand and returns remainder
Comparison Operators
Operator Description
= − Checks if the values of two operands are equal or not; if yes, then the condition becomes true.
!= − Checks if the values of two operands are equal or not; if the values are not equal, then the condition becomes true.
<> − Checks if the values of two operands are equal or not; if the values are not equal, then the condition becomes true.
> − Checks if the value of the left operand is greater than the value of the right operand; if yes, then the condition becomes true.
< − Checks if the value of the left operand is less than the value of the right operand; if yes, then the condition becomes true.
>= − Checks if the value of the left operand is greater than or equal to the value of the right operand; if yes, then the condition becomes true.
<= − Checks if the value of the left operand is less than or equal to the value of the right operand; if yes, then the condition becomes true.
!< − Checks if the value of the left operand is not less than the value of the right operand; if yes, then the condition becomes true.
!> − Checks if the value of the left operand is not greater than the value of the right operand; if yes, then the condition becomes true.
Logical operators
Logical operators are used to pass multiple conditions in an SQL statement, or to manipulate the results
of conditions.
Operator Description
ALL − The ALL operator is used to compare a value to all values in another value set.
AND − The AND operator allows the existence of multiple conditions in an SQL statement's WHERE clause.
ANY − The ANY operator is used to compare a value to any applicable value in the list according to the condition.
BETWEEN − The BETWEEN operator is used to search for values that are within a set of values, given the minimum value and the maximum value.
EXISTS − The EXISTS operator is used to search for the presence of a row in a specified table that meets certain criteria.
IN − The IN operator is used to compare a value to a list of literal values that have been specified.
LIKE − The LIKE operator is used to compare a value to similar values using wildcard operators.
NOT − The NOT operator reverses the meaning of the logical operator with which it is used, e.g. NOT EXISTS, NOT BETWEEN, NOT IN. It is a negate operator.
OR − The OR operator is used to combine multiple conditions in an SQL statement's WHERE clause.
IS NULL − The IS NULL operator is used to compare a value with a NULL value.
UNIQUE − The UNIQUE operator searches every row of a specified table for uniqueness (no duplicates).
Set Operators
Set operators are used to combine the results of two queries into a single result. The data types should be
the same for both queries.
UNION − Combines the results of two or more SELECT statements, eliminating duplicate rows.
UNION ALL − Similar to UNION, but it also shows the duplicate rows.
INTERSECT − Combines two SELECT statements and returns only the records that are common to both.
In the case of INTERSECT, the number of columns and the data types must be the same in both queries.
MINUS − Combines the results of two SELECT statements and returns only those rows that belong to the
first result set, eliminating the rows of the second statement from the output of the first.
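A minimal sketch, assuming two hypothetical tables with the same structure:

```sql
-- UNION eliminates duplicate rows from the combined result
SELECT ID, NAME FROM CUSTOMERS_EU
UNION
SELECT ID, NAME FROM CUSTOMERS_US;

-- UNION ALL keeps the duplicates
SELECT ID, NAME FROM CUSTOMERS_EU
UNION ALL
SELECT ID, NAME FROM CUSTOMERS_US;
```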
Numeric Functions
String Functions
Fulltext Functions
Datetime Functions
Aggregate Functions
Data Type Conversion Functions
Window Functions
Series Data Functions
Miscellaneous Functions
Numeric Functions
These are inbuilt numeric functions in SQL, used in scripting. They take numeric values or strings with
numeric characters and return numeric values.
ACOS, ASIN, ATAN, ATAN2 (These functions return trigonometric value of the argument)
Various other numeric functions can also be used − MOD, POWER, RAND, ROUND, SIGN, SIN, SINH,
SQRT, TAN, TANH, UMINUS
String Functions
Various SQL string functions can be used in HANA with SQL scripting. Commonly used string functions
include CONCAT, LCASE, UCASE, LEFT, LENGTH, LOCATE and REPLACE.
Other string functions that can be used are LPAD, LTRIM, RTRIM, STRTOBIN, SUBSTR_AFTER,
SUBSTR_BEFORE, SUBSTRING, TRIM, UNICODE, RPAD and BINTOSTR.
Datetime Functions
There are various date/time functions that can be used in HANA SQL scripts. Common examples are
CURRENT_DATE, CURRENT_TIME, CURRENT_TIMESTAMP, ADD_DAYS, ADD_MONTHS and ADD_YEARS.
Data Type Conversion Functions
These functions are used to convert one data type to another, or to check whether a conversion is possible.
Commonly used data type conversion functions in HANA SQL scripts include TO_BIGINT, TO_BINARY, TO_BLOB,
TO_DATE, TO_DATS, TO_DECIMAL, TO_DOUBLE, TO_FIXEDCHAR, TO_INT, TO_INTEGER, TO_NCLOB, etc.
There are also various window functions and other miscellaneous functions that can be used in HANA SQL scripts.
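A small illustration of conversion functions (HANA's DUMMY table is used here to select constant values):

```sql
SELECT TO_DATE('2015-01-31', 'YYYY-MM-DD') AS converted_date,
       TO_INT('42')                        AS converted_int,
       TO_DECIMAL('123.45', 5, 2)          AS converted_decimal
FROM DUMMY;
```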
Case Expressions
Function Expressions
Aggregate Expressions
Subqueries in Expressions
Case Expression
This is used to pass multiple conditions in a SQL expression. It allows the use of IF-ELSE-THEN logic without using procedures
in SQL statements.
Example
SELECT COUNT( CASE WHEN sal < 2000 THEN 1 ELSE NULL END ) count1,
COUNT( CASE WHEN sal BETWEEN 2001 AND 4000 THEN 1 ELSE NULL END ) count2,
COUNT( CASE WHEN sal > 4000 THEN 1 ELSE NULL END ) count3 FROM emp;
This statement will return count1, count2 and count3 with integer values according to the conditions passed.
Function Expressions
Aggregate Expressions
Aggregate functions are used to perform complex calculations like Sum, Percentage, Min, Max, Count, Mode, Median, etc.
An aggregate expression uses aggregate functions to calculate a single value from multiple values.
Aggregate functions − Sum, Count, Minimum, Maximum. These are applied on measure values (facts) and are always
associated with a dimension.
Average ()
Count ()
Maximum ()
Median ()
Minimum ()
Mode ()
Sum ()
Subqueries in Expressions
A subquery as an expression is a SELECT statement. When it is used in an expression, it returns zero or a single value.
A subquery is used to return data that will be used in the main query as a condition to further restrict the data to be retrieved.
Subqueries can be used with the SELECT, INSERT, UPDATE, and DELETE statements along with the operators like =, <, >, >=,
<=, IN, BETWEEN etc.
An ORDER BY cannot be used in a subquery, although the main query can use an ORDER BY. The GROUP BY can be
used to perform the same function as the ORDER BY in a subquery.
Subqueries that return more than one row can only be used with multiple value operators, such as the IN operator.
The SELECT list cannot include any references to values that evaluate to a BLOB, ARRAY, CLOB, or NCLOB.
A subquery cannot be immediately enclosed in a set function.
The BETWEEN operator cannot be used with a subquery; however, the BETWEEN operator can be used within the
subquery.
Subqueries are most frequently used with the SELECT statement. The basic syntax is as follows −
Example
SELECT * FROM CUSTOMERS
WHERE ID IN (SELECT ID
FROM CUSTOMERS
WHERE SALARY > 4500) ;
+----+----------+-----+---------+----------+
| ID | NAME | AGE | ADDRESS | SALARY |
+----+----------+-----+---------+----------+
| 4 | Chaitali | 25 | Mumbai | 6500.00 |
| 5 | Hardik | 27 | Bhopal | 8500.00 |
| 7 | Muffy | 24 | Indore | 10000.00 |
+----+----------+-----+---------+----------+
The simplest way in MySQL to use sequences is to define a column as AUTO_INCREMENT and let MySQL take care
of the rest.
Example
Try out the following example. It creates a table and then inserts a few rows, where it is not required to
give the record ID because it is auto-incremented by MySQL.
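SAP HANA itself has no AUTO_INCREMENT; the closest equivalent is a sequence. A minimal sketch, with illustrative object names:

```sql
CREATE SEQUENCE my_seq START WITH 1 INCREMENT BY 1;

-- NEXTVAL generates the next record ID on insert
INSERT INTO my_table (ID, NAME) VALUES (my_seq.NEXTVAL, 'first row');

-- CURRVAL returns the last value generated in this session
SELECT my_seq.CURRVAL FROM DUMMY;
```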
Triggers can be defined on the table, view, schema, or database with which the event is associated.
Benefits of Triggers
Auditing
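A minimal sketch of a trigger used for auditing inserts (the table and column names are illustrative):

```sql
CREATE TRIGGER log_insert_trigger
AFTER INSERT ON target_table
REFERENCING NEW ROW new_row
FOR EACH ROW
BEGIN
    -- record the inserted key and a timestamp in an audit table
    INSERT INTO audit_log (ID, CHANGED_AT)
    VALUES (:new_row.ID, CURRENT_TIMESTAMP);
END;
```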
Synonyms permit applications to function irrespective of which user owns the table and which database
holds the table or object.
The CREATE SYNONYM statement is used to create a synonym for a table, view, package, procedure, object,
etc.
Example
There is a table Customer of efashion, located on Server1. To access it from Server2, a client
application would have to use the name Server1.efashion.Customer. If we now change the location of the
Customer table, the client application would have to be modified to reflect the change.
To address this, we can create a synonym Cust_Table on Server2 for the Customer table on
Server1. Now the client application only has to use the single-part name Cust_Table to reference this table.
Now, if the location of this table changes, you will have to modify the synonym to point to the new
location of the table.
As there is no ALTER SYNONYM statement, you have to drop the synonym Cust_Table and then re-create it
with the same name, pointing it to the new location of the Customer table.
Public Synonyms
Public Synonyms are owned by PUBLIC schema in a database. Public synonyms can be referenced by
all users in the database. They are created by the application owner for the tables and other objects
such as procedures and packages so the users of the application can see the objects.
Syntax
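The statement itself is missing here; the general form is:

```sql
CREATE PUBLIC SYNONYM synonym_name FOR object_name;
```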
Private Synonyms
Private Synonyms are used in a database schema to hide the true name of a table, procedure, view or
any other database object.
Private synonyms can be referenced only by the schema that owns the table or object.
Syntax
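The statement itself is missing here; the general form is:

```sql
CREATE SYNONYM synonym_name FOR schema_name.object_name;
```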
Drop a Synonym
Synonyms can be dropped using the DROP SYNONYM command. If you are dropping a public synonym,
you have to use the keyword PUBLIC in the drop statement.
Syntax
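The statement itself is missing here; the general form is:

```sql
DROP SYNONYM synonym_name;

-- for a public synonym:
DROP PUBLIC SYNONYM synonym_name;
```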
1. Select your schema. Right-click and select "SQL Console". Alternatively, you can click the "SQL"
button on the top panel.
Right-click on Tables under the Navigator tab and click Refresh to display the name of the newly
created table.
Right-click on the table and click "Open Definition" to see the table details.
1. Open the SQL editor and copy the below SQL statements. Click the Execute icon (or F8).
2. Right-click on the table and select "Open Content". This will show the content of the table.
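The statements themselves are missing from this copy; a minimal sketch with illustrative names:

```sql
CREATE COLUMN TABLE "<SCHEMA_NAME>"."CUSTOMER" (
    ID   INTEGER PRIMARY KEY,
    NAME VARCHAR(20)
);

INSERT INTO "<SCHEMA_NAME>"."CUSTOMER" VALUES (1, 'John');
INSERT INTO "<SCHEMA_NAME>"."CUSTOMER" VALUES (2, 'Maria');
```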
1. You can change the table type from COLUMN to ROW or vice versa. Open the SQL editor and copy
the below SQL statement. Click Execute (or F8).
2. Open the table definition by right-clicking on the table and selecting "Open Definition". Check that the
table type has been changed from COLUMN to ROW.
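The statement itself is missing from this copy; a sketch with an illustrative table name:

```sql
-- convert the column table to row storage
ALTER TABLE "<SCHEMA_NAME>"."CUSTOMER" ALTER TYPE ROW;

-- and back to column storage
ALTER TABLE "<SCHEMA_NAME>"."CUSTOMER" ALTER TYPE COLUMN;
```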
Relational databases typically use row-based data storage. However, column-based storage is more suitable
for many business applications. SAP HANA supports both row-based and column-based storage, and is
particularly optimized for column-based storage.
Conceptually, a database table is a two-dimensional structure composed of cells arranged in rows and columns.
Because computer memory is structured linearly, there are two options for the sequence in which cell values
are stored in contiguous memory locations: in row storage, the values of a row are stored next to each
other; in column storage, the values of a column are stored next to each other.
Better Compression:
Columnar data storage allows highly efficient compression, because the majority of the columns contain only
a few distinct values (compared to the number of rows).
Row-based storage is preferable if:
o The application needs to process only a single record at one time (many selects and/or
updates of single records).
o The application typically needs to access a complete record (or row).
o Neither aggregations nor fast searching are required.
o The table has a small number of rows (e.g. configuration tables, system tables).
Column-based storage is preferable for analytic applications, where aggregations are used and fast search
and processing are required. Row-based storage is not well suited here: in row-based tables, all data in a
row has to be read even though the requirement may be to access data from only a few columns. Hence, these
queries on huge amounts of data take a lot of time.
In columnar tables, the data of a column is stored physically next to each other, significantly increasing
the speed of certain data queries.
The following example shows the different usage of column and row storage, and positions them relative to
row and column queries. Column storage is most useful for OLAP queries (queries using any SQL aggregate
functions) because these queries get just a few attributes from every data entry. But for traditional OLTP
queries (queries not using any SQL aggregate functions), it is more advantageous to store all attributes side-
by-side in row tables. HANA combines the benefits of both row- and column-storage tables.
Conclusion:
To enable fast on-the-fly aggregations, ad-hoc reporting, and to benefit from compression mechanisms it is
recommended that transaction data is stored in a column-based table.
The SAP HANA database allows joining row-based tables with column-based tables. However, it is more
efficient to join tables that are located in the same store. For example, master data that is
frequently joined with transaction data should also be stored in column-based tables.
The main goal of SQLScript is to allow the execution of data-intensive calculations inside SAP HANA.
There are two reasons why this is required to achieve the best performance:
Moving calculations to the database layer eliminates the need to transfer large amounts of data from the
database to the application.
Calculations need to be executed in the database layer to get the maximum benefit from SAP HANA
features such as fast column operations, query optimization and parallel execution.
If applications fetch data as sets of rows for processing at the application level, they will not benefit
from these features.
Advantage of SQLScript:
A table type is created using statement CREATE TYPE and can be deleted using statement DROP TYPE.
Syntax:
CREATE TYPE [schema.]name AS TABLE
(name1 type1 [, name2 type2,...])
Example:
1. Open HANA studio and run the below SQL statement to create a table type.
Replace SCHEMA_NAME with your schema.
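The statement itself is missing from this copy; a minimal sketch with illustrative names:

```sql
CREATE TYPE "<SCHEMA_NAME>"."TT_PRODUCT" AS TABLE (
    PRODUCT_ID   VARCHAR(10),
    PRODUCT_NAME VARCHAR(20),
    CATEGORY     VARCHAR(20)
);
```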
2. After executing the statement you can go to the schema and find the table type under the Procedures ->
Table Types section.
4. Remember that we cannot add records to a table type. If you try to insert a record, you will get an error.
What is Procedure?
A procedure is a unit/module that performs a particular task. Procedures are reusable processing blocks, and
describe a sequence of data transformations. Procedures can have multiple input and output parameters
(scalar or table types) DROP and CREATE statements are used to modify the definition of a procedure. A
procedure can be created as read only (without side-effects) or read-write (with side-effects)
There are 3 ways to create a procedure in HANA: using the SQL editor, using the Modeler wizard in the
Modeler perspective, or using an SAP HANA XS project in the "SAP HANA Development" perspective. We will
learn about each of these approaches in detail.
Note: Do not get confused by questions like - why so many ways to create a procedure? Which one
should I use? Which one is better? We will explain these later. Right now let us just learn each of these
approaches.
Prerequisites:
Before creating a procedure as part of this exercise, we need to create the tables that will be used. Let us do
that.
Example Scenario:
We need to find out the sales value for different regions. We also need to calculate NET_AMOUNT based on the
DISCOUNT. The DISCOUNT value will be passed as an input parameter.
We will create a procedure to achieve this.
Create Tables:
Open HANA Studio and expand the SAP HANA system. Go to your schema. Right-click on your schema
and select SQL editor.
Note: In this example schema name is "SAP_HANA_TUTORIAL". In case you want to create a new
schema use below query.
create schema <schema_name>;
Copy and paste the below script in SQL editor and execute.
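The script itself is missing from this copy; a minimal sketch matching the scenario (the table and column names are assumptions):

```sql
CREATE COLUMN TABLE "<SCHEMA_NAME>"."SALES" (
    PRODUCT_NAME    VARCHAR(20),
    REGION_NAME     VARCHAR(20),
    SUB_REGION_NAME VARCHAR(20),
    SALES_AMOUNT    DECIMAL(15,2)
);

INSERT INTO "<SCHEMA_NAME>"."SALES" VALUES ('Shirts', 'Asia', 'India', 2000.00);
INSERT INTO "<SCHEMA_NAME>"."SALES" VALUES ('Shirts', 'Europe', 'Germany', 3000.00);
```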
Open the SQL editor of your schema and execute the following command line:
GRANT SELECT ON SCHEMA <YOUR SCHEMA> TO _SYS_REPO WITH GRANT OPTION;
If you miss this step, an error will occur when you activate your views later.
We need to create a table type, which will be used for output parameter of the procedure.
Execute the below SQL statement.
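The statement is missing from this copy; a sketch of a table type matching the report columns (the names are assumptions):

```sql
CREATE TYPE "<SCHEMA_NAME>"."TT_SALES_REPORT" AS TABLE (
    PRODUCT_NAME    VARCHAR(20),
    REGION_NAME     VARCHAR(20),
    SUB_REGION_NAME VARCHAR(20),
    NET_AMOUNT      DECIMAL(15,2)
);
```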
Syntax:
CREATE PROCEDURE {schema.}name
{({IN|OUT|INOUT}
param_name data_type {,...})}
{LANGUAGE <LANG>} {SQL SECURITY <MODE>}
{READS SQL DATA {WITH RESULT VIEW <view_name>}} AS
BEGIN
...
END
READS SQL DATA defines a procedure as read-only. An implementation LANGUAGE can be specified; the
default is SQLSCRIPT. WITH RESULT VIEW is used to create a column view for an output parameter of
type table.
FROM :var2
GROUP BY PRODUCT_NAME, REGION_NAME, SUB_REGION_NAME;
END;
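Only the tail of the procedure body survives above. A complete sketch consistent with that fragment (the table, type, and column names are assumptions):

```sql
CREATE PROCEDURE "<SCHEMA_NAME>"."PROCEDURE_SALES_REPORT" (
    IN  discount     DECIMAL(15,2),
    OUT sales_report "<SCHEMA_NAME>"."TT_SALES_REPORT")
LANGUAGE SQLSCRIPT
SQL SECURITY INVOKER
READS SQL DATA
AS
BEGIN
    -- compute the net amount per row from the DISCOUNT input parameter
    var2 = SELECT PRODUCT_NAME, REGION_NAME, SUB_REGION_NAME,
                  SALES_AMOUNT * (1 - :discount / 100) AS NET_AMOUNT
           FROM "<SCHEMA_NAME>"."SALES";
    -- aggregate per product and region
    sales_report = SELECT PRODUCT_NAME, REGION_NAME, SUB_REGION_NAME,
                          SUM(NET_AMOUNT) AS NET_AMOUNT
                   FROM :var2
                   GROUP BY PRODUCT_NAME, REGION_NAME, SUB_REGION_NAME;
END;
```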
Refresh the procedure folder under schema in the left Systems tab. You will see the created procedure there.
We call procedure using CALL statement. Execute the below statement to call this procedure.
For table output parameters it is possible to either pass a (temporary) table name or to pass NULL. The
option NULL will display the output directly on the client output screen.
Please note: if there is no other input parameter and you want to see the output directly on the client
output screen, call it this way:
CALL SCHEMA_NAME."PROCEDURE_SALES_REPORT" (null);
ALTER PROCEDURE RECOMPILE:
This statement is used to manually trigger procedure recompilation, with or without a plan.
It is a very useful statement to test procedures for errors without recreating them,
mainly when we move objects to a production system.
It is recommended to run this statement WITHOUT PLAN in production to avoid overhead on
the system.
Examples:
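A sketch (the procedure name is illustrative):

```sql
ALTER PROCEDURE "<SCHEMA_NAME>"."PROCEDURE_SALES_REPORT" RECOMPILE;
ALTER PROCEDURE "<SCHEMA_NAME>"."PROCEDURE_SALES_REPORT" RECOMPILE WITH PLAN;
```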
CALL:
This statement is used to execute the procedures created in SAP HANA database.
While executing the procedure using the CALL statement we can pass both INPUT and OUTPUT
parameters.
Parameters passed to a procedure are scalar constants and can be passed as IN, OUT or
INOUT parameters.
Scalar parameters are assumed to be NOT NULL.
Arguments for IN parameters of table type can be either physical tables or views.
The actual value passed for tabular OUT parameters must be "?".
We have two main options while executing a procedure using the CALL statement:
1. WITH OVERVIEW:
Defines that the result of a procedure call will be stored directly into a physical table.
Calling a procedure WITH OVERVIEW returns one result set that holds the information of
which table contains the result of a particular output variable.
Scalar outputs will be represented as temporary tables with only one cell.
When we pass existing tables to the output parameters WITH OVERVIEW will insert the result
set tuples of the procedure into the provided tables.
When we pass NULL to the output parameters, temporary tables holding the result sets will be
generated.
These tables will be dropped automatically once the database session is closed.
2. IN DEBUG MODE:
When specified, additional debug information will be created during the execution of the
procedure.
This information can be used to debug the instantiation of the internal execution plan of the
procedure.
Examples:
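A sketch, assuming a physical result table already exists for the output parameter (the names are illustrative):

```sql
-- store the result set directly into the provided physical table
CALL "<SCHEMA_NAME>"."PROCEDURE_SALES_REPORT" (10, "<SCHEMA_NAME>"."SALES_RESULT")
WITH OVERVIEW;

-- pass NULL instead to have a temporary result table generated
CALL "<SCHEMA_NAME>"."PROCEDURE_SALES_REPORT" (10, NULL) WITH OVERVIEW;
```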
Introduction:
SQLScript supports local variable declaration in a nested block. Local variables are only visible in the scope
of the block in which they are defined. It is also possible to define local variables inside LOOP / WHILE
/FOR / IF-ELSE control structures.
Create procedure:
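The script is missing from this copy; the following is a sketch consistent with the CALL statement and the output described below (the variable names are illustrative):

```sql
CREATE PROCEDURE "<SCHEMA_NAME>"."VARIABLE_SCOPE_EXAMPLE" (OUT val INT)
LANGUAGE SQLSCRIPT
AS
BEGIN
    DECLARE a INT := 1;
    BEGIN
        DECLARE a INT := 2;
        BEGIN
            -- this declaration shadows the outer ones and is visible only here
            DECLARE a INT := 3;
        END;
        -- reads the value 2 declared in this block, not 3
        val := :a;
    END;
END;
```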
Call procedure:
CALL "<SCHEMA_NAME>"."VARIABLE_SCOPE_EXAMPLE"(?);
The output will be 2.
From this result you can see that the innermost nested block's value of 3 has not been passed to the val
variable.
Syntax:
IF <bool_expr1>
THEN
<then_stmts1>
ELSEIF <bool_expr2>
THEN
<then_stmts2>
ELSE
<else_stmts3>
END IF
The IF statement consists of a Boolean expression <bool_expr1>. If this expression evaluates to true, the
statements <then_stmts1> in the mandatory THEN block are executed. The IF statement ends with END IF.
The remaining parts are optional.
In this example we will pass product id, product name and category as input. If the product already exists,
we will update the record. If it does not exist, we will create a new record.
Copy and paste the below script in SQL editor and execute.
Note: If you already have created the PRODUCT table in previous example, then skip this step.
Create procedure:
-- Note: the procedure header was lost in this copy; the name UPSERT_PRODUCT is assumed.
CREATE PROCEDURE "<SCHEMA_NAME>"."UPSERT_PRODUCT" (
IN productid VARCHAR(10),
IN productname VARCHAR(20),
IN category VARCHAR(20))
LANGUAGE SQLSCRIPT
SQL SECURITY INVOKER
AS
/*********BEGIN PROCEDURE SCRIPT ************/
BEGIN
DECLARE found INT;
SELECT COUNT(*) INTO found FROM "<SCHEMA_NAME>"."PRODUCT"
WHERE PRODUCT_ID = :productid;
IF :found = 0
THEN
INSERT INTO "<SCHEMA_NAME>"."PRODUCT"
VALUES (:productid, :productname, :category);
ELSE
UPDATE "<SCHEMA_NAME>"."PRODUCT"
SET PRODUCT_NAME = :productname, CATEGORY = :category
WHERE PRODUCT_ID = :productid;
END IF;
SELECT PRODUCT_ID, PRODUCT_NAME, CATEGORY FROM
"<SCHEMA_NAME>"."PRODUCT";
END;
Call procedure:
BREAK:
Specifies that a loop should terminate immediately, skipping any remaining iterations.
CONTINUE:
Specifies that a loop should stop processing the current iteration and immediately start processing
the next.
In this example we will define a loop sequence. While the loop value :v_index is less than 3, the iteration is
skipped with CONTINUE; when it reaches 5, the loop is terminated with BREAK; otherwise a counter is incremented.
Create procedure:
--REPLACE <SCHEMA_NAME> WITH YOUR SCHEMA NAME
--NOTE: the CREATE PROCEDURE header was missing here and has been reconstructed
--from the call statement below.
CREATE PROCEDURE <SCHEMA_NAME>."FOR_LOOP_EXAMPLE"(
OUT output_var INTEGER)
LANGUAGE SQLSCRIPT
SQL SECURITY INVOKER
AS
BEGIN
DECLARE count INTEGER := 0;
FOR v_index IN 0 .. 10 DO
IF :v_index < 3 THEN
CONTINUE;
ELSEIF :v_index = 5 THEN
BREAK;
ELSE
count := count + 1;
END IF;
END FOR;
output_var := count;
END;
Call procedure:
CALL <SCHEMA_NAME>."FOR_LOOP_EXAMPLE"(?);
In this article we will show an example of how to use exceptions inside procedures.
Introduction:
Exception handling is a method for handling exception and completion conditions in an SQLScript
procedure. There are three tools that can be used:
o EXIT HANDLER
o CONDITION
o SIGNAL or RESIGNAL
The DECLARE EXIT HANDLER parameter allows you to define exception handlers to process exception
conditions in your procedures.
You can explicitly signal an exception and completion condition within your code using SIGNAL and
RESIGNAL.
In this example we will try to insert a record into a table in a procedure. If the record is already there, it
will throw a “unique constraint violated” error.
Create Table:
CREATE COLUMN TABLE <SCHEMA_NAME>.TABLE1 (
ID INTEGER PRIMARY KEY,
NAME VARCHAR(10)
);
Create procedure:
Copy and paste the below script to create the procedure.
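The procedure script is missing here. A minimal sketch, with a signature assumed from the call below and messages consistent with the outputs quoted below, might be:

```sql
--REPLACE <SCHEMA_NAME> WITH YOUR SCHEMA NAME
--Sketch only: parameter names and the result text are assumptions.
CREATE PROCEDURE <SCHEMA_NAME>."EXCEPTION_EXAMPLE1"(
IN id INTEGER,
IN name VARCHAR(10),
OUT result VARCHAR(200))
LANGUAGE SQLSCRIPT
SQL SECURITY INVOKER
AS
BEGIN
  -- runs instead of the remaining statements when any SQL exception is raised
  DECLARE EXIT HANDLER FOR SQLEXCEPTION
    result := 'SQL Exception occurred. Error Code is: ' || ::SQL_ERROR_CODE
           || ' Error message is: ' || ::SQL_ERROR_MESSAGE;
  INSERT INTO <SCHEMA_NAME>.TABLE1 VALUES (:id, :name);
  result := 'Product "' || TO_VARCHAR(:id) || '" inserted successfully';
END;
```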
Call procedure:
Call the procedure using below statement.
CALL <SCHEMA_NAME>."EXCEPTION_EXAMPLE1"(1, 'A', ?);
First time we call this procedure, we will get below message as output.
“Product "1" inserted successfully”
Next time if we call the procedure with same input parameter, we will get below message as error.
“SQL Exception occurred. Error Code is: 301 Error message is: unique constraint violated: Table(TABLE1)”
Explanation:
When an exception occurs, the invocation will be suspended and the subsequent operations will not be
executed. After suspending the procedure, the action operations of the EXIT HANDLER are executed.
Note: We can use ::SQL_ERROR_CODE and ::SQL_ERROR_MESSAGE to get the SQL error code
and the related error message of the caught exception.
Cursor in HANA
A cursor is a set of rows together with a pointer that identifies a current row.
Cursors are defined after the signature of the procedure and before the procedure’s body. The cursor is
defined with a name, optionally a list of parameters, and an SQL SELECT statement.
The cursor provides the functionality to iterate through a query result row-by-row. Updating cursors is not
supported.
Note: Avoid using cursors when it is possible to express the same logic with plain SQL, as
cursors cannot be optimized the way set-based SQL can.
In this example, we need to update the sales price of each record. We will pass the increased rate and use a
cursor to update each record.
Create Table:
CREATE COLUMN TABLE <SCHEMA_NAME>.PRODUCT_DETAILS (
PRODUCT_ID INTEGER PRIMARY KEY,
PRODUCT_NAME VARCHAR(100),
PRICE FLOAT
);
INSERT INTO "<SCHEMA_NAME>"."PRODUCT_DETAILS" VALUES(1,'SHIRTS', 500);
INSERT INTO "<SCHEMA_NAME>"."PRODUCT_DETAILS" VALUES(2,'JACKETS', 2000);
INSERT INTO "<SCHEMA_NAME>"."PRODUCT_DETAILS" VALUES(3,'TROUSERS', 1000);
INSERT INTO "<SCHEMA_NAME>"."PRODUCT_DETAILS" VALUES(4,'COATS', 5000);
INSERT INTO "<SCHEMA_NAME>"."PRODUCT_DETAILS" VALUES(5,'PURSE', 800);
Create procedure:
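The procedure script did not survive here. A minimal sketch, assuming a hypothetical procedure name and parameter, that iterates the table with a cursor and applies the increase rate could be:

```sql
--REPLACE <SCHEMA_NAME> WITH YOUR SCHEMA NAME
--Sketch only: the procedure name UPDATE_PRICE_CURSOR and the parameter are assumptions.
CREATE PROCEDURE <SCHEMA_NAME>."UPDATE_PRICE_CURSOR"(
IN increase_rate DECIMAL(5,2))
LANGUAGE SQLSCRIPT
SQL SECURITY INVOKER
AS
-- the cursor is declared after the signature and before the procedure's body
CURSOR c_products FOR
  SELECT PRODUCT_ID, PRICE FROM "<SCHEMA_NAME>"."PRODUCT_DETAILS";
BEGIN
  -- iterate the result set row by row and update each record
  FOR cur_row AS c_products DO
    UPDATE "<SCHEMA_NAME>"."PRODUCT_DETAILS"
    SET PRICE = cur_row.PRICE * (1 + :increase_rate / 100)
    WHERE PRODUCT_ID = cur_row.PRODUCT_ID;
  END FOR;
END;
```

It could then be called, for example, with CALL <SCHEMA_NAME>."UPDATE_PRICE_CURSOR"(10) to raise all prices by 10 percent.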
Call procedure:
In this article we will show an example of how to create and use arrays inside procedures.
Introduction:
In SQLScript, an array can be declared using the ARRAY function.
Example 1:
Define an integer array that contains the numbers 1, 2 and 3.
array_id INTEGER ARRAY[] := ARRAY(1, 2, 3);
Example 2:
Define an empty array of type INTEGER.
array_int INTEGER ARRAY;
UNNEST Function:
The UNNEST function converts an array into a table. UNNEST returns a table including a row for each
element of the array specified.
If there are multiple arrays given, the number of rows will be equal to the largest cardinality among the
cardinalities of the arrays. In the returned table, the cells that are not corresponding to the elements of the
arrays are filled with NULL values.
Note: The UNNEST function cannot be referenced directly in FROM clause of a SELECT statement.
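The cardinality behavior described above can be sketched with a small anonymous block (assuming a HANA version that supports DO BEGIN ... END anonymous blocks; names are illustrative):

```sql
-- Illustrative sketch: UNNEST two arrays of different cardinality
DO BEGIN
  DECLARE ids INTEGER ARRAY := ARRAY(1, 2, 3);
  DECLARE names VARCHAR(10) ARRAY := ARRAY('A', 'B');
  -- the result has 3 rows; NAME is filled with NULL in the third row
  tab = UNNEST(:ids, :names) AS ("ID", "NAME");
  SELECT * FROM :tab;
END;
```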
In this example we are going to create 4 arrays and build a table from them. The result will be returned
as an output parameter of a table type.
Create table type:
"PRICE" DECIMAL(15,2),
"SALEPRICE" DECIMAL(15,2)
);
To know more about table type, read SAP HANA Table Type.
Create procedure:
Copy and paste the below script to create the procedure.
--REPLACE <SCHEMA_NAME> WITH YOUR SCHEMA NAME
CREATE PROCEDURE <SCHEMA_NAME>."PRODUCT_ARRAY"(
OUT output_table <SCHEMA_NAME>."TT_PRODUCT_SALES" )
LANGUAGE SQLSCRIPT
SQL SECURITY INVOKER
AS
/*********BEGIN PROCEDURE SCRIPT ************/
BEGIN
DECLARE productid VARCHAR(20) ARRAY;
DECLARE category VARCHAR(20) ARRAY;
DECLARE price DECIMAL(15,2) ARRAY;
DECLARE saleprice DECIMAL(15,2) ARRAY;
productid[1] := 'ProductA';
productid[2] := 'ProductB';
productid[3] := 'ProductC';
category[1] := 'CategoryA';
category[2] := 'CategoryB';
category[3] := 'CategoryC';
price[1] := 19.99;
price[2] := 29.99;
price[3] := 39.99;
saleprice[1] := 15.99;
saleprice[2] := 25.99;
saleprice[3] := 35.99;
--NOTE: the closing statements were missing and have been reconstructed.
output_table = UNNEST(:productid, :category, :price, :saleprice)
AS ("PRODUCT_ID", "CATEGORY", "PRICE", "SALEPRICE");
END;
Call procedure:
Call the procedure using below statement.
CALL <SCHEMA_NAME>."PRODUCT_ARRAY"(?)
HANA Modeling is done on top of the tables available in the Catalog tab under a Schema in HANA studio, and all
views are saved under the Content tab under a Package.
You can create a new Package under the Content tab in HANA studio by right-clicking on Content → New.
All Modeling Views created inside one package come under the same package in HANA studio and
are categorized according to View Type.
Each View has a different structure for Dimension and Fact tables. Dim tables are defined with master data,
and a Fact table has Primary Keys for the dimension tables plus measures such as Number of Units sold, Average
delay time, Total Price, etc.
Dimension Table contains master data and is joined with one or more fact tables to make some business
logic. Dimension tables are used to create schemas with fact tables and can be normalized.
Suppose a company sells products to customers. Every sale is a fact that happens within the company and
the fact table is used to record these facts.
For example, row 3 in the fact table records the fact that customer 1 (Brian) bought one item on day 4. In
a complete example, we would also have a product table and a time table so that we know what was
bought and exactly when.
The fact table lists events that happen in our company (or at least the events that we want to analyze: Number of
Units Sold, Margin, and Sales Revenue). The Dimension tables list the factors (Customer, Time, and Product)
by which we want to analyze the data.
Schemas are logical descriptions of the tables in a Data Warehouse. Schemas are created by joining multiple fact
and Dimension tables to meet some business logic.
A database uses the relational model to store data. A Data Warehouse, however, uses Schemas that join dimension
and fact tables to meet business logic. There are three types of Schemas used in a Data Warehouse −
Star Schema
Snowflake Schema
Galaxy Schema
Star Schema
In a Star Schema, each Dimension is joined to one single Fact table. Each Dimension is represented by only
one dimension table, and the dimension tables are not further normalized.
Dimension Table contains set of attribute that are used to analyze the data.
Example − In the example given below, we have a Fact table FactSales that has Primary keys for all the Dim
tables and the measures units_sold and dollars_sold for analysis.
Each Dimension table is connected to Fact table as Fact table has Primary Key for each Dimension Tables
that is used to join two tables.
Facts/Measures in Fact Table are used for analysis purpose along with attribute in Dimension tables.
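As an illustration, a typical star-schema query over this model might look like the sketch below. FactSales, units_sold and dollars_sold come from the text; DimCustomer and its columns are hypothetical names used only for illustration:

```sql
-- Hypothetical star-schema query: join the fact table to one dimension
-- and aggregate the measures by a dimension attribute.
SELECT d.customer_name,
       SUM(f.units_sold)   AS total_units,
       SUM(f.dollars_sold) AS total_revenue
FROM FactSales f
INNER JOIN DimCustomer d
  ON f.customer_key = d.customer_key
GROUP BY d.customer_name;
```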
Snowflake Schema
In a Snowflake schema, some Dimension tables are further normalized, and the Dim tables are connected to a
single Fact Table. Normalization is used to organize the attributes and tables of a database to minimize data
redundancy.
Normalization involves breaking a table into less redundant smaller tables without losing any information
and smaller tables are joined to Dimension table.
In the above example, the DimItem and DimLocation Dimension tables are normalized without losing any
information. This is called a Snowflake schema, where dimension tables are further normalized into smaller
tables.
Galaxy Schema
In a Galaxy Schema, there are multiple Fact tables and Dimension tables. Each Fact table stores the primary keys
of a few Dimension tables plus the measures/facts used for analysis.
In the above example, there are two Fact tables FactSales, FactShipping and multiple Dimension tables
joined to Fact tables. Each Fact table contains Primary Key for joined Dim tables and measures/Facts to
perform analysis.
Attribute Views
Attribute Views in SAP HANA Modeling are created on the top of Dimension tables. They are used to join
Dimension tables or other Attribute Views. You can also copy a new Attribute View from already existing
Attribute Views inside other Packages but that doesn’t let you change the View Attributes.
When you click on Attribute View, New Window will open. Enter Attribute View name and description.
From the drop down list, choose View Type and sub type. In sub type, there are three types of Attribute
views − Standard, Time, and Derived.
Time subtype Attribute View is a special type of Attribute view that adds a Time Dimension to Data
Foundation. When you enter the Attribute name, Type and Subtype and click on Finish, it will open three
work panes −
Scenario pane, where we add Dimension tables and Attribute Views to the Data Foundation.
Details pane, where the added tables are joined.
Output pane, where we can add attributes from the Details pane to filter in the report.
You can add Objects to Data Foundation, by clicking on ‘+’ sign written next to Data Foundation. You can
add multiple Dimension tables and Attribute Views in the Scenario Pane and join them using a Primary Key.
When you click on Add Object in Data Foundation, you will get a search bar from where you can add
Dimension tables and Attribute views to Scenario Pane. Once Tables or Attribute Views are added to Data
Foundation, they can be joined using a Primary Key in Details Pane as shown below.
Once joining is done, choose multiple attributes in details pane, right click and Add to Output. All columns
will be added to Output pane. Now Click on Activate option and you will get a confirmation message in job
log.
Now you can right click on the Attribute View and go for Data Preview.
Note − When a View is not activated, it has a diamond mark on it. Once you activate it, the
diamond disappears, confirming that the View has been activated successfully.
Once you click on Data Preview, it will show all the attributes that have been added to the Output pane under
Available Objects.
These Objects can be added to the Labels and Value axes by right clicking and adding, or by dragging the objects as
shown below −
Analytic View
Analytic View is in the form of Star schema, wherein we join one Fact table to multiple Dimension tables.
Analytic views use the real power of SAP HANA to perform complex calculations and aggregate functions by
joining tables in the form of a star schema and by executing Star schema queries.
Analytic Views are used to perform complex calculations and Aggregate functions like Sum, Count,
Min, Max, Etc.
Analytic Views are designed to run Star schema queries.
Each Analytic View has one Fact table surrounded by multiple dimension tables. Fact table contains
primary key for each Dim table and measures.
Analytic Views are similar to Info Objects and Info sets of SAP BW.
When you click Finish, you can see an Analytic View with Data Foundation and Star Join option.
Click on Data Foundation to add Dimension and Fact tables. Click on Star Join to add Attribute Views.
Add the Dim and Fact tables to the Data Foundation using the “+” sign. In the example given below, 3 dim tables
(DIM_CUSTOMER, DIM_PRODUCT, DIM_REGION) and 1 Fact table (FCT_SALES) have been added to the
Details Pane. Join the Dim tables to the Fact table using the Primary Keys stored in the Fact table.
Select Attributes from the Dim and Fact tables to add to the Output pane as shown in the snapshot above. Now
change the data type of the Facts from the fact table to measures.
Click on Semantic layer, choose facts and click on measures sign as shown below to change datatype to
measures and Activate the View.
Once you activate the view and click on Data Preview, all attributes and measures will be added under the list of
Available objects. Add Attributes to the Labels axis and Measures to the Value axis for analysis purposes.
Before we go ahead and see the step-by-step process of creating an Analytic View in HANA,
let's look at what we want to build.
As discussed in the ‘Introduction to modeling in SAP HANA’ article, the diagram below requires
building two Analytic Views, for Sales and Payments, and then a Calculation View to combine both
sales and payments data.
In this article, we will build an Analytic View for ‘Sales’ which covers the tables in blue box. With the
tables in the blue box, we now know that customer wants to analyze the sales orders information by
time period, customers with sales representatives and products.
We have the required attributes views/dimensions already created in the system for Customers, Time
and Products as shown below.
Now let's go through the step-by-step process of creating an Analytic View in SAP HANA using HANA Studio.
Step 1: All the modeling objects are grouped into packages in HANA. The first step in creating
modeling objects in HANA is to make sure that we have a package created (in our case
‘sap-student’). To create a new analytic view, right click on the package (sap-student) → ‘New’ →
Analytic View as shown below.
Step 2: This gives us the initial screen of the creation process, where we need to enter the technical
name and description of the analytic view. We can also choose a few options here, like ‘Copy
From’ and ‘Subtype’. For details about the ‘Copy From’ and ‘Subtype’ options, please go through the
article ‘SAP HANA – How to create Attribute View’.
Our object name here is ‘AN_ORDERS_SALES’, where AN represents ‘ANalytic View’. Click ‘Finish’ to
proceed further.
Step 4: Let’s add the required tables into the Data Foundation. In our case we need to add two
tables, ORDERS and ORDERDETAILS. The ORDERS table contains order header information, like who
placed the order, when it was placed and when it was shipped. The ORDERDETAILS table stores the details
about each order, like how many products were placed in each order, what the quantity was and
what the selling price was. So in our case, even though we have two tables in the Data Foundation, the
actual measures will come from one table, ORDERDETAILS. We add the tables just by moving the
cursor into the ‘data foundation’ area, which enables a + sign as shown below. Click on the + sign to add new
tables into the data foundation.
Step 5: Search for the table we need, select the table and click ‘OK’ to add it to the data
foundation. The other way of adding a table to the modeling object is to just drag and drop the
tables into the data foundation area.
Step 6: If we have more than one table in the data foundation, the next step is to define the join
condition between them. Here we have two tables, ORDERS and ORDERDETAILS, which have 1:N
cardinality and an inner join relation (each order from ORDERS will have one or more records in the
ORDERDETAILS table). Defining the join is done by holding the mouse button on the join column
in one table and dragging and dropping it on the other table. Once the join is made, we
can set the join type and cardinality in the properties section as shown below.
Step 7: Select required columns to output by right click on the column → ‘Add to output’ as
shown below.
Note: We can also apply static filters by choosing ‘Filter’ and then restricting data to specific values. If
user wants to give filter during the run time, then it can be achieved with the help of ‘Input
Parameters’ or ‘Variables’.
Step 8: Now let’s move on to the ‘star join’ area, where we will add the required attribute views
and define the joins with the data foundation structure. The objects can be added in the same
way we did in steps 4 & 5: just drag and drop, or click on the ‘+’ sign and search for the objects.
Step 9: Once all the required objects have been added, the next step is to define the joins with
the data foundation. Here in our scenario we have three attribute views namely AT_CUSTOMER,
AT_PRODUCT and AT_TIME (AT represents ‘ATtribute View’)
Note: Adding attribute views is optional and depends on the requirement. By default, all the columns we
have in the attribute views will be added to the ‘Analytic View’.
We also have some advanced options here, like creating additional columns as ‘Calculated Columns’,
‘Restricted Measures’ and ‘Input Parameters’, which will be discussed in separate articles.
Step 10: Once the object is designed, we can maintain its properties/settings in the ‘semantics’
section. This section contains four tabs, namely:
o Columns: This contains all the columns selected for the output of this analytic view. Here we
can define additional descriptions, choose whether each column is an attribute or a measure, hide
unwanted columns and define aliases.
o Properties: This section contains information like category of the analytic view, Default
Schema, Enabling Cache and Enabling History.
o Hierarchies: This section shows the list of hierarchies available in this analytic view. We
cannot create new hierarchies in an Analytic View; however, the hierarchies defined in the attribute
views used in this analytic view will be shown here. In our scenario, we don’t have any
hierarchies.
o Parameters/Variables: This section shows the list of variables and input parameters
created in this Analytic View. We can also create new variables/input parameters from this
screen.
Step 11: Once the object is built, the next step is to activate it and make sure it is activated
successfully. We can save and activate the object using the green arrow mark shown below.
Job Log:
Step 12: Upon successful activation we can confirm that the object is working by looking at the
data preview and performing initial data validation.
Data Preview:
The Data Preview of any Modeling object in SAP HANA has three tabs, Analysis, Distinct Values and Raw
Data, using which we can quickly do basic data validations. The screenshots below show a few data
representations with the help of the data preview.
Raw Data:
Analysis:
Thus we have successfully created an Analytic View in SAP HANA. In coming articles, we will see some of
the advanced features available in Analytic Views and how to design them.
Thank you for reading; we hope this information is helpful. Please do share it with your friends if you find
it useful.
Step 3: This step shows us the design area for the Analytic View, where we build the complete
object and activate it after the building is completed. This area mainly contains three areas,
namely:
o Data Foundation: This is where we add our fact tables. We can have multiple tables added to
this area depending on the requirement; however, measures can only be selected from one
table.
Note: The definition of the star schema says the central fact table is surrounded by dimension
tables/master data, which means we can only have one central fact table in an analytic view. The central fact
table is the one that has measures selected for output.
o Star Join (Logical Join): All the dimensions required by the scenario can be added
here. We can only add Attribute Views in this area, not tables directly. Would you like to know how
to create an Attribute View in HANA? Please go through the article ‘How to create Attribute View in
HANA’.
Note: Before HANA SP9, the name of this section was ‘Logical Join’; from SP9 onwards SAP
changed the name to ‘Star Join’ to align it with the Star Schema.
o Semantics: We can define and maintain the object settings here and can also make use of
advanced features like variables and input parameters.
Calculation View
A calculation view is a powerful and flexible information view which can be used to define more
advanced logic or complex calculations on the tables or information views available in SAP HANA.
Calculation Views provide the functionality that is available in both Attribute and Analytic Views.
Mostly, calculation views are used when we are not able to design the business logic with the help of
attribute and analytic views; for example, when we need measures from more than one fact table or need to
define filters on calculated columns.
The data foundation/source of the calculation view can include any combination of tables, column
views, attribute views and analytic views.
Graphical
SQL Script
Graphical: The graphical calculation view is built with the help of the nodes available in it. There are 5
different nodes available in a graphical calculation view:
1. Aggregation
2. Projection
3. Join
4. Union All
5. Rank
SQL Script: In a SQL-script-based calculation view, we can write the business logic using
native SQL in the way we want. This can be created with the help of CE functions or SQL script.
1. CE Functions
2. SQL Script
1. We can create both multidimensional and relational type calculation views.
2. Supports complex expressions (i.e. IF, Case, Counter, filters, union, RANK).
3. Supports reusing Analytic views, Attribute views and other Calculation views (Graphical and
Scripted).
4. Supports SAP ERP specific features (i.e. client handling, language, currency conversion).
5. Provides ability to combine facts from multiple tables.
6. Provides support for additional data processing operations, (i.e. Union, explicit aggregation).
7. Provides ability to leverage specialized languages (i.e. R-Lang).
8. Provides ability to leverage both Column and Row tables.
9. The performance of certain operations (e.g. star-join, aggregation) is inferior to Analytic Views.
Scenario:
The below diagram needs building of two Analytic Views for Sales and Payments and then a
calculation view to combine both sales and payments data.
Combining Sales and Finance data to analyze how much amount has been received and how
much yet to receive from exactly where.
We have already created both the analytic views required to create a calculation view. Please go
through ‘SAP HANA – Analytic View Creation’ for details on how to create analytic view in SAP
HANA.
Now the business wants to analyze their sales revenue together with their finance data to gain more insight
into which customer has paid how much, what the remaining debt is, and so on.
To be able to analyze both sales and finance data at the same time, we have to create a
calculation view, because this involves measures from more than one place.
Let’s go through the step by step process of creating a calculation view in SAP HANA.
Step1:
All the modeling objects are grouped into packages in HANA. The first step in creating modeling
objects in HANA is to make sure we have a package created (in our case ‘sap-student’).
To create new calculation view, right click on the package(sap-student) → ‘New’ → Calculation
View as shown below.
Step2:
This gives us the initial screen of the creation process, where we need to enter the technical name and
description/label of the calculation view.
We can also choose a few options here, like ‘Copy From’ and ‘Subtype’. For details about the ‘Copy
From’ and ‘Subtype’ options, please go through the article ‘SAP HANA – How to create Attribute
View’.
Our object name here is ‘CA_SALES_PAYMENTS’ where CA represents ‘CAlculation View’.
The other options we have here along with ‘Copy From’ and ‘Subtype’ are,
Type: We can create two types of calculation views as we discussed above. In this article we
are going to create a graphical calculation view.
Data Category: We have two options here, they are
1. CUBE: We select this option when we have measures coming out of this view (this is the same
as an OLAP model like an analytic view).
2. DIMENSION: We select this option when we don’t need any measures from the view (this is
similar to an attribute view or a plain relational table).
With Star Join: If we would like to build a calculation like how we see in analytic view with star
join, then we need to select this option.
Note: In a star join, we cannot use attribute views or analytic views as a source. We can only use
tables or calculation views as a source.
Step3:
This step will show us the design area for Calculation View where we build the complete object
and activate it once the designing is completed.
Palette: This area contains all the nodes that can be used as source to build our calculation
views.
1. Join: This node is used to join two source objects and project the result to the next node. The
join types can be inner, left outer, right outer and text join.
Note: We can only have two source objects mapped to a join node.
2. Union: This is used to perform a union all operation between multiple sources. The source can be
any number of objects.
3. Projection: This is used to fine tune our source objects before we use in next nodes like union,
aggregation and rank.
We can choose the selective columns, filter the data and create additional columns.
4. Aggregation: This is used to perform aggregation on specific columns based on the selected
attributes.
5. Rank: This is the exact replacement for RANK function in SQL. We can define the partition and
order by clause based on the requirement.
Semantics: We can define and maintain the object settings here and can also make use of
advanced features like variables, hierarchies and input parameters.
As we have selected the data category ‘CUBE’, we can see the default node is ‘Aggregation’. If the
data category were ‘DIMENSION’, the default node would be ‘Projection’. We cannot remove
the default node; however, we can change it from one type to another.
Step4:
In our scenario, we are going to use ‘union’ node to combine both the analytic views that we
have built. Before we use the union node, we need to select the required columns from analytic
views. This can be easily done with the help of projection node.
Let’s add two projections into our design area and add the object by selecting ‘Add Objects’
option as shown below. We can also do this by drag and drop.
Step5:
After selecting ‘Add Objects’, we can search for our objects, select the appropriate one and click
‘ok’ to add that object to the projection.
Once it is added, we need to select the required columns for the output. In our case we need the
customer, time and measures information for the output. All the required objects have been
selected for the output.
Step6:
In the same way, let’s add the AN_PAYMENTS analytic view to the other projection and select
the required columns to the output as shown below.
Note: One of the biggest advantages we have in calculation views is the ‘Filter Expression’. Using a filter
expression we can apply filters on calculated columns (which are not part of the actual tables), and we can
also use operators like and, or, not and so on. Please look at the image below for details.
Step7:
Once we have the required columns selected from both the projections (source is analytic view),
the next step is to go ahead and combine them with the help of a union node.
Let’s add the union node into the design and join both the sources to union by dragging the
arrow onto the union node as shown below.
Step8:
After we add both the sources (projections) to Union node, we need to select the columns from
both the sources that we need for output/target by using one of the two options available.
1. Add to Target: This option will directly add a new column to the output with the same name.
2. Map to Target: This option is required when we have the same column coming from both
sources (mostly attributes). In this case, when we add the columns for the first time we
select ‘Add to Target’, and for the columns coming from the second source we select ‘Map to
Target’, which gives us the screen below where we select the target column we want to map
to.
Step9:
The next step is to perform ‘manage mappings’. When we use a union operation in SQL, the first
thing that comes to mind is that the number of columns from the sources should be the same, otherwise it is
going to throw an error.
Here in HANA, when we add a column from just one source, SAP HANA considers ‘null’ as the
default value from the other source.
We can change this default value with the help of manage mappings. One reason to change it
is that, for attributes, a ‘null’ value works, but for measures that need aggregation, having ‘null’
might give a wrong result. It is good practice to maintain 0 (zero) as the default value for
measures.
To perform this, right click on any target column and select manage mappings. In the next
screen we can see that ‘null’ is selected by default. If we would like to have some value other
than ‘null’, then uncheck the box and enter the value in the ‘Constant Value’ column as shown in
the images below.
Step10:
After completing the design of the union node, we need to map the output of the union to the default
aggregation node in the calculation view and then select the columns we need for the final
output.
If we need the columns as ‘attribute (dimension)’, then we have to select the option ‘Add to
Output’. If we need any aggregation on columns (measures), then we have to select the option
‘Add As Aggregated Column’ as shown below.
Step11:
The final thing we need to do before we go ahead and activate this object is, maintain the
proper settings and additional options in semantics section.
1. Columns: This section contains the columns (attributes and measures) that have been picked for
the final output, and we can maintain the below properties for these columns.
2. View Properties: This tab contains properties of the calculation view like data category, type,
default client, default schema and so on. The below image shows all the properties available.
3. Hierarchies: In this section, we can create the hierarchies based on the business requirement. We
can create two types of hierarchies in SAP HANA, they are
Level Hierarchy
Parent-Child Hierarchy
Please go through the article ‘Hierarchy creation in SAP HANA’ for more details.
4. Parameters/Variables: This section shows the list of variables and input parameters created
in this Calculation View. We can also create a new variable/input parameter from this screen.
Step12:
Once the object is designed and settings are maintained in semantics, the next step is to
activate it and make sure it is activated successfully. We can save and activate the object using
green arrow mark shown below.
Job Log:
Step13:
Upon successful activation we can confirm that the object is working by looking at the data
preview and performing initial data validation.
The Data Preview of any Modeling object in SAP HANA has three tabs, Analysis, Distinct Values
and Raw Data, using which we can quickly do basic data validations.
The screenshots below show a few data representations with the help of the data preview.
Raw Data:
Analysis:
Decision Tables:
1. Decision tables are used to define business rules at the Database level, even without any SQL
or SQLScript knowledge, for example by uploading the rules from an MS Excel sheet.
2. The source for a decision table can be Columnar Tables, any Modeling Object, Table Types or
procedures.
3. The output of a decision table can be used to update the actual values if the source is a Columnar Table,
or to create a new column for the result if the source is a modeling object.
4. The user who creates a Decision table in HANA should grant EXECUTE and UPDATE privileges to the
_SYS_REPO user on his/her schema.
For Example: Let us assume that there is a rule in a bank's credit card department which says: IF the
customer with Customer.ID = CUST_120000 has Customer.CurrentBalance greater than 2,000,000, THEN set
Customer.Type = Gold, ELSE set Customer.Type = Silver. This flat rule can be represented in tabular
form as:
Figure – 1.1: A decision table to set the Customer type in Credit Card department
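Written by hand, the same flat rule would correspond to a simple SQL CASE expression. The sketch below only illustrates the rule's logic; the Customer table and column names are assumptions taken from the rule text, not objects from this document's schema.

```sql
-- Illustration only: the decision table rule expressed as plain SQL.
-- "Customer", "ID", "CurrentBalance" and "Type" are assumed names from the rule text.
UPDATE "Customer"
   SET "Type" = CASE
                  WHEN "CurrentBalance" > 2000000 THEN 'Gold'
                  ELSE 'Silver'
                END
 WHERE "ID" = 'CUST_120000';
```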
While business rules in tabular format are not new and have been a known concept in the market for quite a long time, what we introduce here is decision tables integrated into SAP HANA, a modern in-memory platform. Hence, when you think about writing business rules that aid better decision making with big data and performing analytics over the results, business rules powered by SAP HANA are the answer.
Scenario:
The business wants to see how the revenue will be affected if they give discounts on specific product groups depending on the quantity bought by customers.
We have the table ORDERDETAILS in the SAP_STUDENT schema which stores the information about the product group and quantity bought in each order.
The business wants to apply the below rules to their orders data and would like to see the outcome.
We will see how to implement this scenario using decision tables in SAP HANA.
Using the above conditions, we can either update the PRICEEACH column in the ORDERDETAILS table or create a new column with the PRICEEACH value after applying the business rules.
The former is called a decision table with update values, and the latter is called a decision table with return values.
Step 1:
All the modeling objects in HANA are grouped into packages. The first step in creating modeling objects in HANA is to make sure we have a package created (in our case 'sap-student').
To create a new decision table, right click on the package (sap-student) → 'New' → 'Decision Table' as shown below.
Step 2:
In this step, we need to provide a 'Name' and 'Description' for the decision table and choose whether to create a new one or copy from an already existing object.
After selecting proper options and entering the values, click ‘Finish’ to enter into Decision Tables
design area.
Step 3:
A Decision Table contains three panes, namely Scenario, Details and Output, as shown in the image below.
The Scenario pane consists of the Data Foundation and Decision Table nodes. Selecting each node shows us the respective screen elements in the Details and Output panes.
The Details pane of the Data Foundation node displays the tables or information models used
for defining the decision table. The Details pane of the Decision Table node displays the
modeled rules in tabular format.
The Output pane displays the vocabulary, conditions, and actions, and allows us to perform
edit operations. Expanding the vocabulary node, we can see the parameters, attributes, and
calculated attributes sub-nodes. In the Output pane, we can also see the properties of the
selected objects within the editor.
Step 4:
The first thing we need to do while designing decision tables is adding source objects into the
data foundation section.
Here we are going to build the decision table on the 'ORDERDETAILS' table. So let's add that table to the area, either by drag and drop or by using the '+' option that appears when we hover the mouse over the data foundation, as shown below.
Once we add the required tables/information views into the data foundation, the next step is
selecting the required columns to the output.
In this scenario, we will be building business rules based on the columns PRODUCT_GRP, QUANTITYORDERED and PRICEEACH. So let's add those columns to the output as shown below.
Step 5:
The additional folders we have apart from ‘Attributes’ in the output are
Calculated Attributes: These are the same as what we have in other modeling objects. They are used to create new columns based on the existing columns after applying specific business logic, and can be used as either a Condition or an Action.
Parameters: These are used to store a result value after applying conditions, or to provide values for conditions during runtime. Parameters can be used for both conditions and actions.
Conditions: Conditions are the columns on which we will write our business rules. In our
scenario, we will build the business rules based on columns PRODUCT_GRP and
QUANTITYORDERED. Let’s add those two columns to conditions as shown below.
Actions: Actions are the columns that will be affected by the business rules (conditions). In our scenario, the column that will be affected is PRICEEACH. So let's add that column to the actions as shown below.
Things to remember:
1. If we assign the actual column (from the physical table) as the action, then when we execute the decision table the values for that column will be modified permanently. This is called a decision table with update values.
2. If we don't want to modify the actual values and want to have the result values in a new column, then we need to add a 'parameter' as the action instead of the actual column. This is called a decision table with return values.
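The effect of the two variants can be sketched in plain SQL. This is only an illustration of the outcome, not the code HANA actually generates; table and column names follow this document's scenario, and NEW_PRICEEACH follows the parameter name used later in this document.

```sql
-- Decision table with update values: the physical column is changed permanently.
UPDATE "SAP_STUDENT"."ORDERDETAILS"
   SET "PRICEEACH" = "PRICEEACH" * 0.8
 WHERE "PRODUCT_GRP" = 'S18' AND "QUANTITYORDERED" > 50;

-- Decision table with return values: the original data stays intact and the
-- result is returned in a new column instead.
SELECT "PRODUCT_GRP", "QUANTITYORDERED", "PRICEEACH",
       CASE WHEN "PRODUCT_GRP" = 'S18' AND "QUANTITYORDERED" > 50
            THEN "PRICEEACH" * 0.8
            ELSE "PRICEEACH"
       END AS "NEW_PRICEEACH"
  FROM "SAP_STUDENT"."ORDERDETAILS";
```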
Step 6:
Once the required conditions and actions are assigned in ‘Data Foundation’
section, the next step is defining business rules in ‘Decision table’ section.
If we don't have at least one column for both conditions and actions in the 'Data Foundation' and try to build the business rules in the 'Decision Table' section, we get the below error.
Once we add the required columns, we should see the screen like below.
Step 7:
We can type the business rules based on the scenario explained at the start, and the final rule model should look like below.
Here we also have an option to import the business rules directly from Excel, if they are maintained in the supported format.
The supported operators and data types to write conditions are:
Step 8:
The next step after building the business rules is validating and activating the decision table, making sure we don't have any errors.
Step 9:
Note: Below are the different decision table scenarios and the syntax to execute the generated procedure in SAP HANA.
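The exact syntax from the original screenshot is not reproduced here. As a general sketch, activating a decision table generates a stored procedure under the _SYS_BIC schema, named after the package and the decision table, which can be called like any other procedure. The decision table name DT_ORDERS below is an assumption for illustration:

```sql
-- Sketch: calling the procedure generated for an activated decision table
-- in package "sap-student" (the name "DT_ORDERS" is assumed).
CALL "_SYS_BIC"."sap-student/DT_ORDERS"();
```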
Step 10:
Once the procedure has been executed successfully we can see the updated
values for PRICEEACH column in ORDERDETAILS table.
Example:
One of the business rules we have is: if the product group is 'S18' and the quantity is greater than 50, then the discount should be 20%.
Value Earlier:
Value After:
Based on the business rule, 20% discount on 63.14 will give us 50.51 as shown above.
Thus we have seen the step-by-step process of creating a decision table with update values in the SAP HANA system.
The process of creating decision tables with update values and decision tables with return values is the same, except that we use columns from the physical tables as actions in the first case, while parameters are used as actions in the latter.
For the step-by-step process of creating a decision table, please go through the 'decision tables with update values' article (Steps 1-5).
In Step 5 of the above article, instead of adding 'PRICEEACH' (the actual column) from the table ORDERDETAILS as the action, let's create a new parameter 'NEW_PRICEEACH' and add it as the action as shown in the image below.
Step 6:
Step 7:
The next step after building the business rules is validating and activating the decision table, making sure we don't have any errors.
Step 8:
After executing the above procedure, we can see the output data in the same
screen as shown below.
We can quickly observe from the above result that the exact discount (20%) has been applied based on the product group 'S18' and an order quantity greater than 50.
One of the best ways to use this kind of data for further analysis is to store it in a physical table and build reporting models on top of it.
With this, we have seen the step-by-step process of creating a decision table with return values in SAP HANA.
We have different types of engines available in the SAP HANA database. Based on the type of object we execute in the system, or the object called by the reporting layer, the respective engine takes care of the processing. The images below show us the different engines we have in SAP HANA.
Join Engine:
1. This engine is used when we execute any Attribute View in HANA or run native SQL on more than one table with a join condition.
2. If there are any calculations involved, either in the Attribute View or in the native SQL, then the Join Engine will use the Calculation Engine for the calculations and fetch the result.
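For example, a native SQL statement such as the following, joining two tables, would be handled by the Join Engine. The ORDERDETAILS table comes from this document's scenario, while the CUSTOMER table and most column names are assumptions for illustration:

```sql
-- Sketch: a two-table join in native SQL; such statements are processed
-- by the Join Engine (table CUSTOMER and the join column are assumed names).
SELECT o."ORDERNUMBER", c."CUST_NAME", o."PRICEEACH"
  FROM "SAP_STUDENT"."ORDERDETAILS" AS o
 INNER JOIN "SAP_STUDENT"."CUSTOMER" AS c
    ON o."CUST_ID" = c."CUST_ID";
```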
OLAP Engine:
1. This engine will be called in the backend whenever we run any queries on Analytic Views in SAP
HANA.
2. If there are no additional calculations such as calculated columns, restricted measures and counters, then everything will be processed in the OLAP Engine.
3. The OLAP Engine acts as the join engine for the Attribute Views used in Analytic Views without any calculated columns.
4. All the join engine work will be converted into 'BwPopJoin', which is part of the OLAP Engine.
5. If there are any calculations to be performed, then the Calculation Engine will be used along with the OLAP Engine.
SQL Engine:
1. The SQL Engine, also known as the SQL parser/interface, is used for all sorts of SQL statements generated by frontend applications via different clients, and also for native SQL run at the database level.
2. From SAP HANA SPS 07, Calculation Views have an option in the 'Properties' section to run in the 'SQL Engine'. The advantage of this option is that, instead of moving data between multiple engines, HANA executes the entire script in the SQL Engine to get the final result.
Note: We have to keep a few things in mind before we use this option.
a. This option is only available for GUI-based Calculation Views (as of SAP HANA SPS 09 Rev 94).
b. Using this option can actually make a difference in the results of calculated columns, depending on the functions used. Please go through SAP Note 1857202 (SQL Execution of Calculation Views) for more information.
ROW Engine:
1. This engine is used to perform operations specifically on row tables, which are mostly system statistics tables.
2. When we execute a native SQL statement on both row-store and column-store tables, the visualized plan shows usage of the Row Engine.
Calculation Engine:
This engine is used when we execute Calculation Views, and it is also called by the other engines whenever calculated columns or similar calculations need to be processed.
SQL privileges implement authorization at object level only. Users either have access to an object, such as a
table, view or procedure, or they do not.
While this is often sufficient, there are cases when access to data in an object depends on certain values or combinations of values. Analytic privileges are used in the SAP HANA database to provide such fine-grained control of which data individual users can see within the same view.
Example:
Suppose there is a calculation view which contains the sales data of all the regions like Asia, Europe and
America.
The regional managers must have access to the calculation view to see the data. However, managers should only see the data for their own region. The manager of the America region should not be able to see data of other regions.
In this case, an analytic privilege could be modeled so that they can all query the view, but only the data that
each user is authorized to see is returned.
o Analytic privileges are intended to control read-only access to SAP HANA information
models, that is
Attribute views
Analytic views
Calculation views
o Analytic privileges do not apply to database tables or views modeled on row-store tables.
To create analytic privileges, the system privilege CREATE STRUCTURED PRIVILEGE is required.
To drop analytic privileges, the system privilege STRUCTUREDPRIVILEGE ADMIN is required.
In the SAP HANA modeler, repository objects are technically created by the technical user _SYS_REPO,
which by default has the system privileges for both creating and dropping analytic privileges.
The database user requires the package privileges REPO.EDIT_NATIVE_OBJECTS and
REPO.ACTIVATE_NATIVE_OBJECTS to activate and redeploy analytic privileges in the Modeler.
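As a sketch, the privileges listed above could be granted with statements along the following lines. The user name MODELER_USER is an assumption, and the package name follows this document's example; verify the exact syntax against your HANA revision.

```sql
-- Sketch: system privileges for creating and dropping analytic privileges.
GRANT CREATE STRUCTURED PRIVILEGE TO MODELER_USER;
GRANT STRUCTUREDPRIVILEGE ADMIN TO MODELER_USER;

-- Sketch: package privileges needed to activate and redeploy analytic privileges.
GRANT REPO.EDIT_NATIVE_OBJECTS ON "sap-student" TO MODELER_USER;
GRANT REPO.ACTIVATE_NATIVE_OBJECTS ON "sap-student" TO MODELER_USER;
```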
Prerequisite:
We need to create the modeling view first which will be used in the Analytic Privilege.
Create a calculation view by following the article Create a calculation view in 10 minutes.
The output of the calculation view is
Let us see how we can restrict the output to only the "Asia" region.
2. Select the calculation view and click on Add button. Then click on “Finish”.
3. Click on Add button as shown below and select the column REGION_NAME.
4. Now we need to assign the restriction. Click on the add button as shown below and select the value
“Asia”.
The analytic privilege is ready. Now we can assign this analytic privilege to any user.
1. Go to Security -> Users. Right click and create a new user. Specify user name and password.
2. Click on the “Analytic Privileges” tab and add the analytic privilege created in previous step.
3. You also need to assign the following privileges required to get access to the modeling views:
Execute & Select access on _SYS_BI
Execute & Select access on _SYS_BIC
Execute on REPOSITORY_REST
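The grants above could be issued with SQL along the following lines (REPORT_USER is an assumed user name; check the syntax against your HANA revision):

```sql
-- Sketch: typical grants for a user who consumes activated modeling views.
GRANT SELECT, EXECUTE ON SCHEMA "_SYS_BI"  TO REPORT_USER;
GRANT SELECT, EXECUTE ON SCHEMA "_SYS_BIC" TO REPORT_USER;
GRANT EXECUTE ON "SYS"."REPOSITORY_REST"   TO REPORT_USER;
```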
What is Hierarchy?
Hierarchies help businesses analyze their data in a tree structure through different levels/layers with drilldown capability. Each hierarchy comprises a set of levels having many-to-one relationships between each other, and collectively these levels make up the hierarchical structure.
For example, a time hierarchy comprises levels such as Fiscal Year, Fiscal Quarter, Fiscal Month, and so on.
Level Hierarchy
Parent-Child Hierarchy
Point to Remember: Hierarchies created in an Attribute View are not available in a Calculation View that reuses the attribute view; however, they will be available in an Analytic View that reuses that attribute view.
In level hierarchies, each level represents a position in the hierarchy. Level hierarchies consist of one or more levels of aggregation. Attributes roll up to the next higher level in a many-to-one relationship, and members at this higher level roll up into the next higher level, and so on, until they reach the highest level.
A hierarchy typically comprises several levels, and we can include a single level in more than one hierarchy. A level hierarchy is rigid in nature, and we can access the root and child nodes in a defined order only.
We can create hierarchies in SAP HANA in the semantics area of either an Attribute View or a Calculation View. Let's look at the step-by-step process of creating a level hierarchy in SAP HANA.
Use Case:
The customer wants to analyze the sales revenue by customer country, state and city. We will create a level hierarchy in our attribute view and access it using MS Excel to analyze the sales revenue.
Pre-requisite:
We need to either have Attribute View or Calculation View created in SAP HANA to build a new
hierarchy.
If you are interested in learning how to create Attribute and Calculation Views in SAP HANA, please go through the articles 'Attribute View creation in HANA' and 'Calculation View creation in HANA'.
Step 1:
Once we have the required object created, in our case AT_CUSTOMER which is an attribute view, go to the semantics section and click 'Hierarchies' as shown below.
Step 2:
To create a new hierarchy, click the '+' symbol available in the 'Hierarchies' section, which navigates us to the creation screen.
Step 3:
In the creation screen, we will provide the technical name, label (description) and hierarchy type as shown below. As explained earlier, we can create either a level or a parent-child hierarchy in HANA.
Step 4:
In the ‘Node’ section of the screen, we are going to define our levels for hierarchy. Along with levels we
can select the ‘Node Style’ as well.
We have three types of ‘Node Style’, they are
1. Level Name
2. Name Only
3. Name Path
Please check the table below for the differences between the node styles.
We will select 'Name Only' as the style, as we need just the level name in the report. The next step is adding our attributes to the levels as shown in the picture below.
The columns/Attributes for our hierarchy are
CUST_COUNTRY
CUST_STATE
CUST_CITY
The other properties we can configure in this section are:
1. Level Type:
The level type specifies the semantics for the level attributes.
For example, the level type TIMEMONTHS indicates that the attributes are months such as "January" and "February", and similarly the level type REGULAR indicates that the level does not require any special formatting.
Below are the different types of level types we have in HANA.
2. Order By:
In the Order By dropdown list, select the column value that the modeler must use to order the hierarchy members.
Note: MDX client tools such as ‘MS Excel’ use attribute values to sort hierarchy members.
3. Sort Direction:
In the Sort Direction dropdown list, select the value that the modeler must use to sort and display the hierarchy members.
Step 5:
The other section where we can define additional properties for the hierarchy is 'Advanced'. We can configure the below options in the 'Advanced' section.
1. Aggregate All Nodes:
If you want to include the values of intermediate nodes of the hierarchy in the total value of the hierarchy's root node, select True in the Aggregate All Nodes dropdown list.
If you set the Aggregate All Nodes value to False, the modeler does not roll up the values of intermediate nodes to the root node.
Note: The value of the Aggregate All Nodes property is interpreted only by the SAP HANA MDX engine. In the BW OLAP engine, the modeler always counts the node values. Whether you want to select this property depends on the business requirement. If you are sure that there is no data posted on aggregate nodes, you should set the option to False; the engine then executes the hierarchy faster.
2. Default Member:
In the Default Member textbox, enter a value for the default member.
This value helps the modeler identify the default member of the hierarchy. If you do not provide any value, all members of the hierarchy are default members.
3. Orphan Nodes:
Note: If you select the Stepparent option to handle orphan nodes, enter a value (node ID) for the stepparent node in the Stepparent text field. The stepparent node must already exist in the hierarchy at the root level, and you must enter the node ID according to the node style that you selected for the hierarchy. For example, if you select the node style Level Name, the stepparent node ID can be [Level2].[B2]. The modeler assigns all orphan nodes under this node.
4. Stepparent: This is enabled only when we select 'Stepparent' as the orphan node handling option above.
6. Multiple Parent:
If you want the level hierarchy to support multiple parents for its elements, select the Multiple
Parent checkbox.
The below picture gives the pictorial representation of the options available in ‘Advanced’ section.
Step 6:
Upon selecting all the required options, click 'OK' to complete the hierarchy creation and see the newly created hierarchy in the attribute view as shown below.
Let’s go ahead and activate the Attribute View as well.
Step 7:
We should be able to see the same hierarchy in any analytic view where this attribute view has been reused.
One of the analytic views we have created by reusing AT_CUSTOMER is AN_ORDERS_SALES, and we can see this hierarchy there immediately.
Step 8:
Let's access this analytic view using MS Excel, which uses an MDX connection, and see how we can analyze the sales data with the help of the level hierarchy.
After connecting the AN_ORDERS_SALES view to MS Excel, we can see the level hierarchy visible under 'AT_CUSTOMER' with its levels.
With the help of this hierarchy we can analyze the total sales price at different levels as shown below, which gives clear insights into the dataset.
In the below report, upon moving the cursor to a specific city, we can see the complete details for that city from the database as well.
With this we have successfully completed the level hierarchy creation in SAP HANA and accessing it
using MS Excel to analyze the transactional data.
Microsoft Excel is considered the most common BI reporting and analysis tool by many organizations. Business
Managers and Analysts can connect it to HANA database to draw Pivot tables and charts for analysis.
Choose SAP HANA MDX Provider from this list to connect to any MDX data source → Enter HANA
system details (server name, instance, user name and password) → click on Test Connection → Connection
succeeded → OK.
It will give you the list of all packages in drop down list that are available in HANA system. You can choose
an Information view → click Next → Select Pivot table/others → OK.
All attributes from Information view will be added to MS Excel. You can choose different attributes and
measures to report as shown and you can choose different charts like pie charts and bar charts from design
option at the top.
MDX Provider is used to connect MS Excel to SAP HANA database system. It provides driver to connect
HANA system to Excel and is further, used for data modelling. You can use Microsoft Office Excel
2010/2013 for connectivity with HANA for both 32 bit and 64 bit Windows.
SAP HANA supports both query languages, SQL and MDX. JDBC and ODBC are used for SQL, and ODBO is used for MDX processing. Excel Pivot tables use MDX as the query language to read data from the SAP HANA system. MDX is defined as part of the ODBO (OLE DB for OLAP) specification from Microsoft and is used for data selection, calculations and layout. MDX supports the multidimensional data model and supports reporting and analysis requirements.
SAP HANA alert monitoring is used to monitor the status of system resources and services that are running
in the HANA system. Alert monitoring is used to handle critical alerts like CPU usage, disk full, FS
reaching threshold, etc. The monitoring component of HANA system continuously collects information
about health, usage and performance of all the components of HANA database. It raises an alert when any of
the component breaches the set threshold value.
The priority of an alert raised in the HANA system tells the criticality of the problem, and it depends on the check that is performed on the component. For example, if CPU usage is 80%, a low priority alert will be raised. However, if it reaches 96%, the system will raise a high priority alert.
HANA supports both types of data store in the database. Row store is used when you need to use Select statements and no aggregations are performed.
Column store is used to perform aggregations, and HANA modeling is supported only on column-based tables.
Dell
IBM
HP
Cisco
Lenovo
SAP HANA Studio client is available for Windows XP, Windows Vista, and Windows 7 for 32 bit and 64
bit operating system.
SAP HANA uses multicore CPU architecture and stores data in row and column based storage in HANA
database.
What is the functional difference between Row and Column based storage? Where do
we use Row based storage and column based storage?
Consider the below table, FCTSales:

Country  | Product        | Sales
England  | Iphone6        | 107
India    | Samsung Note 6 | 250
US       | Lenovo A110    | 110

In row-based storage, the table is stored as a sequence of records, one row after another:
England, Iphone6, 107; India, Samsung Note 6, 250; US, Lenovo A110, 110

In column-based storage, the entries of each column are stored together in contiguous memory:
England, India, US; Iphone6, Samsung Note 6, Lenovo A110; 107, 250, 110
What are the different components in SAP HANA Architecture? What is the use of Index
server?
Index Server
Name Server
Statistical Server
Preprocessor Server
XS Engine
LM structure
The Index server contains the engines to process data in the HANA database. These data engines are responsible for handling all SQL/MDX statements in the HANA system. The Index server also contains the Session and Transaction Manager, which is responsible for managing all running and completed transactions.
The persistence layer also manages data, transaction and configuration logs and the backup of these files. Data is written to disk at savepoints, which are normally scheduled every 5 to 10 minutes.
What are the different license keys type in HANA system? What is their validity?
Temporary license keys are automatically installed when you install the HANA database. These keys are valid only for 90 days, and you should request permanent license keys from the SAP Service Marketplace before the expiry of this 90-day period after installation.
Permanent license keys are valid until the predefined expiration date. License keys specify the amount of memory licensed for the target HANA installation.
What are the different types of permanent license keys in HANA system?
There are two types of permanent License keys for HANA system −
Unenforced − If an unenforced license key is installed and the memory consumption of the HANA system exceeds the licensed amount, the operation of SAP HANA is not affected.
Enforced − If an enforced license key is installed and the memory consumption of the HANA system exceeds the licensed amount, the HANA system gets locked. If this situation occurs, the HANA system has to be restarted or a new license key should be requested and installed.
While activating SAP HANA Modeling view, you get an error message −
Repository: Encountered an error in repository runtime extension; Deploy Attribute View: SQL:
transaction rolled back by an internal error: insufficient privilege: Not authorized
Grant SELECT privileges on schemas of the used data foundation tables to user "_SYS_REPO"
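The fix above is usually applied with a statement like the following, where MY_SCHEMA is a placeholder for the schema that holds the data foundation tables:

```sql
-- Sketch: give _SYS_REPO read access to the data foundation schema so that
-- activation of the modeling view succeeds ("MY_SCHEMA" is a placeholder).
GRANT SELECT ON SCHEMA "MY_SCHEMA" TO _SYS_REPO WITH GRANT OPTION;
```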
In SAP HANA Studio, what is use of different folders when you add a HANA system to
Studio?
Backup −
This is used to perform backup and recovery in the SAP HANA system. You can check backup configuration details, run a manual backup, check the last successful backup performed, etc. for data and log backups.
Catalog −
This contains RDBMS objects like schemas, tables, views, procedures, etc. You can open the SQL editor and design database objects.
Content −
This is used to maintain the design-time repository. You can create new packages and design information views in the HANA system. Various views can be created under the Content tab to meet business requirements and to perform analytical reporting on top of the modeling views.
Provisioning −
This is used for Smart Data Access to connect to other databases like HADOOP, TERADATA and SYBASE.
Security −
This is used to define users and to assign roles. You can define various privileges on different users
using Security tab. You can assign Database and Package privileges to different users to control the
data access.
What is the difference between Open Data Preview and Open Definition?
Open Data Preview −
This is used to see the data stored in an object- table or a modeling view. When you open data
preview, you get three options −
Raw Data
Distinct Values
Analysis
Open Definition −
This is used to see the structure of the table − column name, column data type, keys, etc.
If you want to see all active alerts with their description and priority, where can this be checked?
Go to Administration → Alerts.
In Administration tab, you can check system overview, landscape, volumes, configuration, system
information, etc.
To open SAP HANA Cockpit → Right click on HANA system in Studio → configuration and monitoring →
open SAP HANA cockpit
Which Information view in SAP HANA is used to implement star schema queries?
Analytic View
What is difference between Copy and derived from option while creating a new SAP
HANA Attribute information view?
The Copy option allows you to copy an existing information view and make changes to it.
The Derived option allows you to create a copy of an existing view, but you cannot make any changes to it.
What are the different user parameters that can be defined in Semantic layer?
Hierarchies
Parameters/Variables
What are the benefits of using Calculation view with Star join?
It simplify the design process as allows you to select multiple measures from multiple fact tables.
Join
Union
Project
Aggregation
Rank.
What is the default node, if you select a Dimension Calculation view?
Projection
Can you pass Attribute views and analytic views in a Calculation view with Star join?
No. In a Calculation view with Star Join, you can only use Dim Calculation views.
If you want your Area Managers to only see data from their region, and you want to assign this restriction for a specific duration, how can this be achieved?
Using Analytic Privileges, you can add Region attribute and values to Attribute Restrictions and Time
duration is defined in Privilege validity.
Analytic Privilege can be added to user profile in User and Roles under Security tab.
Load
Replicate
Suspend
Resume
What do you understand by trusted RFC connection? Which transaction is used to create
and configure trusted RFC?
On your source SAP system A1 you want to set up a trusted RFC to the target system B1. When this is done, it means that when you are logged on to A1 and your user has sufficient authorization in B1, you can use the RFC connection and log on to B1 without having to re-enter the user and password.
While creating data store in Data Services for SAP ECC, what is the datastore type in
drop down list?
You have to select SAP Applications in data store type and SAP HANA in Database drop down list.
How can you perform trace, monitoring, error and performance monitoring of jobs in Data Services?
Replication jobs can be monitored in the Data Services Management Console. You have to go to the Status tab and select the repository where the job is created → Batch Job Status.
There you can find the different tabs: Trace, Monitor, Error and Performance Monitor.
Go to status tab and select the repository where job is created → Batch Job configuration → Add
Schedule.
Here you can find job execution parameters while adding a schedule for the job.
While configuring a batch job in a SAP Data Services repository, what does the owner name mean?
The owner represents the schema name where the tables will be moved using the batch job.
How can you limit the size of data backup files in the HANA system?
This can be done in the file-based data backup settings. In the Backup tab, go to Configuration → check the 'Limit Maximum File Size' checkbox and enter the file size.
SAP HANA XS stands for SAP HANA Extended Application Services. Sometimes it is also referred to as the XS Engine or just XS. The main idea of SAP HANA XS is to embed a full featured application server, web server, and development environment within the SAP HANA appliance itself.
Please note that XS is not a completely separate technology that happens to be installed on the same hardware server as
SAP HANA; XS is actually an extension of, and tightly integrated into, the SAP HANA Database.
XS Engine is a JavaScript application server based on the Mozilla SpiderMonkey engine. This is the same engine used in the
Firefox Web browser.
Suppose you want to create a web page, a UI or a simple REST service on top of a HANA table/view. There are two approaches you can follow.
The major advantages of SAP HANA XS are simplicity, low cost of operation and performance.
SAP HANA XS minimizes the architecture layers. We can create applications which run directly on HANA without additional external servers or system landscape. This simplified architecture decreases the total cost of operation.
Furthermore, the performance is also better because of the closeness of the application and control flow logic to the database. In the case of a separate application server, data has to be moved back and forth between the application server and the HANA database. But in the case of HANA XS, there is only inter-process communication, which again enhances the performance.
SAP HANA XS Advanced (XSA) is a completely re-engineered application server for the native development of applications in the SAP HANA environment, available with SPS 11. It supersedes SAP HANA XS and provides significant improvements and advantages compared with its predecessor.
SAP HANA XSA brings dramatic improvements in terms of architecture with microservices.
From the very beginning, SAP HANA was always intended to be more than just a database. SAP has long
referred to SAP HANA as the SAP HANA Platform.
The main idea of SAP HANA XS was to embed a full featured application server, web server, and
development environment within the SAP HANA appliance itself. This enabled SAP, customers, and
partners to develop applications which ran completely within the single SAP HANA “box”.
Requirements change over time, and so has XS within SAP HANA. SAP HANA extended application
services in SPS 11 represents an evolution of the application server architecture, building upon its previous
strengths while expanding the technical scope.
The most prominent feature of XSA is that it supports more than the JavaScript runtime.
Initially this will be:
o Node.js
o Java
o C++
So when implementing services, you can choose the runtime (and, of course, the language). Also, applications
written once can be deployed in HCP or on premise without any changes.
One application can use one or more of these runtimes by combining them as microservices running side by
side in one application.
One of the driving requirements for the new SAP HANA XS Advanced was the desire to better unify the
architecture of solutions built in the cloud and on premise.
SAP HANA XS is offered both in on-premise HANA systems and in SAP HCP. However, XS in HCP is
rather separate from the rest of the HCP technology architecture. The primary goal of XS Advanced
was to unify these two delivery channels on a single base architecture.
Therefore, XS Advanced is essentially based on Cloud Foundry. In the near future, it is planned that HCP
itself will run on Cloud Foundry.
In SAP HANA XS, if one service running on the HANA server fails, it impacts the dependent processes. In XS
Advanced, on the other hand, these language runtimes run separately for every app, using a copy of the
runtime version or SDKs as required. They also run independently of one another, which reduces the
impact of a failed service and improves scalability.
Note: SAP HANA XS and SAP HANA XSA co-exist in HANA but are completely separate and work
independently of one another. SAP HANA XSA supersedes SAP HANA XS for HANA native development.
SAP has not removed or disabled any of the current architecture. The current XS Engine remains part
of the HANA infrastructure, although it has been renamed XS Classic to distinguish it from the new
capabilities delivered as part of XS Advanced.
Likewise, the HANA Repository remains in place even as SAP moves to Git/GitHub as the future design-time/source-code repository.
Eventually these older capabilities will be removed from HANA, but that point has not been decided yet.
SAP will not remove them until it sees a critical mass of customers moving their development objects to the
new capabilities described here.
Before HANA SPS 7, HANA Studio was the only tool available for SAP HANA development.
Although HANA Studio is a very good and easy-to-use tool, it still has some problems:
o HANA Studio is a desktop-based tool. It has to be downloaded and installed. This means that if you are
working from a different computer or laptop, you have to install HANA Studio on each of them.
o It does not offer "work from anywhere with any device".
Imagine that you have completed all your development in HANA and, after so much hard work, you are
now relaxing at the beach. Suddenly you get a call that a minor change is required in the files.
Would you like to open your laptop for this? Or would you prefer to do it from your mobile?
Or suppose you are at the airport and want to check the status of the HANA server and do some basic admin work.
Wouldn't you like to be able to do it from your tablet?
Nowadays we do not want to be tied to our laptop for work. We want to work from anywhere with any
device (mobile, tablet, laptop). This is the main motivation behind the SAP HANA Web-based tools.
o Editor: Inspect, create, change, delete and activate SAP HANA repository objects.
o Catalog: Create, edit, execute and manage SQL catalog artifacts in the SAP HANA database.
o Security: Manage users and roles.
o Trace: View and download SAP HANA trace files and set trace levels (for example, info, error,
debug).
The SAP HANA Web-based Development Workbench is available on the SAP HANA XS Web server at the
following URL:
http://<WebServerHost>:80<SAPHANAinstance>/sap/hana/xs/ide
Suppose your HANA host name is "abc" and the instance number is 02; then the URL will be
http://abc:8002/sap/hana/xs/ide
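The port is derived from the instance number: 80 followed by the two-digit, zero-padded instance. As a small illustrative sketch (the function name is ours, not part of HANA):

```javascript
// Build the Web-based Development Workbench URL from host and instance number,
// following the pattern http://<WebServerHost>:80<SAPHANAinstance>/sap/hana/xs/ide
function workbenchUrl(host, instance) {
  const inst = String(instance).padStart(2, "0"); // instance 2 -> "02"
  return `http://${host}:80${inst}/sap/hana/xs/ide`;
}

console.log(workbenchUrl("abc", 2)); // http://abc:8002/sap/hana/xs/ide
```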
Note: The SAP HANA Web-based Development Workbench supports Microsoft Internet Explorer (10+),
Mozilla Firefox, and Google Chrome Web browsers.
No Installation:
The SAP HANA Web-based Development Workbench allows you to develop entire applications in a Web
browser without having to install any development tools and is therefore a quick and easy alternative to
using the SAP HANA studio.
2. Select the "Repositories" tab on the left side. Right-click and select "Create Repository Workspace". In
the popup window, provide a workspace name. You may keep everything else as default.
3. Click on the "Project Explorer" tab. Right-click and select New → XS Project. Provide a name for your project and
click Next.
4. Select the workspace. Click on the browse button and select the package where you want to create
the XS project.
Note: If you need to create a new package, go to the Systems tab and create it under Content.
5. Leave everything else blank and click Finish. Later we will explain components like
hdbschema, xsjs, etc., but in this article we will focus only on creating a simple Hello World application.
An XS project will be created as shown below. Note that it automatically creates two mandatory files, .xsapp
and .xsaccess. To read more about these files, see SAP HANA XS Application Descriptor and Application
Access Files.
<HTML>
<HEAD>
<TITLE>
HANA XS Hello World Application
</TITLE>
</HEAD>
<BODY>
<H1>Hi</H1>
<P>This is my first "hello world" example.</P>
</BODY>
</HTML>
9. Done. You have created your first HANA XS application. Now let's run it. Right-click on the file
once again and select "Run As" → HTML. It will open in your default browser. Provide your HANA user
id and password. It should open like the image below.
OData (Open Data Protocol) is an open protocol for sharing data. In other words, OData defines an abstract
data model and a protocol that lets any client access information exposed by any data source.
OData is a REST (Representational State Transfer) protocol; therefore even a simple web browser can view
the data exposed through an OData service.
It uses well-known web technologies like HTTP, AtomPub (XML), and JSON (JavaScript).
The fundamental idea is that any OData client can access any OData data source. Rather than creating
unique ways to expose and access data, data sources and their clients can instead rely on the single solution
that OData provides.
OData can be used freely without the need for a license or contract.
History of OData:
OData was originally created by Microsoft, yet OData isn't a Microsoft-only technology.
Microsoft has included OData under its Open Specification Promise, guaranteeing the protocol's long-term
availability for others. The Open Specification Promise allows anyone to freely interoperate with OData
implementations.
An OData service exposes data stored in database tables or views as OData collections for analysis and
display by client applications.
The client can be a web browser, an SAPUI5 application, an HTML5 application, or any other application
that supports OData.
An OData service for SAP HANA XS is defined in a text file with the file suffix .xsodata, for example,
OdataSrvDef.xsodata.
The XSODATA services are great because they provide a large amount of functionality with minimal
amounts of development effort.
XSOData Examples:
An OData service for SAP HANA XS is defined in a text file with the file suffix .xsodata
The file must contain at least the entry
service {
}
You can also expose Attribute, Analytic, and Calculation views using XSOData.
Suppose there is a calculation view called "Sales" in a package "sap.hana"; then the service will look like:
service {
"sap.hana"::"Sales" as "Sales";
}
1. Create an XS project as described in the article Create Your First HANA XS Application
using HANA Studio.
2. Right-click on the project and select New → XS OData File. Give the file name
"ProductODataService" and click Finish.
3. Copy and paste the code below. Make sure you change <YOUR_SCHEMA> to the schema where the PRODUCT
table was created.
service {
"<YOUR_SCHEMA>"."PRODUCT" as "Product";
}
4. Right-click and select Team → Activate. This will activate the file.
5. Right-click and select Run As → XS Service. This will open the XSOData service in the default browser.
6. Enter the user id and password of the HANA system. You will see the result as shown below.
The OData service can be called from any SAPUI5 or HTML5 application. It can also be called from any other
UI application (in Java, .NET, etc.) or any tool that supports OData.
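Standard OData query options such as $top, $filter, and $format=json can be appended to the entity set URL. As a minimal sketch of the consumption side (the payload and field names below are hypothetical, not from this document), a JavaScript client could parse the JSON an XSOData service returns like this:

```javascript
// OData v2 JSON responses wrap the entity set in a d.results envelope.
// sampleResponse stands in for the body returned by e.g.
// .../ProductODataService.xsodata/Product?$format=json (illustrative only).
const sampleResponse = JSON.stringify({
  d: {
    results: [
      { PRODUCT_ID: "P01", PRODUCT_NAME: "Monitor" },
      { PRODUCT_ID: "P02", PRODUCT_NAME: "Keyboard" }
    ]
  }
});

// Extract one field from every row of the entity set
function extractNames(json) {
  return JSON.parse(json).d.results.map(row => row.PRODUCT_NAME);
}

console.log(extractNames(sampleResponse)); // [ 'Monitor', 'Keyboard' ]
```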
Introduction:
SAP HANA XS enables you to create database schema, tables, views, and sequences as design-time files in
the repository. This is achieved with the help of Core Data Services (CDS).
Core Data Services (CDS) artifacts are design-time definitions. When a CDS file is activated, it generates
the corresponding runtime objects.
A CDS file must have the file extension .hdbdd, for example, MyCDSTable.hdbdd.
Note: Currently CDS cannot be used to create schemas and sequences. To create a database schema you need
to create a .hdbschema file, and to create a sequence you need to create a .hdbsequence file.
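As a sketch of those two file types (file names and values are examples, not taken from this document), a design-time schema file such as MY_SCHEMA.hdbschema contains a single property, and a .hdbsequence file a few key = value properties in the same style:

```
schema_name = "MY_SCHEMA";
```

A sequence file (e.g. MySequence.hdbsequence) would similarly define properties such as schema, start_with, and increment_by.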
1. Create a XS project as mentioned in the article Create Your First HANA XS Application using
HANA Studio
2. Right-click and select New → Others → SAP HANA → Database Development → DDL Source File.
3. Enter the name of the CDS document in the File Name box, for example, MyModel.
4. A basic CDS will be created similar to below.
namespace mypackage.data;
@Schema: 'MY_SCHEMA'
context MyModel {
};
o Namespace: The name of the repository package in which you created the new CDS document, for
example, mypackage.data
o Context Name: The name of the context in a CDS document must match the name of the CDS
document itself; this is the name you enter when using the file-creation wizard to create the new
CDS document, for example:
o Schema Name: The @Schema annotation defines the name of the schema to use to store the
artifacts that are generated when the CDS document is activated. Change the schema name to your
schema. For example, MY_SCHEMA.
5. Define an entity inside the context, for example:
@Catalog.tableType : #COLUMN
entity Employee {
key Id : Integer;
FirstName : String(20);
LastName : String(20);
Salary : Decimal(15, 2);
};
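Putting the fragments above together, the complete MyModel.hdbdd would look roughly like this (a sketch; the schema and package names are the examples used above):

```
namespace mypackage.data;

@Schema: 'MY_SCHEMA'
context MyModel {
    @Catalog.tableType : #COLUMN
    entity Employee {
        key Id    : Integer;
        FirstName : String(20);
        LastName  : String(20);
        Salary    : Decimal(15, 2);
    };
};
```

On activation, the generated column table typically appears in schema MY_SCHEMA under a name derived from the package, context, and entity, for example mypackage.data::MyModel.Employee.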
6. Right-click and select Team → Activate. This will activate the file.
7. Go to Catalog. Expand schema MY_SCHEMA and check the new table. You may have to refresh
the schema if the new table does not appear.
External View
An External View is a Data Dictionary view in ABAP used to consume HANA models created in the SAP
HANA repository. External views are created using the creation wizard in Eclipse, in the
ABAP perspective.
External views can be created for these SAP HANA repository models:
1. Attribute views
2. Analytical views
3. Calculation views (only with default values for input parameters)
When you create an external view, the definition of the HANA view is copied from the HANA repository into
ABAP. We can create several external views for one and the same HANA model.
Synchronizing the external view with the HANA repository is required when changes are made to
the HANA model after the external view was created. The external view editor contains a button called
"Synchronize" to synchronize the external view with the HANA view.
When you create an external view for a HANA model, only supported data-type mappings are
allowed, for example:
NCLOB → STRING (Unicode character string)
Step-By-Step Procedure
1. Right-click on your package and choose New → Other ABAP Repository Object.
2. In the New ABAP Repository Object window, search for "dictionary", choose Dictionary
View under the Dictionary folder, and click OK.
5. The external view will be created successfully. Click on the activate button to activate the external view
you created.
7. Before calling the view in an ABAP program, we do a quick and easy test to access the data. Go to
SE16N (Data Browser), enter ZEV_EMP_INFO, and execute.
REPORT zdemo_hana_view.
" Minimal body (the original listing was truncated): read the external view, display via SALV.
SELECT * FROM zev_emp_info INTO TABLE @DATA(lt_emp).
cl_salv_table=>factory( IMPORTING r_salv_table = DATA(lo_salv) CHANGING t_table = lt_emp ).
lo_salv->display( ).
9. Save and activate the program and execute the report.
Output