Table 1:
Get Started → Get Productive
● Creating Your First Cloud Application [page 10] → SAP HANA [page 1078] | Java [page 1021] | SAPUI5 | HTML5 [page 1111]
● Secure Applications: Enable Application Providers to Access Your Account [page 28] → Authentication [page 1326] | Authorization [page 1332] | OAuth 2.0 [page 1340] | Roles [page 1394] | ID Federation [page 1406] ...
1.1 Overview
SAP Cloud Platform is an in-memory cloud platform based on open standards. It provides access to a feature-
rich, easy-to-use development environment in the cloud. The platform includes a comprehensive set of services
for integration, enterprise mobility, collaboration, and analytics.
SAP Cloud Platform enables customers and partners to rapidly build, deploy, and manage cloud-based enterprise
applications that complement and extend their SAP or non-SAP solutions, either on-premise or on-demand.
As a Platform-as-a-Service operated by SAP, the product frees you from infrastructure and IT costs and
offers state-of-the-art quality of service: availability, scalability, and multitenancy.
Application development
You can use the following programming models to build highly scalable applications:
● Java - SAP Cloud Platform is Java EE 6 Web Profile certified. You can develop Java applications just like for
any application server. You can also easily run your existing Java applications on the platform.
● SAP HANA - you can use the SAP HANA development tools to create comprehensive analytical models and
build applications with SAP HANA programmatic interfaces and integrated development environment.
● HTML5 - you can easily develop and run lightweight HTML5 applications in a cloud environment.
● SAPUI5 - use the UI Development Toolkit for HTML5 (SAPUI5) for developing rich user interfaces for modern
Web business applications.
In the context of SAP Cloud Platform, a solution consists of various application types and configurations
created with different technologies, and is designed to implement a certain scenario or task flow. You can deploy
solutions by using the Change and Transport System (CTS+) tool, the console client, or the cockpit,
where you can also monitor your solutions. To describe and technically realize solutions, SAP introduces the
multi-target application (MTA) model. It encompasses and describes application modules, dependencies, and
interfaces in an approach that facilitates validation, orchestration, maintenance, and automation of the
application throughout its lifecycle.
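As a sketch of what an MTA descriptor looks like, the following minimal mtad.yaml describes a solution with one Java module and one HTML5 module. The ID, version, module names, and the database resource are illustrative placeholders, and the module types should be checked against the current MTA documentation for your landscape:

```yaml
# Illustrative MTA deployment descriptor (mtad.yaml).
# IDs, versions, names, and types are placeholders, not platform defaults.
_schema-version: "2.0"
ID: com.example.myapp
version: 1.0.0

modules:
  - name: myapp-backend
    type: com.sap.java        # a Java application module
    requires:
      - name: myapp-db
  - name: myapp-ui
    type: com.sap.hcp.html5   # an HTML5 application module

resources:
  - name: myapp-db            # a database the Java module depends on
```

The descriptor is what makes validation and orchestration possible: the deployer can check that every required resource is declared before any module is deployed.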
Applications developed on SAP Cloud Platform run in a modular and lightweight runtime container. The platform
provides a secure, scalable runtime environment with reusable platform services.
Virtual Machines
Virtual machines allow you to install and maintain your own applications in scenarios not covered by the platform.
A virtual machine is the virtualized hardware resource (CPU, RAM, disk space, installed OS) that blurs the line
between Platform-as-a-Service and Infrastructure-as-a-Service.
Services
You can consume a set of services provided by SAP Cloud Platform according to the technology you prefer and
the use cases of your scenarios.
SAP Cloud Platform facilitates secure integration with on-premise systems running software from SAP and other
vendors. Using the platform services, such as the connectivity service, applications can establish secure
connections to on-premise solutions, enabling integration scenarios with your cloud-based applications.
In-memory persistence
SAP Cloud Platform includes persistence powered by SAP HANA, taking full advantage of its real-time, in-memory
computing technology and built-in analytics.
Comprehensive, multilevel security measures are built into SAP Cloud Platform. This security is
engineered to protect your mission-critical business data and assets and to provide the necessary industry-standard
compliance certifications.
Free trial
You can start by getting a free SAP Cloud Platform developer license on the SAP Cloud Platform Developer Center,
which also gives you access to our community and all the free technical resources, tutorials, blogs, and support
you need.
Related Information
General Constraints
● Upload limit: the size of an application deployed on SAP Cloud Platform can be up to 1.5 GB. If the application
is packaged as a WAR file, the size of the unzipped content is taken into account.
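Because the 1.5 GB limit is applied to the unzipped content of a WAR file, you can estimate in advance whether a deployment will fit. The following is an illustrative sketch, not an SAP tool; the class name and the limit constant are assumptions for the example:

```java
// Illustrative check: sums the uncompressed sizes of all entries in a
// WAR/ZIP archive, which is the figure the 1.5 GB upload limit applies to.
import java.io.IOException;
import java.util.Enumeration;
import java.util.zip.ZipEntry;
import java.util.zip.ZipFile;

public class WarSizeCheck {
    static final long LIMIT_BYTES = 1_610_612_736L; // 1.5 GB

    // Returns the total uncompressed size of all entries in the archive.
    static long unzippedSize(String warPath) throws IOException {
        long total = 0;
        try (ZipFile zip = new ZipFile(warPath)) {
            Enumeration<? extends ZipEntry> entries = zip.entries();
            while (entries.hasMoreElements()) {
                long size = entries.nextElement().getSize();
                if (size > 0) {
                    total += size; // getSize() returns -1 when unknown
                }
            }
        }
        return total;
    }

    public static void main(String[] args) throws IOException {
        long size = unzippedSize(args[0]);
        System.out.println(size + " bytes, within limit: " + (size <= LIMIT_BYTES));
    }
}
```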
● SAP Cloud Platform exposes applications only via HTTPS. For security reasons, applications cannot be
accessed via HTTP.
● SAP Cloud Platform Tools for Java and SDK have been tested with Java 7 and Java 8.
● SAP Cloud Platform Tools for Java and SDK run fine in many operating environments with Java 7 and Java 8
that are supported by Eclipse. However, we do not systematically test all platforms.
● SAP Cloud Platform Tools for Java must be installed on Eclipse IDE for Java EE developers.
For the platform development tools, the SDK, the Cloud connector, and SAP JVM, see https://tools.hana.ondemand.com/#cloud
Browser Support
For UIs of the platform itself, such as the SAP Cloud Platform Cockpit, the following browsers are supported on
Microsoft Windows PCs and where mentioned below on Mac OS X:
Browser Versions
If you are developing an SAPUI5 application, for the list of supported browsers see Browser and Platform
Matrixes.
For security reasons, SAP Cloud Platform does not support TLS 1.0, SSL 3.0 and older, or RC4-based cipher
suites. Make sure your browser supports at least TLS 1.1 and modern ciphers (for example, AES).
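The same restriction applies to any HTTP client you build against the platform. The following sketch (an illustration, not an SAP API; the helper name is invented) filters a socket's supported protocols down to the versions the platform will negotiate:

```java
// Illustrative: restrict a TLS client to protocol versions the platform
// accepts (TLS 1.1 or newer), excluding TLS 1.0, SSL 3.0, and older.
import java.util.Arrays;
import java.util.List;

public class TlsProtocols {
    private static final List<String> REJECTED =
            Arrays.asList("SSLv2Hello", "SSLv3", "TLSv1"); // TLS 1.0 and older

    // Keeps only protocol names the platform will negotiate.
    static String[] acceptable(String[] supported) {
        return Arrays.stream(supported)
                .filter(p -> !REJECTED.contains(p))
                .toArray(String[]::new);
    }

    public static void main(String[] args) throws Exception {
        javax.net.ssl.SSLSocketFactory factory =
                (javax.net.ssl.SSLSocketFactory) javax.net.ssl.SSLSocketFactory.getDefault();
        try (javax.net.ssl.SSLSocket socket =
                (javax.net.ssl.SSLSocket) factory.createSocket()) {
            // Limit the socket to platform-compatible protocol versions.
            socket.setEnabledProtocols(acceptable(socket.getSupportedProtocols()));
            System.out.println(Arrays.toString(socket.getEnabledProtocols()));
        }
    }
}
```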
Services
You can find the restrictions related to each SAP Cloud Platform service in the respective service documentation.
For more information, see Services [page 307].
Accounts
For more information about the limitations of each type of account (developer, customer, partner), see Account
Types [page 14].
Table 2:
To learn about → See
● How to create a cloud application → Creating Your First Cloud Application [page 10]
Build your first application on the platform based on your preference for development technology and language.
You might want to try several of the tutorials in these tables.
Note
The Import option for some technologies means that sample applications are available, which you can import
into your Eclipse IDE.
SAP HANA
HTML5
Table 5:
SAP Web IDE Hello World Tutorial Using SAP Web IDE [page 85]
SAPUI5
Table 7:
Learn about → Details
● Accounts: Manage your accounts and available quota using one dedicated user interface, the cockpit, or the console client commands.
● Cockpit: Manage all activities associated with your account from the cockpit. See Cockpit [page 97].
● Roles and members: Find out about the set of predefined roles offered by the platform, which can be assigned to platform users, and the tasks associated with them. Use the cockpit to manage all members and roles associated with your account. See Managing Members [page 26].
● Subscriptions: Use the cockpit to manage the subscriptions purchased for your account. See Managing Subscriptions [page 31].
● Developing applications: Java (see Java: Development [page 1021]), SAP HANA (see SAP HANA: Development [page 1078]), HTML5 (see HTML5: Development [page 1111]), SAPUI5 (see UI development toolkit for HTML5 (SAPUI5)).
● Connectivity: Connect your applications to remote services that run on the cloud or on-premise. See SAP Cloud Platform Connectivity [page 311].
● Security: Configure identity federation and single sign-on with external identity providers. See Identity and Access Management [page 1318].
● Services: Make use of the services offered by the platform to extend the functionality of your applications.
An account is a hosted environment on SAP Cloud Platform in which users deploy applications and connect them
to services.
Accounts
An account on SAP Cloud Platform represents the scope of the functionality you have purchased based on your
contract with SAP and the agreed level of support you are entitled to. It is the logical representation of your SAP
Cloud Platform landscape. In a productive environment, the account model has a higher degree of complexity
than in a trial environment. It may consist of more than one account, each representing a different instance in
your landscape, for example.
There are also differences in how accounts work depending on the account type you choose. The account type
determines pricing, conditions of use, resources, available services, and landscape host. Your use case
determines whether you choose a free trial account (for testing) or a licensed account on a productive landscape.
You can start by getting a free SAP Cloud Platform developer license, which is for trial use only. This also gives you
access to our community and all the free technical resources, tutorials, blogs, and support you need. For productive
use, purchase a paid customer or partner account. Be aware of these differences so that you can plan and set up
your landscape and account model in the best possible way.
In the cockpit, accounts are organized in a hierarchical structure and are associated with a particular data center.
To run and manage your applications on SAP Cloud Platform and to connect them to services, your user must be
assigned to an account. Your user can be assigned to one or more accounts. You can view a list of all accounts
available to you and access them using the cockpit. A user with administrative permissions can use a self-service
to create accounts, add users to accounts, and assign roles to users for the account in question.
Depending on your needs, different account types are available for testing (free) or productive use (paid
accounts). The account type determines pricing, conditions of use, resources, available services, and landscape
host. You can start by getting a free SAP Cloud Platform developer license, which is for trial use only. This also gives
you access to our community and all the free technical resources, tutorials, blogs, and support you need. For
productive use, you need to purchase a paid customer or partner account.
Each account is associated with a particular data center, which is the physical location (for example, Europe, US
East) where applications, data, or services are hosted.
While developer accounts use the trial landscape, which is located in Europe only, customer and partner accounts
use a productive landscape, which is available on a regional basis.
The specific landscape associated with an account is relevant when you deploy applications (landscape host) and
access the SAP Cloud Platform cockpit (cockpit URL).
Global Accounts
Each customer or partner account is associated with a particular global account. The global account groups
together different accounts that an administrator makes available to the users in their organization for deploying
applications on the cloud platform. The customer data, billing information, and purchased quota (such as Java
quota) are stored in a global account. Accounts in a global account are independent of each other. This is
important to consider with respect to security, user management, data migration and management, integration,
and so on, when you plan the landscape and overall architecture.
Administrators can assign the available quota to the different accounts and move it between accounts that belong
to the same global account (Prerequisite: You have the Administrator role for the relevant accounts.). New
accounts are assigned automatically to the associated global account. The global account is the same on all
landscapes.
Note
The global account feature is not available in a trial environment. As a user working in a trial environment, you
see your account in which you deploy and run applications.
Related Information
This section provides an overview of account types available on SAP Cloud Platform.
The main features of each account type are described in the following table:
Table 8:
Developer account
● Use case: A developer account enables you to explore the basic SAP Cloud Platform functionality for a non-committal and unlimited period. Access is open to everyone.
● Benefits: Free of charge; self-service registration; unlimited period; a trial tenant database on a shared HANA MDC system that you can use for 12 hours. Restriction: after 12 hours, it is shut down automatically to free resources (see Overview of Database Systems and Databases [page 843]).
● Services available: Productive and beta services
● Limitations: One trial account per trial user; one running Java application; 1 GB of database storage; 1 GB of document storage; one user per account; one SAP HANA tenant database; 100 MB for all Git repositories; two configured on-premise systems with the Cloud connector; Cloud connector supported only for Java and HTML5 applications; no service level agreement with regard to the availability of the platform; 2 running Mobile applications
● Registration: For information about how to register, see Signing Up for a Developer Account [page 17].
● Landscape host: hanatrial.ondemand.com

Customer account
● Use case: A customer account enables you to host productive, business-critical applications with 24x7 support. You can purchase a customer account just like any other SAP software.
● Benefits: Support for productive applications
● Services available: Productive and beta services
● Limitations: Resources according to your contract
● Registration: For more information, see https://hcp.sap.com/pricing.html . Contact us on SAP Cloud Platform or via an SAP sales representative.
● Landscape host: See Landscape Hosts [page 41]

Partner account
● Use case: A partner account enables you to build applications and to sell them to your customers.
● Benefits: Includes SAP Application Development licenses to enable you to get started with scenarios across cloud and on-premise applications; offers the opportunity to certify applications and receive the SAP partner logo package with usage policies; partners can advertise and sell applications via the SAP Store.
● Services available: Productive and beta services
● Limitations: Predefined resources according to your partner package. More can be purchased if there is a need.
● Registration: To join the partner program, sign up at the SAP Application Development Partner Center.
● Landscape host: See Landscape Hosts [page 41]
Related Information
To deploy applications on SAP Cloud Platform, you need an account that corresponds to your role.
Related Information
A developer account gives you access to the trial landscape for an unlimited period and is free of charge.
Context
Developer accounts are intended for personal exploration, and not for use in a productive environment or for team
development. The scope of functionality a developer account offers is limited compared with a productive
(customer or partner) account. Here are some things to consider before you decide to use a developer account:
● Your trial user can have only one trial account. You cannot create additional accounts on a trial landscape.
Note
To be able to create accounts, please purchase a customer account or join a partner program.
● A developer account has only one virtual machine (VM) at its disposal. You can deploy multiple applications,
but you can start only one application at any one time.
● Applications are stopped automatically after a certain period of time for cleanup purposes.
Procedure
If you do not yet have a user, register with the SAP ID service and create a developer account:
1. Click Register.
2. On the registration screen, enter the required data and confirm the changes. You'll receive a confirmation
e-mail with instructions to activate your account.
3. Click the link in the e-mail to confirm your address and to activate your account.
Your developer account is now automatically created. The cockpit opens and shows the overview of your
newly created account. The name of your new developer account contains your user ID and the suffix trial,
for example, p0123456789trial.
When deploying to the cloud, remember to use the SAP Cloud Platform landscape host
hanatrial.ondemand.com.
Related Information
A customer account allows you to host productive, business-critical applications with 24x7 support.
When you want to purchase a customer account, you can select from a set of predefined packages. For more
information, see https://hcp.sap.com/pricing.html . Contact us on SAP Cloud Platform or via an SAP sales
representative.
In addition, you can upgrade and refine your resources later on. You can also contact your SAP sales
representative and opt for a configuration tailored to your needs.
After you have purchased your customer account, you will receive an e-mail with a link to the landing page of SAP
Cloud Platform.
A partner account enables you to build applications and to sell them to your customers.
To become a partner, you need to fill in an application form and then sign your partner contract. You will be
assigned to an account with the respective resources. To apply for the partner program, visit
https://www.sapappsdevelopmentpartnercenter.com/en/signup/new/ . You will receive a welcome e-mail with
further information afterwards.
Related Information
You can manage accounts and assign the quota available for a global account to the accounts associated with this
global account.
Prerequisites
You have the Administrator role for the account in question to be able to manage the account, its members, and
the quota.
The overview of accounts lists all the accounts available to you and is your starting point for viewing and changing
account details in the cockpit. Accounts are displayed as tiles that show details about the account, including the
number of deployed Java applications, members, and quota information.
To view or change the details for an account, trigger the intended action directly from the tile toolbar.
Note
You can manage accounts and quota using the cockpit or the console client commands.
Related Information
You can create accounts and use a copy function to copy settings from other accounts.
Prerequisites
The overview of accounts available to you is your starting point for creating accounts in the cockpit.
The new account is added as a new tile in the overview from where you can perform further actions. You are
automatically assigned as a member of the newly created account.
Note
Account creation happens in the background. Some details, including the account name and description, are
available right away, while the settings you selected for copying are applied with some delay. There is no
notification that the account has been created.
You can enable an account to use beta features made available by SAP for SAP Cloud Platform from time-to-time.
This option is available to administrators only and deselected by default for your productive landscape.
Caution
You should not use SAP Cloud Platform beta features in productive accounts, as any productive use of the beta
functionality is at the customer's own risk, and SAP shall not be liable for errors or damages caused by the use
of beta features.
Procedure
Next Steps
The newly created account is displayed on the overview page of available accounts.
You can view and change the details of the currently selected account.
Prerequisites
Context
The overview of accounts available to you is your starting point for viewing and changing account details in the
cockpit. To view or change the details for an account, trigger the intended action directly from the tile for the
relevant account.
The account name is a unique identifier of the account on the cloud platform that is automatically generated when
the account is created. You use this account name as a parameter for the console client commands.
● Display name: Specify a human-readable name for your account; you can change it later if necessary. This way
you can distinguish your accounts more easily if you have more than one.
● Description: Specify a short descriptive text about the account, typically stating what it does.
● Beta Features: Enable the account to use beta features made available by SAP for SAP Cloud Platform from
time-to-time. This option is available to administrators only and deselected by default for your productive
landscape.
Caution
You should not use SAP Cloud Platform beta features in productive accounts, as any productive use of the
beta functionality is at the customer's own risk, and SAP shall not be liable for errors or damages caused by
the use of beta features.
● Default Database: Select a different default database from the list of default databases available for the
account.
3. To edit the account details, choose (Edit) on the tile for the account in question.
4. Modify the display name.
5. Specify or modify the description.
6. To enable the use of beta features in the account, select the checkbox.
7. Select a different default database.
8. Save your changes.
Related Information
Prerequisites
Context
The overview of accounts available to you is your starting point for deleting accounts in the cockpit.
● The account must not contain subscriptions, non-shared database systems, database schemas, deployed
applications, HTML5 applications, or document service repositories.
Note
You need to delete these account entities before you proceed with the account deletion. For information
about how to delete them, see Related Information.
You cannot delete the last remaining account from the global account in question.
Procedure
Related Information
You can view details about the Java quota and virtual machines quota purchased for a global account and how it is
distributed between the accounts in this global account. As long as there are free quotas, you can freely distribute
them between the accounts.
Prerequisites
You have the Administrator role for the accounts for which you want to manage the quota.
The overview of accounts available to you is your starting point for viewing quota information in the cockpit. The
overview shows the different quotas in use, how they are distributed between individual accounts, and how many
free quotas there are for which purchased edition. For example, there are 2 free Java quotas out of 5 that can be
used in the different accounts.
On the Quota Management page in the cockpit, you can view quota information per account and edition, and
manage quota for the currently selected global account. The quota purchased for a global account is available to
the applications deployed in all accounts in this global account. The quota assigned to individual accounts in the
global account must not exceed the purchased limits. You can free quotas by removing them from an account.
Quotas are sold in different editions.
In the editing mode, use the + and – buttons to adjust the quota in the specified limits.
● The Edit option on the Quota Management page is enabled only if you have the Administrator role for at least
one account in this global account.
● You need the Administrator role for the account in question to be able to change the quota. Otherwise, the +
and – buttons are disabled and you can only view how the quota is distributed.
● There is a category Other Accounts for which the total quota of all accounts belonging to this category is
displayed, but no details. These are the accounts to which you are not assigned as member and that you
cannot access.
● You cannot decrease quota any further if it is still in use. You first need to release some resources before you
can continue (that means, stop some of the applications or processes in that account).
● You cannot increase quota any further if you have reached the limit of your purchased quota because you
have distributed all the available quota already.
● You can filter the quota information by quota type and by account.
● Clicking the link for the account name takes you to the overview page for this account.
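The distribution rules above can be summarized in a small model. This is purely illustrative code, not platform behavior; the class and method names are invented for the example:

```java
// Illustrative model of quota distribution in a global account:
// assignments may not exceed the purchased total, and an account's
// assignment may not drop below what its applications already use.
import java.util.HashMap;
import java.util.Map;

public class QuotaModel {
    private final int purchased;                 // total quota bought for the global account
    private final Map<String, Integer> assigned = new HashMap<>();
    private final Map<String, Integer> inUse = new HashMap<>();

    QuotaModel(int purchased) { this.purchased = purchased; }

    void setInUse(String account, int units) { inUse.put(account, units); }

    // Assign quota to an account; returns false if a limit would be violated.
    boolean assign(String account, int units) {
        int others = assigned.entrySet().stream()
                .filter(e -> !e.getKey().equals(account))
                .mapToInt(Map.Entry::getValue).sum();
        if (others + units > purchased) return false;             // over purchased limit
        if (units < inUse.getOrDefault(account, 0)) return false; // quota still in use
        assigned.put(account, units);
        return true;
    }

    // Quota not yet assigned to any account.
    int free() {
        return purchased - assigned.values().stream().mapToInt(Integer::intValue).sum();
    }
}
```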
Procedure
1. Log on to the cockpit and choose Quota Management in the navigation area.
2. Choose Edit.
3. Use the + and – buttons to adjust the quota in the specified limits as needed and save your changes.
Related Information
SAP may offer and a customer may choose to accept access to functionality that is not generally available and is
not validated and quality assured in accordance with SAP’s standard processes. Such functionality is defined as a
beta feature.
The aim of the beta features is to enable customers, developers, and partners to test new features on SAP Cloud
Platform. The beta features have the following characteristics:
● SAP may require that customers accept additional terms to use beta features.
● The beta features are either released on productive landscapes for customer and partner accounts, or on trial
landscapes for developer accounts, or on both landscapes.
● You can enable some of the beta features in the SAP Cloud Platform cockpit. In the overview of accounts
available to you, choose the (edit) icon on the tile for the account in question and then select the
checkbox to enable the use of beta features.
● No personal data may be processed by beta functionality in the context of contractual data processing
without additional written agreement.
Caution
You should not use SAP Cloud Platform beta features in productive accounts. Any productive use of the beta
functionality is at the customer's own risk, and SAP shall not be liable for errors or damages caused by the use
of beta features.
Related Information
Use the cockpit to manage users and their roles. You can add and remove users for an account and select and
deselect roles. All members assigned to the selected account can use the functionality provided by SAP Cloud
Platform in the scope of this account and as permitted by their assigned account member roles. These roles
support typical tasks performed by users when interacting with the platform.
Prerequisites
SAP Service Marketplace users are automatically registered with the SAP ID service, which controls user
access to SAP Cloud Platform.
Context
Procedure
Note
The name of a member is displayed only after the member visits the account for the first time.
● To select or deselect roles for a member, choose (Edit). The changes you make to the member's roles
take effect immediately.
● You can enter a comment when editing user roles. This provides you with an effective and simple way of
tracking the reasons for account membership and other important data. The comments are visible to all
members.
● You can send an e-mail to a member. This option is displayed only after the recipient visits the account for the
first time.
● To remove all the roles of a member, choose Delete (trashcan). This removes the member from the account.
● To check the member history, choose the History button to view a list of changes to members (for example,
added or removed members, or changed role assignments).
● To filter the member list for a specific role, use the filter to show only the members with this role.
Related Information
If your scenario requires it, you can add application providers as members to your SAP Cloud Platform customer
account and assign them the administrator role so that they can deploy and administer the applications you have
purchased.
Prerequisites
Tip
You can request user IDs at the SAP Service Marketplace: http://service.sap.com/request-user
SAP Service Marketplace users are automatically registered with the SAP ID service, which controls user
access to SAP Cloud Platform.
As an administrator of your SAP Cloud Platform customer account, you can add members to it and make these
members administrators of the account using the SAP Cloud Platform cockpit. For example, if you have
purchased an application from an SAP implementation partner, you may need to enable the SAP implementation
partner to deploy and administer the application.
Procedure
1. In your Web browser, open the SAP Cloud Platform cockpit using the URLs given below. Use the relevant URL
for the region with which your customer account is associated:
○ Europe: https://account.hana.ondemand.com/cockpit
○ United States: https://account.us1.hana.ondemand.com/cockpit (US East), and https://
account.us2.hana.ondemand.com/cockpit (US West)
○ Asia-Pacific: https://account.ap1.hana.ondemand.com/cockpit
The cockpit provides integrated access to all accounts you operate on the productive landscape.
2. In the cockpit, select the customer account to which you want to add members.
3. In the navigation area, choose Members.
Make sure that you have selected the relevant global account to be able to select the right account.
All members currently assigned to the account are displayed in a list.
4. In the Members section, choose Add Members.
5. In the Add Members dialog box, enter the user IDs you have received from your application provider and then
select the Administrator checkbox.
To separate the entries, use commas, spaces, or semicolons. The user IDs are case-insensitive and contain
alphanumeric characters only. Note that there is currently no user validation.
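The input format described here can be sketched as a simple parser. This illustrates the rules above and is not the cockpit's actual validation code:

```java
// Illustrative parser for the Add Members input: entries separated by
// commas, spaces, or semicolons; IDs must be alphanumeric and are
// case-insensitive (normalized to lower case here).
import java.util.ArrayList;
import java.util.List;

public class MemberIdParser {
    // Splits the raw input and returns normalized, validated user IDs.
    static List<String> parse(String input) {
        List<String> ids = new ArrayList<>();
        for (String token : input.split("[,;\\s]+")) {
            if (token.isEmpty()) continue;
            if (!token.matches("[A-Za-z0-9]+")) {
                throw new IllegalArgumentException("Invalid user ID: " + token);
            }
            ids.add(token.toLowerCase()); // IDs are case-insensitive
        }
        return ids;
    }
}
```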
Note
The Developer checkbox is selected by default. Make sure you do not deselect this checkbox.
Note
You cannot remove your own administrator role.
7. Notify your application provider that they now have the necessary permissions to access the account.
Related Information
SAP Cloud Platform delivers predefined roles supporting the typical tasks performed by users when interacting
with the platform.
Roles
Table 9:
Role Description
Administrator Enables you to manage account members, create new accounts using the self-service option,
and move quota between accounts (prerequisite: you are an administrator in each account).
You can also manage subscriptions, trust, authorizations, and OAuth settings, and restart
SAP HANA services on HANA databases.
In addition, you have all permissions granted by the developer role, except the debug permission.
Note
This role also grants permissions to view the Connectivity tab in the SAP Cloud
Platform cockpit.
Cloud Connector Admin Enables you to open secure tunnels via Cloud Connector from on-premise networks to
your cloud accounts.
Note
This role also grants permissions to view the Connectivity tab in the SAP Cloud
Platform cockpit.
Developer Supports typical development tasks, such as deploying, starting, stopping, and debugging
applications. You can also change loggers and perform monitoring tasks, such as creating
availability checks for your applications and executing MBean operations.
Note
This role is assigned to a newly created user by default.
Support User Designed for technical support engineers, this role enables you to read almost all data
related to an account, including its metadata, configuration settings, and log files. Note that
to be able to read database content, a database administrator must assign the appropriate
database permissions to you.
Application User Admin The account administrator assigns an account member the Application User Admin role.
This role enables you to manage user permissions on application level to access Java,
HTML5 applications, and subscriptions. You can control permissions directly by assigning
users to specific application roles or indirectly by assigning users to groups, which you
then assign to application roles. You can also unassign users from the roles or groups.
Note
The Application User Admin role does not enable you to manage account roles and to
perform actions on account level (for example, stopping or deleting applications).
Related Information
Overview
By using SAP Cloud Platform, a provider can build and run an application that is consumed by multiple customers.
For that purpose, the platform provides multitenant functionality, which allows providers to own, deploy, and
operate the application for multiple customers at reduced cost. For example, the provider can upgrade the
application for all customers at once instead of performing each upgrade individually, or can share resources
across many customers. In turn, customers, as application consumers, can configure certain features of their
applications and launch them through consumer-specific URLs. Furthermore, they can protect the application by
isolating their tenants. To learn about multitenant applications, see Related Information.
Consumers do not deploy their applications in their accounts; they simply subscribe to the provider application.
As a result, a subscription is created in the consumer account. This subscription represents the contract or
relation between an account (tenant) and a provider application.
However, Java applications can be subscribed to only through the console client. When such a subscription is set up in the consumer account, the Java provider application can use a connectivity destination that is configured in the consumer account.
Prerequisites
● You have a customer or partner account. For more information, see Account Types [page 14].
● You have developed and deployed an application for multiple consumers. For more information, see
Multitenant Applications [page 1060].
● The provider and consumer accounts belong to the same landscape. For more information, see Landscape
Hosts [page 41].
● (Only for Java subscriptions) You have set up the console client. For more information, see Setting Up the
Console Client [page 52].
Cockpit Operations
● List all Java and HTML5 applications to which your account is subscribed.
● Launch the applications through dedicated (consumer-specific) URLs.
Related Information
Procedure
1. Open the account in the cockpit and choose Applications Subscriptions in the navigation area.
The subscriptions to Java applications are listed with the provider account from which the subscription was
obtained and the subscribed application.
2. To navigate to the subscription overview, click the application name. You have the following options:
○ To launch an application, click the link in the Application URLs panel.
● To list all Java applications subscribed to an account, use the list-subscribed-applications command.
Example
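A sketch of a possible invocation, where the account name, user, and host are placeholder values to replace with your own (the parameter syntax follows the option table later in this section):

```shell
neo list-subscribed-applications -a mycompany -u p0123456789 -h hana.ondemand.com
```

The command prints the subscriptions of the given consumer account along with the provider account each one was obtained from.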
● To list all accounts subscribed to a Java application, use the list-subscribed-accounts command.
Example
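A sketch with placeholder values (provider account, application name, user, and host are illustrative and must be replaced with your own):

```shell
neo list-subscribed-accounts -a myprovider -b myapp -u p0123456789 -h hana.ondemand.com
```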
Note
Some subscriptions automatically created by the platform cannot be removed.
Example
Using the console client, you can create accounts and subscribe them to applications to test how applications can
be provided to multiple consumers.
Prerequisites
● You have set up the console client. For more information, see Setting Up the Console Client [page 52].
● You have developed and deployed an application that will be used by multiple consumers. For more
information, see Multitenant Applications [page 1060].
● You have a customer or partner account. For more information, see Account Types [page 14].
● You are a member of both accounts - the one where the multitenant application is deployed and the one that
you want to subscribe to the application.
Context
Note
You can subscribe an account to an application that is running in another account only if both accounts
(provider and consumer account) belong to the same landscape.
1. Open the command prompt and navigate to the folder containing neo.bat/sh (<SDK installation
folder>/tools).
2. Create an account for a given consumer.
3. Subscribe the consumer account to the provider application.
Note that you must specify the parameter -b in the format <account name>:<application>.
4. Access the application through the dedicated URL, for example https://<application name><provider
account>-<consumer account>.<landscape host>.
You can see the list of subscriptions and the corresponding application URLs to access them in the
Subscriptions pane in the cockpit.
5. (Optional) You can also check the list of your test accounts and subscriptions as follows:
Option Description
List all applications to which a given account is subscribed: Execute neo list-subscribed-applications -a <account> -u <user name or email> -h <landscape host>
List accounts you have created: Execute neo list-accounts -a <account> -u <user name or email> -h <landscape host>
List all accounts subscribed to a given application: Execute neo list-subscribed-accounts -a <account> -b <application> -u <user name or email> -h <landscape host>
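Putting the procedure together, a console session for testing a subscription might look as follows. All account names, the user, and the host are placeholders, and the -b parameter of subscribe uses the <account name>:<application> format noted above; verify the exact parameters against the console client reference for your SDK version.

```shell
# Subscribe the consumer account to the provider application
# (-b takes the format <account name>:<application>)
neo subscribe -a myconsumer -b myprovider:myapp -u p0123456789 -h hana.ondemand.com

# Verify: list the subscriptions of the consumer account
neo list-subscribed-applications -a myconsumer -u p0123456789 -h hana.ondemand.com

# Verify: list the accounts subscribed to the provider application
neo list-subscribed-accounts -a myprovider -b myapp -u p0123456789 -h hana.ondemand.com
```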
Related Information
Procedure
Related Information
Procedure
1. Open the account in the cockpit and choose Applications Subscriptions in the navigation area. The
subscriptions to HTML5 applications are listed with the following information:
○ The account name of the application provider from which the subscription was obtained
○ The name of the subscribed application
2. To navigate to the subscription overview, click the application name:
○ To launch an application, click the URL link in the Active Version panel.
○ To create or assign roles, choose Roles in the navigation area.
Procedure
1. On the Subscribed HTML5 Applications panel in the Subscriptions section, choose New Subscription.
2. Select the provider account from the dropdown list. (You can select accounts that provided applications to
your account as well as accounts where the current user has the administrator role.)
3. Select the application to which you want to subscribe.
4. Enter a subscription name.
Note
The subscription name must be unique across all subscription names and all HTML5 application names in
the current account.
Procedure
Related Information
The cockpit provides an overview of all the platform services that you can access and use for creating or
extending applications. Services are grouped by service category.
Some of the services are basic services, which are provided directly with SAP Cloud Platform and are ready to use. In addition, extended services are available. The set of services available for use differs for each account. In the overview of the services available in the account, a label on the tile for each service indicates whether the service is enabled. You can use only the services that are enabled.
The Enable option appears only if you have the Administrator role for the account in question and for the
services that need further enablement.
The representation of the details for each service in the cockpit varies to some degree.
● None, one, or more links to configuration screens of the service (depending on the service)
Note
Some services provide further configuration options, while others don’t.
Remember
You can access most of the links only after the service has been enabled.
To view the list of services available to you in a specific account, you have the following options:
Note
Some of the services are exposed only on the trial landscape for developer accounts. That means the services
are not, or not yet, released on the productive landscape for customer and partner accounts.
Some of the services are exposed only if you have previously purchased a license for them.
Related Information
Context
SAP Cloud Platform enables users to consume a wide choice of services in a self-service fashion to build and
extend applications using the cockpit.
Procedure
1. To display only the services for a specific category, use the filter function.
This makes it easier to find the relevant service in the service overview. You can view all the services, or filter the list to show only the services in a selected category.
2. To enable a service, choose the tile of the service, and then choose Enable.
The Enable option appears only if you have the Administrator role for the account in question and for the
services that need further enablement.
3. To perform administrative tasks, choose the tile for the respective service. The overview page for the service
is displayed.
The overview page shows a description of the service and several links, including links to the documentation
available for the service, and, if available, the service start page, and configuration options. The availability of
the links differs for each service.
4. On the overview page for the service, you have the following options:
a. To read the documentation available for each service, click the link provided for documentation.
b. To go to the start page for the service, click the link provided for the service UI (there could be more than
one link).
The configuration options for a service may look like the following example for the portal service:
○ To configure connection parameters to other systems (by creating connectivity destinations), choose
Configure <Portal Service> Destinations .
This option is available only if the service is enabled.
○ To create custom roles and assign custom or predefined roles to individual users and groups, choose
Configure <Portal Service> Roles .
Related Information
Applications can be deployed on the productive landscape hana.ondemand.com or the trial landscape
hanatrial.ondemand.com.
● Productive landscape - represents the productive environment; it can be used by customer and partner
accounts only.
● Trial landscape - represents the platform for testing the SAP Cloud Platform functionality. To use this
platform, you need a developer account.
The productive landscape is available on a regional basis: Each region represents the location of a data center, the
physical location (for example, Europe, US East) where applications, data, or services are hosted. Application
performance (response time, latency) can be optimized by selecting a data center close to the users. When
deploying applications, bear in mind that a customer or partner account is associated with a particular data center
and that this is independent of your own location. You could be located in the United States, for example, but
operate your account in Europe (that is, use a data center that is situated in Europe).
To deploy an application on more than one landscape, execute the deployment separately for each landscape host.
Table 10:
Account Type Data Center Landscape Host IP Ranges
Tip
Developer accounts have a suffix trial. For example: P1234567890trial.
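The separate deployment per landscape host can be sketched as follows. The account names, application name, WAR file, and user are placeholders, and the deploy parameters shown are the commonly used ones; check them against the console client reference for your SDK version.

```shell
# Deploy to the productive landscape
neo deploy -a mysubaccount -b myapp -s example.war -u p0123456789 -h hana.ondemand.com

# Deploy the same application separately to the trial landscape
neo deploy -a p1234567890trial -b myapp -s example.war -u p1234567890 -h hanatrial.ondemand.com
```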
Related Information
Set up your Java development environment and deploy your first application in the cloud.
Table 11:
Sign Up
Set Up
Download Eclipse IDE for Java EE Developers, and set up SAP Cloud Platform Tools for Java.
Create or Import
Create a classic Java EE HelloWorld application or import an existing sample application to get started.
Deploy
Monitor
You can view the status and logs of your Java applications.
Samples
A set of sample applications allows you to explore the core functionality of SAP Cloud Platform and shows how
this functionality can be used to develop complex Web applications. See: Samples [page 60]
Before you can start developing your application, you need to download and set up the necessary tools, which
include Eclipse IDE for Java EE Developers, SAP Cloud Platform Tools, and SDK.
SAP Cloud Platform Tools, SAP Cloud Platform SDK, SAP JVM, and SAP HANA Cloud Connector can be downloaded from the SAP Development Tools for Eclipse page.
For more information on each step of the setup procedure, open the relevant page from the structure.
Procedure
1. Choose the type of SAP Cloud Platform SDK for Java applications that suits your scenario.
For more information, see Installing the SDK.
2. SAP JVM is the Java runtime used in SAP Cloud Platform. It can be set as a default JRE for your local runtime.
For instructions on how to install it, see (Optional) Installing SAP JVM.
3. Download and set up Eclipse IDE for Java EE Developers.
See Installing Eclipse IDE.
4. Download and set up SAP Development Tools for Eclipse.
See Installing SAP Development Tools for Eclipse.
5. Configure the landscape host and SDK location on which you will be deploying your application.
See Setting Up SDK Location and Landscape Host in Eclipse.
6. Add Java Web, Java Web Tomcat 7, Java Web Tomcat 8, or Java EE 6 Web Profile, according to the SDK you
use. See Setting Up the Runtime Environment.
For more information on the different SDK versions and their corresponding runtime environments, see
Application Runtime Container.
7. To set up SAP JVM as a default JRE for your local environment, see Setting Up SAP JVM in Eclipse IDE.
8. If you prefer working with the Console Client, see Setting Up the Console Client.
9. If you need to establish a connection between on-demand applications in SAP Cloud Platform and existing on-
premise systems, you can use SAP HANA Cloud Connector.
For more information, see SAP HANA Cloud Connector.
Context
● Java Web - provides support for some of the standard Java EE APIs (Servlet, JSP, JSTL, EL)
● Java Web Tomcat 7 - provides support for some of the standard Java EE APIs (Servlet, JSTL, EL)
● Java Web Tomcat 8
● Java EE 6 Web Profile - certified to support Java EE 6 Web Profile APIs
For more information on Java profiles, see section Application Runtime Container [page 1025].
Procedure
1. Open https://tools.hana.ondemand.com/#cloud
2. From the SAP Cloud Platform SDK section, download the relevant ZIP file and save it to your local file system.
3. Extract the ZIP file to a folder on your computer or network.
Your SDK is ready for use. To use the SDK with Eclipse, see Setting Up SDK Location and Landscape Host in
Eclipse [page 47]. To use the console client, see Using the Console Client [page 102].
Related Information
Context
SAP Cloud Platform infrastructure runs on SAP's own implementation of a Java Virtual Machine - the SAP Java Virtual Machine (SAP JVM).
SAP JVM is a fully certified Java Standard Edition Virtual Machine for Java 7. It is derived from Oracle’s HotSpot
VM and JDK implementation, but enhanced with several supportability features such as the SAP JVM Profiler for
better monitoring, and profiling applications running on the SAP Cloud Platform local runtime. Customer support
is provided directly by SAP for the full maintenance period of SAP applications that use the SAP JVM. For more
information, see Java Virtual Machine [page 1023].
Procedure
1. Open https://tools.hana.ondemand.com/#cloud
2. From the SAP JVM section, download the SAP JVM archive file compatible to your operating system and save
it to your local file system.
3. Extract the archive file.
Note
If you use Windows as your operating system, you need to install the Visual C++ 2010 Runtime prior to using
SAP JVM. The installation package for the Visual C++ 2010 Runtime can be obtained from Microsoft. Download
and install vcredist_x64.exe from the following site: https://www.microsoft.com/en-us/download/
details.aspx?id=26999 . Even if you already have a different version of Visual C++ Runtime, for example
Visual C++ 2015, you still need to install Visual C++ 2010 Runtime prior to using SAP JVM. See SAP Note
1837221 .
Related Information
Prerequisites
If you are not using SAP JVM, you need to have a JDK installed in order to run Eclipse.
Procedure
2. Find the ZIP file you have downloaded on your local file system and unpack the archive.
3. Go to the eclipse folder and run the eclipse executable file.
4. Specify a Workspace directory.
5. To open the Eclipse workbench, choose Workbench in the upper right corner.
Note
If the version of your previous Eclipse IDE is 32-bit based and your currently installed Eclipse IDE is 64-bit
based (or the other way round), you need to delete the Eclipse Secure Storage, where Eclipse stores, for
example, credentials for source code repositories and other login information. For more information, see
Eclipse Help: Secure Storage .
To use SAP Cloud Platform features, you first need to install the relevant toolkit. Follow the procedure below.
Prerequisites
You have installed an Eclipse IDE. For more information, see Installing Eclipse IDE [page 45].
Caution
Support for Luna has reached end of maintenance. We recommend that you use the Mars or Neon releases.
Procedure
Note
For some operating systems, the path is Eclipse Preferences .
4. Configure your proxy settings (in case you work behind a proxy or a firewall):
Note
If you want to have your SAP Cloud Platform Tools updated regularly and automatically, open the Preferences
window again and choose Install/Update Automatic Updates . Select Automatically find new updates and
notify me and choose Apply.
Prerequisites
● You have installed an SDK package. See Installing the SDK [page 44].
● You have installed the SAP Development Tools for Eclipse. See Installing SAP Development Tools for Eclipse
[page 46]
Context
Follow the steps below to set or configure your SDK location and the landscape host on which you want to deploy
your applications.
Note
○ If you have previously entered an account and user name for your landscape host, these names are suggested in dropdown lists.
○ Previously entered landscape hosts are also offered in a dropdown list.
8. Choose the Validate button to check whether the data on this preference page is valid.
9. Choose OK.
Prerequisites
You have downloaded an SDK archive and installed it in your Eclipse IDE. For more information, see Setting Up
SDK Location and Landscape Host in Eclipse [page 47].
Context
You need to set up your runtime environment. You can add Java Web, Java Web Tomcat 7, Java Web
Tomcat 8, or Java EE 6 Web Profile, according to the SDK you use. Follow the steps below.
Procedure
Java Web
Note
When deploying your application on SAP Cloud Platform, you can change your server runtime even during deployment. If you manually set a server runtime different from the currently loaded one, you will need to republish the application. For more information, see Deploying on the Cloud from Eclipse IDE [page 1047].
Related Information
Context
Once you have installed your SAP JVM, you can set it as a default JRE for your local runtime. Follow the steps
below.
You have downloaded and installed SAP JVM, version 7.1.011 or higher.
Procedure
You can set SAP JVM as default or assign it to a specific SAP Cloud Platform runtime.
● To use SAP JVM as default for your Eclipse IDE, follow the steps:
1. Open the Preferences window again.
2. Select sapjvm<n> as default.
3. Choose OK.
● To use SAP JVM for launching local servers only, follow the steps:
1. Double-click on the local server you have created (Java Web Server, Java Web Tomcat 7 Server,
Java Web Tomcat 8 Server, or Java EE 6 Web Profile Server).
2. Open the Overview tab and choose Open launch configuration.
3. Select the JRE tab.
4. Choose the Alternative JRE option.
5. From the dropdown menu, select the SAP JVM version you have just added.
6. Choose OK.
Related Information
Prerequisites
You have downloaded and extracted the SDK. For more information, see Installing the SDK [page 44].
Context
SAP Cloud Platform console client is part of the SDK. You can find it in the tools folder of your SDK installation.
Before using the tool, you need to configure it to work with the platform.
Procedure
cd C:\HCP\SDK
cd tools
3. In case you use a proxy server, specify the proxy settings by using environment variables. You can find sample
proxy settings in the readme.txt file in the \tools folder of your SDK location.
○ Microsoft Windows
Note
○ For the new variables to be effective every time you open the console, define them using
Advanced System Settings Environment Variables and restart the console.
○ For the new variables to be valid only for the currently open console, define them in the console
itself.
For example, if your proxy host is proxy and proxy port is 8080, specify the following environment
variables:
set HTTP_PROXY_HOST=proxy
set HTTP_PROXY_PORT=8080
set HTTPS_PROXY_HOST=proxy
set HTTPS_PROXY_PORT=8080
set HTTP_NON_PROXY_HOSTS="localhost"
If you need basic proxy authentication, enter your user name and password:
set HTTP_PROXY_USER=<user_name>
set HTTP_PROXY_PASSWORD=<password>
set HTTPS_PROXY_USER=<user_name>
set HTTPS_PROXY_PASSWORD=<password>
○ Linux and Mac OS
export http_proxy=http://proxy:8080
export https_proxy=https://proxy:8080
export no_proxy="localhost"
If you need basic proxy authentication, enter your user name and password:
export http_proxy=http://user:password@proxy:8080
export https_proxy=https://user:password@proxy:8080
Related Information
If you have already installed and used the SAP Cloud Platform Tools, SDK, and SAP JVM, you only need to keep them up to date.
Context
If you have already installed an SDK package, you only need to update it regularly. To update your SDK, follow the
steps below.
Procedure
Note
Again, if the SDK version is higher and not supported by the version of your SAP Cloud Platform Tools
for Java, a message appears prompting you to update your SAP Cloud Platform Tools for Java. You
can check for updates (recommended) or ignore the message.
4. Choose Finish.
7. After editing all local runtimes, choose OK.
Related Information
Context
If you have already installed an SAP Java Virtual Machine, you only need to update it. To update your JVM, follow
the steps below.
Note
Do not install the new SAP JVM version to a directory that already contains SAP JVM.
3. In the Eclipse IDE main menu, choose Window Preferences Java Installed JREs and select the JRE
configuration entry of the old SAP JVM version.
4. Choose the Edit... button.
5. Use the Directory... button to select the directory of the new SAP JVM version.
6. Choose Finish.
7. In the Preferences window, choose OK.
Related Information
Context
If you have already installed SAP Cloud Platform Tools, you only need to update them. To do so, follow the steps
below.
Procedure
1. Ensure that the SAP Cloud Platform Tools software site is checked for updates:
1. Find out whether you are using a Mars or Neon release of Eclipse. The name of the release is shown on the
welcome screen when the Eclipse IDE is started.
Caution
Support for Luna has reached end of maintenance. We recommend that you use the Mars or Neon releases.
2. In the main menu, choose Window Preferences Install/Update Available Software Sites .
3. Make sure there is an entry https://tools.hana.ondemand.com/mars or https://tools.hana.ondemand.com/neon and that this entry is selected.
Note
If you want to have your SAP Cloud Platform Tools updated regularly and automatically, open the Preferences
window again and choose Install/Update Automatic Updates . Select Automatically find new updates and
notify me and choose Apply.
Related Information
This document describes how to create a simple HelloWorld Web application, which you can use for testing on
SAP Cloud Platform.
First, you create a dynamic Web project and then you add a simple HelloWorld servlet to it.
After you have created the Web application, you can test it on the local runtime and then deploy it on the cloud.
Prerequisites
You have installed the SAP Cloud Platform Tools. For more information, see Setting Up the Development
Environment [page 43].
Make sure you have downloaded the JRE that matches the SDK.
If you work in a proxy environment, set the proxy host and port correctly.
1. Open your Eclipse IDE for Java EE Developers and switch to the Workbench screen.
2. From the Eclipse IDE main menu, choose File New Dynamic Web Project .
3. In the Project name field, enter HelloWorld.
4. In the Target Runtime pane, select the runtime you want to use to deploy the HelloWorld application. In this
tutorial, we use Java Web.
Note
The application will be provisioned with a JRE version matching the Web project's Java facet. If the JRE version is not supported by SAP Cloud Platform, the default JRE for the selected SDK will be used (for the Java Web and Java EE 6 Web Profile SDKs, JRE 7).
6. Optional: If you want your context root to be different from "HelloWorld", proceed as follows:
1. Choose the Next button until you reach the Web Module wizard page.
7. Choose Finish.
1. On the HelloWorld project node, open the context menu and choose New Servlet . The Create Servlet window opens.
2. Enter hello as Java package and HelloWorldServlet as class name.
6. Choose Finish to generate the servlet. The Java Editor with the HelloWorldServlet opens.
7. Replace the body content of the doGet(…) method with the following line:
response.getWriter().println("Hello World!");
Test your HelloWorld application locally and deploy it to SAP Cloud Platform. For more information, see Deploying
and Updating Applications [page 1043].
1.2.4.4 Samples
The sample applications allow you to explore the core functionality of SAP Cloud Platform and show how this
functionality can be used to develop more complex Web applications. The samples are included in the SDK or
presented as blogs in the SCN community.
SDK Samples
The samples provided as part of the SAP Cloud Platform SDK introduce important concepts and application
features of the SAP Cloud Platform and show how common development tasks can be automated using build and
test tools.
Table 12:
Sample Feature More Information
hello-world A simple HelloWorld Web application Creating a HelloWorld Application [page 56]
connectivity Consumption of Internet services Consuming Internet Services (Java Web or Java EE 6 Web Profile) [page 394]
persistence-with-ejb Container-managed persistence with JPA Adding Container-Managed Persistence with JPA (Java EE 6 Web Profile SDK) [page 795]
persistence-with-jdbc Relational persistence with JDBC Adding Persistence with JDBC (Java Web SDK) [page 819]
document-store Document storage in repository Using the Document Service in a Web Application [page 616]
SAP_Jam_OData_HCP Accessing data in SAP Jam via OData Source code for using the SAP Jam API
All samples can be imported as Eclipse or Maven projects. While the focus has been placed on the Eclipse and
Apache Maven tools due to their wide adoption, the principles apply equally to other IDEs and build systems.
For more information about using the samples, see Importing Samples as Eclipse Projects [page 62], Importing
Samples as Maven Projects [page 64], and Building Samples with Maven [page 65].
The Web application "Paul the Octopus" is part of a community blog and shows how the SAP Cloud Platform
services and capabilities can be combined to build more complex Web applications, which can be deployed on the
SAP Cloud Platform.
● It is intended for anyone who would like to gain hands-on experience with the SAP Cloud Platform.
● It involves the following platform services: identity, connectivity, persistence, and document.
● Its user interface is developed with SAPUI5 and is based on the Model-View-Controller concept. SAPUI5 is based on HTML5 and can be used for building applications with sophisticated user interfaces. Other technologies that you can see in action in "Paul the Octopus" are REST services and job scheduling.
For more information, see the SCN community blog: Get Ready for Your Paul Position .
The Web application "SAP Library" is presented in a community blog as another example of demonstrating the
usage of several SAP Cloud Platform services in one integrated scenario, closely following the product
documentation. You can import it as a Maven project, play around with your own library, and have a look at how it
is implemented. It allows you to reserve and return books, edit details of existing ones, add new titles, maintain
library users' profiles and so on.
● The library users authenticate using the identity service. It supports Single Sign-On (SSO).
● The books’ status and features are persisted using the persistence service.
● Book details are retrieved using a public Internet Web service, demonstrating usage of the connectivity service.
● The e-mails you receive when reserving and returning books are implemented using a Mail destination.
● When you upload your profile image, it is persisted using the document service.
For more information, see the SCN community blog: Welcome to the Library!
Related Information
To get a sample application up and running, import it as an Eclipse project into your Eclipse IDE and then deploy it
on the local runtime and SAP Cloud Platform.
Prerequisites
You have installed the SAP Cloud Platform Tools and created an SAP Cloud Platform server runtime environment as described in Setting Up the Development Environment [page 43].
1. From the main menu of the Eclipse IDE, choose File Import… General Existing Projects into
Workspace and then choose Next.
2. Browse to locate and select the directory containing the project you want to import, for example, <sdk>/
samples/hello-world, and choose OK.
3. Under Projects select the project (or projects) you want to import.
4. Choose Finish to start the import.
The project is imported into your workspace and appears in the Project Explorer view.
Tip
Close the welcome page if it is still shown.
Note
If you have not yet set up a server runtime environment, the following error will be reported: "Faceted
Project Problem: Target runtime SAP Cloud Platform is not defined". To set up the runtime environment,
complete the steps as described in Setting Up SDK Location and Landscape Host in Eclipse [page 47] and
Setting Up the Runtime Environment [page 48].
Next Steps
Run the sample application locally and then in the cloud. For more information, see Deploying Locally from Eclipse
IDE [page 1045] and Deploying on the Cloud from Eclipse IDE [page 1047].
Note
Some samples are ready to run while others have certain prerequisites, which are described in the respective
readme.txt.
Note
When you import samples as Eclipse projects, the tests provided with the samples are not imported. To be able
to run automated tests, you need to import the samples as Maven projects.
To import the tests provided with the SDK samples, import the samples as Maven projects.
Prerequisites
You have installed the SAP Cloud Platform Tools and created an SAP Cloud Platform server runtime environment as described in Setting Up the Development Environment [page 43].
Procedure
Note
To configure the Maven settings.xml file, choose Window Preferences Maven User Settings .
This configuration is required if you need to provide your proxy settings. For more information, see http://
maven.apache.org/settings.html .
Procedure
1. From the Eclipse main menu, choose File Import… Maven Existing Maven Projects and then choose
Next.
2. Browse to locate and select the directory containing the project you want to import, for example, <sdk>/
samples/hello-world, and choose OK.
3. Under Projects select the project (or projects) you want to import.
4. Choose Finish to start the import.
The project is imported into your workspace and appears in the Project Explorer view.
Tip
Close the welcome page if it is still shown.
Next Steps
Run the sample application locally and then in the cloud. For more information, see Deploying Locally from Eclipse
IDE [page 1045] and Deploying on the Cloud from Eclipse IDE [page 1047].
Note
Some samples are ready to run while others have certain prerequisites, which are described in the respective
readme.txt.
All samples provided can be built with Apache Maven. The Maven build shows how a headless build and test can
be completely automated.
Context
● Builds a Java Web application based on the SAP Cloud Platform API
● Demonstrates how to run rudimentary unit tests (not available in all samples)
● Installs, starts, waits for, and stops the local server runtime
● Deploys the application to the local server runtime and runs the integration test
● Starts, waits for, and stops the cloud server runtime
● Deploys the application to the cloud server runtime and runs the integration test
Related Information
You can use the Apache Maven command line tool to run local and cloud integration tests for any of the SDK
samples.
Prerequisites
● You have downloaded the Apache Maven command line tool. For more information, see the detailed Maven
documentation at http://maven.apache.org .
● You are familiar with the Maven build lifecycle. For more information, see http://maven.apache.org/guides/
introduction/introduction-to-the-lifecycle.html .
Procedure
1. Open the folder of the relevant project, for example, <sdk>/samples/hello-world, and then open the
command prompt.
2. Enter the verify command with the following profile in order to activate the local integration test:
If you are using a proxy, you need to define additional Maven properties as described below in step 4 (see
proxy details).
3. Press ENTER to start the build process.
All phases of the default lifecycle are executed up to and including the verify phase, with the resulting build
status shown on completion.
4. To activate the cloud integration test, which involves deploying the built Web application on a landscape in the
cloud, enter the following profile with the additional Maven properties given below:
○ Landscape host
The landscape host (default: hana.ondemand.com) is predefined in the parent pom.xml file (<sdk>/
samples/pom.xml) and can be overwritten, as necessary. If you have a developer account, for example,
and are therefore using the trial landscape, enter the following:
○ Account details
Provide your account, user name, and password:
○ Proxy details
Tip
If your proxy requires authentication, you might want to use the Authenticator class to pass the proxy
user name and password. For more information, see Authenticator . Note that for the sake of
simplicity this feature has not been included in the samples.
Tip
To avoid having to repeatedly enter the Maven properties as described above, you can add them directly to
the pom.xml file, as shown in the example below:
<sap.cloud.username>p0123456789</sap.cloud.username>
You might also want to use environment variables to set the property values dynamically, in particular
when handling sensitive information such as passwords, which should not be stored as plain text:
<sap.cloud.password>${env.SAP_CLOUD_PASSWORD}</sap.cloud.password>
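To avoid repeating properties on every run, the individual property lines shown above can be collected in the <properties> section of the parent pom.xml. The following sketch is illustrative only: sap.cloud.username and sap.cloud.password appear verbatim in this guide, but the host and account property names are assumptions, so verify them against the comments in the samples' parent pom.xml before relying on them.

```xml
<!-- Illustrative sketch only. sap.cloud.username and sap.cloud.password are
     taken from the examples above; sap.cloud.host and sap.cloud.account are
     assumed names and may differ in your SDK version. -->
<properties>
    <!-- Trial landscape host; the default hana.ondemand.com is predefined
         in the parent pom.xml and can be overwritten here -->
    <sap.cloud.host>hanatrial.ondemand.com</sap.cloud.host>
    <sap.cloud.account>myaccount</sap.cloud.account>
    <sap.cloud.username>p0123456789</sap.cloud.username>
    <!-- Read sensitive values from the environment rather than plain text -->
    <sap.cloud.password>${env.SAP_CLOUD_PASSWORD}</sap.cloud.password>
</properties>
```

Properties defined this way are picked up by the build without additional command line parameters, while the password itself stays out of the file.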
Set up your SAP HANA development environment and run your first application in the cloud.
Table 13:
Sign Up
Set Up
Download Eclipse IDE for Java EE Developers, and set up SAP HANA Tools.
Create a simple SAP HANA XS application using SAP HANA Web-based Development Workbench and run it in the
cloud.
You can also create an SAP HANA XS application using SAP HANA Studio [page 73].
Note
To determine the most suitable tool for your development scenario, see SAP HANA Developer Information by Scenario.
Monitor
Add Features
Use calculation views and visualize the data with SAPUI5. See: 8 Easy Steps to Develop an XS application on the
SAP Cloud Platform
Enable SHINE
Enable the demo application SAP HANA Interactive Education (SHINE) [page 82] and learn how to build native
SAP HANA applications.
Before developing your SAP HANA XS application, you need to download and set up the necessary tools.
Prerequisites
● You have downloaded and installed a 32-bit or 64-bit version of Eclipse IDE, version Mars or Neon. For more
information, see Installing Eclipse IDE [page 45].
Caution
Support for the Eclipse Luna release has reached end of maintenance.
● You have configured your proxy settings (in case you work behind a proxy or a firewall). For more information,
see Installing SAP Development Tools for Eclipse [page 46] → step 3.
Note
If you need to develop with SAPUI5, also install SAP Cloud Platform Tools UI development toolkit for HTML5 (Developer Edition).
5. Choose Next.
6. On the next wizard page, you get an overview of the features to be installed. Choose Next.
7. Confirm the license agreements.
8. Choose Finish to start the installation.
9. After the successful installation, you will be prompted to restart your Eclipse IDE.
Next Steps
Creating an SAP HANA XS Hello World Application Using SAP HANA Web-based Development Workbench [page
69]
Creating an SAP HANA XS Hello World Application Using SAP HANA Studio [page 73]
Create and test a simple SAP HANA XS application that displays the "Hello World" message.
Prerequisites
Make sure the database you want to use is deployed in your account before you begin with this tutorial. You can
create SAP HANA XS applications using one of the following databases:
Note
Learn more about the steps that are needed for Creating SAP HANA MDC Databases [page 859]. For more
information on purchasing a larger SAP HANA database for development or productive purposes, see SAP
Cloud Platform Pricing and Packaging .
Context
You will perform all subsequent activities with this new user.
Procedure
All databases available in the selected account are listed with their ID, type, version, and related database
system.
Tip
To view the details of a database, for example, its state and the number of existing bindings, select a
database in the list and click the link on its name. On the overview of the database, you can perform further
actions, for example, delete the database.
3. Depending on the database you are using, choose one of the following options:
A productive SAP HANA XS database: Follow the steps described in Creating a Database Administrator User [page 1084].
A productive or trial SAP HANA MDC database:
1. Select the relevant SAP HANA MDC database in the list.
2. In the overview that is shown in the lower part of the screen, open the SAP HANA cockpit link under Administration Tools.
3. In the Enter Username field, enter SYSTEM, then enter the password you determined for the
SYSTEM user in the Enter Password field.
A message is displayed to inform you that at that point, you lack the roles that you need to open
the SAP HANA cockpit.
4. To confirm the message, choose OK.
You receive a confirmation that the required roles are assigned to you automatically.
5. Choose Continue.
You are now logged on to the SAP HANA cockpit.
6. Choose Manage Roles and Users.
7. To create database users and assign them the required roles, expand the Security node.
8. Open the context menu for the Users node and choose New User.
9. On the User tab, provide a name for the new user.
The user name always appears in upper case letters.
10. In the Authentication section, make sure the Password checkbox is selected and enter a password.
Note
The password must start with a letter and only contain uppercase and lowercase letters ('a' - 'z', 'A' - 'Z'), and numbers ('0' - '9').
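The password rule in the note above amounts to a simple pattern check. The following helper is hypothetical and not part of SAP HANA; the function name and the regular expression are illustrative only, based on the rule as stated.

```javascript
// Hypothetical helper illustrating the documented rule: the password must
// start with a letter and may contain only letters and numbers.
function isValidXsPassword(password) {
    return /^[A-Za-z][A-Za-z0-9]*$/.test(password);
}

console.log(isValidXsPassword("Secret123")); // true: starts with a letter, alphanumeric
console.log(isValidXsPassword("1Secret"));   // false: starts with a number
```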
Note
For more information on the CONTENT_ADMIN role, see Predefined Database Roles.
Caution
At this point, you are still logged on with the SYSTEM user. You can only use your new database user to work with the SAP HANA Web-based Development Workbench by logging out from the SAP HANA cockpit first. Otherwise, you would automatically log in to the SAP HANA Web-based Development Workbench with the SYSTEM user instead of your new database user. Therefore, choose the Logout button before you continue to work with the SAP HANA Web-based Development Workbench, where you need to log on again with the new database user.
Procedure
1. Open the SAP Cloud Platform cockpit and choose Persistence Databases & Schemas in the navigation
area.
2. Select the relevant database from the list and choose SAP HANA Web-based Development Workbench under
Development Tools.
3. Log on with your newly created user.
Note
If you log on to the SAP HANA Web-based Development Workbench for the first time, you are prompted to
change your initial password.
The editor is displayed. The header shows the details for your user and database. Hover over the entry for the
SID to view the details.
5. Create a new package by choosing New Package from the context menu for the Content folder.
6. Enter a package name.
Open the files under the new package hierarchy to view them in the editor.
9. Only if you are using an SAP HANA MDC database: From the context menu for the new package node, choose
Activate All.
Procedure
In the Editor of the SAP HANA Web-based Development Workbench, select the logic.xsjs file from the newly
created package and choose Run.
The program is deployed and displayed in the browser: Hello World from User <Your User>.
Note
If you have used an SAP HANA XS database for creating your SAP HANA XS application, you can also launch
your application from the SAP Cloud Platform cockpit by choosing the application URL after navigating to
Applications HANA XS Applications . For more information, see Launching SAP HANA XS Applications
[page 1079].
Create and test a simple SAP HANA XS application that displays the "Hello World" message.
Prerequisites
Make sure the database you want to use is deployed in your account before you begin with this tutorial. You can
create SAP HANA XS applications using one of the following databases:
Note
Learn more about the steps that are needed for Creating SAP HANA MDC Databases [page 859]. For more
information on purchasing a larger SAP HANA database for development or productive purposes, see SAP
Cloud Platform Pricing and Packaging .
You also need to install the tools as described in Installing SAP HANA Tools for Eclipse [page 68] to follow the
steps described in this tutorial.
Context
You will perform all subsequent activities with this new user.
All databases available in the selected account are listed with their ID, type, version, and related database
system.
Tip
To view the details of a database, for example, its state and the number of existing bindings, select a
database in the list and click the link on its name. On the overview of the database, you can perform further
actions, for example, delete the database.
3. Depending on the database you are using, choose one of the following options:
A productive SAP HANA XS database: Follow the steps described in Creating a Database Administrator User [page 1084].
A productive or trial SAP HANA MDC database:
1. Select the relevant SAP HANA MDC database in the list.
2. In the overview that is shown in the lower part of the screen, open the SAP HANA cockpit link under Administration Tools.
3. In the Enter Username field, enter SYSTEM, then enter the password you determined for the
SYSTEM user in the Enter Password field.
A message is displayed to inform you that at that point, you lack the roles that you need to open
the SAP HANA cockpit.
4. To confirm the message, choose OK.
You receive a confirmation that the required roles are assigned to you automatically.
5. Choose Continue.
You are now logged on to the SAP HANA cockpit.
6. Choose Manage Roles and Users.
7. To create database users and assign them the required roles, expand the Security node.
8. Open the context menu for the Users node and choose New User.
9. On the User tab, provide a name for the new user.
The user name always appears in upper case letters.
10. In the Authentication section, make sure the Password checkbox is selected and enter a password.
Note
The password must start with a letter and only contain uppercase and lowercase letters ('a' - 'z', 'A' - 'Z'), and numbers ('0' - '9').
Note
For more information on the CONTENT_ADMIN role, see Predefined Database Roles.
Caution
At this point, you are still logged on with the SYSTEM user. You can only use your new database user to work with the SAP HANA Web-based Development Workbench by logging out from the SAP HANA cockpit first. Otherwise, you would automatically log in to the SAP HANA Web-based Development Workbench with the SYSTEM user instead of your new database user. Therefore, choose the Logout button before you continue to work with the SAP HANA Web-based Development Workbench, where you need to log on again with the new database user.
Context
Connect to a dedicated SAP HANA database using SAP HANA Tools via the Eclipse IDE.
Procedure
Note
Make sure that you specify the landscape host correctly.
b. Specify the account name, e-mail or SCN user name, and your SCN password.
c. Choose Next.
5. Select a database and provide your credentials:
a. Select the Databases radio button.
b. From the dropdown menu, select the database you want to work with.
c. Enter your database user and password.
For more information, see Creating a Database Administrator User [page 1084].
Note
Make sure that you specify the database user and password correctly.
If you select the Save password box, the entered password for a given user name is remembered and kept
in the secure store.
A dropdown list is displayed for previously entered database user names. Database passwords can be remembered and stored in the secure store in the same way.
Results
Context
After you add the SAP HANA system hosting the repository that stores your application-development files, you
must specify a repository workspace, which is the location in your file system where you save and work on the
development files.
Results
In the Repositories view, you see your workspace, which enables you to browse the repository of the system tied
to this workspace. The repository packages are displayed as folders.
At the same time, a folder will be added to your file system to hold all your development files.
Context
After you set up a development environment for the chosen SAP HANA system, you can add a project to contain
all the development objects you want to create as part of the application-development process. There are a
variety of project types for different types of development objects. Generally, a project type ensures that only the
necessary libraries are imported to enable you to work with development objects that are specific to a project
type. In this tutorial, you create an XS Project.
Procedure
1. In the SAP HANA Development perspective in the Eclipse IDE, choose File New XS Project .
2. Make sure the Share project in SAP repository option is selected and enter a project name.
3. Choose Next.
4. Select the repository workspace you created in the previous step and choose Next.
5. Choose Finish without doing any further changes.
The Project Explorer view in the SAP HANA Development perspective in Eclipse displays the new project. The
system information in brackets to the right of the project node name in the Project Explorer view indicates that the
project has been shared; shared projects are regularly synchronized with the Repository hosted on the SAP HANA
system you are connected to.
Context
SAP HANA Extended Application Services (SAP HANA XS) supports server-side application programming in
JavaScript. In this step, you add some simple JavaScript code that generates a page that displays the words Hello, World!
Procedure
1. In the Project Explorer view in the SAP HANA Development perspective in Eclipse, right-click your XS project,
and choose New Other in the context-sensitive popup menu.
2. In the Select a wizard dialog, choose SAP HANA Application Development XS JavaScript File .
3. In the New XS JavaScript File dialog, enter MyFirstSourceFile.xsjs in the File name text box and choose
Next.
4. Choose Finish.
5. In the MyFirstSourceFile.xsjs file, enter the following code and save the file:
$.response.contentType = "text/html";
$.response.setBody( "Hello, World !");
Note
By default, saving the file automatically commits the saved version of the file to the repository.
The example code shows how to use the SAP HANA XS JavaScript API's response object to write HTML. By
typing $. you have access to the API's objects.
6. Check that the application descriptor files (.xsapp and .xsaccess) are present in the root package of your new XS JavaScript application.
The application descriptors are mandatory and describe the framework in which an SAP HANA XS application runs. The .xsapp file indicates the root point in the package hierarchy where content is to be served to client requests; the .xsaccess file defines who has access to the exposed content and how.
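As an illustration of what these descriptors typically contain (a minimal sketch; the files generated for your project may differ): the .xsapp file is usually an empty file or an empty JSON object, while a minimal .xsaccess that exposes the package content to client requests might look like this:

```json
{
    "exposed": true
}
```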
7. Open the context menu for the new files (or the folder/package containing the files) and select Team
Activate All . The activate operation publishes your work and creates the corresponding catalog objects; you
can now test it.
Context
Check if your application is working and if the Hello, World! message is displayed.
Procedure
In the SAP HANA Development perspective in the Eclipse IDE, open the context menu of the
MyFirstSourceFile.xsjs file and choose Run As 1 XS Service .
Note
You might need to enter the credentials of the database user you created in this tutorial again.
Note
If you have used an SAP HANA XS database for creating your SAP HANA XS application, you can also launch
your application from the SAP Cloud Platform cockpit by choosing the application URL after navigating to
Applications HANA XS Applications . For more information, see Launching SAP HANA XS Applications
[page 1079].
Results
Hello, World !
Context
To extract data from the database, you use your JavaScript code to open a connection to the database and then prepare and run an SQL statement. The results are added to the Hello, World! response. You use the following SQL statement to extract data from the database:
select * from DUMMY
The SQL statement returns one row with one field called DUMMY, whose value is X.
Procedure
1. In the Project Explorer view in the SAP HANA Development perspective in Eclipse, open the
MyFirstSourceFile.xsjs file in the embedded JavaScript editor.
2. In the MyFirstSourceFile.xsjs file, replace your existing code with the following code:
$.response.contentType = "text/html";
var output = "Hello, World !";

// Open a connection to the database and run the query
var conn = $.db.getConnection();
var pstmt = conn.prepareStatement("select * from DUMMY");
var rs = pstmt.executeQuery();

if (!rs.next()) {
    // No row returned: report an error to the client
    $.response.setBody("Failed to retrieve data");
    $.response.status = $.net.http.INTERNAL_SERVER_ERROR;
} else {
    // Append the value of the first column to the response
    output = output + "This is the response from my SQL: " + rs.getString(1);
}

// Release the database resources
rs.close();
pstmt.close();
conn.close();

$.response.setBody(output);
4. Open the context menu of the MyFirstSourceFile.xsjs file and choose Team Activate All .
Context
Check if your application is retrieving data from your SAP HANA database.
In the SAP HANA Development perspective in the Eclipse IDE, open the context menu of the
MyFirstSourceFile.xsjs file and choose Run as XS Service .
Note
If you have used an SAP HANA XS database for creating your SAP HANA XS application, you can also launch
your application from the SAP Cloud Platform cockpit by choosing the application URL after navigating to
Applications HANA XS Applications . For more information, see Launching SAP HANA XS Applications
[page 1079].
Results
You can enable the SAP HANA Interactive Education (SHINE) demo application for a new or existing SAP HANA
MDC database in your trial account.
Context
SAP HANA Interactive Education (SHINE) demonstrates how to build native SAP HANA applications. The demo
application comes with sample data and design-time developer objects for the application's database tables, data
views, stored procedures, OData, and user interface. For more information, see the SAP HANA Interactive
Education (SHINE) documentation.
By default, SHINE is available for all SAP HANA MDC databases on SAP Cloud Platform's trial landscape.
Procedure
Restriction
You can enable SHINE only in your trial account.
Enable SHINE for a new SAP HANA MDC database:
1. Follow the steps described in Creating SAP HANA MDC Databases [page 859].
2. From the list of all databases and schemas, choose the SAP HANA MDC database you just created.
3. In the overview in the lower part of the screen, choose the SAP HANA Interactive Education (SHINE) link under Education Tools.
Enable SHINE for an existing SAP HANA MDC database:
1. From the list of all databases and schemas, choose the SAP HANA MDC database for which you want to enable SHINE.
2. In the overview in the lower part of the screen, open the SAP HANA Cockpit link under Administration Tools.
3. In the Enter Username field, enter SYSTEM, then enter the password you determined for the
SYSTEM user.
The first time you log in to the SAP HANA Cockpit, you are informed that you don't have the roles that you need to open it.
4. Choose OK. The required roles are assigned to you automatically.
5. Choose Continue.
You are now logged in to the SAP HANA Cockpit.
6. Choose Manage Roles and Users.
7. To create database users and assign them the required roles, expand the Security node.
8. Open the context menu for the Users node and choose New User.
9. On the User tab, provide a name for the new SHINE user.
Note
The user name can contain only uppercase and lowercase letters ('a' - 'z', 'A' - 'Z'), numbers ('0' - '9'), and underscores ('_').
Note
The password must contain at least one uppercase and one lowercase letter ('a' - 'z', 'A' - 'Z') and one number ('0' - '9'). It can also contain special characters (except ", ' and \).
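The naming and password rules for the SHINE user can likewise be expressed as simple checks. The helpers below are hypothetical, not part of SHINE or SAP HANA; the function names and patterns are illustrative only, based on the rules as stated in the notes above.

```javascript
// Hypothetical check for the documented SHINE user name rule:
// only letters, numbers, and underscores.
function isValidShineUserName(name) {
    return /^[A-Za-z0-9_]+$/.test(name);
}

// Hypothetical check for the documented SHINE password rule: at least one
// lowercase letter, one uppercase letter, and one number; special
// characters are allowed except ", ' and \.
function isValidShinePassword(password) {
    return /[a-z]/.test(password) &&
           /[A-Z]/.test(password) &&
           /[0-9]/.test(password) &&
           !/["'\\]/.test(password);
}

console.log(isValidShineUserName("SHINE_USER1")); // true
console.log(isValidShinePassword("Shine2017!")); // true
```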
A login screen for the SHINE demo application is shown in a new browser window.
4. Enter the credentials of the SHINE user you created and choose Login.
Results
You see the SHINE demo application for your SAP HANA MDC database. Consult the SAP HANA Interactive
Education (SHINE) documentation for detailed information about using the application.
Set up your HTML5 development environment and run your first application in the cloud.
Table 14:
Sign Up
Add Users
Add users who develop and maintain HTML5 applications as account members of your account.
Set Up
To develop HTML5 applications, we recommend that you use the browser-based tool SAP Web IDE, which does not require any setup.
Create
Create a simple HTML5 application and run it in the cloud: Hello World Tutorial Using SAP Web IDE [page 85]
For more information about building applications in SAP Web IDE, see the SAP Web IDE documentation. There,
you will also find information on building your project first and then pushing your app to the cockpit.
This tutorial illustrates how to build a simple HTML5 application using SAP Web IDE.
Prerequisites
Context
For each new application a new Git repository is created automatically. To view detailed information on the Git
repository, including the repository URL and the latest commits, choose Applications HTML5 Applications
in the navigation area and then Versioning.
Note
To create the HTML5 application in more than one landscape, create the application in each landscape
separately and copy the content to the new Git repository.
1. Log on with a user (who is an account member) to the SAP Cloud Platform cockpit.
If you have already created applications using this account, the list of HTML5 applications is displayed.
3. To create a new HTML5 application, choose New Application and enter an application name.
Note
Adhere to the naming convention for application names:
○ The name must contain no more than 30 characters.
○ The name must contain only lowercase alphanumeric characters.
○ The name must start with a letter.
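The naming convention above amounts to a single pattern. The helper below is illustrative only and not an SAP API; the function name is hypothetical.

```javascript
// Hypothetical check for the documented HTML5 application name rules:
// at most 30 characters, lowercase alphanumeric characters only,
// starting with a letter.
function isValidHtml5AppName(name) {
    return /^[a-z][a-z0-9]{0,29}$/.test(name);
}

console.log(isValidHtml5AppName("helloworld")); // true
console.log(isValidHtml5AppName("HelloWorld")); // false: uppercase not allowed
```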
4. Choose Save.
5. Clone the repository to your development environment.
a. To start SAP Web IDE and automatically clone the repository of your app, choose Edit Online ( ) at the
end of the table row of your application.
b. On the Clone Repository screen, if prompted enter your user and password (SCN user and SCN
password), and choose Clone.
Results
Task overview: Hello World Tutorial Using SAP Web IDE [page 85]
A project is needed to create files and to make them available in the cockpit.
Procedure
1. In SAP Web IDE, choose Development (</>), and then select the project of the application you created in the
cockpit.
2. To create a project and to clone your app to the development environment, right-click the project, and choose
New Project from Template .
3. Choose the SAPUI5 Application button, and choose Next.
4. In the Project Name field, leave the proposed name for your project, and choose Next.
5. Fill in the following fields, and then choose Next:
Table 15:
Field Entry
6. Choose Finish.
Task overview: Hello World Tutorial Using SAP Web IDE [page 85]
SAP Web IDE already created an HTML page for your project. You now adapt this page.
Procedure
1. In SAP Web IDE, expand the project node in the navigation tree and open the HelloWorld.view.js using a
double-click.
4. To test your Hello World application, select the index.html file and choose Run ( ).
Task overview: Hello World Tutorial Using SAP Web IDE [page 85]
Next task: Deploying Your App to SAP Cloud Platform [page 88]
With this step you create a new active version of your app that is started on SAP Cloud Platform.
Procedure
1. In SAP Web IDE, select the project node in the navigation tree.
2. To deploy the project, right-click it and choose Deploy Deploy to SAP Cloud Platform .
3. On the Login to SAP Cloud Platform screen, enter your password and choose Login.
4. On the Deploy Application to SAP Cloud Platform screen, increment the version number and choose Deploy.
Task overview: Hello World Tutorial Using SAP Web IDE [page 85]
1.2.7 Tutorials
Follow the tutorials below to get familiar with the services offered by SAP Cloud Platform.
Table 16:
How to create a "HelloWorld" Web application Creating a HelloWorld Application [page 56]
How to create a "HelloWorld" Web application using Java EE 6 Using Java EE 6 Web Profile [page 1036]
Web Profile
Connectivity service scenarios Consuming Internet Services (Java Web or Java EE 6 Web
Profile) [page 394]
How to secure your HTTPS connections Tutorial: Using the Keystore Service for Client Side HTTPS
Connections [page 1363]
How to create an SAP HANA XS application ● Creating an SAP HANA XS Hello World Application Using
SAP HANA Studio [page 73]
● Creating an SAP HANA XS Hello World Application Using
SAP HANA Web-based Development Workbench [page
69]
Business Services with YaaS scenarios Tutorial: Creating a Wishlist Service [page 1015]
Video Tutorials
Tutorial Navigator
1.2.8 Glossary
A-G
Table 17:
Account [page 13] A hosted environment provided to a customer organization, representing a named collection of configurations, authorizations, platform resources, and applications.
Application process Each application is started on a dedicated SAP Cloud Platform Runtime. This is called an application process. You can start one or many application processes of your application at any given time, according to the compute unit quota that you have. Each application process has a unique process ID that you can use to manage it.
Application runtime container [page 1025] Java applications developed on SAP Cloud Platform run on a modular and lightweight runtime container, which allows them to consume standard Java EE APIs and platform services.
Compute units [page 1030] The virtualized hardware resources used by an SAP Cloud Platform application.
Cockpit [page 97] SAP Cloud Platform cockpit is the central point of entry to key information about your accounts and applications, and for managing all activities associated with your account.
Connectivity service [page 311] Provides secure, reliable, and easy-to-consume access to business systems, running either on-premise or in the cloud.
Console client [page 102] SAP Cloud Platform console client enables development, deployment, and configuration of a Web application outside the Eclipse IDE, as well as continuous integration and automation tasks. The tool is part of the SAP Cloud Platform SDK.
Cloud connector [page 480] Cloud connector serves as the link between on-demand applications in SAP Cloud Platform and existing on-premise systems. It combines an easy setup with a clear configuration of the systems that are exposed to SAP Cloud Platform.
Customer account [page 14] Allows customers to build applications and host them in a productive environment for their own purposes. A customer account can be purchased as part of a predefined or tailored package.
Database An organized collection of data that can be backed up and restored separately. The database is the technical unit that contains the data, whereas the DBMS is a service that enables users to define, create, query, update, and administer the data. SAP Cloud Platform account administrators can create databases on database management systems in their account. See Overview of Database Systems and Databases [page 843].
Database type A specific database product, such as the SAP HANA database
Developer account [page 14] Offers access to the SAP Cloud Platform trial landscape for evaluation purposes. A developer account is free of charge and valid for an unlimited period. It allows restricted use of the platform resources.
Developer Center SAP HANA Cloud Developer Center is the place on the SAP Community Network where you can find information, news, discussions, blogs, and more about SAP Cloud Platform.
Document service [page 606] Provides an on-demand repository for applications to manage unstructured content for an application-specific context using the CMIS protocol.
Global account Accounts are organized in a global account. A global account corresponds to a customer who buys an account for deploying applications on the cloud platform. The customer data, billing information, and purchased resources (such as compute units) are stored in a global account. See Accounts [page 13].
I-R
Table 18:
Infrastructure as a Service (IaaS) A provisioning model in which an organization outsources the equipment used to support operations, including storage, hardware, servers, and networking components.
Identity provider (IdP) An authentication authority containing all user information and credentials. In SAP Cloud Platform, user information is provided by identity providers, not stored in SAP Cloud Platform itself.
Multitenant database container A self-contained database container in a multiple-container system. A tenant database
container has its own isolated set of database users and its own database catalog. No
data is shared between the tenant databases in a system. Clients can connect to tenant
databases individually.
OAuth [page 1340] A widely adopted security protocol for protecting resources over the Internet. It is used by many social network providers and by corporate networks. It allows an application to request authentication on behalf of users with third-party user accounts, without the user having to give their credentials to the application.
Partner account [page 19] Allows partners to build applications and sell them to their customers. A partner account is available through a partner program, which provides a package of predefined resources and the opportunity to certify, advertise, and ultimately sell products.
Platform as a Service An environment to develop, deploy, run, and manage your business applications in the cloud. The underlying software and hardware infrastructure is provided on demand (as a service).
Quota [page 19] An account's entitlement to an allocated resource, such as CPU, memory, database storage, and bandwidth. The resources purchased for an account are available to all applications deployed within that account, within the specified limits.
Runtime for Java [page 1023] The components that create the environment for deploying and running Java applications on SAP Cloud Platform: the Java Virtual Machine, the application runtime container, and compute units.
S-Z
Table 19:
SAP Community Network (SCN) SAP's professional social network for SAP customers, partners, employees, and experts, which offers insight and content about SAP solutions and services in a collaborative environment: http://scn.sap.com. To use SAP Cloud Platform, you have to be registered on SCN.
SAP Cloud Platform [page 5] SAP Cloud Platform is an in-memory cloud platform that enables customers and partners to build, deploy, and manage cloud-based enterprise applications that complement and extend SAP or non-SAP solutions, either on-premise or on-demand.
SAP ID Service [page 1318] The default identity provider for SAP Cloud Platform applications. It manages the user base for SAP Community Network and other SAP Web sites. SAP ID service is also used for authentication in the cockpit and for operations such as deploying, updating, and so on.
SDK [page 95] SAP Cloud Platform Software Development Kit is the toolset you need to build and run SAP Cloud Platform applications. It contains the console client for deployment and configuration editing, binaries for the local test runtime, and Javadoc.
SAP Cloud Platform Identity Authentication Service SAP Cloud Platform Identity Authentication service is a cloud solution for identity lifecycle management for SAP Cloud Platform applications, and optionally for on-premise applications. You can use Identity Authentication as an identity provider for SAP Cloud Platform applications.
UI development toolkit for HTML5 A framework providing UI controls for developing Web applications.
(SAPUI5)
Security Assertion Markup Language A markup language which provides a widespread protocol for secure authentication and SSO. SAML is implemented by SAP ID service.
Service provider The application interested in receiving authentication and authorization information. Instead of providing this information itself, it contacts the identity provider.
Single Sign-On A property of access control of multiple related, but independent, software systems that enables a user to log in once and gain access to all of them.
Software as a Service A software distribution model in which applications are hosted by a vendor or service provider and made available to customers over the Internet.
SAP Java Virtual Machine [page 44] SAP's own implementation of a Java Virtual Machine, on which the SAP Cloud Platform infrastructure runs.
WTP Server Adapter A tool for deploying and testing Java EE assets on SAP Cloud Platform or for local testing.
1.3 Tools
Table 20:
Tool Description
Cockpit [page 97] This is the central point for managing all activities associated
with your account and for accessing key information about
your applications.
SAP Web IDE [page 101] This is a cloud-based meeting space where multiple application developers can work together from a common Web interface, connecting to the same shared repository with virtually no setup required. SAP Web IDE allows you to prototype, develop, package, deploy, and extend SAPUI5 applications.
Maven Plugin [page 101] It supports you in using Maven to develop Java applications for SAP Cloud Platform. It allows you to conveniently call the console client and its commands from the Maven environment.
Cloud Connector [page 480] It serves as the link between on-demand applications in SAP
Cloud Platform and existing on-premise systems. You can
control the resources available for the cloud applications in
those systems.
SDK [page 95] It contains everything you need to work with SAP Cloud Platform, including a local server runtime and a set of command line tools.
Eclipse Tools [page 100] This is a Java-based toolkit for Eclipse IDE. It enables you to
develop and deploy applications as well as perform operations
such as logging, managing user roles, creating connectivity
destinations, and so on.
Prerequisites
You have the SDK installed. See Installing the SDK [page 44].
The location of the SDK is the folder you chose when you downloaded and unzipped it.
An overview of the structure and content of the SDK is shown in the table below. The folders and files are located
directly below the common root directory in the order given:
Folder/File Description
api The platform API containing the SAP and third-party API
JARs required to compile Web applications for SAP Cloud
Platform (for more information about the platform API, see
the "Supported APIs" section further below).
javadoc Javadoc for the SAP platform APIs (also available as online
documentation via the API Documentation link in the title bar
of the SAP Cloud Platform Documentation Center). Javadoc
for the third-party APIs is cross-referenced from the online
documentation.
server Initially not present, but created once you install a local
server runtime.
tools Command line tools required for interacting with the cloud
runtime (for example, to deploy and start applications) and
the local server runtime (for example, to install and start the
local server).
readme.txt Brief introduction to the SDK, its content, and how to set it
up.
The cloud server runtime consists of the application server, the platform API, and the cloud implementations of
the provided services (connectivity, persistence, document, and identity). The SDK, on the other hand, contains a
Supported APIs
The SDK contains the API for SAP Cloud Platform. All Web applications intended for deployment in the cloud
should be compiled against this platform API. The platform API is used by the SAP Cloud Platform Tools for Java
to set the compile-time classpath.
All JARs contained in the platform API are considered part of the provided scope and must therefore be used for compilation only. This means that they must not be packaged with the application, since they are provided and wired at runtime by the SAP Cloud Platform runtime, irrespective of whether you run your application locally for development and test purposes or centrally in the cloud.
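If you build with Maven, this handling of the platform API maps to Maven's provided dependency scope, which keeps a JAR on the compile classpath but out of the packaged WAR. A sketch; the artifact coordinates and version below are assumptions, not taken from this guide:

```xml
<!-- Assumed coordinates for the platform API; check your SDK for the actual ones. -->
<dependency>
    <groupId>com.sap.cloud</groupId>
    <artifactId>neo-java-web-api</artifactId>
    <version>1.48.99</version>
    <!-- provided: available at compile time, supplied by the runtime, not packaged -->
    <scope>provided</scope>
</dependency>
```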
When you develop applications to run on the SAP Cloud Platform, you should be aware of which APIs are
supported and provisioned by the runtime environment of the platform:
● Third-party APIs: These include Java EE standard APIs (standards based and backwards compatible as
defined in the Java EE Specification) and other APIs released by third parties.
● SAP APIs: The platform APIs provided by the SAP Cloud Platform services.
Related Information
Overview
The figure below shows an example view of the cockpit and is followed by an explanation:
The cockpit provides an overview of the applications available in the different technologies supported by SAP
Cloud Platform (SAP HANA XS, Java, and HTML5), and shows other key information about the account. The tiles
contain links for direct navigation to the relevant information.
The Favorite Applications panel shows all applications that you have added to your favorites, making key
information about them available at a glance. You can manage your favorites directly from there and navigate to
the application overview for further details and options.
Charts show the number of requests and CPU consumption on the overview page of a Java application.
Accounts
The cockpit provides integrated access to all the accounts for which you have a user. Which accounts are shown to you in the cockpit depends on the version of the cockpit you are using. For example, you can access all the
accounts you operate on the productive landscape (at hana.ondemand.com). If you also have a developer
account that enables you to try out things in a non-productive environment, you need to access a separate
cockpit (at hanatrial.ondemand.com) in which you will only see your trial account created for this purpose.
Logon
Log on to the cockpit using the relevant URL. The URL depends on the following:
Note
We recommend that you log on with your e-mail address.
When you log on to the cockpit for the first time, you are taken to the account overview page. Depending on the use case, productive or trial, you can have a single account or several accounts assigned to you. You can select an account on the overview page, and then drill down to the account details to access the applications deployed in this account and related actions.
Accessibility
SAP Cloud Platform provides High Contrast Black (HCB) theme support. You can switch between the default
theme and the high contrast theme using the Settings menu in the header toolbar. Once you have saved your
changes, the cockpit starts with the theme of your choice.
Language
You can select the language in which the cockpit should be displayed using the Settings menu in the header
toolbar:
● English
● Japanese
The main screen areas of the cockpit comprise the content area and the navigation area. The navigation area is composed of the breadcrumb navigation below the header and the navigation entries to the side of the content area. The entries are grouped into categories. For example, choose Applications to manage the applications for the account in question.
Use the breadcrumb navigation to access the different applications deployed in your account and associated
activities. Note the following:
● A dropdown menu is available for each of the elements that enables you to switch to other objects by clicking
the triangular selector. For example, use the dropdown menu to switch between different applications in your
account.
● The element that is currently selected appears as a hyperlink in the breadcrumb navigation. For example, clicking the link for the application entry launches the application.
● You can navigate upwards in the hierarchy or backwards to the previous navigation target using the links in
the breadcrumb navigation.
● Each level determines which navigation options are available and the information that is displayed.
Browser Support
For more information, see Product Prerequisites and Restrictions [page 8].
Notifications
Use Notifications to stay informed about different operations and events in the cockpit, for example, to monitor
the progress of copying an account. The Notification icon in the header toolbar provides a quick access to the list
of notifications and shows the number of available notifications. The icon is visible only if there are currently
notifications.
Each notification includes a short statement, a date and time, and the relevant account. A notification informs you about the status of an operation or asks for an action. For example, if copying an account failed, an administrator of the account can assign the corresponding notification to themselves and provide a fix. The other members of this account will see that the notification is already assigned to someone else.
You can do the following:
● Dismiss a notification.
● Assign a notification to yourself. You can also unassign yourself from a notification without processing it further.
● Once you have completed the related action, set the status to completed. This dismisses the corresponding notification for everyone else.
You can access the full list of notifications (also the ones you have dismissed earlier) by choosing Notifications in
the navigation area at the data center level.
SAP Cloud Platform Tools is a Java-based toolkit for Eclipse IDE. It enables you to perform the following
operations in SAP Cloud Platform:
Features
You can download SAP Cloud Platform Tools from the SAP Development Tools for Eclipse page. The toolkit
package contains:
Support
SAP Cloud Platform Tools come with a wizard for gathering support information in case you need help with a
feature or operation (during deploying/debugging applications, logging, configurations, and so on). For more
information, see Support Information (Eclipse IDE) [page 1446].
SAP Web IDE is a fully extensible and customizable experience that accelerates the development life cycle with
interactive code editors, integrated developer assistance, and end-to-end application development life cycle
support. SAP Web IDE was developed by developers for developers.
SAP Web IDE is a next-generation cloud-based meeting space where multiple application developers can work together from a common Web interface, connecting to the same shared repository with virtually no setup required. It includes multiple interactive features that allow you to collaborate with your project colleagues and prototype, develop, package, deploy, and extend SAPUI5 applications.
Related Information
SAP offers a Maven plugin that supports you in using Maven to develop Java applications for SAP Cloud Platform.
It allows you to conveniently call the SAP Cloud Platform console client and its commands from the Maven
environment.
Most commands that are supported by the console client are available as goals in the plugin. To use the plugin,
you require a SAP Cloud Platform SDK, which can be automatically downloaded with the plugin. Each version of
the SDK always has a matching Maven plugin version.
For a list of goals and parameters, usage guide, FAQ, and examples, see:
SAP Cloud Platform console client enables development, deployment, and configuration of applications outside the Eclipse IDE, as well as continuous integration and automation tasks. The tool is part of the SDK. You can find it in the tools folder of your SDK location.
Table 21:
To learn more about See
Downloading and setting up the console client Setting Up the Console Client [page 52]
Opening the tool and working with the commands and parameters Using the Console Client [page 102]
Console Client Video Tutorial
Verbose mode of output Verbose Mode of the Console Commands Output [page 105]
You execute a console client command by entering neo <command name> with the appropriate parameters. To
list all parameters available for the respective command, execute neo help <command name>.
You can define the parameters of the different commands either directly in the command line or in a properties file:
The console client is part of the SAP Cloud Platform SDK. You can find it in the tools folder of your SDK
installation.
To start it, open the command prompt and change the current directory to the <SDK_installation_folder>\tools
location, which contains the neo.bat and neo.sh files.
Command Line
You can deploy the same application as in the example above by executing the following command directly in the
command line:
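A representative invocation, using the same values as the properties file shown below (account, application, and user are placeholders), might look like this:

```shell
neo deploy --host hana.ondemand.com --account myaccount \
    --application myapp --user myuser \
    --source samples/deploy_war/example.war
```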
Properties File
Within the tools folder, a file example_war.properties can be found in the samples/deploy_war folder. In
the file, enter your own user and account name:
################################################
# General settings - relevant for all commands #
################################################
# Your account name
account=<your account>
# Application name
application=<your application name>
# User for login to hana.ondemand.com.
user=<email or user name>
# Host of the landscape admin server. Optional. Defaults to hana.ondemand.com.
host=hana.ondemand.com
#################################################################
# Deployment descriptor settings - relevant only for deployment #
#################################################################
# List of file system paths to *.war files and folders containing them
source=samples/deploy_war/example.war
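Such a properties file can then be passed to the console client in place of individual command line arguments. A sketch, assuming you run it from the tools folder of the SDK:

```shell
neo deploy samples/deploy_war/example_war.properties
```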
Note that you can have more than one properties file. For example, you can have a different properties file for
each application or user in your account.
For more information about using the properties file, watch the video tutorial.
Argument values specified in the command line override the values specified in the properties file. For example, if
you have specified account=a in the properties file and then enter account=b in the command line, the
operation will take effect in account b.
Parameter Values
Since the client is executed in a console environment, not all characters can be used in arguments. Special characters must be quoted and escaped.
Consult your console/shell user guide on how to use special characters as command line arguments.
For example, to use an argument with the value abc&()[]{}^=;!'+,`~123 on Windows 7, you should quote the value and escape the ! character. Therefore, you should use "abc&()[]{}^=;^!'+,`~123".
User
Password
Do not specify your password in the properties file or as a command line argument. Enter a password only when
prompted by SAP Cloud Platform console client.
Restriction
Your password cannot start with the "@" character.
Proxy Settings
If you work in a proxy environment, before you execute commands, you need to configure the proxy.
For more information, see Setting Up the Console Client [page 52]
Output Mode
You can configure the console to print detailed output during command execution.
Related Information
● Local code - executed inside a local JVM, which is started when the command is started.
● Remote code - executed in the back end (generally, the REST API called by the local code), which runs in a separate JVM on the cloud.
Note
The trace level for remote code cannot be changed.
For local code execution, the LOG4J library is used. It is easy to configure and, by default, there is a configuration file located on the commands class path, that is, .../tools/lib/cmd.
For each command execution, two appenders are defined: one for the session and one for the console. They both define different files for all messages that are logged by the SAP infrastructure and by apache.http. By default, the console commands output is written to a number of log files. However, you can change the log4j.properties file and define additional appenders or change the existing ones. If, for example, you want the full output printed in the console (verbose mode), or you want details from the execution of specific libraries (partially verbose mode), adjust the LOG4J configuration file.
To adjust the level of a specific logger, add log4j.logger.<package>=<level> to the log4j.properties file.
In the file defined for the session, only loggers with level ERROR are logged. If, for example, you want to log debug information about the apache.http library, change log4j.category.org.apache.http=ERROR, session to log4j.category.org.apache.http=DEBUG, session.
Example
This example demonstrates how you can change the output of command execution so that it is printed in the console instead of being collected in log files. To do this, open your SDK folder and go to the /tools/lib/cmd directory. Then open the log4j.properties file and replace its content with the code below.
##########
# Log levels
##########
log4j.rootLogger=INFO, console
log4j.additivity.rootLogger=false
##########
# System out console appender
##########
log4j.appender.console.Threshold=ALL
log4j.appender.console=org.apache.log4j.ConsoleAppender
log4j.appender.console.Target=System.out
log4j.appender.console.layout=org.apache.log4j.PatternLayout
log4j.appender.console.layout.ConversionPattern=%d %-5p [%t] %C: %m%n
log4j.appender.console.filter.1=org.apache.log4j.varia.StringMatchFilter
log4j.appender.console.filter.1.StringToMatch=>> Authorization: Basic
log4j.appender.console.filter.1.AcceptOnMatch=false
Related Information
Context
The console commands can return structured, machine-readable output. When you use the optional --output
parameter in a command, the command returns values and objects in a format that a machine can easily parse.
The currently supported output format is JSON.
Cases
When --output json is specified, the console client prints out a single JSON object containing information
about the command execution and the result, if available.
Table 22:
Property Name Type Description
Here is a full example of a command (neo start) that supports structured output and displays result values:
{
"command": "start",
"argLine": "-a myaccount -b myapplication -h hana.ondemand.com -u myuser -p
******* -y",
"pid": 6523,
"exitCode": 0,
"errorMsg": null,
"commandOutput": "Requesting start for:
application : myapplication
account : myaccount
host : https://hana.ondemand.com
synchronous : true
SDK version : 1.48.99
user : myuser
web: STARTED
URL: https://myapplicationmyaccount.hana.ondemand.com
Access points:
https://myapplicationmyaccount.hana.ondemand.com
Application processes
ID State Last Change Runtime
fc735dc STARTED 25-Feb-2014 18:07:48 1.47.10.2
",
"commandErrorOutput": "",
"result": {
"status": "STARTED",
"url": "https://myapplicationmyaccount.hana.ondemand.com",
"accessPoints": [
"https://myapplicationmyaccount.hana.ondemand.com",
"https://myapplicationmyaccount.hana.ondemand.com/app2"
],
"applicationProcesses": [
{
"id": "fc735dc",
"state": "STARTED",
"lastChange": "2014-02-25T18:07:48Z",
"runtime": "1.47.10.2"
}
]
}
}
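Since the client prints a single JSON object, scripts can extract individual fields from it. A minimal sketch (the sed pattern is illustrative; a real automation script would use a proper JSON parser):

```shell
# Save a truncated copy of the structured output shown above to a file.
cat > start-output.json <<'EOF'
{ "command": "start", "exitCode": 0, "errorMsg": null }
EOF

# Pull out the exitCode field with sed, to decide whether the start succeeded.
exit_code=$(sed -n 's/.*"exitCode": *\([0-9][0-9]*\).*/\1/p' start-output.json)
echo "exitCode=$exit_code"
```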
Related Information
Table 23:
Group Commands
Local Server install-local [page 212]; deploy-local [page 171]; start-local [page
282]; stop-local [page 287]
Deployment deploy [page 166]; start [page 280]; status [page 278]
Account and Quota Management create-account [page 125]; delete-account [page 145]; list-accounts
[page 216]; set-quota [page 277]
Virtual Machines create-vm [page 142]; delete-vm [page 163]; list-vms [page 242]
1.3.6.4.1 add-ecm-tenant
Table 24:
Required
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
-h, --host Use the respective landscape host for your account type.
Type: URL. For acceptable values see Landscape Hosts [page 41]
Type: string
Type: string
Type: string
Type: string
Table 25:
Optional
-v, --virus-scan Can be used to activate the virus scanner and check all incoming documents for viruses.
Default: true
Type: boolean
Recommendation
For repositories that are used by untrusted users or for unknown content, we recommend that you enable the virus scanner by setting this parameter to true. Enabling the virus scanner could impair the upload performance.
If a virus is detected, the upload process for the document fails with a virus scanner exception.
-p, --password To protect your password, enter it only when prompted by the console client and not explicitly as a parameter in the properties file or the command line.
Type: string
1.3.6.4.2 add-custom-domain
Use this command to add a custom domain to an application URL. This will route the traffic for the custom domain
to your application on SAP Cloud Platform.
Parameters
To list all parameters available for this command, execute neo help add-custom-domain in the command line.
Table 26:
Required
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
-h, --host Use the respective landscape host for your account type.
Type: URL. For acceptable values see Landscape Hosts [page 41]
-p, --password To protect your password, enter it only when prompted by the console client and not explicitly as a parameter in the properties file or the command line.
Type: string
Type: string
-i, --application-url The access point of the application on the SAP Cloud Platform default domains (hana.ondemand.com, etc.)
-l, --ssl-host SSL host as defined with the --name parameter when created, or 'default' if not specified.
Example
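A sketch of an invocation, based on the parameters listed above; the --custom-domain flag name and all values are assumptions, not taken from this guide:

```shell
neo add-custom-domain --account myaccount --host hana.ondemand.com \
    --user myuser --custom-domain www.example.com \
    --application-url myapp.hana.ondemand.com --ssl-host default
```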
Related Information
1.3.6.4.3 add-platform-domain
Adds a platform domain (under hana.ondemand.com) on which the application will be accessed.
Parameters
To list all parameters available for this command, execute neo help add-platform-domain in the command
line.
Table 27:
Required
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
-h, --host Use the respective landscape host for your account type.
Type: URL. For acceptable values see Landscape Hosts [page 41]
-p, --password To protect your password, enter it only when prompted by the console client and not explicitly as a parameter in the properties file or the command line.
Type: string
Type: string
The chosen platform domain will be the parent domain in the absolute application domain.
Acceptable values:
● svc.hana.ondemand.com
● cert.hana.ondemand.com
Example
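A sketch of an invocation; the --platform-domain flag name and all values are assumptions:

```shell
neo add-platform-domain --account myaccount --application myapp \
    --host hana.ondemand.com --user myuser \
    --platform-domain svc.hana.ondemand.com
```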
Related Information
1.3.6.4.4 bind-db
Parameters
Table 28:
Required
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
-h, --host Use the respective landscape host for your account type.
Type: URL, for acceptable values see Landscape Hosts [page 41]
Type: string
--access-token Identifies a database access permission. The access token and database ID parameters
are mutually exclusive.
-p, --password To protect your password, enter it only when prompted by the console client and not explicitly as a parameter in the properties file or the command line.
Type: string
Type: string
Default: <DEFAULT>
Type: string (uppercase and lowercase letters, numbers, and the following special characters: `/`, `_`, `-`, `@`. Do not use special characters as the first or last character of the data source name.)
Example
Related Information
1.3.6.4.5 bind-domain-certificate
To list all parameters available for this command, execute neo help bind-domain-certificate in the
command line.
Table 30:
Required
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
-h, --host Use the respective landscape host for your account type.
Type: URL. For acceptable values see Landscape Hosts [page 41]
-p, --password To protect your password, enter it only when prompted by the console client and not explicitly as a parameter in the properties file or the command line.
Type: string
Type: string
--certificate Name of the certificate that you set to the SSL host
-l, --ssl-host SSL host as defined with the --name parameter when created, or 'default' if not specified.
Example
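A sketch using the parameters listed above (all values are placeholders):

```shell
neo bind-domain-certificate --account myaccount --host hana.ondemand.com \
    --user myuser --certificate mycert --ssl-host default
```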
Related Information
1.3.6.4.6 bind-hana-dbms
This command binds a Java application to a productive SAP HANA database via a data source.
You can only bind an application to a productive SAP HANA database if the application is deployed.
Note
To bind your application to a database that is owned by another account of your global account, see bind-db [page 115].
Parameters
Table 31:
Required
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
-h, --host Use the respective landscape host for your account type.
Type: URL, for acceptable values see Landscape Hosts [page 41]
Note
The host must be on the productive landscape.
Type: string
--access-token Identifies a database access permission. The access token and database ID parameters
are mutually exclusive.
-p, --password To protect your password, enter it only when prompted by the console client and not explicitly as a parameter in the properties file or the command line.
Type: string
Type: string
--db-password Password of the database user used to access the productive SAP HANA database
--db-user Name of the database user used to access the productive SAP HANA database
Table 32:
Optional
Type: string (uppercase and lowercase letters, numbers, and the following special characters: `/`, `_`, `-`, `@`. Do not use special characters as the first or last character of the data source name.)
Example
Related Information
1.3.6.4.7 bind-schema
This command binds a schema to a Java application via a data source. If a data source name is not specified, the
schema will be automatically bound to the default data source of the application.
Table 33:
Required
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
-h, --host Use the respective landscape host for your account type.
Type: URL, for acceptable values see Landscape Hosts [page 41]
Type: string
--access-token Identifies a schema access grant. The access token and schema ID parameters are mutually exclusive.
-p, --password To protect your password, enter it only when prompted by the console client and not explicitly as a parameter in the properties file or the command line.
Type: string
Type: string
Table 34:
Optional
The application will be able to access the schema via the specified data source.
Type: string (uppercase and lowercase letters, numbers, and the following special characters: `/`, `_`, `-`, `@`. Do not use special characters as the first or last character of the data source name.)
Example
Related Information
1.3.6.4.8 clear-alert-recipients
neo clear-alert-recipients
Parameter
Table 35:
Required
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
Type: string
Table 36:
Optional
-b, --application Application name for Java applications or productive SAP HANA database system, and
application name in the format <database name>:<application name> for SAP HANA XS
applications
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
-h, --host Use the respective landscape host for your account type.
Type: URL. For acceptable values see Landscape Hosts [page 41]
Type: string
Example
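A sketch using the parameters listed above (all values are placeholders):

```shell
neo clear-alert-recipients --account myaccount --application demo \
    --host hana.ondemand.com --user myuser
```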
1.3.6.4.9 clear-downtime-app
The command deregisters a previously configured downtime page for an application. After you execute the
command, the default HTTP error will be shown to the user in the event of unplanned downtime.
Parameters
To list all parameters available for this command, execute neo help clear-downtime-app in the command
line.
Table 37:
Required
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
-h, --host Use the respective landscape host for your account type.
Type: URL. For acceptable values see Landscape Hosts [page 41]
-p, --password To protect your password, enter it only when prompted by the console client and not explicitly as a parameter in the properties file or the command line.
Type: string
Type: string
Example
1.3.6.4.10 close-db-tunnel
This command closes one or all database tunnel sessions that have been opened in a background process using
the open-db-tunnel --background command.
A tunnel opened in a background process is automatically closed when the last session using the tunnel is closed.
The background process terminates after the last tunnel has been closed.
Parameters
Table 38:
Required
--all Closes all tunnel sessions that have been opened in the background
--session-id Tunnel session to be closed. Cannot be used together with the parameter --all.
Example
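Sketches for the two variants described above (the session ID is a placeholder):

```shell
# Close a single tunnel session by its ID:
neo close-db-tunnel --session-id 1
# Close all tunnel sessions opened in the background:
neo close-db-tunnel --all
```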
Related Information
1.3.6.4.11 close-ssh-tunnel
Closes the SSH tunnel to the specified virtual machine. If no virtual machine ID is specified, closes all tunnels.
Table 39:
Required
Type: string
Optional
-r, --port Port on which you want to close the SSH tunnel
Example
1.3.6.4.12 create-account
Creates a new account with an automatically generated unique ID as the account name and the specified display name, and assigns the user as an account owner. The user is authorized against the existing account passed as the --account parameter. Optionally, you can clone an existing account configuration to save time and effort.
Note
If you clone an existing extension account [page 1272], the new account will not be an extension account but a
regular one. The new account will not have the trust and destination settings typical for extension accounts.
Parameters
To list all parameters available for this command, execute neo help create-account in the command line.
Table 40:
Required
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
If you want to create an account whose display name contains spaces, use quotes when executing the command. For example: neo ... --display-name "Display Name with Spaces"
Type: string
-p, --password To protect your password, enter it only when prompted by the console client and not explicitly as a parameter in the properties file or the command line.
Type: string
-h, --host Use the respective landscape host for your account type.
Type: URL. For acceptable values see Landscape Hosts [page 41]
--clone (Optional) List of settings that will be copied (re-created) from the existing account into the new account. A comma-separated list of values, which are as follows:
● trust
● members
● destinations
● all
Tip
We recommend listing the required cloning options explicitly instead of using --clone all in automated scripts. This ensures backward compatibility in case the cloning options covered by all change in future releases.
Example
Table 41:
all All settings (trust, members, and destinations) from the existing account will be copied into the new one.
Caution
The list of cloned configurations might be extended in the
future.
trust The following trust settings will be re-created in the new account, similarly to the relevant settings in the existing account:
Note
SAP Cloud Platform will generate a new pair of key and certificate on behalf of the new account. Remember to replace them with your proprietary key and certificate when using the account for productive purposes.
Note
If you do not have any trusted Identity Authentication tenants in the existing account, cloning the trust settings will result in trust with SAP ID Service (as default identity provider) in the new account.
members All members with their roles from the existing account will be copied into the new one.
destinations All destinations from the existing account will be created in the new one. In addition, the relevant certificates and passwords for the destinations will also be cloned, so the destination configurations will be fully functional in the new account.
Example of cloning an existing account to create a new account with the same trust settings and existing destinations:
1.3.6.4.13 create-availability-check
neo create-availability-check
Parameters
Table 42:
Required
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
Type: string
Type: string
Table 43:
Optional
-b, --application Application name for Java applications or a productive SAP HANA database system, and application name in the format <database name>:<application name> for SAP HANA XS applications
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
-h, --host Use the respective landscape host for your account type.
Type: URL. For acceptable values see Landscape Hosts [page 41]
Default: 50
Type: string
Default: 60
Type: string
-w, --overwrite Should be used only if there is an existing alert that needs to be updated.
Default: false
Type: boolean
Example
Example for creating an availability check for application demo:
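The example command was lost in extraction; a sketch, assuming the generic account and user flags used elsewhere in this guide (only -b and -h appear in the parameter tables above, and the check-specific flags are not recoverable):

```
neo create-availability-check -a myaccount -b demo \
    -u myuser -h hana.ondemand.com
```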
Related Information
1.3.6.4.14 create-db-ase
This command creates an ASE database with the specified ID and settings on an ASE database system.
Table 44:
Required
-h, --host Use the respective landscape host for your account type.
Type: URL, for acceptable values see Landscape Hosts [page 41]
Type: string
-p, --password To protect your password, enter it only when prompted by the console client and not explicitly as a parameter in the properties file or the command line.
Type: string
Type: string
Type: string
--db-password Password of the database user used to access the ASE database (optional, queried at the command prompt if omitted)
Note
This parameter sets the maximum database size. The minimum database size is 24 MB. You receive an error if you enter a database size that exceeds the quota for this database system.
The size of the transaction log will be at least 25% of the database size you specify.
Note
The number of databases you can create is limited. You receive an error message once the maximum number of databases is reached. For more information on user database limits, see Creating Databases [page 857].
Related Information
1.3.6.4.15 create-db-hana
This command creates an SAP HANA database with the specified ID and settings on an SAP HANA database system enabled for multitenant database containers.
Parameters
Table 45:
Required
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
-h, --host Use the respective landscape host for your account type.
Type: URL, for acceptable values see Landscape Hosts [page 41]
Type: string
-p, --password To protect your password, enter it only when prompted by the console client and not explicitly as a parameter in the properties file or the command line.
Type: string
Type: string
Type: string
Note
To create a tenant database on a trial landscape, use -trial- instead of the ID of a productive HANA database system.
--db-password Password of the SYSTEM user used to access the HANA database (optional, queried at the command prompt if omitted)
Table 46:
Optional
--dp-server Enables or disables the data processing server of the HANA database: 'enabled', 'disabled' (default).
--script-server Enables or disables the script server of the HANA database: 'enabled', 'disabled' (default).
--web-access Enables or disables access to the HANA database from the Internet: 'enabled' (default), 'disabled'.
--xsengine-mode Specifies how the XS engine should run: 'embedded' (default), 'standalone'.
Note
The number of databases you can create is limited. You receive an error message once the maximum number of databases is reached. For more information on tenant database limits, see Creating Databases [page 857].
Example
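The original example is missing; an illustrative invocation, in which the --id flag for the database ID is an assumption (the corresponding required-parameter name did not survive extraction) while the remaining flags are documented above:

```
neo create-db-hana -a myaccount -h hana.ondemand.com -u myuser \
    --id mydb --dp-server disabled --web-access enabled
```

Because --db-password is omitted, the SYSTEM user's password is queried at the command prompt.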
Related Information
Parameters
Table 47:
Required
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
-h, --host Use the respective landscape host for your account type.
Type: URL, for acceptable values see Landscape Hosts [page 41]
Type: string
-p, --password To protect your password, enter it only when prompted by the console client and not explicitly as a parameter in the properties file or the command line.
Type: string
Type: string
--db-password Password of the database user used to access the ASE database (optional, queried at the command prompt if omitted)
Example
Parameters
Table 48:
Required
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
-h, --host Use the respective landscape host for your account type.
Type: URL. For acceptable values see Landscape Hosts [page 41]
Type: string
Type: string
Type: string
Table 49:
Optional
-d, --display-name Can be used to provide a more readable name for the repository. Equals the --name value if left blank. You cannot change the display name later on.
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
-e, --description Description of the repository. You cannot change the description later on.
Type: string
-v, --virus-scan Can be used to activate the virus scanner and check all incoming documents for viruses.
Default: true
Type: boolean
Recommendation
For repositories that are used by untrusted users or for unknown content, we recommend that you enable the virus scanner by setting this parameter to true. Note that enabling the virus scanner can impair upload performance.
If a virus is detected, the upload process for the document fails with a virus scanner exception.
-p, --password To protect your password, enter it only when prompted by the console client and not explicitly as a parameter in the properties file or the command line.
Type: string
Example
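An illustrative invocation (--name is referenced in the -d description above; the account and user flags follow the conventions used elsewhere in this guide and are assumptions for this command):

```
neo create-ecm-repository -a myaccount -h hana.ondemand.com -u myuser \
    --name myrepository -d myrepositoryname -v true
```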
1.3.6.4.18 create-jmx-check
Parameters
Note
The JMX check settings support the JMX specification. For more information, see the Java Management Extensions (JMX) Specification.
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
Type: string
The name must be up to 99 characters long and must not contain the following symbols:
`~!$%^&*|'"<>?,()=
Type: string
-O, --object-name Object name of the MBean that you want to call
Type: string
-A, --attribute Name of the attribute inside the class with the specified object name.
Type: string
Table 51:
Optional
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
Note
If the parameter is not used, the JMX check will be on account level for all running applications in the account.
-h, --host Use the respective landscape host for your account type.
Type: URL. For acceptable values see Landscape Hosts [page 41]
Note
If the parameter is not used, the default host is hana.ondemand.com.
It is needed only if the attribute is a composite data structure. This key defines the item in the composite data structure. For more information about the composite data structure, see Class CompositeDataSupport.
Type: string
-o, --operation Operation that has to be called on the MBean after checking the attribute value.
It is useful for resetting statistical counters to restart an operation on the same MBean.
Type: string
Type: string
The threshold can be a regular expression in the case of string values, or compliant with the official Nagios threshold/range format. For more information about the format in the case of a number, see the official Nagios documentation.
The threshold can be a regular expression in the case of string values, or compliant with the official Nagios threshold/range format. For more information about the format in the case of a number, see the official Nagios documentation.
Default: false
Type: boolean
Note
When you use this parameter, a new JMX check is not created if the one you specify does not exist.
For a typical example of how to configure a JMX check for your application and subscribe recipients to receive notification alerts, see Configuring a JMX Check to Monitor Your Application [page 1197].
The following example creates a JMX check that returns a warning state for the metric if the value is between 10 and 100 bytes, and returns a critical state if the value is greater than 100 bytes. If the value is less than 10 bytes, the returned state is OK.
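The command for this example was lost in extraction; a sketch, under the assumption that the warning and critical thresholds are passed as Nagios-style -w and -c values and that the check name is given with -n (none of these three flag names survive in the tables above, so all are hypothetical; -O and -A are documented):

```
neo create-jmx-check -a myaccount -b demo -u myuser -n "heap check" \
    -O java.lang:type=Memory -A HeapMemoryUsage \
    -w 10:100 -c 100
```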
Related Information
1.3.6.4.19 create-schema
This command creates a HANA database or schema with the specified ID on a shared or dedicated database.
Caution
This command is not supported for productive SAP HANA database systems. For more information about how to create schemas on productive SAP HANA database systems, see Binding SAP HANA Databases to Java Applications [page 868].
Parameters
Table 52:
Required
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
-d, --dbtype Creates the HANA database or schema on a shared database system. Syntax: 'type:version'. Version is optional.
Type: string
--dbsystem Creates the schema on a dedicated database system. To see the available dedicated database systems, execute the list-dbms command.
Caution
The list-dbms command lists different database types, including productive SAP HANA database systems. Do not use the create-schema command for productive SAP HANA database systems. For more information about how to create schemas on productive SAP HANA database systems, see Binding SAP HANA Databases to Java Applications [page 868].
-h, --host Use the respective landscape host for your account type.
Type: URL, for acceptable values see Landscape Hosts [page 41]
It must start with a letter and can contain lowercase letters ('a' - 'z') and numbers ('0' - '9'). For schema IDs, uppercase letters ('A' - 'Z') and the special characters '.' and '-' are also allowed.
Note that the actual ID assigned in the database will differ from the one you specify.
Type: string
-p, --password To protect your password, enter it only when prompted by the console client and not explicitly as a parameter in the properties file or the command line.
Type: string
Type: string
Example
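An illustrative invocation (the --id flag is an assumption, since the corresponding required-parameter name did not survive extraction; -d hana uses the documented 'type:version' syntax with the version omitted):

```
neo create-schema -a myaccount -h hana.ondemand.com -u myuser \
    --id myschema -d hana
```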
Related Information
This console client command creates a security group rule for a virtual machine.
Parameters
Table 53:
Required
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
-h, --host Use the respective landscape host for your account type.
Type: URL. For acceptable values, see Landscape Hosts [page 41].
Type: string
-p, --password To protect your password, enter it only when prompted by the console client and not explicitly as a parameter in the properties file or the command line.
Type: string
Type: string
--from-port The start of the range of allowed ports. The <from_port> value must be less than or equal to the <to_port> value.
--to-port The end of the range of allowed ports. The <to_port> value must be greater than or equal to the <from_port> value.
--source-id The name of the system that you want to connect from.
For a SAP HANA system, the --source-id is the SAP HANA database system name.
For a Java application, it is the application name.
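No example survives for this command; a sketch, assuming the command name create-security-rule (its deletion counterpart, delete-security-rule, is documented later in this guide) and using the documented port and source flags:

```
neo create-security-rule -a myaccount -h hana.ondemand.com -u myuser \
    --from-port 1024 --to-port 1030 --source-id myhanasystem
```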
Related Information
1.3.6.4.21 create-ssl-host
Creates an SSL host for configuration of custom domains. This SSL host will be serving your custom domain.
Parameters
To list all parameters available for this command, execute neo help create-ssl-host in the command line.
Table 54:
Required
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
-h, --host Use the respective landscape host for your account type.
Type: URL. For acceptable values see Landscape Hosts [page 41]
-p, --password To protect your password, enter it only when prompted by the console client and not explicitly as a parameter in the properties file or the command line.
Type: string
Type: string
-n, --name Unique identifier of the SSL host. If not specified, 'default' value is set.
Example
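An illustrative invocation using the documented -n flag (the account and user flags follow the conventions used elsewhere in this guide and are assumptions for this command):

```
neo create-ssl-host -a myaccount -h hana.ondemand.com -u myuser -n mysslhost
```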
Related Information
1.3.6.4.22 create-vm
Parameters
Table 56:
Required
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
-h, --host Use the respective landscape host for your account type.
Type: URL. For acceptable values see Landscape Hosts [page 41].
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
-p, --password To protect your password, enter it only when prompted by the console client and not explicitly as a parameter in the properties file or the command line.
Type: string
Type: string
Table 57:
Optional
Default: off
If you do not provide -pkp as a parameter in the command line, you will be prompted to enter a passphrase. If you do not enter a passphrase, the command will be executed but the private key will not be encrypted.
-l, --ssh-key-location The path to a public key or certificate that will be uploaded and used to log in to the newly created virtual machine.
Type: string
-k, --ssh-key-name The name of an already existing public key to be used to log in to the newly created virtual machine.
Type: string. It can contain only alphanumeric characters (0-9, a-z, A-Z), underscore (_), and hyphen (-).
-v, --volume-id Unique identifier of the volume from which the virtual machine will be created.
Type: string
Condition: Use when you want to create a new virtual machine from a volume.
Type: string
Condition: Use when you want to create a new virtual machine from a volume snapshot.
Default: off
Example
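An illustrative invocation (the -n flag for the virtual machine name is an assumption, since that required-parameter name did not survive extraction; -l is documented above):

```
neo create-vm -a myaccount -h hana.ondemand.com -u myuser \
    -n myvm -l /path/to/mykey.pub
```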
Related Information
1.3.6.4.23 create-volume-snapshot
Takes a snapshot of the file system of the specified virtual machine volume. The operation is asynchronous.
Parameters
Table 58:
Required
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
Type: string
-h, --host Use the respective landscape host for your account type.
Type: URL. For acceptable values see Landscape Hosts [page 41]
-p, --password To protect your password, enter it only when prompted by the console client and not explicitly as a parameter in the properties file or the command line.
Type: string
Type: string
-v, --volume-id Unique identifier of the volume from which the snapshot will be taken
Type: string
Example
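An illustrative invocation using the documented -v flag (the account and user flags are assumptions for this command, following the conventions used elsewhere in this guide):

```
neo create-volume-snapshot -a myaccount -h hana.ondemand.com -u myuser \
    -v myvolumeid
```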
Related Information
1.3.6.4.24 delete-account
Deletes a particular account. Only the user who has created the account is allowed to delete it.
Note
You cannot delete an account if it still has associated subscriptions, non-shared database systems, database schemas, deployed applications, HTML5 applications, or document service repositories. You need to delete them first.
Parameters
To list all parameters available for this command, execute neo help delete-account in the command line.
Table 59:
Required
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
Type: string
-p, --password To protect your password, enter it only when prompted by the console client and not explicitly as a parameter in the properties file or the command line.
Type: string
-h, --host Use the respective landscape host for your account type.
Type: URL. For acceptable values see Landscape Hosts [page 41]
Example
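An illustrative invocation (the -a flag for the account to delete is an assumption, since the required-parameter names did not survive extraction):

```
neo delete-account -a mysubaccount -u myuser -h hana.ondemand.com
```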
Related Information
neo delete-availability-check
Parameters
Table 60:
Required
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
Type: string
Table 61:
Optional
-b, --application Application name for Java applications or a productive SAP HANA database system, and application name in the format <database name>:<application name> for SAP HANA XS applications
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
-h, --host Use the respective landscape host for your account type.
Type: URL. For acceptable values see Landscape Hosts [page 41]
Example
Related Information
This command deletes the ASE database with the specified ID.
Parameters
Table 62:
Required
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
-h, --host Use the respective landscape host for your account type.
Type: URL, for acceptable values see Landscape Hosts [page 41]
Type: string
-p, --password To protect your password, enter it only when prompted by the console client and not explicitly as a parameter in the properties file or the command line.
Type: string
Type: string
Table 63:
Optional
--force or -f Forcefully deletes the ASE database, including all application bindings
Example
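An illustrative invocation (the --id flag for the database ID is an assumption; -f is documented above as the force option):

```
neo delete-db-ase -a myaccount -h hana.ondemand.com -u myuser --id mydb -f
```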
Related Information
This command deletes the SAP HANA database with the specified ID on an SAP HANA database system enabled for multitenant database container support.
Parameters
Table 64:
Required
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
-h, --host Use the respective landscape host for your account type.
Type: URL, for acceptable values see Landscape Hosts [page 41]
Type: string
-p, --password To protect your password, enter it only when prompted by the console client and not explicitly as a parameter in the properties file or the command line.
Type: string
Type: string
Table 65:
Optional
--force or -f Forcefully deletes the HANA database, including all application bindings
Example
Parameters
Table 66:
Required
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
-h, --host Use the respective landscape host for your account type.
Type: URL, for acceptable values see Landscape Hosts [page 41]
Type: string
-p, --password To protect your password, enter it only when prompted by the console client and not explicitly as a parameter in the properties file or the command line.
Type: string
Type: string
Table 67:
Optional
Example
Related Information
This command deletes destination configuration properties files and JDK files. You can delete them on account, application, or subscribed application level.
Parameters
To list all parameters available for this command, execute neo help delete-destination in the command line.
Table 68:
Required
-a, --account Your account. The account for which you provide username and password.
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
-b, --application The application for which you delete a destination. Cases:
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
-h, --host The respective landscape host for your account type.
Type: URL, for acceptable values see Landscape Hosts [page 41]
Type: string
-p, --password Password for the specified user. To protect your password, enter it only when prompted by the console client and not explicitly as a parameter in the properties file or the command line.
Type: string
Type: string
Examples
Related Information
1.3.6.4.30 delete-ecm-repository
This command deletes a repository, including the data of any tenants in the repository, unless you restrict the command to a specific tenant.
Caution
Be very careful when using this command. Deleting a repository permanently deletes all data. This data cannot be recovered.
Parameters
Table 69:
Required
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
-h, --host Use the respective landscape host for your account type.
Type: URL. For acceptable values see Landscape Hosts [page 41]
Type: string
Type: string
Type: string
Table 70:
Optional
Deletes the repository for the given tenant only instead of for all tenants. If no tenant name is provided, the repositories for all tenants are deleted.
Type: string
-p, --password To protect your password, enter it only when prompted by the console client and not explicitly as a parameter in the properties file or the command line.
Type: string
Example
1.3.6.4.31 delete-domain-certificate
Deletes a certificate.
Note
Cannot be undone. If the certificate is mapped to an SSL host, the certificate will be removed from the SSL host too.
Parameters
To list all parameters available for this command, execute neo help delete-domain-certificate in the command line.
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
-h, --host Use the respective landscape host for your account type.
Type: URL. For acceptable values see Landscape Hosts [page 41]
-p, --password To protect your password, enter it only when prompted by the console client and not explicitly as a parameter in the properties file or the command line.
Type: string
Type: string
-n, --name Name of the certificate that you set to the SSL host
Example
Related Information
1.3.6.4.32 delete-hanaxs-certificates
This command deletes certificates that contain a specified string in the Subject CN.
Note
After executing this command, you need to restart the SAP HANA XS services for it to take effect. See restart-hana [page 258].
To list all parameters available for this command, execute neo help delete-hanaxs-certificates in the command line.
Table 72:
Required
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
-h, --host Use the respective landscape host for your account type.
Type: URL. For acceptable values see Landscape Hosts [page 41]
-p, --password To protect your password, enter it only when prompted by the console client and not explicitly as a parameter in the properties file or the command line.
Type: string
Type: string
-cn-string, --contained-string A part of the certificate CN. All certificates that contain this string will be deleted.
Default: none
Example
To delete all certificates containing John Doe in their Subject DN, execute:
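The example command itself was lost in extraction; a sketch using the documented --contained-string flag (the account and user flags, and the omission of a database-system parameter, are assumptions):

```
neo delete-hanaxs-certificates -a myaccount -h hana.ondemand.com -u myuser \
    --contained-string "John Doe"
```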
1.3.6.4.33 delete-jmx-check
Parameters
Table 73:
Required
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
Type: string
-n, --name or -A, --all Name of the JMX check to be deleted; if -A, --all is used, all JMX checks configured for the given account and application are deleted.
Type: string
Table 74:
Optional
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
-h, --host Use the respective landscape host for your account type.
Type: URL. For acceptable values see Landscape Hosts [page 41]
Note
If the parameter is not used, the default host is hana.ondemand.com.
Example
Related Information
Note
This is a beta feature available for SAP Cloud Platform extension accounts. For more information about beta features, see Using Beta Features in Accounts [page 26].
Parameters
To list all parameters available for this command, execute neo help delete-resource in the command line.
Table 75:
Required
Type: string
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
-h, --host Use the respective landscape host for your account type.
-p, --password To protect your password, enter it only when prompted by the console client and not explicitly as a parameter in the properties file or the command line.
Type: string
Type: string
Table 76:
Optional
Example
To delete a solution resource from the system repository for your extension account, execute:
Parameters
To list all parameters available for this command, execute neo help delete-ssl-host in the command line.
Table 77:
Required
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
-h, --host Use the respective landscape host for your account type.
Type: URL. For acceptable values see Landscape Hosts [page 41]
-p, --password To protect your password, enter it only when prompted by the console client and not explicitly as a parameter in the properties file or the command line.
Type: string
Type: string
Type: string
Example
Related Information
This command is used to delete a keystore by deleting the keystore file. You can delete keystores on account, application, and subscription levels.
Parameters
To list all parameters available for this command, execute neo help delete-keystore in the command line.
Table 78:
Required
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
-h, --host Use the respective landscape host for your account type.
Type: URL, for acceptable values see Landscape Hosts [page 41]
Type: string
Type: string
Table 79:
Optional
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
Example
On Subscription Level
On Account Level
Related Information
1.3.6.4.37 delete-schema
This command deletes the specified schema, including all data it contains. A schema cannot be deleted if it is still bound to an application. To enforce the deletion, use the force parameter, but bear in mind that this will also delete all bindings that still exist.
Schema backups are kept for 14 days and may be used to restore mistakenly deleted data (available by special request only).
Parameters
Table 80:
Required
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
-h, --host Use the respective landscape host for your account type.
Type: URL, for acceptable values see Landscape Hosts [page 41]
Type: string
-p, --password To protect your password, enter it only when prompted by the console client and not explicitly as a parameter in the properties file or the command line.
Type: string
Type: string
Table 81:
Optional
-f, --force Forcefully deletes the schema, including all application bindings
Default: off
Default: off
Example
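An illustrative invocation (the --id flag for the schema ID is an assumption; -f is the documented force option):

```
neo delete-schema -a myaccount -h hana.ondemand.com -u myuser --id myschema -f
```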
Related Information
1.3.6.4.38 delete-security-rule
This console client command deletes a security rule configured for a virtual machine.
Table 82:
Required
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
-h, --host Use the respective landscape host for your account type.
Type: URL. For acceptable values, see Landscape Hosts [page 41].
Type: string
-p, --password To protect your password, enter it only when prompted by the console client and not explicitly as a parameter in the properties file or the command line.
Type: string
Type: string
--from-port The start of the range of allowed ports. The <from_port> value must be less than or equal to the <to_port> value.
--to-port The end of the range of allowed ports. The <to_port> value must be greater than or equal to the <from_port> value.
--source-id The name of the system that you want to connect from.
For a SAP HANA system, the --source-id is the SAP HANA database system name.
For a Java application, it is the application name.
Example
1.3.6.4.39 delete-vm
Parameters
Table 83:
Required
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
-h, --host Use the respective landscape host for your account type.
Type: URL. For acceptable values see Landscape Hosts [page 41]
Type: string
-p, --password To protect your password, enter it only when prompted by the console client and not explicitly as a parameter in the properties file or the command line.
Type: string
Type: string
Type: string
Default: off
Example
1.3.6.4.40 delete-volume
Parameters
Table 85:
Required
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
-h, --host Use the respective landscape host for your account type.
Type: URL. For acceptable values see Landscape Hosts [page 41]
-p, --password To protect your password, enter it only when prompted by the console client and not explicitly as a parameter in the properties file or the command line.
Type: string
Type: string
-v, --volume-id Unique identifier of the volume that you want to delete
Type: string
1.3.6.4.41 delete-volume-snapshot
Parameters
Table 86:
Required
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
-s, --snapshot-id Unique identifier of the volume snapshot that you want to delete
Type: string
-h, --host Use the respective landscape host for your account type.
Type: URL. For acceptable values see Landscape Hosts [page 41]
-p, --password To protect your password, enter it only when prompted by the console client and not explicitly as a parameter in the properties file or the command line.
Type: string
Type: string
Example
1.3.6.4.42 deploy
Deploying an application publishes it to SAP Cloud Platform. Use the optional parameters to make specific configurations of the deployed application.
Parameters
To list all parameters available for this command, execute neo help deploy in the command line.
Table 87:
Required
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
-s, --source A comma-separated list of file locations pointing to WAR files or folders containing them
Note
The size of an application can be up to 1.5 GB. If the application is packaged as a WAR file, the size of the unzipped content is taken into account.
If you want to deploy more than one application on one and the same application process, put all WAR files in the same folder and execute the deployment with this source, or specify them as a comma-separated list.
-h, --host Use the respective landscape host for your account type.
Type: URL. For acceptable values see Landscape Hosts [page 41]
To deploy an application on more than one landscape, execute the deploy command separately for each landscape host.
-p, --password To protect your password, enter it only when prompted by the console client and not explicitly as a parameter in the properties file or the command line.
Type: string
Type: string
Table 88:
Optional
Command-specific parameters
Default: 2
Type: integer
--delta Deploys only the changes between the provided source and the deployed content. New content will be added; missing content will be deleted. Recommended for development use to speed up the deployment.
--ev Environment variables for configuring the environment in which the application runs. Sets one environment variable, removing any previously set value; can be used multiple times in one execution. If you provide a key without any value (--ev <KEY1>=), the --ev parameter is ignored.
(beta) You can use JRE 8 with the Java Web Tomcat 7 runtime (neo-java-web version 2.25 or higher) in accounts enabled for beta features.
-m, --minimum-processes Minimum number of application processes on which the application can be started
Default: 1
-M, --maximum-processes Maximum number of application processes on which the application can be started
Default: 1
System properties (-D<name>=<value>), separated with spaces, that will be used when starting the application process.
Memory settings of your compute units. You can set the following memory parameters: -Xms, -Xmx, -XX:PermSize, -XX:MaxPermSize.
We recommend that you use the default memory settings. Change them only if necessary, and note that this may impact the application performance or its ability to start.
Use this parameter if you want to choose an application runtime container different from the one coming with your SDK. To view all available runtime containers, use list-runtimes [page 233].
--runtime-version SAP Cloud Platform runtime version on which the application will be started; the application will run on the same version after a restart. Otherwise, by default, the application is started on the latest minor version (of the same major version), which is backward compatible and includes the latest corrections (including security patches), enhancements, and updates.
Note that choosing this option does not affect already started application processes.
You can view the recommended versions by executing the list-runtime-versions command.
Note
If you choose your runtime version, consider its expiration date and plan to update to a new version regularly.
For more information, see Choosing Application Runtime Version [page 1141]
Default: off
Possible values: on (allow compression), off (disable compression), force (force compression for all responses), or an integer (which enables compression and specifies the compression-min-size value in bytes).
For more information, see Enabling and Configuring Gzip Response Compression [page 1144].
--compressible-mime-type A comma-separated list of MIME types for which compression will be used
Default: text/html, text/xml, text/plain
--connection-timeout Defines the number of milliseconds to wait for the request URI line to be presented after accepting a connection.
Default: 20000
--max-threads Specifies the maximum number of simultaneous requests that can be handled
Default: 200
--uri-encoding Specifies the character encoding used to decode the URI bytes on application request
Default: ISO-8859-1
For more information, see the encoding sets supported by Java SE 6 and Java SE 7.
Example
Here are examples of some additional configurations. If your application is already started, stop it and start it
again for the changes to take effect.
You can deploy an application on a host different from the default one by specifying the host parameter. For
example, to use the data center located in the United States, execute:
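A sketch of such a command (the account, user, and WAR file names are placeholders, and us1.hana.ondemand.com is assumed here to be the US data center host):

```shell
neo deploy --source example.war --account mysubaccount --user p1234567890 \
    --host us1.hana.ondemand.com
```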
To specify the compute unit size on which you want the application to run, use the --size parameter with one of
the following values:
Available sizes depend on your account type and what options you have purchased. For developer accounts, only
the Lite edition is available.
For example, if you have a productive account and have purchased a package with Premium edition compute
units, then you can run your application on a Premium compute unit size, by executing the following command:
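A hedged sketch (account, user, and archive names are placeholders, and the premium size token shown is an assumption; check neo help deploy for the exact accepted values):

```shell
neo deploy --source example.war --account mysubaccount --user p1234567890 \
    --host hana.ondemand.com --size premium
```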
When deploying an application, name the WAR file with the desired context root.
For example, if you want to deploy your WAR in context root "/hello" then rename your WAR to hello.war.
If you want to deploy it in the "/" context root then rename your WAR to ROOT.war.
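For illustration (all names are placeholders), renaming the archive before deployment sets the context root:

```shell
# Served under the /hello context root after deployment
mv example.war hello.war
neo deploy --source hello.war --account mysubaccount --user p1234567890 \
    --host hana.ondemand.com

# Served under the root context "/"
mv example.war ROOT.war
```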
Related Information
1.3.6.4.43 deploy-local
Parameters
Table 89:
Required
-s, --source Source for deployment (a comma-separated list of WAR files or folders containing one or more WAR files)
Table 90:
Optional
Related Information
1.3.6.4.44 deploy-mta
This command deploys Multi-Target Application (MTA) archives. You can deploy one or more MTA archives to your account in one go.
Parameters
To list all parameters available for this command, execute neo help deploy-mta in the command line.
Table 91:
Required
-a, --account The name of the account for which you provide a user and a password.
-h, --host The landscape host on which you execute the command.
-p, --password Your user password. We recommend that you enter it only when prompted, and not explicitly as a parameter in a properties file or the command line.
-s, --source A comma-separated list of file locations, pointing to MTA archive files or folders containing them.
Table 92:
Optional
Command-specific parameters
-y, --synchronous Triggers the deployment and waits until the deployment operation finishes. Without the --synchronous parameter, the command triggers the deployment and exits immediately, without waiting for the operation to finish. Takes no value.
You can deploy an MTA archive on a host different from the default one by specifying the host parameter. For
example, to use the data center located in the United States, execute:
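A sketch with placeholder account, user, and archive names; us1.hana.ondemand.com is assumed to be the US host, and --synchronous makes the command wait for the operation to finish:

```shell
neo deploy-mta --source example.mtar --account mysubaccount --user p1234567890 \
    --host us1.hana.ondemand.com --synchronous
```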
Related Information
1.3.6.4.45 disable
This command stops the creation of new connections to an application or application process, but keeps the
already running sessions alive. You can check if an application or application process has been disabled by
executing the status command.
Parameters
To list all parameters available for this command, execute neo help disable in the command line.
Table 93:
Required
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
-h, --host Use the respective landscape host for your account type.
Type: URL. For acceptable values see Landscape Hosts [page 41]
-p, --password To protect your password, enter it only when prompted by the console client and not explicitly as a parameter in the properties file or the command line.
Type: string
Type: string
Table 94:
Optional
-i, --application-process-id Unique ID of a single application process. Use it to disable a particular application process instead of the whole application. As the process ID is unique, you do not need to specify the account and application parameters. You can list the application process IDs by using the status command.
Default: none
Example
To disable a single application process, first identify the application process you want to disable by executing neo status:
From the generated list of application process IDs, copy the ID you need and execute neo disable for it:
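A sketch with placeholder values; the process ID shown is hypothetical and would be copied from the neo status output:

```shell
# List the application processes and their IDs
neo status --account mysubaccount --application myapp \
    --host hana.ondemand.com --user p1234567890

# Disable a single process by its ID; account and application can be omitted
neo disable --application-process-id 4e8ea2a5e331ee6aa60c0 \
    --host hana.ondemand.com --user p1234567890
```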
Related Information
1.3.6.4.46 display-application-properties
This command displays the set of properties of a deployed application, such as runtime version, minimum and maximum processes, and Java version.
Parameters
To list all parameters available for this command, execute neo help display-application-properties in the command line.
Table 95:
Required
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
-h, --host Use the respective landscape host for your account type.
Type: URL. For acceptable values see Landscape Hosts [page 41]
-p, --password To protect your password, enter it only when prompted by the console client and not explicitly as a parameter in the properties file or the command line.
Type: string
Type: string
Example
Related Information
1.3.6.4.47 display-csr
Parameters
To list all parameters available for this command, execute neo help display-csr in the command line.
Table 96:
Required
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
-h, --host Use the respective landscape host for your account type.
Type: URL. For acceptable values see Landscape Hosts [page 41]
-p, --password To protect your password, enter it only when prompted by the console client and not explicitly as a parameter in the properties file or the command line.
Type: string
Type: string
Table 97:
Optional
-f, --file name Name of the local file where the CSR is stored
Example
1.3.6.4.48 display-ecm-repository
Parameters
Table 98:
Required
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
-h, --host Use the respective landscape host for your account type.
Type: URL. For acceptable values see Landscape Hosts [page 41]
Type: string
Type: string
Table 99:
Optional
Type: string
-p, --password To protect your password, enter it only when prompted by the console client and not explicitly as a parameter in the properties file or the command line.
Type: string
1.3.6.4.49 display-db-info
This command displays detailed information about the selected database. This includes the assigned database
type, the database version, and a list of bindings with the application and data source names.
Parameters
Table 100:
Required
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
-h, --host Use the respective landscape host for your account type.
Type: URL, for acceptable values see Landscape Hosts [page 41]
Type: string
-p, --password To protect your password, enter it only when prompted by the console client and not explicitly as a parameter in the properties file or the command line.
Type: string
Type: string
1.3.6.4.50 display-schema-info
This command displays detailed information about the selected schema. This includes the assigned database
type, the database version, and a list of bindings with the application and data source names.
Parameters
Table 101:
Required
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
-h, --host Use the respective landscape host for your account type.
Type: URL, for acceptable values see Landscape Hosts [page 41]
Type: string
-p, --password To protect your password, enter it only when prompted by the console client and not explicitly as a parameter in the properties file or the command line.
Type: string
Type: string
Example
1.3.6.4.51 display-volume-snapshot
Parameters
Table 102:
Required
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
-h, --host Use the respective landscape host for your account type.
Type: URL. For acceptable values see Landscape Hosts [page 41]
Type: string
-p, --password To protect your password, enter it only when prompted by the console client and not explicitly as a parameter in the properties file or the command line.
Type: string
Type: string
Example
1.3.6.4.52 download-keystore
This command downloads a keystore file. You can download keystores at the account, application, and subscription levels.
Parameters
To list all parameters available for this command, execute neo help download-keystore in the command line.
Table 103:
Required
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
-h, --host Use the respective landscape host for your account type.
Type: URL, for acceptable values see Landscape Hosts [page 41]
Type: string
Type: string
Table 104:
Optional
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
-l,--location Local directory where the keystore will be saved. If it is not specified, the current directory
is used.
Type: string
-w, --overwrite Overwrites a file with the same name, if one already exists. If you do not explicitly include the --overwrite argument, you will be notified and asked whether you want to overwrite the file.
Example
On Subscription Level
On Application Level
On Account Level
Related Information
1.3.6.4.53 edit-ecm-repository
Changes the name, key, or virus scan settings of a repository. You cannot change the display name or the
description.
Parameters
Table 105:
Required
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
-h, --host Use the respective landscape host for your account type.
Type: URL. For acceptable values see Landscape Hosts [page 41]
Type: string
Type: string
Type: string
Table 106:
Optional
Caution
If not used, the virus scan setting of the whole repository changes.
Type: string
Type: string
Type: string
-v, --virus-scan Can be used to activate the virus scanner and check all incoming documents for viruses.
Default: true
Type: boolean
Recommendation
For repositories that are used by untrusted users or for unknown content, we recommend that you enable the virus scanner by setting this parameter to true. Note that enabling the virus scanner may impair the upload performance.
If a virus is detected, the upload process for the document fails with a virus scanner exception.
-p, --password To protect your password, enter it only when prompted by the console client and not explicitly as a parameter in the properties file or the command line.
Type: string
Example
Related Information
1.3.6.4.54 enable
This command enables new connection requests to a disabled application or application process. The enable
command cannot be used for an application that is in maintenance mode.
Parameters
To list all parameters available for this command, execute neo help enable in the command line.
Table 107:
Required
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
-h, --host Use the respective landscape host for your account type.
Type: URL. For acceptable values, see Landscape Hosts [page 41]
-p, --password To protect your password, enter it only when prompted by the console client and not explicitly as a parameter in the properties file or the command line.
Type: string
Type: string
Table 108:
Optional
-i, --application-process-id Unique ID of a single application process. Use it to enable a particular application process instead of the whole application. As the process ID is unique, you do not need to specify the account and application parameters. You can list the application process IDs by using the status command.
Default: none
Example
To enable a single application process, first identify the application process you want to enable by executing neo status:
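A sketch mirroring the disable example (placeholder values; the process ID is hypothetical and would come from the neo status output):

```shell
# Identify the application process
neo status --account mysubaccount --application myapp \
    --host hana.ondemand.com --user p1234567890

# Enable a single process by its ID
neo enable --application-process-id 4e8ea2a5e331ee6aa60c0 \
    --host hana.ondemand.com --user p1234567890
```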
Related Information
1.3.6.4.55 get-destination
This command downloads (reads) destination configuration properties files and JKS files. You can download them at the account, application, or subscribed application level.
Parameters
To list all parameters available for this command, execute neo help get-destination in the command line.
Table 109:
Required
-a, --account The account for which you provide a user name and password.
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
-b, --application The application for which you download a destination. Cases:
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
-h, --host The respective landscape host for your account type.
Type: URL, for acceptable values see Landscape Hosts [page 41]
--localpath The path on your local file system where a destination or a JKS file will be downloaded. If
not set, no files will be downloaded.
Type: string
--name The name of the destination or JKS file to be downloaded. If not set, the names of all destination or JKS files for the service are listed.
Type: string
-p, --password Password for the specified user. To protect your password, enter it only when prompted by the console client and not explicitly as a parameter in the properties file or the command line.
Type: string
Note
If you download a destination configuration file that contains a password field, the password value is not visible. Instead, after Password =..., you will only see an empty space. You must obtain the password another way.
Type: string
Examples
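A sketch using the parameters documented above (all values are placeholders):

```shell
# Without --name: list the destination names available for the application
neo get-destination --account mysubaccount --application myapp \
    --host hana.ondemand.com --user p1234567890

# Download one destination configuration file to a local folder
neo get-destination --account mysubaccount --application myapp \
    --host hana.ondemand.com --user p1234567890 \
    --name mydestination --localpath /tmp/destinations
```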
Related Information
1.3.6.4.56 generate-csr
Parameters
To list all parameters available for this command, execute neo help generate-csr in the command line.
Table 110:
Required
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
-h, --host Use the respective landscape host for your account type.
Type: URL. For acceptable values see Landscape Hosts [page 41]
-p, --password To protect your password, enter it only when prompted by the console client and not explicitly as a parameter in the properties file or the command line.
Type: string
Type: string
Type: string (It can contain alphanumerics, '.', '-' and '_')
Allowed attributes:
-s, --subject-alternative-name A comma-separated list of all domain names to be protected with this certificate, used as the value for the Subject Alternative Name field of the generated certificate.
Type: string
Example
Related Information
1.3.6.4.57 get-log
Parameters
To list all parameters available for this command, execute neo help get-log in the command line.
Table 111:
Required
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
-d, --directory Local folder location under which the file will be downloaded. If the directory you have
specified does not exist, it will be created.
Type: string
Type: string
Note
To find out the name of the log file to download, use the list-logs command to see
the available log files of your application. For more information, see list-logs [page
231].
-h, --host The respective landscape host for your account type.
Type: URL, for acceptable values see Landscape Hosts [page 41]
-p, --password Password for the specified user. To protect your password, enter it only when prompted by the console client and not explicitly as a parameter in the properties file or the command line.
Type: string
Type: string
Table 112:
Optional
-w, --overwrite Overwrites a file with the same name, if one already exists. If you do not explicitly include the --overwrite argument, you will be notified and asked whether you want to overwrite the file.
Default: true
Type: boolean
Example
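A sketch with placeholder values; the --file parameter name and the log file name shown are assumptions (use list-logs to find the actual file name):

```shell
neo get-log --account mysubaccount --application myapp \
    --host hana.ondemand.com --user p1234567890 \
    --directory /tmp/logs --file http_access.log
```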
Related Information
1.3.6.4.58 grant-db-access
This command gives another account permission to access a database. The account providing the permission
and the account receiving the permission must be part of the same global account.
Parameters
Table 113:
Required
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
Type: string
-p, --password To protect your password, enter it only when prompted by the console client and not explicitly as a parameter in the properties file or the command line.
Type: string
-h, --host Use the respective landscape host for your account.
Type: URL. For acceptable values see Landscape Hosts [page 41]
Table 114:
Optional
-to-account The account to receive access permission. The account providing the permission and the account receiving the permission must be part of the same global account.
-permissions A comma-separated list of access permissions to the database. Acceptable values: 'TUNNEL', 'BINDING'.
Example
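A sketch using the documented parameters (account names are placeholders; the parameter identifying the database itself is omitted here, so check neo help grant-db-access for it):

```shell
neo grant-db-access --account mysubaccount --host hana.ondemand.com \
    --user p1234567890 -to-account partneraccount -permissions TUNNEL,BINDING
```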
1.3.6.4.59 grant-db-tunnel-access
This command generates a token, which allows the members of another account to access a database using a
database tunnel.
Parameters
Table 115:
Required
Type: string
The account to be granted database tunnel access, based on the access token
Type: string
Example
Related Information
1.3.6.4.60 grant-schema-access
This command gives an application in another account access to a schema, based on a one-time access token. The access token is used to bind the schema to the application.
Parameters
Table 116:
Required
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
Type: string
-h, --host Use the respective landscape host for your account type.
Type: URL. For acceptable values see Landscape Hosts [page 41]
Type: string
-p, --password To protect your password, enter it only when prompted by the console client and not explicitly as a parameter in the properties file or the command line.
Type: string
Type: string
Example
Related Information
1.3.6.4.61 hcmcloud-create-connection
This command configures the connectivity of an extension application to a SAP SuccessFactors system associated with a specified SAP Cloud Platform account. The command creates the required HTTP destination and registers an OAuth client for the extension application in SAP SuccessFactors. The command is relevant for Java extension applications.
Note
This is a beta feature available for SAP Cloud Platform extension accounts. For more information about the
beta features, see Using Beta Features in Accounts [page 26].
Parameters
To list all parameters available for this command, execute neo help hcmcloud-create-connection in the
command line.
Table 117:
Required
-b, --application The name of the extension application for which you are creating the connection. Cases:
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
-p, --password To protect your password, enter it only when prompted by the console client and not explicitly as a parameter in the properties file or the command line.
Type: string
Type: string
Type: string
Table 118:
Optional
-w, --overwrite If a connection with the same name already exists, overwrites it. If you do not explicitly specify the --overwrite parameter and a connection with the same name already exists, the command fails to execute.
Example
To configure a connection of type OData with technical user for an extension application in an account located in
the United States (US East) data center, execute:
1.3.6.4.62 hcmcloud-delete-connection
This command removes the specified connection configured between an extension application and a SAP SuccessFactors system associated with the specified SAP Cloud Platform account.
Note
This is a beta feature available for SAP Cloud Platform extension accounts. For more information about the
beta features, see Using Beta Features in Accounts [page 26].
Parameters
To list all parameters available for this command, execute neo help hcmcloud-delete-connection in the command line.
Table 119:
Required
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
Type: string
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
-h, --host Use the respective landscape host for your account type.
-p, --password To protect your password, enter it only when prompted by the console client and not explicitly as a parameter in the properties file or the command line.
Type: string
Type: string
Example
To delete an OData connection for an extension application running in an extension account in the US East data
center, execute:
1.3.6.4.63 hcmcloud-disable-application-access
This command removes an extension application from the list of authorized assertion consumer services for the SAP SuccessFactors system associated with the specified account.
Note
This is a beta feature available for SAP Cloud Platform extension accounts. For more information about the
beta features, see Using Beta Features in Accounts [page 26].
Parameters
To list all parameters available for this command, execute neo help hcmcloud-disable-application-access in the command line.
Table 120:
Required
-b, --application The name of the extension application for which you are deleting the connection. Cases:
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
--application-type The type of the extension application for which you are deleting the connection
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
-h, --host Use the respective landscape host for your account type.
-p, --password To protect your password, enter it only when prompted by the console client and not explicitly as a parameter in the properties file or the command line.
Type: string
Type: string
Example
To remove a Java extension application from the list of authorized assertion consumer services for the SAP
SuccessFactors system associated with an account located in the United States (US East), execute:
The command removes the entry for the application from the list of the authorized service provider assertion consumer services for the SuccessFactors system associated with the specified account. If an entry for the extension application does not exist, the command fails.
1.3.6.4.64 hcmcloud-display-application-access-status
This command displays the status of an extension application entry in the list of assertion consumer services for the SAP SuccessFactors system associated with the specified account. The returned results contain the extension application URL.
Note
This is a beta feature available for SAP Cloud Platform extension accounts. For more information about the
beta features, see Using Beta Features in Accounts [page 26].
Parameters
To list all parameters available for this command, execute neo help hcmcloud-display-application-access-status in the command line.
-b, --application The name of the extension application for which you are displaying the status in the list of assertion consumer services. Cases:
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
--application-type The type of the extension application for which you are creating the connection
Type: string
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
-h, --host Use the respective landscape host for your account type.
-p, --password To protect your password, enter it only when prompted by the console client and not explicitly as a parameter in the properties file or the command line.
Type: string
Type: string
Example
To display the status of an application entry in the list of authorized assertion consumer services for the SAP
SuccessFactors system associated with an account in the data center located in the United States (US East),
execute:
1.3.6.4.65 hcmcloud-enable-application-access
This command registers an extension application as an authorized assertion consumer service for the SAP SuccessFactors system associated with the specified account, enabling the application to use the SAP SuccessFactors identity provider (IdP) for authentication.
Note
This is a beta feature available for SAP Cloud Platform extension accounts. For more information about the
beta features, see Using Beta Features in Accounts [page 26].
Parameters
To list all parameters available for this command, execute neo help hcmcloud-enable-application-access in the command line.
Table 122:
Required
-b, --application The name of the extension application for which you are creating the connection. Cases:
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
--application-type The type of the extension application for which you are creating the connection
Type: string
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
-h, --host Use the respective landscape host for your account type.
-p, --password To protect your password, enter it only when prompted by the console client and not explicitly as a parameter in the properties file or the command line.
Type: string
Type: string
Example
To register an extension application as an authorized assertion consumer service for the SAP SuccessFactors
system associated with an account located in the United States (US East) data center, execute:
The command creates an entry for the application in the list of the authorized service provider assertion consumer services for the SAP SuccessFactors system associated with the specified account. The entry contains the main URL of the extension application, the service provider audience URL, and the service provider logout URL. If an entry for the given extension application already exists, it is overwritten.
1.3.6.4.66 hcmcloud-enable-role-provider
This command enables the SAP SuccessFactors role provider for the specified Java application.
Note
This is a beta feature available for SAP Cloud Platform extension accounts. For more information about the
beta features, see Using Beta Features in Accounts [page 26].
Parameters
To list all parameters available for this command, execute neo help hcmcloud-enable-role-provider in
the command line.
-b, --application The name of the extension application for which you are creating the connection. Cases:
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
-p, --password To protect your password, enter it only when prompted by the console client and not explicitly as a parameter in the properties file or the command line.
Type: string
Type: string
Example
To enable the SAP SuccessFactors role provider for your Java application in an extension account located in the
United States (US East) data center, execute:
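A sketch with placeholder account and application names; us1.hana.ondemand.com is assumed here to be the US East host:

```shell
neo hcmcloud-enable-role-provider --application myapp \
    --account mysubaccount --host us1.hana.ondemand.com --user p1234567890
```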
1.3.6.4.67 hcmcloud-get-registered-home-page-tiles
This command lists the SAP SuccessFactors Employee Central (EC) home page tiles registered in the SAP SuccessFactors company instance associated with the extension account.
Note
Currently, only v12 home page tiles are supported.
Parameters
To list all parameters available for this command, execute neo help hcmcloud-get-registered-home-page-tiles in the command line.
Table 124:
Required
-b, --application The name of the extension application for which you are listing the home page tiles.
Cases:
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
Note
If you do not specify the application parameter, the command lists all tiles registered in the SuccessFactors company instance associated with the specified extension account.
--application-type The type of the extension application for which you are listing the home page tiles
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
-h, --host Use the respective landscape host for your account type.
-p, --password To protect your password, enter it only when prompted by the console client and not explicitly as a parameter in the properties file or the command line.
Type: string
Type: string
Example
To list the home page tiles registered for a Java extension application running in your account in the US East data center, execute:
There is no lifecycle dependency between the tiles and the application: the tiles remain registered even if the application is stopped or no longer deployed.
1.3.6.4.68 hcmcloud-import-roles
This command imports SAP SuccessFactors HCM suite roles into the SAP SuccessFactors customer instance linked to an extension account.
Note
This is a beta feature available for SAP Cloud Platform extension accounts. For more information about the
beta features, see Using Beta Features in Accounts [page 26].
Parameters
To list all parameters available for this command, execute neo help hcmcloud-import-roles in the
command line.
Type: string
Note
The file size must not exceed 500 KB.
Type: string
Type: string
Example
To import the role definitions for an extension application from the system repository for your extension account
into the SuccessFactors customer instance connected to this account, execute:
If any of the roles that you are importing already exists in the target system, the command fails to execute.
Related Information
1.3.6.4.69 hcmcloud-list-connections
This command lists the connections configured for the specified extension application.
Note
This is a beta feature available for SAP Cloud Platform extension accounts. For more information about the
beta features, see Using Beta Features in Accounts [page 26].
Parameters
To list all parameters available for this command, execute neo help hcmcloud-list-connections in the
command line.
Table 126:
Required
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
-h, --host Use the respective landscape host for your account type.
-p, --password To protect your password, enter it only when prompted by the console client and not explicitly as a parameter in the properties file or the command line.
Type: string
Type: string
Example
To list the connections for an extension application running in an extension account in the US East data center, execute:
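A sketch with placeholder account and application names; us1.hana.ondemand.com is assumed here to be the US East host:

```shell
neo hcmcloud-list-connections --application myapp \
    --account mysubaccount --host us1.hana.ondemand.com --user p1234567890
```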
1.3.6.4.70 hcmcloud-register-home-page-tiles
This command registers the SAP SuccessFactors Employee Central (EC) home page tiles in the SAP SuccessFactors company instance associated with the extension account. The home page tiles must be described in a tile descriptor file for the extension application, in JSON format.
Note
Currently, only v12 home page tiles are supported.
Note
This is a beta feature available for SAP Cloud Platform extension accounts. For more information about the
beta features, see Using Beta Features in Accounts [page 26].
Parameters
To list all parameters available for this command, execute neo help hcmcloud-register-home-page-tiles
in the command line.
Table 127:
Required
Type: string
Note
The file size must not exceed 100 KB.
-b, --application The name of the extension application for which you are registering the home page tiles.
Cases:
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
--application-type The type of the extension application for which you are registering the home page tiles
Default: java
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
-h, --host Use the respective landscape host for your account type.
-p, --password To protect your password, enter it only when prompted by the console client and not explicitly as a parameter in the properties file or the command line.
Type: string
Type: string
Example
To register a home page tile for a Java extension application running in your account in the US East data center, execute:
Related Information
1.3.6.4.71 hcmcloud-unregister-home-page-tiles
This command removes the SAP SuccessFactors EC home page tiles registered for the extension application in the SAP SuccessFactors company instance associated with the specified extension account.
Note
Currently, only v12 home page tiles are supported.
Note
This is a beta feature available for SAP Cloud Platform extension accounts. For more information about the
beta features, see Using Beta Features in Accounts [page 26].
Parameters
To list all parameters available for this command, execute neo help hcmcloud-unregister-home-page-tiles in the command line.
Table 128:
Required
-b, --application The name of the extension application for which you are removing the home page tiles.
Cases:
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
Note
You must use the same application name that you have specified when registering the
tiles.
--application-type The type of the extension application for which you are listing the home page tiles
Default: java
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
-h, --host Use the respective landscape host for your account type.
-p, --password To protect your password, enter it only when prompted by the console client and not explicitly as a parameter in the properties file or the command line.
Type: string
Type: string
Example
To remove the home page tiles registered for a Java extension application running in your account in the US East
data center, execute:
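The example command is missing from this copy; a hedged sketch using only the parameters documented above (all values are placeholders, and us1.hana.ondemand.com is assumed to be the US East host):

```
neo hcmcloud-unregister-home-page-tiles --account myaccount \
    --application myapp --application-type java \
    --host us1.hana.ondemand.com --user myuser
```

As noted above, --application must be the same name that was used when registering the tiles.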
There is no lifecycle dependency between the tiles and the application, so the application does not need to be started, or even still deployed, when you run this command.
1.3.6.4.72 hot-update
The hot-update command enables a developer to redeploy and update the binaries of an application started on one process faster than a normal deploy and restart. Use it to apply and activate changes during development; do not use it to update productive applications.
There are three options for hot-update specified with the --strategy parameter:
Limitations:
Parameters
To list all parameters available for this command, execute neo help hot-update in the command line.
Table 129:
Required
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
-h, --host Use the respective landscape host for your account type.
Type: URL. For acceptable values see Landscape Hosts [page 41]
-p, --password To protect your password, enter it only when prompted by the console client and not explicitly as a parameter in the properties file or the command line.
Type: string
Type: string
-s, --source A comma-separated list of file locations, pointing to WAR files, or folders containing them.
--strategy Acceptable values:
● replace-binaries
● restart-runtime
● reprovision-runtime
Default: 2
Type: integer
--delta Uploads only the changes between the provided source and the deployed content. New
content will be added; missing content will be deleted. Recommended for development
use to speed up the deployment.
Example
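A hedged sketch of a development-time invocation combining the parameters documented above (all values are placeholders):

```
neo hot-update --account myaccount --application myapp \
    --host hana.ondemand.com --user myuser \
    --source example.war --strategy replace-binaries --delta
```

The --delta option uploads only the changed content, as described above, which further shortens the update.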
1.3.6.4.73 install-local
This command installs a server runtime in a local folder, by default <SDK installation folder>/server.
neo install-local
Parameters
Table 131:
Optional
Default: 8009
Default: 8080
Default: 8443
Default: 1717
Related Information
1.3.6.4.74 list-application-datasources
This command lists all schemas and productive database instances bound to an application.
Parameters
Table 132:
Required
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
-h, --host Use the respective landscape host for your account type.
Type: URL, for acceptable values see Landscape Hosts [page 41]
-p, --password To protect your password, enter it only when prompted by the console client and not explicitly as a parameter in the properties file or the command line.
Type: string
Type: string
Example
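The example is missing from this copy; a minimal sketch, with placeholder account, application, user, and host values:

```
neo list-application-datasources --account myaccount \
    --application myapp --host hana.ondemand.com --user myuser
```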
Related Information
1.3.6.4.75 list-availability-check
neo list-availability-check
Parameters
Table 133:
Required
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
Type: string
Table 134:
Optional
-b, --application Application name for Java applications or productive SAP HANA database system, and
application name in the format <database name>:<application name> for SAP HANA XS
applications
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
-h, --host Use the respective landscape host for your account type.
Type: URL. For acceptable values see Landscape Hosts [page 41]
-R, --recursively Lists availability checks recursively starting from the specified level. For example, if only
'account' is passed as an argument, it starts from the account level and then lists all
checks configured on application level.
Default: false
Type: boolean
Example
Example for listing availability checks recursively starting on account level and listing the checks configured for
Java and SAP HANA XS applications:
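The command line is missing from this copy; a hedged sketch of a recursive invocation starting from the account level (all values are placeholders):

```
neo list-availability-check --account myaccount \
    --host hana.ondemand.com --user myuser --recursively
```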
Sample output:
Related Information
1.3.6.4.76 list-accounts
Lists all accounts that a customer has. Authorization is performed against the account passed as --account
parameter.
Parameters
To list all parameters available for this command, execute neo help list-accounts in the command line.
Table 135:
Required
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
Type: string
-p, --password To protect your password, enter it only when prompted by the console client and not explicitly as a parameter in the properties file or the command line.
Type: string
-h, --host Use the respective landscape host for your account type.
Type: URL. For acceptable values see Landscape Hosts [page 41]
Example
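The example is missing from this copy; a minimal sketch using the documented parameters (all values are placeholders):

```
neo list-accounts --account myaccount \
    --host hana.ondemand.com --user myuser
```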
1.3.6.4.77 list-alert-recipients
neo list-alert-recipients
Parameters
Table 136:
Required
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
Type: string
Table 137:
Optional
-b, --application Application name for Java applications or productive SAP HANA instance database name
and application name in the format <instance name>:<application name> for SAP HANA
XS applications
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
-h, --host Use the respective landscape host for your account type.
Type: URL. For acceptable values see Landscape Hosts [page 41]
-R, --recursively Lists alert recipients recursively starting from the specified level. For example, if only 'account' is passed as an argument, it starts from the account level and then lists all recipients configured on application level.
Default: false
Type: boolean
Example
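The command line is missing from this copy; a hedged sketch of a recursive invocation that lists recipients on account and application level (all values are placeholders):

```
neo list-alert-recipients --account myaccount \
    --host hana.ondemand.com --user myuser --recursively
```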
Sample output:
application : demo1
alert_recipients@example.com
application : demo2
alert_recipients@example.org, alert_recipients@example.net
Related Information
1.3.6.4.78 list-application-domains
Parameters
To list all parameters available for this command, execute neo help list-application-domains in the
command line.
Table 138:
Required
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
-p, --password To protect your password, enter it only when prompted by the console client and not explicitly as a parameter in the properties file or the command line.
Type: string
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
Type: string
Related Information
1.3.6.4.79 list-custom-domain-mappings
Parameters
To list all parameters available for this command, execute neo help list-custom-domain-mappings in the
command line.
Table 139:
Required
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
-h, --host Use the respective landscape host for your account type.
Type: URL. For acceptable values see Landscape Hosts [page 41]
-p, --password To protect your password, enter it only when prompted by the console client and not explicitly as a parameter in the properties file or the command line.
Type: string
Type: string
Related Information
1.3.6.4.80 list-db-access-permissions
This command lists the permissions that other accounts have for accessing databases in the specified account.
Parameters
Table 140:
Required
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
Type: string
-p, --password To protect your password, enter it only when prompted by the console client and not explicitly as a parameter in the properties file or the command line.
Type: string
-h, --host Use the respective landscape host for your account.
Type: URL. For acceptable values see Landscape Hosts [page 41]
Table 141:
Optional
-i, --id Specify a database to view the permissions only to that database.
--to-account Specify an account to view the permissions only for that account.
--permissions Filter the result by permission. Acceptable values: comma-separated list of 'TUNNEL', 'BINDING'.
Example
Related Information
1.3.6.4.81 list-dbms
This command lists the dedicated and shared database management systems available for the specified account
with the following details: database system (for dedicated databases), database type, and database version.
Parameters
Table 142:
Required
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
-h, --host Use the respective landscape host for your account type.
Type: URL, for acceptable values see Landscape Hosts [page 41]
-p, --password To protect your password, enter it only when prompted by the console client and not explicitly as a parameter in the properties file or the command line.
Type: string
Type: string
Example
Related Information
1.3.6.4.82 list-dbs
Parameters
Table 143:
Required
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
-h, --host Use the respective landscape host for your account type.
Type: URL, for acceptable values see Landscape Hosts [page 41]
-p, --password To protect your password, enter it only when prompted by the console client and not explicitly as a parameter in the properties file or the command line.
Type: string
Type: string
--verbose Displays additional information about each database: database type and database version
Default: off
Example
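The example is missing from this copy; a minimal sketch using the documented parameters (all values are placeholders):

```
neo list-dbs --account myaccount \
    --host hana.ondemand.com --user myuser --verbose
```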
1.3.6.4.83 list-domain-certificates
Parameters
To list all parameters available for this command, execute neo help list-domain-certificates in the
command line.
Table 145:
Required
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
-h, --host Use the respective landscape host for your account type.
Type: URL. For acceptable values see Landscape Hosts [page 41]
-p, --password To protect your password, enter it only when prompted by the console client and not explicitly as a parameter in the properties file or the command line.
Type: string
Type: string
Related Information
1.3.6.4.84 list-db-tunnel-access-grants
This command lists all current database access permissions for databases in other accounts.
Note
The list does not include access permissions that have been revoked.
Parameters
Table 146:
Optional
Type: string
Example
Table 147:
Database ID Granted To Access Token
Related Information
1.3.6.4.85 list-ecm-repositories
Table 148:
Required
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
-h, --host Use the respective landscape host for your account type.
Type: URL. For acceptable values see Landscape Hosts [page 41]
Type: string
Table 149:
Optional
-p, --password To protect your password, enter it only when prompted by the console client and not explicitly as a parameter in the properties file or the command line.
Type: string
Example
ExampleRepository
Display name : Example Repository
Description : This is an example repository with Virus Scan enabled.
ID : cdb158efd4212fc00726b035
Application : Neo CLI
Virus Scan : on
ExampleRepositoryNoVS
Display name : Example Repository without Virus Scan
Description : This is an example repository with Virus Scan disabled.
ID : cdb158efd4212fc00726b035
Application : Neo CLI
Virus Scan : off
Number of Repositories: 2
1.3.6.4.86 list-hanaxs-certificates
This command lists identity provider certificates available to productive HANA instances. Optionally, you can include a part of the certificate <Subject CN> as a filter.
Note
Use this command for SAP HANA version SPS09 or lower SPs only.
Parameters
To list all parameters available for this command, execute neo help list-hanaxs-certificates in the
command line.
Table 150:
Required
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
-h, --host Use the respective landscape host for your account type.
Type: URL. For acceptable values see Landscape Hosts [page 41]
-p, --password To protect your password, enter it only when prompted by the console client and not explicitly as a parameter in the properties file or the command line.
Type: string
Type: string
Table 151:
Optional
-cn-string, --contained-string A part of the certificate CN. If more than one certificate contains this string, all of them are listed.
Default: none
Example
To list all identity provider certificates that contain <John Smith> in their <Subject CN>, execute:
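The command line is missing from this copy; a hedged sketch matching the description above (account, user, and host values are placeholders):

```
neo list-hanaxs-certificates --account myaccount \
    --host hana.ondemand.com --user myuser \
    --contained-string "John Smith"
```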
1.3.6.4.87 list-jmx-checks
Parameters
Table 152:
Required
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
Type: string
Table 153:
Optional
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
Note
If the parameter is not used, all JMX checks used for this account will be listed.
-h, --host Use the respective landscape host for your account type.
Type: URL. For acceptable values see Landscape Hosts [page 41]
Note
If the parameter is not used, the default host is hana.ondemand.com.
-R, --recursively Lists JMX checks recursively, starting from the specified level. For example, if only 'account' is passed as an argument, it starts from the account level and then lists all checks configured on application level.
Default: false
Type: boolean
Example
Sample output:
Related Information
1.3.6.4.88 list-keystores
This command is used to list the available keystores. You can list keystores on account, application, and
subscription levels.
Parameters
To list all parameters available for this command, execute neo help list-keystores in the command line.
Table 154:
Required
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
-h, --host Use the respective landscape host for your account type.
Type: URL, for acceptable values see Landscape Hosts [page 41]
Type: string
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
Example
On Subscription Level
On Application Level
On Account Level
Related Information
1.3.6.4.89 list-loggers
This command lists all available loggers with their log levels for your application.
To list all parameters available for this command, execute neo help list-loggers in the command line.
Table 156:
Required
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
-h, --host The respective landscape host for your account type.
Type: URL, for acceptable values see Landscape Hosts [page 41]
-p, --password Password for the specified user. To protect your password, enter it only when prompted by the console client and not explicitly as a parameter in the properties file or the command line.
Type: string
Type: string
Example
Related Information
1.3.6.4.90 list-logs
This command lists all log files of your application sorted by date in a table format, starting with the latest
modified.
To list all parameters available for this command, execute neo help list-logs in the command line.
Table 157:
Required
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
-h, --host The respective landscape host for your account type.
Type: URL, for acceptable values see Landscape Hosts [page 41]
-p, --password Password for the specified user. To protect your password, enter it only when prompted by the console client and not explicitly as a parameter in the properties file or the command line.
Type: string
Type: string
Example
Related Information
1.3.6.4.91 list-mta-operations
This command shows the status of the MTA operation with the given ID.
To list all parameters available for this command, execute neo help list-mta-operations in the command
line.
Table 158:
Required
-a, --account The name of the account for which you provide a user and a password.
-h, --host The landscape host on which you execute the command.
-p, --password Your user password. We recommend that you enter it only when prompted, and not explicitly as a parameter in a properties file or the command line.
Note
This parameter is optional. If you do not use this parameter, all operations that have
not been cleaned up within the last 24 hours will be listed.
Example
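The example is missing from this copy; a minimal sketch using the documented parameters (all values are placeholders). The name of the optional operation ID parameter is not preserved in this copy and is therefore omitted:

```
neo list-mta-operations --account myaccount \
    --host hana.ondemand.com --user myuser
```

Without the operation ID, all operations that have not been cleaned up within the last 24 hours are listed, as noted above.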
Related Information
1.3.6.4.92 list-runtimes
To list all parameters available for this command, execute neo help list-runtimes in the command line.
Table 159:
Required
Type: string
-p, --password To protect your password, enter it only when prompted by the console client and not explicitly as a parameter in the properties file or the command line.
Type: string
-h, --host Use the respective landscape host for your account type.
Type: URL. For acceptable values see Landscape Hosts [page 41]
Example
Related Information
1.3.6.4.93 list-runtime-versions
The command displays the supported application runtime container versions for your SAP Cloud Platform SDK.
Only recommended versions are shown by default. You can also list the supported versions for a particular runtime container.
Parameters
To list all parameters available for this command, execute neo help list-runtime-versions in the
command line.
Type: string
-p, --password To protect your password, enter it only when prompted by the console client and not explicitly as a parameter in the properties file or the command line.
Type: string
-h, --host Use the respective landscape host for your account type.
Type: URL. For acceptable values see Landscape Hosts [page 41]
Table 161:
Optional
--all Lists all supported application runtime container versions. Using a previously released
runtime version is not recommended.
--runtime Lists supported versions only for the specified runtime container.
Example
Related Information
1.3.6.4.94 list-schemas
Table 162:
Required
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
-h, --host Use the respective landscape host for your account type.
Type: URL, for acceptable values see Landscape Hosts [page 41]
-p, --password To protect your password, enter it only when prompted by the console client and not explicitly as a parameter in the properties file or the command line.
Type: string
Type: string
Table 163:
Optional
--verbose Displays additional information about each schema: database type and database version
Default: off
Example
Related Information
1.3.6.4.95 list-schema-access-grants
This command lists all current schema access grants for a specified account.
Parameters
Table 164:
Required
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
-h, --host Use the respective landscape host for your account type.
Type: URL. For acceptable values see Landscape Hosts [page 41]
-p, --password To protect your password, enter it only when prompted by the console client and not explicitly as a parameter in the properties file or the command line.
Type: string
Type: string
Table 165:
Optional
Type: string
Example
Related Information
1.3.6.4.96 list-security-rules
This console client command lists the security rules configured for a virtual machine.
Parameters
Table 166:
Required
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
-h, --host Use the respective landscape host for your account type.
Type: URL. For acceptable values, see Landscape Hosts [page 41].
Type: string
-p, --password To protect your password, enter it only when prompted by the console client and not explicitly as a parameter in the properties file or the command line.
Type: string
Type: string
Example
As an output of the list-security-rules command, you may receive the HANA or JAVA source types
previously created with the create-security-rule command, or an internally managed security rule of type
CIDR for a registered access point. The security rule of type CIDR allows communication between the load
balancer of the SAP Cloud Platform and the virtual machine.
Related Information
1.3.6.4.97 list-ssh-tunnels
list-ssh-tunnels
1.3.6.4.98 list-ssl-hosts
Parameters
To list all parameters available for this command, execute neo help list-ssl-hosts in the command line.
Table 167:
Required
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
-h, --host Use the respective landscape host for your account type.
Type: URL. For acceptable values see Landscape Hosts [page 41]
-p, --password To protect your password, enter it only when prompted by the console client and not explicitly as a parameter in the properties file or the command line.
Type: string
Type: string
Example
1.3.6.4.99 list-subscribed-accounts
Parameters
To list all parameters available for this command, execute neo help list-subscribed-accounts in the
command line.
Table 168:
Required
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
To be able to execute this command, the specified user must be a member of the provider
account.
Type: string
-h, --host Use the respective landscape host for your account type.
Type: URL. For acceptable values see Landscape Hosts [page 41]
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
-p, --password To protect your password, enter it only when prompted by the console client and not explicitly as a parameter in the properties file or the command line.
Type: string
Related Information
1.3.6.4.100 list-subscribed-applications
Parameters
To list all parameters available for this command, execute neo help list-subscribed-applications in the command line.
Table 169:
Required
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
To be able to execute this command, the specified user must be a member of the account.
Type: string
-h, --host Use the respective landscape host for your account type.
Type: URL. For acceptable values see Landscape Hosts [page 41]
-p, --password To protect your password, enter it only when prompted by the console client and not explicitly as a parameter in the properties file or the command line.
Type: string
Example
Related Information
1.3.6.4.101 list-vms
Lists all virtual machines in the specified account. You can get information for a specific virtual machine by name. The command output lists information about the virtual machine, such as size, status, SSH key, floating IP (if assigned), and volume IDs.
Parameters
Table 170:
Required
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
-h, --host Use the respective landscape host for your account type.
Type: URL. For acceptable values see Landscape Hosts [page 41]
Type: string
-p, --password To protect your password, enter it only when prompted by the console client and not explicitly as a parameter in the properties file or the command line.
Type: string
Type: string
Table 171:
Optional
Type: string
Example
Related Information
1.3.6.4.102 list-volumes
Lists all volumes in the specified account. Use display-volume to get information about a specific volume.
Table 172:
Required
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
-h, --host Use the respective landscape host for your account type.
Type: URL. For acceptable values see Landscape Hosts [page 41]
-p, --password To protect your password, enter it only when prompted by the console client and not explicitly as a parameter in the properties file or the command line.
Type: string
Type: string
Table 173:
Optional
Example
Related Information
1.3.6.4.103 list-volume-snapshots
Lists all volume snapshots in the specified account. Use display-volume-snapshot to get information about a
specific volume snapshot.
Table 174:
Required
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
-h, --host Use the respective landscape host for your account type.
Type: URL. For acceptable values see Landscape Hosts [page 41]
-p, --password To protect your password, enter it only when prompted by the console client and not explicitly as a parameter in the properties file or the command line.
Type: string
Type: string
Table 175:
Optional
-v, --volume-id Unique identifier of a volume. If specified, only volume snapshots created from this volume will be displayed.
Type: string
Example
Related Information
1.3.6.4.104 open-db-tunnel
This command opens a database tunnel to the database system associated with the specified schema or database.
Note
Make sure that you have installed the required tools correctly. If you have trouble using this command, check that your installation is correct.
For more information, see Setting Up the Console Client [page 52] and Using the Console Client [page 102].
● Default mode: The tunnel remains open until you explicitly close it by pressing ENTER in the command line. It
is closed automatically after 24 hours or if the command window is closed.
● Background mode: The database tunnel is opened in a separate process. Use the close-db-tunnel
command to close the tunnel once you are done, or it is closed automatically after one hour.
Parameters
Table 176:
Required
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
-h, --host Use the respective landscape host for your account type.
Type: URL, for acceptable values see Landscape Hosts [page 41]
Type: string
--access-token Identifies a database access permission. The access token and database ID parameters
are mutually exclusive.
-p, --password To protect your password, enter it only when prompted by the console client and not explicitly as a parameter in the properties file or the command line.
Type: string
Type: string
Type: string
Example
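The example is missing from this copy; a hedged sketch that opens a tunnel via an access token, using the parameters documented above (all values are placeholders). As noted above, the access token and the database ID parameters are mutually exclusive, so only one of them is passed:

```
neo open-db-tunnel --account myaccount \
    --host hana.ondemand.com --user myuser \
    --access-token 1a2b3c4d5e   # token value is a placeholder
```

In default mode the tunnel stays open until you press ENTER or until 24 hours have elapsed, as described above.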
Related Information
1.3.6.4.105 open-ssh-tunnel
Table 178:
Required
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
-h, --host Use the respective landscape host for your account type.
Type: URL. For acceptable values see Landscape Hosts [page 41]
Type: string
-p, --password To protect your password, enter it only when prompted by the console client and not explicitly as a parameter in the properties file or the command line.
Type: string
Type: string
Optional
-r, --port Port on which you want to open the SSH tunnel
Example
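The example is missing from this copy; a minimal sketch using only the parameters documented above (all values are placeholders). The parameter that selects the target virtual machine is not preserved in this copy and is therefore omitted:

```
neo open-ssh-tunnel --account myaccount \
    --host hana.ondemand.com --user myuser --port 10022
```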
1.3.6.4.106 put-destination
This command uploads destination configuration properties files and JKS files. You can upload them on account,
application or subscribed application level.
Parameters
To list all parameters available for this command, execute neo help put-destination in the command line.
-a, --account Your account. The account for which you provide username and password.
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
--host Type: URL, for acceptable values see Landscape Hosts [page 41]
--localpath The path to a destination or a JKS file on your local file system.
Type: string
-p, --password Password for the specified user. To protect your password, enter it only when prompted by the console client and not explicitly as a parameter in the properties file or the command line.
Type: string
Note
When uploading a destination configuration file that contains a password field, the password value remains available in the file. However, if you later download this file using the get-destination command, the password value will no longer be visible. Instead, after Password =..., you will only see an empty space.
Examples
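The examples are missing from this copy; a minimal sketch that uploads a destination configuration file on account level, using the parameters documented above (all values, including the local path, are placeholders):

```
neo put-destination --account myaccount \
    --host hana.ondemand.com --user myuser \
    --localpath /path/to/destination.properties
```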
1.3.6.4.107 reconcile-hanaxs-certificates
This command re-applies all previously uploaded certificates to all HANA instances. It is useful if you have already uploaded certificates to SAP Cloud Platform but the upload failed for some of the HANA instances.
Note
After executing this command, you need to restart the SAP HANA XS services for the change to take effect. See restart-hana [page 258].
Parameters
To list all parameters available for this command, execute neo help reconcile-hanaxs-certificates in
the command line.
Table 180:
Required
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
-h, --host Use the respective landscape host for your account type.
Type: URL. For acceptable values see Landscape Hosts [page 41]
-p, --password To protect your password, enter it only when prompted by the console client and not explicitly as a parameter in the properties file or the command line.
Type: string
Type: string
Example
1.3.6.4.108 register-access-point
Registers an access point URL for a virtual machine specified by name or ID.
Parameters
Table 181:
Required
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
-h, --host Use the respective landscape host for your account type.
Type: URL. For acceptable values see Landscape Hosts [page 41]
Type: string
-p, --password To protect your password, enter it only when prompted by the console client and not explicitly as a parameter in the properties file or the command line.
Type: string
Type: string
Type: string
Example
The register-access-point command creates an internally managed security rule of type CIDR, which allows
communication between the load balancer of the SAP Cloud Platform and the virtual machine.
Related Information
1.3.6.4.109 remove-custom-domain
Removes a custom domain as an access point of an application. Use this command if you no longer want an
application to be accessible on the configured custom domain.
Parameters
To list all parameters available for this command, execute neo help remove-custom-domain in the command
line.
Table 182:
Required
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
-h, --host Use the respective landscape host for your account type.
Type: URL. For acceptable values see Landscape Hosts [page 41]
-p, --password To protect your password, enter it only when prompted by the console client and not explicitly as a parameter in the properties file or the command line.
Type: string
Type: string
-l, --ssl-host SSL host as defined with the --name parameter when created, or 'default' if not specified.
Example
Related Information
1.3.6.4.110 remove-platform-domain
To list all parameters available for this command, execute neo help remove-platform-domain in the
command line.
Table 183:
Required
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
-h, --host Use the respective landscape host for your account type
Type: URL. For acceptable values see Landscape Hosts [page 41]
-p, --password To protect your password, enter it only when prompted by the console client and not explicitly as a parameter in the properties file or the command line.
Type: string
Type: string
Type: URL
Example
Related Information
If you have forgotten the repository key, use this command to request a new repository key.
This command creates a new key that replaces the old one; you can no longer use the old key. The command does not affect any other repository setting, for example, the virus scan definition. If you just want to change your current repository key, use the edit-ecm-repository command.
Parameters
Table 184:
Required
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
-h, --host Use the respective landscape host for your account type.
Type: URL. For acceptable values see Landscape Hosts [page 41]
-p, --password To protect your password, enter it only when prompted by the console client and not explicitly as a parameter in the properties file or the command line.
Type: string
Type: string
Type: string
Example
This example resets the repository key for the com.foo.MyRepository repository and creates a new repository
key, for example fp0TebRs14rwyqq.
1.3.6.4.112 reset-log-levels
Parameters
To list all parameters available for this command, execute neo help reset-log-levels in the command line.
Table 185:
Required
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
-h, --host The respective landscape host for your account type.
Type: URL, for acceptable values see Landscape Hosts [page 41]
-p, --password Password for the specified user. To protect your password, enter it only when prompted by the console client and not explicitly as a parameter in the properties file or the command line.
Type: string
Type: string
Example
1.3.6.4.113 restart
Use this command to restart your application or a single application process. The effect of the restart command is the same as executing the stop command first and, once the application is stopped, starting it with the start command.
Parameters
To list all parameters available for this command, execute the neo help restart command.
Table 186:
Required
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
-h, --host Use the respective landscape host for your account type.
Type: URL. For acceptable values, see Landscape Hosts [page 41]
-p, --password To protect your password, enter it only when prompted by the console client and not explicitly as a parameter in the properties file or the command line.
Type: string
Type: string
-y, --synchronous Triggers the process and waits until the application is restarted. The command without the --synchronous parameter triggers the restarting process and exits immediately without waiting for the application to start.
Default: off
-i, --application-process-id Unique ID of a single application process. Use it to restart a particular application process instead of the whole application. As the process ID is unique, you do not need to specify the account and application parameters. You can list the application process ID by using the status command.
Default: none
Example
To restart the whole application and wait for the operation to finish, execute:
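A sketch of such an invocation (account, application, host, and user values are placeholders):

```
neo restart --account myaccount --application myapp \
    --host hana.ondemand.com --user myuser --synchronous
```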
Related Information
1.3.6.4.114 restart-hana
Note
To use this command, log on with a user with administrative rights for the account.
Note
The restart-hana operation will be executed asynchronously. Temporary downtime is expected for the SAP HANA database or SAP HANA XS Engine, including the inability to work with SAP HANA studio, the SAP HANA Web-based Development Workbench, and cockpit UIs dependent on SAP HANA XS.
After you trigger the command, you can monitor the command execution in SAP HANA studio, using Configuration and Monitoring > Open Administration.
Parameters
To list all parameters available for this command, execute neo help restart-hana in the command line.
Table 188:
Required
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
Type: URL, for acceptable values see Landscape Hosts [page 41]
Note
You can find the SAP HANA database system ID using the list-dbms [page 221] command or in the Databases & Schemas section in the cockpit by navigating to Persistence > Databases & Schemas.
It must start with a letter and can contain uppercase and lowercase letters ('a' - 'z', 'A' - 'Z'), numbers ('0' - '9'), and the special characters '.' and '-'.
Type: string
-p, --password To protect your password, enter it only when prompted by the console client and not explicitly as a parameter in the properties file or the command line.
Type: string
Type: string
--service-name The SAP HANA service to be restarted. You can choose between the following values:
--system If available, the entire SAP HANA database system will be restarted.
Example
To restart the SAP HANA database system with ID myhanaid running on the productive landscape, execute:
To restart the SAP XS Engine service on SAP HANA database system with ID myhanaid, execute:
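The two example invocations referenced above might look like the sketch below. The account, host, and user values are placeholders, and the --id flag and the xsengine service value are assumptions; verify them with neo help restart-hana.

```
# Restart the entire SAP HANA database system
neo restart-hana --account myaccount --host hana.ondemand.com \
    --user myuser --id myhanaid --system

# Restart only the SAP HANA XS Engine service
neo restart-hana --account myaccount --host hana.ondemand.com \
    --user myuser --id myhanaid --service-name xsengine
```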
Related Information
1.3.6.4.115 revoke-db-access
This command revokes the database access permissions given to another account.
Parameters
Table 189:
Required
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
Type: string
-p, --password To protect your password, enter it only when prompted by the console client and not explicitly as a parameter in the properties file or the command line.
Type: string
-h, --host Use the respective landscape host for your account.
Type: URL. For acceptable values see Landscape Hosts [page 41]
Table 190:
Optional
Example
Related Information
1.3.6.4.116 revoke-db-tunnel-access
This command revokes database access that has been given to another account.
Table 191:
Required
--access-token Access token that identifies the permission to access the database
Type: string
Type: boolean
Table 192:
Optional
Type: string
Example
Related Information
1.3.6.4.117 revoke-schema-access
This command revokes the schema access granted to an application in another account.
neo revoke-schema-access --host <SAP HANA Cloud host> --account <account name> --
user <e-mail or user name> --access-token <access token>
Table 193:
Required
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
-h, --host Use the respective landscape host for your account type.
Type: URL. For acceptable values see Landscape Hosts [page 41]
-p, --password To protect your password, enter it only when prompted by the console client and not explicitly as a parameter in the properties file or the command line.
Type: string
Type: string
--access-token Access token that identifies the grant. Grants can only be revoked by the granting account.
Example
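Following the command syntax shown above, a hypothetical invocation could look like this (the account, host, user, and token values are placeholders):

```
neo revoke-schema-access --host hana.ondemand.com --account myaccount \
    --user myuser --access-token nKD8xYDZsisaMbQ
```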
Related Information
The rolling-update command performs an update of an application without downtime in one go.
Prerequisites
● You have at least one application process that is not in use, see your compute unit quota.
● The command can be used with compatible application changes only.
Parameters
To list all parameters available for this command, execute neo help rolling-update in the command line.
Table 194:
Required
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
-s, --source A comma-separated list of file locations, pointing to WAR files or folders containing them.
If you want to deploy more than one application on one and the same application process, put all WAR files in the same folder and execute the deployment with this source, or specify them as a comma-separated list.
-h, --host Use the respective landscape host for your account type.
Type: URL. For acceptable values see Landscape Hosts [page 41]
-p, --password To protect your password, enter it only when prompted by the console client and not explicitly as a parameter in the properties file or the command line.
Type: string
Type: string
Table 195:
Optional
Default: off
Possible values: on (allow compression), off (disable compression), force (forces compression for all responses), or an integer (which enables compression and specifies the compression-min-size value in bytes).
For more information, see Enabling and Configuring Gzip Response Compression [page 1144]
--compressible-mime-type A comma-separated list of MIME types for which compression will be used
Default: text/html, text/xml, text/plain
--connections The number of connections used to deploy an application. Use it to speed up deployment
of application archives bigger than 5 MB in slow networks. Choose the optimal number of
connections depending on the overall network speed to the cloud.
Default: 2
Type: integer
--ev Environment variables for configuring the environment in which the application runs.
Sets one environment variable by removing the previously set value; can be used multiple
times in one execution.
If you provide a key without any value (--ev <KEY1>=), the --ev parameter is ignored.
--timeout Timeout before stopping the old application processes (in seconds)
Default: 60 seconds
-V, --vm-arguments System properties (-D<name>=<value>), separated by spaces, that will be used when starting the application process.
Memory settings of your compute units. You can set the following memory parameters: -Xms, -Xmx, -XX:PermSize, -XX:MaxPermSize.
We recommend that you use the default memory settings. Change them only if necessary
and note that this may impact the application performance or its ability to start.
Default: lite
--runtime-version SAP Cloud Platform runtime version on which the application will be started; it will run on the same version after a restart. Otherwise, by default, the application is started on the latest minor version (of the same major version), which is backward compatible and includes the latest corrections (including security patches), enhancements, and updates. Note that choosing this option does not affect already started application processes.
You can view the recommended versions by executing the list-runtime-versions command.
Note
If you choose your runtime version, consider its expiration date and plan updating to a
new version regularly.
For more information, see Choosing Application Runtime Version [page 1141]
--uri-encoding Specifies the character encoding used to decode the URI bytes on application request.
Default: ISO-8859-1
For more information, see the encoding sets supported by Java SE 6 and Java SE 7.
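An example invocation for a rolling update might look like the sketch below; the account, application, WAR path, host, and user values are placeholders.

```
neo rolling-update --account myaccount --application myapp \
    --source samples/deploy_war/example.war \
    --host hana.ondemand.com --user myuser
```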
Related Information
1.3.6.4.119 sdk-upgrade
Use this command to upgrade the SDK that you are currently working with.
neo sdk-upgrade
The command checks for a more recent version of the SDK and then upgrades the SDK. There are two possible
cases:
Note
All files and servers that you add to your SDK will be preserved during upgrade.
Example
neo sdk-upgrade
1.3.6.4.120 set-alert-recipients
● Setting an alert recipient for a Java application or SAP HANA XS application will trigger sending all alerts for
this application to the configured emails.
neo set-alert-recipients
Parameters
Table 196:
Required
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
Type: string
We recommend that you use distribution lists rather than personal email addresses. Keep in mind that you remain responsible for handling personal email addresses in accordance with applicable data privacy regulations.
Type: string
Table 197:
Optional
-b, --application Application name for Java applications or productive SAP HANA database system, and
application name in the format <database name>:<application name> for SAP HANA XS
applications
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
-h, --host Use the respective landscape host for your account type.
Type: URL. For acceptable values see Landscape Hosts [page 41]
Default: false
Type: boolean
Example
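A hypothetical invocation is sketched below. The account, host, user, and address values are placeholders, and the --email flag name is an assumption; confirm the exact flag with neo help set-alert-recipients.

```
neo set-alert-recipients --account myaccount --host hana.ondemand.com \
    --user myuser --email dev-alerts@example.com
```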
1.3.6.4.121 set-application-property
Use this command to change the value of a single property of a deployed application without the need to redeploy
it. Execute the command separately for each property that you want to set. For the changes to take effect, restart
the application.
To execute the command successfully, you need to specify the new value of one property from the optional parameters table below.
Parameters
To list all parameters available for this command, execute the neo help set-application-property in the
command line.
Table 198:
Required
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
-h, --host Use the respective landscape host for your account type.
Type: URL. For acceptable values see Landscape Hosts [page 41]
-p, --password To protect your password, enter it only when prompted by the console client and not explicitly as a parameter in the properties file or the command line.
Type: string
Type: string
Command-specific parameters
--ev Environment variables for configuring the environment in which the application runs.
Sets the new environment variable without removing the previously set value; can be used
multiple times in one execution.
If you provide a key without any value (--ev <KEY1>=), the environment variable KEY1 will
be deleted.
(beta) You can use JRE 8 with the Java Web Tomcat 7 runtime (neo-java-web version 2.25
or higher) in accounts enabled for beta features.
-m, --minimum-processes Minimum number of application processes on which the application can be started
Default: 1
-M, --maximum-processes Maximum number of application processes on which the application can be started
Default: 1
System properties (-D<name>=<value>), separated by spaces, that will be used when starting the application process.
Memory settings of your compute units. You can set the following memory parameters: -Xms, -Xmx, -XX:PermSize, -XX:MaxPermSize.
We recommend that you use the default memory settings. Change them only if necessary
and note that this may impact the application performance or its ability to start.
--runtime-version SAP Cloud Platform runtime version on which the application will be started; it will run on the same version after a restart. Otherwise, by default, the application is started on the latest minor version (of the same major version), which is backward compatible and includes the latest corrections (including security patches), enhancements, and updates. Note that choosing this option does not affect already started application processes.
You can view the recommended versions by executing the list-runtime-versions command.
Note
If you choose your runtime version, consider its expiration date and plan updating to a
new version regularly.
For more information, see Choosing Application Runtime Version [page 1141]
Default: off
Possible values: on (allow compression), off (disable compression), force (forces compression for all responses), or an integer (which enables compression and specifies the compression-min-size value in bytes).
For more information, see Enabling and Configuring Gzip Response Compression [page 1144]
--compressible-mime-type A comma-separated list of MIME types for which compression will be used
Default: text/html, text/xml, text/plain
--connection-timeout Defines the number of milliseconds to wait for the request URI line to be presented after
accepting a connection.
Default: 20000
--max-threads Specifies the maximum number of simultaneous requests that can be handled.
Default: 200
--uri-encoding Specifies the character encoding used to decode the URI bytes on application request.
Default: ISO-8859-1
For more information, see the encoding sets supported by Java SE 6 and Java SE 7.
To change the minimum number of server processes on which you want your deployed application to run,
execute:
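A sketch of that invocation, using the documented --minimum-processes parameter (account, application, host, and user values are placeholders):

```
neo set-application-property --account myaccount --application myapp \
    --host hana.ondemand.com --user myuser --minimum-processes 2
```

Remember that the application must be restarted for the change to take effect.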
Related Information
1.3.6.4.122 set-db-properties-ase
Parameters
Table 200:
Required
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
-h, --host Use the respective landscape host for your account type.
Type: URL, for acceptable values see Landscape Hosts [page 41]
Type: string
-p, --password To protect your password, enter it only when prompted by the console client and not explicitly as a parameter in the properties file or the command line.
Type: string
Type: string
Note
This parameter sets the maximum database size. The minimum database size is 24
MB. You receive an error if you enter a database size that exceeds the quota for this
database system.
The size of the transaction log will be at least 25% of the database size you specify.
Example
1.3.6.4.123 set-db-properties-hana
This command changes the properties of an SAP HANA database enabled for multitenant database container support.
Parameters
Table 201:
Required
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
-h, --host Use the respective landscape host for your account type.
Type: URL, for acceptable values see Landscape Hosts [page 41]
Type: string
-p, --password To protect your password, enter it only when prompted by the console client and not explicitly as a parameter in the properties file or the command line.
Type: string
Type: string
Table 202:
Optional
--web-access Enables or disables access to the HANA database from the Internet: 'enabled' (default),
'disabled'
Example
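A hypothetical invocation using the documented --web-access parameter is sketched below; the account, host, and user values are placeholders, and the --id flag name for the database is an assumption.

```
neo set-db-properties-hana --account myaccount --host hana.ondemand.com \
    --user myuser --id mydb --web-access disabled
```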
1.3.6.4.124 set-downtime-app
This command configures a custom downtime page (downtime application) for an application. The downtime
page is shown to the user in the event of unplanned downtime of the original application.
Parameters
To list all parameters available for this command, execute neo help set-downtime-app in the command line.
Table 203:
Required
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
-h, --host Use the respective landscape host for your account type.
Type: URL. For acceptable values see Landscape Hosts [page 41]
-p, --password To protect your password, enter it only when prompted by the console client and not explicitly as a parameter in the properties file or the command line.
Type: string
Type: string
The downtime page application is provided by the customer and hosted in the same account as the application itself.
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
Example
Related Information
1.3.6.4.125 set-log-level
Simple Logging Facade for Java (SLF4J) uses the following log levels:
Level Description
ALL This level has the lowest possible rank and is intended to turn
on all logging.
ERROR This level designates error events that might still allow the
application to continue running.
OFF This level has the highest possible rank and is intended to
turn off logging.
Parameters
To list all parameters available for this command, execute neo help set-log-level in the command line.
Table 204:
Required
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
Type: string
-h, --host The respective landscape host for your account type.
Type: URL, for acceptable values see Landscape Hosts [page 41]
-l, --level The log level you want to set for the logger(s)
Type: string
-p, --password Password for the specified user. To protect your password, enter it only when prompted by the console client and not explicitly as a parameter in the properties file or the command line.
Type: string
Type: string
Example
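A hypothetical invocation is sketched below, using the documented --level parameter. The account, application, host, user, and logger names are placeholders, and the --loggers flag name is an assumption; confirm it with neo help set-log-level.

```
neo set-log-level --account myaccount --application myapp \
    --host hana.ondemand.com --user myuser \
    --loggers com.example.MyLogger --level ERROR
```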
Related Information
1.3.6.4.126 set-quota
Note
The amount you want to set cannot exceed the amount of quota you have purchased. If you try to set a bigger amount of quota, you will receive an error message.
Parameters
To list all parameters available for this command, execute neo help set-quota in the command line.
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
Type: string
-p, --password To protect your password, enter it only when prompted by the console client and not explicitly as a parameter in the properties file or the command line.
Type: string
-h, --host Use the respective landscape host for your account type.
Type: URL. For acceptable values see Landscape Hosts [page 41]
-m, --amount Compute unit quota type and amount of the quota to be set in the format <type>:[amount].
In this composite parameter, the <type> part is mandatory and must have one of the following values: lite, pro, prem, prem-plus. The amount part is optional and must be an integer value. If omitted, a default value of 1 is assigned. Do not insert spaces between the two parts and their delimiter ':', and use lower case for the <type> part.
Type: string
Example
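Following the documented <type>:[amount] format, a hypothetical invocation setting two lite compute units could look like this (account, host, and user values are placeholders):

```
neo set-quota --account myaccount --host hana.ondemand.com \
    --user myuser --amount lite:2
```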
1.3.6.4.127 status
You can check the current status of an application or application process. The command lists all application
processes with their IDs, state, last change date sorted chronologically, and runtime information.
The command also lists the availability zones where these application processes are running. However, this is only
valid for recently started applications and if you have the latest SDK version installed.
The availability zones ensure the high availability of your application processes. If one of the availability zones
experiences infrastructure issues and downtime, only the processes in this zone are affected. The remaining
processes continue to run normally, ensuring that your application is working as expected.
When an application process is running but cannot receive new connection requests, it is marked as disabled in its
status description. Additionally, if an application is in planned downtime and a maintenance page has been
configured for it, the corresponding application is listed in the command output.
Parameters
To list all parameters available for this command, execute neo help status in the command line.
Table 206:
Required
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
-h, --host Use the respective landscape host for your account type.
Type: URL. For acceptable values see Landscape Hosts [page 41]
-p, --password To protect your password, enter it only when prompted by the console client and not explicitly as a parameter in the properties file or the command line.
Type: string
Type: string
Table 207:
Optional
-i, --application-process-id Unique ID of a single application process. Use it to show the status of a particular application process instead of the whole application. As the process ID is unique, you do not need to specify the account and application parameters.
Default: none
--show-full-process-id Shows the full length (40 characters) of the unique application process ID. You may need the full ID when you try to execute a certain operation on the application process and the process cannot be identified uniquely with the short version of the ID. In particular, usage of the full length is recommended for tools and batch processing. If this parameter is not used, the status command lists only the first 7 characters by default.
Default: off
You can list all application processes in your application with their IDs:
Then, you can request the status of a particular application process from the list using its ID:
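The two steps above can be sketched as follows; the account, application, host, and user values are placeholders, and the process ID shown is a made-up short ID taken from the first command's output.

```
# List all application processes with their IDs
neo status --account myaccount --application myapp \
    --host hana.ondemand.com --user myuser

# Request the status of a single process by its ID
neo status --application-process-id 4f72e9a \
    --host hana.ondemand.com --user myuser
```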
Related Information
1.3.6.4.128 start
Starts a deployed application in order to make it available to customers. If the application is already started, the command starts an additional application process, provided that the quota for the maximum allowed number of application processes is not exceeded.
Parameters
To list all parameters available for this command, execute neo help start in the command line.
Table 208:
Required
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
-h, --host Use the respective landscape host for your account type.
Type: URL. For acceptable values see Landscape Hosts [page 41]
-p, --password To protect your password, enter it only when prompted by the console client and not explicitly as a parameter in the properties file or the command line.
Type: string
Type: string
Table 209:
Optional
-y, --synchronous Triggers the starting process and waits until the application is started. The command without the --synchronous parameter triggers the starting process and exits immediately without waiting for the application to start.
Default: off
Example
To start the application and wait for the operation to finish, execute:
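A sketch of that invocation (account, application, host, and user values are placeholders):

```
neo start --account myaccount --application myapp \
    --host hana.ondemand.com --user myuser --synchronous
```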
Related Information
1.3.6.4.129 start-db-hana
This command starts the specified SAP HANA database on an SAP HANA database system enabled for multitenant database container support.
Table 210:
Required
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
-h, --host Use the respective landscape host for your account type.
Type: URL, for acceptable values see Landscape Hosts [page 41]
Type: string
-p, --password To protect your password, enter it only when prompted by the console client and not explicitly as a parameter in the properties file or the command line.
Type: string
Type: string
Example
Related Information
1.3.6.4.130 start-local
neo start-local
Table 211:
Optional
Default: 8003
--wait-url Waits for a 2xx response from the specified URL before exiting
--wait-url-timeout Seconds to wait for a 2xx response from the wait-url before exiting
Default: 180
Related Information
1.3.6.4.131 start-maintenance
This command starts the planned downtime of an application, during which it no longer receives requests and a
custom maintenance page for that application is shown to the user. All active connections will still be handled until
the application is stopped.
Parameters
To list all parameters available for this command, execute neo help start-maintenance in the command line.
Table 212:
Required
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
-h, --host Use the respective landscape host for your account type.
Type: URL. For acceptable values see Landscape Hosts [page 41]
-p, --password To protect your password, enter it only when prompted by the console client and not explicitly as a parameter in the properties file or the command line.
Type: string
Type: string
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
If an application is already in planned downtime, executing the status command for it will show the maintenance application to which the traffic is being redirected.
Example
Related Information
1.3.6.4.132 stop
Use this command to stop your deployed and started application or application process.
To list all parameters available for this command, execute neo help stop in the command line.
Table 213:
Required
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
-h, --host Use the respective landscape host for your account type.
Type: URL. For acceptable values see Landscape Hosts [page 41]
-p, --password To protect your password, enter it only when prompted by the console client and not explicitly as a parameter in the properties file or the command line.
Type: string
Type: string
Table 214:
Optional
-y, --synchronous Triggers the stopping process and waits until the application is stopped. The command without the --synchronous parameter triggers the stopping process and exits immediately without waiting for the application to stop.
Default: off
-i, --application-process-id Unique ID of a single application process. Use it to stop a particular application process instead of the whole application. As the process ID is unique, you do not need to specify the account and application parameters. You can list the application process ID by using the status command.
Default: none
To stop the whole application and wait for the operation to finish, execute:
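A sketch of that invocation (account, application, host, and user values are placeholders):

```
neo stop --account myaccount --application myapp \
    --host hana.ondemand.com --user myuser --synchronous
```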
Related Information
1.3.6.4.133 stop-db-hana
This command stops the specified SAP HANA database on a SAP HANA database system enabled for multitenant
database container support.
Parameters
Table 215:
Required
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
-h, --host Use the respective landscape host for your account type.
Type: URL, for acceptable values see Landscape Hosts [page 41]
Type: string
-p, --password To protect your password, enter it only when prompted by the console client and not explicitly as a parameter in the properties file or the command line.
Type: string
Type: string
Related Information
1.3.6.4.134 stop-local
neo stop-local
Parameters
Table 216:
Optional
Default: 8003
Related Information
1.3.6.4.135 stop-maintenance
This command stops the planned downtime of an application, resumes traffic to it, and deregisters the maintenance application page.
To list all parameters available for this command, execute neo help stop-maintenance in the command line.
Table 217:
Required
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
-h, --host Use the respective landscape host for your account type.
Type: URL, for acceptable values see Landscape Hosts [page 41]
-p, --password To protect your password, enter it only when prompted by the console client and not explicitly as a parameter in the properties file or the command line.
Type: string
Type: string
Example
Related Information
1.3.6.4.136 subscribe
Subscribes the account of the consumer to a provider Java application. Once the command is executed
successfully, the subscription is visible in the Subscriptions panel of the cockpit in the consumer account.
Note
You can subscribe an account to a Java application that is running in another account only if both accounts
(provider and consumer account) belong to the same landscape.
Parameters
To list all parameters available for this command, execute neo help subscribe in the command line.
Table 218:
Required
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
This parameter must be specified in the format <provider account>:<provider application>.
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
To be able to execute this command, the specified user must be a member of both the
provider and the consumer accounts and must possess the Administrator role in those
accounts. The command is not available for trial accounts as the same user cannot be a
member of both accounts.
Type: string
-p, --password To protect your password, enter it only when prompted by the console client and not explicitly as a parameter in the properties file or the command line.
Type: string
-h, --host Use the respective landscape host for your account type.
Type: URL. For acceptable values see Landscape Hosts [page 41]
Example
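Using the documented <provider account>:<provider application> format, a hypothetical invocation could look like this (all account, application, host, and user values are placeholders):

```
neo subscribe --account consumeraccount \
    --application provideraccount:providerapp \
    --host hana.ondemand.com --user myuser
```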
1.3.6.4.137 unbind-db
This command unbinds a database from a Java application for a particular data source.
The application retains access to the database until the next application restart. After the restart, the application
will no longer be able to access it.
Parameters
Table 219:
Required
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
-h, --host Use the respective landscape host for your account type.
Type: URL, for acceptable values see Landscape Hosts [page 41]
-p, --password To protect your password, enter it only when prompted by the console client and not explicitly as a parameter in the properties file or the command line.
Type: string
Type: string
Default: <DEFAULT>
Example
Related Information
1.3.6.4.138 unbind-domain-certificate
Unbinds a certificate from an SSL host. The certificate will not be deleted from SAP Cloud Platform storage.
Parameters
To list all parameters available for this command, execute neo help unbind-domain-certificate in the
command line.
Table 221:
Required
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
-h, --host Use the respective landscape host for your account type.
Type: URL. For acceptable values see Landscape Hosts [page 41]
-p, --password To protect your password, enter it only when prompted by the console client and not explicitly as a parameter in the properties file or the command line.
Type: string
Type: string
-l, --ssl-host SSL host as defined with the --name parameter when created, or 'default' if not specified.
Example
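An illustrative invocation with placeholder account, user, and host values, unbinding the certificate from the default SSL host:

```shell
neo unbind-domain-certificate --account myaccount --user p1234567 --host hana.ondemand.com --ssl-host default
```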
Related Information
1.3.6.4.139 unbind-hana-dbms
This command unbinds a productive SAP HANA database system from a Java application for a particular data
source.
The application retains access to the productive SAP HANA database system until the next application restart.
After the restart, the application will no longer be able to access the database system.
Table 222:
Required
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
-h, --host Use the respective landscape host for your account type.
Type: URL, for acceptable values see Landscape Hosts [page 41]
-p, --password To protect your password, enter it only when prompted by the console client and not explicitly as a parameter in the properties file or the command line.
Type: string
Type: string
Table 223:
Optional
Example
Related Information
1.3.6.4.140 unbind-schema
This command unbinds a schema from an application for a particular data source.
The application retains access to the schema until the next application restart. After the restart, the application
will no longer be able to access the schema.
Parameters
Table 224:
Required
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
-h, --host Use the respective landscape host for your account type.
Type: URL, for acceptable values see Landscape Hosts [page 41]
-p, --password To protect your password, enter it only when prompted by the console client and not explicitly as a parameter in the properties file or the command line.
Type: string
Type: string
Table 225:
Optional
Example
1.3.6.4.141 undeploy
Undeploying an application removes it from SAP Cloud Platform. To undeploy an application, you have to stop it
first.
Parameters
To list all parameters available for this command, execute neo help undeploy in the command line.
Table 226:
Required
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
-h, --host Use the respective landscape host for your account type.
Type: URL. For acceptable values see Landscape Hosts [page 41]
-p, --password To protect your password, enter it only when prompted by the console client and not explicitly as a parameter in the properties file or the command line.
Type: string
Type: string
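An illustrative invocation with placeholder values (remember to stop the application first):

```shell
neo undeploy --account myaccount --application myapp --host hana.ondemand.com --user p1234567
```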
Related Information
1.3.6.4.142 unregister-access-point
Unregisters all access point URLs registered for a virtual machine specified by name or ID.
Parameters
Table 227:
Required
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
-h, --host Use the respective landscape host for your account type.
Type: URL. For acceptable values see Landscape Hosts [page 41]
Type: string
-p, --password To protect your password, enter it only when prompted by the console client and not explicitly as a parameter in the properties file or the command line.
Type: string
Type: string
Type: string
Example
Related Information
1.3.6.4.143 unsubscribe
Remember
You must have the Administrator role in both the provider and the consumer accounts to execute this command.
Parameters
To list all parameters available for this command, execute neo help unsubscribe in the command line.
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
To be able to execute this command, the specified user must be a member of both the provider and the consumer accounts.
Type: string
-p, --password To protect your password, enter it only when prompted by the console client and not explicitly as a parameter in the properties file or the command line.
Type: string
-h, --host Use the respective landscape host for your account type.
Type: URL. For acceptable values see Landscape Hosts [page 41]
Example
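An illustrative invocation with placeholder values; the application is addressed as <provider account>:<provider application>, matching the format used when subscribing:

```shell
neo unsubscribe --account myconsumeraccount --application myprovideraccount:myapp --user p1234567 --host hana.ondemand.com
```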
Related Information
Uploads an SSL certificate to SAP Cloud Platform. The certificate must be signed using the previously generated
CSR via the generate-csr command.
Parameters
To list all parameters available for this command, execute neo help upload-domain-certificate in the
command line.
Table 229:
Required
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
-h, --host Use the respective landscape host for your account type.
Type: URL. For acceptable values see Landscape Hosts [page 41]
-p, --password To protect your password, enter it only when prompted by the console client and not explicitly as a parameter in the properties file or the command line.
Type: string
Type: string
-n, --name Name of the certificate that you set to the SSL host
Note that some CAs issue chained root certificates that contain an intermediate certificate. In such cases, put all certificates in the file for upload, starting with the signed SSL certificate.
Example
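An illustrative invocation with placeholder values. The flag used here for the certificate file path (--location) is an assumption for the sketch; run neo help upload-domain-certificate to confirm the exact parameter name:

```shell
neo upload-domain-certificate --account myaccount --user p1234567 --host hana.ondemand.com --name mycert --location signed_cert.pem
```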
1.3.6.4.145 upload-hanaxs-certificates
This command uploads and applies identity provider certificates to productive HANA instances running on SAP
Cloud Platform.
Note
After executing this command, you need to restart the SAP HANA XS services for the certificates to take effect. See
restart-hana [page 258].
Parameters
To list all parameters available for this command, execute neo help upload-hanaxs-certificates in the
command line.
Table 230:
Required
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
-h, --host Use the respective landscape host for your account type.
Type: URL. For acceptable values see Landscape Hosts [page 41]
-p, --password To protect your password, enter it only when prompted by the console client and not explicitly as a parameter in the properties file or the command line.
Type: string
Type: string
-l, --localpath Path to an X.509 certificate or a directory containing certificates on the local file system. If the local path is a directory, all files in it are uploaded. You need to restart the HANA instances to activate the certificates.
Default: none
Type: string
Example
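An illustrative invocation with placeholder values. The parameter identifying the SAP HANA instance is omitted in this sketch; run neo help upload-hanaxs-certificates for its exact name:

```shell
neo upload-hanaxs-certificates --account myaccount --host hana.ondemand.com --user p1234567 --localpath /path/to/certificates
```

Afterwards, restart the SAP HANA XS services as described above so the certificates take effect.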
1.3.6.4.146 upload-keystore
This command is used to upload a keystore by uploading the keystore file. You can upload keystores on account,
application, and subscription levels.
Parameters
To list all parameters available for this command, execute neo help upload-keystore in the command line.
Table 231:
Required
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
-h, --host Use the respective landscape host for your account type.
Type: URL, for acceptable values see Landscape Hosts [page 41]
-l, --location Path to a keystore file to be uploaded from the local file system. The file extension determines the keystore type. The following extensions are supported: .jks, .jceks, .p12, .pem. For more information about the keystore formats, see Features [page 1359]
Type: string
Type: string
Table 232:
Optional
Type: string (up to 30 characters; lowercase letters and numbers, starting with a letter)
-w, --overwrite Overwrites a file with the same name if one already exists. If you do not explicitly include the --overwrite argument, you will be notified and asked whether you want to overwrite the file.
Example
On Subscription Level
On Application Level
On Account Level
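Illustrative invocations for the account and application levels (all values are placeholders; using --application to scope the upload to an application follows the common neo flag pattern and is an assumption here):

```shell
# Account level: the keystore is available to all applications in the account
neo upload-keystore --account myaccount --host hana.ondemand.com --user p1234567 --location keystore.jks

# Application level: additionally name the application
neo upload-keystore --account myaccount --application myapp --host hana.ondemand.com --user p1234567 --location keystore.jks
```

On subscription level, the provider application must additionally be identified; run neo help upload-keystore for the exact parameters.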
1.3.6.4.147 version
This command shows the SDK version and the runtime. You can use parameters to list the command versions and the JAR files in the SDK, and to check whether the SDK version is up to date.
Parameters
To list all parameters available for this command, execute neo help version in the command line.
Table 233:
Required
-c, --commands Lists all commands available in the SDK and their versions.
-j, --jars Lists all JAR files in the SDK and their versions.
-u, --updates Checks if there are any updates and hot fixes for the SDK and whether the SDK version is
still supported. It also provides the version of the latest available SDK.
Table 234:
Optional
Type: string
To show the SDK version and the runtime, execute:
neo version
To list all commands available in the SDK and their versions, execute:
neo version -c
To list all JAR files in the SDK and their versions, execute:
neo version -j
To check whether the SDK version is up to date, execute:
neo version -u
Related Information
Overview
The exit code is a number that indicates the outcome of a command execution. It shows whether the command completed successfully or identifies the error if something went wrong during the execution.
When commands are executed as part of automated scripts, the exit codes provide feedback to the scripts, which allows them to handle known errors that can occur during execution. A script can also interact with the user in order to request additional information required for the script to complete.
All exit codes in SAP Cloud Platform are aligned with the Bash-Scripting Guide. For more information, see Exit Codes With Special Meanings.
The set of exit codes is divided into ranges, based on the error type and the reason.
Table 235:
Error Type Start Number End Number Count
No error 0 0 1
Common errors 1 9 9
Missing parameters 10 39 30
Exit Codes
Exit codes can be defined as general (common for all commands) and command-specific (cover different cases
via different commands).
Table 236:
Code Meaning Type/Reason
0 OK
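A script can branch on the exit code of a console client invocation. The sketch below uses a stand-in function instead of a real neo call, so it is runnable anywhere; the simulated code 3 and the 1-9 "common errors" range come from the table above:

```shell
#!/bin/sh
# Stand-in for a console client invocation; a real script would call
# "neo deploy ..." or similar here. Exit code 0 means success.
run_command() {
  return 3   # simulate a common error (codes 1-9 are common errors)
}

run_command
rc=$?
if [ "$rc" -eq 0 ]; then
  echo "command succeeded"
elif [ "$rc" -ge 1 ] && [ "$rc" -le 9 ]; then
  echo "common error, exit code $rc"
else
  echo "unexpected exit code $rc"
fi
```

This prints "common error, exit code 3"; a real script would retry, log, or prompt the user depending on the range the code falls into.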
Related Information
You download and set up the Cloud Foundry command line interface (cf CLI) to start working with the Cloud
Foundry environment. You use cf CLI to deploy and manage your applications.
Procedure
1. Download the latest version of cf CLI from GitHub at the following URL: https://github.com/cloudfoundry/cli#downloads
2. Install the cf CLI:
○ For Microsoft Windows, unpack the ZIP file and run the cf executable file. When prompted, choose Install.
○ For Mac OS, proceed as follows:
1. Open the PKG file.
2. In the installation wizard, choose Continue, and then select the destination folder for the cf CLI
installation.
3. Choose Continue, and when prompted, choose Install.
Next Steps
If you have an HTTP proxy server, configure the proxy settings. For more information, see http://docs.cloudfoundry.org/cf-cli/http-proxy.html.
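A typical first session after installation looks like the following sketch; the API endpoint and application name are placeholders, not values from this guide:

```shell
# Log on to your Cloud Foundry API endpoint (placeholder URL),
# then deploy an application from the current directory
cf login -a https://api.example.cf.endpoint
cf push myapp
```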
1.4 Services
Table 237:
Service Description
Authorization Management API [page 1333] The authorization management service REST API provides functionality to manage roles of your applications and their assignments to users.
Business Services for YaaS [page 1012] You can build business services and Builder modules for YaaS on SAP Cloud
Platform, and then use those services in cloud applications which again can run on
SAP Cloud Platform.
SAP Cloud Platform Connectivity [page 311] SAP Cloud Platform Connectivity provides secure, reliable, and easy-to-consume access to business systems, running either on-premise or in the cloud. SAP Cloud Platform provides a trusted channel to your business systems while, at the same time, your IT administrator has complete control and auditability of what is technically exposed to the on-demand world.
Data Quality Management, microservices for location data Offers microservices for address cleansing, geocoding, and reverse geocoding. Enables you to embed address cleansing and enrichment services within any business process or application so that you can quickly reap the value of complete and accurate address data.
Debugging Applications [page 1056] Enables you to inspect a Java application's runtime behavior and state.
Document Service [page 606] SAP Cloud Platform Document service provides a content repository for unstructured or semi-structured content. Applications access it using the OASIS standard protocol Content Management Interoperability Services (CMIS).
The applications consume the service using the provided client library.
Feedback Service (Beta) [page 662] SAP Cloud Platform, feedback service provides developers, customers, and partners with the option to collect end-user feedback for their applications. The feedback service also delivers detailed text analysis of user sentiment (positive, negative, or neutral). The feedback service consists of a client API, exposed through the HTTPS REST protocol, and an administration and analysis user interface.
The feedback service is a beta functionality that is available on the SAP Cloud
Platform trial landscape for developer accounts.
Forms by Adobe SAP Cloud Platform Forms by Adobe is a solution for generating print and interactive forms using Adobe Document Services running on SAP Cloud Platform.
Gamification Service [page 680] The SAP Cloud Platform, gamification service enables the rapid introduction of gamification concepts into applications. The service includes an online development and administration environment (gamification workbench) for easy implementation and analysis of gamification concepts. The underlying gamification rule management provides support for sophisticated gamification concepts, covering time constraints, complex nested missions, and collaborative games. The built-in analytics module makes it possible to perform advanced analysis of the players' behavior to facilitate continuous improvement of game concepts.
Git Service [page 997] SAP Cloud Platform Git service enables you to store and version the source code of applications, for example HTML5 and Java applications, in Git repositories.
OData provisioning OData provisioning is a solution that enables you to consume data from an SAP Business Suite backend system in SAP Cloud Platform. It establishes a connection between SAP Business Suite data and target clients, platforms, and programming frameworks. OData provisioning exposes business data and business logic as OData services on SAP Cloud Platform, enabling customers to run a user-centric approach on SAP Cloud Platform.
Identity Provisioning Service Identity Provisioning Service automates provisioning and de-provisioning of identities and authorizations for cloud applications. It provides secure, fast, and efficient identity lifecycle management in the cloud. The service can use existing corporate identity stores (LDAP, ABAP, and others) as identity source systems.
Internet of Things Service The Internet of Things Service is designed to facilitate and support the implementation of Internet of Things applications. The service provides interfaces for registering devices and their specific data types, sending data to a database running on SAP Cloud Platform in a secure and efficient manner, storing the data in SAP Cloud Platform, as well as providing easy access to the data stored.
Keystore Service [page 1358] Provides a repository for cryptographic keys and certificates to the applications
hosted on SAP Cloud Platform.
Lifecycle REST API The lifecycle REST API provides functionality for application lifecycle management.
Monitoring Service [page 773] The monitoring service REST API enables you to fetch the overall monitoring status
and detailed metric values for your Java applications.
OAuth 2.0 Service [page 1425] After the OAuth-protected application (resource server) is deployed in SAP Cloud Platform, configure the OAuth authorizations to define the clients authorized to access the application and other communication information with them.
Performance Statistics Service (Beta) [page 785] Performance statistics enable you to monitor the resources used by your applications and to investigate the causes of performance issues.
Persistence Service [page 791] SAP Cloud Platform, persistence service provides in-memory and relational persistence. All maintenance activities, such as data replication, backup, and recovery, are handled by the platform.
Predictive Services SAP Cloud Platform, predictive services is a collection of RESTful web services that
deliver business analytics insights. The services are ready to be integrated in your
cloud applications and extensions.
Profiling Applications [page 1181] Using SAP JVM Profiler, you can analyze resource-related problems in your Java
application regardless of whether the JVM is running locally or on the cloud.
Remote Data Sync Service [page 974] SAP Cloud Platform provides a service for synchronizing huge numbers of remote
databases into a consolidated SAP HANA database in the cloud. This service is
based on SAP SQL Anywhere and its MobiLink technology.
SAP Cloud Platform Identity Authentication Service Identity Authentication is a cloud solution for identity lifecycle management. It provides services for user login, registration, authentication, and access to SAP Cloud Platform applications.
Mobile Services SAP Cloud Platform is an open, standards-based cloud platform that enables simplified mobile application development, configuration, and management.
Portal Service Portal service is a cloud-based solution for easy site creation and consumption with a superior user experience. Designed primarily for mobile consumption, it runs on top of SAP HANA Cloud and is built to operate with SAP HANA, for in-memory computing.
SAP Jam Build socially-infused applications on the SAP Cloud Platform with SAP Jam. SAP Jam delivers secure, social collaboration that extends across SAP's entire technology landscape - giving you social capabilities where and when you need them in your business processes.
For more information, refer to our SAP Jam Developer Guide for HANA Cloud Platform.
SAP Document Center SAP Document Center is a solution that protects your content in an easy-to-use native mobile application, giving users anytime, anywhere access to view, edit, and collaborate on corporate and personal documents.
Enterprise Messaging (Beta) [page 599] Enterprise Messaging (Beta) is SAP's scalable, robust, and reliable messaging-as-a-service in the cloud. This service enables you to manage connectivity between different applications, even ones based on different technology platforms.
SAP Translation Hub SAP Translation Hub enables customers and partners to satisfy the demands of a
global market by translating the short texts of products into additional languages.
Note
Beta features and services can be tested with the free developer account, which you can request at http://hanatrial.ondemand.com.
Caution
You should not use SAP Cloud Platform beta features in productive accounts, as any productive use of the beta
functionality is at the customer's own risk, and SAP shall not be liable for errors or damages caused by the use
of beta features.
Overview
SAP Cloud Platform Connectivity allows SAP Cloud Platform applications to securely access remote services that run on the Internet or on-premise. This service:
● Consists of a Java API that application developers can use to consume remote services.
● Allows account-specific configuration of application connections via HTTP and Mail destinations.
● Offers a technical connectivity solution, which can be used to establish a secure tunnel from the customer
network to an on-demand application in SAP Cloud Platform. At the same time, the customer IT department
has full control and auditability of what is technically exposed to the on-demand world.
● Allows you to make connections to both Java and ABAP on-premise systems.
A company that uses SAP Cloud Platform has been granted an account on the platform to which only authorized
users of the company have access. The company can subscribe applications to its account or deploy its own
applications, and those applications can then be used in this account. The administrator of the Cloud connector
can set up a secure tunnel from the customer network to his or her account. The platform ensures that the tunnel can only be used by the account's applications; applications of other accounts have no access to it. The tunnel itself is encrypted via transport layer security so that connection privacy is guaranteed.
The connectivity service supports the following protocols relevant for both Java and SAP HANA development:
● HTTP Protocol - this protocol enables you to exchange data between your on-demand application and on-premise systems or Internet services. To do this, you create and configure HTTP destinations for the required Web connections. For on-premise connectivity, you can reach backend systems using the Cloud connector via HTTP.
● Mail Protocols - the SMTP protocol allows you to send electronic mail messages from your Web applications using e-mail providers that are accessible on the Internet, such as Google Mail (Gmail). The IMAP and POP3 protocols allow you to retrieve e-mails from the mailbox of your e-mail account. Applications use the standard javax.mail API. The e-mail provider and e-mail account are configured using mail destinations.
● RFC Protocol - this protocol enables you to invoke ABAP function modules. You can create and configure RFC
destinations as well as make connections to back-end systems using the Cloud connector via RFC.
You can create XS destinations for connecting your HANA XS applications to Internet and on-premise services.
For more information, see Consuming SAP Cloud Platform Connectivity (HANA XS) [page 466].
Java Development
● Consume a service from the Internet. More information: Consuming Internet Services (Java Web or Java EE 6
Web Profile) [page 394]
● Make connections between Web applications and on-premise backend services via HTTP protocol. More
information: Consuming Back-End Systems (Java Web or Java EE 6 Web Profile) [page 409]
● Make connections between Web applications and on-premise backend services via RFC protocol. More
information: Tutorial: Invoking ABAP Function Modules in On-Premise ABAP Systems [page 444]
● Establish connections from on-premise systems to SAP Cloud Platform, using the Cloud connector. More
information: SAP Cloud Platform Cloud Connector [page 480]
● Send and fetch e-mails. More information: Sending and Fetching E-Mail [page 453]
Restrictions
● For the on-demand to on-premise connectivity scenario, the currently supported protocols are HTTP(S),
LDAP, and RFC.
● For Internet connections, you are allowed to use any port > 1024. For on-demand to on-premise solutions
there are no port limitations.
● You can use destination configuration files with extension .props, .properties, .jks, and .txt, as well as
files with no extension.
Related Information
In this section, you will learn how to use SAP Cloud Platform Connectivity to connect Web applications to the Internet, make on-demand to on-premise connections to Java and ABAP on-premise systems, and configure destinations to send and fetch e-mail. For all these tasks, you need to create and configure destinations according to the relevant protocol type. For more information, see: Destinations [page 324]
The following user groups are involved in the end-to-end use of the connectivity service:
● Application developers - develop the SAP Cloud Platform application. They create a connectivity-enabled
application by using the connectivity service API.
● Application operators - access SAP Cloud Platform cockpit and are responsible for productive deployment
and operation of an application. They are also responsible for configuring the remote connections that an
application might need.
● IT administrators - set up the connectivity to SAP Cloud Platform in the customer's on-premise network, using
the Cloud connector.
Scenarios
● Making Internet connections between Web applications and external servers via HTTP protocol: Consuming
Internet Services (Java Web or Java EE 6 Web Profile) [page 394]
● Making connections between Web applications and on-premise backend services via HTTP protocol:
Consuming Back-End Systems (Java Web or Java EE 6 Web Profile) [page 409]
● Making connections between Web applications and on-premise backend services via RFC protocol: Tutorial:
Invoking ABAP Function Modules in On-Premise ABAP Systems [page 444]
● Sending and fetching e-mail via mail protocols: Sending and Fetching E-Mail [page 453]
Tips
The Cloud connector provides a light and easy way to establish secure connections from on-premise systems to SAP Cloud Platform accounts. It supports the Microsoft Windows, Linux, and Mac OS X operating systems. For more information, see SAP Cloud Platform Cloud Connector [page 480].
Related Information
Destinations are part of SAP Cloud Platform Connectivity and are used for the outbound communication from a
cloud application to a remote system. They contain the connection details for the remote communication of an
application, which can be configured for each customer to accommodate the specific customer back-end
systems and authentication requirements. For more information, see Destinations [page 324].
Destinations should be used by application developers when they aim to provide applications that:
● Integrate with remote services or back-end systems that need to be configured by customers
● Integrate with remote services or back-end systems that are located in a fenced environment (that is, behind
firewalls and not publicly accessible)
Tip
HTTP clients created by destination APIs allow parallel usage of HTTP client instances (via class
ThreadSafeClientConnManager).
Connectivity APIs
Package Description
org.apache.http http://hc.apache.org
org.apache.http.client http://hc.apache.org/httpcomponents-client-ga/httpclient/
apidocs/org/apache/http/client/package-summary.html
org.apache.http.util http://hc.apache.org/httpcomponents-core-ga/httpcore/
apidocs/org/apache/http/util/package-summary.html
javax.mail https://javamail.java.net/nonav/docs/api/
The SAP Cloud Platform SDK for Java Web uses version 1.4.1
of javax.mail, the SDK for Java EE 6 Web Profile uses
version 1.4.5 of javax.mail, and the SDK for Java Web
Tomcat 7 uses version 1.4.7 of javax.mail.
Destination APIs
All connectivity API packages are visible by default from all Web applications. Applications can consume the
destinations via a JNDI lookup.
Procedure
Prerequisites
You have set up your Java development environment. See also: Setting Up the Development Environment [page
43]
To consume destinations using HttpDestination API, you need to define your destination as a resource in the
web.xml file.
1. An example of a destination resource named myBackend, which is described in the web.xml file, is as follows:
<resource-ref>
<res-ref-name>myBackend</res-ref-name>
<res-type>com.sap.core.connectivity.api.http.HttpDestination</res-type>
</resource-ref>
2. In your servlet code, look up the destination from the JNDI registry, using the resource name defined in web.xml:
import javax.naming.Context;
import javax.naming.InitialContext;
import com.sap.core.connectivity.api.http.HttpDestination;
...
Context ctx = new InitialContext();
HttpDestination destination = (HttpDestination) ctx.lookup("java:comp/env/myBackend");
Note
If you want the lookup name to differ from the destination name, you can specify the lookup name in <res-
ref-name> and the destination name in <mapped-name>, as shown in the following example.
<resource-ref>
<res-ref-name>myLookupName</res-ref-name>
<res-type>com.sap.core.connectivity.api.http.HttpDestination</res-type>
<mapped-name>myBackend</mapped-name>
</resource-ref>
3. With the retrieved HTTP destination, you can then, for example, send a simple GET request to the configured
remote system by using the following code:
import org.apache.http.client.HttpClient;
import org.apache.http.client.methods.HttpGet;
import org.apache.http.HttpResponse;
...
HttpClient client = destination.createHttpClient();
HttpGet get = new HttpGet("/");
HttpResponse response = client.execute(get);
Note
If you want to use <res-ref-name>, which contains "/", the name after the last "/" should be the same as
the destination name. For example, you can use <res-ref-name>connectivity/myBackend</res-
ref-name>. In this case, you should use java:comp/env/connectivity/myBackend as a lookup string.
If you want to get the URL of your configured destination, use the getURI() method. This method returns the URL defined in the destination configuration, converted to a URI.
<resource-ref>
<res-ref-name>connectivity/DestinationFactory</res-ref-name>
<res-type>com.sap.core.connectivity.api.DestinationFactory</res-type>
</resource-ref>
2. In your Java code, you can then look it up and use it in the following way:
Note
If you have two destinations with the same name, one configured on account level and the other on application
level, the getConfiguration() method will return the destination on account level.
The preference order is: subscription level -> account level -> application level.
Related Information
If you need to also add Maven dependencies, take a look at this blog:
See also:
All connectivity API packages are visible by default from all Web applications. Applications can consume the
connectivity configuration via a JNDI lookup.
Context
Besides making destination configurations, you can also allow your applications to use their own HTTP clients. The ConnectivityConfiguration API provides you direct access to the destination configurations of your applications. This API also:
● Can be used independently of the existing destination API so that applications can bring and use their own HTTP client
● Consists of both a public REST API and a Java client API.
The ConnectivityConfiguration API is supported by all runtimes, including Java Web Tomcat 7. For more
information about runtimes, see Application Runtime Container [page 1025].
Procedure
1. To consume connectivity configuration using JNDI, you need to define ConnectivityConfiguration API
as a resource in the web.xml file. An example of a ConnectivityConfiguration resource named
connectivityConfiguration, which is described in the web.xml file, is as follows:
<resource-ref>
<res-ref-name>connectivityConfiguration</res-ref-name>
<res-
type>com.sap.core.connectivity.api.configuration.ConnectivityConfiguration</res-
type>
</resource-ref>
2. In your servlet code, you can look up the ConnectivityConfiguration API from the JNDI registry as follows:
import javax.naming.Context;
import javax.naming.InitialContext;
import com.sap.core.connectivity.api.configuration.ConnectivityConfiguration;
...
Context ctx = new InitialContext();
ConnectivityConfiguration configuration =
(ConnectivityConfiguration) ctx.lookup("java:comp/env/connectivityConfiguration");
3. With the retrieved ConnectivityConfiguration API, you can read all properties of any destination defined
on subscription, application or account level.
Note
If you have two destinations with the same name, one configured on account level and the other on
application level, the getConfiguration() method will return the destination on account level. The
preference order is: subscription level -> account level -> application level.
4. If a truststore and a keystore are defined in the corresponding destination, they can be accessed by using the getKeyStore and getTrustStore methods.
// create sslcontext
TrustManagerFactory tmf =
TrustManagerFactory.getInstance(TrustManagerFactory.getDefaultAlgorithm());
tmf.init(trustStore);
KeyManagerFactory keyManagerFactory =
KeyManagerFactory.getInstance(KeyManagerFactory.getDefaultAlgorithm());
String keyStorePassword = "myPassword";
keyManagerFactory.init(keyStore, keyStorePassword.toCharArray());
SSLContext sslContext = SSLContext.getInstance("TLS");
sslContext.init(keyManagerFactory.getKeyManagers(), tmf.getTrustManagers(), null);
All connectivity API packages are visible by default from all Web applications. Applications can consume the
authentication header provider via a JNDI lookup.
Context
The AuthenticationHeaderProvider API allows your Web applications to use their own HTTP clients, as it
also provides them with authentication token generation (application-to-application SSO, on-premise SSO). This
API also:
● Provides additional helper methods, which facilitate initializing an HTTP client (for example, an authentication method that helps you set headers for application-to-application SSO).
● Consists of both a public REST API and a Java client API.
The AuthenticationHeaderProvider API is supported by all runtimes, including Java Web Tomcat 7. For
more information about runtimes, see Application Runtime Container [page 1025].
Procedure
1. To consume the authentication header provider API using JNDI, you need to define the
AuthenticationHeaderProvider API as a resource in the web.xml file. An example of an
AuthenticationHeaderProvider resource named myAuthHeaderProvider, as described in the
web.xml file, follows:
<resource-ref>
<res-ref-name>myAuthHeaderProvider</res-ref-name>
<res-type>com.sap.core.connectivity.api.authentication.AuthenticationHeaderProvider</res-type>
</resource-ref>
2. In your servlet code, you can look up the AuthenticationHeaderProvider API from the JNDI registry as
follows:
import javax.naming.Context;
import javax.naming.InitialContext;
import com.sap.core.connectivity.api.authentication.AuthenticationHeaderProvider;
...
// look up the provider under the resource name declared in step 1
Context ctx = new InitialContext();
AuthenticationHeaderProvider authHeaderProvider =
    (AuthenticationHeaderProvider) ctx.lookup("java:comp/env/myAuthHeaderProvider");
Tip
We recommend that you pack the HTTP client (Apache or other) inside the lib folder of your Web application
archive.
Restrictions:
● Principal Propagation must be enabled for the account. For more information, see ID Federation with the
Corporate Identity Provider [page 1406] → section "Specifying Custom Local Provider Settings"
● Both applications must run on behalf of the same account.
● The receiving application must use SAML2 authentication.
Note
If you work with the Java Web Tomcat 7 runtime, bear in mind that the code snippets in this section work
properly only with Apache HTTP client version 4.1.3. If you use another (higher) version of the Apache
HTTP client, you should adapt your code.
To learn how to generate on-premise SSO authentication, see Principal Propagation Using HTTP Proxy [page
386].
SAP Cloud Platform provides support for applications to use the SAML Bearer assertion flow for consuming
OAuth-protected resources. In this way, applications do not need to deal with some of the complexities of OAuth
and can reuse existing identity providers for user data. Users are authenticated by using SAML against the
configured trusted identity providers. The SAML assertion is then used to request an access token from an OAuth
authorization server. This access token should be injected in all HTTP requests to the OAuth-protected resources.
Tip
The access tokens are cached by AuthenticationHeaderProvider and renewed automatically. When a
token is about to expire, a new token is created shortly before the old one expires.
The AuthenticationHeaderProvider API provides the following method for generating such headers:
List<AuthenticationHeader> getOAuth2SAMLBearerAssertionHeaders(
    DestinationConfiguration destinationConfiguration);
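Once such headers are generated, each name/value pair must be injected into the outgoing HTTP request. The sketch below shows only that injection step, using the JDK's HttpURLConnection; the header name, token value, and URL are placeholders, not values produced by the real API:

```java
import java.net.HttpURLConnection;
import java.net.URL;

public class HeaderInjection {
    public static void main(String[] args) throws Exception {
        // Placeholder for one header pair as it might be returned by the
        // provider; the name and token value are illustrative only.
        String name = "X-Example-Authorization";
        String value = "Bearer <access-token>";

        URL url = new URL("https://resource.example.com/api");
        HttpURLConnection connection = (HttpURLConnection) url.openConnection();
        // Inject the generated header into the request before sending it.
        connection.setRequestProperty(name, value);

        // No request is actually sent here; we only demonstrate the injection.
        System.out.println(connection.getRequestProperty("X-Example-Authorization"));
    }
}
```

With the real API, you would iterate over the returned list of authentication headers and set each one on the request in the same way.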
SAP Java Connector (SAP JCo) is a middleware component that enables you to develop SAP-compatible
components and applications in Java. SAP JCo supports communication with Application Server ABAP (AS
ABAP) in both directions: inbound calls (Java calls ABAP) and outbound calls (ABAP calls Java).
SAP JCo can be used in desktop applications and Web server applications.
Note
You can find generic information regarding authorizations required for the use of SAP JCo in SAP Note 460089.
To learn in detail about the SAP JCo API, see SAP Java Connector (Standalone Version).
Note
This documentation contains sections not applicable to SAP Cloud Platform. In particular:
● SAP JCo Architecture: CPIC is only used in the last mile from your Cloud connector to the backend. From
the cloud to the Cloud connector, SSL protected communication is used.
● SAP JCo Installation: SAP Cloud Platform already includes all the necessary artifacts.
● SAP JCo Customizing and Integration: In SAP Cloud Platform, the integration is already done by the
runtime. You can concentrate on your business application logic.
● Server Programming: The programming model of JCo in SAP Cloud Platform does not include server-side
RFC communication.
● IDoc Support for External Java Applications: For the time being, there is no IDocLibrary for JCo available in
SAP Cloud Platform.
Related Information
Overview
Connectivity destinations are part of SAP Cloud Platform Connectivity and are used for the outbound
communication of a cloud application to a remote system. They contain the connection details for the remote
communication of an application. Connectivity destinations are represented by symbolic names that are used by
on-demand applications to refer to remote connections. The connectivity service resolves the destination at
runtime based on the symbolic name provided. The result is an object that contains customer-specific
configuration details, such as the URL of the remote system or service, the authentication type, and the
corresponding credentials.
You can use destination files with extension .props, .properties, .jks, and .txt, as well as files with no
extension.
The currently supported destination types are HTTP, Mail and RFC.
● HTTP destination [page 366] - provides data communication via the HTTP protocol and is used for both
Internet and on-premise connections.
● Mail destination [page 456] - specifies an e-mail provider for sending and retrieving e-mails via the SMTP,
IMAP, and POP3 protocols.
● RFC destination [page 430] - makes connections to ABAP on-premise systems via the RFC protocol, using
JCo as the API.
Destinations can be simultaneously configured on three levels: application, consumer account and subscription.
This means it is possible to have one and the same destination on more than one configuration level.
● Application level - The destination is related to an application and its relevant provider account. It is,
however, independent of the consumer account in which the application is running.
● Consumer account level - The destination is related to a particular account.
● Subscription level - The destination is related to the triad <Application, Provider Account, Consumer
Account>.
The runtime tries to resolve a destination in the following order: Subscription level → Consumer account level →
Provider application level.
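This resolution order can be illustrated with a small sketch that checks the three levels in sequence and returns the first match. The map-based store and the string values are illustrative assumptions, not the platform's actual implementation:

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class DestinationResolver {
    // Candidate configuration levels, kept in resolution order:
    // subscription first, then consumer account, then provider application.
    private final Map<String, Map<String, String>> levels = new LinkedHashMap<>();

    public DestinationResolver() {
        levels.put("subscription", new LinkedHashMap<>());
        levels.put("account", new LinkedHashMap<>());
        levels.put("application", new LinkedHashMap<>());
    }

    public void put(String level, String name, String url) {
        levels.get(level).put(name, url);
    }

    // Returns the first match, so a destination on an earlier level
    // shadows one with the same name on a later level.
    public String resolve(String name) {
        for (Map<String, String> level : levels.values()) {
            if (level.containsKey(name)) {
                return level.get(name);
            }
        }
        return null;
    }

    public static void main(String[] args) {
        DestinationResolver r = new DestinationResolver();
        r.put("application", "backend", "http://app.example.com");
        r.put("account", "backend", "http://account.example.com");
        // The account-level entry shadows the application-level one.
        System.out.println(r.resolve("backend"));
    }
}
```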
For more information about the usage of consumer account, provider account and provider application, see
Configuring Destinations from the Console Client [page 326].
When you use the connectivity service 2.x and the Cloud connector 2.x, bear in mind the following runtime
behavior of destination configurations:
● Destination configuration files and Java keystore (JKS) files are cached at runtime. The cache expiration time
is set to a small time interval (currently around 4 minutes). This means that once you update an existing
destination configuration or a JKS file, the application needs about 4 minutes until the new destination
configuration is applied. To avoid this waiting time, the application can be restarted on the cloud; following the
restart, the new destination configuration takes effect immediately.
● When you configure a destination for the first time, it takes effect immediately.
● If you change a mail destination, the application needs to be restarted before the new configuration becomes
effective.
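The caching behavior described above can be sketched as a simple time-to-live cache. The 4-minute TTL mirrors the interval mentioned above; everything else in this sketch is an illustrative assumption, not the platform's implementation:

```java
import java.time.Duration;
import java.time.Instant;
import java.util.HashMap;
import java.util.Map;
import java.util.function.Function;

public class DestinationCache {
    // Assumed TTL; the guide says the expiration is "currently around 4 minutes".
    private final Duration ttl;
    private final Map<String, Entry> cache = new HashMap<>();
    private final Function<String, String> loader;

    private static final class Entry {
        final String value;
        final Instant loadedAt;
        Entry(String value, Instant loadedAt) {
            this.value = value;
            this.loadedAt = loadedAt;
        }
    }

    DestinationCache(Duration ttl, Function<String, String> loader) {
        this.ttl = ttl;
        this.loader = loader;
    }

    // Serve the cached configuration until the TTL elapses; an updated
    // destination therefore becomes visible only after the entry expires
    // (or after an application restart, which clears the cache).
    String get(String name, Instant now) {
        Entry e = cache.get(name);
        if (e == null || Duration.between(e.loadedAt, now).compareTo(ttl) >= 0) {
            e = new Entry(loader.apply(name), now);
            cache.put(name, e);
        }
        return e.value;
    }

    public static void main(String[] args) {
        Map<String, String> store = new HashMap<>();
        store.put("backend", "URL=http://old.example.com");
        DestinationCache cache = new DestinationCache(Duration.ofMinutes(4), store::get);

        Instant t0 = Instant.now();
        System.out.println(cache.get("backend", t0));       // initial load
        store.put("backend", "URL=http://new.example.com"); // configuration updated
        System.out.println(cache.get("backend", t0.plusSeconds(60)));  // still cached
        System.out.println(cache.get("backend", t0.plusSeconds(300))); // TTL elapsed
    }
}
```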
To configure and then use a destination to remotely connect your Java EE or on-demand application, you can use
either of the following methods:
Related Information
You can see examples in the SDK package that you previously downloaded from http://
tools.hana.ondemand.com.
Open the SDK location and go to /tools/samples/connectivity. This folder contains a standard
template.properties file, a weather destination, and a weather.destinations.properties file, which provides all the
necessary properties for uploading the weather destination.
As an application operator, you can configure your application using SAP Cloud Platform console client. You can
configure HTTP, Mail or RFC destinations using a standard properties file.
The tasks listed below demonstrate how to upload, download, and delete connectivity destinations. You can
perform these operations for destinations related to your own account, a provider account, your own application
or an application provided by another account.
To use an application from another account, you must be subscribed to this application through your account.
Note
Destination files must be encoded in ISO 8859-1 character encoding.
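If you generate destination files programmatically, java.util.Properties is a convenient way to meet this requirement, because its OutputStream-based store method always writes ISO 8859-1 and escapes other characters. A minimal sketch, where the property values are examples taken from this guide:

```java
import java.io.ByteArrayOutputStream;
import java.nio.charset.StandardCharsets;
import java.util.Properties;

public class DestinationFileWriter {
    public static void main(String[] args) throws Exception {
        Properties destination = new Properties();
        // Example destination properties; names follow the examples in this guide.
        destination.setProperty("Name", "weather");
        destination.setProperty("Type", "HTTP");
        destination.setProperty("Authentication", "NoAuthentication");

        // Properties.store(OutputStream, ...) always writes ISO 8859-1,
        // escaping any other characters as \uXXXX, which satisfies the
        // encoding requirement for destination files.
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        destination.store(out, "weather destination");
        System.out.print(out.toString(StandardCharsets.ISO_8859_1.name()));
    }
}
```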
Prerequisites
● You have downloaded and set up the console client. For more information, see Setting Up the Console Client
[page 52].
● For specific information about all connectivity restrictions, see SAP Cloud Platform Connectivity [page 311] →
section "Restrictions".
The number of mandatory property keys varies depending on the authentication type you choose. For more
information about HTTP destination properties files, see HTTP Destinations [page 366].
Key stores and trust stores must be stored in JKS files with a standard .jks extension.
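The JKS format is handled by the standard java.security.KeyStore API. The following sketch creates and reloads a keystore in memory exactly as it would be written to and read from a .jks file; the password is an example value:

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.security.KeyStore;

public class JksRoundTrip {
    public static void main(String[] args) throws Exception {
        char[] password = "myPassword".toCharArray(); // example password

        // Create an empty JKS keystore in memory and serialize it,
        // just as it would be written to a standard .jks file.
        KeyStore keyStore = KeyStore.getInstance("JKS");
        keyStore.load(null, password);
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        keyStore.store(out, password);

        // Reload it the same way a consumer would read the .jks file.
        KeyStore reloaded = KeyStore.getInstance("JKS");
        reloaded.load(new ByteArrayInputStream(out.toByteArray()), password);
        System.out.println(reloaded.getType() + " entries=" + reloaded.size());
    }
}
```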
If mandatory fields are missing or data is specified incorrectly, you will be prompted accordingly by the console
client.
For more information about mail destination properties files, see Mail Destinations [page 456].
All properties except Name and Type must start with "jco.client." or "jco.destination.". For more
information about RFC destination properties files, see RFC Destinations [page 430].
If mandatory fields are missing or data is specified incorrectly, you will be prompted accordingly by the console
client.
Tasks
Related Information
Context
The procedure below explains how you can upload destination configuration properties files and certificate files.
You can upload them on account, application, or subscribed application level.
Note
Bear in mind that, by default, your destinations are configured on SAP Cloud Platform, that is the
hana.ondemand.com landscape. If you need to specify a particular landscape host, you need to add the --host
parameter, as shown in the examples. Otherwise, you can skip this parameter.
Procedure
Note
When uploading a destination configuration file that contains a password field, the password value remains
available in the file. However, if you later download this file using the get-destination command, the
password value is no longer visible. Instead, after Password =..., you only see an empty space.
Note
The configuration parameters used by SAP Cloud Platform console client can be defined in a properties file as
well, instead of being specified directly in the command (with the exception of the -password parameter,
which must be specified when the command is executed). When you use a properties file, enter the path to it as
the last command line parameter.
Example:
Related Information
Context
The procedure below explains how you can download (read) destination configuration properties files and
certificate files. You can download them on account, application, or subscribed application level.
You can read destination files with extension .props, .properties, .jks, and .txt, as well as files with no
extension. Destination files must be encoded in ISO 8859-1 character encoding.
Note
Bear in mind that, by default, your destinations are configured on SAP Cloud Platform, that is the
hana.ondemand.com landscape. If you need to specify a particular landscape host, you need to add the --host
parameter, as shown in the examples. Otherwise, you can skip this parameter.
Tips
Note
If you download a destination configuration file that contains a password field, the password value is not
visible. Instead, after Password =..., you only see an empty space. You must obtain the password by other
means.
Note
The configuration parameters used by SAP Cloud Platform console client can be defined in a properties file as
well, instead of being specified directly in the command (with the exception of the -password parameter,
which must be specified when the command is executed). When you use a properties file, enter the path to it as
the last command line parameter. A sample weather properties file can be found in directory <SDK_location>
\tools\samples\connectivity.
Example:
Context
The procedure below explains how you can delete destination configuration properties files and certificate files.
You can delete them on account, application, or subscribed application level.
Note
Bear in mind that, by default, your destinations are configured on SAP Cloud Platform, that is the
hana.ondemand.com landscape. If you need to specify a particular landscape host, you need to add the --host
parameter, as shown in the examples. Otherwise, you can skip this parameter.
Procedure
Note
The configuration parameters used by SAP Cloud Platform console client can be defined in a properties file as
well, instead of being specified directly in the command (with the exception of the -password parameter,
which must be specified when the command is executed). When you use a properties file, enter the path to it as
the last command line parameter.
Example:
Related Information
You can use the Connectivity editor in the Eclipse IDE to configure HTTP, Mail and RFC destinations in order to:
● Connect your Web application to the Internet or make it consume an on-premise backend system via
HTTP(S);
● Send an e-mail from a simple Web application using an e-mail provider that is accessible on the Internet;
● Make your Web application invoke a function module in an on-premise ABAP system via RFC.
You can create, delete and modify destinations to use them for direct connections or export them for further
usage. You can also import destinations from existing files.
Note
Destination files must be encoded in ISO 8859-1 character encoding.
Prerequisites
● You have downloaded and set up your Eclipse IDE. For more information, see Setting Up the Development
Environment [page 43] or Updating Java Tools for Eclipse and SDK [page 53].
● You have created a Java EE application. For more information, see Creating a HelloWorld Application [page
56] or Using Java EE 6 Web Profile [page 1036].
Related Information
Context
The procedure below demonstrates how you can create and configure connectivity destinations (HTTP, Mail or
RFC) on a local SAP Cloud Platform server.
Procedure
Also, a Servers folder is created and appears in the navigation tree of the Eclipse IDE. It contains configurable
folders and files you can use, for example, to change your HTTP or JMX port.
5. On the Servers view, double-click the added server to open its editor.
6. Go to the Connectivity tab view.
a. In the All Destinations section, choose the button to create a new destination.
b. From the dialog window, enter a name for your destination, select its type and then choose OK.
c. In the URL field, enter the URL of the target service to which the destination should refer.
d. In the Authentication dropdown box, choose the authentication type required by the target service to
authenticate the calls.
○ If the target service does not require authentication, choose NoAuthentication.
○ If the target service requires basic authentication, choose BasicAuthentication. You need to enter a
user name and a password.
○ If the target service requires a client certificate authentication, choose
ClientCertificateAuthentication. See Using Destination Certificates (IDE) [page 337].
e. Optional: In the Properties or Additional Properties section, choose the button to specify additional
destination properties.
f. Save the editor.
7. When a new destination is created, the changes take effect immediately.
Related Information
Context
The procedure below demonstrates how you can create and configure connectivity destinations (HTTP, Mail or
RFC) on SAP Cloud Platform.
a. In the All Destinations section, choose the button to create a new destination.
b. From the dialog window, enter a name for your destination, select its type, and then choose OK.
c. In the URL field, enter the URL of the target service to which the destination should refer.
d. In the Authentication dropdown box, choose the authentication type required by the target service to
authenticate the calls.
○ If the target service does not require authentication, choose NoAuthentication.
○ If the target service requires basic authentication, choose BasicAuthentication. You need to enter a
user name and a password.
○ If the target service requires a client certificate authentication, choose
ClientCertificateAuthentication. See Using Destination Certificates (IDE) [page 337].
○ If the target service requires your cloud user authentication, choose PrincipalPropagation. You also
need to select Proxy Type: OnPremise and should enter the additional property
CloudConnectorVersion with value 2.
e. In the Proxy Type dropdown box, choose the required type of proxy connection.
Note
This dropdown box allows you to choose the type of your proxy and is only available when deploying on
SAP Cloud Platform. The default value is Internet. In this case, the destination uses the HTTP proxy for
the outbound communication with the Internet. For consumption of an on-premise target service,
choose the OnPremise option so that the proxy to the SSL tunnel is chosen and the tunnel is
established to the connected Cloud connector.
f. Optional: In the Properties or Additional Properties section, choose the button to specify additional
destination properties.
g. Save the editor. This saves the specified destination configuration in SAP Cloud Platform.
6. When new destinations are created, the changes take effect immediately.
Note
Bear in mind that changes are currently cached with a cache expiration of up to 4 minutes, so if you modify
a destination configuration the changes might not take effect immediately. However, if the relevant Web
application is restarted on the cloud, the destination changes will take effect immediately.
Prerequisites
Context
You can maintain keystore certificates in the Connectivity editor. You can upload, add and delete certificates for
your connectivity destinations. Bear in mind that:
● You can use JKS, PFX and P12 files for destination keystore, and JKS, CRT, CER, DER files for destination
truststore.
● You add certificates in a keystore file and then you upload, add, or delete this keystore.
● You can add certificates only for HTTPS destinations. Keystore is available only for
ClientCertificateAuthentication.
Procedure
Uploading Certificates
1. Press the Upload/Delete keystore button. You can find it in the All Destinations section in the Connectivity
editor.
2. Choose Upload Keystore and select the certificate you want to upload. Choose Open or double-click the
certificate.
Note
You can upload a certificate during creation or editing of a destination, by choosing Manage Keystore or by
pressing the Upload/Delete keystore button.
Deleting Certificates
Related Information
Prerequisites
Note
The Connectivity editor allows importing destination files with extension .props, .properties, and .txt, as
well as files with no extension. Destination files must be encoded in ISO 8859-1 character encoding.
○ If the destination does not contain client certificate authentication, it is saved as a single configuration file.
○ If the destination provides client certificate data, it is saved as an archive, which contains the main
configuration file and a Keystore file.
5. The destination file is imported within the Connectivity editor.
Note
If the properties file contains incorrect properties or values, for example wrong destination type, the editor
only displays the valid ones in the Properties table.
Related Information
Prerequisites
You have imported or created a new destination (HTTP, Mail or RFC) in the Eclipse IDE.
Procedure
○ If the destination does not contain client certificate authentication, it is saved as a single configuration file.
Tip
You can keep the default name of the destination, or rename it to avoid overwriting previously exported
files with the same name.
Next Steps
After exporting the destination, you can open it to check its content. Bear in mind that all password fields are
commented out (with # symbols) and their values are removed.
Example:
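A hypothetical exported HTTP destination might then look as follows. All values are illustrative; only the commented-out Password line with its value removed reflects the behavior described above:

```properties
Name=weather
Type=HTTP
URL=https://weather.example.com
Authentication=BasicAuthentication
User=myUser
#Password=
```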
Related Information
Use the Destinations editor in SAP Cloud Platform cockpit to configure HTTP, Mail and RFC destinations in order
to:
● Connect your Web application to the Internet or make it consume an on-premise back-end system via
HTTP(S)
● Send an e-mail from a simple Web application using an e-mail provider that is accessible on the Internet.
● Make your Web application invoke a function module in an on-premise ABAP system via RFC.
You can create, delete, clone, modify, import and export destinations.
Use this editor to work with destinations on subscription, account, and application level.
Note
Destination files must be encoded in ISO 8859-1 character encoding.
Prerequisites
1. You have logged into the cockpit from the SAP Cloud Platform landing page, depending on your account type.
For more information, see Landscape Hosts [page 41].
2. Depending on the level on which you need to make destination configurations in the Destinations editor,
make sure the following is fulfilled:
○ Subscription level – you need to have at least one application subscribed to your account.
○ Application level – you need to have at least one application deployed on your account.
○ Account level – no prerequisites.
For more information, see Accessing the Destinations Editor [page 345].
Tasks
Related Information
Prerequisites
You have logged into the cockpit from the SAP Cloud Platform landing page, depending on your account type. For
more information, see Landscape Hosts [page 41].
Procedure
To open the editor on subscription level:
1. In the cockpit, select your account name from the Account menu in the breadcrumbs.
2. From the left-side navigation, choose Applications > Subscriptions to open the page with your currently
subscribed Java applications (if any).
3. Select the application for which you need to create a destination.
4. From the left-side panel, choose Destinations.
To open the editor on account level:
1. In the cockpit, select your account name from the Account menu in the breadcrumbs.
2. From the left-side navigation, choose Connectivity > Destinations.
3. The Destinations editor is opened.
To open the editor on application level:
1. In the cockpit, select your account name from the Account menu in the breadcrumbs.
2. From the left-side navigation, choose Applications > Java Applications to open the page with your
currently deployed Java Web applications (if any).
3. Select the application for which you need to create a destination.
4. From the left-side panel, choose Configuration > Destinations.
5. The Destinations editor is opened.
Related Information
Prerequisites
You have logged into the cockpit and opened the Destinations editor.
Context
To learn how to create HTTP, RFC and Mail destinations, follow the steps on the relevant pages:
Related Information
Prerequisites
You have logged into the cockpit and opened the Destinations editor.
Procedure
Note
For more information, see also: HTTP Destinations [page 366].
Note
If you set an HTTPS destination, you also need to add a truststore. For more information, see Using
Destination Certificates (Cockpit) [page 353].
Note
For a detailed description of WebIDE-specific properties, see Connecting Remote Systems.
Related Information
Prerequisites
You have logged into the cockpit and opened the Destinations editor.
Procedure
Note
For a detailed description of RFC-specific properties (JCo properties), see RFC Destinations [page 430].
Related Information
Prerequisites
You have logged into the cockpit and opened the Destinations editor.
Procedure
Prerequisites
You have logged into the cockpit and opened the Destinations editor.
Context
You can use the Check Connection button in the Destinations editor of the cockpit to verify whether the URL
configured for an HTTP destination is reachable and whether a connection to the specified system is possible.
Note
This check is available with Cloud connector version 2.7.1 or higher.
For each destination, the check button is available in the destination detail view and in the destination overview list
(icon Check availability of destination connection in section Actions).
Note
The check does not guarantee that a backend is operational. It only verifies if a connection to the backend is
possible.
This check is supported only for destinations with Proxy Type Internet and OnPremise:
Table 238:
Message: Backend status could not be determined.
Possible reasons:
● The Cloud connector version is less than 2.7.1.
● The Cloud connector is not connected to the account.
● The backend returns an HTTP status code greater than or equal to 500 (server error).
● The Cloud connector is not configured properly.
Solutions:
● Upgrade the Cloud connector to version 2.7.1 or higher.
● Connect the Cloud connector to the corresponding account.
● Check the server status (availability) of the backend system.
● Check the basic Cloud connector configuration steps: Initial Configuration [page 504], Configuring the
Cloud Connector for HTTP [page 387], Configuring the Cloud Connector for RFC [page 437].
Message: Backend is not available in the list of defined system mappings in Cloud connector.
Possible reason: The Cloud connector is not configured properly.
Solution: Check the basic Cloud connector configuration steps: Initial Configuration [page 504].
Message: Resource is not accessible in Cloud connector or backend is not reachable.
Possible reason: The Cloud connector is not configured properly.
Solution: Check the basic Cloud connector configuration steps: Initial Configuration [page 504].
Message: Backend is not reachable from Cloud connector.
Possible reason: The Cloud connector configuration is correct, but the backend is not reachable.
Solution: Check the backend (server) availability.
Prerequisites
You have previously created or imported a connectivity destination (HTTP, Mail or RFC ) in the Destinations editor
of the cockpit.
Procedure
1. In the Destinations editor, go to the existing destination which you want to clone.
Related Information
Prerequisites
You have previously created or imported a connectivity destination (HTTP, Mail or RFC) in the Destinations editor
of the cockpit.
Procedure
Tip
For complete consistency, we recommend that you first stop your application, then apply your
destination changes, and then start the application again. Bear in mind that these steps cause
application downtime.
● Delete a destination:
To remove an existing destination, choose the button. The changes will take effect in up to five minutes.
Related Information
Prerequisites
You have logged into the cockpit and opened the Destinations editor. For more information, see Accessing the
Destinations Editor [page 345].
Context
This page explains how you can maintain truststore and keystore certificates in the Destinations editor. You can
upload, add and delete certificates for your connectivity destinations. Bear in mind that:
● You can only use JKS, PFX and P12 files for destination key store, and JKS, CRT, CER, DER for destination
trust store.
● You can add certificates only for HTTPS destinations. Truststore can be used for all authentication types.
Keystore is available only for ClientCertificateAuthentication.
Uploading Certificates
Note
You can upload a certificate during creation or editing of a destination, by clicking the Upload and Delete
Certificates link.
Deleting Certificates
1. Choose the Certificates button or click the Upload and Delete Certificates link.
2. Select the certificate you want to remove and choose Delete Selected.
3. Upload another certificate, or close the Certificates window.
Related Information
Prerequisites
Note
The Destinations editor allows importing destination files with extension .props, .properties, .jks,
and .txt, as well as files with no extension. Destination files must be encoded in ISO 8859-1 character
encoding.
Procedure
○ If the configuration file contains valid data, it is displayed in the Destinations editor with no errors. The
Save button is enabled so that you can successfully save the imported destination.
○ If the configuration file contains invalid properties or values, under the relevant fields in the Destinations
editor are displayed error messages in red which prompt you to correct them accordingly.
Related Information
Prerequisites
You have created a connectivity destination (HTTP, Mail or RFC) in the Destinations editor.
○ If the destination does not contain client certificate authentication, it is saved as a single configuration file.
○ If the destination provides client certificate data, it is saved as an archive, which contains the main
configuration file and a JKS file.
Related Information
User → jco.client.user
Password → jco.client.passwd
Note
For security reasons, do not use these additional properties but use the corresponding main properties' fields.
Related Information
Overview
The connectivity service provides a secure way of forwarding the identity of an on-demand user to the Cloud
connector, and from there to the back end of the relevant on-premise system. This process is called principal
propagation. It uses SAML tokens as the exchange format for the user information. User mapping takes place in
the back end and, in this way, either the token is forwarded directly to the back end or an X.509 certificate is
generated, which is then used in the back end.
Restriction
This authentication is only applicable if you want to connect to your on-premise system via the Cloud
connector.
How It Works
Table 239: Process in Steps
1. The user authenticates at the Web application front end via the IDP (Identity Pro
vider) using a standard SAML Web SSO profile. When the back-end connection is
established by the Web application, the destination service (re)uses the received
SAML assertion to create the connection to the on-premise system (BE1-BEm).
2. The Cloud connector validates the received SAML assertion a second time, extracts the attributes, and
uses its STS (Security Token Service) component to issue a new token (an X.509 certificate) with the same
or similar attributes to assert the identity to the back end.
3. The Cloud connector and the Web application(s) share the same SP identity,
that is, the trust is only set up once in the IDP.
You can create and configure connectivity destinations making use of the PrincipalPropagation property in the
Eclipse IDE and in the cockpit. Bear in mind that this property is only available for destination configurations
created in the cloud.
Tasks
Related Information
● Call an Internet service using a simple application that queries some information from a public service:
Consuming Internet Services (Java Web or Java EE 6 Web Profile) [page 394]
Consuming Internet Services (Java Web Tomcat 7) [page 401]
● Call a service from a fenced customer network using a simple application that consumes an on-premise ping
service:
Consuming Back-End Systems (Java Web or Java EE 6 Web Profile) [page 409]
Consuming Back-End Systems (Java Web Tomcat 7) [page 419]
You can consume on-premise back-end services in two ways – via HTTP destinations and via the HTTP Proxy. For
more information, see:
To create a loopback connection, you can use the dedicated HTTP port bound to localhost. The port number can
be obtained from the cloud environment variable HC_LOCAL_HTTP_PORT.
For more information, see Using Cloud Environment Variables [page 1040] → section "List of Environment
Variables".
Note
When deploying locally from the Eclipse IDE or the console client, the HTTP port may differ.
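Reading the port variable at runtime might look like this minimal sketch; the 8080 fallback for local deployment is an assumption for illustration, not documented behavior:

```java
public class LoopbackUrl {
    public static void main(String[] args) {
        // In the cloud, HC_LOCAL_HTTP_PORT holds the dedicated localhost
        // HTTP port; locally the variable may be absent, so fall back.
        String port = System.getenv("HC_LOCAL_HTTP_PORT");
        if (port == null) {
            port = "8080"; // assumed local default; adjust to your setup
        }
        String loopbackUrl = "http://localhost:" + port + "/";
        System.out.println(loopbackUrl);
    }
}
```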
Related Information
Tutorial: Using the Keystore Service for Client Side HTTPS Connections [page 1363]
Overview
By default, all connectivity API packages are visible from all Web applications. In this standard case, applications
can consume the destinations via a JNDI lookup. For more information, see Connectivity and Destination APIs
[page 314].
Caution
● If you use the SDK for Java Web, we recommend (but do not require) that you create a destination before
deploying the application.
● If you use the SDK for Java EE 6 Web Profile, you must create a destination before deploying the application.
● If you use the SDK for Java Web Tomcat 7, the DestinationFactory API is not supported. Instead, you can
use the ConnectivityConfiguration API [page 318].
Tip
When you know in advance the names of all destinations you need, it is better to use declared destination
resources. Otherwise, we recommend using DestinationFactory.
Procedure
<resource-ref>
<res-ref-name>connectivity/DestinationFactory</res-ref-name>
<res-type>com.sap.core.connectivity.api.DestinationFactory</res-type>
</resource-ref>
import com.sap.core.connectivity.api.DestinationFactory;
import com.sap.core.connectivity.api.http.HttpDestination;
...
Context ctx = new InitialContext();
DestinationFactory destinationFactory
=(DestinationFactory)ctx.lookup(DestinationFactory.JNDI_NAME);
HttpDestination destination = (HttpDestination)
destinationFactory.getDestination("myBackend");
3. With the retrieved HTTP destination, you can then, for example, send a simple GET request to the configured
remote system by using the following code:
import org.apache.http.client.HttpClient;
import org.apache.http.client.methods.HttpGet;
import org.apache.http.HttpResponse;
...
// code to call service "myService" on the system configured in the given destination
HttpClient httpClient = destination.createHttpClient();
HttpGet get = new HttpGet("myService");
HttpResponse resp = httpClient.execute(get);
Overview
HTTP destinations provide data communication via the HTTP protocol and are used for both Internet and on-premise
connections.
The runtime tries to resolve a destination in the order: Subscription Level → Account Level → Application Level. By
using the optional "DestinationProvider" property, you can limit a destination to the application level, that is,
the runtime then tries to resolve the destination on the application level only.
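The resolution order can be sketched as a first-match lookup; the maps below stand in for the real configuration stores and are purely illustrative:

```java
import java.util.Map;

// Sketch of the destination resolution order described above:
// Subscription Level -> Account Level -> Application Level.
public class DestinationResolver {
    public static String resolve(String name,
                                 Map<String, String> subscriptionLevel,
                                 Map<String, String> accountLevel,
                                 Map<String, String> applicationLevel) {
        if (subscriptionLevel.containsKey(name)) {
            return subscriptionLevel.get(name);
        }
        if (accountLevel.containsKey(name)) {
            return accountLevel.get(name);
        }
        return applicationLevel.get(name); // null if defined nowhere
    }
}
```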
Table 240:
Property Description
Note
If you use Java Web Tomcat 7 runtime container, the DestinationProvider property is not supported.
Instead, you can use AuthenticationHeaderProvider API [page 320].
Example
Name=weather
Type=HTTP
Authentication=NoAuthentication
DestinationProvider=Application
● Internet - The application can connect to an external REST or SOAP service on the Internet.
● OnPremise - The application can connect to an on-premise back-end system through the Cloud connector.
The proxy type used for a destination must be specified by the destination property ProxyType. The property's
default value (if not configured explicitly) is Internet.
If you work in your local development environment behind a proxy server and want to use a service from the
Internet, you need to configure your proxy settings on JVM level. To do this, proceed as follows:
1. On the Servers view, double-click the added server and choose Overview to open the editor.
2. Click the Open Launch Configuration link.
3. Choose the (x)=Arguments tab page.
4. In the VM Arguments box, add the following row:
-Dhttp.proxyHost=yourproxyHost -Dhttp.proxyPort=yourProxyPort -
Dhttps.proxyHost=yourproxyHost -Dhttps.proxyPort=yourProxyPort
5. Choose OK.
6. Start/restart your SAP HANA Cloud local runtime.
For more information and example, see Consuming Internet Services (Java Web or Java EE 6 Web Profile) [page
394].
● When using the Internet proxy type, you do not need to perform any additional configuration steps.
● When using the OnPremise proxy type, you configure the setting the standard way through the Connectivity
editor in the Eclipse IDE.
For more information and example, see Consuming Back-End Systems (Java Web or Java EE 6 Web Profile)
[page 409].
Configuring Authentication
When creating an HTTP destination, you can use different authentication types for access control:
Related Information
Context
The server certificate authentication is applicable for all client authentication types, described below.
Properties
Table 241:
Property Description
TrustStoreLocation Path to the JKS file that contains the trusted certificates (Certificate Authorities) used to verify the remote server:
1. When used in a local environment: the relative path to the JKS file. The root path is the server's location on the file system.
2. When used in a cloud environment: the name of the JKS file.
Note
The default JDK truststore is appended to the truststore defined in the destination configuration. As a result, the destination uses both truststores simultaneously. If the TrustStoreLocation property is not specified, the JDK truststore is used as the default truststore for the destination.
TrustStorePassword Password for the JKS trust store file. This property is mandatory if TrustStoreLocation is used.
TrustAll If this property is set to TRUE in the destination, the server certificate is not checked for SSL connections. It is intended for test scenarios only and should not be used in production (since the SSL server certificate is not checked, the server is not authenticated). The possible values are TRUE and FALSE; the default value is FALSE (that is, if the property is not present at all).
HostnameVerifier Optional property with two possible values: Strict and BrowserCompatible. It specifies how the server host name is matched against the names stored in the server's X.509 certificate. This verification is only applied if TLS or SSL protocols are used, and is not applied if the TrustAll property is specified. The default value (used if no value is explicitly specified) is Strict.
Note
You can upload truststore JKS files using the same command as for uploading a destination configuration properties
file - you only need to specify the JKS file instead of the destination configuration file.
Note
Connections to remote services which require Java Cryptography Extension (JCE) unlimited strength
jurisdiction policy are not supported.
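A destination configuration using these truststore properties might look like the following sketch (the name, URL, file name, and password are placeholders, not values from this guide):

```
Name=secure-backend
Type=HTTP
URL=https://backend.example.com
ProxyType=Internet
Authentication=NoAuthentication
TrustStoreLocation=myTrustStore.jks
TrustStorePassword=myPassword
```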
Related Information
Context
By default, all SAP systems accept SAP assertion tickets for user propagation.
Note
The SAP assertion ticket is a special type of logon ticket. For more information, see SAP Logon Tickets and
Logon Using Tickets.
The aim of the SAPAssertionSSO destination is to generate such an assertion ticket in order to propagate the
currently logged-on SAP Cloud Platform user to an SAP back-end system. You can only use this authentication
type if the user IDs on both sides are the same. The following diagram shows the elements of the configuration
process on the SAP Cloud Platform and in the corresponding back-end system:
1. Configure the back-end system so that it can accept SAP assertion tickets signed by a trusted X.509 key pair.
For more information, see Configuring a Trust Relationship for SAP Assertion Tickets.
2. Create and configure a SAPAssertionSSO destination by using the properties listed below, and deploy it on
SAP Cloud Platform.
○ Configuring Destinations from the Cockpit [page 344]
○ Configuring Destinations from the Console Client [page 326]
Note
Configuring SAPAssertionSSO destinations from the Eclipse IDE is not yet supported.
Property Description
ProxyType You can use both proxy types Internet and OnPremise.
Example
Name=weather
Type=HTTP
Authentication=SAPAssertionSSO
IssuerSID=JAV
IssuerClient=000
RecipientSID=SAP
RecipientClient=100
Certificate=MIICiDCCAkegAwI...rvHTQ\=\=
SigningKey=MIIBSwIB...RuqNKGA\=
Context
The aim of the PrincipalPropagation destination is to forward the identity of an on-demand user to the Cloud
connector, and from there to the back-end of the relevant on-premise system. In this way, the on-demand user
no longer needs to provide their identity every time they connect to an on-premise system via the same Cloud
connector.
Configuration Steps
You can create and configure a PrincipalPropagation destination by using the properties listed below, and deploy it
on SAP Cloud Platform. For more information, see:
Note
This property is only available for destination configurations created on the cloud.
Properties
Property Description
Example
Name=OnPremiseDestination
Type=HTTP
URL=http://virtualhost:80
Authentication=PrincipalPropagation
ProxyType=OnPremise
Related Information
Context
SAP Cloud Platform provides support for applications to use the SAML Bearer assertion flow for consuming
OAuth-protected resources. In this way, applications do not need to deal with some of the complexities of OAuth
and can reuse existing identity providers for user data. Users are authenticated by using SAML against the
configured trusted identity providers. The SAML assertion is then used to request an access token from an OAuth
authorization server. This access token is automatically injected in all HTTP requests to the OAuth-protected
resources.
Tip
The access tokens are renewed automatically: when a token is about to expire, a new one is created shortly before
the old one expires.
You can create and configure an OAuth2SAMLBearerAssertion destination by using the properties listed below,
and deploy it on SAP Cloud Platform. For more information, see:
Note
Configuring OAuth2SAMLBearerAssertion destinations from the Eclipse IDE is not yet supported.
If you use proxy type OnPremise, both OAuth server and the protected resource have to be located on premise
and exposed via the SAP Cloud Platform cloud connector. Make sure to set URL to the virtual address of the
protected resource and tokenServiceURL to the virtual address of the OAuth server (see section Properties
below).
Note
The combination of an on-premise OAuth server with a protected resource on the Internet is not supported, and
neither is an OAuth server on the Internet with an on-premise protected resource.
Properties
The table below lists the destination properties needed for the OAuth2SAMLBearerAssertion authentication type. The
values for these properties can be found in the documentation of the particular provider of the OAuth-protected
services. Usually, only a subset of the optional properties is required by a particular service provider.
Table 242:
Property Description
Required
Type Destination type. Use HTTP as a value for all HTTP(S) desti
nations.
ProxyType You can use both proxy types Internet and OnPremise.
Additional
nameQualifier Security domain of the user for which the access token will be requested
SkipSSOTokenGenerationWhenNoUser If this parameter is set and there is no user logged in, token
generation is skipped, thus allowing anonymous access to
public resources. If set, it may have any value.
Note
When the OAuth authorization server is called, it accepts the trust settings of the destination. For more
information, see Server Certificate Authentication [page 368].
Example
The connectivity destination below provides HTTP access to the OData API of SuccessFactors Jam.
URL=https://demo.sapjam.com/OData/OData.svc
Name=sap_jam_odata
TrustAll=true
ProxyType=Internet
Type=HTTP
Authentication=OAuth2SAMLBearerAssertion
tokenServiceURL=https://demo.sapjam.com/api/v1/auth/token
clientKey=Aa1Bb2Cc3DdEe4F5GHIJ
audience=cubetree.com
nameQualifier=www.successfactors.com
Context
AppToAppSSO destinations are used in application-to-application communication scenarios where the caller needs
to propagate its logged-in user. Both applications are deployed on SAP Cloud Platform.
Configuration Steps
1. Configure your account to allow principal propagation. For more information, see ID Federation with the
Corporate Identity Provider [page 1406] → section "Specifying Custom Local Provider Settings".
Note
This setting is done per account, which means that once set to Enabled all applications within the account
will accept user propagation.
2. Create and configure an AppToAppSSO destination by using the properties listed below, and deploy it on SAP
Cloud Platform. For more information, see:
○ Configuring Destinations from the Cockpit [page 344]
○ Configuring Destinations from the Console Client [page 326]
Note
Configuring AppToAppSSO destinations from the Eclipse IDE is not yet supported.
Table 243:
Property Description
Type Destination type. Use HTTP as a value for all HTTP(S) desti
nations.
SessionCookieNames Optional.
Note
If a session cookie name has a variable part, you can specify it as a regular expression.
Example:
JSESSIONID, JTENANTSESSIONID_.*,
CookieName, Cookie*Name, CookieName.*
Note
Spaces after the commas are optional.
Note
The recommended value for a target Java application on SAP Cloud Platform is JTENANTSESSIONID_.*; for an SAP
HANA XS application, it is xsId.*.
Note
If not specified, both applications must be consumed in the
same account.
SkipSSOTokenGenerationWhenNoUser Optional.
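How a SessionCookieNames entry with a variable part matches actual cookie names can be sketched with standard Java regular expressions (the cookie names below are illustrative):

```java
import java.util.regex.Pattern;

// Checks a cookie name against a SessionCookieNames entry, treating the
// entry as a regular expression (as described for variable parts above).
public class SessionCookieNameMatcher {
    public static boolean matches(String configuredEntry, String cookieName) {
        return Pattern.matches(configuredEntry, cookieName);
    }
}
```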
Example
#
#Wed Jan 13 12:25:47 UTC 2016
Name=apptoapp
URL=https://someurl.com
ProxyType=Internet
Type=HTTP
SessionCookieNames=JTENANTSESSIONID_.*
Authentication=AppToAppSSO
Related Information
Context
This section lists the supported client authentication types and the relevant supported properties.
No Authentication
This is used for destinations that refer to a service on the Internet or an on-premise system that does not require
authentication. The relevant property value is:
Table 244:
Authentication=NoAuthentication
Note
When a destination is using HTTPS protocol to connect to a Web resource, the JDK truststore is used as
truststore for the destination.
Basic Authentication
This is used for destinations that refer to a service on the Internet or an on-premise system that requires basic
authentication. The relevant property value is:
Table 245:
Authentication=BasicAuthentication
Table 246:
Property Description
Password Password for the basic authentication user
Preemptive If this property is not set or is set to TRUE (that is, the default behavior is to send
preemptively), the authentication token is sent preemptively. Otherwise, it
relies on the challenge from the server (401 HTTP code). The default value (used if
no value is explicitly specified) is TRUE. For more information about preemptiveness,
see http://tools.ietf.org/html/rfc2617#section-3.3 .
Note
When a destination is using the HTTPS protocol to connect to a Web resource, the JDK truststore is used as
truststore for the destination.
Note
Basic Authentication and No Authentication can be used in combination with
ProxyType=OnPremise. In this case, the CloudConnectorLocationId property can also be specified.
Starting with SAP HANA Cloud connector 2.9.0, it is possible to connect multiple Cloud connectors to an
account as long as their location IDs are different. The property value defines the location ID identifying the Cloud
connector over which the connection is opened.
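A basic-authentication destination might be configured as in this sketch (the URL, user name, and password are placeholders):

```
Name=backend-basic
Type=HTTP
URL=https://backend.example.com
Authentication=BasicAuthentication
User=myUser
Password=myPassword
```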
Client Certificate Authentication
This is used for destinations that refer to a service on the Internet. The relevant property value is:
Table 247:
Authentication=ClientCertificateAuthentication
Table 248:
Property Description
KeyStoreLocation Path to the JKS file that contains the client certificate(s) for authentication against a remote server:
1. When used in a local environment: the relative path to the JKS file. The root path is the server's location on the file system.
2. When used in a cloud environment: the name of the JKS file.
KeyStorePassword The password for the key storage. This property is mandatory if KeyStoreLocation is used.
Note
You can upload keystore JKS files using the same command as for uploading a destination configuration properties
file - you only need to specify the JKS file instead of the destination configuration file.
Configuration
Related Information
LDAP destinations carry connectivity details for accessing systems over Lightweight Directory Access Protocol
(LDAP) as specified in RFC 4511 . In combination with the SAP Cloud Platform cloud connector they enable SAP
Cloud Platform applications to access LDAP servers in an on-premise corporate network. LDAP destinations are
intended to be used with the Java JNDI/LDAP Service Provider.
For more information on how to use the Java JNDI/LDAP Service Provider, see http://docs.oracle.com/javase/7/docs/technotes/guides/jndi/jndi-ldap.html .
Proxy Type ldap.proxyType Possible values: Internet or OnPremise. If the proxy type is OnPremise, the resulting property is java.naming.ldap.factory.socket with the value com.sap.core.connectivity.api.ldap.LdapOnPremiseSocketFactory.
Example: ldap://ldapserver.examplecompany.com:389
Example: serviceuser@examplecompany.com
Sample Code
package com.sap.cloud.example.ldap;

import java.io.IOException;
import java.util.Properties;

import javax.annotation.Resource;
import javax.naming.NamingEnumeration;
import javax.naming.NamingException;
import javax.naming.directory.DirContext;
import javax.naming.directory.InitialDirContext;
import javax.naming.directory.SearchControls;
import javax.naming.directory.SearchResult;
import javax.servlet.ServletException;
import javax.servlet.annotation.WebServlet;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

import com.sap.core.connectivity.api.configuration.ConnectivityConfiguration;
import com.sap.core.connectivity.api.configuration.DestinationConfiguration;

/**
 * Servlet that obtains the LDAP destination, connects to the specified LDAP server,
 * and searches for users.
 */
@WebServlet("/*")
public class LdapExample extends HttpServlet {
    private static final long serialVersionUID = 1L;

    private static final String DESTINATION_NAME = "example-ldap-destination";
    private static final String LDAP_PATH_TO_USERS = "ou=users,dc=examplecompany,dc=com";
    private static final String LDAP_FILTER_MATCHING_USERS = "(objectClass=person)";

    @Resource(name = "ConnectivityConfiguration")
    private static ConnectivityConfiguration connectivityConfiguration;

    protected void doGet(HttpServletRequest request, HttpServletResponse response)
            throws ServletException, IOException {
        DestinationConfiguration destination =
                connectivityConfiguration.getConfiguration(DESTINATION_NAME);
        Properties properties = new Properties();
        properties.putAll(destination.getAllProperties());
        try {
            DirContext context = new InitialDirContext(properties);
            SearchControls controls = new SearchControls();
            controls.setSearchScope(SearchControls.SUBTREE_SCOPE);
            NamingEnumeration<SearchResult> result =
                    context.search(LDAP_PATH_TO_USERS, LDAP_FILTER_MATCHING_USERS, controls);
            response.getWriter().append("Found users:<br/><br/>");
            while (result.hasMore()) {
                response.getWriter().append(result.next().toString()).append("<br/><br/>");
            }
        } catch (NamingException e) {
            // report LDAP failures as a servlet error
            throw new ServletException(e);
        }
    }
}
Overview
The connectivity service provides a standard HTTP Proxy for on-premise connectivity to be accessible by any
application. Proxy host and port are available as the environment variables HC_OP_HTTP_PROXY_HOST and
HC_OP_HTTP_PROXY_PORT.
Note
● The HTTP Proxy provides a more flexible way to use on-premise connectivity via standard HTTP clients. It
is not suitable for other protocols, such as RFC or Mail; HTTPS requests do not work either.
● The previous alternative, that is, using on-premise connectivity via existing HTTP Destination API, is still
supported. For more information, see DestinationFactory API [page 364].
Multitenancy Support
By default, all applications are started in multitenant mode. Such applications are responsible for propagating
consumer accounts to the HTTP Proxy, using the SAP-Connectivity-ConsumerAccount header. This header is
mandatory during the first request of each HTTP connection. HTTP connections are associated with one
consumer account and cannot be used with another account. If the SAP-Connectivity-ConsumerAccount
header is sent after the first request, and its value differs from the value in the first request, the Proxy
returns HTTP response code 400.
Starting with SAP HANA Cloud connector 2.9.0, it is possible to connect multiple Cloud connectors to an account
as long as their location IDs are different. Using the SAP-Connectivity-SCC-Location_ID header, you can
specify the Cloud connector over which the connection is opened. If this header is not specified, the
connection is opened to the Cloud connector that is connected without any location ID, which is also the case
for all Cloud connector versions prior to 2.9.0.
If an application VM is started for one consumer account, this account is known to the HTTP Proxy and the
application does not need to send the SAP-Connectivity-ConsumerAccount header.
On multitenant VMs, applications are responsible for propagating the consumer account via the SAP-Connectivity-
ConsumerAccount header. The following example shows how this can be performed.
On single-tenant VMs, the consumer account is known and account propagation via the header is not needed. The
following example demonstrates this case.
// create an HTTP client and configure it to use the on-premise HTTP Proxy
HttpClient httpClient = new DefaultHttpClient();
httpClient.getParams().setParameter(ConnRoutePNames.DEFAULT_PROXY,
        new HttpHost(proxyHost, proxyPort));
HttpGet request = new HttpGet("http://virtualhost:1234");
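Resolving the proxy address from the environment variables mentioned above can be sketched as follows (the fallback values are assumptions for local runs, where the variables are typically unset):

```java
import java.net.InetSocketAddress;
import java.net.Proxy;

// Builds a java.net.Proxy for the on-premise HTTP Proxy from the
// HC_OP_HTTP_PROXY_HOST / HC_OP_HTTP_PROXY_PORT environment variables.
public class OnPremiseProxy {
    public static Proxy fromEnvironment(String fallbackHost, int fallbackPort) {
        String host = System.getenv("HC_OP_HTTP_PROXY_HOST");
        String port = System.getenv("HC_OP_HTTP_PROXY_PORT");
        if (host == null) {
            host = fallbackHost; // assumed fallback for local testing
        }
        int portNumber = (port != null) ? Integer.parseInt(port) : fallbackPort;
        return new Proxy(Proxy.Type.HTTP,
                InetSocketAddress.createUnresolved(host, portNumber));
    }
}
```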
Related Information
Context
The HTTP Proxy can forward the identity of an on-demand user to the Cloud connector, and from there to the
back-end of the relevant on-premise system. In this way, on-demand users no longer need to provide their
identity every time they connect to on-premise systems via one and the same Cloud connector. To
propagate the logged-in user, an application must use the AuthenticationHeaderProvider API to generate a
header, which it then embeds in the HTTP request to the on-premise system.
Restrictions
● IDPs used by applications protected by SAML2 have to be denoted as trustworthy for the Cloud connector.
● Non-SAML2 protected applications have to be denoted themselves as trustworthy for the Cloud connector.
Example
Related Information
Overview
This section helps you to configure your SAP Cloud Platform Cloud Connector [page 480] when you are working
via the HTTP protocol.
Related Information
In order to set up mutual authentication between the Cloud connector and any back-end system it connects to,
you can import an X.509 client certificate into the Cloud connector. The Cloud connector then uses this so-
called "system certificate" for all HTTPS requests to back-ends that request or require a client certificate. This
means that the CA which signed the Cloud connector's client certificate needs to be trusted by all back-end
systems to which the Cloud connector is supposed to connect.
As of version 2.6.0, there is a second option - starting a Certificate Signing Request procedure, similar to
the UI certificate described in Exchanging UI Certificates in the Administration UI [page 502].
If a system certificate has been imported successfully, its distinguished name, the name of the issuer, and the
validity dates are displayed:
If a system certificate is no longer required, it can be deleted. To do this, use the respective button and confirm the
deletion. If you need the public key for establishing trust with a server, you can simply export the full chain via the
Export button.
Related Information
To allow your on-demand applications to access a certain back-end system on the intranet, you need to insert an
extra line into the Cloud connector access control management.
4. Protocol: This field allows you to decide whether the Cloud connector should use HTTP or HTTPS for the
connection to the back-end system. Note that this is completely independent from the setting on cloud side.
Thus, even if the HTTP destination on cloud side specifies "http://" in its URL, you can select HTTPS. This
way, you are ensured that the entire connection from the on-demand application to the actual back-end
system (provided through the SSL tunnel) is SSL-encrypted. The only prerequisite is that the back-end
system supports HTTPS on that port. For more information, see Initial Configuration (HTTP) [page 387].
○ If you specify HTTPS and there is a "system certificate" imported in the Cloud connector, the latter
attempts to use that certificate for performing a client-certificate-based login to the back-end system.
○ If there is no system certificate imported, the Cloud connector opens an HTTPS connection without client
certificate.
6. Virtual Host specifies the host name exactly as it is specified as the URL property in the HTTP destination
configuration in SAP Cloud Platform. The virtual host can be a fake name and does not need to exist. The
Virtual Port allows you to distinguish between different entry points of your back-end system, for example,
HTTP/80 and HTTPS/443, and have different sets of access control settings for them. For example, some
non-critical resources may be accessed by HTTP, while some other critical resources are to be called using
HTTPS only. The fields are pre-populated with the values of the Internal Host and Internal Port. If you don't
modify them, you also need to provide your internal host and port in the cloud-side destination configuration, or
in the URL used by your HTTP client.
7. Principal Type defines what kind of principal is used when configuring a destination on the cloud side using
this system mapping with authentication type Principal Propagation. Regardless of what you choose,
you need to make sure that the general configuration for the principal type has been done to make it work
correctly. For destinations using different authentication types, this setting is ignored. If you choose None as
principal type, it is not possible to use principal propagation to this system.
9. The summary shows information about the system to be stored. When saving the host mapping, you can
trigger a ping from the Cloud connector to the internal host by using the Check availability of internal host check
box. This allows you to make sure the Cloud connector can indeed access the internal system, and helps you
catch basic problems, such as spelling mistakes or firewall issues between the Cloud connector and the
internal host. If the ping to the internal host is successful, the Cloud connector saves the mapping without any
remark. If it fails, a warning pops up stating that the host is not reachable; details about the reason are
available in the log files. You can execute such a check at any time later for all selected systems in the Access
Control overview.
In addition to allowing access to a particular host and port, you also need to specify which URL paths (Resources)
are allowed to be invoked on that host. The Cloud connector uses very strict white-lists for its access control, so
only those URLs for which you explicitly granted access are allowed. All other HTTP(S) requests are denied by the
Cloud connector.
To define the permitted URLs (Resources) for a particular back-end system, choose the line corresponding to that
back-end system and choose Add in section Resources Accessible On... below. A dialog appears prompting you to
enter the specific URL path that you want to allow to be invoked.
The Enabled checkbox allows you to specify whether the resource shall initially be enabled or disabled. (See the
following section for an explanation of enabled/disabled resources.)
In some cases, it is useful for testing purposes to temporarily disable certain resources without having to delete
them from the configuration. This allows you to easily re-provide access to these resources at a later point of time
without having to type in everything once again.
● To enable the resource again, select it and choose the Enable button.
● It is also possible to mark multiple lines and then to disable/enable all of them in one go by clicking the
Enable/Disable icons in the top row.
Examples:
● /production/accounting and Path only (sub-paths are excluded) are selected. Only requests of the form
GET /production/accounting or GET /production/accounting?name1=value1&name2=value2...
are allowed. (GET can also be replaced by POST, PUT, DELETE, and so on.)
● /production/accounting and Path and all sub-paths are selected. All requests of the form GET /
production/accounting-plus-some-more-stuff-here?name1=value1... are allowed.
● / and Path and all sub-paths are selected. All requests to this server are allowed.
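The matching rules in the examples above can be sketched as follows (query strings are ignored when matching, as the examples show; the class and method names are illustrative):

```java
// Sketch of the access-control matching described above:
// pathOnly = true  -> the request path must equal the resource exactly;
// pathOnly = false -> any request path starting with the resource is allowed.
public class ResourceAccessCheck {
    public static boolean isAllowed(String resource, boolean pathOnly, String requestUri) {
        int queryStart = requestUri.indexOf('?');
        String path = (queryStart >= 0) ? requestUri.substring(0, queryStart) : requestUri;
        return pathOnly ? path.equals(resource) : path.startsWith(resource);
    }
}
```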
Related Information
1.4.1.1.4.5 Tutorials
The connectivity service allows a secure, reliable, and easy-to-consume access to remote services running either
on the Internet or in an on-premise network.
Use Cases
The tutorials in this section show how you can make connections to Internet services and on-premise networks:
Consuming Internet Services (Java Web or Java EE 6 Web Profile) [page 394]
Consuming Back-End Systems (Java Web or Java EE 6 Web Profile) [page 409]
Context
This step-by-step tutorial demonstrates the consumption of Internet services using the Apache HTTP Client. The
tutorial also shows how a connectivity-enabled Web application can be deployed on a local server and on the
cloud.
The servlet code, the web.xml content, and the destination file (outbound-internet-destination) used in
this tutorial are mapped to the connectivity sample project located in <SDK_location>/samples/
connectivity. You can directly import this sample in your Eclipse IDE. For more information, see Importing
Samples as Eclipse Projects [page 62].
Prerequisites
You have downloaded and set up your Eclipse IDE, SAP Cloud Platform Tools for Java, and SDK.
For more information, see Setting Up the Development Environment [page 43].
Note
You need to install SDK for Java Web or SDK for Java EE 6 Web Profile.
5. Choose Finish so that the ConnectivityServlet.java servlet is created and opened in the Java editor.
6. Go to ConnectivityHelloWorld WebContent WEB-INF and open the web.xml file.
7. Choose the Source tab page.
8. Add the following code block to the <web-app> element:
<resource-ref>
<res-ref-name>outbound-internet-destination</res-ref-name>
<res-type>com.sap.core.connectivity.api.http.HttpDestination</res-type>
</resource-ref>
Note
The value of the <res-ref-name> element in the web.xml file should match the name of the destination
that you want to be retrieved at runtime. In this case, the destination name is outbound-internet-
destination.
9. Replace the entire servlet class with the following one to make use of the destination API. The destination API
is visible by default for cloud applications and must not be added explicitly to the application class path.
package com.sap.cloud.sample.connectivity;
import java.io.IOException;
import java.io.InputStream;
import static java.net.HttpURLConnection.HTTP_OK;
import javax.naming.Context;
import javax.naming.InitialContext;
import javax.naming.NamingException;
import javax.servlet.ServletException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
import org.apache.http.HttpEntity;
import org.apache.http.HttpResponse;
import org.apache.http.client.HttpClient;
import org.apache.http.client.methods.HttpGet;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import com.sap.core.connectivity.api.DestinationFactory;
Note
The given servlet can run with different destination scenarios, for which the user should specify the destination
name as a request parameter in the calling URL. In this case, the calling URL should be
<applicationURL>/?destname=outbound-internet-destination. Nevertheless, your servlet can
still run even without specifying the destination name for this outbound scenario.
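Resolving the destination name from the destname request parameter can be sketched like this (the fallback matches the tutorial's destination; the helper name is illustrative):

```java
// Returns the destination name passed via the "destname" request
// parameter, or the tutorial's default destination when none is given.
public class DestinationNameResolver {
    static final String DEFAULT_DESTINATION = "outbound-internet-destination";

    public static String resolve(String destnameParameter) {
        if (destnameParameter == null || destnameParameter.isEmpty()) {
            return DEFAULT_DESTINATION;
        }
        return destnameParameter;
    }
}
```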
10. Save the Java editor and make sure the project compiles without errors.
Caution
● If you use the SDK for Java Web, we recommend (but do not require) that you create a destination before
deploying the application.
● If you use the SDK for Java EE 6 Web Profile, you must create a destination before deploying the application.
-Dhttp.proxyHost=<your_proxy_host> -Dhttp.proxyPort=<your_proxy_port> -
Dhttps.proxyHost=<your_proxy_host> -Dhttps.proxyPort=<your_proxy_port>
○ Choose OK.
5. Go to the Connectivity tab page of your local server, create a destination with the name outbound-
internet-destination, and configure it so it can be consumed by the application at runtime. For more
information, see Configuring Destinations from the Eclipse IDE [page 333].
For the sample destination to work properly, the following properties need to be configured:
Name=outbound-internet-destination
Type=HTTP
URL=http://sap.com/index.html
Authentication=NoAuthentication
6. From the ConnectivityServlet.java editor's context menu, choose Run As Run on Server .
7. Make sure that the Choose an existing server option is selected and choose Java Web Server.
8. Choose Finish.
The server is now started, displayed as Java Web Server [Started, Synchronized] in the Servers
view.
Result:
The internal Web browser opens with the expected output of the connectivity-enabled Web application.
Note
The application name should be unique enough to allow your deployed application to be easily identified in
SAP Cloud Platform cockpit.
7. Choose Finish.
8. A new server <application>.<account> [Stopped] appears in the Servers view.
Name=outbound-internet-destination
Type=HTTP
URL=http://sap.com/index.html
Authentication=NoAuthentication
ProxyType=Internet
10. From the ConnectivityServlet.java editor's context menu, choose Run As > Run on Server.
11. Make sure that the Choose an existing server option is selected and choose <Server_host_name> > <Server_name>.
12. Choose Finish.
Result:
The internal Web browser opens with the URL pointing to SAP Cloud Platform and displaying the expected output
of the connectivity-enabled Web application.
Next Step
You can monitor the state and logs of your Web application deployed on SAP Cloud Platform.
For more information, see Using Logs in the Eclipse IDE [page 1170].
Context
This step-by-step tutorial demonstrates consumption of Internet services using HttpURLConnection. The
tutorial also shows how a connectivity-enabled Web application can be deployed on a local server and on the
cloud.
The servlet code, the web.xml content, and the destination file (outbound-internet-destination) used in
this tutorial are mapped to the connectivity sample project located in <SDK_location>/samples/
connectivity. You can directly import this sample in your Eclipse IDE. For more information, see Importing
Samples as Eclipse Projects [page 62].
You have downloaded and set up your Eclipse IDE, SAP Cloud Platform Tools for Java, and SDK.
For more information, see Setting Up the Development Environment [page 43].
Note
You need to install SDK for Java Web Tomcat 7.
<resource-ref>
<res-ref-name>connectivityConfiguration</res-ref-name>
<res-type>com.sap.core.connectivity.api.configuration.ConnectivityConfiguration</res-type>
</resource-ref>
9. Replace the entire servlet class with the following one to make use of the destination API. The destination API is visible by default for cloud applications and does not need to be added explicitly to the application class path.
package com.sap.cloud.sample.connectivity;
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.InetSocketAddress;
import java.net.Proxy;
import java.net.URL;
import javax.annotation.Resource;
import javax.naming.Context;
import javax.naming.InitialContext;
import javax.servlet.ServletException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import com.sap.cloud.account.TenantContext;
import com.sap.core.connectivity.api.configuration.ConnectivityConfiguration;
import com.sap.core.connectivity.api.configuration.DestinationConfiguration;
/**
* Servlet class making http calls to specified http destinations.
* Destinations are used in the following example connectivity scenarios:<br>
* - Connecting to an outbound Internet resource using HTTP destinations<br>
* - Connecting to an on-premise backend using on premise HTTP destinations,<br>
* where the destinations have no authentication.<br>
*/
public class ConnectivityServlet extends HttpServlet {
@Resource
private TenantContext tenantContext;
/** {@inheritDoc} */
@Override
public void doGet(HttpServletRequest request, HttpServletResponse response)
throws ServletException, IOException {
HttpURLConnection urlConnection = null;
String destinationName = request.getParameter("destname");
try {
// Look up the connectivity configuration API
Context ctx = new InitialContext();
ConnectivityConfiguration configuration =
(ConnectivityConfiguration) ctx.lookup("java:comp/env/connectivityConfiguration");
if (ON_PREMISE_PROXY.equals(proxyType)) {
// Get proxy for on-premise destinations
proxyHost = System.getenv("HC_OP_HTTP_PROXY_HOST");
proxyPort = Integer.parseInt(System.getenv("HC_OP_HTTP_PROXY_PORT"));
} else {
// Get proxy for internet destinations
proxyHost = System.getProperty("http.proxyHost");
proxyPort = Integer.parseInt(System.getProperty("http.proxyPort"));
}
return new Proxy(Proxy.Type.HTTP, new InetSocketAddress(proxyHost, proxyPort));
}
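The proxy-selection logic in the fragment above can be factored into a small, self-contained helper for clarity. This is an illustrative sketch, not part of the SAP sample: the class name ProxyHelper is invented, and the ON_PREMISE_PROXY constant is assumed to match the destination's ProxyType value OnPremise, while the HC_OP_HTTP_PROXY_HOST/PORT environment variables and http.proxyHost/http.proxyPort system properties follow the fragment:

```java
import java.net.InetSocketAddress;
import java.net.Proxy;

public class ProxyHelper {
    // Assumed to match the ProxyType value used for on-premise destinations
    private static final String ON_PREMISE_PROXY = "OnPremise";

    // Resolves the HTTP proxy to use for a destination, based on its ProxyType
    public static Proxy getProxy(String proxyType) {
        String proxyHost;
        int proxyPort;
        if (ON_PREMISE_PROXY.equals(proxyType)) {
            // On-premise destinations are routed through the connectivity proxy
            proxyHost = System.getenv("HC_OP_HTTP_PROXY_HOST");
            proxyPort = Integer.parseInt(System.getenv("HC_OP_HTTP_PROXY_PORT"));
        } else {
            // Internet destinations use the JVM-wide HTTP proxy settings
            proxyHost = System.getProperty("http.proxyHost");
            proxyPort = Integer.parseInt(System.getProperty("http.proxyPort"));
        }
        return new Proxy(Proxy.Type.HTTP, new InetSocketAddress(proxyHost, proxyPort));
    }
}
```

Factoring the logic out this way keeps the servlet's doGet focused on the request flow and makes the proxy resolution independently verifiable.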
Note
The given servlet can run with different destination scenarios, for which the user should specify the destination name as a request parameter in the calling URL, for example <applicationURL>/?destname=outbound-internet-destination. Nevertheless, your servlet can still run even without specifying the destination name for this outbound scenario.
10. Save the Java editor and make sure the project compiles without errors.
Note
We recommend, but do not require, that you create a destination before deploying the application.
-Dhttp.proxyHost=<your_proxy_host> -Dhttp.proxyPort=<your_proxy_port> -Dhttps.proxyHost=<your_proxy_host> -Dhttps.proxyPort=<your_proxy_port>
○ Choose OK.
5. Go to the Connectivity tab page of your local server, create a destination with the name outbound-
internet-destination, and configure it so it can be consumed by the application at runtime. For more
information, see Configuring Destinations from the Eclipse IDE [page 333].
For the sample destination to work properly, the following properties need to be configured:
Name=outbound-internet-destination
Type=HTTP
URL=http://sap.com/index.html
Authentication=NoAuthentication
6. From the ConnectivityServlet.java editor's context menu, choose Run As > Run on Server.
7. Make sure that the Choose an existing server option is selected and choose Java Web Tomcat 7 Server.
8. Choose Finish.
The server is now started, displayed as Java Web Tomcat 7 Server [Started, Synchronized] in the
Servers view.
Result:
The internal Web browser opens with the expected output of the connectivity-enabled Web application.
Note
The application name should be unique enough to allow your deployed application to be easily identified in
SAP Cloud Platform cockpit.
Name=outbound-internet-destination
Type=HTTP
URL=http://sap.com/index.html
Authentication=NoAuthentication
ProxyType=Internet
10. From the ConnectivityServlet.java editor's context menu, choose Run As > Run on Server.
11. Make sure that the Choose an existing server option is selected and choose <Server_host_name> > <Server_name>.
12. Choose Finish.
The internal Web browser opens with the URL pointing to SAP Cloud Platform and displaying the expected output
of the connectivity-enabled Web application.
Next Step
You can monitor the state and logs of your Web application deployed on SAP Cloud Platform.
For more information, see Using Logs in the Eclipse IDE [page 1170].
Context
This step-by-step tutorial demonstrates how a sample Web application consumes a back-end system via HTTP(S)
by using the connectivity service. For simplicity, instead of using a real back-end system, we use a second sample
Web application containing BackendServlet, which mimics the back-end system and can be called via HTTP(S).
The servlet code, the web.xml content, and the destination files (backend-no-auth-destination and
backend-basic-auth-destination) used in this tutorial are mapped to the connectivity sample project
located in <SDK_location>/samples/connectivity. You can directly import this sample in your Eclipse IDE.
For more information, see Importing Samples as Eclipse Projects [page 62].
In the on-demand to on-premise connectivity end-to-end scenario, different user roles are involved. The particular
steps for the relevant roles are described below:
For more information, see SAP Cloud Platform Cloud Connector [page 480].
Prerequisites
● You have downloaded and configured the Cloud connector. For more information, see SAP Cloud Platform
Cloud Connector [page 480].
● You have downloaded and set up your Eclipse IDE, SAP Cloud Platform Tools for Java, and SDK.
For more information, see Setting Up the Development Environment [page 43].
Note
You need to install SDK for Java Web or SDK for Java EE 6 Web Profile.
This tutorial uses a Web application that responds to a request with a ping as a sample back-end system. The
connectivity service supports HTTP and HTTPS as protocols and provides an easy way to consume REST-based
Web services.
To set up the sample application as a back-end system, see Setting Up an Application as a Sample Back-End
System [page 428].
Tip
Instead of the sample back-end system provided in this tutorial, you can use other systems to be consumed
through REST-based Web services.
Once the back-end application is running on your local Tomcat, you need to configure the ping service, provided
by the application, in your installed Cloud connector. This is required since the Cloud connector only allows
access to white-listed back-end services. To do this, follow the steps below:
1. Open the Cloud connector and, from the navigation on the left, choose Access Control.
Note
If you use an SDK version equal to or lower than 1.44.0.1 (Java Web) or 2.24.13 (Java EE 6 Web Profile), you can find the WAR files in the directory <SDK_location>/tools/samples/connectivity/onpremise, under the names PingAppHttpNoAuth.war and PingAppHttpBasicAuth.war. Also, the URL paths should be /PingAppHttpBasicAuth and /PingAppHttpNoAuth.
5. Choose Finish so that the ConnectivityServlet.java servlet is created and opened in the Java editor.
6. Go to ConnectivityHelloWorld > WebContent > WEB-INF and open the web.xml file.
7. Add the following code blocks to the <web-app> element:
<resource-ref>
<res-ref-name>outbound-internet-destination</res-ref-name>
<res-type>com.sap.core.connectivity.api.http.HttpDestination</res-type>
</resource-ref>
<resource-ref>
<res-ref-name>connectivity/DestinationFactory</res-ref-name>
<res-type>com.sap.core.connectivity.api.DestinationFactory</res-type>
</resource-ref>
Note
○ The destinations backend-no-auth-destination and backend-basic-auth-destination will be looked up via a DestinationFactory JNDI lookup. For more information, see DestinationFactory API [page 364].
○ If you use destinations as a resource reference, the value of the <res-ref-name> element in the web.xml file should match the name of the destination that you want to retrieve at runtime. In this case, the destination name is outbound-internet-destination.
8. Replace the entire servlet class to make use of the destination API. The destination API is visible by default for cloud applications and does not need to be added explicitly to the application class path.
package com.sap.cloud.sample.connectivity;
import java.io.IOException;
import java.io.InputStream;
import javax.naming.Context;
import javax.naming.InitialContext;
import javax.naming.NamingException;
import javax.servlet.ServletException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
import com.sap.core.connectivity.api.http.HttpDestination;
import com.sap.core.connectivity.api.DestinationFactory;
/**
* Servlet class making HTTP calls to specified HTTP destinations.
* Destinations are used in the following exemplary connectivity scenarios:<br>
* - Connecting to an outbound Internet resource using HTTP destinations<br>
* - Connecting to an on-premise backend using on-premise HTTP destinations,<br>
* where the destinations could have no authentication or basic authentication.<br>
*
* NOTE: The connectivity service API is located under
* <code>com.sap.core.connectivity.api</code>. The old API under
* <code>com.sap.core.connectivity.httpdestination.api</code> has been deprecated.
*/
public class ConnectivityServlet extends HttpServlet {
private static final long serialVersionUID = 1L;
private static final int COPY_CONTENT_BUFFER_SIZE = 1024;
private static final Logger LOGGER = LoggerFactory.getLogger(ConnectivityServlet.class);
/** {@inheritDoc} */
@Override
public void doGet(HttpServletRequest request, HttpServletResponse response)
throws ServletException, IOException {
HttpClient httpClient = null;
String destinationName = request.getParameter("destname");
try {
// Get HTTP destination
Context ctx = new InitialContext();
HttpDestination destination = null;
if (destinationName != null) {
DestinationFactory destinationFactory = (DestinationFactory) ctx.lookup(DestinationFactory.JNDI_NAME);
destination = (HttpDestination) destinationFactory.getDestination(destinationName);
} else {
// The default request to the servlet will use outbound-internet-destination
destinationName = "outbound-internet-destination";
destination = (HttpDestination) ctx.lookup("java:comp/env/" + destinationName);
}
}
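The remainder of the sample (omitted above) sends the request through the destination and copies the response body to the servlet output stream, using the COPY_CONTENT_BUFFER_SIZE buffer declared earlier. A minimal, self-contained sketch of such a copy loop follows; the class and method names are illustrative, not from the SAP sample:

```java
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;

public class StreamCopy {
    // Same buffer size as declared in the sample servlet
    private static final int COPY_CONTENT_BUFFER_SIZE = 1024;

    // Copies the response entity stream to the servlet output stream
    // chunk by chunk and returns the number of bytes transferred.
    public static long copyStream(InputStream in, OutputStream out) throws IOException {
        byte[] buffer = new byte[COPY_CONTENT_BUFFER_SIZE];
        long total = 0;
        int read;
        while ((read = in.read(buffer)) != -1) {
            out.write(buffer, 0, read);
            total += read;
        }
        return total;
    }
}
```

Copying in fixed-size chunks keeps memory use bounded regardless of the size of the backend response.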
9. Save the Java editor and make sure the project compiles without errors.
Caution
● If you use the SDK for Java Web, we recommend, but do not require, that you create a destination before starting the application.
● If you use the SDK for Java EE 6 Web Profile, you must create a destination before starting the application.
1. To deploy your Web application locally or on the cloud, follow the steps described in the respective pages:
Deploying Locally from Eclipse IDE [page 1045]
Deploying on the Cloud from Eclipse IDE [page 1047]
2. Once the application is deployed successfully on a local server and on the cloud, the application issues an
exception saying that destination backend-basic-auth-destination or backend-no-auth-
destination has not been specified yet:
HTTP Status 500 - Connectivity operation failed with reason: Destination with
name backend-no-auth-destination cannot be found. Make sure it is created and
configured.. See logs for details.
2014 01 10 08:11:01#+00#ERROR#com.sap.cloud.sample.connectivity.ConnectivityServlet##anonymous#http-bio-8041-exec-1##conngold#testsample#web#null#null#Connectivity operation failed
com.sap.core.connectivity.api.DestinationNotFoundException: Destination with name backend-no-auth-destination cannot be found. Make sure it is created and configured.
at com.sap.core.connectivity.destinations.DestinationFactory.getDestination(DestinationFactory.java:20)
at com.sap.core.connectivity.cloud.destinations.CloudDestinationFactory.getDestination(CloudDestinationFactory.java:28)
at com.sap.cloud.sample.connectivity.ConnectivityServlet.doGet(ConnectivityServlet.java:50)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:735)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:848)
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:305)
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:210)
at com.sap.core.communication.server.CertValidatorFilter.doFilter(CertValidatorFilter.java:321)
To configure the destination in SAP Cloud Platform, you need to use the virtual host name
(virtualpingbackend) and port (1234) specified in one of the previous steps on the Cloud connector's Access
Control tab page.
Note
On-premise destinations support HTTP connections only. Thus, when defining a destination in the SAP Cloud
Platform cockpit, always enter the URL as http://virtual.host:virtual.port, even if the backend requires an
HTTPS connection.
The connection from an SAP Cloud Platform application to the Cloud connector (through the tunnel) is
encrypted with TLS anyway, so there is no need to “double-encrypt” the data. Then, for the leg from the Cloud
connector to the backend, you can choose between using HTTP or HTTPS, and the Cloud connector will
establish an SSL/TLS connection to the backend, if you choose HTTPS.
1. In the Eclipse IDE, open the Servers view and double-click on <application>.<account> to open the SAP
Cloud Platform editor.
2. Open the Connectivity tab page.
3. In the All Destinations section, choose to create a new destination with the name backend-no-auth-
destination or backend-basic-auth-destination.
○ To connect with no authentication, use the following configuration:
Name=backend-no-auth-destination
Type=HTTP
URL=http://virtualpingbackend:1234/BackendAppHttpNoAuth/noauth
Authentication=NoAuthentication
ProxyType=OnPremise
CloudConnectorVersion=2
○ To connect with basic authentication, use the following configuration:
Name=backend-basic-auth-destination
Type=HTTP
URL=http://virtualpingbackend:1234/BackendAppHttpBasicAuth/basic
Authentication=BasicAuthentication
User=pinguser
Password=pingpassword
ProxyType=OnPremise
CloudConnectorVersion=2
You can monitor the state and logs of your Web application deployed on SAP Cloud Platform.
For more information, see Using Logs in the Eclipse IDE [page 1170].
Context
This step-by-step tutorial demonstrates how a sample Web application consumes a back-end system via HTTP(S)
by using the connectivity service. For simplicity, instead of using a real back-end system, we use a second sample
Web application containing BackendServlet, which mimics the back-end system and can be called via HTTP(S).
The servlet code, the web.xml content, and the destination file (backend-no-auth-destination) used in this
tutorial are mapped to the connectivity sample project located in <SDK_location>/samples/connectivity.
You can directly import this sample in your Eclipse IDE. For more information, see Importing Samples as Eclipse
Projects [page 62].
In the on-demand to on-premise connectivity end-to-end scenario, different user roles are involved. The particular
steps for the relevant roles are described below:
For more information, see SAP Cloud Platform Cloud Connector [page 480].
Prerequisites
● You have downloaded and configured the Cloud connector. For more information, see SAP Cloud Platform
Cloud Connector [page 480].
● You have downloaded and set up your Eclipse IDE, SAP Cloud Platform Tools for Java, and SDK.
For more information, see Setting Up the Development Environment [page 43].
Note
You need to install SDK for Java Web Tomcat 7.
This tutorial uses a Web application that responds to a request with a ping as a sample back-end system. The
connectivity service supports HTTP and HTTPS as protocols and provides an easy way to consume REST-based
Web services.
To set up the sample application as a back-end system, see Setting Up an Application as a Sample Back-End
System [page 428].
Tip
Instead of the sample back-end system provided in this tutorial, you can use other systems to be consumed
through REST-based Web services.
Once the back-end application is running on your local Tomcat, you need to configure the ping service, provided
by the application, in your installed Cloud connector. This is required since the Cloud connector only allows
access to white-listed back-end services. To do this, follow the steps below:
1. Open the Cloud connector and, from the navigation on the left, choose Access Control.
2. Under Mapping Virtual To Internal System, choose the Add button and define an entry as shown on the
following screenshot. The Internal Host must be the physical host name of the machine on which the Tomcat
of the back-end application is running.
<resource-ref>
<res-ref-name>connectivityConfiguration</res-ref-name>
<res-type>com.sap.core.connectivity.api.configuration.ConnectivityConfiguration</res-type>
</resource-ref>
Note
The destination backend-no-auth-destination will be looked up via a ConnectivityConfiguration JNDI lookup. For more information, see ConnectivityConfiguration API [page 318].
8. Replace the entire servlet class to make use of the configuration API. The configuration API is visible by default for cloud applications and does not need to be added explicitly to the application class path.
package com.sap.cloud.sample.connectivity;
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.InetSocketAddress;
import java.net.Proxy;
import java.net.URL;
import javax.annotation.Resource;
import javax.naming.Context;
import javax.naming.InitialContext;
import javax.servlet.ServletException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import com.sap.cloud.account.TenantContext;
import com.sap.core.connectivity.api.configuration.ConnectivityConfiguration;
import com.sap.core.connectivity.api.configuration.DestinationConfiguration;
/**
* Servlet class making http calls to specified http destinations.
* Destinations are used in the following example connectivity scenarios:<br>
* - Connecting to an outbound Internet resource using HTTP destinations<br>
* - Connecting to an on-premise backend using on premise HTTP destinations,<br>
* where the destinations have no authentication.<br>
*/
public class ConnectivityServlet extends HttpServlet {
/** {@inheritDoc} */
@Override
public void doGet(HttpServletRequest request, HttpServletResponse response)
throws ServletException, IOException {
HttpURLConnection urlConnection = null;
String destinationName = request.getParameter("destname");
try {
// Look up the connectivity configuration API
Context ctx = new InitialContext();
ConnectivityConfiguration configuration =
(ConnectivityConfiguration) ctx.lookup("java:comp/env/connectivityConfiguration");
Note
The given servlet can be run with different destination scenarios, for which the user should specify the destination name as a request parameter in the calling URL. In the case of an on-premise connection to a back-end system, the destination name should be backend-no-auth-destination. That is, it will be accessed at: <application_URL>/?destname=backend-no-auth-destination
Note
When accessing a destination with a specific authentication type, use AuthenticationHeaderProvider API
[page 320] to get authentication headers and then inject them in all requests to this destination.
9. Save the Java editor and make sure the project compiles without errors.
Note
We recommend, but do not require, that you create the destination before starting the application.
1. To deploy your Web application locally or on the cloud, follow the steps described in the respective pages:
Deploying Locally from Eclipse IDE [page 1045]
Deploying on the Cloud from Eclipse IDE [page 1047]
2. Once the application is successfully deployed locally or on the cloud, the application issues an exception
saying that the backend-no-auth-destination destination has not been specified yet:
To configure the destination in SAP Cloud Platform, you need to use the virtual host name
(virtualpingbackend) and port (1234) specified in one of the previous steps on the Cloud connector's Access
Control tab page.
Note
● On-premise destinations support HTTP connections only.
● The connection from an application to the Cloud connector (through the tunnel) is encrypted with TLS. Also, you can choose between HTTP and HTTPS for the hop from the Cloud connector to the back end.
1. In the Eclipse IDE, open the Servers view and double-click on <application>.<account> to open the cloud
server editor.
2. Open the Connectivity tab page.
3. In the All Destinations section, choose to create a new destination with the name backend-no-auth-
destination.
4. Use the following configuration:
Name=backend-no-auth-destination
Type=HTTP
URL=http://virtualpingbackend:1234/BackendAppHttpNoAuth/noauth
Authentication=NoAuthentication
ProxyType=OnPremise
CloudConnectorVersion=2
Next Step
You can monitor the state and logs of your Web application deployed on SAP Cloud Platform.
For more information, see Using Logs in the Eclipse IDE [page 1170].
Related Information
JavaDoc ConnectivityConfiguration
JavaDoc DestinationConfiguration
JavaDoc AuthenticationHeaderProvider
Overview
This section describes how you set up a simple ping Web application that is used as a back-end system.
Prerequisites
You have downloaded SAP Cloud Platform SDK on your local file system.
Procedure
<role rolename="pingrole"/>
<user name="pinguser" password="pingpassword" roles="pingrole" />
Note
In case you use SDK with version equal to or lower than, respectively, 1.44.0.1 (Java Web) and 2.24.13
(Java EE 6 Web Profile), you should find the WAR files in directory <SDK_location>/tools/samples/
connectivity/onpremise, under the names PingAppHttpNoAuth.war and PingAppHttpBasicAuth.war.
Also, you should access the applications at the relevant URLs:
● http://localhost:8080/PingAppHttpNoAuth/pingnoauth
● http://localhost:8080/PingAppHttpBasicAuth/pingbasic
Consuming Back-End Systems (Java Web or Java EE 6 Web Profile) [page 409]
Installation Prerequisites
● To provide a connectivity tunnel via RFC destinations, your Cloud connector version needs to be at least 1.3.0.
● To develop a JCo application, your SDK version needs to be at least 1.29.18 (SDK for Java Web) or 2.11.6 (SDK for Java EE 6 Web Profile). Also, your SDK local runtime needs to be hosted by a 64-bit JVM. The SDKs for the Tomcat 7 and Tomcat 8 runtimes have supported JCo from the very beginning.
On Windows platforms, you need to install Microsoft Visual C++ 2010 Redistributable Package (x64). To
download this package, go to http://www.microsoft.com/en-us/download/details.aspx?id=14632 .
You can call a service in a fenced customer network using a simple application that consumes an on-premise remote-enabled function module.
Function modules are invoked via RFC using the JCo API, the same API that has been available in SAP NetWeaver Application Server Java since version 7.10 and in standalone JCo 3.0. If you are an experienced JCo developer, you can easily develop a Web application using JCo: you simply consume the APIs as you do in other Java environments. Restrictions that apply in the cloud environment are mentioned in the Restrictions section below.
To see a sample Web application, see Tutorial: Invoking ABAP Function Modules in On-Premise ABAP Systems
[page 444].
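For orientation, a typical JCo call sequence looks as follows. This sketch uses the standard JCo 3 API; it requires the JCo runtime provided by the platform (it is not runnable standalone), and the destination name SalesSystem and the choice of the standard test module STFC_CONNECTION are assumptions for illustration:

```java
import com.sap.conn.jco.JCoDestination;
import com.sap.conn.jco.JCoDestinationManager;
import com.sap.conn.jco.JCoException;
import com.sap.conn.jco.JCoFunction;

public class RfcPing {
    public static String ping() throws JCoException {
        // Resolve the RFC destination configured in the cockpit (name is an assumption)
        JCoDestination destination = JCoDestinationManager.getDestination("SalesSystem");
        // Retrieve the function module metadata from the destination's repository
        JCoFunction function = destination.getRepository().getFunction("STFC_CONNECTION");
        function.getImportParameterList().setValue("REQUTEXT", "ping");
        // Execute the call; the connection is routed through the Cloud connector tunnel
        function.execute(destination);
        return function.getExportParameterList().getString("ECHOTEXT");
    }
}
```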
Related Information
RFC destinations provide the configuration needed for communicating with an on-premise ABAP system via RFC.
The RFC destination data is used by the JCo version that is offered within SAP Cloud Platform to establish and
manage the connection.
The RFC destination specific configuration in SAP Cloud Platform consists of properties arranged in groups, as
described below. The supported set of properties is a subset of the standard JCo properties in arbitrary
environments. The configuration data is divided into the following groups:
The minimal configuration contains user logon properties and information identifying the target host. This means
you must provide at least a set of properties containing this information.
Example
Name=SalesSystem
Type=RFC
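For illustration, a fuller minimal configuration, combining the destination header above with standard JCo client properties, might look like the following. The host, system number, client, and user values are placeholders, and the exact property set depends on your landscape:

```
Name=SalesSystem
Type=RFC
jco.client.ashost=<virtual_host_from_access_control>
jco.client.sysnr=<##>
jco.client.client=<client>
jco.client.lang=EN
jco.client.user=<user>
jco.client.passwd=<password>
```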
This group of JCo properties covers different types of user credentials, as well as the ABAP system client and the
logon language. The currently supported logon mechanism uses user/password as the credentials.
Table 250:
Property Description
jco.client.passwd Represents the password of the user that shall be used. Note that passwords in systems of SAP NetWeaver releases lower than 7.0 are case-insensitive and can be only eight characters long. For releases 7.0 and higher, passwords are case-sensitive with a maximum length of 40.
Note
When working with the Destinations editor in the cockpit, enter this password in the Password field. Do not enter it as an additional property.
Note
In the case of the PrincipalPropagation value, you should preferably configure the jco.destination.repository.user and jco.destination.repository.passwd properties, since special permissions are needed (for metadata lookup in the back end) that not all business application users might have.
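A sketch of such a repository-user configuration follows; the property names are the standard JCo repository properties mentioned above, and the values are placeholders:

```
jco.destination.repository.user=<metadata_lookup_user>
jco.destination.repository.passwd=<metadata_lookup_password>
```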
Overview
This group of JCo properties covers different settings for the behavior of the destination's connection pool. All
properties are optional.
Table 251:
Property Description
Note
Turning on this check has a performance impact for stateless communication. This is due to an additional low-level ping to the server, which takes a certain amount of time for non-corrupted connections, depending on latency.
Pooling Details
● Each destination is associated with a connection factory and, if the pooling feature is used, with a connection
pool.
● Initially, the destination's connection pool is empty, and the JCo runtime does not preallocate any connection.
The first connection will be created when the first function module invocation is performed. The peak_limit
property describes how many connections can be created simultaneously, if applications allocate
connections in different sessions at the same time. A connection is allocated either when a stateless function
call is executed, or when a connection for a stateful call sequence is reserved within a session.
● After the <peak_limit> number of connections has been allocated (in <peak_limit> number of sessions),
the next session will wait for at most <max_get_client_time> milliseconds until a different session
releases a connection (either finishes a stateless call or ends a stateful call sequence). In case the waiting
session does not get any connection during the <max_get_client_time> period, the function request will
be aborted with JCoException with the key JCO_ERROR_RESOURCE.
● Connections that are no longer used by applications are returned to the destination pool. There are at most
<pool_capacity> number of connections kept open by the pool. Further connections (<peak_limit> -
<pool_capacity>) will be closed immediately after usage. The pooled connections (open connections in the
pool) are marked as expired if they are not used again during <expiration_time> milliseconds. All expired
connections will be closed by a timeout checker thread which executes the check every
<expiration_check_period> milliseconds.
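Under the pooling semantics described above, a destination that keeps at most 5 pooled connections, allows 10 in parallel, waits up to 30 seconds for a free connection, and expires idle connections after one minute could be configured with the following optional properties. The property names are the standard JCo destination pool properties; the values are illustrative:

```
jco.destination.pool_capacity=5
jco.destination.peak_limit=10
jco.destination.max_get_client_time=30000
jco.destination.expiration_time=60000
jco.destination.expiration_check_period=60000
```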
This JCo properties group allows you to influence how the repository that dynamically retrieves function module
metadata behaves. All properties below are optional. Alternatively, applications could create their metadata in
their code, using the metadata factory methods within the JCo class, to avoid additional round-trips to the on-
premise system.
Table 252:
Property Description
Note
When working with the Destinations editor in the cockpit,
enter this password in the field of the main property
Repository password. Do not enter it as additional prop
erty.
Overview
Depending on the configuration used, different properties are considered mandatory or optional.
Table 253:
Property Description
jco.client.sysnr Represents the so-called "system number" and has two digits. It identifies the logical port on which the application server is listening for incoming requests. In the case of configurations in SAP Cloud Platform, this property needs to match a virtual port entry in the Cloud connector Access Control configuration.
Note
The virtual port in the above access control entry needs to
be named sapgw<##>, where <##> is the value of sysnr.
Table 254:
Property Description
Note
The virtual port in the above access control entry needs to
be named sapms<###>, where <###> is the value of
r3name.
This group of JCo properties allows you to influence the connection to an ABAP system. All properties are
optional.
Table 255:
Property Description
jco.client.codepage Declares the 4-digit SAP codepage that shall be used when initiating the connection to the backend. The default value is 1100 (comparable to iso-8859-1). It is important to provide this property if the password that is used contains characters that cannot be represented in 1100.
Overview
This section helps you to configure your Cloud connector when you are working via the RFC protocol.
Related Information
To set up a mutual authentication between Cloud connector and an ABAP back-end system (connected via RFC),
you can configure SNC for the Cloud connector. It will then use the associated PSE for all RFC SNC requests. This
means that the SNC identity, represented by this PSE, needs to:
● Be trusted by all back-end systems to which the Cloud connector is supposed to connect;
● Play the role of a trusted external system by adding the SNC name of the Cloud connector to the SNCSYSACL
table. You can find more details in the SNC configuration documentation for the release of your ABAP system.
Prerequisites
You have configured your ABAP system(s) for SNC. For detailed information on configuring SNC for an ABAP
system, see also Configuring SNC on AS ABAP. To establish trust for Principal Propagation, follow the
steps described in Configuring Principal Propagation to an ABAP System for RFC [page 523].
○ Library Name: Provides the location of the SNC library you are using for the Cloud connector.
Note
Bear in mind that you must use one and the same security product on both sides of the
communication.
○ My Name: The SNC name that identifies the Cloud connector. It represents a valid scheme for the SNC
implementation that is used.
○ Quality of Protection: Determines the level of protection that you require for the connectivity to the ABAP
systems.
Note
When using CommonCryptoLibrary as the SNC implementation, SAP Note 1525059 helps you to configure the
PSE to be associated with the user running the Cloud connector process.
Related Information
To allow your on-demand applications to access a certain back-end system on the intranet, you need to insert an
extra line within the Cloud connector Access Control management.
1. Choose Cloud To On-Premise from your Account menu and go to the Access Control tab.
2. Choose Add.
3. Back-end Type: Select the description that best matches the addressed back-end system. For
RFC, only ABAP System and SAP Gateway are fitting values, which means that usage of RFC is free of charge.
4. Choose Next.
5. Protocol: Choose whether the Cloud connector should use RFC or RFC with SNC for connecting to
the back-end system. This is completely independent of the settings on the cloud side. This way, the entire
connection from the on-demand application to the actual back-end system (provided through the SSL tunnel)
is secured, partly with SSL and partly with SNC. For more information, see Initial
Configuration (RFC) [page 437].
Note
○ The back end needs to be properly configured to support SNC connections.
○ SNC configuration has to be provided in the Cloud connector.
6. Choose Next.
7. Choose whether you want to configure a load balancing logon or whether to connect to a concrete application
server.
○ When using direct logon, the Application Server specifies one application server of the ABAP system. The
instance number is a two-digit number that is also found in the SAP Logon configuration. Alternatively,
it is possible to specify the gateway port directly in the Instance Number field.
9. Optional: You can virtualize the system information if you want to hide your internal host names from the
cloud. The virtual information can be a fake name that does not need to exist. The fields will be pre-populated:
○ Virtual Message Server - specifies the host name exactly as given in the jco.client.mshost
property of the RFC destination configuration in the cloud. The Virtual System ID allows you to distinguish
between different entry points of your back-end system that have different sets of access control
settings. The value needs to be the same as for the jco.client.r3name property in the RFC
destination configuration in the cloud.
○ Virtual Application Server - specifies the host name exactly as given in the jco.client.ashost
property of the RFC destination configuration in the cloud. The Virtual Instance Number allows you to
distinguish between different entry points of your back-end system that have different sets of access
control settings. The value needs to be the same as for the jco.client.sysnr property in the RFC
destination configuration in the cloud.
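The virtual-to-internal mapping can be sketched as follows; all host names and numbers here are hypothetical placeholders, not values from this documentation:

```
# Cloud-side RFC destination (hypothetical values)
jco.client.ashost=abapserver.hana.cloud   # = Virtual Application Server
jco.client.sysnr=42                       # = Virtual Instance Number

# Cloud connector mapping (virtual -> internal, hypothetical)
# Virtual Application Server: abapserver.hana.cloud -> Internal Host: myabap.corp.example
# Virtual Instance Number:    42                    -> Internal Instance Number: 00
```

The virtual values on the cloud side never need to match the real internal host or instance; only the mapping in the Cloud connector connects the two.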
10. This step only comes up if you have chosen RFC SNC, not for plain RFC. The <Principal Type> field
defines which kind of principal is used when configuring a destination on the cloud side using this system
mapping with authentication type Principal Propagation. Regardless of what you choose, you need to make sure
that the general configuration for the <Principal Type> has been done for it to work correctly. For
destinations using other authentication types, this setting is ignored. If you choose None as
<Principal Type>, it is not possible to apply Principal Propagation to this system.
Note
In the case of RFC, it is not possible to choose between different principal types. The only supported one is
X.509 certificate, which can be applied only when using an SNC-enabled back-end connection.
12. You can enter an optional description at this stage. The respective description will be shown as a rich tooltip
when the mouse hovers over the entries of the virtual host column (table Mapping Virtual to Internal System).
13. The summary shows information about the system to be stored. When saving the system mapping, you can
trigger a ping from the Cloud connector to the internal host, using the Check availability of internal host check
box. This allows you to make sure the Cloud connector can indeed access the internal system, and allows you
to catch basic things, such as spelling mistakes or firewall problems between the Cloud connector and the
internal host. If the ping to the internal host is successful, the Cloud connector saves the mapping without any
remark. If it fails, a warning pops up stating that the host is not reachable. Details on the reason are available in
the log files. You can execute such a check at any time later for all selected systems in the Access Control
overview.
In addition to allowing access to a particular host and port, you also need to specify which function modules
(Resources) may be invoked on that host. The Cloud connector uses a very strict white list for its access
control: only function modules for which you have explicitly granted access are allowed. All other RFC requests are
denied by the Cloud connector.
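The white-list semantics can be sketched with a minimal model. The class and function module names below are illustrative only; this is not the Cloud connector's actual implementation:

```java
import java.util.HashSet;
import java.util.Set;

public class ResourceWhitelist {
    // Only function modules explicitly granted access are allowed;
    // everything else is denied by default.
    static final Set<String> granted = new HashSet<>();

    static boolean isAllowed(String functionModule) {
        return granted.contains(functionModule);
    }

    public static void main(String[] args) {
        granted.add("STFC_CONNECTION");
        System.out.println(isAllowed("STFC_CONNECTION")); // granted above
        System.out.println(isAllowed("RFC_READ_TABLE"));  // denied: never granted
    }
}
```

The key property of this deny-by-default model is that forgetting to grant a function module makes calls to it fail, which is safer than the reverse.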
1. To define the permitted function modules (Resources) for a particular back-end system, choose the row
corresponding to that back-end system and choose Add in the Resources Accessible On... section below. A dialog
appears, prompting you to enter the name of the function module that you want to allow to be invoked.
Related Information
Tutorial: Invoking ABAP Function Modules in On-Premise ABAP Systems [page 444]
Context
This step-by-step tutorial shows how a sample Web application invokes a function module in an on-premise ABAP
system via RFC by using the connectivity service.
Different user roles are involved in the on-demand to on-premise connectivity end-to-end scenario. The particular
steps for the relevant roles are described below:
IT Administrator
This role sets up and configures the Cloud connector. Scenario steps:
Application Developer
1. Installs the Eclipse IDE, SAP Cloud Platform Tools for Java, and SDK.
2. Develops a Java EE application using the destination API.
3. Configures connectivity destinations as resources in the web.xml file.
4. Configures connectivity destinations via the SAP Cloud Platform server adapter in Eclipse IDE.
5. Deploys the Java EE application locally and on the cloud.
Account Operator
This role deploys Web applications, configures their destinations, and conducts tests. Scenario steps:
Installation Prerequisites
● You have downloaded and set up your Eclipse IDE and SAP Cloud Platform Tools for Java.
● You have downloaded the SDK. Its version needs to be at least 1.29.18 (SDK for Java Web), 2.11.6 (SDK for
Java EE 6 Web Profile), or 2.9.1 (SDK for Java Web Tomcat 7), respectively.
● Your local runtime needs to be hosted by a 64-bit JVM. On Windows platforms, you need to install Microsoft
Visual C++ 2010 Redistributable Package (x64).
To read the installation documentation, go to Setting Up the Development Environment [page 43] and Installing
the Cloud Connector [page 483].
Procedure
2. From the Eclipse main menu, choose File New Dynamic Web Project .
3. In the Project name field, enter jco_demo .
4. In the Target Runtime pane, select the runtime you want to use to deploy the HelloWorld application. In this
tutorial, we choose Java Web.
5. In the Configuration pane, leave the default configuration.
6. Choose Finish to complete the creation of your project.
Procedure
package com.sap.demo.jco;
import java.io.IOException;
import java.io.PrintWriter;
import javax.servlet.ServletException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
import com.sap.conn.jco.AbapException;
import com.sap.conn.jco.JCoDestination;
import com.sap.conn.jco.JCoDestinationManager;
import com.sap.conn.jco.JCoException;
import com.sap.conn.jco.JCoFunction;
import com.sap.conn.jco.JCoParameterList;
import com.sap.conn.jco.JCoRepository;
/**
 * Sample application that uses the connectivity service. In particular,
 * it makes use of the capability to invoke a function module in an ABAP system
 * via RFC.
 *
 * Note: The JCo APIs are available under <code>com.sap.conn.jco</code>.
 */
public class ConnectivityRFCExample extends HttpServlet
{
    private static final long serialVersionUID = 1L;

    public ConnectivityRFCExample()
    {
    }

    protected void doGet(HttpServletRequest request, HttpServletResponse response)
        throws ServletException, IOException
    {
        PrintWriter responseWriter = response.getWriter();
        try
        {
            // access the RFC destination "JCoDemoSystem"
            JCoDestination destination = JCoDestinationManager.getDestination("JCoDemoSystem");
            // invoke STFC_CONNECTION in the back end
            JCoRepository repo = destination.getRepository();
            JCoFunction stfcConnection = repo.getFunction("STFC_CONNECTION");
            JCoParameterList imports = stfcConnection.getImportParameterList();
            imports.setValue("REQUTEXT", "SAP HANA Cloud connectivity runs with JCo");
            stfcConnection.execute(destination);
            JCoParameterList exports = stfcConnection.getExportParameterList();
            String echotext = exports.getString("ECHOTEXT");
            String resptext = exports.getString("RESPTEXT");
            response.addHeader("Content-type", "text/html");
            responseWriter.println("<html><body>");
            responseWriter.println("<h1>Executed STFC_CONNECTION in system JCoDemoSystem</h1>");
            responseWriter.println("<p>Export parameter ECHOTEXT of STFC_CONNECTION:<br>");
            responseWriter.println(echotext);
            responseWriter.println("<p>Export parameter RESPTEXT of STFC_CONNECTION:<br>");
            responseWriter.println(resptext);
            responseWriter.println("</body></html>");
        }
        catch (AbapException ae)
        {
            // minimal handling: report an exception raised by the ABAP function module
            responseWriter.println(ae.getMessage());
        }
        catch (JCoException e)
        {
            // minimal handling: report a connectivity or communication problem
            responseWriter.println(e.getMessage());
        }
    }
}
5. Save the Java editor and make sure that the project compiles without errors.
Procedure
1. To deploy your Web application locally or on the cloud, see the following two procedures, respectively:
To configure the destination on SAP Cloud Platform, you need to use a virtual application server host name
(abapserver.hana.cloud) and a virtual system number (42) that you will expose later in the Cloud connector.
Alternatively, you could use a load balancing configuration with a message server host and a system ID.
Procedure
Name=JCoDemoSystem
Type=RFC
jco.client.ashost=abapserver.hana.cloud
jco.client.cloud_connector_version=2
jco.client.sysnr=42
jco.client.user=DEMOUSER
jco.client.passwd=<password>
jco.client.client=000
jco.client.lang=EN
jco.destination.pool_capacity=5
2. Upload this file to your Web application in SAP Cloud Platform. For more information, see Configuring
Destinations from the Console Client [page 326].
3. Call the URL that references the cloud application again in the Web browser. The application should now
return a different exception:
4. This means the Cloud connector denied opening a connection to this system. As a next step, you need to
configure the system in your installed Cloud connector.
This is required since the Cloud connector only allows access to white-listed back-end systems. To do this, follow
the steps below:
Procedure
1. Optional: In the Cloud connector administration UI, you can check under Monitor Audit whether access
has been denied:
2. In the Cloud connector administration UI, choose Cloud To On-Premise from your Account menu, tab
Access Control.
3. In section Mapping Virtual To Internal System choose Add to define a new system.
1. For Back-end Type, select ABAP System and choose Next.
2. For Protocol, select RFC and choose Next.
3. Choose option Without load balancing.
4. Enter application server and instance number. The Application Server entry must be the physical host
name of the machine on which the ABAP application server is running. Choose Next.
Example:
4. Call the URL that references the cloud application again in the Web browser. The application should now
throw a different exception:
5. This means the Cloud connector denied invoking STFC_CONNECTION in this system. As a final step, you
need to provide access to this function module in your installed Cloud connector.
This is required since the Cloud connector only allows access to white-listed resources (which are defined on the
basis of function module names with RFC). To do this, follow the steps below:
Procedure
1. Optional: In the Cloud connector administration UI, you can check under Monitor Audit whether access
has been denied:
2. In the Cloud connector administration UI, go to the Access Control tab page.
5. Call the URL that references the cloud application again in the Web browser. The application should now
return with a message showing the export parameters of the function module after a successful invocation.
Related Information
You can monitor the state and logs of your Web application deployed on SAP Cloud Platform.
For more information, see Using Logs in the Eclipse IDE [page 1170].
The e-mail connectivity functionality allows you to send electronic mail messages from your Web applications
using e-mail providers that are accessible on the Internet, such as Google Mail (Gmail). It also allows you to
retrieve e-mails from the mailbox of your e-mail account.
Note
SAP does not act as e-mail provider. To use this service, please cooperate with an external e-mail provider of
your choice.
● Obtain a mail session resource using resource injection or, alternatively, using a JNDI lookup.
● Configure the mail session resource by specifying the protocol settings of your mail server as a mail
destination configuration. SMTP is supported for sending e-mail, and POP3 and IMAP for retrieving messages
from a mailbox account.
Related Information
In your Web application, you use the JavaMail API (javax.mail) to create and send a MimeMessage object or
retrieve e-mails from a message store.
Mail Session
You can obtain a mail session resource using resource injection or a JNDI lookup. The properties of the mail
session are specified by a mail destination configuration. So that the resource is linked to this configuration, the
names of the destination configuration and mail session resource must be the same.
● Resource injection
You can directly inject the mail session resource using annotations as shown in the example below. You do not
need to declare the JNDI resource reference in the web.xml deployment descriptor.
@Resource(name = "mail/Session")
private javax.mail.Session mailSession;
● JNDI lookup
To obtain a resource of type javax.mail.Session, you declare a JNDI resource reference in the web.xml
deployment descriptor in the WebContent/WEB-INF directory as shown below. Note that the recommended
resource reference name is Session and the recommended subcontext is mail (mail/Session):
<resource-ref>
<res-ref-name>mail/Session</res-ref-name>
<res-type>javax.mail.Session</res-type>
</resource-ref>
An initial JNDI context can be obtained by creating a javax.naming.InitialContext object. You can then
consume the resource by looking up the naming environment through the InitialContext, as follows:
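A minimal sketch of such a lookup is shown below. Outside a container no JNDI initial context is configured, so the call would throw; in a real Web application the result is cast to javax.mail.Session:

```java
import javax.naming.InitialContext;
import javax.naming.NamingException;

public class MailSessionLookup {
    // Looks up the mail session under the recommended name mail/Session.
    // In a deployed Web application, cast the result to javax.mail.Session.
    public static Object lookupMailSession() throws NamingException {
        InitialContext ctx = new InitialContext();
        return ctx.lookup("java:comp/env/mail/Session");
    }
}
```

The java:comp/env prefix is the standard Java EE naming environment under which resource references declared in web.xml are bound.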
Sending E-Mail
With the javax.mail.Session object you have retrieved, you can use the JavaMail API to create a
MimeMessage object with its constituent parts (instances of MimeMultipart and MimeBodyPart). The message
can then be sent using the send method from the Transport class:
Fetching E-Mail
You can retrieve the e-mails from the inbox folder of your e-mail account using the getFolder method from the
Store class as follows:
Fetched e-mail is not scanned for viruses. This means that e-mail retrieved from an e-mail provider using IMAP or
POP3 could contain a virus that could potentially be distributed (for example, if e-mail is stored in the database or
forwarded). Basic mitigation steps you could take include the following:
Related Information
The name of the mail destination must match the name used for the mail session resource. You can configure a
mail destination directly in a destination editor or in a mail destination properties file. The mail destination then
needs to be made available in the cloud. If a mail destination is updated, an application restart is required so that
the new configuration becomes effective.
Note
SAP does not act as e-mail provider. To use this service, please cooperate with an external e-mail provider of
your choice.
Table 256:
Property Description Mandatory
Name The name of the destination. The mail session that is configured by this mail destination is available by injecting the mail session resource mail/<Name>. The name of the mail session resource must match the destination name. Yes
Type The type of destination. It must be MAIL for mail destinations. Yes
mail.* javax.mail properties for configuring the mail session. To send e-mails, you must specify at least mail.transport.protocol and mail.smtp.host. Depends on the mail protocol used.
mail.password Password that is used for authentication. The user name for authentication is specified by mail.user (a standard javax.mail property). Yes, if authentication is used (mail.smtp.auth=true and generally for fetching e-mail).
● mail.smtp.port: The SMTP standard ports 465 (SMTPS) and 587 (SMTP+STARTTLS) are open for
outgoing connections on SAP Cloud Platform.
● mail.pop3.port: The POP3 standard ports 995 (POP3S) and 110 (POP3+STARTTLS) are open for outgoing
connections (used to fetch e-mail).
The destination below has been configured to use Gmail as the e-mail provider, SMTP with STARTTLS (port 587)
for sending e-mail, and IMAP (SSL) for receiving e-mail:
Name=Session
Type=MAIL
mail.user=<gmail account name>
mail.password=<gmail account password>
mail.transport.protocol=smtp
mail.smtp.host=smtp.gmail.com
mail.smtp.auth=true
mail.smtp.starttls.enable=true
mail.smtp.port=587
mail.store.protocol=imaps
mail.imaps.host=imap.gmail.com
SMTPS Example
The destination below uses Gmail and SMTPS (port 465) for sending e-mail:
Name=Session
Type=MAIL
mail.user=<gmail account name>
mail.password=<gmail account password>
mail.transport.protocol=smtps
mail.smtps.host=smtp.gmail.com
mail.smtps.auth=true
mail.smtps.port=465
Related Information
To troubleshoot e-mail delivery and retrieval issues, it is useful to have debug information about the mail
session established between your SAP Cloud Platform application and your e-mail provider.
Context
To include debug information in the standard trace log files written at runtime, you can use the JavaMail
debugging feature and the System.out logger. The System.out logger is preconfigured with the log level INFO.
You require at least INFO or a level with more detailed information.
Procedure
1. To enable the JavaMail debugging feature, add the mail.debug property to the mail destination
configuration as shown below:
mail.debug=true
2. To check the log level for your application, log onto the cockpit.
Note
You can check the log level of the System.out logger in a similar manner from the Eclipse IDE.
Related Information
This step-by-step tutorial shows how you can send an e-mail from a simple Web application using an e-mail
provider that is accessible on the Internet. As an example, it uses Gmail.
Note
SAP does not act as e-mail provider. To use this service, please cooperate with an external e-mail provider of
your choice.
Table 257:
Steps Sample Application
Prerequisites [page 459] The application is also available as a sample in the SAP
1. Create a Dynamic Web Project and Servlet [page 459] Cloud Platform SDK:
Prerequisites
You have installed the SAP Cloud Platform Tools and created a SAP HANA Cloud server runtime environment as
described in Setting Up the Development Environment [page 43].
To develop applications for the SAP Cloud Platform, you require a dynamic Web project and servlet.
1. From the Eclipse main menu, choose File New Dynamic Web Project .
2. In the Project name field, enter mail.
3. In the Target Runtime pane, select the runtime you want to use to deploy the application. In this tutorial, you
use Java Web.
4. In the Configuration area, leave the default configuration and choose Finish.
5. To add a servlet to the project you have just created, select the mail node in the Project Explorer view.
6. From the Eclipse main menu, choose File New Servlet .
7. Enter the Java package com.sap.cloud.sample.mail and the class name MailServlet.
8. Choose Finish to generate the servlet.
You add code to create a simple Web UI for composing and sending an e-mail message. The code includes the
following methods:
package com.sap.cloud.sample.mail;
import java.io.IOException;
import java.io.PrintWriter;
import javax.annotation.Resource;
import javax.mail.Message.RecipientType;
import javax.mail.MessagingException;
import javax.mail.Session;
import javax.mail.Transport;
import javax.mail.internet.InternetAddress;
import javax.mail.internet.MimeBodyPart;
import javax.mail.internet.MimeMessage;
import javax.mail.internet.MimeMultipart;
import javax.servlet.ServletException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
/**
 * Servlet implementing a mail example which shows how to use the connectivity service APIs to send e-mail.
 * The example provides a simple UI to compose an e-mail message and send it. The post method uses
 * the connectivity service and the javax.mail API to send the e-mail.
 */
public class MailServlet extends HttpServlet {
@Resource(name = "mail/Session")
private Session mailSession;
private static final long serialVersionUID = 1L;
private static final Logger LOGGER =
LoggerFactory.getLogger(MailServlet.class);
/** {@inheritDoc} */
@Override
    protected void doGet(HttpServletRequest request, HttpServletResponse response)
            throws ServletException, IOException {
        // Show input form to user
        response.setHeader("Content-Type", "text/html");
        PrintWriter writer = response.getWriter();
        writer.write("<!DOCTYPE HTML PUBLIC \"-//W3C//DTD HTML 4.01 Transitional//EN\" "
            + "\"http://www.w3.org/TR/html4/loose.dtd\">");
        writer.write("<html><head><title>Mail Test</title></head><body>");
        writer.write("<form action='' method='post'>");
        writer.write("<table style='width: 100%'>");
        writer.write("<tr>");
        writer.write("<td width='100px'><label>From:</label></td>");
        writer.write("<td><input type='text' size='50' value='' name='fromaddress'></td>");
Test your code using the local file system before configuring your mail destination and testing the application in
the cloud.
1. To test your application on the local server, select the servlet and choose Run Run As Run on Server .
2. Make sure that the Manually define a new server radio button is selected and select SAP Java Web
Server .
3. Choose Finish. A sender screen appears, allowing you to compose and send an e-mail. The sent e-mail is
stored in the work/mailservice directory contained in the root of your SAP Cloud Platform local runtime
server.
Note
To send the e-mail through a real e-mail server, you can configure a destination as described in the next
section, but using the local server runtime. Remember that once you have configured a destination for local
testing, messages are no longer sent to the local file system.
Create a mail destination that contains the SMTP settings of your e-mail provider. The name of the mail
destination must match the name used in the resource reference in the web.xml descriptor.
1. In the Eclipse main menu, choose File New Other Server Server .
2. Select the server type SAP Cloud Platform and choose Next.
3. In the SAP Cloud Platform Application dialog box, enter the name of your application, account, user, and
password and choose Finish. The new server is listed in the Servers view.
4. Double-click the server and switch to the Connectivity tab.
Table 258:
Property Value
mail.transport.protocol smtp
mail.smtp.host smtp.gmail.com
mail.smtp.auth true
mail.smtp.starttls.enable true
mail.smtp.port 587
Table 259:
Property Value
mail.transport.protocol smtps
mail.smtps.host smtp.gmail.com
mail.smtps.auth true
mail.smtps.port 465
8. Save the destination to upload it to the cloud. The settings take effect when the application is next started.
9. In the Project Explorer view, select MailServlet.java and choose Run Run As Run on Server .
10. Make sure that the Choose an existing server radio button is selected and select the server you have just
defined.
11. Choose Finish to deploy to the cloud. You should now see the sender screen, where you can compose and
send an e-mail
Internet Connectivity
Applications that require a connection to a remote service can use the connectivity service to configure HTTP or
RFC endpoints. In a provider-managed application, such an endpoint can be defined either once by the application
provider or by each application consumer. If the application needs to use the same endpoint regardless of
the current application consumer, the destination that contains the endpoint configuration is uploaded by the
application provider. If the endpoint should be different for each application consumer, the destination is
uploaded by each particular application consumer.
Destinations can be simultaneously configured on three levels: application, consumer account and subscription.
This means it is possible to have one and the same destination on more than one configuration level. For more
information, see Destinations [page 324].
When the application accesses the destination at runtime, the connectivity service first tries to look up the
requested destination in the consumer account on subscription level. If no destination is available there, it checks
whether the destination is available on the account level of the consumer account. If there is still no destination found,
the connectivity service searches on application level of the provider account.
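This three-level fallback can be modeled with a small sketch. The class, field names, and destination values below are hypothetical; only the lookup order mirrors the behavior described above:

```java
import java.util.HashMap;
import java.util.Map;

public class DestinationResolver {
    // Hypothetical model of the three configuration levels, checked in
    // priority order: subscription, then consumer account, then provider
    // application level.
    final Map<String, String> subscriptionLevel = new HashMap<>();
    final Map<String, String> accountLevel = new HashMap<>();
    final Map<String, String> applicationLevel = new HashMap<>();

    String resolve(String name) {
        if (subscriptionLevel.containsKey(name)) {
            return subscriptionLevel.get(name);
        }
        if (accountLevel.containsKey(name)) {
            return accountLevel.get(name);
        }
        // may return null if the destination is not configured on any level
        return applicationLevel.get(name);
    }
}
```

A consumer-level entry therefore shadows a provider-level entry of the same name, which is what allows each application consumer to override an endpoint without touching the provider's configuration.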
Consumer-Specific Destination
Provider-Specific Destination
This connectivity type is fully applicable when working with connectivity service 2.x.
Related Information
Introduction
You can create connectivity destinations for HANA XS applications, configure their security, add roles, and then
test them on the relevant landscape (productive or trial). Depending on your scenario, see:
Related Information
Overview
This section describes the usage of the connectivity service in a productive SAP HANA instance. Listed below are
the available scenarios, depending on the connectivity and authentication types you use for your development
work.
Connectivity Types
Internet Connectivity
In this case, you can develop an XS application in a productive SAP HANA instance at SAP Cloud Platform so that
the application connects to external Internet services or resources.
Note
In the outbound scenario, the useSSL property can be set to true or false, depending on the XS
application's needs.
For more information, see Using XS Destinations for Internet Connectivity [page 468]
In this case, you can develop an XS application in a productive SAP HANA instance at SAP Cloud Platform so that
the application connects, via a Cloud connector tunnel, to on-premise services and resources.
The corresponding XS parameters for all productive landscapes are the same. That is:
Note
When XS applications consume the connectivity service to connect to on-premise systems, the useSSL
property must always be set to false.
The communication between the XS application and the proxy listening on localhost is always via HTTP.
Whether the connection to the on-premise back-end should be HTTP or HTTPS is a matter of access control
configuration in the Cloud connector. For more information, see Configuring Access Control (HTTP) [page
389].
For more information, see Using XS Destinations for On-Demand to On-Premise Connectivity [page 472]
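A hedged sketch of an .xshttpdest file for the on-premise case is shown below. The host, port, and proxy port are placeholders, not values from this documentation; use the XS proxy parameters documented for your landscape:

```
host = "virtualpingbackend";  // virtual host exposed in the Cloud connector
port = 1234;                  // virtual port exposed in the Cloud connector
useProxy = true;
proxyHost = "localhost";
proxyPort = <proxy-port>;     // landscape-specific on-premise proxy port
useSSL = false;               // always false for on-premise connectivity
```

The application always talks plain HTTP to the local proxy; whether the hop from the Cloud connector to the back end uses HTTP or HTTPS is decided by the access control entry in the Cloud connector.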
No Authentication
Basic Authentication
You need credentials to access an Internet or on-premise service. To meet this requirement, proceed as follows:
1. Open a Web browser and start the SAP HANA XS Administration Tool (https://
<schema><account>.<host>/sap/hana/xs/admin/).
2. On the XS Applications page, expand the nodes in the application tree to locate your application.
3. Select the .xshttpdest file to display details of the HTTP destination and then choose Edit.
4. In the AUTHENTICATION section, choose the Basic radio button.
5. Enter the credentials for the on-premise service.
6. Save your entries.
Context
This tutorial explains how to create a simple SAP HANA XS application, which is written in server-side JavaScript
and makes use of theconnectivity service for making Internet connections.
In the HTTP example, the package is named connectivity and the XS application is mapinfo. The output displays
information from Google Maps showing the distance between Frankfurt and Cologne, together with the travel
time by car. All this information is provided in American English.
Prerequisites
● You have a productive SAP HANA instance. For more information, see Using a Productive SAP HANA
Database System [page 1080].
● You have installed the SAP HANA tools. For more information, see Installing SAP HANA Tools for Eclipse
[page 68].
1. Initial Steps
To create and assign an XS destination, you need to have a developed HANA XS application.
● If you have already created one and have opened a database tunnel, go straight to procedure 2. Create an XS
Destination File on this page.
● If you need to create an XS application from scratch, go to page Creating an SAP HANA XS Hello World
Application Using SAP HANA Studio [page 73] and execute procedures 1 to 4. Then execute the procedures
from this page (2 to 5).
Note
The subpackage in which you will later create your XS destination and XSJS files has to be named connectivity.
1. In the Project Explorer view, select the connectivity folder and choose File New File .
2. Enter the file name google.xshttpdest and choose Finish.
3. Copy and paste the following destination configuration settings:
host = "maps.googleapis.com";
port = 80;
pathPrefix = "/maps/api/distancematrix/json";
useProxy = true;
proxyHost = "proxy";
proxyPort = 8080;
authType = none;
useSSL = false;
timeout = 30000;
1. In the Project Explorer view, select the connectivity folder and choose File New File .
2. Enter the file name google_test.xsjs and choose Finish.
3. Copy and paste the following JavaScript code into the file:
var destPackage = "connectivity";
var destName = "google";
try {
    // read the XS destination created in the previous procedure
    var dest = $.net.http.readDestination(destPackage, destName);
    var client = new $.net.http.Client();
    // query string appended to the pathPrefix of the destination; the
    // parameters shown here were reconstructed and may need adjusting
    var request = new $.web.WebRequest($.net.http.GET,
        "?origins=frankfurt&destinations=cologne&mode=driving&language=en-US&sensor=false");
    client.request(request, dest);
    var response = client.getResponse();
    $.response.contentType = "application/json";
    $.response.setBody(response.body.asString());
    $.response.status = $.net.http.OK;
} catch (e) {
    $.response.contentType = "text/plain";
    $.response.setBody(e.message);
}
Note
To consume an Internet service via HTTPS, you need to export your HTTPS service certificate in X.509
format, import it into a trust store, and assign it to your activated destination. You do this in the
SAP HANA XS Administration Tool (https://<schema><account>.<host>/sap/hana/xs/admin/). For more
information, see Developer Guide for SAP HANA Studio → section "3.6.2.1 SAP HANA XS Application
Authentication" and the SAP HANA Developer Guides listed in the Related Links section below. Refer to the
SAP Cloud Platform Release Notes to find out which HANA SPS is supported by SAP Cloud Platform.
1. In the Systems view, expand Security Users and then double-click your user ID.
2. On the Granted Roles tab, choose the + (Add) button.
3. Select the model_access role in the list and choose OK. The role is now listed on the Granted Roles tab.
4. Choose Deploy in the upper right corner of screen. A message confirms that your user has been modified.
Open the cockpit and proceed as described in Launching SAP HANA XS Applications [page 1079].
You will be authenticated by SAML and should then see the following response:
{
"destination_addresses" : [ "Cologne, Germany" ],
"origin_addresses" : [ "Frankfurt, Germany" ],
"rows" : [
{
"elements" : [
{
"distance" : {
"text" : "190 km",
"value" : 190173
},
"duration" : {
"text" : "1 hour 58 mins",
"value" : 7103
},
"status" : "OK"
}
]
}
],
"status" : "OK"
}
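The fields of this response can also be consumed programmatically. The following is a minimal sketch in plain JavaScript (runnable outside the XSJS runtime, using the sample payload above); the function name is illustrative and not part of any SAP API:

```javascript
// Extract the first origin/destination pair from a Distance Matrix response.
function extractRoute(responseBody) {
  var data = JSON.parse(responseBody);
  if (data.status !== "OK") {
    throw new Error("Service returned status: " + data.status);
  }
  // rows[i].elements[j] pairs the i-th origin with the j-th destination.
  var element = data.rows[0].elements[0];
  return {
    from: data.origin_addresses[0],
    to: data.destination_addresses[0],
    distanceKm: element.distance.value / 1000,            // distance.value is in meters
    durationMin: Math.round(element.duration.value / 60)  // duration.value is in seconds
  };
}

// Sample payload matching the response shown in this tutorial.
var sample = JSON.stringify({
  destination_addresses: ["Cologne, Germany"],
  origin_addresses: ["Frankfurt, Germany"],
  rows: [{ elements: [{ distance: { text: "190 km", value: 190173 },
                        duration: { text: "1 hour 58 mins", value: 7103 },
                        status: "OK" }] }],
  status: "OK"
});
var route = extractRoute(sample);
```

Checking the status field before reading rows avoids surprising errors when the service rejects a request, since error responses omit the rows content.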
Additional Example
You can also see an example for enabling server-side JavaScript applications to use the outbound connectivity
API. For more information, see Developer Guide for SAP HANA Studio → section "8.4.1 Tutorial: Using the XSJS
Outbound API".
Note
For more information, see the SAP HANA Developer Guides listed below. Refer to the SAP Cloud Platform
Release Notes to find out which HANA SPS is supported by SAP Cloud Platform.
Related Information
Context
This tutorial explains how to create a simple SAP HANA XS application that consumes a sample back-end system
exposed via the Cloud connector.
In this example, the XS application consumes an on-premise system with basic authentication on landscape
hana.ondemand.com.
Prerequisites
● You have a productive SAP HANA instance. For more information, see Using a Productive SAP HANA
Database System [page 1080].
● You have installed the SAP HANA tools. For more information, see Installing SAP HANA Tools for Eclipse
[page 68]. You need them to open a Database Tunnel.
● You have Cloud connector 2.x installed on an on-premise system. For more information, see Installing the
Cloud Connector [page 483].
● A sample back-end system with basic authentication is available on an on-premise host. For more
information, see Setting Up an Application as a Sample Back-End System [page 428].
● You have created a tunnel between your account and a Cloud connector. For more information, see Initial
Configuration [page 504] → section "Establishing Connections to SAP Cloud Platform".
● The back-end system is exposed for the SAP HANA XS application via the Cloud connector configuration, using the settings virtual_host = virtualpingbackend and virtual_port = 1234. For more information, see Consuming Back-End Systems (Java Web or Java EE 6 Web Profile) [page 409].
Note
The last two prerequisites can also be fulfilled by exposing any other available HTTP service in your on-premise network. In this case, adjust the pathPrefix value accordingly, as mentioned below in procedure "2. Create an XS Destination File".
1. Initial Steps
To create and assign an XS destination, you need to have a developed HANA XS application.
● If you have already created one and have opened a database tunnel, go straight to procedure 2. Create an XS
Destination File on this page.
Note
The subpackage in which you will later create your XS destination and XSJS files has to be named connectivity.
1. In the Project Explorer view, select the connectivity folder and choose File New File .
2. Enter the file name odop.xshttpdest and choose Finish.
3. Copy and paste the following destination configuration settings:
host = "virtualpingbackend";
port = 1234;
useSSL = false;
pathPrefix = "/BackendAppHttpBasicAuth/basic";
useProxy = true;
proxyHost = "localhost";
proxyPort = 20003;
timeout = 3000;
Note
In case you use an SDK version equal to or lower than 1.44.0.1 (Java Web) or 2.24.13 (Java EE 6 Web Profile), respectively, you will find the on-premise WAR files in directory <SDK_location>/tools/samples/connectivity/onpremise. In that case, the pathPrefix should be /PingAppHttpBasicAuth/pingbasic.
1. In the Project Explorer view, select the connectivity folder and choose File New File .
2. Enter the file name ODOPTest.xsjs and choose Finish.
3. Copy and paste the following JavaScript code into the file:
$.response.contentType = "text/html";
var dest = $.net.http.readDestination("connectivity","odop");
var client = new $.net.http.Client();
var req = new $.web.WebRequest($.net.http.GET, "");
client.request(req, dest);
var response = client.getResponse().body.asString();
$.response.setBody(response);
Note
You also need to enter your on-premise credentials. You should not enter them in the destination file since they
must not be exposed as plain text.
1. Open a Web browser and start the SAP HANA XS Administration Tool (https://
<schema><account>.<host>/sap/hana/xs/admin/).
2. On the XS Applications page, expand the nodes in the application tree to locate your application.
3. Select the odop.xshttpdest file to display the HTTP destination details and then choose Edit.
4. In section AUTHENTICATION, choose the Basic radio button.
5. Enter your on-premise credentials (user and password).
6. Save your entries.
Note
If you later need to make another configuration change to your XS destination, you need to enter your
password again since it is no longer remembered by the editor.
1. In the Systems view, expand Security Users and then double-click your user ID.
2. On the Granted Roles tab, choose the + (Add) button.
3. Select the model_access role in the list and choose OK. The role is now listed on the Granted Roles tab.
4. Choose Deploy in the upper right corner of screen. A message confirms that your user has been modified.
Open the cockpit and proceed as described in Launching SAP HANA XS Applications [page 1079].
The Principal Propagation scenario is available for HANA XS applications. It propagates the currently logged-in user to an on-premise back-end system using the Cloud connector and the connectivity service. To configure the scenario, make sure to:
2. Open the Cloud connector and mark your HANA instance as trusted in the Principal Propagation tab. The HANA instance name is displayed in the cockpit under Persistence Databases & Schemas . For more information, see Setting Up Trust [page 513].
Related Information
port    Enables you to specify the port number to use for connections to the HTTP destination hosting the service or data you want your SAP HANA XS application to access.
        ● For Internet connection: 80, 443
        ● For on-demand to on-premise connection: 1080
        ● For service-to-service connection: 8443
Related Information
SAP HANA Developer Guide → section "3.7.3 HTTP Destination Configuration Syntax"
Context
This section describes the usage of the connectivity service when you develop and deploy SAP HANA XS applications in a trial environment. Currently, you can create XS destinations for consuming HTTP Internet services only.
The tutorial explains how to create a simple SAP HANA XS application that is written in server-side JavaScript and uses the connectivity service to make Internet connections. In the HTTP example, the package is named connectivity and the XS application is mapinfo. The output displays information from Google Maps showing the distance between Frankfurt and Cologne, together with the travel time by car; the information is returned in American English.
Features
In this case, you can develop an XS application in a trial environment at SAP Cloud Platform so that the application
connects to external Internet services or resources.
XS parameter    hanatrial.ondemand.com
useProxy        true
proxyHost       proxy-trial
proxyPort       8080
Note
The useSSL property can be set to true or false depending on the XS application's needs.
1. Initial Steps
To create and assign an XS destination, you need to have a developed HANA XS application.
● If you have already created one and have opened a database tunnel, go straight to procedure 2. Create an XS
Destination File on this page.
● If you need to create an XS application from scratch, go to page Creating an SAP HANA XS Hello World
Application Using SAP HANA Studio [page 73] and execute procedures 1 to 4. Then execute the procedures
from this page (2 to 5).
1. In the Project Explorer view, select the connectivity folder and choose File New File .
2. Enter the file name google.xshttpdest and choose Finish.
3. Copy and paste the following destination configuration settings:
host = "maps.googleapis.com";
port = 80;
pathPrefix = "/maps/api/distancematrix/json";
useProxy = true;
proxyHost = "proxy-trial";
proxyPort = 8080;
authType = none;
useSSL = false;
timeout = 30000;
1. In the Project Explorer view, select the connectivity folder and choose File New File .
2. Enter the file name google_test.xsjs and choose Finish.
3. Copy and paste the following JavaScript code into the file:
try {
    var dest = $.net.http.readDestination("connectivity", "google");
    var client = new $.net.http.Client();
    // query string reconstructed for this example
    var req = new $.web.WebRequest($.net.http.GET, "?origins=Frankfurt&destinations=Cologne&language=en-US");
    client.request(req, dest);
    var response = client.getResponse();
    $.response.contentType = "application/json";
    $.response.setBody(response.body.asString());
    $.response.status = $.net.http.OK;
} catch (e) {
    $.response.contentType = "text/plain";
    $.response.setBody(e.message);
}
1. In the Systems view, select your system and from the context menu choose SQL Console.
2. In the SQL console, enter the following, replacing <SAP HANA Cloud user> with your user:
call
"HCP"."HCP_GRANT_ROLE_TO_USER"('p1234567890trial.myhanaxs.hello::model_access',
'<SAP HANA Cloud user>')
3. Execute the procedure. You should see a confirmation that the statement was successfully executed.
Open the cockpit and proceed as described in Launching SAP HANA XS Applications [page 1079].
You will be authenticated by SAML and should then see the following response:
{
"destination_addresses" : [ "Cologne, Germany" ],
"origin_addresses" : [ "Frankfurt, Germany" ],
"rows" : [
{
"elements" : [
{
"distance" : {
"text" : "190 km",
"value" : 190173
},
"duration" : {
"text" : "1 hour 58 mins",
"value" : 7103
},
"status" : "OK"
}
]
}
],
"status" : "OK"
}
Related Information
Creating an SAP HANA XS Hello World Application Using SAP HANA Web-based Development Workbench [page
69]
Content
Note
This documentation refers to SAP Cloud Platform cloud connector (formerly known as SAP HANA Cloud
Connector) version 2.9+. For SAP Cloud Platform cloud connector versions prior to version 2.9, please see SAP
Cloud Platform documentation on SCN (ABAP Connectivity Wiki, section Documentation, chapter 1.4.1.3).
Table 260:
Section                   Description
Advantages [page 481]     How the Cloud connector helps you to connect your on-premise systems to SAP Cloud Platform.
Scenarios [page 481]      Learn more about the different connection setups you can choose.
Basic Tasks [page 482]    Primary steps you need to perform to connect the Cloud connector to your SAP Cloud Platform account.
What's New? [page 483]    Stay up to date with the new Cloud connector features.
Context
The SAP Cloud Platform cloud connector (Cloud connector) serves as the link between on-demand applications in
SAP Cloud Platform and existing on-premise systems. It combines an easy setup with a clear configuration of the
systems that are exposed to SAP Cloud Platform. In addition, you can control the resources available for the cloud
applications in those systems. Thus, you can benefit from your existing assets without exposing the whole internal
landscape.
The Cloud connector runs as an on-premise agent in a secured network and acts as a reverse invoke proxy between the on-premise network and SAP Cloud Platform. Due to its reverse invoke support, you don't need to configure the on-premise firewall to allow external access from the cloud to internal systems. The Cloud connector provides fine-grained control over:
You can use the Cloud connector in business-critical enterprise scenarios. The tool automatically re-establishes broken connections, provides audit logging of the inbound traffic and configuration changes, and can be run in a high-availability setup.
In the Scenarios section below, follow the steps according to the protocol you need to use (HTTP or RFC).
Advantages
Compared to the approach of opening ports in the firewall and using reverse proxies in the DMZ to establish
access to on-premise systems, the Cloud connector has the following advantages:
● The firewall of the on-premise network does not have to open an inbound port to establish connectivity from
SAP Cloud Platform to an on-premise system. In the case of allowed outbound connections, no modifications
are required.
● The Cloud connector supports additional protocols, apart from HTTP. For example, the RFC protocol
supports native access to ABAP systems by invoking function modules.
● The Cloud connector can be used to connect on-premise database or BI tools to SAP HANA databases in the cloud. That means it also supports the opposite connection direction (from the on-premise system to the cloud).
● The Cloud connector allows propagating the identity of cloud users to on-premise systems in a secure way.
● The Cloud connector is easy to install and configure, that is, it comes with a low TCO and fits well with cloud scenarios. SAP provides standard support for it.
Scenarios
Note
Depending on the type of installation setup, the Cloud connector can also be installed in an environment
managed by SAP or a 3rd party provider. In this case, special procedures may apply for configuration. If so,
they are mentioned in the corresponding configuration steps.
Table 262:
Basic Tasks
The following steps are required to connect the Cloud connector to your SAP Cloud Platform account:
You can follow the release notes of SAP Cloud Platform to stay informed about updates of the Cloud
connector.
Related Information
Choose one of the procedures listed below to install Cloud connector 2.x, depending on your preferred operating system.
On Microsoft Windows and Linux, two installation modes are available: portable version and installer
version. On Mac OS X, only the portable version is available.
● Portable version - it can be easily installed by just extracting a compressed archive into an empty directory.
It does not require administrator or root privileges for the installation. Restrictions:
○ It cannot be run in the background as a Windows Service or Linux daemon (with automatic start
capabilities at boot time).
○ It does not support an automatic upgrade procedure. So, if you want to update a portable installation,
you will have to delete the current installation, extract the new version, and then re-do the configuration.
● Installer version - it requires administrator or root permissions for the installation and can be set up to run
as a Windows Service or Linux daemon in the background. It can also be easily upgraded, retaining all the
configuration and customizing. It is the recommended variant for productive setups.
Prerequisites
There is a list of prerequisites you need to fulfill to successfully install the Cloud connector 2.x. For more
information, see Prerequisites [page 484].
Related Information
1.4.1.3.1.1 Prerequisites
The listed prerequisites below need to be fulfilled for successful installation of the Cloud connector 2.x.
Connectivity Restrictions
For general information about SAP Cloud Platform restrictions, see Product Prerequisites and Restrictions [page
8].
For specific information about all connectivity restrictions, see SAP Cloud Platform Connectivity [page 311] →
section "Restrictions".
Hardware
● You have downloaded the Cloud connector installation archive from SAP Development Tools for Eclipse.
● A JDK 7 or 8 needs to be installed. Due to problems with expired root CA certificates contained in older patch
levels of JDK 7, we recommend that you install the most recent patch level. An up-to-date SAP JVM can be
downloaded from the SAP Development Tools for Eclipse page as well.
Caution
Do not use Apache Portable Runtime (APR) on the system on which you use the Cloud connector. If you cannot avoid using APR and want to do so at your own risk, you need to manually adapt the default-server.xml configuration file in directory <scc_installation_folder>/config_master/org.eclipse.gemini.web.tomcat. To do so, follow the documentation of the HTTPS port configuration for APR.
Supported JDKs
Table 263:
JDK Version Cloud Connector Version
Network
You need to have Internet connection at least to the following hosts (depending on the data center), to which you
can connect your Cloud connector:
Table 264:
Data Center (Landscape host) Hosts IP Addresses
connectivitytunnel.hana.ondemand.com            155.56.210.84
connectivitycertsigning.us1.hana.ondemand.com   65.221.12.241
connectivitytunnel.us1.hana.ondemand.com        65.221.12.41
connectivitytunnel.us2.hana.ondemand.com        64.95.110.214
connectivitytunnel.ap1.hana.ondemand.com        210.80.140.246
connectivitytunnel.cn1.hana.ondemand.com        157.133.192.141
connectivitytunnel.jp1.hana.ondemand.com        157.133.150.141
connectivitytunnel.hanatrial.ondemand.com       155.56.219.27
Note
If you install the Cloud connector in a network segment that is isolated from the backend systems, you need to
make sure that the exposed hosts and ports are still reachable and open them in the firewall protecting them:
● for HTTP, these are the ports you chose for the HTTP/S server.
● for LDAP, it is the port of the LDAP server.
● in case of RFC it depends on whether you use a SAProuter or not and whether load balancing is used:
For more information about the used ABAP server ports, see also: Ports of SAP NetWeaver Application Server
ABAP.
Table 265:
Operating System Version                                       Architecture   Cloud Connector Version
SUSE Linux Enterprise Server 11, Redhat Enterprise Linux 6     x86_64
SUSE Linux Enterprise Server 12, Redhat Enterprise Linux 7     x86_64         2.5.1 and higher
Related Information
Prerequisites
● You have one of the following 64-bit operating systems: Windows 7, Windows 8.1, Windows 10, Windows
Server 2008 R2, Windows Server 2012, or Windows Server 2012 R2.
● You have downloaded either the portable variant as ZIP archive for Windows, or the MSI installer from
the SAP Development Tools for Eclipse page.
● You need to install Microsoft Visual Studio C++ 2010 runtime libraries. For more information, see Microsoft
Visual Studio C++ 2010 Redistributable Package (x64)
Note
Even if you have a more recent version of the Microsoft Visual C++ runtime libraries, you still need to install
the Microsoft Visual Studio C++ 2010 libraries.
● Java 7 or Java 8 needs to be installed. In case you want to use SAP JVM, you can download it from the SAP
Development Tools for Eclipse page.
● When using the portable variant, the environment variable <JAVA_HOME> needs to be set to the Java
installation directory, so that the bin subdirectory can be found. Alternatively, you can add the relevant bin
subdirectory to the <PATH> variable.
Context
You can choose between a simple portable variant of the Cloud connector and the MSI-based installer. The installer is the generally recommended option and can be used for both developer and productive scenarios. It takes care, for example, of registering the Cloud connector as a Windows service so that it starts automatically after a machine reboot.
Tip
If you are a developer, you might want to use the portable variant as you can run the Cloud connector after a
simple unzip (archive extraction). You might want to use it also if you cannot perform a true installation due to
lack of permissions, or if you need to use multiple versions of the Cloud connector simultaneously on the same
machine.
Portable Scenario
1. Extract the <sapcc-<version>-windows-x64.zip> ZIP file to an arbitrary directory on your local file
system.
2. Set the environment variable JAVA_HOME to the installation directory of the JDK you want to use to run the
Cloud connector. (Alternatively, you can add the bin subdirectory of the JDK installation directory to the
PATH environment variable.)
3. Change to the Cloud connector installation directory and start it via the go.bat batch file.
4. Continue with the Next Steps section.
Note
Cloud connector 2.x is not started as a service when using the portable variant, and hence will not
automatically start after a reboot of your system. Also, the portable version does not support the automatic
upgrade procedure.
Installer Scenario
Note
Cloud connector 2.x is started as a Windows service in the productive use case. Hence, installation requires administrator permissions. After installation, the service can be administrated under Control Panel Administrative Tools Services . The service name is Cloud Connector 2.0. Make sure that the service is executed with a user that has limited privileges. Typically, privileges allowed for service users are defined by
Next Steps
1. In a browser, enter: https://<hostname>:8443, where <hostname> is the host name of the machine on
which you have installed the Cloud connector.
If you access the Cloud connector locally from the same machine, you can just enter localhost.
2. Continue with initial configuration of the Cloud connector.
For more information, see Initial Configuration [page 504].
Related Information
Prerequisites
● You have one of the following 64-bit operating systems: SUSE Linux Enterprise Server 11 or 12, or Redhat
Enterprise Linux 6 or 7
● You have downloaded either the portable variant as tar.gz archive for Linux or the RPM installer
contained in the ZIP for Linux, from the SAP Development Tools for Eclipse page.
● Java 7 or Java 8 needs to be installed. In case you want to use SAP JVM, you can download it from the SAP
Development Tools for Eclipse page as well. When installing it via the RPM package, the Cloud connector will
detect it and use it for its runtime.
● When using the tar.gz archive, the environment variable <JAVA_HOME> needs to be set to the Java
installation directory, so that the bin subdirectory can be found. Alternatively, you can add the Java
installation's bin subdirectory to the <PATH> variable.
You can choose between a simple portable variant of the Cloud connector and the RPM-based installer. The installer is the generally recommended option and can be used for both developer and productive scenarios. It takes care, for example, of registering the Cloud connector as a daemon service so that it starts automatically after a machine reboot.
Tip
If you are a developer, you might want to use the portable variant as you can run the Cloud connector after a
simple "tar -xzof" execution. You might want to use it also if you cannot perform a true installation due to
lack of operating system permissions, or if you need to use multiple versions of the Cloud connector
simultaneously on the same machine.
Portable Scenario
1. Extract the tar.gz file to an arbitrary directory on your local file system using the following command:
Note that by using the "o" parameter, the extracted files are assigned to the user ID and the group ID of the user who unpacked the archive. This is the default behavior for users other than root.
2. Change to this directory and start the Cloud connector via the go.sh script.
3. Continue with the Next Steps section.
Note
In this case, Cloud connector is not started as a daemon, and hence will not automatically start after a reboot of
your system. Also, the portable version does not support the automatic upgrade procedure.
Installer Scenario
rpm -i com.sap.scc-ui-<version>.rpm
In the productive case, Cloud connector 2.x is started as daemon. If you need to manage the daemon process,
execute:
Example: After a file system restore, the system files represent Cloud connector 2.3.0 but the RPM package
management "believes" version 2.4.3 is installed. In this case, commands like rpm -U and rpm -e will not work
as expected. Furthermore, avoid the usage of the --force parameter as it may lead to unpredictable state
with two versions being installed concurrently, which is not supported.
Next Steps
1. In a browser, enter: https://<hostname>:8443, where <hostname> is the host name of the machine on
which you have installed the Cloud connector.
If you access the Cloud connector locally from the same machine, you can just enter localhost.
2. Continue with initial configuration of the Cloud connector.
For more information, see Initial Configuration [page 504].
Related Information
Prerequisites
Note
Mac OS X is not supported for productive scenarios. The developer version described below must not be used
as productive version.
● You have one of the following 64-bit operating systems: Mac OS X 10.7 (Lion), Mac OS X 10.8 (Mountain
Lion), Mac OS X 10.9 (Mavericks), Mac OS X 10.10 (Yosemite), or Mac OS X 10.11 (El Capitan).
● You have downloaded the tar.gz archive for the developer use case on Mac OS X from the SAP
Development Tools for Eclipse page.
Procedure
1. Extract the tar.gz file to an arbitrary directory on your local file system using the following command:
2. Change to this directory and start Cloud connector via the go.sh script.
3. Continue with the Next Steps section.
Note
Cloud connector is not started as a daemon, and hence will not automatically start after a reboot of your
system. Also, the Mac OS X version of Cloud connector does not support the automatic upgrade
procedure.
Next Steps
1. In a browser, enter: https://<hostname>:8443, where <hostname> is the host name of the machine on
which you have installed the Cloud connector.
If you access the Cloud connector locally from the same machine, you can just enter localhost.
2. Continue with initial configuration of the Cloud connector.
For more information, see Initial Configuration [page 504].
Related Information
Overview
The following guidelines should be applied by customers who use the connectivity service and the Cloud connector, to guarantee the highest level of security. To assist the administrator with this task, the current security status is shown in the top left corner.
The General Security Status addresses security topics that are account-independent.
● Choose any of the Actions icons in the corresponding line to navigate to the UI area that deals with that
particular topic and view or edit details.
● Navigation is not possible for the last item in the list, namely the Service User.
● The service user is specific to the Windows operating system (see Installation on Microsoft Windows OS [page 488] for details) and is only visible when running the Cloud connector on Windows. It cannot be addressed through the UI. If the service user was set up properly, select the check box.
The Account-Specific Security Status lists security-related information for each and every account. Both the
account-specific and the general security status are aggregated to obtain a summary of the security status that
can then be displayed as the icon of the button mentioned above.
Note
The security status is purely of an informational nature and merely serves as a reminder to address security
issues or as confirmation that your installation complies with all recommended security settings.
Once installed, the Cloud connector provides an initial user name and password and forces the user (Administrator) to change the password upon initial login. Change the initial password to a specific one of your own immediately after installation.
The connector itself does not check the strength of the password. The Cloud connector administrator must select
a strong password that cannot be guessed easily.
Note
To enforce your company's password policy, we recommend that you configure the Administration UI to use an
LDAP server for authorizing access to the UI.
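Since the connector itself does not check password strength, an administrator or an LDAP policy has to enforce it. As an illustration only, a minimal strength check of the kind such a policy might apply could look like the following sketch; the concrete rules (length of at least 12 characters, four character classes) are assumptions, not Cloud connector requirements:

```javascript
// Illustrative password policy check: length plus four character classes.
function isStrongPassword(pw) {
  return pw.length >= 12 &&
         /[a-z]/.test(pw) &&       // at least one lowercase letter
         /[A-Z]/.test(pw) &&       // at least one uppercase letter
         /[0-9]/.test(pw) &&       // at least one digit
         /[^A-Za-z0-9]/.test(pw);  // at least one special character
}
```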
The Cloud connector is a security-critical component that handles the external access to systems of an isolated network, comparable to a reverse proxy. We therefore recommend restricting access to the operating system on which the Cloud connector is installed to the minimal set of users who administrate the Cloud connector. This minimizes the risk of unauthorized people getting access to credentials, such as certificates stored in the secure storage of the Cloud connector.
Following the same reasoning, we recommend that you use the machine to operate the Cloud connector only, and no other systems.
The "Administrator" user used to log on to the Cloud connector administration UI does not have to be an OS user of the machine on which the connector is running. This allows the OS administrator to be distinguished from the Cloud connector administrator. To make an initial connection between the connector and a particular SAP HANA Cloud account, an SAP HANA Cloud user with the needed permissions for the related account is required. We recommend that you separate these roles/duties (that is, use separate users for the Cloud connector administrator and for SAP HANA Cloud).
Note
We recommend that only a small number of users be granted access to the machine as root.
This ensures that the Cloud connector configuration data cannot be read by unauthorized users, even if they
obtain access to the hard drive.
The Cloud connector administration UI can be accessed remotely via HTTPS. The connector uses a standard X.509 self-signed certificate as its SSL server certificate. The certificate can be replaced with a customer-specific certificate that is trusted by the customer. For more information, see Recommended: Replacing the Default SSL Certificate [page 498].
We recommend that you limit the access to the administration UI to localhost. Thus, you can restrict the access to
a browser that is running on the same server as the Cloud connector.
Note
Since browsers usually do not resolve localhost to the host name whereas the certificate usually is created
under the host name, you might get a certificate warning. In this case, just skip the warning message.
Proceed as follows:
1. Open the default-server.xml file of the Web container provided as part of the Cloud connector:
○ Microsoft Windows OS: <install_dir>\config_master\org.eclipse.gemini.web.tomcat
\default-server.xml
○ Linux OS/Mac OS X: /opt/sap/scc/config_master/org.eclipse.gemini.web.tomcat/
default-server.xml
2. Modify the SSL Connector configuration in the <Host> section, which makes the Web container listen to the
localhost only (that is, IP address 127.0.0.1):
ciphers="TLS_RSA_WITH_AES_128_CBC_SHA,TLS_RSA_WITH_AES_128_CBC_SHA256,TLS_DHE_RSA_WITH_AES_128_CBC_SHA,TLS_DHE_RSA_WITH_AES_128_CBC_SHA256,TLS_DHE_DSS_WITH_AES_128_CBC_SHA,TLS_DHE_DSS_WITH_AES_128_CBC_SHA256"
compression="on" compressionMinSize="1024"
noCompressionUserAgents="gozilla,traviata,*MSIE 6.*"
compressableMimeType="text/html,text/xml,text/plain,text/javascript,text/css,text/json,application/x-javascript,application/javascript,application/json"
/>
Note
When setting up a Cloud connector in high-availability mode, restricting UI access to localhost is not possible. Otherwise, the master could not push configurations to the shadow, nor could the shadow connect to the master to check whether it is alive.
Caution
With regard to ciphers and sslEnabledProtocols, make sure that these parameters work correctly with the JCE you are using with your Java Virtual Machine. If they don't, you will not be able to use the high-availability setup, or the UI administration port may not start at all. If you need to modify the ciphers, we recommend using the respective section of the settings UI (see Selecting Encryption Ciphers below).
Currently, HTTP, HTTPS and RFC are supported as the protocols between SAP Cloud Platform and on-premise
systems when the Cloud connector and the connectivity service are used. The whole route from the application
virtual machine in the cloud to the Cloud connector is always SSL-encrypted.
The route from the connector to the back-end system can be SSL-encrypted or SNC-encrypted.
For more information, see Configuring Access Control (HTTP) [page 389] and Configuring Access Control (RFC)
[page 438].
We recommend that you turn on the audit log on operating system level to monitor the file operations.
The Cloud connector audit log must remain switched on during the time it is used with productive systems (set it
to audit level "ALL"; the default one is "SECURITY"). The administrators responsible for a running Cloud connector
are obliged to ensure that the audit log files are properly archived and do not get lost, in order to conform to the
local regulations. Additionally, audit logging should be switched on in the connected back-end systems.
Cloud connector administrators should not be authorized to modify files on operating system (OS) level, and OS
administrators should not have access to the Cloud connector administration UI.
By default, all available encryption ciphers are supported for HTTPS connections to the administration UI.
However, some of them may not conform to your security standards and hence should be excluded. To do so,
choose Configuration from the main menu and go to tab User Interface, section Cipher Suites:
Note
We recommend reverting to the default (all ciphers selected) whenever you plan to switch to another JVM. As
the set of supported ciphers may differ, there is a chance that the selected ciphers are not supported by
the new JVM. In that case, the Cloud connector will not start anymore, and you need to fix the issue by manually
adapting the file default-server.xml (see the ciphers attribute; compare Accessing the Cloud connector Administrator UI
above). After a successful switch, the list of eligible ciphers can be adjusted again.
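If you do have to adapt default-server.xml manually, the place to change is the ciphers attribute of the HTTPS connector element. The fragment below is only an illustration: the attribute names (ciphers, sslEnabledProtocols) are the ones referenced in this guide, while the port and cipher values are made-up examples whose validity depends on your JVM.

```xml
<!-- Illustrative fragment of default-server.xml; adjust values to your JVM -->
<Connector port="8443" SSLEnabled="true" scheme="https" secure="true"
           sslEnabledProtocols="TLSv1.2"
           ciphers="TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384" />
```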
Related Information
Overview
By default, the Cloud connector comes with a self-signed default certificate that is used to encrypt the
communication between the browser-based user interface and the Cloud connector itself. For security reasons,
however, you should replace this certificate with your own certificate so that the browser accepts the certificate
without security warnings.
Up to version 2.5.2, you need to know the password of the Cloud connector's Java keystore for this purpose. This
password is generated during installation and then kept in an encrypted secure storage area.
Note
The procedure described above, which requires manually executing command-line commands, is only
needed for versions below 2.6. As of version 2.6.0, you can easily replace the default certificate within the
Cloud connector administration UI. For more information, see Exchanging UI Certificates in the Administration
UI [page 502].
Caution
The Cloud connector's keystore may contain a certificate used in the high availability setup. This certificate
has the alias "ha". Any change to or removal of this certificate disrupts the communication between the
shadow and the master instance and, as a consequence, causes the procedure to fail. Therefore, we recommend
that you leave this entry untouched.
Procedure
● On Linux OS:
In the next procedure, we use the standard Java keytool to delete, generate, and import certificates from and
into the Cloud connector's keystore. Memorize the keystore password shown by the above command, as you will
need it for these operations.
Also make sure that you change to the directory /opt/sap/scc/config before executing the commands
described in the following.
Note
For a detailed description of keytool, see http://docs.oracle.com/javase/7/docs/technotes/tools/
solaris/keytool.html.
Related Information
Context
If you want to use a simple, self-signed certificate, follow the procedure below.
The server configuration delivered by SAP uses the same password for the key store (option -storepass) and the key
(option -keypass) under the alias tomcat.
Procedure
2. Generate a certificate:
3. Self-sign it - you will be prompted for the keypass password defined in step 2:
Overview
If you have a signed certificate produced by a trusted certificate authority (CA), go directly to step 3.
You now have a file called <csr-file-name> that you can submit to the Certificate Authority. In return, you
get a certificate.
3. Import the certificate chain that you obtained from your trusted CA:
The password is created at installation time and stored in the secure storage. Thus, only applications with access
to the secure storage can read the password. You can read the password using Java:
You might need to adapt the configuration if you want to use another key storage file or change the current
configuration (HTTPS port, authentication type, SSL protocol, and so on). You can find the SSL configuration in
the Connector section of the respective file:
Note
We recommend that you do not modify the configuration unless you have expertise in this area.
By default, the Cloud connector comes with a self-signed default certificate, which is used to encrypt the
communication between the browser-based user interface and the Cloud connector itself. For security reasons,
however, you should replace this certificate with your own certificate so that the browser accepts it without
security warnings.
Procedure
Master Instance
5. You are prompted to save the signing request in a file. The content of the file is the signing request in PEM
format.
The signing request needs to be provided to a Certificate Authority (CA) - either one within your company or
another one you trust. The CA will sign the request and the returned response should be stored in a file.
Shadow Instance
The same operation is possible on the shadow instance in a high availability setup.
Configure the Cloud connector to make it operational for connections between your SAP Cloud Platform account
and on-premise systems.
In this section:
Table 266:
Topic | Description
Initial Configuration [page 504] | Once you have installed the Cloud connector and started the Cloud connector daemon, you can log on and perform the necessary customization to make your Cloud connector operational.
Managing Accounts [page 509] | How to connect SAP Cloud Platform accounts to your Cloud connector.
Configuring Principal Propagation [page 513] | Principal Propagation [page 362] allows forwarding the logged-on identity in the cloud to the internal (on-premise) system without the need of providing the password.
Configuring Access Control [page 532] | Copy the complete access control settings from another account on the same Cloud connector.
Configuring User Store in the Cloud Connector [page 533] | Configure applications running on SAP Cloud Platform to use your corporate LDAP server as a user store.
Using Service Channels [page 534] | Service channels provide secure and reliable access from an external network to certain services on SAP Cloud Platform, which are not exposed for direct access from the Internet.
Connecting DB Tools to SAP HANA via Service Channels [page 537] | How to connect database, BI, or replication tools running in the on-premise network to a HANA database on SAP Cloud Platform using service channels of the Cloud connector.
Configuring Domain Mappings for Cookies [page 540] | Map virtual and internal domains to ensure correct handling of cookies in client/server communication.
Context
Once the Cloud connector has been installed and the Cloud connector daemon has been started, you can log on
and perform the necessary customization to make your Cloud connector operational. To do this, follow the
procedure below.
Prerequisites
We strongly recommend that you read and follow the steps described in Recommendations for Secure Setup
[page 494]. For operating the Cloud connector securely, see also Guidelines for Secure Operation of Cloud
connector [page 580].
To administer the Cloud connector, you need a Web browser. To check the list of supported browsers, go to
Product Prerequisites and Restrictions [page 8] → section "Browser Support".
1. When you first log in, you must change the password before you continue, regardless of the
installation type you have chosen.
2. Choose between master and shadow installation. Use Master if you are installing a single Cloud connector
instance or a main instance from a pair of Cloud connector instances. For more information, see Installing a
Failover Instance for High Availability [page 546].
3. You can change the password for the Administrator user from Configuration in the main menu, tab User
Interface, section Authentication.
If your internal landscape is protected by a firewall that blocks any outgoing TCP traffic, you need to specify an
HTTPS proxy that the Cloud connector can use to connect to SAP Cloud Platform. Normally, you would need to
use the same proxy settings as those being used by your standard Web browser. The Cloud connector needs this
proxy for two operations:
● Downloading the correct connection configuration corresponding to your account ID in SAP Cloud Platform.
● Establishing the SSL tunnel connection from the Cloud connector to your SAP Cloud Platform account.
If you want to skip the initial configuration, you can click the icon in the upper right corner. You might
need this in case of connectivity issues described in your logs. You can add accounts later, as described in
Managing Accounts [page 509].
When you first log on, the Cloud connector collects the following required information:
1. For Landscape Host, specify the SAP Cloud Platform landscape that should be used. You can choose the one
you need from the dropdown list. For more information, see Landscape Hosts [page 41].
2. For Account Name, Account User and Password, enter the values you obtained when you registered your
account on SAP Cloud Platform, or add a new Account User [page 26] with the role Cloud Connector Admin
from the Members tab in the SAP Cloud Platform cockpit and use the new user and password.
Note
If the Cloud connector is installed in an environment that is operated by SAP, SAP will provide a user that
you should add as new member in your SAP Cloud Platform account. In this case, please assign the Cloud
Connector Admin role (see Account Member Roles [page 30]) to the user provided by SAP. Once the
Cloud connector connection is established, this user is not needed any more since it serves for initial
connection setup only. You may revoke the corresponding role assignment then and remove the user from
the Members list.
3. Optional: You can define a Display Name, which allows you to easily recognize a specific account in the UI
compared to the technical Account Name.
4. Optional: You can define a Location ID identifying the location of this Cloud connector for a specific account.
Starting with Cloud connector release 2.9.0, the location ID is used as routing information, so it is
possible to connect multiple Cloud connectors to a single account. If you do not specify a value for Location ID,
the default is used, which represents the behavior of previous Cloud connector versions. The location
ID must be unique per account and should be an identifier that can be used in a URI. To route
requests to a Cloud connector with a location ID, the location ID must be configured in the respective
destinations.
Note
Location IDs provided in older versions of Cloud Connector will be discarded during upgrade to ensure
compatibility for existing scenarios.
5. Enter a suitable proxy host from your network and the port that is specified for this proxy. If your network
requires an authentication for the proxy, enter a corresponding proxy user and password. You need to specify
a proxy server that supports SSL communication (a standard HTTP proxy will not suffice).
Note
These settings strongly depend on your specific network setup. If you need more detailed information,
please contact your local system administrator.
6. Optional: You can provide a Description (free-text) of the account that will be shown when clicking on the
Details icon in the Actions column of the Account Dashboard. It helps you identify the particular Cloud
connector you use.
7. When you have finished with the settings, choose Save.
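The location ID entered in step 4 is consumed on the cloud side in the destination configuration. The following is a hedged sketch of an on-premise HTTP destination in the usual key=value property format; the destination name, virtual host, and location ID value are made-up examples:

```
Name=backend-erp
Type=HTTP
URL=http://virtualerp:8000
ProxyType=OnPremise
CloudConnectorLocationId=emea-dc1
Authentication=NoAuthentication
```

With ProxyType=OnPremise, the CloudConnectorLocationId property selects which of the connected Cloud connectors receives the requests.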
Note
The internal network must allow access to the port. Specific configuration for opening the respective port(s)
depends on the firewall software used. The default ports are 80 for HTTP and 443 for HTTPS. For RFC
communication, you need to open a gateway port (default: 33+<instance number>) and an arbitrary
message server port. For a connection to a HANA Database (on SAP Cloud Platform) via JDBC, you need to
open an arbitrary outbound port in your network. Mail (SMTP) communication is not supported.
● If you later need to change your proxy settings (for example, because the company firewall rules have
changed), choose Configuration from the main menu and go to tab Cloud, section HTTPS Proxy. Some proxy
servers require credentials for authentication. In this case, you need to provide the relevant user/password
information.
Once the initial setup has been completed successfully, the tunnel to the cloud endpoint is open, even though no
requests are allowed to pass until you have completed the access control setup (see Configuring Access Control
[page 532]). However, you can manually close and reopen the connection to SAP Cloud Platform by opening the
Connector State page and choosing the Disconnect button (or the Connect button to reconnect). The yellow state
icon and the status no active resources available indicate that there is still no resource exposed that could be used
from a cloud application. This requires additional configuration, which is mentioned in the Related Information
section.
Note
Once connected, you can also monitor the Cloud connector in the Connectivity section of the SAP Cloud
Platform cockpit. There, you can track attributes like version, description, and high availability setup. Every
Cloud connector configured for your account automatically appears in the Connectivity section.
Related Information
Context
As of version 2.2, it is possible to connect several accounts within a single Cloud connector installation.
Those accounts can use the Cloud connector concurrently with different configurations. When you select an account
from the drop-down box, all tab entries show the configuration, audit data, and state specific to this account. For
audit data and traces, cross-account information is merged with the account-specific parts in the UI.
Note
We recommend that you group only accounts of the same quality in a single installation:
● Productive accounts should reside on a Cloud connector that is used for productive accounts only.
● Test and development accounts could be merged, depending on the group of people who are supposed to
deal with those accounts. However, the preferred logical setup is to have separate development and
test installations.
In the account dashboard (choose your Account from the main menu), you can check the state of all account
connections managed by this Cloud connector at a glance.
In the screenshot above, the trial account (display name trial23) is already connected, but has no active
resources exposed. The test account (display name test23) is currently disconnected.
In addition, depending on the connection state, the dashboard allows you to disconnect and connect accounts
by pressing the respective button in the Actions column.
If you want an additional account to be connected with your on-premise landscape, press the Add Account
button. A dialog appears that is similar to the Initial Configuration dialog shown when establishing the first
connection.
1. The <Landscape Host> field specifies the SAP Cloud Platform landscape that should be used. You can
choose the one you need from the dropdown list. For more information, see Cockpit [page 97] → section
"Logon".
2. For <Account Name> and <Account User> (user/password), enter the values you obtained when you
registered your account on SAP Cloud Platform or add a new Account User [page 26] with role Cloud
Connector Admin from the Members tab in the SAP Cloud Platform cockpit and use the new user and
password.
Note
If the Cloud connector is installed in an environment that is operated by SAP, SAP will provide a user that
you should add as new member in your SAP Cloud Platform account. In this case, please assign the Cloud
Connector Admin role (see Account Member Roles [page 30]) to the user provided by SAP. Once the
Cloud connector connection is established, this user is not needed any more since it serves for initial
connection setup only. You may revoke the corresponding role assignment then and remove the user from
the Members list.
3. Optional: You can define a <Display Name>, which allows you to easily recognize a specific account in the UI
compared to the technical <Account Name>.
Next Steps
● To modify an existing account, press the Edit icon and then change the <Display Name>, <Location ID>
and/or <Description>.
● You can also delete an account from the list of connections. After you confirm the deletion, the account is
disconnected and all its configurations are removed from the installation.
Related Information
In this section:
Table 267:
Topic | Description
Setting Up Trust [page 513] | Configure a trust relationship in the Cloud connector to support principal propagation. Principal propagation allows forwarding the logged-on identity in the cloud to the internal system without the need of providing the password.
Configuring Kerberos in the Cloud Connector [page 530] | The Cloud connector allows you to propagate users authenticated in SAP Cloud Platform via Kerberos against back-end systems. It uses the Service For User and Constrained Delegation protocol extension of Kerberos.
Content
The purpose of the trust configuration is to support principal propagation: forwarding the logged-on identity in
the cloud to the internal system, that is, logging on with a user that matches this identity without the need
of providing the password. By default, your Cloud connector does not trust any entity that issues tokens for
principal propagation. Therefore, the list of trusted identity providers is empty in the beginning. If you decide to
make use of the principal propagation feature, you need to establish trust with at least one identity provider.
Currently, SAML2 identity providers are supported. Trust to one or more SAML2 IDPs can be configured per
account. After you have configured trust in the cockpit for your account, for example, to your own company's
identity provider(s), you can synchronize this list to your Cloud connector.
When you press the Synchronize button, the list of existing identity providers is stored locally in your Cloud
connector.
When you select an entry that reflects a SAML2 identity provider, you can see the following details about it:
Choose the icon Show Certificate Information to display detail information for the corresponding entry.
For each of the entries, you can decide whether to trust it for the principal propagation use case by choosing Edit
and selecting or deselecting the Trusted checkbox for the respective entry. This setting is stored locally.
The following procedure helps you set up principal propagation from SAP Cloud Platform to the internal
system that is to be used in a hybrid scenario.
Note
As a prerequisite for principal propagation for RFC, the following cloud application runtime versions are
required:
1. Set up trust to an entity, which is issuing an assertion for the logged on user. This is described in the section
above.
Note
If you have the following scenario: Application1 -> app-to-app SSO -> Application2 -> principal propagation -> on-
premise back-end system, you have to mark Application2 as trusted by the Cloud connector in the Trust
Configurations tab.
By default, all applications within an account are allowed to use the Cloud connector associated with the account
they run in. However, this behavior might not be desired. For some applications this is acceptable, as they need to
interact with on-premise resources; other applications, for which it is not transparent whether they try to access
on-premise data, might turn out to be malicious. For such cases, the application whitelist is useful.
As long as there is no entry in this list, all applications will be allowed to use the Cloud connector. If one or more
entries appear in the whitelist, then only these applications will be allowed to connect to the exposed systems in
the Cloud connector.
● To add one or more applications, press the Add icon. Enter a comma-separated list in the dialog's input field
and then press Save.
● To edit an existing entry, press Edit. Choose Save after editing the value.
Note
To allow subscribed applications, you need to add them to the whitelist in the format
<providerAccount>:<applicationName>.
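As an example, the dialog's input field accepts a comma-separated list like the following, where the first entry is an application running in your own account and the second a subscribed application in the <providerAccount>:<applicationName> format (all names are made up):

```
salesapp, acmecorp:approvalapp
```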
Trust Store
By default, the Cloud connector trusts every on-premise system when connecting to it via HTTPS. As this may be
an undesirable behavior from a security perspective, you can configure a trust store that acts as a whitelist of
trusted on-premise systems, represented by their respective public keys. You can configure the trust store by
choosing Configuration from the main menu. Go to tab On Premise, section Trust Store:
An empty trust store does not impose any restrictions on the trusted on-premise systems. This behavior ensures
downward compatibility, so that the Cloud connector behaves as it did before the configurable trust store was
introduced. While an empty trust store acts like an empty blacklist (nothing is blocked), it transforms into a
whitelist as soon as you add the first public key: from then on, only the listed systems are trusted.
Note
You have to provide the public keys in .der or .cer format.
Tasks
To learn more about the different options for configuring and supporting principal propagation for a particular AS
ABAP, see:
Related Information
Supported CA Mechanisms
You can enable support for Principal Propagation with X.509 certificates in two ways:
● Using a local CA in the Cloud connector. Prior to version 2.7.0, this was the only option, and the system
certificate acted both as client certificate and as CA certificate in the context of principal propagation.
● Using a Secure Login Server and delegating the CA functionality to it.
The Cloud connector then uses the configured CA approach to issue short-living certificates for logging on
the same identity in the back end that is logged on in the cloud. The configuration steps for establishing trust
with the back end are independent of the approach chosen for the CA.
In order to issue short-living certificates used for principal propagation to a back-end system, you can import an
X.509 client certificate into the Cloud connector. This CA certificate needs to be provided as PKCS#12 file
containing the (intermediate) certificate, the corresponding private key and the CA root certificate that signed the
intermediate certificate (plus potentially the certificates of any intermediate CAs, if the certificate chain is longer
than 2). Via the file upload dialog, this PKCS#12 file can be chosen from the file system, and its password also
needs to be supplied for the import process. As a second option, you can start a Certificate Signing Request
procedure like for the UI certificate - described in Exchanging UI Certificates in the Administration UI [page 502].
Note
The CA certificate should have the KeyUsage attribute keyCertSign. Many systems verify that the issuer of a
certificate has this attribute and reject a client certificate if this is not the case. When you use the Certificate
Signing Request procedure, the attribute is requested for the CA certificate.
If a CA certificate is no longer required, you can delete it. To do this, use the respective button and confirm
deletion.
If you want to delegate the CA functionality to a Secure Login Server, choose the CA via Secure Login Server option
and configure it as follows, after having set up the Secure Login Server as described in
Configuring a Secure Login Server for the Cloud Connector [page 527].
Note
For this privileged port, client certificate authentication is required, for which the Cloud connector's system
certificate is used.
● <Profile>: The Secure Login Server profile that allows issuing certificates as needed for principal
propagation with the Cloud connector.
● <Profiles Port>: The profiles port needs to be provided only if your Secure Login Server is configured
not to allow fetching profiles via the privileged authentication port. In this case, provide the
port that is configured for that functionality.
Related Information
Configuring a Secure Login Server for the Cloud Connector [page 527]
Initial Configuration (HTTP) [page 387]
Initial Configuration (RFC) [page 437]
Context
On this page, the abstract description of the principal propagation configuration is mapped to concrete step-by-step
instructions for configuring an ABAP application server for this use case.
● System certificate was issued by: CN=MyCompany CA, O=Trust Community, C=DE
● It has subject: CN=SCC, OU=HCP Scenarios, O=Trust Community, C=DE.
● An example for a short-living certificate has the subject CN=P1234567890, where P1234567890 is the
platform user
Note
In case you have applied SAP Note 2052899 to your system, you can alternatively provide an additional
parameter for icm/trusted_reverse_proxy_<x>
Note
In case you have a Web dispatcher installed in front of the ABAP system, trust needs to be added in its
configuration files with the same parameters as for the ICM. In addition, the system certificate of the Cloud
connector needs to be added to the trust list of the Web dispatcher Server PSE.
You can do this manually in the system as described below or make use of an Identity Management Solution for a
more comfortable approach. For example, for large numbers of users the rule-based certificate mapping is a good
way to save time and effort. For more information, see Rule-based Mapping of Certificates [page 521].
Optional procedure. Execute these steps in case your scenario requires basic authentication support for some of
the ICF services.
Related Information
Note
If dynamic parameters are disabled, enter the value using transaction RZ10 and restart the whole ABAP
system.
Note
To access transaction CERTRULE, you need the corresponding authorizations (see: Assigning
Authorization Objects for Rule-based Mapping [page 522]).
Note
Once you save the changes and return to transaction CERTRULE, the sample certificate that you
imported in step 2b will not be saved. This is just a sample editor view for inspecting the sample certificates
and mappings.
Related Information
Context
On this page, you find a detailed step-by-step scenario on how to configure the Cloud connector and an AS
ABAP so that the latter accepts user principals propagated from an SAP Cloud Platform account.
● A system PSE has been generated and installed on the host where the Cloud connector is running.
For more information, see the SNC User's Guide: https://service.sap.com/security → section
"Infrastructure Security".
● The system's SNC name is: p:CN=SCC, OU=HCP Scenarios, O=Trust Community, C=DE
● The ABAP system's PSE name is: p:CN=SID, O=Trust Community, C=DE
● The ABAP system's PSE and the Cloud connector's system PSE need to be signed by the same CA for mutual
authentication.
● An example for a short-living certificate has the subject CN=P1234567, where P1234567 is the platform user.
1. Configuring the ABAP System to Trust the Cloud Connector's System PSE
1. Open the SNC Access Control List for Systems (transaction code: SNC0).
2. Choose a system ID for your Cloud connector and enter it together with its SNC name: p:CN=SCC,
OU=HCP Scenarios, O=Trust Community, C=DE
3. Save the entry and then choose the Details button.
4. In the next screen, activate the check boxes for Entry for RFC activated and Entry for certificate activated.
5. Save your settings.
You can do this manually in the system as described below or make use of an Identity Management Solution for a
more comfortable approach. For example, for large numbers of users the rule-based certificate mapping is a good
way to save time and effort. For more information, see Rule-Based Certificate Mapping.
We assume that:
● The necessary security product for the SNC flavor, used by your ABAP backend systems, is already installed
on the Cloud connector host
● The Cloud connector's system PSE is opened for the operating system user under which the SCC process is
running. If this is the case, two more steps need to be performed in the Cloud connector UI.
Note
The example in Initial Configuration (RFC) [page 437] shows the library location if you use the SAP Secure
Login Client as your SNC security product. In this case (as well as for some other security products), SNC
My Name is optional, because the security product automatically uses the PSE associated with the current
operating system user under which the process is running, so you can leave that field empty. (Otherwise, in
this example it should be filled with p:CN=SCC, OU=HCP Scenarios, O=Trust Community, C=DE.)
We recommend that you use the third option shown for Quality of Protection, if your security solution
supports it, as it provides the best protection.
Create an RFC hostname mapping corresponding to the RFC destination with principal propagation on cloud
side
1. In the Access Control section of the Cloud connector, create a hostname mapping corresponding to the
cloud-side RFC destination. For more information, see Configuring Access Control (RFC) [page 438].
2. Make sure that you choose RFC SNC as Protocol and ABAP System as Back-end Type. In the SNC Partner
Name field, enter the ABAP system's SNC name, in this example p:CN=SID, O=Trust Community, C=DE.
Principal propagation provides a secure way to forward the on-demand identity to the Cloud connector and from
there to the back end. The pattern identifying the user in the subject of the generated short-living X.509
certificate, as well as its validity period, can be defined as described below.
Subject Pattern
There are two ways to define the subject's distinguished name (DN), for which the certificate will be issued:
● Using the selection menu, that is, the icon next to the field.
Thus, you can assign a value for each parameter, either directly as free text or as a variable selected from the
menu of this field. Those selectable parameters are:
● ${name}
● ${mail}
● ${display_name}
● ${login_name} (as of cloud connector version 2.8.1.1)
The values for these variables will be provided by the Certificate Authority, which also provides the values
for the subject's DN.
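For example, a subject pattern combining fixed free-text parts with one of the variables above might look as follows; the OU and O values are made-up examples, and ${name} is replaced with the concrete value when a certificate is issued:

```
CN=${name}, OU=Cloud Users, O=ACME, C=DE
```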
By choosing Generate Sample Certificate you can create a sample certificate that looks like one of the short-living
certificates created at runtime. It can be used for generating user mapping rules in the target system, for example,
via transaction CERTRULE in an ABAP system. If your subject pattern contains variable fields, a small wizard will
allow you to provide meaningful values for each of them and eventually you can save the sample certificate in DER
format.
This is the time, provided in hours, that defines how long a principal issued for a user can still be used by the
application after the token provided from the cloud side has expired.
This is the time, provided in minutes, that defines how long the certificate generated for principal propagation can
be used to authenticate against the back end. Reusing a previously generated certificate increases
performance.
Related Information
The Cloud connector is able to use on-the-fly generated X.509 user certificates to log in to on-premise systems if
the external user session is authenticated (for example by means of SAML). If you do not want to use the built-in
certification authority (CA) functionality of the Cloud connector (for example because of security considerations),
you can connect SAP SSO 2.0 Secure Login Server (SLS).
SLS is a Java application running on AS JAVA 7.20 or higher, which provides interfaces for certificate enrollment
based on:
● HTTPS
● REST
● JSON
● PKCS#10/PKCS#7
Note
Any enrollment requires successful user or client authentication, which can be a single, multiple, or even
multi-factor authentication.
Supported authentication mechanisms include:
● LDAP/ADS
● RADIUS
● SAP SSO OTP
● ABAP RFC
● Kerberos/SPNego
● X.509 TLS Client Authentication
SLS allows you to define arbitrary enrollment profiles, each with a unique profile UID in its URL, and with a
configurable authentication and certificate generation.
Requirements
For the purpose of user certification, SLS has to provide a profile with the following properties:
SLS provides all required features with SAP SSO 2.0 SP06:
Implementation
INSTALLATION
Follow the standard installation procedures for SLS. This includes the initial setup of a PKI (public key
infrastructure).
Note
SLS allows you to set up one or more of its own PKIs with a Root CA, a User CA, and so on. You can also import CAs as a PKCS#12 file or use a hardware security module (HSM) as "External User CA".
Note
You should only use HTTPS connections for any communication with SLS. AS JAVA / ICM supports TLS, and the default configuration comes with a self-signed server certificate. You may use SLS to replace this certificate with a PKI certificate.
CONFIGURATION
SSL Ports
1. Open the NetWeaver Administrator, choose Configuration > SSL, and define a new port with Client Authentication Mode = REQUIRED.
Note
You may also define another port with Client Authentication Mode = Do not request, if you have not done so yet.
2. Import the Root CA of the PKI that issued your Cloud connector service certificate.
3. Save and restart the Internet Communication Manager (ICM).
Authentication Policy
Root CA Certificate
Follow the standard installation procedure of the Cloud connector and configure SLS support:
1. Enter the Policy URL pointing to the SLS User Profile Group.
2. Select the profile, e.g. Cloud Connector User Certificates.
3. Import the Root CA certificate of SLS into the Cloud connector's trust store.
Follow the standard configuration procedure for Cloud connector support, and configure SLS support:
Context
The Cloud connector allows you to propagate users authenticated in SAP Cloud Platform via Kerberos against
back-end systems. It uses the Service For User and Constrained Delegation protocol extension of Kerberos.
The Cloud connector exchanges messages with the Key Distribution Center (KDC) to retrieve Kerberos tokens for a certain user and a back-end system.
For more information, see Kerberos Protocol Extensions: Service for User and Constrained Delegation Protocol.
Table 268:
1. An SAP Cloud Platform application calls a back-end system via the Cloud
connector.
2. The Cloud connector calls the KDC to obtain a Kerberos token for the user
propagated from the cloud.
3. The obtained Kerberos token is sent as a credential to the back-end system.
Procedure
Example
You have a back-end system protected with SPNego authentication in your corporate network. You want to call
it from a cloud application while preserving the identity of a cloud-authenticated user.
Result:
When these configurations are in place and you call a back-end system, the Cloud connector obtains an SPNego token from your KDC for the cloud-authenticated user. This token is sent along with the request to the back end, which can then authenticate the user, so that the identity is preserved.
Related Information
Kerberos Configuration
Setting Up Trust [page 513]
When adding new accounts, you can copy the complete access control settings from another account on the same Cloud connector. If you skip this step, you can do it later by using the import/export mechanism provided by the Cloud connector.
1. Choose Cloud To On-Premise from your account menu and go to tab Access Control.
2. Choose the Download icon in the upper right corner to store the current settings in a ZIP file.
3. The file can be imported later into a different Cloud connector.
In the screenshot below, there are two locations from which you can import access control settings:
● From a file, which has been previously exported from a Cloud connector
● From a different account on the same Cloud connector
In addition, there are two checkboxes that influence the behavior of the import:
● Overwrite: When this checkbox is selected, all previously existing system mappings are removed. Otherwise, the imported mappings are merged into the list of existing ones; if the same virtual host-port combination already exists, it is overridden by the imported one. By default, imported system mappings are merged into the existing ones.
● Include Resources: When this checkbox is selected (default), the resources that belong to the imported
systems will also be imported. Otherwise, only the list of system mappings will be imported - without any
exposed resource.
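The effect of the two checkboxes can be illustrated with a small sketch (illustrative only; the names and data structures are invented for this example, and the actual import works on the exported ZIP file, not on dictionaries):

```python
# Illustrative model of the import semantics: keys are virtual host:port
# combinations, values are system mapping details including "resources".

def import_mappings(existing, imported, overwrite=False, include_resources=True):
    """Merge imported system mappings into the existing ones."""
    if not include_resources:
        # keep only the mapping itself, drop the exposed resources
        imported = {k: {f: v for f, v in m.items() if f != "resources"}
                    for k, m in imported.items()}
    if overwrite:
        return dict(imported)        # previously existing mappings are removed
    merged = dict(existing)
    merged.update(imported)          # same virtual host:port -> imported one wins
    return merged
```

With the default flags, existing mappings survive unless the same virtual host:port combination is imported, matching the behavior described above.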
Prerequisites
● You have configured your cloud application to use an on-premise user provider and to consume users from
LDAP via the Cloud connector. To do this, execute the following command:
● You have created a connectivity destination (with the parameters below), to configure the on-premise user
provider:
Name=onpremiseumconnector
Type=HTTP
URL=http://scc.scim:80/scim/v1
Authentication=NoAuthentication
CloudConnectorVersion=2
ProxyType=OnPremise
Context
You can configure applications running on SAP Cloud Platform to use your corporate LDAP server as a user store.
This way, SAP Cloud Platform does not need to keep the whole user database but requests the necessary
information from the LDAP server. For that purpose, Java applications running on SAP Cloud Platform can use the
on-premise system to check credentials, search for users, and retrieve their details. In addition to the user
information, the cloud application may request information about the groups of which a specific user is a member.
One way for a cloud Java application to define user authorizations is by checking the user membership to specific
groups in the on-premise user store. For that purpose, the Java application uses the roles for the groups defined
in SAP Cloud Platform. For more information, see Managing Roles [page 1394].
The corporate LDAP server that is used in the current configuration is configured in the Cloud connector.
Note
The configuration steps below are only applicable for Microsoft Active Directory (AD).
3. Select Secure if you want to connect to the LDAP system via SSL.
4. In the Hosts field, you can manage the hosts (and ports) of your LDAP server(s).
○ Choose the Add icon to add as many hosts (and ports) as you need.
○ Choose Edit to edit the selected host.
○ Choose Delete to delete the selected hosts.
5. For User Name and Password, enter the credentials of the service user that will be used to contact the LDAP
system.
Note
The user name must be fully qualified, including the AD domain suffix. For instance
john.smith@mycompany.com.
6. In User Path, specify the LDAP subtree that contains the users.
7. In Group Path, specify the LDAP subtree that contains the groups.
8. Choose Save.
Related Information
With service channels, the Cloud connector allows secure and reliable access from an external network to certain
services on SAP Cloud Platform, which are not exposed for direct access from the Internet. The Cloud connector
takes care that the connection is always available and communication is secured.
The Service Channel for the HANA Database allows accessing HANA databases running in the cloud with
database clients (for example, clients using ODBC/JDBC drivers). You can use the Service Channel to connect
database tools, analytical tools, BI tools, or replication tools to your HANA database in your SAP Cloud Platform
account.
The Service Channel for the Virtual Machine allows accessing an SAP Cloud Platform Virtual Machine with an SSH
client. Thus you can administer the VM and adjust it to your needs.
Next Steps
Context
You can establish a connection to the HANA Database in the SAP Cloud Platform that is not directly exposed to
external access. You can do this in section On-Premise to Cloud Service Channels of the Cloud connector.
The Service Channel for HANA Database allows accessing SAP HANA databases running on the cloud via ODBC/
JDBC. You can use the Service Channel to connect database tools, analytical tools, BI tools, or replication tools to
your HANA database in your SAP Cloud Platform account.
Note
The following procedure requires a productive HANA instance to be available in the respective account.
Follow the steps below to establish a Service Channel to a HANA instance of your account.
3. In the Add Service Channel dialog, select HANA Database from the list of supported channel types.
4. Choose Next. The HANA Database dialog opens.
5. Choose the HANA instance name from the list of available HANA instances. If fetching the list fails, specify the name yourself. It must match one of the names shown under Persistence > Databases & Schemas in the cockpit.
Note
The HANA instance name is case-sensitive.
6. Choose the local instance number. This two-digit number determines the local port used to access the HANA instance in the cloud: the local port is derived from the local instance number as 3<instance number>15. For example, if the instance number is 22, the local port is 32215.
Note
The local port does not need to match the HANA port used in the cloud; the two ports are mapped transparently by the Cloud connector.
7. Leave the Enabled option selected to establish the channel immediately after clicking Save, or deselect it if the
channel should not be established yet.
8. When you are done, choose Finish.
Once you have established a HANA Database Service Channel, you can connect on-premise database or BI tools
to the selected HANA database in the cloud by using <Cloud_connector_host>:<local_HANA_port> in the
JDBC/ODBC connect strings.
For more information, see Connecting DB Tools to SAP HANA via Service Channels [page 537]
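The relationship between the local instance number and the local port from step 6 can be expressed as a small helper (an illustrative sketch only; the Cloud connector derives the port itself):

```python
def local_hana_port(instance_number):
    """Derive the local port from a two-digit instance number (3<nn>15 rule)."""
    if not 0 <= instance_number <= 99:
        raise ValueError("instance number must be a two-digit value (00-99)")
    # "3" + zero-padded instance number + "15", e.g. 22 -> 32215
    return int(f"3{instance_number:02d}15")
```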
Context
This section describes how you can connect database, BI, or replication tools running in an on-premise network to a
HANA database on SAP Cloud Platform using service channels of the Cloud connector. You can also use the high
availability support of the Cloud connector to achieve a highly available database connection. The picture below
shows the landscape in such a scenario.
● For more information on using SAP HANA instances, see Using a Productive SAP HANA Database System
[page 1080]
● For the ODBC connection string, you need a corresponding database user and password (see step 4 below). See also: Guidelines for Creating Database Users [page 1083]
● Find detailed information on failover support in the SAP HANA Administration Guide: Configuring Clients for
Failover.
Note
This link points to the latest release of SAP HANA Administration Guide. Refer to the SAP Cloud Platform
Release Notes to find out which HANA SPS is supported by SAP Cloud Platform. Find the list of guides
for earlier releases in the Related Links section below.
Procedure
1. To establish a highly available connection to one or multiple SAP HANA instances in the cloud, we recommend that you make use of the failover support of the Cloud connector. To do so, set up a master and a shadow instance. For more information, see Installing a Failover Instance for High Availability [page 546].
2. In the master instance, configure a service channel to the SAP HANA database of the SAP Cloud Platform
account to which you want to connect. Let's assume that the chosen port of the service channel is 30015. For
more information, see Configuring a Service Channel for HANA Database [page 535].
3. You can now connect on-premise DB tools via JDBC to the SAP HANA database by using the following
connection string:
jdbc:sap://<cloud-connector-master-host>:30015;<cloud-connector-shadow-host>:
30015[/?<options>]
The SAP HANA JDBC driver supports failover out of the box. All you need to do is configure the shadow instance of the Cloud connector as a failover server in the JDBC connection string. The options supported in the JDBC connection string are described in Connect to SAP HANA via JDBC.
4. You can also connect on-premise DB tools via ODBC to the SAP HANA database. The connection string is as
follows:
"DRIVER=HDBODBC32;UID=<user>;PWD=<password>;SERVERNODE=<cloud-connector-master-
host>:30015;<cloud-connector-shadow-host>:30015;"
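The two connect strings above can be assembled as in the following sketch (illustrative only; host names, user, password, and the option string are placeholders, and the Cloud connector port 30015 follows the example in step 2):

```python
def jdbc_url(master_host, shadow_host, port=30015, options=None):
    """Build a failover-enabled HANA JDBC connect string (sketch)."""
    url = f"jdbc:sap://{master_host}:{port};{shadow_host}:{port}"
    if options:
        url += "/?" + options     # optional driver options appended after /?
    return url

def odbc_string(master_host, shadow_host, user, password, port=30015):
    """Build the corresponding ODBC connect string (sketch)."""
    return (f"DRIVER=HDBODBC32;UID={user};PWD={password};"
            f"SERVERNODE={master_host}:{port};{shadow_host}:{port};")
```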
Related Information
Context
You can establish a connection to a Virtual Machine in the SAP Cloud Platform that is not directly exposed to
external access. You can do this in the On-Premise to Cloud Service Channels section of the Cloud
connector. The Service Channel for Virtual Machine allows accessing an SAP Cloud Platform Virtual Machine via SSH. You can use the Service Channel to administer the Virtual Machine and adjust it to your needs.
Note
The following procedure requires that you have created a Virtual Machine in your account before.
Follow the steps below to establish a Service Channel to a Virtual Machine of your account.
Procedure
3. In the Add Service Channel dialog, select Virtual Machine from the list of supported channel types.
4. Choose Next. The Virtual Machine dialog opens.
5. Choose the Virtual Machine <Name> from the list of available Virtual Machines. It will match one of the names
shown under Virtual Machines in the cockpit.
Note
The Virtual Machine name is case-sensitive.
7. Leave the <Enabled> option selected to establish the channel immediately after clicking Save, or deselect it if
the channel should not be established yet.
8. When you are done, choose Finish.
Next Steps
Once you have established a Service Channel for the Virtual Machine, you can connect to it with your SSH client by accessing <Cloud_connector_host>:<local_VM_port>, using the key file that was generated when creating the virtual machine.
Related Information
Context
Some HTTP servers return cookies that contain a "domain" attribute. On subsequent requests, HTTP clients should send these cookies to machines whose host names lie in the specified domain. For example, if a server sets a cookie with the attribute domain=mycompany.corp, the client returns the cookie in follow-up requests to all hosts such as ecc60.mycompany.corp, crm40.mycompany.corp, and so on, provided that the other attributes, such as "path", allow it.
However, in the setup with the Cloud connector between a client and a Web server, this may lead to potential
problems. For example, assume that you have defined a virtual host sales-system.cloud and mapped it to the
internal host name ecc60.mycompany.corp. Then, the client "thinks" it is sending an HTTP request to the host
name sales-system.cloud, while the Web server, unaware of the above host name mapping, sets a cookie for the
domain mycompany.corp. The client does not know this domain name and thus, for the next request to that Web
server, it will not attach the cookie, even though it should.
Procedure
1. Choose Cloud To On-Premise from your account menu and go to tab Cookie Domains.
2. Choose Add.
3. Enter cloud as the virtual domain, and your company name as the internal domain.
4. Choose Save.
This way, the Cloud connector will check the Web server's response for "Set-Cookie" headers, and if it finds
one with an attribute domain=intranet.corp, it will replace it with domain=sales.cloud before returning
the HTTP response to the client. Then, the client recognizes the domain name, and for the next request
against www1.sales.cloud it will attach the cookie, which will then successfully arrive at the server on
machine1.intranet.corp.
Note
Some Web servers use a syntax such as "domain=.intranet.corp" (RFC 2109), even though the newer
RFC 6265 recommends using the notation without a dot.
Note
Also bear in mind that the value of the domain attribute may be a simple host name. In this case, no extra
domain mapping is necessary on the Cloud connector. If the server sets a cookie with
"domain=machine1.intranet.corp", the Cloud connector will automatically reverse the mapping
machine1.intranet.corp to www1.sales.cloud and replace the cookie domain accordingly.
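The domain rewriting described above can be sketched as a simplified function (illustrative only; the actual Cloud connector handles further cookie attributes and the reverse mapping of host names, and this sketch normalizes the attribute name to lowercase "domain"):

```python
def rewrite_cookie_domain(set_cookie, domain_map):
    """Rewrite the domain attribute of a Set-Cookie header value.

    domain_map maps internal domains to virtual domains, for example
    {"intranet.corp": "sales.cloud"}. A leading dot (RFC 2109 style) is
    tolerated and used only for the lookup.
    """
    parts = []
    for part in set_cookie.split(";"):
        name, _, value = part.strip().partition("=")
        if name.lower() == "domain":
            value = domain_map.get(value.lstrip("."), value)
            parts.append(f"domain={value}")
        else:
            parts.append(part.strip())
    return "; ".join(parts)
```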
Learn more about operating the Cloud connector, using its administration tools and optimizing its functions.
In this section:
Table 269:
Topic Description
Using LDAP for Authentication [page 543] If you operate an LDAP server in your system landscape, you
can configure the Cloud connector to use the users available
on this server.
Installing a Failover Instance for High Availability [page 546] The Cloud connector allows the installation of a redundant
(shadow) instance, which monitors the main (master) in
stance.
Changing the UI Port [page 551] If you have to change the port for the Cloud connector admin
istration UI, you can use the changeport tool (Cloud connector
version 2.6.0+).
Securing the Activation of Traffic Traces [page 552] Tracing of network traffic data may contain business critical
information or security sensitive data. You can implement a
four-eyes principle to protect your traces (Cloud connector
version 1.3.2+).
Monitoring [page 553] Use various views to monitor the activities and state of the
Cloud connector.
Alerting [page 557] Configure the Cloud connector to send out emails whenever
critical situations occur that may prevent it from operating
correctly in the near future.
Audit Logging in the Cloud Connector [page 559] The Cloud connector provides an auditor tool that allows you
to view and manage audit log information (Cloud connector
version 2.2+).
Troubleshooting [page 562] How to monitor the state of open tunnel connections in the
Cloud connector. Display different types of logs and traces
that can help you troubleshoot connection problems.
Cloud Connector Operator's Guide [page 566] Detailed information and procedures for operating the Cloud
connector.
You can use LDAP (Lightweight Directory Access Protocol) to configure Cloud connector authentication.
Overview
After installation, the Cloud connector uses file-based user management. Initially there is one Administrator user
with the password manage, which needs to be changed on the first logon. As an alternative to this file-based user
management, the Cloud connector also supports LDAP-based user management. If you have an LDAP server in
your landscape, you can configure the Cloud connector to use the users available on that LDAP server. All users
that are in a group named admin or sccadmin will have the necessary authorization for administrating the Cloud
connector. This group membership is checked by the Cloud connector.
1. Choose Configuration from the main menu and go to tab User Interface.
2. In section Authentication, choose Switch to LDAP.
3. If you want to save intermediate changes to the LDAP configuration, choose Save Draft.
4. Usually, the LDAP server lists users in one LDAP node and user groups in another. In this case, you can use the following template for the LDAP configuration. You can copy the template into the configuration text area by choosing the rightmost button immediately below the text area. The template looks like this:
userPattern="uid={0},ou=people,dc=mycompany,dc=com"
roleBase="ou=groups,dc=mycompany,dc=com"
roleName="cn"
roleSearch="(uniqueMember={0})"
5. Change the ou and dc fields in userPattern and roleBase, according to the configuration on your LDAP
server, or use some other LDAP query.
6. Provide the LDAP server's host and port (port 389 is used by default) in the Host field. To use the secure protocol variant LDAPS, based on TLS, select the Secure checkbox.
For more information about how to set up LDAP authentication, see tomcat.apache.org/tomcat-7.0-doc/realm-howto.html.
Note
When using LDAP together with a high availability setup with master and shadow, the configuration option userPattern cannot be used. Instead, a working combination of userSearch, userSubtree, and userBase must be used.
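Assuming the same directory layout as in the template above, a working userSearch-based combination could look like the following sketch (the attribute names follow the Tomcat realm configuration referenced above; adjust the ou and dc values to your LDAP server):

```
userBase="ou=people,dc=mycompany,dc=com"
userSearch="(uid={0})"
userSubtree="true"
roleBase="ou=groups,dc=mycompany,dc=com"
roleName="cn"
roleSearch="(uniqueMember={0})"
```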
Note
If the LDAP configuration is wrong, you will probably not be able to log on to the Cloud connector again. In this case, you need to switch the Cloud connector configuration back to the file-based user store without using the administration UI. For more information, see the next section.
The same operation is possible on the shadow instance in a high availability setup (choose Shadow from the main
menu of the shadow instance and go to tab Authentication):
If your LDAP settings do not work as expected, you can use the useFileUserStore tool, provided with Cloud connector version 2.8.0 and higher, to revert to the file-based user store:
1. Change to the installation directory of the Cloud connector. To adjust the userstore, execute the following
command:
○ Microsoft Windows: useFileUserStore
○ Linux, Mac OS: ./useFileUserStore.sh
2. The tool will inform you about the successful modification of the user store.
3. To activate the file based user store, you need to restart the Cloud connector.
For older versions you need to manually edit the configuration files as described below.
1. To revert to file-based user management, replace the Realm section with the following:
<Realm className="org.apache.catalina.realm.LockOutRealm">
  <Realm className="org.apache.catalina.realm.CombinedRealm">
    <Realm className="org.apache.catalina.realm.UserDatabaseRealm"
           X509UsernameRetrieverClassName="com.sap.scc.tomcat.utils.SccX509SubjectDnRetriever"
           digest="SHA-256" resourceName="UserDatabase"/>
    <Realm className="org.apache.catalina.realm.UserDatabaseRealm"
           X509UsernameRetrieverClassName="com.sap.scc.tomcat.utils.SccX509SubjectDnRetriever"
           digest="SHA-1" resourceName="UserDatabase"/>
  </Realm>
</Realm>
2. To restart the Cloud connector service, proceed as described below depending on your operating system:
○ Microsoft Windows OS: Open the Windows Services console and restart the cloud connector service.
○ Linux OS: Execute command: service scc_daemon restart
○ Mac OS X: Not applicable because no daemon exists; it is only a "developer version".
The Cloud connector allows you to install a redundant instance, which monitors the main instance.
Context
If the main instance should go down for some reason, the redundant one can take over its role. The main instance
of the Cloud connector is called master and the redundant instance shadow. The shadow has to be installed and
connected to its master. During the setup of high availability, the master pushes the whole configuration to the
shadow. Later on, during a normal operation, the master also pushes configuration updates to the shadow,
whenever the configuration is changed. Thus, the shadow instance is kept synchronized with the master instance.
The shadow pings the master regularly, and if the master is not reachable for a while, the shadow tries to take over
the master role and to establish the tunnel to SAP Cloud Platform.
Procedure
If this flag is not activated, no shadow instance can connect to this Cloud connector. Additionally, by providing a concrete Shadow Host, you can ensure that a shadow instance can connect only from this host.
Note
By pressing the Reset button, all high availability settings are reset to their initial state. As a result, high availability is disabled and the shadow host is cleared. Resetting only works if no shadow is connected.
The shadow instance must be installed in the same network segment as the master instance. Communication
between master and shadow via proxy is not supported. The same distribution package is used for master and
shadow instance.
Note
If you plan to use LDAP for the user authentication on both master and shadow, make sure you configure it
before establishing the connection from shadow to master.
1. On first start-up of a Cloud connector instance, a UI wizard asks you whether the current instance should be
master or shadow. Choose Shadow and press Save:
Note
If you decide to attach the shadow instance to a different master, choose the Reset button. All your high
availability settings will be removed, that is, reset to their initial state. This will only work if the shadow is
currently not connected.
4. The UI on the master instance shows information about the connected shadow instance. Choose High
Availability from the main menu:
5. As of version 2.6.0, the High Availability view on the master shows an Alert Messages panel at the bottom, displaying alerts if configuration changes could not be pushed to the shadow. This can happen if a temporary network failure occurs just when a configuration change is made. An administrator can thus recognize an inconsistency in the configuration data between master and shadow that could cause trouble if the shadow needs to take over. Typically, the master recognizes this situation and automatically tries to push the configuration change again later. If this is successful, all failure alerts are removed and replaced by a warning alert showing that there had been trouble before. As of version 2.8.0.1, these alerts are integrated in the general Alerting section, so the Alert Messages panel no longer exists in the High Availability section.
If it does not recover automatically, disconnect and reconnect the shadow, which triggers a complete configuration transfer.
Little administration is required (or even possible) on the shadow instance. All configuration of tunnel connections, host mappings, access rules, and so on must be maintained on the master instance; it is replicated to the shadow instance for display purposes only. You may, however, want to modify the check interval (the time between checks of whether the master is still alive) and the takeover delay (the time the shadow waits for the master to come back online before taking over the master role itself).
Keep in mind:
If you want to drop all configuration on the shadow that is related to the master, choose the Reset button, but only
if the shadow is not connected to the master.
Failover Process
The shadow instance checks regularly if the master instance is still alive. Once the check fails, the shadow
instance tries to re-establish the connection to the master instance for a time period specified by the takeover
delay parameter.
● If no connection was possible during this time, the shadow tries to take over the master role. At this point, the master may still be alive, with the problem caused by a network issue between shadow and master. In any case, the shadow instance then tries to establish a tunnel to the given SAP Cloud Platform account. If the original master is still alive (and consequently its tunnel to the cloud account is still active), this attempt is denied and the shadow remains in shadow status, periodically pinging the master and trying to connect to the cloud as long as the master is not reachable.
● Otherwise, the cloud side allows the tunnel to be opened, and the shadow instance therefore knows that the master is indeed down and takes over its role. From this moment, the shadow instance displays the UI of a master instance and allows the usual master operations, for example starting/stopping tunnels, modifying the configuration, and so on.
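The takeover decision described in the two bullets above can be condensed into a small sketch (illustrative only; the real Cloud connector adds timing, retries, and state handling around this decision):

```python
def shadow_step(master_reachable, cloud_grants_tunnel):
    """One decision step of the shadow's failover logic (illustrative sketch)."""
    if master_reachable:
        return "shadow"      # master alive: keep monitoring
    if cloud_grants_tunnel:
        return "master"      # tunnel granted: master is really down, take over
    return "shadow"          # tunnel denied: master's tunnel is still active
```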
Note
Only one shadow instance is supported. Any further shadow instances attempting to connect will be declined
by the master instance.
The master considers a shadow lost if no check/ping is received from that shadow instance during a time interval of three times the check period. Only after this period can another shadow system register itself.
Note
On the master, you can trigger a failover by choosing the Switch Roles button. If the shadow is up, this works as described above; even if the shadow cannot be reached, a role switch of the master can be enforced. Only enforce the switch if you are absolutely sure it is correct.
Related Information
Context
By default, the Cloud connector uses port 8443 for its administration UI. In case this port is blocked by another
process, or if you just want to change it after the installation (on Windows, you can choose a different port during
installation), you can use the changeport tool, provided with Cloud connector version 2.6.0 and higher.
Procedure
1. Change to the installation directory of the Cloud connector. To adjust the port, execute the following
command:
○ Microsoft Windows OS: changeport <desired_port>
○ Linux, Mac OS: ./changeport.sh <desired_port>
2. The tool will inform you about the successful modification of the port.
3. To activate the new port, you need to restart the Cloud connector.
Context
The Cloud connector can trace all network traffic passing through it (HTTP/RFC requests and responses) for support purposes. This traffic data may contain business-critical information or security-sensitive data, such as user names, passwords, address data, credit card numbers, and so on. Thus, by activating the corresponding trace level, a Cloud connector administrator could see business data that he/she is not supposed to see. To prevent this, implement the following four-eyes principle, which is supported by Cloud connector release 1.3.2 and higher.
Once the four-eyes principle is applied, activating a trace level that dumps traffic data will require two separate
users:
● An operating system user on the machine where the Cloud connector is installed;
● An Administrator user of the Cloud connector user interface.
By assigning these two users to two different persons, you ensure that both persons are needed to activate a traffic dump (for example, when a certain problem must be analyzed), and that neither of them can do so alone.
1. Go to directory <scc_install_dir>\scc_config and create a file with name writeHexDump. The owner
of this file needs to be different from the operating system user that runs the Cloud connector process.
Note
Usually, this is the user specified on the Log On tab in the properties of the Cloud connector service (in the Windows Services console). Note that the Local System user should not be used in this case; it is better to use a dedicated OS user for the Cloud connector service.
○ Only the owner of the file, and no other user, should have write permission for the file.
○ The OS user that runs the Cloud connector process needs read-only permissions for this file.
○ Initially, the file should contain a line like allowed=false.
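The resulting runtime check can be sketched as follows (an illustration only; the actual checks performed by the Cloud connector may differ in detail, and the ownership/permission aspect is enforced by the operating system, not by this sketch):

```python
def traffic_trace_allowed(path):
    """Sketch of the four-eyes check on the writeHexDump marker file.

    The real setup additionally relies on OS file ownership: the file is
    owned by a different OS user, so the Cloud connector process can read
    but not change it. This sketch only evaluates the file content.
    """
    try:
        with open(path) as f:
            # a traffic dump is permitted only if the file says allowed=true
            return any(line.strip() == "allowed=true" for line in f)
    except OSError:
        return False     # missing or unreadable file: dumps stay disabled
```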
1.4.1.3.3.5 Monitoring
The cockpit provides a Connectivity view, where an administrator can check the status of the Cloud connector attached to the current account, if any. The view provides information about the Cloud connector ID, version, used Java runtime, high availability setup, and further details. Access is granted to administrators, developers, and support users.
The Cloud connector offers various views for monitoring its activities and state. Choose one of the tabs on the
Monitor screen:
Performance
All requests that went through the Cloud connector to the respective back-ends as specified through access
control take a certain amount of time. You can check the duration of requests in a bar chart. The bar chart either
shows the duration statistics for all virtual hosts or for a selected virtual host. The requests are not shown
individually, but are clustered (assigned to buckets). Each of these buckets represents a time range.
For example, the first bucket contains all requests that took 10ms or less, the second one the requests that took
longer than 10ms, but not longer than 20ms. The last bucket contains all requests that took longer than 5000ms.
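The clustering into buckets can be sketched with a small helper (illustrative only; the text names only the 10ms, 20ms, and 5000ms boundaries, so the full boundary list is an assumption you would adapt):

```python
import bisect

def bucket_index(duration_ms, upper_bounds):
    """Assign a request duration to a bucket; the last bucket is open-ended.

    upper_bounds are the inclusive upper limits of the finite buckets,
    for example [10, 20, ..., 5000].
    """
    # bisect_left keeps values equal to a boundary in the lower bucket,
    # matching "10ms or less" for the first bucket
    return bisect.bisect_left(upper_bounds, duration_ms)
```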
The collection of duration statistics starts as soon as the Cloud connector is operational. At any point you may
delete all of these statistical records using the button Delete All. After that, the collection of duration statistics will
start from scratch.
Note
Deleting means that the list of most recent requests as well as top time consumers (see below) will be cleared.
A table shows recorded requests starting with the most recent requests:
A horizontal stacked bar chart breaks down the duration of the request into 5 parts as per legend. The numbers
shown on the chart sections are milliseconds.
Note
Parts with a duration of less than 1ms are not shown at all.
In the example shown above, the selected request took 25ms, of which the Cloud connector contributed 1ms. Opening a connection took 5ms, processing at the back-end side consumed 7ms, and latency effects accounted for the remaining 12ms; no SSO handling was necessary, so it took no time at all.
This view is largely identical to Most Recent Requests. However, requests are not shown in order of appearance,
but sorted by their duration in descending order. Furthermore, you can delete top time consumers, which affects
neither the most recent requests nor the performance overview.
This section shows a tabular overview of all active and idle connections, aggregated for each virtual host. By
selecting a row (i.e. a virtual host) you can view the details of all active connections as well as a graphical summary
of all idle connections. The graphical summary is an accumulative view of connections based on the time the
connections have been idle.
The maximum idle time is displayed on the rightmost side of the horizontal axis. For any point t on that axis
(representing a time value between 0ms and the maximum idle time), the ordinate is the number of connections
that have been idle for not more than t. You can click inside the graph area to view the respective abscissa t and
ordinate.
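The ordinate described above is a cumulative count, which can be sketched as follows (illustration only, not an actual Cloud connector interface):

```shell
# Counts how many connections have been idle for not more than t ms —
# this is the ordinate plotted by the graphical summary for a point t
# on the horizontal axis. Illustration only.
idle_not_more_than() {
  t=$1; shift
  count=0
  for idle in "$@"; do
    [ "$idle" -le "$t" ] && count=$((count + 1))
  done
  echo "$count"
}

# Four idle connections with idle times 50, 80, 150 and 300 ms:
idle_not_more_than 100 50 80 150 300   # two of them idled for <= 100ms
```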
Hardware Metrics
You can check the current state of critical system resources through pie charts. Furthermore, the history of CPU
and memory usage (recorded in intervals of fifteen seconds) is displayed graphically. You can:
● view the usage at a certain point in time by clicking inside the main graph area, and
● zoom in on a certain excerpt of the historic data through standard click, drag and release of the left mouse
button.
The entire historic data is always visible in the smaller bottom area right below the main graph.
In case you have zoomed in, an excerpt window in the bottom area shows you where you are in the main area with
respect to the entire data. You can:
● drag that window (press left mouse button when inside the window and keep it pressed down while dragging)
or
● position it somewhere else by clicking anywhere inside the bottom area. You can also
● undo zooming, using the button located in the top right corner of the respective graph area.
Monitoring APIs
As a user of the Cloud connector, you might want to integrate monitoring information into the monitoring tool you
use. In the future, the Cloud connector will offer more APIs for that purpose. The APIs currently available are listed
below.
With the health check API, it is possible to recognize that the Cloud connector is up and running. The purpose of
this health check is only to verify that the Cloud connector is not down. It does not check any internal state, nor
tunnel connection states. Thus, it is a quick check, which you can often execute.
A GET request to https://<scc_host>:<scc_port>/exposed?action=ping returns HTTP status 200 if the Cloud connector is up.
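From a script or monitoring tool, the health check can be invoked with curl, for example. Host and port below are placeholders for your installation; -k skips certificate validation and should be dropped once a trusted certificate is installed.

```shell
SCC_HOST=${SCC_HOST:-localhost}   # adjust to your Cloud connector host
SCC_PORT=${SCC_PORT:-8443}        # default administration port

# Maps the HTTP status code to a health verdict: only 200 means "up".
interpret_status() {
  if [ "$1" = "200" ]; then echo UP; else echo DOWN; fi
}

if command -v curl >/dev/null 2>&1; then
  status=$(curl -k -s -o /dev/null -w '%{http_code}' \
    "https://${SCC_HOST}:${SCC_PORT}/exposed?action=ping")
  echo "Cloud connector is $(interpret_status "$status")"
fi
```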
1.4.1.3.3.6 Alerting
You can configure the Cloud connector to send out Emails whenever critical situations occur that may prevent it
from operating flawlessly in the near future. Choose Alerting from the top left navigation area to set up and tailor
alerting to your needs.
Email Configuration
In this section you can specify the list of Email addresses to which alerts should be sent (Send To).
Note
You can assign Email addresses in compliance with RFC 2822. For instance, both john.doe@company.com and
John Doe <j.doe@company.com> are valid Email addresses.
● Optionally, you can enter the sender's Email address (Sent From).
Observation Configuration
In this section you can configure the surveillance of pivotal resources and components of the Cloud connector:
Emails will be sent out as soon as any of the chosen components or resources is deemed to malfunction, or is
considered to be in a critical state.
● High Availability deals with issues that can occur in the context of an active high availability setup, meaning a
shadow system is connected. Whenever a communication problem is detected in this context, an alert is
produced.
● Tunnel Health and Service Channels Health refer to the state of the respective connections. Whenever such a
connection is lost, an alert is triggered.
● An excessively high CPU load over an extended period of time adversely affects performance and may be an
indicator of serious issues that jeopardize the operability of the Cloud connector. The CPU load is monitored
and an alert is triggered whenever the CPU load exceeds and continues to exceed a given threshold
percentage (default is 90%) for more than a given period of time (default is 60 seconds).
● The Cloud connector neither requires nor consumes large amounts of disk space. However, running out of
disk space remains an undesirable circumstance that you should avoid.
Note
We recommend sending out an alert if the disk space falls below a critical value (default is 10 megabytes).
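The CPU load alert rule can be sketched as follows. The threshold and period defaults match the values stated above; this is an illustration, not the Cloud connector's actual implementation.

```shell
# Decides whether to raise a CPU alert: the load must exceed the
# threshold (default 90%) for longer than the configured period
# (default 60 seconds). Illustration only.
should_alert() {
  load=$1; seconds_above=$2
  threshold=${3:-90}; period=${4:-60}
  if [ "$load" -gt "$threshold" ] && [ "$seconds_above" -gt "$period" ]; then
    echo yes
  else
    echo no
  fi
}

should_alert 95 120   # above threshold for 2 minutes: alert
should_alert 95 30    # brief spike only: no alert
```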
1. Check the components or resources that you want to keep under surveillance. The selected components and
resources will be examined every 30 seconds by default.
2. If you wish to change the Health Check Interval, enter the number of seconds of your choice into the respective
field at the bottom.
3. Press Save to change the current configuration.
Alert Messages
The Cloud connector not only sends out alert messages via Email, but also lists them in this section. However, it
does not dispatch the same alert repeatedly. Instead, an informational alert is generated, sent out, and listed as
soon as the respective previously reported issue has been resolved (i.e., can no longer be detected).
Note
Deleting alerts is particularly sensible in the case of informational alerts and alerts that have obviously been
resolved. Deleting alerts that pertain to issues that still occur is futile, as they will reappear.
Context
Starting with version 2.2, the Cloud connector provides an auditor tool. It allows you to verify the integrity of
the available audit log files.
Choose Audit from your account menu and go to section Settings to specify which kind of audit events the Cloud
connector should log at runtime. Currently, you can choose between three different Audit Levels:
Note
We recommend that you switch to All only in case of legal requirements or company policies that demand
logging of more than just security-relevant events.
Audit entries for configuration changes are written for the following different categories:
In the Audit Viewer section, you can first define filter criteria and then display the selected audit entries.
● In the Audit Type field, you can select whether you want to view the audit entries for:
○ only requests that were denied;
○ only requests that were allowed;
○ Cloud connector changes;
○ all of the above.
● In the Pattern field, you can specify a certain string that the detail text of each selected audit entry must
contain. The detail text contains information about the user name, requested resource/URL, and the virtual
<host>:<port>. Wildcards are currently not supported in this field. This feature can help you:
○ Filter the audit log for all requests that a particular HTTP user has made during a certain time frame
These three filter criteria are combined with a logical AND so that all audit entries that match these criteria are
displayed. If you have modified one of the criteria, choose the Refresh button to display the updated selection of
audit events that match the new criteria.
Note
To ensure separation of concerns, we recommend that the operating system administrator and the
SAP Cloud Platform administrator are different persons. Thus, a single person cannot both change the audit log
level and delete all existing audit logs. Additionally, we recommend turning on audit logging at the operating
system level for file operations.
The Check button checks all files that are filtered by the specified date range.
To check the integrity of the audit logs, go to <scc_installation>/auditor. This directory contains an
executable go script file (go.cmd on Microsoft Windows OS and go.sh on other operating systems).
If you start the go file from <scc_installation>/auditor without specifying parameters, it starts the
verification of all available audit logs for the current Cloud connector installation.
The tool is built as a Java application and hence requires a Java runtime for execution. The best way is to specify
JAVA_HOME. Alternatively, include the Java bin directory in the PATH variable so that Java can be executed.
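On Linux, a verification run might look like this (the installation directory and JAVA_HOME below are placeholders for your setup):

```shell
# Hypothetical installation directory; adjust to your setup.
SCC_INSTALLATION=/opt/sap/scc
export JAVA_HOME=${JAVA_HOME:-/usr/lib/jvm/default-java}  # example path

AUDITOR_DIR="$SCC_INSTALLATION/auditor"
if [ -d "$AUDITOR_DIR" ]; then
  # Without parameters, go.sh verifies all available audit logs of the
  # current Cloud connector installation.
  (cd "$AUDITOR_DIR" && ./go.sh)
else
  echo "Auditor directory not found: $AUDITOR_DIR"
fi
```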
In the following example, the Audit Viewer displays all audit entries on level Security, with denied access, for the
time frame between May 28, 00:00:00 and May 28, 23:59:59:
1.4.1.3.3.8 Troubleshooting
Overview
This page provides you with details on how to monitor the state of your open tunnel connections in the Cloud
connector. You can also view different types of logs and traces that can help you troubleshoot connection
problems.
To find a solution for a particular problem or an error you have encountered, you can refer to the Cloud connector
troubleshooting pages. For more information, see Connectivity Support [page 598].
Monitoring
It is possible to view the list of all currently connected applications. To do that, choose your Account from the left
menu and go to section Cloud Connections:
● Application name: The name of the application, as also shown in the cockpit, for your account
● Connections: The number of currently existing connections to the application
● Connected Since: The earliest start time of a connection to this application
● Peer Labels: The name of the application processes, as also shown for this application in the cockpit, for your
account.
Logs
On the Logs tab page, you can find some log files that can help you troubleshoot problems with the internal
operation of the Cloud connector. These logs are intended primarily for SAP Support. They cover both internal
Cloud connector operations and details about the communication between the local and the remote (SAP Cloud
Platform) tunnel endpoint.
● Cloud Connector Loggers adjusts the levels for Java loggers directly related to Cloud connector functionality.
● Other Loggers adjusts the log level for all other Java loggers available in the runtime (which is very rarely
needed). You only need to change the level when requested by SAP Support. It will produce a lot of trace
entries.
● CPIC Trace Level allows you to set the level between 0 and 3 and provides traces for the CPIC-based RFC
communication with ABAP systems.
● When the Payload Trace is activated for an account, all HTTP and RFC traffic crossing the tunnel for that
account through this Cloud connector is traced in files named traffic_trace_<account
id>_on_<landscapehost>.trc.
Note
Use payload and CPIC tracing on Level 3 carefully and only when requested to do so for support reasons. In
particular, the trace may write sensitive information (such as payload data of HTTP/RFC requests and
responses) to the trace files, and thus, present a potential security risk. For this reason the Cloud connector
(effective version 2.2) supports an implementation of a "four-eyes principle" for activating the trace levels that
dump the network traffic into a trace file. When this four-eyes principle is in place, two users are required for
the activation of a trace level that would record traffic data.
For more information about setting this extra security measure, see Securing the Activation of Traffic Traces
[page 552].
In this section, you can view all existing trace files and delete the ones that are no longer needed.
Via the Download/Download All icons you can create a ZIP archive containing one particular trace file or all trace
files and download it to your local file system for convenient analysis of larger trace files.
Note
Trace files currently in use by the Cloud connector cannot be deleted from the UI. Linux OS allows them to be
deleted from the command line, but we recommend that you do not use this option to avoid inconsistencies in
the internal trace management of the Cloud connector.
Once a problem has been identified, you can turn off the trace again by editing the trace and log settings
accordingly.
On this screen, you can use the Refresh button to update the displayed information (this option is also available on
all other screens), for example, because more trace files might have been written since you last updated the
display.
Related Information
Table 271:
To learn about See
How to install a Cloud connector on Microsoft Windows OS Cloud Connector on Microsoft Windows [page 572]
How to install a Cloud connector on Linux OS Cloud Connector on Linux [page 573]
How to create a shadow instance for the Cloud connector High Availability Setup of the Cloud Connector [page 574]
How to administer the Cloud connector Cloud Connector Administration [page 574]
How to securely operate the Cloud connector Guidelines for Secure Operation of Cloud Connector [page 580]
How to handle issues with the Cloud connector Supportability [page 583]
The releases and maintenance Release and Maintenance Strategy [page 583]
Hybrid scenarios with the Cloud connector Process Guidelines for Hybrid Scenarios [page 583]
1.4.1.3.3.9.1 Introduction
The Cloud connector is an on-premise agent that runs in the customer network and takes care of securely
connecting cloud applications, running on SAP Cloud Platform, with services and systems of the customer
network. It is used to implement hybrid scenarios in which cloud applications require point-to-point integration
with on-premise systems.
This document provides a guide for IT administrators on how to set up, configure, securely operate, and protect
the Cloud connector, version 2.x, in productive scenarios.
Additional Information
This document focuses on the operation aspects of the Cloud connector. It does not cover a general overview of
the SAP Cloud Platform and its connectivity service, neither does it address development related questions, such
as how to implement connectivity-enabled applications.
For additional information on specific topics, see the following online resources:
Table 272:
Resource Link
This section describes the hardware and software requirements for installing and running the Cloud connector.
Hardware Requirements
Table 273:
● CPU: single core 3 GHz, x86-64 architecture compatible (minimum); dual core 2 GHz, x86-64 architecture
compatible (recommended)
● Memory (RAM): 1 GB (minimum); 4 GB (recommended)
Software Requirements
Table 274:
Operating System Architecture
Note
An up-to-date list with detailed Cloud connector version information is available in the Prerequisites [page 484]
section.
The browsers that can be used for the Cloud connector Administration UI are the ones supported by SAP UI5.
Currently, these are the following:
An up-to-date list of the supported SAP UI5 browsers can be found here: Browsers for Platforms
The Cloud connector can be downloaded from the Cloud Tools page.
Installation size
To download and install a new Cloud connector server, the following minimum free disk space is required:
● Size of downloaded Cloud connector installation file (ZIP, TAR, MSI files): 50 MB
● Newly installed Cloud connector server: 70 MB
● Total: 120 MB as a minimum
The Cloud connector writes configuration files, audit log files and trace files at runtime. The recommendation is to
accommodate between 1 and 20 GB of disk space for those files.
Trace and log files are written to <scc_dir>/log/ within the Cloud connector root directory. The
ljs_trace.log file contains traces in general, communication payload traces are stored in
traffic_trace_*.trc. They are used for support cases to analyze potential issues. The default trace level is
set to Information, where the amount of written data is in the range of a few KB per day. You can turn off these
traces to save disk space. However, it is not recommended to turn this trace off completely; leave it at the default
settings to allow root cause analysis in case an issue occurs. If the trace level is increased to All, the amount of
data can easily reach several GB per day. We recommend that you only use trace level All for analyzing a
particular issue. The payload trace, however, should normally be turned off and only turned on for specific issues
to support analysis by SAP support.
To comply with the regulatory requirements of your organization and regional laws, the audit log files must be
persisted for a certain period of time for traceability purposes. Therefore, it is recommended to back up the audit
log files regularly from the Cloud connector file system and to keep the backup for a period of time in line with
those rules.
Usually, a customer network is divided into multiple network zones or sub-networks according to the security level
of the contained components. There is, for instance, the DMZ that contains and exposes the external-facing
services of an organization to an untrusted network, usually the Internet, and there is one or multiple other
network zones which contain the components and services provided in the company’s intranet.
Generally, customers can choose the network zone of their network in which to set up the Cloud connector.
Technical prerequisites for the Cloud connector to work properly are:
● Cloud connector must have internet access to the SAP Cloud Platform landscape host, either directly or via
HTTPS proxy.
● Cloud connector must have direct access to the internal systems it shall provide access to. That means, there
must be transparent connectivity between the Cloud connector and the internal system.
Depending on the needs of the project, the Cloud connector can either be set up in the DMZ and operated
centrally by the IT department, or set up in the intranet and operated by the line of business.
Note
The internal network must allow access to the required ports. The specific configuration for opening the
respective port(s) depends on the firewall software used.
The default ports are 80 for HTTP and 443 for HTTPS. For RFC communication, you need to open a gateway
port (default: 33+<instance number>) and an arbitrary message server port. For a connection to a HANA
database (on SAP Cloud Platform) via JDBC, you need to open an arbitrary outbound port in your network. Mail
(SMTP) communication is not supported.
Currently, the Cloud connector supports the following Microsoft Windows OS versions:
This section describes how to install, upgrade, uninstall and start/stop the Cloud connector process on MS
Windows operating systems.
Installation
Detailed documentation on how to install the Cloud connector on Microsoft Windows can be found here:
Installation on Microsoft Windows OS [page 488]
Note
The Windows MSI installer must be used for productive scenarios, as only then is the Cloud connector
registered as a Windows service (SAP HANA Cloud Connector 2.0). Your company policy defines the
privileges to be allowed for service users. Then, adjust the folder/file permissions so that they are manageable
only by a limited-privileged user and system administrators.
Upgrade
Detailed documentation on how to upgrade the Cloud connector on Microsoft Windows can be found here:
Upgrading the Cloud Connector [page 586]
After the installation, the Cloud connector is registered as a Windows service that is configured to start
automatically, so the Cloud connector process restarts automatically after a reboot of the system. You can start
and stop the service via the shortcuts created on the desktop ("Start Cloud Connector 2.0" and "Stop
Cloud Connector 2.0"), or by using the Windows Services manager and looking for the service SAP HANA
cloud connector 2.0.
Once started, the Cloud connector administration UI can be accessed at https://localhost:<port>, where the
default port is 8443 (the port may have been changed during the installation).
Detailed documentation on how to uninstall the Cloud connector on Microsoft Windows can be found here:
Uninstalling the Cloud Connector [page 588]
This section describes how to install, upgrade, uninstall and start/stop the Cloud connector process on Linux
operating systems.
Installation
Detailed documentation on how to install the Cloud connector on Linux can be found here: Installation on Linux
OS [page 490]
Note
For productive scenarios, the Cloud connector Linux RPM installer must be used, as only then the Cloud
connector will be registered as a daemon process.
Upgrade
Detailed documentation on how to upgrade the Cloud connector on Linux can be found here: Upgrading the Cloud
Connector [page 586]
After installing the Cloud connector via RPM manager, the Cloud connector process is started automatically and
registered as a daemon process, which takes care of restarting the Cloud connector automatically after a reboot
of the system.
To start/stop/restart the process explicitly, you can open a command shell and use the following commands,
which require root permissions:
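A small wrapper sketches the typical calls. The daemon name scc_daemon below is an assumption based on current Cloud connector versions; verify it against your installation.

```shell
# Wrapper around the Cloud connector daemon's init script; requires
# root permissions for start/stop/restart. The service name scc_daemon
# is an assumption; check your installation.
scc_ctl() {
  case "$1" in
    start|stop|restart|status)
      service scc_daemon "$1" ;;
    *)
      echo "usage: scc_ctl start|stop|restart|status" ;;
  esac
}

# Example (as root): scc_ctl restart
```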
Detailed documentation on how to uninstall the Cloud connector on Linux can be found here: Uninstalling the
Cloud Connector [page 588]
Context
The Cloud connector can be operated in a high availability mode, in which a master and a shadow instance are
installed. The main instance of the Cloud connector is called the master, and the redundant instance the shadow.
If the master instance goes down, the shadow takes over its role and continues to serve the connectivity with
SAP Cloud Platform. For the shadow instance, a second Cloud connector has to be installed, then configured as a
shadow, and connected to its master. The master instance pushes its whole configuration to the shadow
whenever the configuration of the master is changed. Thus, the shadow instance is kept synchronized with the
master. The shadow pings the master regularly, and if the master is not reachable for a while, a failover happens
and the shadow takes over the role of the master.
Activities
● To learn how to install a failover (shadow) instance, see: Installing a Failover Instance for High Availability
[page 546]
● To learn how to administer master and shadow instances, see: Master and Shadow Administration [page 550]
As the Cloud connector is a security-critical component enabling external access to systems of an isolated
network, similar to a reverse proxy in a DMZ, we recommend that you restrict access to the operating system on
which the Cloud connector is installed to the minimal set of users who administer the system. This minimizes the
risk of unauthorized people accessing the Cloud connector system and trying to modify or damage a running
Cloud connector instance.
To learn all tips and tricks for secure setup, see Recommendations for Secure Setup [page 494]
After a new installation, the Cloud connector provides a self-signed X.509 certificate used for the SSL
communication between the Cloud connector Administration UI running in a Web browser and the Cloud
connector process itself. For security reasons, this certificate should be replaced for productive scenarios with a
certificate trusted by your organization.
To learn in detail how to do this, read this page: Recommended: Replacing the Default SSL Certificate [page 498]
Basic Configuration
The basic configuration steps for the Cloud connector consist of:
Detailed documentation of these two steps can be found here: Initial Configuration [page 504]
You are forced to change the initial password immediately after installation. The Cloud connector itself does not
check the strength of the password, so Cloud connector administrators should choose a strong password that
cannot be guessed easily.
Related Information
The major principle for the connectivity established by the Cloud connector is that the Cloud connector
administrator should have full control over the connection to the cloud, i.e. they should be able to decide if and
when the connection is established or closed.
Using the administration UI, the Cloud connector administrator can connect and disconnect the Cloud connector
to the configured cloud account. Once disconnected, there is no communication possible – neither between the
cloud account and the Cloud connector nor to the internal systems. The connection state can be verified and
changed by the Cloud connector administrator on the Account Dashboard tab of the UI as shown in the following
screen shot:
Note
Bear in mind that once the Cloud connector is freshly installed and connected to a cloud account, still none of
the systems available in the customer network will be accessible to the applications of the related cloud
account. The systems and resources that shall be made accessible must be configured explicitly in the Cloud
connector one by one. For more information, see Configuring Trust between Cloud Connector and On-Premise
Systems [page 578]
As of Cloud connector version 2.2.0, a single Cloud connector instance can be connected to multiple accounts
in the cloud. This is especially useful for customers who need multiple accounts to structure their development or
to stage their cloud landscape into development, test, and production. These customers have the option to use a
single Cloud connector instance for multiple of their accounts. Nevertheless, it is recommended not to use
accounts running productive scenarios and accounts used for development or test purposes within the same
Cloud connector. A cloud account can be added to or deleted from a Cloud connector via the Account
Dashboard, using the Add and Delete buttons (see screenshot above).
A detailed description on how to add, delete, connect or disconnect accounts can be also found here: Managing
Accounts [page 509]
After installing a new Cloud connector in a network, no systems or resources of the network have been exposed to
the cloud yet. You have to configure each system and resource that shall be used by applications of the connected
cloud account. To do this, choose Cloud To On Premise from your account menu and go to tab Access Control:
● Detailed documentation on how HTTP resources are configured can be found here: Configuring Access
Control (HTTP) [page 389]
● Detailed documentation on how RFC resources are configured can be found here: Configuring Access Control
(RFC) [page 438]
We recommend that you narrow the access only to those backend services and resources that are explicitly
needed by the cloud applications. Instead of configuring, for example, a system and granting access to all its
resources, we recommend that you only grant access to the concrete resources which are needed by the cloud
application. For example, define access to an HTTP service by specifying the service URL root path and allowing
access to all its sub-paths.
When configuring an on-premise system, it is possible to define a virtual host and port for the specified system, as
shown in the screenshot below. The virtual host name and port represent the fully qualified domain name of the
related system in the cloud. We recommend that you use the virtual host name/port mapping to prevent
information about the physical machine name and port of an on-premise system, and thus about your internal
network infrastructure, from being exposed to the cloud.
For secure communication between the Cloud connector and the used on-premise systems, it is recommended to
use encrypted protocols, like HTTPS and RFC over SNC, and to set up a trust relationship between the Cloud
connector and the on-premise systems by exchanging certificates.
When using HTTPS as protocol, a trust relationship can be set up by configuring the so-called system certificate
in the Cloud connector. A system certificate is an X.509 certificate which represents the identity of the Cloud
connector instance and is used as a client certificate in the HTTPS communication between the Cloud connector
and the on-premise system. The used on-premise system should be configured to validate the system certificate
of the Cloud connector to ensure that only calls from trusted Cloud connectors are accepted.
A detailed documentation on how to use and configure the system certificate for a Cloud connector can be found
here: Initial Configuration (HTTP) [page 387]
We recommend that you configure LDAP-based user management for the Cloud connector Administration UI so
that only named administrator users can log on to the administration UI. This is important to guarantee
traceability of Cloud connector configuration changes via the Cloud connector audit log. With the default,
built-in Administrator user, it is not possible to identify the physical person who has made a possibly
security-sensitive configuration change in the Cloud connector.
If you have an LDAP server in your landscape, you can configure the Cloud connector to authenticate Cloud
connector administrator users against the LDAP server. Valid administrator users must belong to the user group
named admin or sccadmin. Documentation on how to configure an LDAP server can be found here: Using LDAP
for Authentication [page 543]
Once an LDAP server has been configured for the authentication of the Cloud connector, the default
Administrator user becomes inactive and can no longer be used for logging on to the Cloud connector.
Audit logging is a critical element of an organization’s risk management strategy. The audit log data of the Cloud
connector can be used to alert Cloud connector administrators to unusual or suspicious network and system
behavior. Additionally, the audit log data can provide auditors with information required to validate security policy
enforcement and proper segregation of duties. IT staff can use the audit log data for root-cause analysis following
a security incident.
The Cloud connector provides an auditor tool to view and manage audit log information of the complete record of
access between cloud and Cloud connector, as well as of configuration changes done in the Cloud connector. The
written audit log files are digitally signed by the Cloud connector so that their integrity can be checked. Learn
more about the auditor tool in: Audit Logging in the Cloud Connector [page 559]
Note
We recommend that you switch on audit logging of the Cloud connector permanently in productive scenarios.
● Normally, you should set it to Security (the default configuration value).
● In case of legal requirements or company policies, we recommend that you set it to All. In this way, the
audit log files can be used to detect attacks of, for example, a malicious cloud application that tries to
access on-premise services without permission, or in a forensic analysis of a security incident.
It is further recommended to copy the audit log files of the Cloud connector regularly to an external persistent
storage according to your local regulations. The audit log files can be found in the Cloud connector root directory
under the following location: /log/audit/<account-name>/audit-log_<timestamp>.csv.
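A regular backup of these files can be sketched as a small script, for example scheduled via cron. Both the installation and the backup paths below are placeholders.

```shell
# Archives the audit log directory into a dated tarball. Both paths
# are hypothetical; adjust them to your installation and retention rules.
SCC_ROOT=/opt/sap/scc
BACKUP_DIR=/backup/scc-audit

# Builds the archive file name for a given date stamp.
archive_name() {
  echo "audit-backup-$1.tar.gz"
}

if [ -d "$SCC_ROOT/log/audit" ]; then
  mkdir -p "$BACKUP_DIR"
  tar -czf "$BACKUP_DIR/$(archive_name "$(date +%Y%m%d)")" \
      -C "$SCC_ROOT" log/audit
fi
```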
Currently, the Cloud connector supports basic authentication and principal propagation as user authentication
types towards internal systems. The destination configuration of the used cloud application defines which of these
types is used for the actual communication to an on-premise system through the Cloud connector. For more
information, see: Destinations [page 324]
In case basic authentication is used, the on-premise system must be configured to accept basic authentication
and to provide one or multiple service users. There are no additional steps which are needed in the Cloud
connector for this authentication type.
In case principal propagation is used, the Cloud connector administrator has to explicitly configure trust to those
cloud entities from which user tokens are accepted as valid. This can be done in the Trust view of the Cloud
connector and is described in more detail here: Setting Up Trust [page 513]
The following table summarizes the guidelines and recommendations for a secure setup and operation of the
Cloud connector in a productive scenario.
# Activity Recommendation
1.4.1.3.3.9.9 Monitoring
To verify that a Cloud connector is up and running, the simplest way is to try to access its administration UI. If the
UI can be opened in a Web browser, the Cloud connector process is running.
● On Microsoft Windows operating systems, the Cloud connector process is registered as a Windows service,
which is configured to start automatically after a new Cloud connector installation. If the machine is
rebooted, the Cloud connector process is therefore restarted automatically. You can check the state
with the following command:
sc query "SAP HANA cloud connector 2.0"
The STATE line shows the state of the service.
● On Linux operating systems, the Cloud connector is registered as a daemon process that is restarted
automatically whenever it goes down, for example after a reboot of the whole system. The
daemon state can be checked with:
service scc_daemon status
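The two OS-specific checks above can be wrapped in a single helper, sketched below. The Windows service name is taken from the text; the Linux daemon name scc_daemon is an assumption, so adjust both to your installation.

```shell
#!/bin/sh
# Sketch of a combined status check; nothing is executed until the
# helper is called, so defining it has no side effects.
scc_status() {
  case "$(uname -s)" in
    Linux)
      # Daemon name is an assumption -- verify it on your system.
      service scc_daemon status
      ;;
    CYGWIN*|MINGW*|MSYS*)
      sc query "SAP HANA cloud connector 2.0"
      ;;
    *)
      echo "No status check defined for this OS in this sketch" >&2
      return 1
      ;;
  esac
}
```

Calling scc_status then prints the service or daemon state for the current operating system.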
To verify whether a Cloud connector is connected to a certain cloud account, log on to the Cloud connector
Administration UI and go to the Accounts Dashboard, where the connection state of the connected accounts is
visible, as described in section Connecting and Disconnecting a Cloud Account [page 575].
In case of issues with the Cloud connector, SAP customers and partners can create OSS tickets under the
component BC-MID-SCC.
The general SAP SLAs with regard to OSS processing times also apply to SAP Cloud Platform and the Cloud
connector. To avoid unnecessary question/answer cycles in the support case, we recommend that you download
the logs of the corresponding Cloud connector, using the Download button in the Logs view, and attach the
respective log file(s) to the OSS ticket directly when creating it.
If the issue is easily reproducible, reproduce it at log level All before creating the log archive.
As for all components of SAP Cloud Platform, new releases of the Cloud connector are made available on the Cloud
Tools page. As SAP Cloud Platform releases in a bi-weekly cycle, new releases of the Cloud connector could occur
every other week, although actual releases will be less frequent (new releases are shipped when new features
or important bug fixes are to be delivered).
Cloud connector versions follow the <major>.<minor>.<micro> versioning schema. Within a major version, the
Cloud connector stays fully compatible. Within a minor version, the Cloud connector stays with the same
feature set, and higher minor versions usually support additional features compared to lower minor versions.
Micro versions are increased to release patches of a <major>.<minor> version in order to deliver bug fixes.
For each supported major version of the Cloud connector, only one <major>.<minor>.<micro> version will be
provided and supported on the Cloud Tools page. This means that users have to upgrade their existing Cloud
connectors in order to get a patch for a bug or to make use of new features.
New versions of the Cloud connector are announced in the Release Notes of SAP Cloud Platform. We
recommend that Cloud connector administrators regularly check the release notes for Cloud connector updates.
New versions of the Cloud connector can be applied by using the Cloud connector upgrade capabilities. For more
information, see Upgrading the Cloud Connector [page 586].
Note
We recommend that you apply an upgrade first in the Cloud connector test landscape to validate that the
running applications are working, and then continue with the productive landscape.
When updates are applied on the cloud side, operational continuity of existing Cloud connectors and their
connections is assured by the platform; that is, users do not have to perform manual actions in the Cloud
connector when the cloud side gets updated.
The following chapter provides process guidelines that help you to manage productive hybrid scenarios, in which
applications running on SAP Cloud Platform require access to on-premise systems.
To get an overview of the cloud and on-premise landscape relevant for your hybrid scenario, we recommend
that you document the cloud accounts in use, their connected Cloud connectors, and the on-premise back-end
systems in landscape overview diagrams. Document the account names, the purpose of the accounts (dev, test,
prod), information about the Cloud connector machines (hosts, domains), the URLs of the Cloud connectors,
and possibly further details in the landscape overview document.
It is recommended to document which users have administrator access to the cloud accounts, to the Cloud
connector operating system, and to the Cloud connector Administration UI.
An example of such administrator role documentation could look like the following sample table:
Table 275:
Resource john@acme.com marry@acme.com pete@acme.com greg@acme.com
CA Dev2 X
CA Test X X
CA Prod X
It is recommended to create and document separate email distribution lists for both the cloud account
administrators and the Cloud connector administrators.
Table 276:
Landscape Distribution List
It is recommended to define and document mandatory project and development guidelines for your SAP Cloud
Platform projects. An example of such a guideline could look like the following:
For every SAP Cloud Platform project of your organization, the following requirements are mandatory:
It is recommended to define and document the process of how to set a cloud application live and how to configure
needed connectivity for such an application.
1. Transferring application to production: This process defines the steps which are necessary for transferring an
application to the productive status on the SAP Cloud Platform.
2. Application Connectivity: This process defines the steps which are necessary to add a connectivity
destination to a deployed application for connections to other resources in the test or productive landscape.
3. Cloud Connector Connectivity: This process defines the steps which are necessary to add an on-premise
resource to the Cloud connector in the test or productive landscapes to make it available for the connected
cloud accounts.
4. On-premise System Connectivity: This process defines the steps which are necessary to set up a trust
relationship between an on-premise system and the Cloud connector and to configure user authentication
and authorization in the on-premise system in the test or productive landscapes.
5. Application Authorization: This process defines the steps which are necessary to request and assign an
authorization that is available inside the SAP Cloud Platform application to a user in the test or productive
landscapes.
6. Administrator Permissions: This process defines the steps which are necessary to request and assign the
administrator permissions in a cloud account to a user in the test or productive landscape.
Choose one of the procedures listed below to upgrade your Cloud connector depending on your operating
system. If you follow these steps, the previous settings and configurations will be automatically preserved.
Note
Upgrade is supported only for installer variants. See Installing the Cloud Connector [page 483].
If you have a single-machine Cloud connector installation, a short downtime is unavoidable during the upgrade
process. However, if you have set up a master and a shadow instance, you can perform the upgrade without
downtime by executing the following procedure:
Result: Both instances have now been upgraded without connectivity downtime and without configuration loss.
For more information, see Installing a Failover Instance for High Availability [page 546].
1. Uninstall the Cloud connector as described in Uninstalling the Cloud Connector [page 588].
2. Install the Cloud connector again in the same directory. For more information, see Installation on
Microsoft Windows OS [page 488].
3. Before accessing the administration UI again, make sure to clear your browser cache in order to avoid
unpredictable behavior due to the upgraded UI.
Linux OS
1. Upgrade the installed package:
rpm -U com.sap.scc-ui-<version>.rpm
2. Before accessing the administration UI again, make sure to clear your browser cache in order to avoid
unpredictable behavior due to the upgraded UI.
Sometimes it becomes necessary to update the Java VM used by the Cloud connector, for example because of
expired SSL certificates contained in the JVM, bug fixes, and so on. If you make the replacement in the same
directory, simply shut down the Cloud connector before upgrading the JVM and start it again afterwards.
If you change the installation directory of the JVM, this section describes a safe way to make the Cloud connector
use it. Make sure the JVM has been installed successfully. Then execute the following steps, depending on the
operating system:
On Windows
Note
The bin subdirectory must not be part of the JavaHome value.
If that value does not yet exist, you may create it here with a type of "String Value" (REG_SZ), and then specify
the full path of the Java installation directory, for example: C:\sapjvm_7.
On Linux
After you have executed the above steps, the Cloud connector should be running again and should have picked up
the new Java version during startup. You can verify this by logging in to the Cloud connector with your favorite
browser, opening the About dialog, and checking that the field <Java Details> shows the version number and
build date of the new Java VM. After you have verified that the new JVM is indeed used by the Cloud connector,
you can delete or uninstall the old JVM.
Context
If you have installed an installer variant of the Cloud connector, follow the steps below according to your
operating system.
Note
For uninstalling a developer version, proceed as described in section Portable Variants.
Microsoft Windows OS
Linux OS
rpm -e com.sap.scc-ui
Caution
Bear in mind that this command will also remove the configuration files.
Mac OS X
There is no installer variant for Mac OS X, only a portable one (see below).
Portable Variants
If you have installed a portable version (zip or tgz archive) of the Cloud connector, just remove the directory in
which you have originally extracted the Cloud connector archive.
Note
This procedure is relevant for Microsoft Windows OS, Linux OS and Mac OS X Portable variants.
Related Information
Find quick answers to the most common questions about the Cloud connector.
Is the Cloud connector used to send data from on-premise systems to SAP Cloud Platform or the
other way around?
The connection is opened from on-premise to the Cloud, but is then used in the other direction.
This concept was created because an on-premise system, in contrast to a Cloud system, is normally located
behind a restrictive firewall and its services are not accessible through the Internet. It is a widely used
pattern, often referred to as reverse invoke proxy.
Is the connection between the SAP Cloud Platform and the Cloud connector encrypted?
Yes, TLS encryption is used by default for the tunnel between SAP Cloud Platform and Cloud connector.
If used properly, TLS is a highly secure protocol; it is the industry standard for encrypted communication
and is used, for example, as the secure channel in HTTPS.
Keep your Cloud connector installation updated, and we will make sure that no weak or deprecated ciphers are
used for TLS.
What is the oldest version of SAP Business Suite which is compatible with the Cloud connector?
The Cloud connector can connect SAP Business Suite systems of version 4.6C and newer.
Table 277:
6 7 8
Note
Cloud connector versions 2.8 and above may have problems with ciphers in Google Chrome if you use
JVM 7. For more information, read this SCN article.
Please check Configuring User Store in the Cloud Connector [page 533].
Is the Cloud connector sufficient to connect the SAP Cloud Platform to an SAP ABAP backend or
is HCI needed?
It depends on the scenario: For pure point-to-point connectivity calling on-premise functionality such as BAPIs,
RFCs, or OData services exposed via on-premise systems, the Cloud connector can suffice.
However, if advanced functionality is required, for example n-to-n connectivity as an integration hub, SAP HANA
Cloud Integration – Process Integration (HCI-PI) would be the suitable solution. HCI could then make use of the
Cloud connector as a communication channel.
This highly depends on the application that is using the Cloud connector tunnel. If the tunnel is not currently
used but still connected, a few bytes per minute are transferred to keep the connection alive.
The response will be lost. The Cloud connector only provides tunneling, it does not store and forward data in case
of network issues.
For productive instances, we recommend installing the Cloud connector on a single purpose machine. This is
relevant for security. Find more information in our Security Whitepaper .
In theory, you need only one server, but we recommend at least 3 servers with the following purposes:
● Development
● Production Master
● Production Shadow
Note
Production Master and Shadow should NOT run as VMs on the same physical machine. This would remove
the essential redundancy that is needed to guarantee high availability. Additionally, a QA (Quality Assurance)
instance is a useful extension. For Disaster Recovery, you will need 2 other instances (Master and
Shadow).
You can find hardware requirements and recommendations here: Prerequisites [page 484].
Can I send push messages from an on-premise system to the SAP Cloud Platform through the
Cloud connector?
We currently support:
Additionally, the portable variant of the Cloud connector is available for Mac OS X (10.7, 10.8, 10.9, 10.10, and
10.11), but there is no installer variant for any Mac OS X (or macOS) version that could be used in productive
scenarios.
Find all hardware and software requirements here: Prerequisites [page 484].
We currently support only 64-bit operating systems running on an x86-64 processor (also known as x64, x86_64,
or AMD64).
Find all hardware and software requirements here: Prerequisites [page 484].
Yes, you should be able to connect almost every system that supports the HTTP protocol to SAP Cloud
Platform, for example, the Apache HTTP Server, Apache Tomcat, Microsoft IIS, or Nginx.
Administration
Yes, find more details here: Audit Logging in the Cloud Connector [page 559].
No, currently there is only one role which allows complete administration of the Cloud connector.
Yes, to enable this, you have to configure an LDAP server. Configuring LDAP is described here: Using LDAP for
Authentication [page 543].
How can I reset the Cloud connector's administrator password when not using LDAP for
authentication?
Visit https://tools.hana.ondemand.com/#cloud to download the portable version of the Cloud connector,
extract the users.xml file from its config directory into the config directory of your Cloud connector
installation, then restart the Cloud connector.
The username and password are then set back to the default values.
Alternatively, you could edit the configuration file manually, but we highly recommend using the solution provided
above, because it’s failsafe.
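A sketch of that reset on a Linux installation. Every path below is a placeholder, and the first block merely fabricates a stand-in for the downloaded portable archive and a fake installation so the sketch is self-contained; in real use you would work with the actual downloaded archive and your actual installation directory.

```shell
#!/bin/sh
# Demo setup: stand-in for the downloaded portable archive and a fake
# installation with a modified users.xml (remove these lines in real use).
mkdir -p /tmp/portable/config /tmp/scc/config
printf '<users/>\n' > /tmp/portable/config/users.xml
tar -czf /tmp/portable.tgz -C /tmp/portable config/users.xml
printf '<users><admin/></users>\n' > /tmp/scc/config/users.xml

# Reset step: overwrite the installation's users.xml with the pristine
# one from the portable archive; afterwards, restart the Cloud connector.
tar -xzf /tmp/portable.tgz -C /tmp/scc config/users.xml
```

After this, the users.xml in the installation directory again contains the default content, so the default username and password apply on the next start.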
There are three folders containing all relevant configuration information in the installation directory of your Cloud
connector installation:
● config
● config_master
● scc_config
As the layout of the configuration files may change between different Cloud connector versions, it is not
recommended to restore a configuration backup of a Cloud connector 2.x installation into a 2.y installation.
Yes, you can create an archive file of the installation directory to create a full backup, but please note the following
before you restore:
● If you restore the backup on a different host, the UI certificate won’t be valid anymore.
● Before you restore the backup, you should perform a "normal" installation and then replace the files. This
registers the Cloud connector with your operating system's package manager.
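Such a full backup can be as simple as archiving the installation directory, as sketched below. SCC_INSTALL is an assumption; the mkdir line only makes the sketch self-contained and stands in for a real installation.

```shell
#!/bin/sh
# SCC_INSTALL is an assumption -- point it at your real installation.
SCC_INSTALL="${SCC_INSTALL:-/tmp/scc_install}"
# Demo setup standing in for a real installation (remove in real use):
mkdir -p "$SCC_INSTALL/config" "$SCC_INSTALL/config_master" "$SCC_INSTALL/scc_config"

# Archive the whole installation directory with a timestamp.
STAMP="$(date +%Y%m%d-%H%M%S)"
tar -czf "/tmp/scc-backup-$STAMP.tgz" \
  -C "$(dirname "$SCC_INSTALL")" "$(basename "$SCC_INSTALL")"
ls /tmp/scc-backup-*.tgz
```

Remember the restrictions above: restore only into the same Cloud connector version, and re-install first so the package manager registration is correct.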
This user needs to be allowed to open the tunnel and have the certificates generated that are used for mutual
trust later on.
The user is not part of the certificate identifying the Cloud connector.
The only remnants are in the Cloud connector UI and the SAP Cloud Platform cockpit: the username is still
displayed as the one who did the initial configuration (even though the user may have left the company).
What happens to a Cloud connector connection if the user who created the tunnel leaves the
company?
This does not affect the tunnel, even if you restart the Cloud connector.
How long does SAP provide support for distinct Cloud connector versions?
Each Cloud connector version will be supported for 12 months, which means it is guaranteed that the cloud side
stays compatible with those versions.
After that time frame, compatibility is no longer guaranteed and interoperability could be dropped. Furthermore,
after an additional 3 months, the next feature release published after that period will no longer support
an upgrade from the deprecated version as a starting release.
SAP Cloud Platform customers can purchase accounts and deploy applications into these accounts.
Additionally, there are users: they have a password and are able to log into the cockpit and manage all accounts
they have permission for.
● A single account can be managed by multiple users, for example, if your company has several administrators.
● A single user can manage multiple accounts, for example, if you have multiple applications and want them (for
isolation reasons) to be split over multiple accounts.
For trial users, this is typically your username followed by the suffix “trial”:
Does the Cloud connector work with SAP Cloud Platform Cloud Foundry?
The Cloud connector is not yet supported in SAP Cloud Platform Cloud Foundry.
How do I bind multiple Cloud connectors to one SAP Cloud Platform account?
As of Cloud connector version 2.9, it is possible to connect multiple Cloud connectors to a single account,
thus allowing you to address multiple separate corporate network segments.
Those Cloud connectors are distinguishable via their location ID, which then needs to be provided in the
destination configuration on cloud side.
Note
Location IDs that have been provided in previous versions of the Cloud connector will be dropped during an
upgrade to ensure that running scenarios are not disturbed.
No, we currently support the HTTP and RFC protocols (access control) only.
Additionally, you can use the Cloud connector as a JDBC or ODBC proxy to access the HANA DB instance of your
SAP Cloud Platform account (service channel).
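For illustration only: with such a service channel opened on a local port of the Cloud connector machine, an on-premise JDBC client would point at the Cloud connector host rather than at the cloud directly. Host and port below are placeholders, not values from this guide:

```
jdbc:sap://<cloud-connector-host>:<local-service-channel-port>
```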
No, the audit log only logs access from SAP Cloud Platform to on-premise, nothing else.
Troubleshooting
How do I fix the “Could not open Service Manager” error message?
Most likely, you see this error message because of missing administrator privileges.
Fix: Right-click the shortcut and select Run as administrator.
If you don’t have administrator privileges on your machine you can use the portable variant of the Cloud
connector.
Note
The portable variants of the Cloud connector are meant for non-productive scenarios only.
JAVA_HOME must point to the installation directory of your JRE while PATH must contain the bin folder inside the
installation directory of your JRE.
When I try to open the Cloud connector UI, Google Chrome opens a Save as Dialog, Firefox
displays some cryptic signs and Internet Explorer shows a blank page, how do I fix this?
This happens when you try to access the Cloud connector over HTTP instead of HTTPS, which is the default
behavior of most browsers, if you don’t specify a protocol.
Adding “https://” to the beginning of your URL should fix the problem. For localhost, you can use this URL string:
https://localhost:8443/.
Overview
This section outlines an alternative approach for technical connectivity between the cloud and on-premise
systems, using a reverse proxy. It also discusses the pros and cons of this method compared to using the
Cloud connector.
Features
An alternative approach to the SSL VPN solution provided by the Cloud connector is to expose on-premise
services and applications to the Internet via a reverse proxy. For this method, a reverse proxy is typically
set up in the "demilitarized zone" (DMZ) subnetwork of a customer, which:
● Acts as a mediator between SAP Cloud Platform and the on-premise services;
● Provides the services of an Application Delivery Controller (ADC) in order, for example, to encrypt, filter,
route, or introspect the inbound traffic.
The figure below shows the minimal overall network topology of this approach. For more information, see
Technical Connectivity Guide .
On-premise services accessible via a reverse proxy are then callable from SAP Cloud Platform like other HTTP
services available on the Internet. When you use destinations to call those services, make sure that the
configuration of the ProxyType parameter is set to Internet.
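For example, a destination for a service exposed through a reverse proxy might look like the following sketch (name, host, path, and authentication type are hypothetical):

```
Name=backend-via-reverse-proxy
Type=HTTP
URL=https://reverse-proxy.example.com/sap/my-service
ProxyType=Internet
Authentication=NoAuthentication
```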
Depending on your scenario, you can benefit from the reverse proxy. An example is the required network
infrastructure (such as a reverse proxy and ADC services): since it already exists in your network landscape, you
can reuse it to connect to SAP Cloud Platform. In this case, there would be no need to set up and operate new
components on your (customer) side.
Disadvantages
● The reverse proxy approach does not prevent the exposed services from being generally accessible via the
Internet, which makes them vulnerable to attacks from anywhere in the world. Denial-of-Service attacks in
particular are possible and difficult to protect against. Therefore, protection against potential attacks requires
the highest security standards to be implemented in the DMZ and reverse proxy. For the productive
deployment of a hybrid cloud/on-premise application, this approach usually requires intense involvement of
the customer's IT department and a longer period of implementation.
● If the reverse proxy is set to allow filtering or restriction of accepted source IP addresses, you can only set one
single IP address to be used for all SAP Cloud Platform outbound communications.
Although it filters any callers that are not running on the cloud, the reverse proxy does not exclusively restrict
the access to cloud applications belonging to the related customer. Basically, any application running on the
cloud would pass this filter.
● The SAP-proprietary RFC protocol is not supported, so a cloud application cannot directly call an on-premise
ABAP system without application proxies on top of ABAP.
Note
These drawbacks do not exist when using the Cloud connector. Since it establishes the SSL VPN tunnel to SAP
Cloud Platform via a reverse invoke approach, there is no need to configure the DMZ or external firewall of a
customer network for inbound traffic. Attacks from the Internet are not possible. With its simple setup and fine-
grained access control of exposed systems and resources, the Cloud connector allows a high level of security
and fast productive implementation of hybrid applications. It also supports multiple application protocols,
such as HTTP and RFC.
What is this?
This section contains troubleshooting information related to SAP Cloud Platform Connectivity and the Cloud
connector. It provides solutions to general connectivity issues as well as to specific on-demand to on-premise
cases.
Locate the problem or error you have encountered and follow the steps recommended in the solution.
If you cannot find a solution to your issue, use the following template to provide specific, issue-relevant
information. This helps SAP Support to resolve your problem case.
You can submit this information by creating a customer ticket in the SAP CSS system. Use the following
components:
In case you experience a more serious issue that cannot be resolved with traces and logs only, access to the Cloud
connector is needed by support. In such a situation, follow the instructions of the notes below:
● For instructions on providing access to the administration UI via a browser, check SAP Note 592085 .
● For instructions on providing SSH access to the operating system of the Linux machine on which the
connector is installed, check SAP Note 1275351 .
Related Information
Introduction
SAP Cloud Platform Enterprise Messaging (Beta) is a cloud-based messaging framework that enables you to
connect applications, services, and devices across different technologies, platforms, and clouds.
The service scales to millions of messages per second in real time, and you can send and receive messages
reliably using open standards and protocols. You can decouple application logic, develop microservices, and
support event-driven architectures.
Benefits
Heterogeneous Integration
You can connect different technology platforms using Java apps and the Enterprise Messaging service.
Features
● Efficiently manage your messaging hosts and obtain an overview of the number of connections and
messages.
● Create, manage, and review application bindings and messaging hosts.
Enterprise Messaging is currently only available as part of a guided beta program. Experience the benefits of this
service as part of SAP’s beta delivery and discuss your individual requirements, scenarios, and ideas with us.
Share your experiences and leverage your influence to shape future development.
In a typical IT landscape, various systems may initially interact and communicate with each other directly, at
optimum efficiency. However, an IT landscape is never static. With time, more systems based on different
technologies are added, and as a result the complexity of the interactions also increases. Managing this
complex web of applications and systems requires time, money, and resources and, in the mid-term, it is not
efficient or scalable without the introduction of a messaging architecture.
Messaging-Oriented Architecture
Enterprise Messaging simplifies connectivity. SAP's messaging-as-a-service solution in the Cloud enables you to
connect the applications in your landscape through messaging hosts. A messaging host acts as an intermediary
between the communicating applications.
The following diagram illustrates this messaging-oriented architecture with a central messaging host.
The messaging host uses a queue to enable point-to-point communication between two applications. An
application sends a message to a specific queue in a messaging host. The intended receiver, in this setup, is
subscribed to that same queue and can read any messages from it as soon as they have been published by the
sender.
Administrative Tasks
As an administrator, you can perform the following tasks to set up messaging in your landscape:
1. Manage messaging hosts. For more information, see Managing Messaging Hosts [page 602].
2. Manage and create application bindings to a messaging host. For more information, see Managing Application
Bindings [page 605].
Access SAP Cloud Platform Enterprise Messaging (Beta) in SAP Cloud Platform cockpit to manage messaging
hosts.
Prerequisites
You have an SAP Cloud Platform account with messaging hosts provisioned in it.
A messaging host mediates communication between applications. In this setup, the messaging host eliminates
the mutual awareness that applications must have of each other to exchange messages. Effectively, the
messaging host decouples communication between the sending and receiving applications.
You can use the following procedure to access and manage the messaging hosts provisioned in your SAP Cloud
Platform account:
Procedure
For more information about accessing services, see Services in the Cockpit [page 38].
3. Choose Enterprise Messaging.
4. Choose Messaging Hosts in the navigation area.
○ Edit the description of a messaging host by selecting it and choosing Edit Description.
○ Create and manage queues in your messaging host. For more information, see Managing Queues [page
603].
Access SAP Cloud Platform Enterprise Messaging (Beta) in SAP Cloud Platform cockpit to manage queues in
your messaging hosts.
Context
Messaging hosts use queues to enable point-to-point communication between two Java applications. A queue can
be configured to deliver messages in different ways when multiple applications are connected to it. This property
of a queue is called access type and it can be one of the following values:
● Exclusive: Only one application can receive messages from an exclusive queue. This is typically the first
application that connects to the queue.
● Non-Exclusive: Multiple applications can connect to a non-exclusive queue. Each application is serviced in a
round-robin fashion to provide load-balancing.
Procedure
For more information about accessing services, see Services in the Cockpit [page 38].
3. Choose Enterprise Messaging.
4. Choose Messaging Hosts in the navigation area.
5. Select a messaging host by choosing the corresponding link.
6. Choose Queues in the navigation area.
You can also search for a queue by typing its name in the Search text box. The list of queues is filtered to
match the pattern you have entered.
7. To create a new queue, perform the following substeps.
a. Choose Create Queue.
Access SAP Cloud Platform Enterprise Messaging (Beta) in SAP Cloud Platform cockpit to create and manage
application bindings to messaging hosts.
Context
To complete the messaging setup, it is important to create application bindings that connect Java applications to
messaging hosts. After an application binding has been created, applications can publish messages to any queue
in the messaging host or subscribe to receive messages that have been published to any queue in the messaging
host.
You can use the following procedure to create and manage an application binding.
Procedure
For more information about accessing services, see Services in the Cockpit [page 38].
3. Choose Enterprise Messaging.
4. Choose Application Bindings in the navigation area.
Note
At least one Java application must be available to create an application binding. For more information
on developing Java applications, see Developing Java Applications [page 1034].
Note
This name must be unique across all application bindings associated with the selected Java
application.
e. Choose Save.
The SAP Cloud Platform Document service provides an on-demand content repository for unstructured or semi-
structured content.
Overview
Applications access it using the OASIS standard protocol Content Management Interoperability Services (CMIS).
Java applications running on SAP Cloud Platform can easily consume the document service using the provided
client library. A JavaScript client library is currently being developed. Since the document service is exposed using
a standard protocol, it can also be consumed by any other technology that supports the CMIS protocol.
Features
The document service is an implementation of the CMIS standard and is the primary interface to a reliable and
safe store for content on SAP Cloud Platform. It covers the following aspects:
● The storage and retrieval of files, which the file system often handles on traditional platforms
● The organization of files in a hierarchical folder structure
● The association of metadata with the content and the ability to read and write metadata
● A query interface based on this metadata using a query language similar to SQL
● Managing access control (access control lists)
● Versioning of content
● A powerful Java API (Apache Chemistry OpenCMIS)
● Streaming support to also handle large files efficiently
● Files are always encrypted (AES-128) before they are stored in the document service.
● A virus scanner can be activated to scan files for viruses during file uploads (write accesses). For performance
reasons, read-only file accesses are not scanned.
● Access from applications running internally on SAP Cloud Platform or externally
● A domain model and service bindings that can be used by applications to work with a content management
repository
● An abstraction layer for controlling diverse document management systems and repositories using Web
protocols
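The metadata-based query interface mentioned above uses a language similar to SQL, as defined by the CMIS standard. A hypothetical query over standard CMIS properties could look like this:

```
SELECT cmis:name, cmis:createdBy
FROM cmis:document
WHERE cmis:name LIKE 'invoice%'
```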
CMIS provides a common data model covering typed files and folders with generic properties that can be set or
read. There is a set of services for adding and retrieving documents (called objects). CMIS defines an access
control system, a checkout and version control facility, and the ability to define generic relations. CMIS defines
several protocol bindings: a Web Services binding based on WSDL and Simple Object Access Protocol (SOAP),
and RESTful AtomPub and browser (JSON) bindings.
Since the SAP Cloud Platform Document service API includes the OpenCMIS Java library, applications can be built
on SAP Cloud Platform that are independent of a specific content repository.
Restrictions
The following features, which are defined in the OASIS CMIS standard, are supported with restrictions:
● Multifiling
● Policies
● Relationships
● Change logs
● For searchable properties, a maximum of 100 values with a maximum of 5,000 characters is allowed.
● For non-searchable properties, a maximum of 1,000 values with a maximum of 50,000 characters is allowed.
If you expect to reach either of these limits, we recommend that you open a support ticket on component BC-
NEO-ECM-DS and describe your scenario.
Related Information
Use the SAP Cloud Platform Document service to store unstructured or semi-structured data in the context of
your SAP Cloud Platform application.
Introduction
Many applications need to store and retrieve unstructured content. Traditionally, a file system is used for this
purpose. In a cloud environment, however, the usage of file systems is restricted. File systems are tied to
individual virtual machines, but a Web application often runs distributed across several instances in a cluster. File
systems also have limited capacity.
The document service offers persistent storage for content and provides additional functionality. It also provides a
standardized interface for content using the OASIS CMIS standard.
Related Information
The following sections describe the basic concepts of the SAP Cloud Platform Document service.
In the code and the code samples, ecm is used to refer to the document service. Therefore, for example, the
document service API is called ecm.api.
The SAP Cloud Platform Document service is exposed using the OASIS standard protocol Content Management
Interoperability Service (CMIS).
The CMIS standard defines the protocol level (SOAP, AtomPub, and JSON based protocols). The SAP Cloud
Platform provides a document service client API on top of this protocol for easier consumption. This API is the
Open Source library OpenCMIS provided by the Apache Chemistry Project.
Related Information
To manage documents in the SAP Cloud Platform Document service, you need to connect an application to a
repository of the document service.
A repository is the document store for your application. It has a unique name with which it can later be accessed,
and it is secured using a key provided by the application. Only applications that provide this key are allowed to
connect to this repository.
Note
As a repository has a certain storage footprint in the back end, the total number of repositories for each
account is limited to 100. When you create repositories, for example, for testing, make sure that these
repositories are deleted after the test is finished to avoid reaching the limit. Should your use case require more
than 100 repositories per account, create a support ticket.
Note
Due to the tenant isolation in SAP Cloud Platform, the document service cockpit cannot access or view
repositories you create in SAP Document Center, or vice versa.
You can manage a repository programmatically from your application. In this way, you can create, edit, delete,
and connect to the repository.
Related Information
You can create a repository with the createRepository(repositoryOptions) method of the EcmService
(document service).
Procedure
Use the createRepository(repositoryOptions) method and define the properties of the repository.
The following code snippet shows how to create a repository where uploaded files are scanned for viruses:
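A minimal sketch of such a call, based on the EcmService and RepositoryOptions classes used in this guide. The visibility value and the virus-scanner setter name are assumptions and may differ in your SDK version:

```java
import com.sap.ecm.api.EcmService;
import com.sap.ecm.api.RepositoryOptions;
import com.sap.ecm.api.RepositoryOptions.Visibility;

// ecmSvc: the EcmService instance obtained via JNDI lookup
RepositoryOptions options = new RepositoryOptions();
options.setUniqueName("com.foo.MyRepository");       // unique name with package semantics
options.setRepositoryKey("my_super_secret_key_123"); // secret key, min. 10 characters
options.setVisibility(Visibility.PROTECTED);         // assumed enum value
options.setVirusScannerEnabled(true);                // assumed setter: scan files on upload
ecmSvc.createRepository(options);
```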
Related Information
Context
There are many ways to connect to a repository. For more information, see the API Documentation [page 1135]
and Reuse OpenCmis Session Objects in Performance Tips (Java) [page 652].
Procedure
Once you are connected to the repository, you get an OpenCMIS session object to manage documents and
folders in the connected repository.
Probably the most common use case is to create documents and folders in a repository. Every repository in CMIS
has a root folder. Once you have received a Session, you can retrieve the root folder using the following syntax:
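For example, with OpenCMIS (openCmisSession is the session obtained from EcmService.connect()):

```java
// the root folder always exists; it is retrieved, not created
Folder root = openCmisSession.getRootFolder();
```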
Once you have a root folder, you can create other folders or documents. In the CMIS domain model, all CMIS
objects are typed. Therefore, you have to provide type information for each object you create. The types carry the
metadata for an object. The metadata is passed in a property map. Some properties are mandatory, others are
optional. You have to provide at least an object type and a name. For properties defined in the standard,
OpenCMIS has predefined constants in the PropertyIds class.
To create a document with content, provide a map of properties. In addition, create a ContentStream object
carrying a Java InputStream plus some additional information for the content, like Content-Type and file
name.
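A sketch of these two steps with OpenCMIS (openCmisSession and the root folder are taken from the preceding steps; the name and content values are illustrative):

```java
Map<String, Object> properties = new HashMap<String, Object>();
properties.put(PropertyIds.OBJECT_TYPE_ID, "cmis:document");
properties.put(PropertyIds.NAME, "HelloWorld.txt");

byte[] content = "Hello World!".getBytes();
InputStream stream = new ByteArrayInputStream(content);
// file name, length, MIME type (Content-Type), and the actual data
ContentStream contentStream = openCmisSession.getObjectFactory()
        .createContentStream("HelloWorld.txt", content.length, "text/plain", stream);

Document myDocument = root.createDocument(properties, contentStream, VersioningState.NONE);
```

After creation, the returned Document object gives you access to the generated ID: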
String id = myDocument.getId();
Getting Children
To get the children of a folder, you can use the following code:
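For example, with OpenCMIS (root is assumed to be a Folder, such as the root folder from above):

```java
ItemIterable<CmisObject> children = root.getChildren();
for (CmisObject child : children) {
    System.out.println(child.getName() + " (" + child.getType().getId() + ")");
}
```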
Retrieving a Document
You can also retrieve a document using its path with the getObjectByPath() method.
Tip
We recommend that you retrieve objects by ID and not by path. IDs are kept stable even if the object is moved.
Retrieving objects by IDs is also faster than retrieving objects by paths.
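For example, with the session from the previous steps (the ID and path values are illustrative):

```java
// by ID (recommended: stable even if the object is moved)
CmisObject byId = openCmisSession.getObject(id);
// by path
CmisObject byPath = openCmisSession.getObjectByPath("/myFolder/HelloWorld.txt");
```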
Before your application can use the document service, the application must be able to access and consume the
service.
There are several ways in which your application can access the document service:
● Any application deployed on SAP Cloud Platform as a Java Web application can consume the document
service.
● During the development phase, you can also use the document service in the SAP Cloud Platform local
runtime.
As a prerequisite for local development, you need an installation of MongoDB on your machine. See
Creating a Sample Application (Java) [page 616].
● You can also use the document service from an application running outside SAP Cloud Platform.
This requires a special application running on SAP Cloud Platform acting as a bridge between the external
application and the document service. This application is called a "proxy bridge". For more information, see
Building a Proxy Bridge [page 621].
Related Information
http://chemistry.apache.org/
User Management
The service treats user names as opaque strings that are defined by the application. All actions in the document
service are executed in the context of this named user or the currently logged-on user. That is, the service sets the
cmis:createdBy and cmis:lastModifiedBy properties to the provided user name. The service also uses this
user name to evaluate access control lists (ACLs). For more information, see the CMIS specification. The
document service is not connected to a user management system and, therefore, does not perform any user
authentication.
Repositories are identified either by their unique name or by their ID. The unique name is a human-readable name
that should be constructed with Java package-name semantics, for example, com.foo.MySpecialRepository,
to avoid naming conflicts. Repositories in the document service are secured by a key provided by the application.
When a repository is created, a key must be supplied. Any further attempts to connect to this repository only
succeed if the same key is provided.
Multiple applications can access the same repository. However, applications can only connect to the same
repository using the unique name assigned to this repository if they are deployed within the same account as the
application that created the repository. In contrast, applications that are deployed in a different account cannot
access this repository. A consequence of having repositories isolated within an account is that data cannot be
shared across different accounts.
Repository ABC is created when Application1 is deployed in Account1. Application2 is located in the same Account1
as Application1; therefore, Application2 can also access the same repository using its unique name ABC.
Application3 is deployed in Account2. Application3 calls a repository that has the same unique name ABC as the
other repository that belongs to Account1. However, Application3 cannot access the ABC repository that belongs
to Account1 using the identical unique name, because the repositories are isolated within the account. Therefore,
Application3 in Account2 connects to another ABC repository that belongs to Account2. In summary, a repository
can only be accessed by applications that are deployed in the same account as the application that created the
repository.
Multitenancy
The document service supports multitenancy and isolates data between tenants. Each application consuming the
document service creates a repository and provides a unique name and a secret key. The document service
creates the repository internally in the context of the tenant using the application. While the repository name
uniquely identifies the repository, an internal ID is created for the application for each tenant. This ID identifies the
storage area containing all the data for the tenant in this repository. An application that uses the document
service in this way has multitenancy support. No additional logic is required at the application level.
Tip
One document service session is always bound to one tenant and to one user. If you create the session only
once, then store it statically, and finally reuse it for all subsequent requests, you end up in the tenant where you
first created the document service session. That is: You do not use multitenancy.
We recommend that you create one document service session per tenant and cache these sessions for future
reuse. Make sure that you do not mix up the tenants on your side.
If you expect a high load for a specific tenant, we recommend that you create a pool of sessions for that tenant.
A session is always bound to a particular server of the document service, so a single session does not scale. If
you use a session pool, the different sessions are bound to different document service servers, which gives you
much better performance and scalability.
Prerequisites
● You have downloaded and configured the SAP Eclipse platform. For more information, see Setting Up the
Development Environment [page 43].
● You have created a HelloWorld Web application as described in Creating a HelloWorld Application [page 56].
● You have downloaded the SDK used for local development.
● You have installed MongoDB as described in Local Development Setup [page 620].
Context
This tutorial describes how you extend the HelloWorld Web application so that it uses the SAP Cloud Platform
Document service for managing unstructured content in your application. You test and run the Web application on
your local server and the SAP Cloud Platform.
Note
For historical reasons, ecm is used to refer to the document service in the code and the code samples.
Procedure
package hello;

import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.io.InputStream;
import java.util.HashMap;
import java.util.Map;
import javax.servlet.ServletException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
import org.apache.chemistry.opencmis.client.api.CmisObject;
import org.apache.chemistry.opencmis.client.api.Document;
import org.apache.chemistry.opencmis.client.api.Folder;
import org.apache.chemistry.opencmis.client.api.ItemIterable;
import org.apache.chemistry.opencmis.client.api.Session;
import org.apache.chemistry.opencmis.commons.PropertyIds;
import org.apache.chemistry.opencmis.commons.data.ContentStream;
import org.apache.chemistry.opencmis.commons.enums.VersioningState;
import org.apache.chemistry.opencmis.commons.exceptions.CmisNameConstraintViolationException;
import org.apache.chemistry.opencmis.commons.exceptions.CmisObjectNotFoundException;
import com.sap.ecm.api.RepositoryOptions;
import com.sap.ecm.api.RepositoryOptions.Visibility;
import com.sap.ecm.api.EcmService;
import javax.naming.InitialContext;

/**
 * Servlet implementation class HelloWorldServlet
 */
public class HelloWorldServlet extends HttpServlet {
    private static final long serialVersionUID = 1L;

    /**
     * @see HttpServlet#HttpServlet()
     */
    public HelloWorldServlet() {
        super();
    }

    /**
     * @see HttpServlet#doGet(HttpServletRequest request, HttpServletResponse response)
     */
    protected void doGet(HttpServletRequest request, HttpServletResponse response)
            throws ServletException, IOException {
        response.getWriter().println("<html><body>");
        try {
            // Use a unique name with package semantics e.g. com.foo.MyRepository
            String uniqueName = "com.foo.MyRepository";
            // Use a secret key only known to your application (min. 10 chars)
            String secretKey = "my_super_secret_key_123";
            Session openCmisSession = null;
            InitialContext ctx = new InitialContext();
            String lookupName = "java:comp/env/" + "EcmService";
            EcmService ecmSvc = (EcmService) ctx.lookup(lookupName);
            try {
                // connect to my repository
                openCmisSession = ecmSvc.connect(uniqueName, secretKey);
            }
For more information about using the OpenCMIS API, see the Apache Chemistry documentation.
During execution, this servlet executes the following steps:
1. It connects to a repository. If the repository does not yet exist, the servlet creates the repository.
2. It creates a subfolder.
3. It creates a document.
4. It displays the children of the root folder.
4. Add the resource reference description to the web.xml file.
Note
The document service is consumed by defining a resource in your web.xml file and by using JNDI lookup to
retrieve an instance of the com.sap.ecm.api.EcmService class. Once you have established a
connection to the document service, you can use one of the connect(…) methods to get a CMIS session
(org.apache.chemistry.opencmis.client.api.Session). A few examples of how to use the
OpenCMIS Client API from the Apache Chemistry project are described below. For more information, see
the Apache Chemistry page.
<resource-ref>
<res-ref-name>EcmService</res-ref-name>
<res-type>com.sap.ecm.api.EcmService</res-type>
</resource-ref>
5. Test the Web application locally or in the SAP Cloud Platform. For testing, proceed as described in Deploying
Locally from Eclipse IDE [page 1045] or Deploying on the Cloud from Eclipse IDE [page 1047] linked below.
Related Information
To use the document service in a Web application, download the SDK and install the MongoDB database.
Context
Procedure
If your setup is correct, you see a text message starting with "You are trying to access MongoDB on
the native driver port. …"
Related Information
Overview
The services on SAP Cloud Platform can be consumed by applications that are deployed on SAP Cloud Platform
but not by external applications. There are cases, however, where applications want to access content in the
cloud but cannot be deployed in the cloud.
This can be addressed by deploying an application on SAP Cloud Platform that accepts incoming requests from
the Internet and forwards them to the document service. We refer to this type of application as a proxy bridge.
The proxy bridge is deployed on SAP Cloud Platform and runs in an account using the common SAP Cloud
Platform patterns. The proxy bridge is responsible for user authentication. The resources consumed in the
document service are billed to the SAP Cloud Platform account that deployed this application.
Related Information
Context
All the standard mechanisms of the document service apply. The SAP Cloud Platform SDK provides a base class
(a Java servlet) that provides the proxy functionality out-of-the-box. This can easily be extended to customize its
behavior. The proxy bridge performs a 1:1 mapping from source CMIS calls to target CMIS calls. CMIS bindings
can be enabled or disabled. Further modifications of the incoming requests, such as allowing only certain
operations or modifying parameters, are not supported. The Apache OpenCMIS project contains a bridge module
that supports advanced scenarios of this type.
Caution
Note that the proxy bridge opens your repository to the public Internet and should always be secured
appropriately.
Note
For historical reasons, ecm is used to refer to the document service in the code and the code samples.
Procedure
1. Create an SAP Cloud Platform application as described in Using Java EE 6 Web Profile [page 1036].
2. Create a web.xml file and a servlet class.
3. Derive your servlet from the class com.sap.ecm.api.AbstractCmisProxyServlet.
4. Add a servlet mapping to your web.xml file using a URL pattern that contains a wildcard. See the following
example.
Example
<servlet>
<servlet-name>cmisproxy</servlet-name>
<servlet-class>my.app.CMISProxyServlet</servlet-class>
</servlet>
<servlet-mapping>
<servlet-name>cmisproxy</servlet-name>
<url-pattern>/cmis/*</url-pattern>
</servlet-mapping>
You can use prefixes other than /cmis and you can add more servlets in accordance with your needs. The
URL pattern for your servlet derived from the class AbstractCmisProxyServlet must contain a /* suffix.
5. Override the two abstract methods provided by the AbstractCmisProxyServlet class:
getRepositoryUniqueName() and getRepositoryKey().
These methods return a string containing the unique name and the secret key of the repository to be
accessed. You can override a third method getDestinationName(), which also returns a string. If this
method is overridden, it should return the name of a destination deployed for this application to connect to
the service. This is useful if a service user is used, for example. Ensure that there is a valid custom destination.
6. If you override the getServletConfig() method, ensure that you call the superclass implementation in your
method.
7. Optionally, enable or disable individual CMIS protocol bindings by overriding the following methods:
○ supportAtomPubBinding()
○ supportBrowserBinding()
8. Protect the proxy servlet, for example with a security constraint in your web.xml file:
<security-constraint>
<web-resource-collection>
<web-resource-name>Proxy</web-resource-name>
<url-pattern>/cmis/*</url-pattern>
</web-resource-collection>
<auth-constraint>
<role-name>EcmDeveloper</role-name>
</auth-constraint>
</security-constraint>
In some cases it might be useful to grant public access for reading content but not for modifying, creating or
deleting it. For example, a Web content management application might embed pictures into a public Web site
but store them in the document service. For a scenario of this type, override the method readOnlyMode() so
that it returns true. This means that only read requests are forwarded to the repository and all other requests
are rejected. The read-only mode only works with the JSON binding. The other bindings are disabled in this
case.
Note
If you need finer control or dynamic permissions you can override the requireAuthentication() and
authenticate() methods in the AbstractCmisProxyServlet.
9. Optionally, you can override two more methods to customize timeout values for reading and connecting:
getConnectTimeout() and getReadTimeout().
It should only be necessary to use these methods if frequent timeout errors occur.
package my.app;

import com.sap.ecm.api.AbstractCmisProxyServlet;

public class CMISProxyServlet extends AbstractCmisProxyServlet {

    @Override
    protected String getRepositoryUniqueName() {
        return "MySampleRepository";
    }

    // For productive applications, use a secure location to store the secret key.
    @Override
    protected String getRepositoryKey() {
        return "abcdef0123456789";
    }
}
10. To access the proxy bridge from an external application, you need the correct URL.
Example
Your proxy bridge application is deployed as cmisproxy.war on the landscape. The cockpit shows the
following URL for your app: https://cmisproxysap.hana.ondemand.com/cmisproxy and the
web.xml is as shown above. Then the URLs are as follows:
○ CMIS 1.1:
AtomPub: https://cmisproxysap.hana.ondemand.com/cmisproxy/cmis/1.1/atom
Browser: https://cmisproxysap.hana.ondemand.com/cmisproxy/cmis/json
○ CMIS 1.0:
AtomPub: https://cmisproxysap.hana.ondemand.com/cmisproxy/cmis/atom
Browser: (not available)
These URLs can be passed to the CMIS Workbench from Apache Chemistry, for example.
The workbench requires basic authentication. Please add the following code to your web.xml:
Sample Code
<login-config>
<auth-method>BASIC</auth-method>
</login-config>
Example
A full example that can be deployed consists of two files: a web.xml and a servlet class. This example only
exposes the CMIS browser binding (JSON) using the prefix /cmis in the URL.
Sample Code
web.xml
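A plausible web.xml for this example, assembled from the servlet mapping, security constraint, and login configuration shown earlier in this section (the servlet class name matches the servlet below):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<web-app xmlns="http://java.sun.com/xml/ns/javaee" version="3.0">
    <servlet>
        <servlet-name>cmisproxy</servlet-name>
        <servlet-class>my.app.CMISProxyServlet</servlet-class>
    </servlet>
    <servlet-mapping>
        <servlet-name>cmisproxy</servlet-name>
        <url-pattern>/cmis/*</url-pattern>
    </servlet-mapping>
    <security-constraint>
        <web-resource-collection>
            <web-resource-name>Proxy</web-resource-name>
            <url-pattern>/cmis/*</url-pattern>
        </web-resource-collection>
        <auth-constraint>
            <role-name>EcmDeveloper</role-name>
        </auth-constraint>
    </security-constraint>
    <login-config>
        <auth-method>BASIC</auth-method>
    </login-config>
</web-app>
```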
Sample Code
Servlet
package my.app;

import com.sap.ecm.api.AbstractCmisProxyServlet;

public class CMISProxyServlet extends AbstractCmisProxyServlet {
    private static final long serialVersionUID = 1L;

    @Override
    protected boolean supportAtomPubBinding() {
        return false;
    }

    @Override
    protected boolean supportBrowserBinding() {
        return true;
    }

    public CMISProxyServlet() {
        super();
    }

    @Override
    protected String getRepositoryUniqueName() {
        return "MySampleRepository";
    }

    // For productive applications, use a secure location to store the secret key.
    @Override
    protected String getRepositoryKey() {
        return "abcdef0123456789";
    }
}
Procedure
Your repository should never be available to the public. In the example, basic authentication and the role
EcmDeveloper are required (see the security pages). Assign this role to the appropriate users or groups in the
account area of the cockpit.
Table 278:
Field                  Value
Type                   HTTP
Name                   documentservice
CloudConnectorVersion  2
ProxyType              Internet
URL                    https://cmisproxy<account_ID>.hana.ondemand.com/cmisproxy/cmis/json
5. Create an HTML5 application accessing the document service and open it in the Web IDE. Then create an
index.html file with the following contents:
Example
<!DOCTYPE html>
<html>
<head>
<meta charset="utf-8">
<title>Use CMIS from HTML5 Application</title>
<script type="text/javascript">
function setFilename() {
  var thefile = document.getElementById('filename').value.split('\\').pop();
  document.getElementById("cmisname").value = thefile;
}
function getChildren() {
var xhttp = new XMLHttpRequest();
xhttp.onreadystatechange = function() {
if (this.readyState == 4 && this.status == 200) {
var children = JSON.parse(this.responseText);
var str = "<ul>";
var repoUrl = "/cmis/<repo-ID>/root/";
for (var i = 0; i <children.objects.length; i++) {
if
(children.objects[i].object.properties["cmis:baseTypeId"].value ==
'cmis:folder') {
str += '<li>'
For more information, see Creating an HTML5 Application [page 1115], Creating a Project [page 87], and
Editing the HTML5 Application [page 87].
a. Open the URL of the proxy bridge from the previous step in a browser, copy the repository ID, for
example, 8d1c2718db5a2fc0d7242585, from the response.
Example
{
"8d1c2718db5a2fc0d7242585": {
"repositoryId": "8d1c2718db5a2fc0d7242585",
"repositoryName": "Sample Repository",
"repositoryDescription": "Sample repository for external access",
"vendorName": "SAP AG",
"productName": "SAP Cloud Platform, document service",
"productVersion": "1.0",
"rootFolderId": "8d1c2718db5a2fc0d7242585",
"capabilities": {
…
b. In your index.html, replace all occurrences of <repo-ID> with the extracted repository ID and all
occurrences of <your-proxy-url> with the URL of the proxy bridge application.
c. Create a neo-app.json file in the root of your project directory with the following contents:
Example
{
"welcomeFile": "/index.html",
"routes": [
{
"path": "/cmis",
"target": {
"type": "destination",
"name": "documentservice"
},
"description": "CMIS Connection Document Service"
}
],
"sendWelcomeFileRedirect": true
}
This forwards all URLs starting with /cmis to the path specified in the destination named
“documentservice”.
d. Commit your files in Git, create a new version, and activate the version.
For more information, see Creating a Version [page 1116] and Activating a Version [page 1117].
The following sections describe the advanced concepts of the SAP Cloud Platform Document service.
One benefit of Content Management Interoperability Services (CMIS) as compared to a file system is the
extended handling of metadata.
You can use metadata to structure content and make it easier to find documents in a repository, even if it contains
millions of documents. In the CMIS domain model, metadata is structured using types. A type contains the set of
allowed or required properties, for example, an Invoice type that has the InvoiceNo and CustomerNo
properties.
A type is described in a type definition and contains a list of property definitions. CMIS has a set of predefined
types and predefined properties. Customer-specific types and additional custom properties can extend the
predefined types. When a type is created, it is derived from a parent type and extends the set of the parent
properties. In this way, a hierarchy of types is built. The base types do not have parents. Base types are defined in
the CMIS specification. The most important base types are cmis:document and cmis:folder.
Predefined properties contain metadata that is usually available in the existing repositories. These are, for
example, cmis:name, cmis:createdBy, cmis:lastModifiedBy, cmis:creationDate, and
cmis:lastModificationDate. They contain the name of the author, the creation date, and the date of the last
modification. Some properties are type-specific, for example, a folder has a parent folder and a document has a
property for content length.
Each property has a data format (String, Integer, Date, Decimal, ID, and so on) and can define additional
constraints, such as whether the property is required or optional, single-valued or multi-valued, or restricted to
a list of choices.
Each object stored in a CMIS repository has a type and a set of properties. Types and properties provide the
mechanism used to find objects with CMIS queries.
Related Information
http://chemistry.apache.org/
http://chemistry.apache.org/java/developing/guide.html
http://chemistry.apache.org/java/0.9.0/maven/apidocs/
http://chemistry.apache.org/java/examples/index.html
http://docs.oasis-open.org/cmis/CMIS/v1.1
http://docs.oracle.com/javase/6/docs/api/java/security/KeyStore.html
The document store on SAP Cloud Platform supports the cmis:document and cmis:folder types. It also has a
built-in subtype for versioned documents. The types can be investigated using the Apache CMIS workbench.
In addition to the standard CMIS properties, the document service of SAP Cloud Platform supports additional SAP
properties. The most important ones are sap:owner and sap:tags.
Related Information
http://chemistry.apache.org/java/download.html
http://docs.oasis-open.org/cmis/CMIS/v1.1
Context
The CMIS client API uses a map to pass properties. The key of the map is the property ID and the value is the
actual value to be passed. The cmis:name and cmis:objectTypeId properties are mandatory.
Procedure
1. Use a name that is unique within the folder and a type ID that is a valid type from the repository.
2. Run the sample code.
// properties
Map<String, Object> properties = new HashMap<String, Object>();
properties.put(PropertyIds.OBJECT_TYPE_ID, "cmis:document");
String name = "Document-1";
properties.put(PropertyIds.NAME, name);
// content
byte[] content = "Hello World!".getBytes();
InputStream stream = new ByteArrayInputStream(content);
ContentStream contentStream = new ContentStreamImpl(name,
BigInteger.valueOf(content.length), "text/plain", stream);
// create a document (root: the root folder or any other parent folder)
Document document = root.createDocument(properties, contentStream, VersioningState.NONE);
Results
You can inspect the document in the CMIS workbench. You can see that various other properties have been set by
the system, such as the ID, the creation date, and the creating user.
Context
This procedure focuses on the use of the sap:tags property to mark the document. This is a multi-value
attribute, so you can assign more than one tag to it.
Procedure
1. To assign the Hello and Tutorial tags to the document, use the following code:
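A sketch of such an update with OpenCMIS (myDocument is the Document created earlier; sap:tags is multi-valued, so a list is passed):

```java
Map<String, Object> updateProperties = new HashMap<String, Object>();
updateProperties.put("sap:tags", Arrays.asList("Hello", "Tutorial"));
myDocument.updateProperties(updateProperties);
```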
Table 279:
Name ID Type Value
This section gives a very brief introduction to querying. The OpenCMIS Client API is a Java client-side library with
many capabilities, for example, paging results. For more information, consult the OpenCMIS Javadoc and the
examples on the Apache Chemistry Web site.
Context
The following procedure focuses on a use case where you have created a second folder and some more
documents. The repository then looks like this:
The Hello Document and Hi Document documents have the tags Hello and Tutorial; the Lorem Ipsum
document has no tags.
Procedure
1. Use the CMIS query to search documents in the system based on their properties.
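With OpenCMIS, such a query can be sketched as follows (openCmisSession as before; the second argument of query() controls whether all document versions are searched):

```java
ItemIterable<QueryResult> results = openCmisSession.query(
        "SELECT * FROM cmis:document WHERE ANY sap:tags IN ('Tutorial')", false);
for (QueryResult hit : results) {
    System.out.println((String) hit.getPropertyValueByQueryName("cmis:name"));
}
```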
Table 280:
cmis:createdBy cmis:name sap:owner cmis:objectId
Note
In this case, the workbench displays only the first value of multi-valued properties.
Table 281:
cmis:createdBy cmis:name sap:owner sap:tags cmis:objectId
Tutorial
Tutorial
Related Information
http://chemistry.apache.org/java/0.13.0/maven/apidocs/
http://chemistry.apache.org/java/examples/index.html
For the SAP Cloud Platform Document service, you can create new object types or remove them again, in
accordance with the CMIS standard.
Context
In CMIS, every object, for example a document or a folder, has an object type. The object type defines the basic
settings of an object of that type. For example, the cmis:document object type defines that objects of that type
are searchable.
Furthermore, the object type defines the properties that can be set for an object of that type, for example, an
object of type cmis:document has a mandatory cmis:name property that must be a string. Therefore, every
object of type cmis:document needs a name. Otherwise, the object is not valid and the repository rejects it.
In CMIS, types are organized hierarchically. The most important (predefined) base types are:
● cmis:document
● cmis:folder
● cmis:secondary
CMIS allows you to define additional types provided that each type is a descendant of one of the predefined base
types. In this type hierarchy, a type inherits all property definitions of its parent type. CMIS 1.1 allows type
hierarchy modifications (see the OASIS page) by providing methods for the creation, the modification, and the
removal of object types. Currently, the document service only supports the creation and removal of types. This
allows a developer to define new types as subtypes of existing types. The new types might possess other
properties in addition to all of the automatically inherited property definitions of the parent type. Creating objects
of that type allows you to assign values for these new properties to the object. Remember to also set the values
for the inherited properties as appropriate.
The following example shows how to create a new document type that possesses one additional property for
storing the summary of a document. The developer must implement the MyDocumentTypeDefinition and
MyStringPropertyDefinition classes. Example implementations for these classes as well as for the
Example
import java.util.HashMap;
import java.util.Map;
import org.apache.chemistry.opencmis.client.api.ObjectType;
import org.apache.chemistry.opencmis.client.api.Session;
import org.apache.chemistry.opencmis.commons.PropertyIds;
import org.apache.chemistry.opencmis.commons.definitions.PropertyDefinition;
import org.apache.chemistry.opencmis.commons.enums.BaseTypeId;
import org.apache.chemistry.opencmis.commons.enums.Cardinality;
import org.apache.chemistry.opencmis.commons.enums.ContentStreamAllowed;
import org.apache.chemistry.opencmis.commons.enums.Updatability;
import org.apache.chemistry.opencmis.commons.exceptions.CmisObjectNotFoundException;
import org.apache.chemistry.opencmis.commons.exceptions.CmisRuntimeException;
// specify type attributes
String idAndQueryName = "test:docWithSummary";
String description = "Doc with Summary";
String displayName = "Document with Summary";
String localName = "some local name";
String localNamespace = "some local name space";
String parentTypeId = BaseTypeId.CMIS_DOCUMENT.value();
Boolean isCreatable = true;
Boolean includedInSupertypeQuery = true;
Boolean queryable = true;
ContentStreamAllowed contentStreamAllowed = ContentStreamAllowed.ALLOWED;
Boolean versionable = false;
// specify property definitions
Map<String, PropertyDefinition<?>> propertyDefinitions
= new HashMap<String, PropertyDefinition<?>>();
MyStringPropertyDefinition summaryPropertyDefinitions
= createSummaryPropertyDefinitions();
propertyDefinitions.put(summaryPropertyDefinitions.getId(),
summaryPropertyDefinitions);
// build object type
MyDocumentTypeDefinition docTypeDefinition
= new MyDocumentTypeDefinition(idAndQueryName, description, displayName,
localName, localNamespace, parentTypeId, isCreatable,
includedInSupertypeQuery, queryable, contentStreamAllowed,
versionable, propertyDefinitions);
// add type to repository
ecmSession.createType(docTypeDefinition);
// create document of new type
ecmSession.clear();
Map<String, String> newDocProps = new HashMap<String, String>();
newDocProps.put(PropertyIds.OBJECT_TYPE_ID, docTypeDefinition.getId());
newDocProps.put(PropertyIds.NAME, "testDocWithNewType");
newDocProps.put("test:summary", "This is a document with a summary property");
// create the document, for example, in the root folder
ecmSession.getRootFolder().createDocument(newDocProps, null,
org.apache.chemistry.opencmis.commons.enums.VersioningState.NONE);
The following restrictions apply when you create types:
● You can only create types with a cmis:document, cmis:folder, or cmis:secondary base type.
● The ID and the query name must be identical and meet the following rules:
○ They must match the Java regular expression "[a-zA-Z][a-zA-Z0-9_:]*".
○ They must not start with cmis:, sap, or s: in any combination of uppercase and lowercase letters; for example, cMis: is also not allowed.
● If the base type of the new object type is cmis:secondary, no other type definition may already contain a
property definition with the same ID or query name.
● If the base type of the new object type is not cmis:secondary and another type definition already contains a
property definition with the same ID or query name, this property definition must be identical to the one of the
new type.
● You cannot specify default values or choices.
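As a sketch, the ID and query name rules above can be validated before calling createType. The TypeNameValidator helper below is illustrative and not part of the document service API; it only encodes the rules listed in this section:

```java
import java.util.Locale;
import java.util.regex.Pattern;

// Illustrative helper (not part of the document service API) that encodes
// the type ID / query name rules described above.
public class TypeNameValidator {

    private static final Pattern VALID = Pattern.compile("[a-zA-Z][a-zA-Z0-9_:]*");

    // Prefixes that are reserved in any combination of upper and lower case.
    private static final String[] RESERVED = { "cmis:", "sap", "s:" };

    public static boolean isValidTypeId(String idAndQueryName) {
        if (idAndQueryName == null || !VALID.matcher(idAndQueryName).matches()) {
            return false;
        }
        String lower = idAndQueryName.toLowerCase(Locale.ROOT);
        for (String prefix : RESERVED) {
            if (lower.startsWith(prefix)) {
                return false; // reserved prefix such as cmis:, sap, or s:
            }
        }
        return true;
    }
}
```

For example, isValidTypeId("test:docWithSummary") passes, while "cMis:doc" is rejected because its lowercase form starts with cmis:.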
To delete a new object type, you can use the following code snippet: ecmSession.deleteType(typeId);
You can only delete an object type if it is no longer used by any documents or folders in the repository.
Example
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import org.apache.chemistry.opencmis.commons.data.CmisExtensionElement;
import org.apache.chemistry.opencmis.commons.definitions.PropertyDefinition;
import org.apache.chemistry.opencmis.commons.definitions.TypeDefinition;
import org.apache.chemistry.opencmis.commons.definitions.TypeMutability;
import org.apache.chemistry.opencmis.commons.enums.BaseTypeId;
public abstract class MyTypeDefinition implements TypeDefinition {
private String description = null;
private String displayName = null;
private String idAndQueryName = null;
private String localName = null;
private String localNamespace = null;
private String parentTypeId = null;
private Boolean isCreatable = null;
private Boolean includedInSupertypeQuery = null;
private Boolean queryable = null;
private Map<String, PropertyDefinition<?>> propertyDefinitions
= new HashMap<String, PropertyDefinition<?>>();
public MyTypeDefinition(String idAndQueryName, String description,
String displayName, String localName, String localNamespace,
String parentTypeId, Boolean isCreatable,
Boolean includedInSupertypeQuery, Boolean queryable,
Map<String, PropertyDefinition<?>> propertyDefinitions) {
this.description = description;
this.displayName = displayName;
this.idAndQueryName = idAndQueryName;
this.localName = localName;
this.localNamespace = localNamespace;
this.parentTypeId = parentTypeId;
this.isCreatable = isCreatable;
this.includedInSupertypeQuery = includedInSupertypeQuery;
this.queryable = queryable;
if (propertyDefinitions != null) {
this.propertyDefinitions = propertyDefinitions;
}
}
@Override
abstract public BaseTypeId getBaseTypeId();
@Override
public String getDescription() {
return description;
}
@Override
public String getDisplayName() {
return displayName;
}
@Override
public String getId() {
return idAndQueryName;
}
// Implementations of the remaining TypeDefinition methods, for example
// getLocalName, getQueryName, getParentTypeId, getPropertyDefinitions,
// getTypeMutability, and getExtensions, follow the same pattern and are
// omitted here.
}
import java.util.List;
import org.apache.chemistry.opencmis.commons.data.CmisExtensionElement;
import org.apache.chemistry.opencmis.commons.definitions.TypeMutability;
public class MyTypeMutability implements TypeMutability {
@Override
public List<CmisExtensionElement> getExtensions() {
return null;
}
@Override
public void setExtensions(List<CmisExtensionElement> arg0) {
}
@Override
public Boolean canCreate() {
return true;
}
@Override
public Boolean canDelete() {
return true;
}
@Override
public Boolean canUpdate() {
return false;
}
}
import java.util.Map;
import org.apache.chemistry.opencmis.commons.definitions.DocumentTypeDefinition;
import org.apache.chemistry.opencmis.commons.definitions.PropertyDefinition;
import org.apache.chemistry.opencmis.commons.enums.BaseTypeId;
import org.apache.chemistry.opencmis.commons.enums.ContentStreamAllowed;
public class MyDocumentTypeDefinition extends MyTypeDefinition implements
DocumentTypeDefinition {
private ContentStreamAllowed contentStreamAllowed = null;
private Boolean versionable = null;
public MyDocumentTypeDefinition(String idAndQueryName, String description,
String displayName, String localName, String localNamespace,
String parentTypeId, Boolean isCreatable,
Boolean includedInSupertypeQuery, Boolean queryable,
ContentStreamAllowed contentStreamAllowed, Boolean versionable,
Map<String, PropertyDefinition<?>> propertyDefinitions) {
super(idAndQueryName, description, displayName, localName, localNamespace,
parentTypeId, isCreatable, includedInSupertypeQuery, queryable,
propertyDefinitions);
this.contentStreamAllowed = contentStreamAllowed;
this.versionable = versionable;
}
@Override
public BaseTypeId getBaseTypeId() {
return BaseTypeId.CMIS_DOCUMENT;
}
@Override
public ContentStreamAllowed getContentStreamAllowed() {
return contentStreamAllowed;
}
@Override
public Boolean isVersionable() {
return versionable;
}
}
import java.util.Map;
import org.apache.chemistry.opencmis.commons.definitions.FolderTypeDefinition;
import org.apache.chemistry.opencmis.commons.definitions.PropertyDefinition;
import org.apache.chemistry.opencmis.commons.enums.BaseTypeId;
public class MyFolderTypeDefinition extends MyTypeDefinition implements
FolderTypeDefinition {
public MyFolderTypeDefinition(String idAndQueryName, String description,
String displayName, String localName, String localNamespace,
String parentTypeId, Boolean isCreatable,
Boolean includedInSupertypeQuery, Boolean queryable,
Map<String, PropertyDefinition<?>> propertyDefinitions) {
super(idAndQueryName, description, displayName, localName, localNamespace,
parentTypeId, isCreatable, includedInSupertypeQuery, queryable,
propertyDefinitions);
}
@Override
public BaseTypeId getBaseTypeId() {
return BaseTypeId.CMIS_FOLDER;
}
}
import java.util.Map;
import org.apache.chemistry.opencmis.commons.definitions.PropertyDefinition;
import org.apache.chemistry.opencmis.commons.definitions.SecondaryTypeDefinition;
import org.apache.chemistry.opencmis.commons.enums.BaseTypeId;
public class MySecondaryTypeDefinition extends MyTypeDefinition implements
SecondaryTypeDefinition {
public MySecondaryTypeDefinition(String idAndQueryName, String description,
String displayName, String localName, String localNamespace,
String parentTypeId, Boolean isCreatable,
Boolean includedInSupertypeQuery, Boolean queryable,
Map<String, PropertyDefinition<?>> propertyDefinitions) {
super(idAndQueryName, description, displayName, localName, localNamespace,
parentTypeId, isCreatable, includedInSupertypeQuery, queryable,
propertyDefinitions);
}
@Override
public BaseTypeId getBaseTypeId() {
return BaseTypeId.CMIS_SECONDARY;
}
}
import java.util.List;
import org.apache.chemistry.opencmis.commons.data.CmisExtensionElement;
import org.apache.chemistry.opencmis.commons.definitions.Choice;
import org.apache.chemistry.opencmis.commons.definitions.PropertyDefinition;
import org.apache.chemistry.opencmis.commons.enums.Cardinality;
import org.apache.chemistry.opencmis.commons.enums.PropertyType;
import org.apache.chemistry.opencmis.commons.enums.Updatability;
abstract public class MyPropertyDefinition<T> implements PropertyDefinition<T> {
private String idAndQueryName = null;
private Cardinality cardinality = null;
private String description = null;
private String displayName = null;
private String localName = null;
private String localNameSpace = null;
private Updatability updatability = null;
private Boolean orderable = null;
private Boolean queryable = null;
public MyPropertyDefinition(String idAndQueryName, Cardinality cardinality,
String description, String displayName, String localName,
String localNameSpace, Updatability updatability,
Boolean orderable, Boolean queryable) {
super();
this.idAndQueryName = idAndQueryName;
this.cardinality = cardinality;
this.description = description;
this.displayName = displayName;
this.localName = localName;
this.localNameSpace = localNameSpace;
this.updatability = updatability;
this.orderable = orderable;
this.queryable = queryable;
}
@Override
public String getId() {
return idAndQueryName;
}
@Override
public Cardinality getCardinality() {
return cardinality;
}
@Override
public String getDescription() {
return description;
}
@Override
public String getDisplayName() {
return displayName;
}
@Override
public String getLocalName() {
return localName;
}
@Override
public String getLocalNamespace() {
return localNameSpace;
}
@Override
abstract public PropertyType getPropertyType();
@Override
public String getQueryName() {
return idAndQueryName;
}
@Override
public Updatability getUpdatability() {
return updatability;
}
@Override
public Boolean isOrderable() {
return orderable;
}
@Override
public Boolean isQueryable() {
return queryable;
}
// Implementations of the remaining PropertyDefinition methods, for example
// getChoices, getDefaultValue, and isRequired, are omitted here.
}
import org.apache.chemistry.opencmis.commons.definitions.PropertyBooleanDefinition;
import org.apache.chemistry.opencmis.commons.enums.Cardinality;
import org.apache.chemistry.opencmis.commons.enums.PropertyType;
import org.apache.chemistry.opencmis.commons.enums.Updatability;
public class MyBooleanPropertyDefinition extends MyPropertyDefinition<Boolean>
implements PropertyBooleanDefinition {
public MyBooleanPropertyDefinition(String idAndQueryName,
Cardinality cardinality, String description, String displayName,
String localName, String localNameSpace,
Updatability updatability, Boolean orderable, Boolean queryable) {
super(idAndQueryName, cardinality, description, displayName,
localName, localNameSpace, updatability, orderable, queryable);
}
@Override
public PropertyType getPropertyType() {
return PropertyType.BOOLEAN;
}
}
import java.util.GregorianCalendar;
import org.apache.chemistry.opencmis.commons.definitions.PropertyDateTimeDefinition;
import org.apache.chemistry.opencmis.commons.enums.Cardinality;
import org.apache.chemistry.opencmis.commons.enums.DateTimeResolution;
import org.apache.chemistry.opencmis.commons.enums.PropertyType;
import org.apache.chemistry.opencmis.commons.enums.Updatability;
public class MyDateTimePropertyDefinition extends
MyPropertyDefinition<GregorianCalendar> implements PropertyDateTimeDefinition {
public MyDateTimePropertyDefinition(String idAndQueryName,
Cardinality cardinality, String description, String displayName,
String localName, String localNameSpace,
Updatability updatability, Boolean orderable, Boolean queryable) {
super(idAndQueryName, cardinality, description, displayName,
localName, localNameSpace, updatability, orderable, queryable);
}
@Override
public PropertyType getPropertyType() {
return PropertyType.DATETIME;
}
@Override
public DateTimeResolution getDateTimeResolution() {
return DateTimeResolution.TIME;
}
}
import java.math.BigDecimal;
import org.apache.chemistry.opencmis.commons.definitions.PropertyDecimalDefinition;
import org.apache.chemistry.opencmis.commons.enums.Cardinality;
import org.apache.chemistry.opencmis.commons.enums.DecimalPrecision;
import org.apache.chemistry.opencmis.commons.enums.PropertyType;
import org.apache.chemistry.opencmis.commons.enums.Updatability;
public class MyDecimalPropertyDefinition extends MyPropertyDefinition<BigDecimal>
implements
PropertyDecimalDefinition {
public MyDecimalPropertyDefinition(String idAndQueryName,
Cardinality cardinality, String description, String displayName,
String localName, String localNameSpace,
Updatability updatability, Boolean orderable, Boolean queryable) {
super(idAndQueryName, cardinality, description, displayName,
localName, localNameSpace, updatability, orderable, queryable);
}
@Override
public PropertyType getPropertyType() {
return PropertyType.DECIMAL;
}
@Override
public BigDecimal getMaxValue() {
return null;
}
@Override
public BigDecimal getMinValue() {
return null;
}
@Override
public DecimalPrecision getPrecision() {
return null;
}
}
import org.apache.chemistry.opencmis.commons.definitions.PropertyHtmlDefinition;
import org.apache.chemistry.opencmis.commons.enums.Cardinality;
import org.apache.chemistry.opencmis.commons.enums.PropertyType;
import org.apache.chemistry.opencmis.commons.enums.Updatability;
public class MyHtmlPropertyDefinition extends MyPropertyDefinition<String>
implements PropertyHtmlDefinition {
public MyHtmlPropertyDefinition(String idAndQueryName,
Cardinality cardinality, String description, String displayName,
String localName, String localNameSpace,
Updatability updatability, Boolean orderable, Boolean queryable) {
super(idAndQueryName, cardinality, description, displayName,
localName, localNameSpace, updatability, orderable, queryable);
}
@Override
public PropertyType getPropertyType() {
return PropertyType.HTML;
}
}
import org.apache.chemistry.opencmis.commons.definitions.PropertyIdDefinition;
import org.apache.chemistry.opencmis.commons.enums.Cardinality;
import org.apache.chemistry.opencmis.commons.enums.PropertyType;
import org.apache.chemistry.opencmis.commons.enums.Updatability;
public class MyIdPropertyDefinition extends MyPropertyDefinition<String> implements
PropertyIdDefinition {
public MyIdPropertyDefinition(String idAndQueryName,
Cardinality cardinality, String description, String displayName,
String localName, String localNameSpace,
Updatability updatability, Boolean orderable, Boolean queryable) {
super(idAndQueryName, cardinality, description, displayName,
localName, localNameSpace, updatability, orderable, queryable);
}
@Override
public PropertyType getPropertyType() {
return PropertyType.ID;
}
}
An analogous MyIntegerPropertyDefinition class (implementing PropertyIntegerDefinition and returning PropertyType.INTEGER) follows the same pattern as the other property definitions.
import java.math.BigInteger;
import org.apache.chemistry.opencmis.commons.definitions.PropertyStringDefinition;
import org.apache.chemistry.opencmis.commons.enums.Cardinality;
import org.apache.chemistry.opencmis.commons.enums.PropertyType;
import org.apache.chemistry.opencmis.commons.enums.Updatability;
public class MyStringPropertyDefinition extends MyPropertyDefinition<String>
implements PropertyStringDefinition {
public MyStringPropertyDefinition(String idAndQueryName,
Cardinality cardinality, String description, String displayName,
String localName, String localNameSpace,
Updatability updatability, Boolean orderable, Boolean queryable) {
super(idAndQueryName, cardinality, description, displayName,
localName, localNameSpace, updatability, orderable, queryable);
}
@Override
public PropertyType getPropertyType() {
return PropertyType.STRING;
}
@Override
public BigInteger getMaxLength() {
return null;
}
}
An analogous MyUriPropertyDefinition class (implementing PropertyUriDefinition and returning PropertyType.URI) follows the same pattern.
The document service supports the following permissions:
● cmis:read
○ Allows fetching an object (folder or document).
○ Allows reading the ACL, properties and the content of an object.
● sap:file
○ Includes all privileges of cmis:read.
○ Allows the creation of objects in a folder and to move an object.
● cmis:write
○ Includes all privileges of sap:file.
○ Allows modifying the properties and the content of an object.
○ Allows checking out of a versionable document.
● sap:delete
○ Includes all privileges of cmis:write.
○ Allows the deletion of an object.
○ Allows checking in and canceling check out of a private working copy.
● cmis:all
○ Includes all privileges of sap:delete.
○ Allows modifying the ACL of an object.
For a repository the initial settings for the root folder are:
● The ACL contains one ACE for the {sap:builtin}everyone principal with the cmis:all permission. With
these settings, all principals have full control over the root folder.
● The owner property is set to {sap:builtin}admin (ownership is described below).
Initially, without specific ACL settings, all documents and folders possess an ACL with one ACE for the built-in
principal {sap:builtin}everyone with the cmis:all permission that grants all users unrestricted access.
ACLs or ACEs are not inherited but explicitly stored at the particular objects. An empty ACL means that no
principal has permission, except the owner of the object. The owner concept is described below in more detail.
The CMIS client library provides methods for modifying ACLs (Access Control Lists), for example, the applyAcl, addAcl, and removeAcl methods.
To modify the ACL of the current object only, set the propagation parameter to OBJECTONLY. To modify the
ACL of the current object as well as of the ACLs of all of the object's descendants, set the propagation
parameter to PROPAGATE. You can apply PROPAGATE only to folders. It works as follows: The ACEs that are added
and removed at the root folder of the operation are computed and then applyAcl is called with these ACE sets
for each descendant.
Removing a permission for a principal from an object results in no ACE entry for the principal in that ACL. This is
independent of the current settings in the ACL with respect to this principal.
In methods with parameters for adding and removing ACEs, first the specified ACEs are removed and then the
new ones are added.
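The remove-then-add order can be illustrated with a simplified in-memory model of an ACL, here a map from principal to a set of permission names. This is a sketch of the described semantics only, not the document service implementation:

```java
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

// Simplified ACL model: principal -> set of permission names.
public class AceOrderDemo {

    public static Map<String, Set<String>> apply(Map<String, Set<String>> acl,
            Map<String, Set<String>> removeAces, Map<String, Set<String>> addAces) {
        Map<String, Set<String>> result = new HashMap<>();
        for (Map.Entry<String, Set<String>> e : acl.entrySet()) {
            result.put(e.getKey(), new HashSet<>(e.getValue()));
        }
        // Step 1: remove the specified ACEs first.
        for (Map.Entry<String, Set<String>> e : removeAces.entrySet()) {
            Set<String> perms = result.get(e.getKey());
            if (perms != null) {
                perms.removeAll(e.getValue());
                if (perms.isEmpty()) {
                    // No ACE entry remains for this principal.
                    result.remove(e.getKey());
                }
            }
        }
        // Step 2: then add the new ACEs.
        for (Map.Entry<String, Set<String>> e : addAces.entrySet()) {
            result.computeIfAbsent(e.getKey(), k -> new HashSet<>())
                  .addAll(e.getValue());
        }
        return result;
    }
}
```

Because removals are applied first, removing and re-adding a permission for the same principal in one call leaves the added permission in place.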
Every folder and document has the sap:owner property. When an object is created, the currently connected user automatically becomes the owner of the object. The owner of an object always has full access, even without any specific ACEs granting them permission.
The owner property can be changed using the updateProperties method with the following restrictions:
● The new value of the owner property must be identical to the currently connected user.
● The currently connected user must have the cmis:all privilege.
● The application can use a connect method without explicitly providing a parameter containing a user. Then
the current user is forwarded to the document service. The user's right to access particular documents and
folders is determined using the user ID and the attached ACLs.
● The application can provide a user ID explicitly using a parameter of the connect method. Then this ID is used
for checking the access rights.
Note
Note that the document service is not connected to any Identity Provider or Identity Management System and
considers the provided ID as an opaque string. This is also true for the user or principal strings provided in the
ACEs when setting ACLs at objects.
The application is responsible for providing the correct user ID but it can also submit a technical user ID that
does not belong to any physical user, for example, to implement some kind of service user concept.
Besides providing a user, some connect methods have an additional parameter to provide the IDs of additional
principals to the document service.
If additional principals are provided, the user not only has his or her own permissions to access objects but also gets the access rights of these principals. If, for example, the user has no right to access a specific document but one of the additionally provided principals is allowed to read the content, then the user can also access the content in the context of this connection.
With this concept, an application can also use roles (or even groups) in the ACLs by setting ACEs indicating these roles or groups. The roles of the current user can then be evaluated during the connection calls, and the user is granted access rights according to his or her role (or group) membership.
It is very important to keep in mind that the additional principals are also opaque strings for the document service. This leaves it up to the application to decide what kind of information it sends as additional principals, including identifiers known only to the application itself. On the other hand, the application must ensure that no user has an ID that matches one of the additional principals used in its ACLs, because such a user might unintentionally get too many access rights.
Example
This example shows how to assign write and read permissions for two kinds of users: Authors and readers.
Authors should have write access to documents and readers should only have read access to the documents.
The application defines two roles, one for authors called author-role and one for readers called reader-role.
For more information about securing applications and using roles, see Securing Applications.
To set up permissions for authors and readers as described in our example, set the appropriate ACEs at the
documents. The following code snippet shows how to set these permissions for a single document:
import com.sap.security.um.service.UserManagementAccessor;
import com.sap.security.um.user.User;
import com.sap.security.um.user.UserProvider;
…
String authorRole = "author-role";
String readerRole = "reader-role";
As long as the user's session is active, his or her permission to access the documents is determined by the
user's role assignment. That is, authors can change documents and readers are only allowed to read them.
There are two built-in principals:
● The {sap:builtin}admin user always has full access to all objects, no matter which ACLs are set.
Note
Note that the document service considers user IDs only as opaque strings. Therefore, the application must prevent normal users from connecting to the document service using this administration user ID.
● The {sap:builtin}everyone principal applies to all users. Therefore, granting a permission to this principal using an ACE grants this permission to all users.
There are some document service specific rules with respect to ACLs.
Object Creation
When creating an object the connected user becomes the owner of the new object. The ACL of the parent folder is
copied to the new object and modified according to the addAcl and removeAcl parameter settings of the
create method.
Access by Path
A user is allowed to fetch an object using the path if the user has at least the cmis:read permission for the
object. In this case, the ACLs of the ancestor folders of the object are not relevant.
Versioning
● All documents of a version series, except the private working copy (PWC), share the same ACL and owner.
● It is only allowed to modify the ACL on the last version of a version series and only if it is not checked out.
● Principals are allowed to check out a document if they have the cmis:write permission for it. They become
the owner of the PWC and the ACL of the PWC initially contains only one ACE with their principal name and
the cmis:all permission.
● The ACL and the owner of a PWC can be changed independently of the other objects of the version series the
PWC belongs to. Only the owner of the PWC and users with the sap:delete permission are allowed to check
in or to cancel a checkout.
● Only principals having the cmis:all permission for the version series are allowed to add or remove ACEs
when checking in a PWC.
● getChildren
Returns all children the principal is allowed to see. If the principal has no read permission for the current
folder, a NodeNotFoundException is thrown.
● getDescendants
Returns only those descendants of a folder F, which the principal is allowed to see. Only those descendants
are returned for which all folders on the path from F to the descendant are accessible to the principal. If the
principal has no read permission for the current folder F, a NodeNotFoundException is thrown.
● getFolderTree
In many ways, the document service behaves like a relational database, where each document and folder is one entry. Therefore, most of the performance tips for databases also apply to the document service.
To help you improve the performance of your application that uses the document service, we provide the following tips.
Note
These are only recommendations, and may not be suitable in every case. There may be situations where you
cannot and should not apply them.
Documents and folders are stored in the document service in different repositories. Creating a large number of
repositories entails significant CPU usage and requires a considerable amount of storage, even if no documents
are stored.
Recommendation
We recommend that you keep the total number of repositories to a minimum. Avoid, for example, creating a
separate repository for each user, especially if the users do not have large amounts of data to store. In such a
situation, create just one repository instead and store the user data in several separate folders.
If folders contain many children, performance might be impaired when you navigate to one of these folders using
a getChildren call. If you navigate to a folder to analyze its data, for example, using the CMIS Workbench, this
analysis becomes complicated. In contrast, fetching a child in a folder with many children by using its object ID or
its path is not a problem.
It is difficult to define what qualifies as a "large" folder. If you send only one getChildren call per hour, then a
thousand or more children would be totally acceptable, but if you send many calls per second, then even 100
children might impair performance. In any case, the load caused by calling this method increases linearly with the
number of children.
Instead of having one folder with many children, you might consider subdividing the children into different
subfolders or even a subfolder hierarchy. Another alternative to using the getChildren call option is to use the
query method with the IN_FOLDER predicate together with additional restrictions to limit the number of matching
results.
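If children are mainly fetched by path or ID, one way to subdivide a large folder is to derive a bucket subfolder from a hash of the child name. The FolderBuckets helper and the 256-bucket layout below are an illustrative sketch, not a document service API:

```java
// Illustrative sketch: map each child name to one of 256 bucket subfolders
// so that a flat folder with many children becomes a two-level hierarchy.
public class FolderBuckets {

    public static String bucketPathFor(String parentPath, String childName) {
        // Stable two-hex-digit bucket derived from the child name.
        int bucket = Math.floorMod(childName.hashCode(), 256);
        return String.format("%s/%02x/%s", parentPath, bucket, childName);
    }
}
```

A document named invoice.pdf would then live under a path such as /docs/&lt;bucket&gt;/invoice.pdf and can still be fetched directly by that path, while no single folder accumulates all children.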
Several CMIS methods have a skip count parameter, for example, the getChildren or the query method. Using
large skip counts produces a significant load because a huge number of matching result objects is found and
skipped before the final result set can be collected. To prevent the need for large skip counts, try to reduce the
number of matching results by subdividing the children into different subfolders or by using a more selective
query.
Only use a sort criterion if you really need it, because it might reduce performance significantly (see also Paging with maxItems and skipCount (for example, for getChildren or query) in the Frequently Asked Questions).
In the operational context (see the OperationalContext.java class), you can define the properties that are to
be returned together with the selected objects. Do not query all properties because this might be time consuming
and it increases the amount of data transferred over the network. In particular, requesting the cmis:path
property can be inefficient because it has to be computed for each call. The general rule is to reduce the amount of data that is requested and transferred.
It is much faster to access an object using its ID than using its path.
Using the getFolderTree or getDescendants method on large hierarchies is very inefficient. The same is true
for the folder predicate IN_TREE that you can use in the statement of the query method. All these methods are
slow for large hierarchies even if the final result set is small.
The reason for the performance problems with these methods is that all the descendant folders of the start folder
have to be loaded from the database into the server where the document service is running. This results in many
calls to the database and many objects are transferred over the network. Finally, a very complex query with all the
IDs of the folders in the hierarchy has to be created and sent to the database to get the final result.
For the query method, the size of the searchable folder hierarchy is already restricted to a maximum of 1000. For
larger hierarchies an exception is thrown. Be aware that even a hierarchy of 1000 folders is quite large and results
in a heavy load on the system as well as bad performance for the request.
When applications use the document service, they fetch a session object using one of the connect methods. Creating a session is quite an expensive operation, so sessions should be reused and shared if possible. A session object is thread safe and allows parallel method calls.
Usually, a session is bound to a user. To reduce the number of sessions that are created, fetch the session only for
the first request of the user and store it in the user's HTTP session. Then the session can be reused in subsequent
requests of this user.
If an application uses a service user to connect the session to the document service, we recommend that you
store this session in a central place and reuse it for all subsequent requests.
● A session object has an internal cache, for example, for already fetched objects. To make sure that you fetch
the latest version of specific objects, clear the cache from time to time.
● If a session is used for a very long time, problems might occur that result in exceptions (for example, network
connection problems). A possible solution is to replace the failing session with a new one. However, do not
replace a session if an ObjectNotFound exception is thrown because you tried to fetch a non-existent
document or folder. This also applies to similar situations where the exception is part of the normal method
behavior.
Multitenancy
One document service session is always bound to one tenant and to one user. If you create the session only once,
then store it statically, and finally reuse it for all subsequent requests, you end up in the tenant where you first
created the document service session. That is: You do not use multitenancy.
We recommend that you create one document service session per tenant and cache these sessions for future
reuse. Make sure that you do not mix up the tenants on your side.
If you expect a high load for a specific tenant, we recommend that you create a pool of sessions for that tenant. A
session is always bound to a particular server of the document service and this will not scale. If you use a session
pool, the different sessions are bound to different document service servers and you will get a much better
performance and scaling.
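The per-tenant caching can be sketched generically. The session type and the connect call are left abstract below because the concrete connect method depends on your application; the factory function stands in for it:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Function;

// Generic sketch of a per-tenant session cache. S stands for your concrete
// session type; the factory represents one of the connect methods.
public class TenantSessionCache<S> {

    private final Map<String, S> sessions = new ConcurrentHashMap<>();
    private final Function<String, S> connectFactory;

    public TenantSessionCache(Function<String, S> connectFactory) {
        this.connectFactory = connectFactory;
    }

    public S sessionFor(String tenantId) {
        // Creates the session on first use for a tenant and reuses it afterwards,
        // so sessions of different tenants are never mixed up.
        return sessions.computeIfAbsent(tenantId, connectFactory);
    }
}
```

For a high-load tenant, the single cached value could be replaced by a small pool of sessions per tenant, following the recommendation above.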
Search Hints
You can indicate hints for queries. The general syntax is:
hint:<hintname>[,<hintname>]*:<cmis query>
● ignoreOwner: Usually, documents are returned for which the current user is the owner OR is present in an
ACE. The ignoreOwner setting returns only documents for which the current user has an ACE; ownership is
ignored in this case. This improves the speed of the query because the owner check is omitted. This is useful if
the owner is present in an ACE anyway.
● noPath: Does not return the path property even if it is requested. This improves the speed of queries on
folders, because paths do not have to be computed internally.
Sample Code
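A minimal sketch of composing such a hinted query string; the HintedQuery helper and the query text are illustrative, not part of the client library:

```java
// Builds a query string using the hint prefix syntax described above:
// hint:<hintname>[,<hintname>]*:<cmis query>
public class HintedQuery {

    public static String withHints(String cmisQuery, String... hints) {
        if (hints.length == 0) {
            return cmisQuery; // no hints: plain CMIS query
        }
        return "hint:" + String.join(",", hints) + ":" + cmisQuery;
    }
}
```

For example, withHints("SELECT * FROM cmis:document", "ignoreOwner", "noPath") yields a query that skips both the owner check and the path computation.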
The document service executes several backups a day to prevent file loss due to disasters. Backups are kept for
14 days and then deleted. Backups are not needed for simple hard disk crashes, since all storage hardware is
based on redundant hard disks.
If you implement paging using maxItems and skipCount, be aware that the different calls might be sent to
different database servers each returning the result objects in a possibly different order. To get a consistent result
for these calls, add a unique sort criterion so that each server returns the objects using the same order. Be aware
that using a sort criterion might reduce the processing speed significantly. Therefore, only use a sort criterion if
really needed.
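The effect of a unique tie-breaker can be illustrated with plain Java sorting. The Doc class and the (name, objectId) sort key below are illustrative; the point is that the unique objectId makes the overall order, and therefore each page, deterministic:

```java
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;

// Illustrative model of paging with a unique sort criterion.
public class StablePaging {

    public static final class Doc {
        public final String name;
        public final String objectId;

        public Doc(String name, String objectId) {
            this.name = name;
            this.objectId = objectId;
        }
    }

    public static List<Doc> page(List<Doc> docs, int skipCount, int maxItems) {
        List<Doc> sorted = new ArrayList<>(docs);
        // Sorting by name alone is ambiguous for equal names; the unique
        // objectId tie-breaker makes the order identical on every server.
        sorted.sort(Comparator.comparing((Doc d) -> d.name)
                .thenComparing(d -> d.objectId));
        int from = Math.min(skipCount, sorted.size());
        int to = Math.min(from + maxItems, sorted.size());
        return new ArrayList<>(sorted.subList(from, to));
    }
}
```

With only a non-unique criterion, two servers could legitimately return the two "report" documents below in either order, splitting pages inconsistently.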
You can connect to the document service by treating it as an external service and the document service treats
your HTML5 application as an external app that requests access.
Procedure
To enable external access to your document service repositories, deploy a small proxy application that is available
out-of-the-box. For more information about its usage and deployment, see Accessing the Document Service from
an HTML5 Application [page 625].
Related Information
In the cockpit, you can create, edit, and delete a document service repository for your accounts. In addition, you
can monitor the number and size of the tenant repositories of your document service repository.
Note
Due to the tenant isolation in SAP Cloud Platform, the document service cockpit cannot access or view
repositories you create in SAP Document Center, or vice versa.
Related Information
In the cockpit, you can create document service repositories for your accounts.
Procedure
1. Log on with a user (who is an account member) to the SAP Cloud Platform cockpit.
Table 282:
Field Entry
Name Mandatory. Enter a unique name consisting of digits, letters, or special characters. The name is
restricted to 100 characters.
Display Name Optional. Enter a display name that is shown instead of the name in the repository list of the
account. The display name is restricted to 200 characters. You cannot change this name later on.
Description Optional. Enter a descriptive text for the repository. The description is restricted to 500 characters.
You cannot change the description later on.
When you create a repository, you can activate a virus scanner for write accesses. The virus
scanner scans files during uploads. If it finds a virus, write access is denied and an error message
is displayed. Note that the time for uploading a file is prolonged by the time needed to scan the
file for viruses.
Repository Key Enter a repository key consisting of at least 10 characters but without special characters. This
key is used to access the repository metadata.
You cannot recover this key. Therefore, you must be sure to remember it.
You can, however, create a new key using the console client command reset-ecm-key [page
255].
4. Choose Save.
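The constraints from the table above can be pre-checked on the client side. This is a convenience sketch with illustrative function names; the cockpit remains the authority on validation.

```javascript
// Documented constraint: name up to 100 characters (digits, letters, or
// special characters are all allowed).
function isValidRepositoryName(name) {
  return typeof name === "string" && name.length > 0 && name.length <= 100;
}

// Documented constraint: key of at least 10 characters without special
// characters (interpreted here as letters and digits only).
function isValidRepositoryKey(key) {
  return typeof key === "string" && /^[A-Za-z0-9]{10,}$/.test(key);
}
```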
Related Information
In the cockpit, you can change the name, key, or virus scan settings of the repository. You cannot change the
display name or the description.
Procedure
1. Log on with a user (who is an account member) to the SAP Cloud Platform cockpit.
2. In Repositories > Document Repositories in the navigation area, select the repository for which you want
to change the name or the virus scan setting.
3. Choose Edit, and change the repository name or the virus scan setting.
4. Enter the repository key.
5. To change the repository key itself, choose the Change Repository Key button and fill in the key fields that
appear.
In the cockpit, you can delete a repository including the data of any tenants in the repository.
Context
Caution
Be very careful when using this command. Deleting a repository permanently deletes all data. This data cannot
be recovered.
If you simply forgot the repository key, you can request a new repository key and avoid deleting the repository.
For more information, see reset-ecm-key [page 255].
Procedure
1. Log on with a user (who is an account member) to the SAP Cloud Platform cockpit.
2. In Repositories > Document Repositories in the navigation area, select the repository that you want to
delete.
3. Choose Delete.
4. On the dialog that appears, enter the repository key.
5. Choose Delete.
In the cockpit, you can monitor the number and size of the tenant repositories of your document service
repository.
Context
If an application runs in several different tenant contexts, a tenant repository is created for each tenant context.
The tenant repository is created automatically when the application connects to the document service and the
respective tenant repository did not exist before.
1. Log on with a user (who is an account member) to the SAP Cloud Platform cockpit.
2. In Repositories > Document Repositories in the navigation area, click the name of your repository.
3. Choose Tenant Repositories in the navigation area.
Related Information
You can create and manage repositories for the document service with client commands.
The following set of console client commands for managing repositories is available:
Related Information
Use SAP Document Center to access and share business content stored in your existing document management
systems, by connecting them to your cloud application.
SAP Document Center helps you provide a seamless user experience to your business users by integrating file
access into the SAP Fiori Launchpad, SAP Jam, and SAP Business Suite applications. Using the native mobile
apps, your employees can access business content everywhere, online or offline - so they can focus on business
anytime, anywhere.
SAP Document Center helps you innovate. Integrate file sharing capabilities into your existing applications.
Expose tailored business content through the ABAP connector implementation. Leverage state-of-the-art
document management capabilities to integrate into your own apps (HTML5, iOS, Android, Windows Mobile, …).
Or use the SAP Cloud Platform Document service to build completely new content-rich applications.
SAP Document Center provides a ready-to-use solution for sharing content based on the SAP Cloud Platform, as
well as an extension platform to integrate custom repositories and custom clients. In addition, it can be integrated
as a tile into the SAP Fiori launchpad. This way, SAP Document Center enables access to existing on-premise
business content, for example, documents that are stored in Microsoft SharePoint or SAP Business Suite. Users
can share content to collaborate with their business partners in a compliant way. Moreover, business document
templates and standards are available company-wide.
On top of the ready-to-use solution, you can use SAP Document Center to integrate a sharing functionality into
your existing applications, implement your own clients for advanced scenarios, and extend ABAP connectivity to
support your business processes.
Related Information
The SAP Cloud Platform Feedback Service (Feedback Service) provides developers, customers, and partners with
the option to collect end user feedback for their applications. In addition, the Feedback Service provides
predefined analytics on the collected feedback data - feedback rating distribution and detailed text analysis of
user sentiment (positive, negative, or neutral).
Note
The Feedback Service is a beta functionality that is available on the SAP Cloud Platform trial landscape for
developer accounts.
To use the Feedback Service, you must enable it from the SAP Cloud Platform cockpit for your account. For more
information, see Accessing Services in the Related Information section.
The Analysis UI leverages the SAP HANA analytics and text analysis capabilities. Feedback data is stored in the
SAP HANA DB.
To be able to operate in Administration and Analysis, you need the following roles assigned to your user:
● FeedbackAdministrator
● FeedbackAnalyst
If you are the account owner, the roles are automatically assigned to your user once you have enabled the
Feedback Service. If you want to allow other SAP ID users to access the Analysis and Administration UIs, you need
to assign the roles manually. For more information about assigning the required roles, see Consuming the
Feedback Service [page 663].
In the Administration UI, the administrator adds the applications for which feedback is to be collected. As a result,
the developer can use the client API to consume the Feedback Service.
Once the Feedback Service is consumed by the application and feedback data is collected, the feedback analyst
can explore feedback text analysis in the Analysis UI. As a result, a developer can use end user feedback to
improve the performance and appearance of the specific application.
Architecture
The Feedback Service is operated by SAP Cloud Platform and leverages the in-memory technology of the SAP
HANA DB.
Note
The Feedback Service is a beta functionality that is available on the SAP Cloud Platform trial landscape for
developer accounts.
In this section, you will learn how to enable your application to use the SAP Cloud Platform Feedback Service to
collect feedback. To do so, you need to:
Note
For the role assignments to take effect once you have made them, either start a new browser session or log
out from the cockpit and log on to it again.
4. Add the application for which feedback is to be collected in the Administration UI of the Feedback Service.
For more information about accessing the Administration and Analysis UIs of the Feedback Service, adding
applications, and analyzing feedback, see Getting Feedback for Applications [page 674].
5. Modify your application code to use the Feedback Service client API for collecting your application users'
feedback.
Your application can consume the Feedback Service either via a browser or via web application backend.
Related Information
The SAP Cloud Platform Feedback Service is exposed through a client API that you can use to enable users to
send feedback for your application. You do this by adding code to your application that uses the Feedback Service
client API.
For more information about the tutorials, see the Related Links section.
Request
Your application can consume the Feedback Service through the service's REST API. The messages exchanged
between the client (your application) and the Feedback Service are JSON-encoded. You call the Feedback Service
by issuing an HTTP POST request to the unique application feedback URL that contains your application ID:
https://feedback-account_name.hanatrial.ondemand.com/api/v2/apps/application_id/posts
The application feedback URL is automatically generated after you have registered your application in the
Administration UI of the Feedback Service. For more information about how to obtain the application feedback
URL, see Feedback Service Administration in the Related Links section.
You need to set the Content-Type HTTP header of the request to application/json. In the request body, you
supply a feedback resource in JSON format. The resource may have the following attributes (for example, a lang
context attribute with a language code such as en for English).
To collect feedback data, you need to provide values for at least one rating or one free text attribute. You can
additionally pass values for:
● Up to 5 rating attributes
● Up to 5 free text attributes
● Up to 8 context attributes
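These limits can be checked before sending the request. A minimal sketch; isValidFeedback is an illustrative name, not part of the client API.

```javascript
// Documented payload limits: at least one rating or one free text, at most
// 5 ratings, 5 free texts, and 8 context attributes.
function isValidFeedback(payload) {
  var texts = payload.texts ? Object.keys(payload.texts).length : 0;
  var ratings = payload.ratings ? Object.keys(payload.ratings).length : 0;
  var context = payload.context ? Object.keys(payload.context).length : 0;
  if (texts + ratings === 0) return false; // need at least one rating or text
  return texts <= 5 && ratings <= 5 && context <= 8;
}
```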
Caution
According to the data privacy terms defined in the Terms of Use for SAP HANA Cloud Developer Edition, no
personal data must be collected, processed, stored or transmitted using your developer account on the trial
landscape. Therefore, you must not use the context attributes of the Feedback Service client API to collect
personal data such as user ID, user name, and so on.
Response
Upon a successful request, the Feedback Service returns an HTTP response with code 200 OK and an empty body.
Error Handling
In case of errors, the Feedback Service returns an HTTP response with an appropriate error code. Whenever there
is any additional information describing the error, it is contained in the response body as an Error object. For
example:
{
  "error": {
    "code": 30,
    "message": "quota exceeded"
  }
}
The value of error.code identifies the cause, and the value of error.message describes it. The string in
error.message is not intended to be presented to your application users and is therefore not translated. The error
message's purpose is to assist the development of your application.
The table below lists the most common errors that the service can return. In addition to this list, a call to the
Feedback Service may also result in a response with another HTTP response code. In this case the HTTP
response code itself should be enough to describe the issue.
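A caller might interpret responses along these lines; describeFeedbackError is an illustrative helper, not part of the service.

```javascript
// Maps a Feedback Service response to a developer-facing description. The
// error.code / error.message shape follows the example above; for other
// errors the HTTP status code alone may be all that is available.
function describeFeedbackError(status, body) {
  if (status === 200) return "ok";
  try {
    var parsed = JSON.parse(body);
    if (parsed.error && parsed.error.message) {
      // Not meant for end users; log it for developers instead.
      return "feedback service error " + parsed.error.code + ": " + parsed.error.message;
    }
  } catch (e) {
    // No JSON body: fall through to the status code.
  }
  return "feedback service returned HTTP " + status;
}
```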
Examples:
Example
A sample request to the Feedback Service may look like this:
● URL: https://feedback-<account_name>.hanatrial.ondemand.com/api/v2/apps/
<application_id>/posts
● HTTP method: POST
● Content-Type: application/json
● Request body:
{
  "texts": {
    "t1": "Very helpful",
    "t2": "Well done",
    "t3": "Not usable at all",
    "t4": "I don't like it",
    "t5": "OK"
  },
  "ratings": {
    "r1": {"value": 5},
    "r2": {"value": 2},
    "r3": {"value": 5},
    "r4": {"value": 3},
    "r5": {"value": 1}
  }
}
Related Information
This tutorial shows you how to use the SAP Cloud Platform Feedback Service directly via a web browser.
Prerequisites
Procedure
a. From the Eclipse main menu, navigate to File > New > Dynamic Web Project.
b. In the Project name field, enter feedback-app. Make sure that SAP HANA Cloud is selected as the target
runtime.
c. Leave the default values for the other project settings and choose Finish.
2. Add an HTML file to the web project:
a. In the Project Explorer view, select the feedback-app node.
b. From the Eclipse main menu, navigate to File > New > HTML File.
c. Enter index.html as the file name.
d. To generate the file, choose Finish.
<!DOCTYPE HTML>
<html>
<head>
<meta http-equiv="X-UA-Compatible" content="IE=edge" />
<meta http-equiv="Content-Type" content="text/html;charset=UTF-8"/>
<title>Feedback Application</title>
<script src="https://sapui5.hana.ondemand.com/resources/sap-ui-core.js"
id="sap-ui-bootstrap"
data-sap-ui-libs="sap.m, sap.ui.commons"
data-sap-ui-theme="sap_bluecrystal">
</script>
<script>
var app = new sap.m.App({initialPage:"page1"});
var t1 = new sap.m.Text({text: "Please share your feedback"});
var t2 = new sap.m.Text({text: "Do you like it"});
var ind1 = new sap.m.RatingIndicator({maxValue : 5, value : 4});
var t3 = new sap.m.Text({text: "Some free comments:"});
var textArea = new sap.m.TextArea({rows : 2, cols: 40});
var sendBtn = new sap.m.Button({
text : "Send",
press : function() {
var data = {
"texts": {t1: textArea.getValue()},
"ratings": {r1: {value: ind1.getValue()}},
"context": {page: "page1"}
};
$.ajax({
url: "https://feedback-<account_name>.hanatrial.ondemand.com/api/v2/apps/<your_application_id>/posts",
type: "POST",
contentType: "application/json",
data: JSON.stringify(data)
}).done(function() {
jQuery.sap.require("sap.m.MessageToast");
sap.m.MessageToast.show("Thank you. Your feedback was accepted.");
}).fail(function() {
jQuery.sap.require("sap.m.MessageToast");
sap.m.MessageToast.show("Something went wrong. Please try again later.");
});
}
});
var vbox = new sap.m.VBox({
fitContainer: true,
displayInline: false,
items: [t1, t2, ind1, t3, textArea, sendBtn]
});
var page1 = new sap.m.Page("page1", {
title: "Feedback Application",
content : vbox
});
app.addPage(page1);
app.placeAt("content");
</script>
</head>
<body class="sapUiBody">
<div id="content"></div>
</body>
</html>
3. Adjust the service URL in the source code so that it points to the application feedback URL generated for your
application.
4. Test the application on SAP Cloud Platform local runtime:
a. Deploy the application on your SAP Cloud Platform local runtime.
b. Open the application in your web browser: http://<host>:<port>/feedback-app/. Send sample
feedback.
5. Test the application on the SAP Cloud Platform:
a. Deploy the application on the SAP Cloud Platform.
b. Start the application and open it in your web browser.
Related Information
This tutorial shows you how to use the SAP Cloud Platform Feedback Service from Java code in a simple Java
EE web application.
Prerequisites
a. From the Eclipse main menu, navigate to File > New > Dynamic Web Project.
b. In the Project name field, enter feedback-app. Make sure that SAP HANA Cloud is selected as the target
runtime.
c. Leave the default values for the other project settings and choose Finish.
2. Add a servlet to the web project:
a. In the Project Explorer view, select the feedback-app node.
b. From the Eclipse main menu, navigate to File > New > Servlet.
c. Enter the Java package hello and the class name FeedbackServlet.
d. To generate the servlet, choose Finish.
e. Replace the source code with the following content:
FeedbackServlet.java
package hello;
import java.io.IOException;
import javax.naming.Context;
import javax.naming.InitialContext;
import javax.naming.NamingException;
import javax.servlet.ServletException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
import org.apache.http.HttpResponse;
import org.apache.http.client.HttpClient;
import org.apache.http.client.methods.HttpPost;
import org.apache.http.conn.ClientConnectionManager;
import org.apache.http.entity.StringEntity;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import com.sap.core.connectivity.api.DestinationException;
import com.sap.core.connectivity.api.http.HttpDestination;
/**
* Servlet implementation class FeedbackServlet
*/
public class FeedbackServlet extends HttpServlet {
private static final long serialVersionUID = 1L;
private static final Logger LOGGER =
LoggerFactory.getLogger(FeedbackServlet.class);
public FeedbackServlet() {
super();
}
protected void doPost(HttpServletRequest request, HttpServletResponse
response) throws ServletException, IOException {
HttpClient httpClient = null;
try {
Context ctx = new InitialContext();
HttpDestination destination = (HttpDestination)
ctx.lookup("java:comp/env/FeedbackService");
httpClient = destination.createHttpClient();
HttpPost post = new HttpPost();
String text = request.getParameter("text");
String rating = request.getParameter("rating");
String page = request.getParameter("page");
String body = "{\"texts\":{\"t1\": \"" + text + "\"}, \"ratings\":
{\"r1\": {\"value\": " + rating + "}}, \"context\": {\"page\": \"" + page +
"\", \"lang\": \"en\", \"attr1\": \"mobile\"}}";
StringEntity entity = new StringEntity(body, "UTF-8");
//Use the proper content type
entity.setContentType("application/json");
post.setEntity(entity);
HttpResponse feedbackResponse = httpClient.execute(post);
int statusCode = feedbackResponse.getStatusLine().getStatusCode();
if (statusCode != 200) {
LOGGER.error("Feedback service returned HTTP status " + statusCode);
response.sendError(HttpServletResponse.SC_INTERNAL_SERVER_ERROR, "Something
went wrong. Please try again later.");
} else {
response.getWriter().print("Your feedback was accepted. Thank
You!");
}
} catch (NamingException e) {
LOGGER.error("Cannot lookup the feedback service destination", e);
response.sendError(HttpServletResponse.SC_INTERNAL_SERVER_ERROR,
"Cannot lookup the feedback service destination");
} catch (DestinationException e) {
LOGGER.error("Cannot create HttpClient", e);
response.sendError(HttpServletResponse.SC_INTERNAL_SERVER_ERROR,
"Something went wrong please try again later.");
} finally {
if (httpClient != null) {
ClientConnectionManager connectionManager =
httpClient.getConnectionManager();
if (connectionManager != null) {
connectionManager.shutdown();
}
}
}
}
}
<!DOCTYPE HTML>
<html>
<head>
<meta http-equiv="X-UA-Compatible" content="IE=edge" />
<meta http-equiv="Content-Type" content="text/html;charset=UTF-8"/>
<title>Feedback Application</title>
<script src="https://sapui5.hana.ondemand.com/resources/sap-ui-core.js"
id="sap-ui-bootstrap"
data-sap-ui-libs="sap.m, sap.ui.commons"
data-sap-ui-theme="sap_bluecrystal">
</script>
<script>
var app = new sap.m.App({initialPage:"page1"});
var t1 = new sap.m.Text({text: "Please share your feedback"});
var t2 = new sap.m.Text({text: "Do you like it"});
var ind1 = new sap.m.RatingIndicator({maxValue : 5, value : 4});
var t3 = new sap.m.Text({text: "Some free comments:"});
var textArea = new sap.m.TextArea({rows : 2, cols: 40});
var sendBtn = new sap.m.Button({
text : "Send",
press : function() {
var data = {
"text": textArea.getValue(),
web.xml
...
<resource-ref>
<res-ref-name>FeedbackService</res-ref-name>
<res-type>com.sap.core.connectivity.api.http.HttpDestination</res-type>
</resource-ref>
...
Name=FeedbackService
Type=HTTP
URL=https://feedback-<account_name>.hanatrial.ondemand.com/api/v2/apps/
<your_application_id>/posts
Authentication=NoAuthentication
The application feedback URL, which contains the application ID, is automatically generated after you have
registered your application in the Administration UI of the Feedback Service. For more information about
how to obtain the application feedback URL, see Feedback Service Administration in the Related Links
section.
c. Start the application and open it in your web browser.
Related Information
Once you deploy your application on the SAP Cloud Platform, you need to add the applications for which feedback
is to be collected in the Administration UI of the feedback service. As a result, a dedicated application feedback
URL is generated. The developer uses this URL in the client API to consume the feedback service. Once the
feedback service is consumed by the application and feedback data is collected, the feedback analyst can explore
feedback rating and text analysis in the Analysis UI of the feedback service. As a result, a developer can use end
user feedback to improve the performance and appearance of the specific application.
To be able to operate in the Administration and Analysis UIs of the feedback service, you need the following roles
assigned to your user:
● FeedbackAdministrator
● FeedbackAnalyst
If you are the account owner, the roles are automatically assigned to your user once you have enabled the feedback
service. If you want to allow other SAP ID users to access the Analysis and Administration UIs, you need to assign
the roles manually.
You can also provide your feedback about the feedback service and its UI. To do that, choose the Feedback button
and share your ideas and suggestions for improvement in the feedback form. Note that information about your
landscape host, as well as the specific place (page, view, or tab) from which you called the feedback form, is
collected for analysis purposes.
Related Information
● Add applications for which feedback is to be collected in the Administration UI of the feedback service
● Customize descriptions of feedback questions
● Customize descriptions of context attributes
● Free up feedback quota space
Once you add an application to your list, you enable it to use the feedback service. As a result, a unique account-
specific and application-specific URL is generated. To start collecting feedback, the developer needs to integrate
the URL in her or his application UI where end users post feedback (for example, in a feedback form). The URL is
called through a POST request by the application that wants to send feedback. That is, once an end user submits
the feedback form, the application calls the feedback service through the URL and the service stores user
feedback.
https://feedback-<account_name>.hanatrial.ondemand.com/api/v2/apps/<application_id>/
posts
To be able to operate in the Administration UI of the feedback service, you need to assign the
FeedbackAdministrator role to your user.
https://feedback-<account_name>.hanatrial.ondemand.com/admin/mobile
Each account has a feedback quota assigned – that is, a specific amount of feedback data that can be stored in
the SAP HANA DB. The amount equals 250 feedback forms filled in by end users. Once you reach 70% of the
feedback quota, you get a warning message. Once you reach the feedback quota limit, however, the feedback
service stops processing feedback requests and storing feedback data. In either case, you can free up quota
space by deleting the feedback records for a particular time period of your choice.
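The quota behavior described above can be sketched as a simple threshold check; quotaStatus is illustrative, and the service itself enforces the limits.

```javascript
// Documented quota behavior: 250 stored feedback forms per account, a
// warning at 70% of the quota, and no further processing at the limit.
function quotaStatus(storedForms, quota) {
  quota = quota || 250;
  if (storedForms >= quota) return "limit reached - requests are rejected";
  if (storedForms >= 0.7 * quota) return "warning - consider freeing quota";
  return "ok";
}
```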
Note
The feedback administrator can enter the questions' text from the application feedback form as descriptions.
If you have the FeedbackAnalyst role assigned (in addition to the FeedbackAdministrator role), you can analyze
feedback results and export raw feedback data.
Related Information
Context
As a feedback administrator, you can add applications and administer applications' feedback.
Procedure
As a feedback analyst, in the Analysis UI of the Feedback Service you can explore the feedback collected from end
users by viewing detailed rating or text analysis or exporting the feedback text as raw data.
The rating analysis presents information about rating questions and how feedback rating is distributed according
to time and distribution criteria.
You can choose a specific time period for which to view analyzed feedback data and to export raw data. By
default, the time period selected is the last 7 days.
You can export raw feedback data so that you can perform analysis that is more specific or tailored to your needs.
You download raw feedback data in CSV format encoded in UTF-8.
Note
When you open the exported file, if there are characters that do not appear correctly, reopen the file as a UTF-8
encoded one.
Related Information
As a feedback analyst, you can explore the feedback collected from end users by viewing the detailed text
analysis. Text analysis classifies user feedback by:
On the Overview screen you can see a summary of all free text feedback questions. Each question tile provides the
following information:
The sentiment summary provides a useful overview of negative, positive, and neutral sentiments in user feedback.
Feedback from a single user can contribute a small or large share of the overall sentiment count for a specific
question. In other words, sentiment is calculated not per user feedback but per sentiment element (word) in
the feedback text.
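As a toy illustration of this counting model (the real text analysis is done by SAP HANA, not by client code):

```javascript
// Totals sentiment elements rather than feedback entries, so one long
// response with many sentiment-bearing words can dominate the summary.
function countSentiments(elements) {
  var totals = { positive: 0, negative: 0, neutral: 0 };
  elements.forEach(function (e) { totals[e.sentiment]++; });
  return totals;
}

// One feedback with two negative elements adds 2 to the negative count,
// while a short positive feedback adds only 1.
```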
Once you click on a question tile, you can see detailed information about the specific feedback question:
For example, you can filter your responses list for a specific question to show only feedback of type Problem that
has Negative and Neutral sentiment. The returned list is ordered by date (most recent on top).
Note
No matter what filter is applied, the list always displays responses (if any) that are not classified by type or
sentiment.
You can further drill down to view details about a specific feedback response and examine the actual feedback text
analysis. You can view the whole text of the feedback response with all detected text analysis "hits". In addition,
you can choose which types of "hits" to highlight within the text. For example, you can once again choose to
highlight just the Problem that has Negative and Neutral text analysis. Alternatively, you can choose to remove all
highlights.
As a feedback analyst, you can examine the feedback collected from users by viewing the detailed rating analysis.
Users reply to each rating question by choosing a number on a scale of 1 to 5, where 1 is the lowest rating and 5
is the highest.
On the Overview screen you can see a summary of all rating questions. Each question tile provides the following
information:
Once you click on a question tile, you can see detailed information about the specific feedback question and for
the time period you specified:
Depending on the time period you have specified, the graph and table views show the following data (just in
different formats):
● Feedback distribution by rating - a graph or table showing what percentage of the overall feedback responses
received a certain rating. That is, how feedback is distributed in terms of a specific rating.
● Feedback distribution by time period - a graph or table in which you can choose to see feedback distribution
across various time frame granularities, for example a day or a year. The data displayed is the average rating
for the specified time granularity and only applies to the time period initially selected.
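The "distribution by rating" view can be reproduced from raw exported data with a short sketch; ratingDistribution is an illustrative name.

```javascript
// Computes the percentage of responses per rating on the documented
// 1-to-5 scale.
function ratingDistribution(ratings) {
  var counts = { 1: 0, 2: 0, 3: 0, 4: 0, 5: 0 };
  ratings.forEach(function (r) { counts[r]++; });
  var percent = {};
  for (var star = 1; star <= 5; star++) {
    percent[star] = ratings.length ? (100 * counts[star]) / ratings.length : 0;
  }
  return percent;
}

// ratingDistribution([5, 5, 3, 1]) → {1: 25, 2: 0, 3: 25, 4: 0, 5: 50}
```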
Overview
The SAP Cloud Platform, gamification service allows the rapid introduction of gamification concepts into
applications. The service includes an online development and operations environment (gamification workbench)
for easy implementation and analysis of gamification concepts. The underlying gamification rule management
provides support for sophisticated gamification concepts, covering time constraints, complex nested missions
and collaborative games. The built-in analytics module makes it possible to perform advanced analysis of the
player's behavior in order to facilitate continuous improvement of game concepts.
● Web-based IDE (gamification workbench) for modeling game mechanics and rules
● Gamification engine for real-time processing of sophisticated gamification concepts involving time
constraints and cooperation
● Built-in runtime game analytics for continuous improvement of game designs
● Web API for easy integration
● Simple SAP UI5 integration based on widgets
● Single-Sign-On (SSO) support based on Identity Authentication
● Enterprise-level performance and scalability
Related Information
Follow the pages below to learn how to enable the gamification service in your account, and how to configure and
use the sample application HelpDesk.
When enabling the service, configuration steps 2, 3, and 4 are executed automatically, as follows:
● All gamification roles are assigned to the user that enabled the service
● The required destinations are created on the account level. The destination gsdest requires credentials
(user/password). For the trial version, it is possible to use your SCN user for this. However, it is safer to
create a dedicated technical user according to the following procedure.
Note
If you use your SCN user for configuring the technical destination gsdest make sure that you change the
destination configuration after changing the SCN user password in SAP ID Service. Otherwise, your user will be
locked when using the HelpDesk app.
Prerequisites
● You have access to a SAP Cloud Platform account for personal development, or to a Trial account.
● You are an account member with the role Administrator.
● You have an SCN user.
Procedure
Prerequisites
You have logged on to the SAP Cloud Platform cockpit with your SCN user and password.
Procedure
Related Information
Prerequisites
You have logged into the SAP Cloud Platform cockpit with your SCN user and password.
Context
You need to configure a destination to allow the communication between your application (in this case, a sample
app) and your subscription to the gamification service. For the sample application, two destinations are
necessary:
Procedure
You can find the application URL of your service instance by navigating to the gamification workbench:
Account > Services > Gamification Service > Go to Service.
6. Select the proxy type: Internet.
7. Select the authentication: Basic Authentication
8. Enter the user ID. Recommendation: use a separate technical user (see the following procedure). Alternatively,
you can use your SCN user. In this case, make sure to update the destination whenever the password changes.
Otherwise, the SAP ID Service will lock your user when using the HelpDesk app.
9. Enter the SCN password.
10. Choose Save.
Related Information
Procedure
You can find the application URL of your service instance by navigating to the gamification workbench:
Account > Services > Gamification Service > Go to Service.
Note
It may take up to five minutes until the destinations are available for the service.
Related Information
Prerequisites
● You have logged into the SAP Cloud Platform cockpit with your SCN user and password.
● You are an account member with role Administrator.
To support application-to-application SSO as part of destination gswidgetdest, you have to configure your
account to allow principal propagation.
Procedure
1. Open the cockpit and choose the Trust sub-tab in the Security tab.
2. Choose the Local Service Provider sub-tab.
3. Choose Edit.
4. Change the Principal Propagation value to Enabled.
Related Information
Prerequisites
● You have logged into the SAP Cloud Platform cockpit with your SCN user and password.
● You have the role TenantOperator.
After a while, you will see a notification: “Gamification concept successfully created.”
5. Switch to the HelpDesk application by using the dropdown box in the upper right corner.
6. Go to the Summary tab to check if all game mechanics are available.
Prerequisites
Procedure
The gamification development cycle describes the processes involved in the introduction of gamification in
existing or new applications.
Creation of the gamification concept is a purely conceptual task that is typically executed by gamification
designers. The task is executed during the design phase and covers the specification of a meaningful game/
gamification design.
Implementation of the gamification concept covers the mapping of the gamification concept to the game
mechanics offered by the gamification service. This task is normally performed by gamification designers and/or
IT experts.
Integration with the application is a development task that covers the technical integration of the target
application with the APIs of the gamification service. This is normally performed by application developers, since
technical knowledge of the application is required (such as implementation points for listening for events or the
visual representation of achievements).
A gamification concept is normally developed by gamification designers and domain experts. The gamification
concept describes the (game) mechanics that will serve to encourage users (players) to perform certain tasks. An
example of this is to encourage call center employees to process tickets or motivate them to process
cumbersome tickets first.
Note
Creation of the gamification concept is not a service that is covered or supported by the gamification service.
A simple gamification concept covers elements such as points and badges. For example, users are awarded
experience points for certain actions, and badges as a visual representation. The gamification concept describes
how these elements are used to intrinsically motivate the users. It therefore includes descriptions of the actions
(within the application) that allow users to attain the various achievements.
Examples are missions to foster collaboration or time constraints that encourage users to work faster.
Related Information
The implementation of the gamification concepts is required in order to map the gamification concept to the
elements used in the gamification service. The gamification workbench is used to maintain the gamification
elements, such as points, badges, levels or rules.
The gamification concept can be modified at runtime. Be aware, however, that gamification is about full
transparency to the users and is used primarily to encourage them. We therefore advise against modifying the
gamification concept significantly without informing the users, since this might catch them by surprise and could
demotivate them.
Related Information
Integration with the application covers the technical integration of the target application with the APIs of the
gamification service. Firstly, integration is required to send events that are of interest to the gamification service,
for example to send the event that a user in a call center has successfully processed a ticket. Secondly,
integration is necessary to notify the user about his/her achievements, to send notifications to the user for earned
points, or to display the user’s profile.
The gamification service is mainly designed to support the integration of cloud applications running on SAP
Cloud Platform. Integration of other applications is technically possible, but restricted for security reasons.
Related Information
Gamification is a continuous process. It is crucial to continuously monitor the influence of a gamification concept
and react to the users' behavior. For example, you want to know if your gamification concept motivates the target
group or if users lose interest.
The gamification service offers basic analytics: for example, the assignment of points or badges to users over
time. This allows you to analyze peaks and troughs in user achievements.
The introduction of gamification often requires the acquisition of sensitive information. For example, it might be
necessary to track user behavior within an application in order to gamify onboarding scenarios.
The gamification service makes it possible to anonymize user data. The gamification service also offers secure
communication via the various APIs.
However, it is the responsibility of the host application to ensure data privacy. As a developer of the host
application, you are responsible for ensuring that only the data that is necessary is sent to the gamification service.
Related Information
The gamification workbench is the central point for managing all gamification content associated with your
account and for accessing key information about your gamification usage.
Summary Dashboard
The figure below shows an example of the Summary dashboard in the workbench and is followed by an
explanation:
The entry page Summary of the gamification workbench provides an overview of the gamification concept for the
selected app, the overall player base and overall landscape.
Logon
You can log on with your account user via SSO (single sign-on).
The gamification workbench can be accessed using the Subscription tab in the SAP Cloud Platform cockpit. The
following link will be used: https://<SUBSCRIPTION_URL>/gamification
Navigation
● Summary
● Game Design
Note
You must have specific roles in order to access the gamification workbench, see Roles [page 697].
Table 285:
Game Design: Allows you to read and configure game mechanics (for example, managing points, badges, levels,
missions, and rules) for multiple applications
Terminal: Allows you to test the gamification concept using the API
1.4.6.3.1 Roles
Different roles can be assigned to users, to enable them to explicitly access the gamification workbench.
Prerequisites
Procedure
Context
The gamification service offers the gamification workbench, an API for integration, and a demo app. Access to
the user interfaces and the API is protected using SAP Cloud Platform roles.
Note
Roles must be explicitly assigned to a SAP Cloud Platform user.
Note
The API can be used for the integration of host applications. For productive use, a technical user (SAP Cloud
Platform user) should be created for communication between the host application and the gamification
service. (The use of a personal account or user is only recommended for testing or demo purposes.)
1.4.6.4.1 Roles
The following roles can be assigned to access the gamification service gamification workbench, API or demo app
and must be explicitly assigned to a SAP Cloud Platform user:
Table 286:

AppStandard
Type: Technical
Access Level: API (methods are annotated with the required role); Terminal (send events for testing purposes)
Description: Write only, using rules (reading achievements is possible, but should be avoided); send
player-related events; read player achievements and available achievements

AppAdmin
Type: Technical
Access Level: API (methods are annotated with the required role)
Description: Read and delete a player record for a single app or for the whole tenant; create and delete a user
or a team

Player (automatically assigned)
Type: Technical (implicit role)
Access Level: API (methods are annotated with the required role)
Description: Send player-related events (only works for the user that is authenticated using the identity
provider configured for your account)
Note
The Player role is not a standard SAP Cloud Platform role. It is automatically assigned to a user (player) that is
created using the gamification service and cannot be explicitly assigned to a SAP Cloud Platform user.
Prerequisites
You have logged on to the SAP Cloud Platform cockpit with your account user.
Procedure
Related Information
The SAP Cloud Platform, gamification service meets the security and data privacy standards of the SAP Cloud
Platform. In general, the gamification service is not responsible for any content such as game mechanics or player
achievements. It is the responsibility of the host application to meet any local data privacy standards. Therefore,
you need to make sure that the personal information of players is protected according to the local regulations. In
some cases where gamification is applied to employee scenarios, works council approval for the gamified host
application might be necessary.
Prerequisites
You have the role TenantOperator, are logged into the gamification workbench, and have opened the Apps tab
in the Operations section.
The gamification service introduces the concept of apps. An app represents a self-contained, isolated context for
defining and executing game mechanics such as points, levels, and rules.
All data and metadata associated with an app are stored in an isolated way. In addition, an isolated rule
engine instance is created and started for each app.
Note
Players are stored independently from apps and can therefore take part in multiple apps.
Prerequisites
You have the roles TenantOperator and GamificationDesigner, are logged into the gamification workbench,
and have opened the Apps tab in the Operations section.
Context
An app represents a self-contained, isolated context for defining and executing game mechanics.
Creating Apps
Procedure
Updating Apps
Procedure
Deleting Apps
Procedure
Prerequisites
You have the role GamificationDesigner or TenantOperator or both and are logged into the gamification
workbench.
Context
By switching the app, the gamification workbench only shows game mechanics and player achievements
associated with the selected app.
Procedure
1. Select an app in the app selection combo box located in the upper right corner of the gamification workbench.
2. Optional: Review whether the app has been changed successfully, for example by comparing the summary
page (tab Summary).
Prerequisites
You have the role TenantOperator, are logged into the gamification workbench and have opened the Operations
tab, and navigated to the Data Management section.
The gamification service allows exporting all available apps including their content. You can choose between a full
tenant export including all player data and an export of game mechanics only. The latter can be imported again.
Procedure
1. Select the Export mode in the combo box labeled Export in the form area Import / Export.
○ Full Export: export all game mechanics and player data.
○ Game Mechanics: export game mechanics only.
2. Press Download to start the export. Your browser should show the file save dialog.
3. Store the provided ZIP file on your disk.
Prerequisites
● You have the role TenantOperator, are logged into the gamification workbench and have opened the
Operations tab, and navigated to the Data Management section.
● You have a gamification service export file.
Note
See section Exporting Apps [page 704] for details.
Context
The gamification service allows importing game mechanics based on existing gamification service export files (ZIP
format). Section Exporting Apps explains how to do the export.
Procedure
1. Press Browse in the form area Import / Export to select the import file.
2. Press Upload to start the import based on the selected file.
Note
See section Configuring Rules [page 720] for details.
Prerequisites
You have the role TenantOperator, are logged into the gamification workbench, and have opened the Operations
tab, and navigated to the Data Management section.
Context
The gamification service is shipped with selected demo content comprising game mechanics as well as demo
players. The demo content is created within the context of a new app.
Procedure
Note
Appropriate content (points, levels, badges, and rules) is created for the app automatically.
Prerequisites
You have the role TenantOperator, are logged into the gamification workbench and have opened the Operations
tab, and navigated to the Data Management section.
Context
The gamification service is shipped with selected demo content comprising game mechanics as well as demo
players. The demo content is created within the context of a new app. The app can be deleted manually, but this
will not delete the generated demo players. To delete the full demo content, this explicit action must be triggered.
Procedure
Prerequisites
You have the GamificationDesigner role, are logged on to the gamification workbench and have opened the
Game Design tab.
Context
The gamification concept describes the metrics, achievements and rules that are applied to an application. The
following checklist describes the tasks required to implement your gamification concept in your subscription of
the gamification service.
1. Configuring Achievements:
○ Configuring Points (Point Categories) [page 708]
○ Configuring Levels [page 710]
General Procedure
For each game mechanics entity there is a tab with a master and details view.
● Master View
○ Shows the list of available entities.
○ Add button for adding a new entity.
○ Edit All button for switching to batch deletion mode.
● Details View
○ Shows entity attributes and images.
○ Edit button for editing entity attributes.
○ Duplicate button for cloning the complete entity including attribute values.
○ Delete button for deleting the given entity.
Each entity has at least the attributes name and display name. The name serves as the unique identifier and is
immutable.
Prerequisites
You have logged on to the gamification workbench with the role GamificationDesigner and you have opened
the Points tab.
Points are the fundamental element of a gamification design. For example, points can indicate progress in
various dimensions. Points can be flagged as "Hidden from Player" for security or privacy reasons. Points that are
flagged as hidden are not visible to players; instead, they can be utilized in rules. Furthermore, points can have
various subtypes. The table lists the available point types.
Type Description
ADVANCING Advancing points are points that can never decrease. They are
used to reflect progress.
Points can be configured in the Points subtab of the Game Design tab.
Prerequisites
You have the GamificationDesigner role, are logged on to the gamification workbench and have opened the
Points tab.
Procedure
Prerequisites
You have the GamificationDesigner role, are logged on to the gamification workbench and have opened the
Points tab.
Procedure
Prerequisites
You have the GamificationDesigner role, are logged on to the gamification workbench and have opened the
Points tab.
Procedure
Prerequisites
You have logged on to the gamification workbench with the role GamificationDesigner and you have opened
the Levels tab.
Caution
Only levels that are based on the default point category are exposed to the default user profile.
A level describes the status of a user once a specific goal is reached. The gamification service allows you to define
levels based on a defined point category. The threshold defines the value of the selected point type to reach the
level.
Prerequisites
You have logged on to the gamification workbench with the role GamificationDesigner and you have opened
the Levels tab.
Procedure
Prerequisites
You have logged on to the gamification workbench with the role GamificationDesigner and you have opened
the Levels tab.
Context
Prerequisites
You have logged on to the gamification workbench with the role GamificationDesigner and you have opened
the Levels tab.
Procedure
Prerequisites
You have logged onto the gamification workbench with the role GamificationDesigner and you have opened
the Badges tab.
Context
A badge is a graphical representation of an achievement. Hidden badges are not visible to the user before the
assignment and can be used as surprise achievements.
Prerequisites
You have logged onto the gamification workbench with the role GamificationDesigner and you have opened
the Badges tab.
Procedure
Prerequisites
You have logged onto the gamification workbench with the role GamificationDesigner and you have opened
the Badges tab.
Procedure
Prerequisites
You have logged onto the gamification workbench with the role GamificationDesigner and you have opened
the Badges tab.
Procedure
Prerequisites
You have logged on to the gamification workbench with the role GamificationDesigner and you have opened
the Missions tab.
Context
A mission defines what has to be achieved to gain a measurable outcome. Besides basic standalone missions, the
gamification service allows modelling complex mission structures using mission conditions and consequences.
Note
Mission conditions and consequences are descriptive in nature only. The actual condition checking and the
execution of consequences have to be implemented by corresponding rules. These rules are not yet generated
automatically.
● Point Conditions: A number of points, each with a respective threshold. Each point can be considered as a
progress indicator: As soon as the threshold is reached, the condition is met.
● A list of missions that have to be completed. Within the API such missions are referred to as sub missions.
The consequences part is limited to a list of follow-up missions, which should be assigned or unlocked after the
current mission has been completed. Within the API such follow-up missions are referred to as nextMissions.
Example for a rule that checks a point condition in its WHEN part and assigns a follow-up mission in its THEN part:
● WHEN
$p : Player($playerid : id)
eval(queryAPIv1.hasPlayerMission($playerid, 'Troubleshooting', false) == true)
eval(queryAPIv1.getScoreForPlayer($playerid, 'Critical Tickets', null,
null).getAmount() >= 5)
● THEN
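The THEN part of this rule is not shown above. A minimal sketch, using the Update API calls that appear in the Consequences section of this guide (the follow-up mission name 'AdvancedTroubleshooting' is a hypothetical example):

```
updateAPIv1.completeMission($playerid, 'Troubleshooting');
// assign a hypothetical follow-up mission
updateAPIv1.addMissionToPlayer($playerid, 'AdvancedTroubleshooting');
// re-insert the updated player object into the working memory
update(engine.getPlayerById($playerid));
```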
Prerequisites
You have logged on to the gamification workbench with the role GamificationDesigner and you have opened
the Missions tab.
Results
Note
Adding a sub mission or follow-up mission only creates relations in the database. The corresponding rules for
checking conditions, assigning follow-up missions, or both are not generated yet; they have to be created
manually. However, without storing these relationships and making them available through the achievement
query API, it would not be possible to create such rules at all.
Prerequisites
You have logged on to the gamification workbench with the role GamificationDesigner and you have opened
the Missions tab.
Procedure
Prerequisites
You have logged on to the gamification workbench with the role GamificationDesigner and you have opened
the Missions tab.
Procedure
● System Missions: the mission life cycle is fully controlled by the service using API calls within rules.
● User-accepted Missions: the player actively decides whether to accept or reject missions, while the remaining
mission life cycle (unlocking or completing a mission) is controlled by the service. In both cases the API calls
have to be executed within rules to ensure data consistency between the engine and the backend.
All state transitions are triggered by calling the respective API methods within rules, while the list of missions in a
certain state can be retrieved either by calling the API directly or within a rule.
Sample rule for assigning a system mission as part of the user init rule:
● WHEN
● THEN
● WHEN
$p : Player($playerid : id)
eval(queryAPIv1.hasPlayerMission($playerid, 'Troubleshooting', false) == true)
eval(queryAPIv1.getScoreForPlayer($playerid, 'Critical Tickets', null,
null).getAmount() >= 5)
● THEN
Note
Invoking the manual mission methods via the user endpoint currently does not trigger any rules. If there is a
rule that has to trigger when missions become active for players, a separate event would be required to trigger
that rule.
Prerequisites
You have logged on to the gamification workbench with the role GamificationDesigner and you have opened
the Rules tab.
Context
The rules are a fundamental element of the game mechanics. They describe the consequences of actions, the
corresponding constraints and the goals that can be achieved. The rules allow you to define complex conditions
and consequences based on common complex event processing (CEP) operators.
Related Information
Rules are the core elements of the gamification design. Generally, they follow the event condition action (ECA)
structure used for active rules in event-driven architectures. Each rule is structured in two parts:
● Left hand side (LHS): rule conditions or trigger (events conditions and/or player conditions)
● Right hand side (RHS): rule consequences (updates from the player and/or event generation)
The rule conditions (LHS) are maintained in the Trigger (“when”) area. Examples are:
The rule consequences (RHS) are maintained in the Consequences (“then”) area. Examples are:
● Create new events - new event with the type “solvedProblemDelayed” that is triggered with a delay of 1
minute:
Note
The gamification service follows the “rule-first” approach. This means that any achievements of a player are
always updated using the rule engine. A modification of player achievements cannot be done using an API
(without any rule execution).
The SAP Cloud Platform, gamification service allows you to write rules to achieve the best flexibility for the
targeted game concept. Additionally, you can write rules in one of the multiple graphical (form-based) editors in
the gamification workbench.
The declaration of the trigger (“when”) part is based on the Drools Rules Language (DRL).
The trigger part defines the constraints that must be fulfilled in order to execute the consequences ("then" part).
Variables can be defined and used both in the "when" and in the "then" part. This is generally recommended in
case you want to use the same object more than once. Multiple constraints can be described in one trigger part.
The constraints are typically described using the logical operators (within eval statements) and evaluation of the
event object. The event object must be defined with a type and can include multiple parameters. Additionally, DRL
allows you to define temporal constraints using common complex event processing (CEP) operators.
Related Information
http://docs.jboss.org/drools/release/5.6.0.Final/drools-expert-docs/html/ch05.html
The gamification service rule engine allows the use of two event streams:
● Managed event stream - eventstream: All events and user actions that are sent using the API will
automatically be sent using the managed event stream. “Managed” means that all events are retracted
automatically. Point-in-time events (duration=0) are retracted immediately after execution of the
corresponding rules while long-living events (duration >0) are retracted 1 second after they have expired. If
this automated event retraction is not suitable for your use case, you can use the unmanaged stream instead.
● Unmanaged event stream - unmanagedstream: For this stream you must take care of event retraction
yourself, which offers more flexibility with regards to rule design. For stability reasons, events sent to this
stream are retracted automatically after 28 days.
You must explicitly declare in the trigger part which event stream will be used. Furthermore, you must explicitly
declare in the consequences part which event stream is used in case you create new events. Using the managed
stream is strongly recommended. Only use the unmanaged stream if the auto-retraction does not work with your
rule design.
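When using the unmanaged stream, the event can be bound to a variable in the trigger part and retracted explicitly in the consequences part; a minimal sketch (the event type 'solvedProblem' is taken from a later example):

```
// WHEN: bind the event from the unmanaged stream
$e : EventObject(type=='solvedProblem', $playerid:playerid) from entry-point unmanagedstream
// THEN: retract the event explicitly once it has been processed
retract($e);
```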
1.4.6.6.5.2.1.2 Variables
Context
Variables can be defined in the trigger part and can afterwards be used in both the trigger and the consequences
part. Variables are recommended in case one object is used more than once, for example when a player object
needs to be updated multiple times.
Procedure
A variable is declared by any string with a leading $ sign, for example $player or $var.
Declaration of a variable:
$<VARIABLE> : <EXPRESSION>
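A short sketch of a variable declaration and its reuse, combining the pattern above with the EventObject and Player patterns used elsewhere in this guide (the event type 'solvedProblem' is taken from a later example):

```
// bind the event object and the player's ID to variables in the trigger part
$e : EventObject(type=='solvedProblem', $playerid:playerid) from entry-point eventstream
// bind the player object itself, so it can be reused in the consequences part
$p : Player(id == $playerid)
```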
Context
An event type must be set for each incoming event. The event type needs to be checked within the trigger part.
The player's ID is sent with each event; it should be stored in a variable for further use.
Additionally, multiple parameters can be passed with an event and evaluated. The parameters can be strings or
numeric values and can be evaluated with logical operators such as equal (==), larger than (>), and
smaller than (<).
Procedure
Declaration of an event object with a given event type and declaration of a variable with a given player ID:
Note
It is recommended to always assign the player ID (playerid) within the event object of a variable since the
player ID is necessary to get the according player object for updating achievements in the consequence part.
Declaration of an event with a given event type, declaration of a variable with a given player ID and evaluation of a
property:
EventObject(type=='<EVENT_TYPE>', data['<PROPERTY>'] <OPERATOR> <VALUE>,
$playerid:playerid) from entry-point eventstream
Note
It is recommended to always evaluate event parameters within the event object instead of defining additional
parameters and using additional eval statements.
EventObject(type=='solvedProblem', data['relevance']=='critical',
$playerid:playerid) from entry-point eventstream
● Declaration of event with the given type “buttonPressed” and a property with the name “color” and the value
“red”.
● Declaration of event with the given type “temperatureIncreased” and an integer property with the name
“temperatureValue” where the numeric value is larger than 30.
EventObject(type=='temperatureIncreased',
Integer.parseInt(data['temperatureValue'])>30, $playerid:playerid) from entry-
point eventstream
● Declaration of two events of type “ticketEventA” and “ticketEventB”. Both events must occur and they have
to belong to different players.
EventObject(type=='ticketEventA', $playerid:playerid)
EventObject(type=='ticketEventB', playerid!=$playerid)
● Declaration of two events of type “ticketEventA” and “ticketEventB” using the explicit “and” operator. Both
events must occur and they have to belong to different players.
● Declaration of two events of type “ticketEventA” and “ticketEventB” using the “or” operator that describes
that “eventA” or “eventB” must occur and the "player IDs" must not be the same.
(EventObject(type=='ticketEventA', $playerid:playerid) ||
EventObject(type=='ticketEventB', playerid!=$playerid))
● Declaration of two events of type “ticketEvent” where the player IDs are different and the ticket ID is the
same, and another event of the type “connectedEvent” that must not be true.
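Some of the declarations above are listed without their code. Based on the patterns in the other examples, they could look like the following sketches (the property name 'ticketid' in the last case is an assumption; the not operator is standard Drools syntax):

```
// type "buttonPressed" with a property "color" equal to "red"
EventObject(type=='buttonPressed', data['color']=='red',
  $playerid:playerid) from entry-point eventstream

// two events that must both occur, using the explicit "and" operator
(EventObject(type=='ticketEventA', $playerid:playerid) &&
  EventObject(type=='ticketEventB', playerid!=$playerid))

// two "ticketEvent" events with different player IDs and the same ticket ID,
// plus a "connectedEvent" that must not exist
EventObject(type=='ticketEvent', $playerid:playerid, $ticketid:data['ticketid'])
EventObject(type=='ticketEvent', playerid!=$playerid, data['ticketid']==$ticketid)
not(EventObject(type=='connectedEvent', playerid==$playerid))
```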
Context
Eval statements are used to define constraints with data that is not available in the working memory, such as
status of player achievements. Multiple constraints can be defined in one rule with the combination of multiple
logical operators.
Note
It is recommended to avoid using an eval statement since it is an expensive operation. Use it as late as possible
within your trigger part.
Procedure
eval(<EXPRESSION><OPERATOR><VALUE>)
● Expression: It is recommended to only use methods of the Query API in eval conditions. The use of the Query
API allows you to evaluate available player details and achievements using Java statements.
● Operator: All logical operators supported by Java are supported.
● Declaration of an eval statement where the mission “Troubleshooting” is assigned to the player.
● Declaration of an eval statement where the “Experience Points” of the player are larger or equal to 10.
● Declaration of an eval statement where the player does not have the badge “Sporting Ace” assigned.
Note
The use of an invalid expression may lead to an error during rule execution. Make sure that referenced point
categories or missions exist and that the spelling is correct.
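The three declarations above could be sketched as follows; hasPlayerMission and getScoreForPlayer appear in the earlier rule example, while hasPlayerBadge is an assumed method name by analogy and is not confirmed by this guide:

```
// the mission "Troubleshooting" is assigned to the player
eval(queryAPIv1.hasPlayerMission($playerid, 'Troubleshooting', false) == true)

// the "Experience Points" of the player are larger than or equal to 10
eval(queryAPIv1.getScoreForPlayer($playerid, 'Experience Points', null, null).getAmount() >= 10)

// the player does not have the badge "Sporting Ace" (assumed method name)
eval(queryAPIv1.hasPlayerBadge($playerid, 'Sporting Ace') == false)
```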
Creating generic facts (a Map object with an optional key) and storing them in the working memory is supported.
This allows you to store temporary results and create complex constraints (for example, counting the number of
events of a specific type). Generic facts can be evaluated in all rules if they exist.
The data structure of a generic fact is Map<String, Object> data. Additionally, you can set a key for the generic
fact to identify it. A generic fact must be initialized in the consequences part.
GenericFact(key=='<KEY>')
$<FACT_VARIABLE>: GenericFact(key=='<KEY>')
Examples for querying generic facts and assignment to a variable that can be used for evaluation:
● $loginCounter: GenericFact(key=='LoginCounter')
● $daysOfWeek: GenericFact(key=='DaysOfWeek')
The declaration of the consequences (“then”) part supports writing code with the Drools Rules Language (DRL) in
version 5.6.0 and Java code.
Note
The formatting in the consequences part must be in the Java style. The DRL can be used in combination with
Java code.
The consequences part defines what will be executed once the trigger part is fulfilled. It allows you to update the
player achievements or to create new events. Multiple consequences can be defined within one consequences
part.
Related Information
http://docs.jboss.org/drools/release/5.6.0.Final/drools-expert-docs/html/ch05.html
The Update API can be used to update any player achievements. Multiple updates can be executed within one
consequences part.
updateAPIv1.<QUERY_API_METHOD>(<PLAYER_ID>, <PARAMS>);
update(engine.getPlayerById(<PLAYER_ID>));
updateAPIv1.addMissionToPlayer($playerid, 'Troubleshooting');
update(engine.getPlayerById($playerid));
updateAPIv1.completeMission($playerid, 'Troubleshooting');
update(engine.getPlayerById($playerid));
● Increasing the “Experience Points” of the player by one, completing the mission “Troubleshooting”, and adding
the badge “Champion Badge”.
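A sketch of this combined consequence; completeMission is shown above, while addPointsToPlayer and addBadgeToPlayer are assumed method names of the Update API:

```
// increase the "Experience Points" of the player by one (assumed method name)
updateAPIv1.addPointsToPlayer($playerid, 'Experience Points', 1);
updateAPIv1.completeMission($playerid, 'Troubleshooting');
// add the badge "Champion Badge" (assumed method name)
updateAPIv1.addBadgeToPlayer($playerid, 'Champion Badge');
update(engine.getPlayerById($playerid));
```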
New events can be created in the consequences part. They can be used for more complex game mechanics
(cascading rules), changing the state of facts or even for temporal triggers.
Generic facts can be used as global variables and are stored in the working memory. The creation of a generic fact
instance has to be done in the consequences part. In the trigger part you can query for certain generic fact
instances and (if required) bind them to local variables. This works just like querying the EventObject.
● Declaration of a generic fact with the key “factB” with a property “relevance” and the corresponding value “critical”.
$<FACT_VARIABLE>.getData();
$<FACT_VARIABLE>.setData(<VALUE>);
update($<FACT_VARIABLE>);
$loginCounter.setData("59");
update($loginCounter);
● Assigning the value of the variable “lCounter” to the generic fact “loginCounter”.
$loginCounter.setData(lCounter);
update($loginCounter);
retract($<FACT_VARIABLE>);
retract($loginCounter);
Using Java code in the consequences part is allowed, so very complex rules can be created. You can work with all
Java control flow statements and a selected set of Java objects (for example, collections), create generic facts, or
update the player's achievements.
Prerequisites
You have logged on to the gamification workbench with the role GamificationDesigner and you have opened
the Rules tab.
Procedure
Caution
A newly created rule is not automatically deployed. The deployment is initiated once you apply the
changes. The rule must be activated to be deployed.
Prerequisites
You have logged on to the gamification workbench with the role GamificationDesigner and you have opened
the Rules tab.
Drools allows the specification of timer or scheduling constraints for rules using Java interval expressions or cron
expressions. If the WHEN part of such a rule is satisfied, this results in a scheduled activation, which is put on the
Drools agenda. Unlike normal activations, these scheduled activations are not executed as part of a fireAllRules
call. Instead, a scheduler executes these activations according to the specified timer or scheduling expression.
Note
As soon as the rule condition (WHEN-part) is not satisfied anymore, all scheduled activations are canceled. If
for instance a rule is triggered based on a certain event type, the scheduled activations are canceled as soon as
the corresponding event that activated the rule is retracted.
Procedure
○ Cron Job: Specify a schedule based on a valid cron expression. A simple wizard appears that helps to
create simple expressions. For more advanced expressions: http://www.quartz-scheduler.org/
documentation/quartz-1.x/tutorials/crontrigger .
○ Interval: Use a Java interval expression. The first parameter specifies the initial delay; the second
parameter specifies the interval. For example: "0 3m", "10h 10s", "3h". For more information, refer to the
Drools language documentation.
○ Expression: Provide a valid Drools expression - either a delay in ms or a variable from the drools when
statement. The variable has to contain the delay in ms.
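In Drools 5.x, these options map to the timer attribute in the rule header; a sketch of the three variants:

```
// cron job: fire every day at 12:00
timer (cron: 0 0 12 * * ?)

// interval: initial delay of 10 hours, then fire every 10 seconds
timer (int: 10h 10s)

// expression: delay in ms taken from a variable bound in the WHEN part
timer (expr: $delay)
```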
Prerequisites
You have logged on to the gamification workbench with the role GamificationDesigner and you have opened
the Rules tab. A rule already exists and is not enabled.
Procedure
1. Check the Activate on Engine Update checkbox of the rule you want to enable.
2. Open the Rule Engine Manager by pressing Rule Engine.
Note
A rule that contains errors will not be deployed. Errors can be viewed by pressing the Show Issues button in
the Rule Engine Manager.
Prerequisites
You have logged on to the gamification workbench with the role GamificationDesigner and you have opened
the Rules tab. A rule already exists and is enabled.
Procedure
1. Uncheck the Activate on Engine Update checkbox of the rule you want to disable.
2. Open the Rule Engine Manager by pressing Rule Engine.
3. Commit your changes by pressing the Apply Changes button in the Rule Engine Manager. The rule is
deployed immediately once validation is successful.
Prerequisites
You have logged on to the gamification workbench with the role GamificationDesigner and you have opened
the Rules tab.
Procedure
1. Click on the name of the rule in the entity list to open the rule editor.
2. Change the rule code.
3. Press Save.
4. Optional: Create or modify additional rules.
5. Close the rule editor and apply changes to deploy the rules.
Prerequisites
You have logged on to the gamification workbench with the role GamificationDesigner and you have opened
the Rules tab.
Procedure
Caution
Only rules that are disabled can be deleted.
Prerequisites
You have logged on to the gamification workbench with the role GamificationDesigner and you have opened
the Rule Engine tab.
Context
The gamification workbench detects issues with rules both at design time and at runtime. Any detected issues
are displayed in the Rule Engine tab. Syntax errors are checked at design time, after the user applies the
changes.
1. Reported rule warnings are displayed in a table, sorted by the rule which caused them.
2. Optional: Press the refresh button attached to the rule warnings table to refresh and check for new warnings.
Prerequisites
You have logged on to the gamification workbench with the role TenantOperator or AppAdmin, and you have
opened the Rule Engine tab of the related app.
Context
The gamification service creates a rule engine instance for each app. Over time, the state of each rule engine
instance changes based on its usage. A recovery mechanism for different rule engine states has been introduced
to allow a clean recovery in case of errors, rule set changes, or system migrations. This mechanism allows you to
create and restore snapshots of the current rule engine instance session and its deployed rule set. Snapshots are
stored in the database.
Generation of snapshots
Using “apply changes” (see Updating Rules [page 732] for details), the current rule set stored in the database is
deployed on the currently running rule engine instance. Technically, the current session, which includes all facts
and events, is upgraded to the new rule set. To ensure compatibility of new rules with the existing session, rules
are evaluated one by one. Compatible pairs of session and rule set are stored as snapshots.
Additionally, when events are received via the “handleEvent” method, the session changes as well and requires the
same recovery mechanism. The gamification service generates snapshots during event execution at dynamic
intervals.
The gamification service manages rules and corresponding snapshots in the following way:
● After each successful rule deployment (Apply Changes), the corresponding rule set as well as the session are
both tagged with a new version. The service stores at most the 10 latest versions.
● For the latest (currently active) version as well as the previous version, the gamification service stores the 10
latest snapshots in slots numbered 1 through 10.
● Using the API and the Workbench, you can retrieve all available snapshots as well as the corresponding rule
set. Additionally, the rule engine can be restored to any of these snapshots.
Procedure
1. The Rule Engine section lists a table with all available rule engine snapshots and their details.
Note
Rule engine snapshots are constantly created while events are being sent, and older snapshots are removed
by the system during this process. It is recommended to stop any applications from sending events to the
rule engine while restoring snapshots.
Related Information
Notifications are messages that inform users about certain state changes, for example earned achievements, new
missions, new teams. They are considered "see and forget" information and won't stay long in the system.
Context
On the one hand, notifications are created automatically when certain API methods are called. On the other hand,
you can also create and assign custom notifications by using the methods addCustomNotificationToPlayer and
addCustomNotificationToTeamMembers.
Notifications are delivered to players or teams by implementing a polling-based approach using the API methods
getNotificationsForPlayer and getAllNotifications.
The gamification service automatically creates notifications for users when certain API methods are called. The
table below lists all methods that implicitly generate notifications and explains the corresponding notification
parameters.
Table 288:
API Method Player Type Category Subject Details Message Date Created
Custom messages can usually be specified using an optional parameter <notificationMessage> of the
corresponding API method.
Examples:
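As a hypothetical illustration (not taken from the API documentation), a JSON-RPC call to addBadgeToPlayer might pass the custom message as the last parameter, matching the parameter order of the updateAPI.addBadgeToPlayer call shown in the rule examples later in this guide:

```json
[
  {"method":"addBadgeToPlayer",
   "params":["demo-user@mail.com", "Troubleshooting Champion", "You solved 5 critical tickets!"]}
]
```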
Besides the automatically generated notifications, it is possible to add custom notifications to players or teams
using the methods addCustomNotificationToPlayer and addCustomNotificationToTeamMembers from
within rules.
The table explains how the notification parameters are used when creating custom notifications.
Table 289:
API Method Player Type Category Subject Detail Message Date Created
Context
Notifications are strictly defined as "see and forget". The gamification service stores only the last X
notifications for each player (currently X defaults to 25). To show notifications to players, a polling-based
approach has to be implemented using the following API methods:
● getNotificationsForPlayer(playerId, timestamp)&app=APPNAME
Returns the latest notifications for a player starting from the timestamp. This mechanism allows other
applications to better track which notifications have been requested or displayed already. This is the current
approach for "user2service" communication. It works well with the user endpoint using JavaScript.
● getAllNotifications(timestamp)&app=APPNAME
Returns all generated notifications for all players within one app starting from the provided timestamp. This is
the current approach for "application2service" communication. An application can query all notifications for
the app using the tech endpoint and forward the information to the user using custom events or
communication channels. This avoids having all clients in parallel polling for notifications.
Procedure
See the Notification Widget in the Helpdesk scenario (sap_gs_notifications.js) for more information on
how the polling of notifications can be implemented on the client side. The notification polling is handled as follows:
1. Retrieve the gamification service server time on initialization, using the method getServerTime.
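The polling approach can be sketched as follows. This is a minimal illustration, not the actual sap_gs_notifications.js implementation; the JSON-RPC payload shape and the dateCreated field name on notifications are assumptions:

```javascript
// Minimal notification-polling state machine (illustrative sketch).
// Assumptions: JSON-RPC payloads shaped like the Terminal examples in this
// guide, and a "dateCreated" timestamp on each notification.
function makeNotificationPoller(playerId) {
  let lastTimestamp = 0; // initialized from getServerTime

  return {
    // Step 1: payload to fetch the server time on initialization.
    serverTimePayload() {
      return { method: "getServerTime", params: [] };
    },
    onServerTime(serverTime) {
      lastTimestamp = serverTime;
    },
    // Subsequent polls ask only for notifications newer than the last seen one.
    pollPayload() {
      return { method: "getNotificationsForPlayer", params: [playerId, lastTimestamp] };
    },
    onNotifications(notifications) {
      // Advance the timestamp so already-seen notifications are not re-fetched.
      for (const n of notifications) {
        if (n.dateCreated > lastTimestamp) lastTimestamp = n.dateCreated;
      }
      return notifications; // hand over to the UI widget for display
    },
  };
}
```

A client would post pollPayload() to the user endpoint (or a local proxy servlet) on a timer and feed each response into onNotifications, which advances the timestamp so notifications are not fetched twice.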
Prerequisites
You have logged into the gamification workbench and opened the Terminal tab.
Context
The Terminal within the game mechanics area allows you to quickly execute one or more API calls. Make sure that
you have the appropriate access rights for executing the call.
A comprehensive documentation of the API can be found in your SAP Cloud Platform, gamification service
subscription under Help API Documentation .
Procedure
1. Enter the list of JSON RPC calls as a JSON array: [JSON_RPC_CALL1, JSON_RPC_CALL2,…]
Example:
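For instance, a batch of two calls using methods that appear elsewhere in this guide (the player ID is a placeholder):

```json
[
  {"method":"getServerTime", "params":[]},
  {"method":"getPlayerRecord", "params":["demo-user@mail.com"]}
]
```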
2. Press Execute to execute the calls. Check the Force synchronous execution checkbox to enforce sequential
execution of the calls in the JSON array.
3. Review the server response. You can view the detailed JSON response by clicking on the symbol on the right.
Note
The calls are executed in the context of the currently selected app (see dropdown box in the upper right
corner of the gamification workbench).
Press the Restore Example button in the Terminal section to show some example requests. Use the API
Documentation ( Help Open API Documentation ) to find a list of all available methods.
Related Information
Prerequisites
Navigate to the Terminal in the Game Design tab. Your user has the role AppAdmin.
Context
The Terminal allows you to send events that are typically sent to the host application.
Note
The Terminal should only be used to send events for testing purposes. If you send events for a user that is
used in a productive environment, you will modify that user's real achievements!
Procedure
1. Enter the list of JSON RPC calls with the method handleEvent.
[ {"method":"handleEvent", "params":[{"type":"myEvent","playerid":"demo-
user@mail.com","data":{}}]} ]
2. Press Execute to execute the calls. Check the Force synchronous execution checkbox to enforce sequential
execution of the calls in the JSON array.
3. Review the server response. You can view the detailed JSON response by clicking on the symbol on the right.
Once the event is sent successfully, the response is true.
4. All rules that listen on the corresponding event type (WHEN clause) are executed.
Prerequisites
Context
The Terminal allows you to execute all methods for retrieving the user achievements data.
Procedure
1. Enter the list of JSON-RPC calls with the desired achievement query methods.
Example getPlayerRecord:
[ {"method":"getPlayerRecord", "params":["demo-user@mail.com"]} ]
2. Press Execute to execute the calls. Check the Force synchronous execution checkbox to enforce sequential
execution of the calls in the JSON array.
3. Review the server response. You can view the detailed JSON response by clicking on the symbol on the right.
Once the call is executed successfully, you will see the result.
Prerequisites
You are logged into the gamification workbench and have opened the Logging tab.
Context
The logging view allows you to search the event log for the selected app. The event log includes all API calls related
to “Event Submission” as well as the corresponding API calls executed from within the rules, which were triggered
by the corresponding events.
Note
The maximum retention time for the event log is 7 days, but not exceeding 500,000 log entries.
Rules with an EventObject fact and one or more other facts (Player or GenericFact)
in the WHEN part can cause endless loops.
Understanding why such rule sets result in loops requires a deeper understanding of the gamification service
itself:
● Rules with fact-based conditions are triggered on changes of the respective fact or facts, for example, when
a fact is inserted, updated, or retracted.
● handleEvent inserts a fact of type EventObject and fires all rules; that is, the THEN parts of all rules
that satisfy a fact-based condition involving EventObject are executed.
● THEN execution may involve the modification of facts (insert, update, delete), which in turn may trigger
further rules, for example, by inserting a new GenericFact or updating an existing fact (Player or GenericFact).
Rule execution runs until there are no more rules to fire.
● Endless loops occur if there are cycles in the rule execution graph, for example, one rule triggering another and
vice versa. The gamification service loop detection detects such loops at runtime and stops the engine until
the problems are resolved.
● The EventObject inserted by handleEvent is by default retracted automatically after all rules have fired.
Thus, if the WHEN part includes EventObject conditions and further fact conditions, for example, Player(),
the rule triggers again if one of the respective facts changes and the overall condition is still true.
● This can cause an endless loop. For example: the WHEN part of Rule 1 includes EventObject and queries for
the corresponding player (Player(playerid==$playerid)). The WHEN part of Rule 2 expects only a Player
change (Player()). If both Rule 1 and Rule 2 include an update($player) in the THEN part, this results
in an endless loop.
Mitigation strategy
● Use update(fact) with care. Consider whether it is really needed, and check for rules that could be triggered accidentally.
Both key and value are interpreted as Strings. Thus, an explicit type conversion is required if you want to
compare them with numbers. This type conversion is done using the standard Java approach for the different
numeric types, for example, Integer.parseInt(value) or Double.parseDouble(value).
Example:
[
{"method":"handleEvent", "params":
[{"type":"solvedProblem","playerid":"D053659","data":
{"relevance":"critical","processTime":15}}]}
]
Related Information
Context
The integration of a (gamified) cloud application must consider the following aspects:
1. Sending gamification-relevant events to a player or a team, for example when the user has completed a task
for which the gamification service grants a point.
2. Giving feedback to the players/teams, for example by showing achievements, progress, and game
notifications.
3. Integrating the user management - creating or enabling players/teams, blocking players/teams, deleting
players/teams.
The following sections describe how you can deal with these aspects using the Web APIs provided. The sample
code shown is based on the demo application "Help Desk". The demo application's source code is also available in
GitHub .
Note
The sample code used to demonstrate the integration is not ready for production.
The Application Programming Interface (API) of the gamification service is the central integration point of your
application. It offers two endpoints:
● Technical endpoint for integrating gamification events and user management in your backend.
● User endpoint for integrating user achievements in the application frontend.
It is recommended to use the technical endpoint only for executing methods of the gamification service that must
not be executed by the users themselves, such as sending events to the gamification service that trigger certain
achievements, or performing user management tasks, for example creating players. Authentication and
authorization in this case are based on a technical user that is created for the application itself.
The user endpoint should be used for accessing user-related information, for example earned achievements,
available achievements/missions, and notifications. A great advantage of this approach is that the
gamification service manages access control based on the user roles, for instance, to make sure that a user
cannot access other users' data. For this, the authenticated user must be passed to the user endpoint.
Note
The whole integration can be done using only the technical endpoint. However, in this case you must
manage access control yourself.
The documentation for the API can be found in your gamification service under Help API Documentation or
at https://gamification.hana.ondemand.com/gamification/documentation/documentation.html.
The graphic below illustrates how a gamified application (gamified app) running on SAP Cloud Platform is typically
integrated with the gamification service. The demo application "Help Desk" follows this integration architecture:
In an SAP Cloud Platform setting, we assume that the gamified app and the gamification service subscription are
located in the same account. Furthermore, we assume that the application back end is written in Java, while the
application front end is based on HTML5 or SAP UI5.
The technical endpoint is used to send gamification-relevant events and perform user management tasks from
the application back end. Communication is based on a BASIC AUTH destination that uses the user name and
password of a technical user.
The easiest way to show player achievements is to integrate a default user profile that comes with the
gamification service subscription as an iFrame in the application's web front end.
To implement a custom user profile or single widgets (for example, a progress bar tailored to the application's
front end), we recommend using the user endpoint in combination with a local proxy servlet and an app-to-app SSO
destination. The proxy servlet prevents cross-site scripting issues, and the app-to-app SSO
destination automatically forwards the credentials of the authenticated user to the gamification service. This
allows reuse of the access control mechanisms offered by the gamification service.
Since the user endpoint is used from a browser it is protected against cross-site request forgery. Accordingly, an
XSRF token has to be acquired by the client first.
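That token handshake can be sketched as follows. The proxy path "/gamification/proxy" and the "X-CSRF-Token" header name are assumptions for illustration, not taken from the service documentation; check the conventions of your proxy servlet:

```javascript
// Illustrative sketch of an XSRF token handshake against a local proxy
// servlet. Path and header name are assumptions, not from the documentation.
function tokenFetchRequest(proxyPath) {
  // First request: ask the server to issue an XSRF token.
  return {
    url: proxyPath,
    options: { method: "GET", headers: { "X-CSRF-Token": "Fetch" } },
  };
}

function rpcRequest(proxyPath, token, calls) {
  // Subsequent requests: send the JSON-RPC calls together with the token.
  return {
    url: proxyPath,
    options: {
      method: "POST",
      headers: { "X-CSRF-Token": token, "Content-Type": "application/json" },
      body: JSON.stringify(calls),
    },
  };
}
```

A client would issue tokenFetchRequest once, read the token from the response headers, and reuse it for all subsequent rpcRequest calls.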
Related Information
Context
If the user performs actions in the application that are relevant to gamification, the gamification service has to be
informed by invoking the corresponding API method. To prevent cheating this should be done in the application
back end using the technical endpoint offered by the API.
Procedure
Note
See also:
○ Demo application source code: https://github.com/SAP/gamification-demo-app
○ API Documentation: SAP Cloud Platform, gamification service subscription, under Help API
Documentation .
Related Information
Context
The gamification service subscription includes a default user profile, which you can include in your application as
an <iFrame/>.
https://<Subscription URL>/gamification/userprofile.html?name=<userid>&app=<appid>
2. Include the default user profile in your HTML5 code as an iFrame:
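A minimal sketch of such an iFrame, using the URL pattern above; the width and height values are illustrative, and the placeholders must be replaced with your subscription URL, user ID, and app ID:

```html
<!-- Illustrative only: replace the placeholders before use. -->
<iframe src="https://<Subscription URL>/gamification/userprofile.html?name=<userid>&app=<appid>"
        width="400" height="600"></iframe>
```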
Prerequisites
Configure your account to allow principal propagation. For more information, see HTTP Destinations [page 366].
Context
The integration of custom gamification elements tailored to your application's user interface requires the
development of custom JavaScript/HTML5 widgets. To avoid cross-site scripting issues, you should introduce a
proxy servlet in the application. This servlet forwards JSON-RPC requests to the user endpoint using an App-to-
App SSO destination. This way, the gamification service has access to the user principal, and the built-in access
control is active.
Procedure
API Documentation: SAP Cloud Platform, gamification service subscription under Help API
Documentation .
Context
The players (users) must be created explicitly before achievements can be assigned to them. A player context is
always valid for one tenant and can therefore be used across multiple apps (managed in one tenant).
Procedure
1. Register (create) a player (user) for a tenant subscription using the API method createPlayer.
Note
This is done automatically on the first event if the flag Auto-Create Players is set to true for the given app.
2. (Optional) Initialize a player (user) by creating a rule listening for an event of type initPlayerForApp.
a. Precondition: The player is registered.
b. On event: If a player has not yet been initialized for the given app, an event of type initPlayerForApp is
automatically inserted into the engine. The THEN part of this rule should include the user-defined init
actions, for example assigning initial missions.
c. (Optional) If you want players to be created with a display name you can add the optional parameter
playerName to the event. During the automated player creation this parameter is used for setting the
player name. Example:
{"method":"handleEvent","params":
[{"type":"linkProvided","playerid":"maria.rossi@sap.com", "playerName":
"Maria Rossi", "data":{}}]}
Prerequisites
You have logged on to the gamification workbench with the role GamificationDesigner and you have opened
the Game Design tab.
Context
Introducing gamification is a continuous process, since the game mechanics can be modified at
any point in time. For example, the number of points a player can reach might be changed in order to change the
behavior of the user.
Prerequisites
You have logged on to the gamification workbench with the role GamificationDesigner and you have opened
the Analytics tab.
Context
You can view statistics on achievements such as points and badges. Metrics are available for all point categories
and badges that are maintained for your application.
The following aggregations can be selected (the values for badges cannot be aggregated):
Note
The analytics are currently limited to point categories and badges. Analytics on player level are not available
due to privacy reasons.
Procedure
Prerequisites
You have logged on to the gamification workbench with the role GamificationDesigner and you have opened
the Analytics tab. You have selected the statistics you are interested in. A time range must be selected.
Context
You can view the statistics of achievements such as points and badges. The selected values can be compared to
an earlier time range in order to identify changes in the assignment of achievements.
Note
A time range for the statistics must be selected.
View a lag chart for a comparison of the selected data to an earlier time range.
1. Select the Enable lag chart checkbox.
2. Select the lag amount for comparison.
The lag chart displays the difference between the aggregated values and the values before the lag amount. For
example, when you select the sum of a point category for the current month and a lag amount of one month, the
lag chart shows the difference compared to the previous month.
In this case study, a demo application will be gamified in order to demonstrate the implementation and
configuration of a gamification concept step by step.
The demo host application is a “Help Desk” software, which is typically used by call center employees. Customers
can create tickets (for an issue with software or hardware, for example) and call center employees can process
these tickets.
The image below shows the welcome screen of the Help Desk application. The welcome screen appears once the
user is successfully authenticated via the identity provider. The user must have the role helpdesk. The
assignment of roles is described in Roles [page 697].
Context
The demo application (Help Desk) is automatically subscribed for each account that is subscribed to the
gamification service.
The gamification service has already been integrated into the demo application. For example, events such as the
processing of tickets are sent to the gamification service of the account subscription, and the achievements
are retrieved via the corresponding interfaces.
Since the gamification service and the demo applications are subscriptions, a destination has to be enabled in
order to allow communication between the services. A technical user is also required in order to allow secure
communication.
Procedure
The Help Desk app can be accessed via the menu Help Open Help Desk . The following link will be used:
https://< SUBSCRIPTION_URL>/helpdesk. The role helpdesk must be granted to the user.
Context
The user requires the role helpdesk in order to access the help desk application.
Procedure
Related Information
The destination requires a technical user for secure communication between your application and the
gamification service subscription.
Context
Note
You can request user IDs at the SAP Service Marketplace: http://service.sap.com/request-user . SAP
Service Marketplace users are automatically registered with the SAP ID service, which controls user access to
SAP Cloud Platform.
1. Request a technical user via SMP. (You can use your account user as well, but this is not recommended for
security reasons.)
2. In the SAP Cloud Platform cockpit, choose the Services tab.
3. Click the Gamification Service tile.
4. Click on the Configure Gamification Service link.
Related Information
Prerequisites
For more information about how to install the SAP Cloud Platform tools, see Eclipse Tools [page 100].
Context
The demo application's (Help Desk) source code is also available in GitHub .
Procedure
3. Open Eclipse with SAP Cloud Platform tools and choose File Import .
For more information, see Deploying on the Cloud from Eclipse IDE [page 1047].
7. Configure destinations and roles for the deployed application. Use the same configuration as described in
section HelpDesk App - Configuration of Available Subscription [page 755].
Without gamification, the host application does not give the user (call center employee) any feedback on
his/her daily work, nor does the user really know how s/he performs compared to other colleagues.
The requirement for gamification in the demo application is to intrinsically motivate the users with instant
feedback (achievements). Collaborative feedback will be introduced, and the progress of each individual user will
be visible, as well as the performance compared to others.
Points Categories
Levels
Based on the number of experience points a user gains, s/he can reach different levels. Three levels are
introduced:
“Novice” - the initial level of every user
“Competent” - this level can be reached once the user has gained 10 “Experience Points”
“Expert” - this level can be reached once the user has gained 50 “Experience Points”
Badges
Based on the successful completion of a mission, the user will gain a badge. The following badges are introduced:
“Troubleshooting Champion”
Missions
Missions will be introduced to motivate continuous efforts. The following missions will be introduced:
“Troubleshooting”
Rules
For each processed ticket, the user will gain 1 “Experience point”.
For each processed ticket categorized as “critical”, the user will gain 2 additional “Experience Points” to motivate
him or her to solve critical tickets with higher priority.
For each processed ticket categorized as “critical”, the user will gain 1 “Critical Tickets” point.
Once the mission troubleshooting is completed, the user will gain the “Troubleshooting Champion” badge.
The gamification concept introduced above can be generated automatically within the gamification workbench.
The generated concept is designed for the demo application only and serves as an example of a gamification
concept.
The demo content for the Help Desk application can be generated in the OPERATIONS tab. You need the
TenantOperator role. Go to "Demo Content Creation" (shown in the picture below) and select the Create
HelpDesk Demo button. Once content generation succeeds, you will see the notification Gamification concept
successfully created. The demo content is generated into a new app: HelpDesk.
The generated gamification concept contains more gamification elements than described in Switching Apps
[page 704] to provide additional examples.
The following sections describe how the gamification design is realized in the gamification workbench.
The gamification workbench makes it possible to manage gamification concepts for multiple apps. An app must
be created before the gamification concept can be implemented.
Procedure
1. Go to the OPERATIONS tab. The user must have the TenantOperator role.
2. Go to Apps.
Next Steps
Once the app has been created, it must be selected in the top right corner so that the gamification concept can be
implemented for it.
Procedure
Results
You should now see both point categories (“Experience Points” and “Critical Tickets”) in the list for Points.
Procedure
7. Press Add.
8. Enter Name: “Competent”.
9. Select Points: “Experience Points”.
10. Enter Threshold: “10”.
11. Press Add.
Results
You should now see all three levels (“Novice”, “Competent”, and “Expert”) in the list for Levels.
Procedure
You should now see all badges (“Troubleshooting Champion”) in the list for Badges.
Procedure
You should now see all missions (“Troubleshooting”) in the list for Missions.
Context
Procedure
Procedure
1. Press Add.
2. Enter Name: “GiveXPCritical”
3. Enter Description: “Give additional Experience Points for critical ticket.”
4. Enter the following text for the trigger:
Procedure
1. Press Add.
2. Enter Name: “GiveCT”
3. Enter Description: “Give Critical Ticket Points for processed ticket.”
4. Enter the following text for the trigger:
Procedure
1. Press Add.
2. Enter Name: “AssignMissionTS”
3. Enter Description: “Assign Troubleshooting mission.”
4. Enter the following text for the trigger:
$p : Player($playerid : uid)
$event : EventObject(type=='initPlayerForApp', $playerid==playerid) from entry-
point eventstream
updateAPI.addMissionToPlayer($playerid, 'Troubleshooting');
update($p);
Procedure
1. Press Add.
$p : Player($playerid : uid);
eval(queryAPI.hasPlayerMission($playerid, 'Troubleshooting') == true)
eval(queryAPI.getPointsForPlayer($playerid, 'Critical Tickets').getAmount() >= 5)
updateAPI.completeMission($playerid, 'Troubleshooting');
updateAPI.addBadgeToPlayer($playerid, 'Troubleshooting Champion', 'You solved 5
critical tickets!');
update($p);
1.4.6.9.5.6.6 Result
You should now see the created rules in the list for Rules.
Results
You can use the monitoring service to receive the metrics of your Java applications running on SAP Cloud
Platform.
Overview
You can develop a custom application to request the states or the metric details of your Java applications and the
applications' processes. That is accomplished via GET REST API calls. For more information about the format of
the REST APIs, see Monitoring API.
Example
Use the following request to receive all the metrics of a Java application located in the European data center
(with hana.ondemand.com host):
https://api.hana.ondemand.com/monitoring/v1/accounts/<account_name>/apps/
<application_name>/metrics
All Java applications include these default metrics. Custom metrics can also be added to the default metrics.
Table 290:
Metric Value
Used Disc Space: Percentage of the whole disc space currently used.
Requests per Minute: Number of HTTP requests processed by the Java application during the last minute.
CPU Load: Average percentage CPU usage during the last minute.
Disk I/O: Number of bytes per second currently being read from or written to the disc.
Average Response Time: Average response time in milliseconds for all requests processed during the last minute.
Busy Threads: Current number of threads that are processing HTTP requests.
Benefits
You can use the monitoring service for the following actions:
1. A custom application requests metrics of a Java application from the monitoring service via a REST API call.
2. The monitoring service sends back a JSON response with a status code 200 OK.
The format of the REST API request specifies the metrics to be returned in the JSON response. For more
information about the requests, see Monitoring API.
3. The custom application uses these metrics to perform operations.
4. The custom application requests the metrics of other Java applications by repeating steps 1 to 3.
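The request/response flow above can be sketched like this. The URL pattern follows the example earlier in this section; the helper that picks a metric out of the JSON response is an illustrative assumption, not part of the monitoring API:

```javascript
// Sketch of steps 1-3: build the metrics URL and extract a metric from the
// JSON response (response shape as in the example shown later in this section).
function metricsUrl(host, account, application) {
  return `https://${host}/monitoring/v1/accounts/${account}/apps/${application}/metrics`;
}

function metricByName(response, name) {
  // The response is a list of applications, each with a list of processes,
  // each with a list of metrics.
  for (const appEntry of response) {
    for (const proc of appEntry.processes) {
      const metric = proc.metrics.find((m) => m.name === name);
      if (metric) return metric;
    }
  }
  return null;
}
```

A custom application would GET metricsUrl(...) with BASIC authentication, parse the JSON body, and then act on individual metrics, for example raising an alert when a metric's state is not "Ok".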
Related Information
You retrieve Java application metrics in a JSON format by performing a REST API request defined by the
monitoring API.
Note
The easiest way to view the metrics is to enter the request URI in your browser. You may be asked to provide
your credentials before the retrieval process is performed. You can then use any JSON prettifier or formatter to
improve the readability of the results.
Parameter Value
processes: A list of processes. Each process contains the process ID, the state of the process, and the list of
metrics for that process.
Example
The JSON response for Java application metrics may look like the following example:
[
{
"account": "myAccount",
"application": "hello",
"state": "Ok",
"processes": [
{
"process": "bf061f611cc520f39839f2fa9e44813b2a20cdb7",
"state": "Ok",
"metrics": [
{
"name": "Used Disc Space",
"state": "Ok",
"value": 43,
"unit": "%",
"warningThreshold": 90,
"errorThreshold": 95,
"timestamp": 1456408611000,
"output": "DISK OK - free space: / 4177 MB (54% inode=84%); /
var 1417 MB (74% inode=98%); /tmp 1845 MB (96% inode=99%);",
"metricType": "rate",
"min": 0,
"max": 8063
},
{
"name": "Requests per Minute",
"state": "Ok",
"value": 0,
"unit": "requests",
"warningThreshold": 0,
"errorThreshold": 0,
"timestamp": 1456408611000,
"output": "JMX OK - RequestsCountMin = 0 ",
Related Information
This tutorial describes the configuration of a custom application that retrieves the metrics of Java applications running on SAP Cloud Platform. The resulting dashboard displays the states of the Java applications and can also display the state and metrics of the processes running in those applications.
Prerequisites
● To test the whole scenario, you need accounts on SAP Cloud Platform in two data centers (EU and US East).
● To retrieve the metrics of Java applications as shown in this scenario, you need two deployed and running
Java applications.
Context
This tutorial uses a Java project published on GitHub. This project contains a notification application that requests
the metrics of the following Java applications (running on SAP Cloud Platform):
Procedure
Note
You can also import the project by copying its URL from GitHub and pasting it as the Git repository path or URI after switching to the Git perspective. Remember to switch back to the Java perspective afterward.
3. Open the Configuration.java class in Eclipse and update the following information: your logon
credentials, your Java applications and their accounts and data centers (landscape hosts).
...
private final String user = "my_username";
private final String password = "my_password";
private final List<ApplicationConfiguration> appsList = new
ArrayList<ApplicationConfiguration>();
public void configure(){
String landscapeFQDN1 = "api.hana.ondemand.com";
String account1 = "a1";
String application1 = "app1";
ApplicationConfiguration app1Config = new ApplicationConfiguration(application1, account1, landscapeFQDN1);
this.appsList.add(app1Config);
String landscapeFQDN2 = "api.us1.hana.ondemand.com";
String account2 = "a2";
String application2 = "app2";
ApplicationConfiguration app2Config = new ApplicationConfiguration(application2, account2, landscapeFQDN2);
this.appsList.add(app2Config);
}
...
Note
The example above shows only two applications, but you can create more and add them to the list.
Tip
View the status of your Java applications and start them in the SAP Cloud Platform cockpit.
○ When you select an application, you can view the states of the application’s processes.
○ When you select a process, you can view the process’s metrics.
This tutorial will help you configure an example notification scenario. The scenario includes a custom application
that notifies you of critical metrics via e-mail or SMS. The application also performs actions to fix issues based on
these critical metrics.
Prerequisites
● To test the whole scenario, you need accounts on SAP Cloud Platform in two data centers (EU and US East).
● To retrieve the metrics of Java applications as shown in this scenario, you need two deployed and running
Java applications.
Note
If a Java application is not started yet, the notification application will trigger the start process.
Context
In this tutorial, you will implement a notification application that requests the metrics of the following Java
applications (running on SAP Cloud Platform):
Note
Since the requests are only sent to two applications, the Maven project that you import in Eclipse only spawns two threads. However, you can change this number in the MetricsWatcher class, where the ScheduledThreadPoolExecutor is constructed with a pool size of 2. Furthermore, if you change the list of applications, you also need to adjust the list in the Demo class of the imported project.
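The threading setup described in the note can be sketched with the standard library as follows. Only the pool size of 2 is taken from the project; the class and method names here are illustrative, and the real MetricsWatcher differs in detail.

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.ScheduledThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

// Sketch of the polling pattern: one scheduled task per watched application,
// sharing a pool of two threads, as in the tutorial's MetricsWatcher class.
public class PollingSketch {

    /** Schedules one polling task per application and waits for the first round. */
    public static int runOnce(int applications) {
        ScheduledThreadPoolExecutor executor = new ScheduledThreadPoolExecutor(applications);
        CountDownLatch firstRound = new CountDownLatch(applications);
        for (int i = 0; i < applications; i++) {
            // In the real project each task would call the monitoring REST API.
            executor.scheduleAtFixedRate(firstRound::countDown, 0, 100, TimeUnit.MILLISECONDS);
        }
        try {
            firstRound.await(5, TimeUnit.SECONDS);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        executor.shutdownNow();
        return applications;
    }

    public static void main(String[] args) {
        System.out.println(runOnce(2) + " applications polled");
    }
}
```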
When the notification application receives the Java application metrics, it checks for critical metrics. The
application then sends an e-mail or SMS depending on whether the metrics are received as critical once or three
times. In addition, the application restarts the Java application when the metrics are detected as critical three
times.
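The escalation rule described above can be sketched as a small state machine. The class name and action labels below are hypothetical; the sample project's actual logic in the Demo and MetricsWatcher classes may differ in detail.

```java
// Sketch of the escalation rule: the first critical report triggers an e-mail,
// the third consecutive one triggers an SMS and a restart of the application.
// Class and action names are hypothetical illustrations.
public class EscalationSketch {

    private int consecutiveCritical = 0;

    /** Returns the action to take for one incoming metric state. */
    public String onMetricState(String state) {
        if (!"Critical".equals(state)) {
            consecutiveCritical = 0; // recovered, start counting again
            return "NONE";
        }
        consecutiveCritical++;
        if (consecutiveCritical == 1) {
            return "EMAIL";
        }
        if (consecutiveCritical == 3) {
            return "SMS_AND_RESTART";
        }
        return "NONE";
    }

    public static void main(String[] args) {
        EscalationSketch e = new EscalationSketch();
        System.out.println(e.onMetricState("Critical"));
        System.out.println(e.onMetricState("Critical"));
        System.out.println(e.onMetricState("Critical"));
    }
}
```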
Procedure
3. Open the Demo.java class and update the following information: your e-mail and SMS addresses, your logon
credentials, your Java applications and their accounts and data centers.
...
String mail_to = "my_email@email.com";
String mail_to_sms = "my_email@sms-service.com";
private final String auth_user = "my_user";
private final String auth_pass = "my_password";
String landscapeFqdn1 = "api.hana.ondemand.com";
String account1 = "a1";
String application1 = "app1";
String landscapeFqdn2 = "api.us1.hana.ondemand.com";
String account2 = "a2";
String application2 = "app2";
...
4. Open the Mailsender.java class and update your e-mail account settings.
...
private static final String FROM = "my_email_account@email.com";
final String userName = "my_email_account";
final String password = "my_email_password";
...
public static void sendEmail(String to, String subject, String body) throws
AddressException, MessagingException {
// Set up the mail server
Properties properties = new Properties();
properties.setProperty("mail.transport.protocol", "smtp");
properties.setProperty("mail.smtp.auth", "true");
properties.setProperty("mail.smtp.starttls.enable", "true");
properties.setProperty("mail.smtp.port", "587");
properties.setProperty("mail.smtp.host", "smtp.email.com");
properties.setProperty("mail.smtp.host", "mail.email.com");
...
To do this, you can create a JMX check with a very low critical threshold for HeapMemoryUsage so that
the check will always be received in a critical state.
Example
To use the console commands, you need to set up the console client. For more information, see
Setting Up the Console Client [page 52].
Performance statistics enable you to monitor the resources used by your applications and to investigate the
causes of performance issues.
Note
This is a beta feature available on SAP Cloud Platform for developer accounts. For more information about the
beta features, see Using Beta Features in Accounts [page 26].
Performance statistics are disabled by default, and you need to enable them to start gathering data. In the
cockpit, the Performance Statistics tab of a started application allows you to enable the collection of performance
statistics data. To view the collected performance statistics data, you have to generate a report.
Each report provides a breakdown of the time and resources (such as CPU and memory) used by the different services of the platform for each HTTP request to your application. This gives you insight into specific requests and the corresponding behavior of your application. Currently, the supported services are the platform runtime and the persistence service.
Note
The performance statistics service does not support the persistence service metrics for the Java Web Tomcat 7 and Java Web Tomcat 8 application runtime containers.
You can see the report's metrics in a viewer, or you can download them as a JSON file.
Note
This is a beta feature available on SAP Cloud Platform for developer accounts. For more information about the
beta features, see Using Beta Features in Accounts [page 26].
Table 292:
Metric Displayed in Viewer Respective Metric in JSON File Value
Note
This metric is not supported for the Java Web Tomcat 7 and Java Web Tomcat 8 application runtime containers.
External Calls (ms) extTime Time in milliseconds of the calls from the
application to systems that are not part
of SAP Cloud Platform
Example
The JSON file may look like the following:
{
"name": "AllRecords",
"children": [{
"name": "0",
"children": [{
"name": "action",
"value": "https://myappmyaccount.hana.ondemand.com/test"
}, {
"name": "actionType",
"value": "0"
}, {
"name": "addInfo",
"value": ""
}, {
"name": "allocMem",
"value": "126656"
}, {
"name": "cpuTime",
"value": "10"
}, {
You collect performance statistics to monitor the resources used by your applications and to investigate the
causes of performance issues.
Note
This is a beta feature available on SAP Cloud Platform for developer accounts. For more information about the
beta features, see Using Beta Features in Accounts [page 26].
Prerequisites
Procedure
○ To generate an intermediate report without terminating the data collection, choose the Generate Report
button.
○ To generate a report and terminate the data collection, choose the Stop Collecting button.
The persistence service provides in-memory and relational data persistence for applications running on SAP Cloud Platform, ensuring data availability and resiliency. In addition to managing and providing access to databases, the persistence service also performs tasks such as backup, recovery, and load balancing.
The persistence service supports the SAP HANA and the SAP ASE database, as shown in the high-level overview
below:
For productive use, a dedicated SAP HANA database is available as a hosted solution and enables you to work
with SAP HANA in the same way as with an on-premise version. There are some obvious restrictions, such as no
access to the operating system. A shared SAP HANA database system, on the other hand, available on a trial
basis, provides a managed environment in which additional restrictions apply to ensure user and data isolation.
You can use the SAP HANA database with multitenant database container support enabled (beta), both in a trial
and in a productive landscape. The main differences are outlined below:
Java Development
The persistence service supports JPA (Java Persistence API) and JDBC (Java Database Connectivity). The recommended programming model is JPA 2.0, with EclipseLink as the persistence provider.
● Application redeployment with the same schema: Provided the schema has not been dropped, a redeployed
application can reuse the schema with its associated database objects and data.
● Shared schemas: Allow data to be shared between applications
● Multiple schemas: Allow multiple databases to be used in parallel
● Local test facility: On the local runtime, the persistence service automatically enables an embedded Apache
Derby database and configures the default data source accordingly. You can reconfigure the persistence
service to replace the standard database with a database of your choice.
Restrictions
When consuming the persistence service in your Java applications, be aware of the following restrictions:
● No database abstraction
The persistence service does not provide database abstraction for the supported database types (SAP HANA
database and SAP ASE database). Applications must be aware of the type of database they use and must be
written, if necessary, in a database-specific way.
● No automatic life cycle management for database objects
The persistence service does not provide automatic life cycle management for database objects, such as
tables, indices, sequences, and so on. It is the responsibility of the application to create the necessary
database objects, either by using JDBC to send the corresponding data definition statements to the database
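Because the persistence service provides no database abstraction, code that needs vendor-specific SQL typically branches on the database product reported by JDBC. The sketch below is an illustration built on assumptions: the product-name strings and the LIMIT/TOP syntax should be verified against the SAP HANA and SAP ASE SQL references.

```java
// Because the persistence service offers no database abstraction, code that
// needs vendor-specific SQL has to branch on the database in use. The product
// name strings and the LIMIT/TOP syntax below are assumptions to verify
// against the SAP HANA and SAP ASE documentation.
public class VendorSqlSketch {

    /** Builds a query returning at most 'limit' rows for the given database product. */
    public static String limitedSelect(String databaseProductName, String table, int limit) {
        if (databaseProductName.contains("HDB") || databaseProductName.contains("HANA")) {
            return "SELECT * FROM " + table + " LIMIT " + limit;
        }
        // Assume SAP ASE (or another TOP-style database) otherwise.
        return "SELECT TOP " + limit + " * FROM " + table;
    }

    public static void main(String[] args) {
        // The product name would normally come from
        // connection.getMetaData().getDatabaseProductName().
        System.out.println(limitedSelect("HDB", "T_PERSON", 10));
    }
}
```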
The persistence service provides relational database storage for applications that are hosted on SAP Cloud
Platform. This section introduces the key concepts of the persistence service and shows how you can use JPA and
JDBC to manage relational data in your applications.
Table 293:
Topic Description
Tutorials [page 794]: Familiarize yourself with the JPA and JDBC technologies on SAP Cloud Platform by completing the tutorials.
Administering Database Schemas [page 901]: Find out how to administer your applications' database schemas.
Programming with JPA [page 938]: Particular aspects of working with JPA and JDBC that were introduced in the tutorials are explained in more detail.
Investigating Performance Issues Using the SQL Trace [page 965]: Activate the SQL trace to include SQL details in the standard trace files.
Accessing Databases Remotely [page 919]: Access your database schema and tables in the cloud.
Testing on the Local Runtime [page 962]: Check the configuration requirements for local testing.
Frequently Asked Questions [page 972]: Frequently asked questions about the persistence service.
1.4.9.2 Tutorials
The tutorials provide an introduction to object-relational persistence using JPA 2.0, with EclipseLink as the
persistence provider, and relational persistence using JDBC. JPA is considered the standard approach for
developing applications for the SAP Cloud Platform, with container-managed persistence representing the model
most commonly adopted by Web applications.
JPA provides an object-oriented view of the persisted data and allows you to work directly with Java objects that
are automatically synchronized with the database. Unlike JDBC, it does not require you to manually write SQL
statements to read and write objects from and to the database tables.
The tutorials can be run on all databases supported on the SAP Cloud Platform. For local deployment, the
persistence service provides an embedded Apache Derby database instance.
Related Information
Adding Container-Managed Persistence with JPA (Java EE 6 Web Profile SDK) [page 795]
Adding Application-Managed Persistence with JPA (Java Web SDK) [page 807]
Adding Persistence with JDBC (Java Web SDK) [page 819]
Migrating Web Applications That Use context.xml [page 829]
Creating an SAP HANA Database from the Cockpit [page 830]
Creating an SAP HANA Database Using Console Client [page 836]
This step-by-step tutorial shows how you can use JPA together with EJB to apply container-managed persistence
in a simple Java EE web application that manages a list of persons.
Table 294:
Steps:
5. Prepare the Web Application Project for JPA [page 801]
6. Extend the Servlet to Use Persistence [page 801]
7. Test the Web Application on the Local Server [page 803]
Sample Application: Sample name: persistence-with-ejb; Location: <sdk>/samples folder; More information: Samples [page 60]
Note
The tutorial is based on the SDK for Java EE 6 Web Profile.
Note
The tutorial and sample use EclipseLink version 2.5. If you use an earlier version of EclipseLink, bear in mind
that additional settings are required to deploy with the SAP HANA database. For more information, see Special
Settings for EclipseLink Versions Prior to 2.5 [page 940].
Prerequisites
You have downloaded and set up your Eclipse IDE, SAP Cloud Platform Tools for Java, and SDK.
For more information, see Setting Up the Development Environment [page 43].
Note
You need to install the SDK for Java EE 6 Web Profile.
SAP HANA database only: You have downloaded the EclipseLink JAR file (eclipselink.jar):
Create a dynamic web project with the JPA project facet. This enables the relevant JPA tooling and adds the
required libraries and artifacts, such as the persistence.xml file. Then add a servlet (you will extend it in step 6
to use the JPA persistence entity and EJB session bean).
1. From the Eclipse main menu, choose File New Dynamic Web Project .
2. On the Dynamic Web Project screen, define the following settings:
1. Enter the Project name persistence-with-ejb.
2. In the Target Runtime pane, select Java EE 6 Web Profile as the runtime you want to use to deploy the
application.
3. In the Dynamic web module version section, select 3.0.
4. In the Configuration section, choose Modify and select the JPA checkbox in the Project Facets screen.
5. Choose OK and return to the Dynamic Web Project screen.
6. Choose Next.
Create a JPA persistence entity class named Person. Add an auto-incremented ID to the database table as the
primary key and person attributes. You also need to define a query method that retrieves a Person object from
the database table. Each person stored in the database is represented by a Person entity object.
package com.sap.cloud.sample.persistence;
import javax.persistence.Basic;
import javax.persistence.Entity;
import javax.persistence.GeneratedValue;
import javax.persistence.Id;
import javax.persistence.NamedQuery;
import javax.persistence.Table;
/**
* Class holding information on a person.
*/
@Entity
@Table(name = "T_PERSON")
@NamedQuery(name = "AllPersons", query = "select p from Person p")
public class Person {
@Id
@GeneratedValue
private Long id;
@Basic
private String firstName;
@Basic
private String lastName;
public long getId() {
return id;
}
public void setId(long newId) {
this.id = newId;
}
public String getFirstName() {
return this.firstName;
}
public void setFirstName(String newFirstName) {
this.firstName = newFirstName;
}
public String getLastName() {
return this.lastName;
}
public void setLastName(String newLastName) {
this.lastName = newLastName;
}
}
1. Select persistence.xml, and from the context menu choose Open With Persistence XML Editor .
2. On the General tab, make sure that org.eclipse.persistence.jpa.PersistenceProvider is entered
in the Persistence provider field.
3. On the Options tab, make sure that the DDL generation type Create Tables is selected.
4. On the Connection tab, select the transaction type JTA.
5. Save the file.
package com.sap.cloud.sample.persistence;
import java.util.List;
import javax.ejb.LocalBean;
import javax.ejb.Stateless;
import javax.persistence.EntityManager;
import javax.persistence.PersistenceContext;
/**
* Session Bean implementation class PersonBean
*/
@Stateless
@LocalBean
public class PersonBean {
@PersistenceContext
private EntityManager em;
public List<Person> getAllPersons() {
return em.createNamedQuery("AllPersons").getResultList();
}
public void addPerson(Person person) {
em.persist(person);
em.flush();
}
}
If you intend to deploy with the SAP HANA database, add the EclipseLink JAR file to the web application project:
Extend the servlet to use the Person entity and EJB session bean. The servlet adds Person entity objects to the
database, retrieves their details, and displays them on the screen.
package com.sap.cloud.sample.persistence;
import java.io.IOException;
import java.sql.SQLException;
import java.util.List;
import javax.ejb.EJB;
import javax.servlet.ServletException;
import javax.servlet.annotation.WebServlet;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import com.sap.security.core.server.csi.IXSSEncoder;
import com.sap.security.core.server.csi.XSSEncoder;
/**
* Servlet implementation class PersistenceEJBServlet
*/
@WebServlet("/")
public class PersistenceEJBServlet extends HttpServlet {
private static final long serialVersionUID = 1L;
private static final Logger LOGGER = LoggerFactory
.getLogger(PersistenceEJBServlet.class);
@EJB
PersonBean personBean;
/** {@inheritDoc} */
4. Save the servlet. The project should compile without any errors.
1. To test your web application on the local server, follow the steps for deploying a web application locally as
described in Deploying Locally from Eclipse IDE [page 1045].
You should see the following output:
2. Enter a first name (for example, John) and a last name (for example, Smith) and choose Add Person.
John Smith is added to the database as shown below:
If you add more names to the database, they will also be listed in the displayed table. This confirms that you
have successfully enabled persistence using the Person entity.
To test your web application in the cloud, define a server in Eclipse. Use the cockpit to create a default binding for your application. Add the application to the new server and start it. This deploys the application to the cloud, and you should see the same output as when the application was tested on the local server.
● You have set up your runtime environment in the Eclipse IDE. For more information, see Setting Up the
Runtime Environment [page 48].
● You have developed or imported a Java Web application in the Eclipse IDE. For more information, see Developing Java Applications [page 1034] or Importing Samples as Eclipse Projects [page 62].
Note
The application name should be unique enough so that your deployed application can be easily
identified.
○ Select a runtime. If you leave the Automatic option, the server will load the target runtime of your
application.
○ Enter your account name, e-mail or user name, and password and choose Next.
Note
Adding the application now would automatically start it, and it would fail because no data source binding exists yet. You will add the application in a later step.
○ Choose Finish.
1. In the cockpit, select an account and choose Persistence Databases & Schemas in the navigation area.
2. Select the database that you want to create a binding for.
3. Choose Data Source Bindings in the navigation area.
Note
For more information on Data Source Bindings, see Binding Databases [page 863].
4. Define a binding (<DEFAULT>) for the application and select a database ID. Choose Save.
This creates a default binding for the application. You can use an existing database or create a new one.
1. On the Servers view in Eclipse, open the context menu for the server and choose Add and Remove.... In the dialog, move <application name> to the panel on the right side to add the application to the server, and choose Finish.
2. Start the server. This will deploy the application and start it on the SAP Cloud Platform.
You can access the application by clicking the application URL on the application overview page in the cockpit.
Note
You cannot deploy multiple applications on the same application process. Deployment of a second application
on the same application process overwrites any previous deployments. If you want to deploy several
applications, deploy each of them on a separate application process.
This step-by-step tutorial shows how you can use JPA to apply application-managed persistence in a simple Java
EE web application that manages a list of persons.
Table 295:
Steps:
3. Maintain the Metadata of the Person Entity [page 812]
4. Prepare the Web Application Project for JPA [page 812]
5. Extend the Servlet to Use Persistence [page 813]
6. Test the Web Application on the Local Server [page 816]
7. Deploy Applications Using Persistence on the Cloud from Eclipse [page 816]
Sample Application: The application is also available as a sample in the SDK for Java Web. Sample name: persistence-with-jpa; Location: <sdk>/samples folder; More information: Samples [page 60]
Note
The tutorial is based on the SDK for Java Web.
Note
The tutorial and sample use EclipseLink version 2.5. If you use an earlier version of EclipseLink, bear in mind
that additional settings are required to deploy with the SAP HANA database. For more information, see Special
Settings for EclipseLink Versions Prior to 2.5 [page 940].
Prerequisites
You have downloaded and set up your Eclipse IDE, SAP Cloud Platform Tools for Java, and SDK.
For more information, see Setting Up the Development Environment [page 43].
Note
You need to install the SDK for Java Web.
Create a dynamic web project with the JPA project facet. This enables the relevant JPA tooling and adds the
required libraries and artifacts, such as the persistence.xml file. Then add a servlet (you will extend it later to
use the JPA persistence entity).
1. From the Eclipse main menu, choose File New Dynamic Web Project .
2. On the Dynamic Web Project screen, define the following settings:
1. Enter the Project name persistence-with-jpa.
2. In the Target Runtime pane, select Java Web as the runtime you want to use to deploy the application.
3. In the Dynamic web module version section, select 2.5.
4. In the Configuration section, choose Modify and select the JPA checkbox in the Project Facets screen.
5. Choose OK and return to the Dynamic Web Project screen.
6. Choose Next.
Create a JPA persistence entity class named Person. Add an auto-incremented ID to the database table as the
primary key and person attributes. You also need to define a query method that retrieves a Person object from
the database table. Each person stored in the database is represented by a Person entity object.
package com.sap.cloud.sample.persistence;
import javax.persistence.Basic;
import javax.persistence.Entity;
import javax.persistence.GeneratedValue;
import javax.persistence.Id;
import javax.persistence.NamedQuery;
import javax.persistence.Table;
/**
* Class holding information on a person.
*/
@Entity
@Table(name = "T_PERSON")
@NamedQuery(name = "AllPersons", query = "select p from Person p")
public class Person {
@Id
@GeneratedValue
private Long id;
@Basic
private String firstName;
@Basic
private String lastName;
public long getId() {
return id;
}
public void setId(long newId) {
this.id = newId;
}
public String getFirstName() {
return this.firstName;
}
public void setFirstName(String newFirstName) {
this.firstName = newFirstName;
}
public String getLastName() {
return this.lastName;
}
public void setLastName(String newLastName) {
this.lastName = newLastName;
}
}
1. Select the persistence.xml and from the context menu choose Open With Persistence XML Editor .
2. On the General tab, define the following settings:
1. Make sure that org.eclipse.persistence.jpa.PersistenceProvider is entered in the
Persistence provider field.
2. In the Managed Class section, choose Add.... Enter Person and choose Ok.
3. On the Connection tab, make sure that the transaction type Resource Local is selected.
4. On the Schema Generation tab, make sure the DDL generation type Create Tables in the EclipseLink
Schema Generation section is selected.
5. Save the file.
<resource-ref>
<res-ref-name>jdbc/DefaultDB</res-ref-name>
<res-type>javax.sql.DataSource</res-type>
</resource-ref>
<servlet-mapping>
<servlet-name>PersistenceWithJPAServlet</servlet-name>
<url-pattern>/</url-pattern>
</servlet-mapping>
Note
An application's URL path contains the context root followed by the optional URL pattern ("/<URL
pattern>"). The servlet URL pattern that is automatically generated by Eclipse uses the servlet’s class
name as part of the pattern. Since the cockpit only displays the context root, this means that you cannot
directly open the application in the cockpit without adding the servlet name. To call the application by only
the context root, use "/" as the URL mapping, then you will no longer have to correct the URL in the
browser.
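The note's rule for composing an application's URL path can be sketched as follows. The helper is purely illustrative; it only shows why a URL pattern of "/" makes the application reachable by its context root alone.

```java
// Sketch of how an application's URL path is composed, as described in the
// note above: the context root, followed by the servlet's URL pattern.
// Using "/" as the pattern makes the application reachable by context root alone.
public class UrlPathSketch {

    public static String applicationPath(String contextRoot, String urlPattern) {
        // A pattern of "/" adds nothing beyond the context root.
        if ("/".equals(urlPattern)) {
            return "/" + contextRoot;
        }
        return "/" + contextRoot + urlPattern;
    }

    public static void main(String[] args) {
        System.out.println(applicationPath("persistence-with-jpa", "/"));
        System.out.println(applicationPath("persistence-with-jpa", "/PersistenceWithJPAServlet"));
    }
}
```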
Extend the servlet to use the Person entity. The servlet adds Person entity objects to the database, retrieves
their details, and displays them on the screen.
package com.sap.cloud.sample.persistence;
import java.io.IOException;
import java.sql.Connection;
import java.sql.SQLException;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import javax.naming.InitialContext;
import javax.naming.NamingException;
import javax.persistence.EntityManager;
import javax.persistence.EntityManagerFactory;
import javax.persistence.Persistence;
import javax.servlet.ServletException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
import javax.sql.DataSource;
import org.eclipse.persistence.config.PersistenceUnitProperties;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
4. Save the servlet. The project should compile without any errors.
1. To test your web application on the local server, follow the steps for deploying a web application locally as
described in Deploying Locally from Eclipse IDE [page 1045]. You should see the following output:
2. Enter a first name (for example, John) and a last name (for example, Smith) and choose Add Person.
John Smith is added to the database as shown below:
If you add more names to the database, they will also be listed in the displayed table. This confirms that you
have successfully enabled persistence using the Person entity.
To test your web application in the cloud, define a server in Eclipse. Use the cockpit to create a default binding for your application. Add the application to the new server and start it. This deploys the application to the cloud, and you should see the same output as when the application was tested on the local server.
Prerequisites
● You have set up your runtime environment in the Eclipse IDE. For more information, see Setting Up the
Runtime Environment [page 48].
● You have developed or imported a Java Web application in the Eclipse IDE. For more information, see Developing Java Applications [page 1034] or Importing Samples as Eclipse Projects [page 62].
○ Select a runtime. If you leave the Automatic option, the server will load the target runtime of your
application.
○ Enter your account name, e-mail or user name, and password and choose Next.
Note
○ If you have previously entered an account and user name for your landscape host, they are suggested in dropdown lists.
○ Previously entered landscape hosts are also available in a dropdown list.
Note
Adding the application now would automatically start it, and it would fail because no data source binding exists yet. You will add the application in a later step.
○ Choose Finish.
3. On the Servers view, open the context menu for the server you just created and choose Show In
Cockpit . The cockpit opens inside Eclipse.
1. In the cockpit, select an account and choose Persistence Databases & Schemas in the navigation area.
2. Select the database that you want to create a binding for.
3. Choose Data Source Bindings in the navigation area.
Note
For more information on Data Source Bindings, see Binding Databases [page 863].
4. Define a binding (<DEFAULT>) for the application and select a database ID. Choose Save.
This creates a default binding for the application. You can use an existing database or create a new one.
1. On the Servers view in Eclipse, open the context menu for the server and choose Add and Remove.... In the dialog, move <application name> to the panel on the right side to add the application to the server, and choose Finish.
2. Start the server. This will deploy the application and start it on the SAP Cloud Platform.
You can access the application by clicking the application URL on the application overview page in the cockpit.
Note
You cannot deploy multiple applications on the same application process. Deployment of a second application
on the same application process overwrites any previous deployments. If you want to deploy several
applications, deploy each of them on a separate application process.
This step-by-step tutorial shows how you can use JDBC to persist data in a simple Java EE web application that
manages a list of persons.
Table 296:
Steps:
3. Create the Person DAO [page 820]
4. Prepare the Web Application Project for JDBC [page 823]
5. Extend the Servlet to Use Persistence [page 824]
6. Test the Web Application on the Local Server [page 826]
7. Deploy Applications Using Persistence on the Cloud from Eclipse IDE [page 826]
Sample Application: The application is also available as a sample in the SDK for Java Web. Sample name: persistence-with-jdbc; Location: <sdk>/samples folder; More information: Samples [page 60]
Note
The tutorial is based on the SDK for Java Web.
Prerequisites
● You have downloaded and set up your Eclipse IDE, SAP Cloud Platform Tools for Java, and SDK. For more
information, see Setting Up the Development Environment [page 43].
Note
You need to install the SDK for Java Web.
● You have created a database. If you use an account on the trial landscape, you need to create an SAP HANA MDC tenant database. For more information, see Creating Databases [page 857].
Create a dynamic web project and add a servlet, which you extend in step 4.
1. From the Eclipse main menu, choose File New Dynamic Web Project .
2. Enter the Project name persistence-with-jdbc.
package com.sap.cloud.sample.persistence;

/**
 * Class holding information on a person.
 */
public class Person {

    private String id;
    private String firstName;
    private String lastName;

    public String getId() {
        return id;
    }

    public void setId(String newId) {
        this.id = newId;
    }

    public String getFirstName() {
        return this.firstName;
    }

    public void setFirstName(String newFirstName) {
        this.firstName = newFirstName;
    }

    public String getLastName() {
        return this.lastName;
    }

    public void setLastName(String newLastName) {
        this.lastName = newLastName;
    }
}
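For illustration, the bean above can be exercised like this. This is a minimal sketch, not part of the SDK sample; the PersonDemo class and the newPerson helper are ours, and the nested Person class is a simplified copy of the tutorial's bean so the sketch is self-contained:

```java
import java.util.UUID;

/** Minimal sketch exercising a Person-style bean (class and helper names are ours). */
public class PersonDemo {

    // Simplified copy of the tutorial's Person bean, nested here for self-containment.
    static class Person {
        private String id;
        private String firstName;
        private String lastName;
        public String getId() { return id; }
        public void setId(String newId) { this.id = newId; }
        public String getFirstName() { return firstName; }
        public void setFirstName(String n) { this.firstName = n; }
        public String getLastName() { return lastName; }
        public void setLastName(String n) { this.lastName = n; }
    }

    /** Builds a person; the tutorial's DAO assigns a random UUID as the id in the same way. */
    static Person newPerson(String first, String last) {
        Person p = new Person();
        p.setId(UUID.randomUUID().toString());
        p.setFirstName(first);
        p.setLastName(last);
        return p;
    }

    public static void main(String[] args) {
        Person p = newPerson("John", "Smith");
        System.out.println(p.getFirstName() + " " + p.getLastName());
    }
}
```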
Create a DAO class, PersonDAO, in which you encapsulate the access to the persistence layer.
package com.sap.cloud.sample.persistence;

import java.sql.Connection;
import java.sql.DatabaseMetaData;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.util.ArrayList;
import java.util.List;
import java.util.UUID;
import javax.sql.DataSource;

/**
 * Data access object encapsulating all JDBC operations for a person.
 */
public class PersonDAO {

    private DataSource dataSource;

    /**
     * Create new data access object with data source.
     */
    public PersonDAO(DataSource newDataSource) throws SQLException {
        setDataSource(newDataSource);
    }

    /**
     * Get data source which is used for the database operations.
     */
    public DataSource getDataSource() {
        return dataSource;
    }

    /**
     * Set data source to be used for the database operations.
     */
    public void setDataSource(DataSource newDataSource) throws SQLException {
        this.dataSource = newDataSource;
        checkTable();
    }

    /**
     * Add a person to the table.
     */
    public void addPerson(Person person) throws SQLException {
        Connection connection = dataSource.getConnection();
        try {
            PreparedStatement pstmt = connection
                    .prepareStatement("INSERT INTO PERSONS (ID, FIRSTNAME, LASTNAME) VALUES (?, ?, ?)");
            pstmt.setString(1, UUID.randomUUID().toString());
            pstmt.setString(2, person.getFirstName());
            pstmt.setString(3, person.getLastName());
            pstmt.executeUpdate();
        } finally {
            if (connection != null) {
                connection.close();
            }
        }
    }

    /**
     * Get all persons from the table.
     */
    public List<Person> selectAllPersons() throws SQLException {
        Connection connection = dataSource.getConnection();
        try {
<resource-ref>
    <res-ref-name>jdbc/DefaultDB</res-ref-name>
    <res-type>javax.sql.DataSource</res-type>
</resource-ref>

<servlet-mapping>
    <servlet-name>PersistenceWithJDBCServlet</servlet-name>
    <url-pattern>/</url-pattern>
</servlet-mapping>
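The addPerson method shown earlier releases its connection in a finally block. On Java 7 and later, the same close-always guarantee can be written more compactly with try-with-resources. Here is a minimal sketch using a stand-in AutoCloseable in place of a real java.sql.Connection; all names in this sketch are ours, not part of the sample:

```java
import java.util.ArrayList;
import java.util.List;

/** Sketch: try-with-resources gives the same close-always guarantee as try/finally. */
public class ResourceSketch {

    static final List<String> LOG = new ArrayList<>();

    /** Stand-in for java.sql.Connection; records when it is used and closed. */
    static class FakeConnection implements AutoCloseable {
        void execute(String sql) { LOG.add("executed: " + sql); }
        @Override public void close() { LOG.add("closed"); }
    }

    /** The open-use-close pattern of addPerson, written with try-with-resources. */
    static void addPersonStyle() {
        try (FakeConnection connection = new FakeConnection()) {
            connection.execute("INSERT INTO PERSONS (ID, FIRSTNAME, LASTNAME) VALUES (?, ?, ?)");
        } // connection.close() runs here, even if execute throws
    }

    public static void main(String[] args) {
        addPersonStyle();
        System.out.println(LOG);
    }
}
```

The explicit try/finally in the sample is equivalent; try-with-resources simply makes the close guarantee part of the statement itself.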
Note
If your servlet version is 3.0 or higher, you only need to change the WebServlet annotation in the PersistenceWithJDBCServlet.java class to @WebServlet("/").
Note
An application's URL path consists of the context root followed by an optional URL pattern ("/<URL pattern>"). The servlet URL pattern that Eclipse generates automatically uses the servlet's class name as part of the pattern. Since the cockpit displays only the context root, you cannot open the application directly from the cockpit without appending the servlet name. To call the application by only its context root, use the URL pattern "/".
Extend the servlet to use the persistence functionality. The servlet adds Person entity objects to the database,
retrieves their details, and displays them on the screen.
package com.sap.cloud.sample.persistence;

import java.io.IOException;
import java.sql.SQLException;
import java.util.List;
import javax.naming.InitialContext;
import javax.naming.NamingException;
import javax.servlet.ServletException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
import javax.sql.DataSource;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import com.sap.security.core.server.csi.IXSSEncoder;
import com.sap.security.core.server.csi.XSSEncoder;

/**
 * Servlet implementing a simple JDBC based persistence sample application for
 * SAP Cloud Platform.
 */
public class PersistenceWithJDBCServlet extends HttpServlet {

    private static final long serialVersionUID = 1L;
    private static final Logger LOGGER = LoggerFactory
            .getLogger(PersistenceWithJDBCServlet.class);

    private PersonDAO personDAO;

    /** {@inheritDoc} */
    @Override
    public void init() throws ServletException {
        try {
            InitialContext ctx = new InitialContext();
            DataSource ds = (DataSource) ctx
                    .lookup("java:comp/env/jdbc/DefaultDB");
            personDAO = new PersonDAO(ds);
        } catch (SQLException e) {
            throw new ServletException(e);
        } catch (NamingException e) {
            throw new ServletException(e);
        }
    }

    /** {@inheritDoc} */
    @Override
    protected void doGet(HttpServletRequest request,
            HttpServletResponse response) throws ServletException, IOException {
        response.getWriter().println("<p>Persistence with JDBC!</p>");
        try {
            appendPersonTable(response);
4. Save the servlet. The project should compile without any errors.
1. To test your web application on the local server, follow the steps for deploying a web application locally as
described in Deploying Locally from Eclipse IDE [page 1045]. You should see the following output:
2. Enter a first name (for example, John) and a last name (for example, Smith) and choose Add Person.
John Smith is added to the database as shown below:
If you add more names to the database, they will also be listed in the table displayed.
To test your web application in the cloud, define a server in Eclipse. Use the cockpit to create a default binding for your application. Add the application to the new server and start it. This deploys the application in the cloud, and you should see the same output as when the application was tested on the local server.
Prerequisites
● You have set up your runtime environment in the Eclipse IDE. For more information, see Setting Up the
Runtime Environment [page 48].
● You have developed or imported a Java Web application in the Eclipse IDE. For more information, see Developing Java Applications [page 1034] or Importing Samples as Eclipse Projects [page 62].
Note
The application name should be unique enough so that your deployed application can be easily
identified.
○ Select a runtime. If you leave the Automatic option, the server will load the target runtime of your
application.
○ Enter your account name, e-mail or user name, and password and choose Next.
Note
○ If you have previously entered an account and user name for your landscape host, these names are offered in dropdown lists.
Note
If you added the application now, it would start automatically and fail because no data source binding exists yet. You will add the application in a later step.
○ Choose Finish.
3. On the Servers view, open the context menu for the server you just created and choose Show In
Cockpit . The cockpit opens inside Eclipse.
1. In the cockpit, select an account and choose Persistence Databases & Schemas in the navigation area.
2. Select the database that you want to create a binding for.
3. Choose Data Source Bindings in the navigation area.
Note
For more information on Data Source Bindings, see Binding Databases [page 863].
4. Define a binding (<DEFAULT>) for the application and select a database ID. Choose Save.
This creates a default binding for the application. You can use an existing database or create a new one.
1. In the Servers view in Eclipse, open the context menu for the server and choose Add and Remove.... To add the application to the server, move <application name> to the panel on the right side. Choose Finish.
2. Start the server. This will deploy the application and start it on the SAP Cloud Platform.
You can access the application by clicking the application URL on the application overview page in the cockpit.
Note
You cannot deploy multiple applications on the same application process. Deployment of a second application
on the same application process overwrites any previous deployments. If you want to deploy several
applications, deploy each of them on a separate application process.
This three-step guide shows how to replace context.xml with web.xml in your applications.
Overview
Earlier versions of the persistence tutorials used context.xml to declare a reference to the default data source
provided by the persistence service. The tutorials have since been adapted to include the resource reference
description in the web.xml deployment descriptor, in accordance with the Java EE Specification, as follows:
<resource-ref>
<res-ref-name> NAME </res-ref-name>
<res-type> TYPE </res-type>
</resource-ref>
If you have Web applications that use context.xml, you are advised to switch to web.xml as soon as possible by
completing the migration steps described below. The use of context.xml is no longer supported.
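The res-ref-name you declare in web.xml is the suffix that application code looks up under the java:comp/env/ JNDI prefix; the servlet in the JDBC tutorial resolves jdbc/DefaultDB exactly this way. A minimal sketch of composing the lookup name (the class and helper names are ours):

```java
/** Sketch: how a res-ref-name from web.xml maps to the JNDI lookup string. */
public class JndiNameSketch {

    /** Prepends the standard component-environment prefix to a resource reference name. */
    static String lookupName(String resRefName) {
        return "java:comp/env/" + resRefName;
    }

    public static void main(String[] args) {
        // Matches the lookup string used in PersistenceWithJDBCServlet.init()
        System.out.println(lookupName("jdbc/DefaultDB"));
    }
}
```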
Procedure
1. Open the context.xml file in the WebContent/META-INF folder of your Web application project. You should
see the following with similar values (the values shown below are based on the tutorials):
<Resource name="jdbc/DefaultDB"
auth="Container"
type="javax.sql.DataSource"
factory="com.sap.jpaas.service.persistence.core.JNDIDataSourceFactory"/>
You require the resource name and type values in the next step.
2. Add the resource reference description to the web.xml file:
1. Open web.xml in the WebContent/WEB-INF folder of your Web application project.
2. Insert the following content after the <servlet-mapping> elements:
<resource-ref>
<res-ref-name>NAME</res-ref-name>
<res-type>TYPE</res-type>
</resource-ref>
3. Replace the values for the resource name and type with those from step 1, as shown in the example
below, and save:
<resource-ref>
<res-ref-name>jdbc/DefaultDB</res-ref-name>
<res-type>javax.sql.DataSource</res-type>
</resource-ref>
3. Delete context.xml from the WebContent/META-INF folder of your Web application project.
Adding Application-Managed Persistence with JPA (Java Web SDK) [page 807]
Adding Persistence with JDBC (Java Web SDK) [page 819]
This step-by-step tutorial shows how you can create a database on an SAP HANA database system from a
selected account in the SAP Cloud Platform cockpit.
Context
In your account in the SAP Cloud Platform cockpit (cockpit), you create a database on an SAP HANA database
system that is enabled for multitenant database container support. Once the database is available, you start the
SAP HANA Web-based Development Workbench (Web IDE) from the cockpit and create an SAP HANA XS Hello World program. Then you run the program from the Web IDE.
In the cockpit you create a binding between the database and an existing Java application. You deploy the Java
application from the cockpit and run it.
You can view the application in the browser and enter first names and last names in the table. Then switch to the
Catalog view in the Web IDE and search for the new table. Check that the names you entered are available in the
database.
Note
This document relates to beta functionality available on SAP Cloud Platform. To be able to use this functionality, please order an SAP HANA database system enabled for SAP HANA multitenant database containers.
Please contact SAP for details at the SAP Support Portal as described at Get Support [page 1444].
Caution
You should not use SAP Cloud Platform beta features in productive accounts, as any productive use of the beta
functionality is at the customer's own risk, and SAP shall not be liable for errors or damages caused by the use
of beta features.
Table 297:
Steps Tools
1. Create a Database in the Cockpit [page 831] SAP Cloud Platform cockpit
2. Create a Database User with Permissions for Working with Web IDE [page 832] SAP HANA cockpit
3. Start and Work with the Web IDE [page 833] SAP Cloud Platform cockpit
4. Deploy the Persistence with JDBC Java Application [page 834] Maven, Browser
5. View Table Content in Web IDE [page 835] SAP HANA Web-based Development Workbench
Prerequisites
● You have downloaded and set up your Eclipse IDE, SAP HANA Tools for Eclipse, SAP Cloud Platform Tools for
Java, and SDK. For more information, see Installing SAP HANA Tools for Eclipse [page 68] and https://
tools.hana.ondemand.com/#cloud.
Note
The tutorial is based on the SDK for Java Web.
● You have installed an SAP HANA database system enabled for multitenant database container support. This
system must be assigned to an account.
● You have a user with the administrator role for the account.
● You have installed Maven.
Table 298:
Property Value
Database System mdc1 (HANAMDC)
An example of a database system is an SAP HANA system that has multitenant database container support enabled (productive and trial).
Note
mdc1 corresponds to the database system on which you create the database.
SYSTEM User Password Provide the password for the SYSTEM user of the database.
5. Choose Save.
The Events page is displayed. It shows the progress of the database creation. Wait until the tenant database is
in state Started.
6. (Optional) To view the details of the new database, choose Overview in the navigation area and select the
database in the list. Verify that the status STARTED is displayed.
Next step: You can start the SAP HANA Web-based Development Workbench (Web IDE) to work with the new
database. To open the link to the Web IDE, you need a database user with the required permissions to work with
the Web IDE. To create the user with the required permissions, proceed as described in 2. Create a Database User
with Permissions for Working with Web IDE [page 832].
2. Create a Database User with Permissions for Working with Web IDE
You want to connect to the Web IDE and work with it. First you need to create a new database user in the SAP
HANA cockpit and assign the user the required permissions.
1. Go to the cockpit and log on to the SAP HANA cockpit with the SYSTEM user and password.
A message is displayed informing you that, at this point, you lack the roles needed to open the SAP HANA cockpit.
1. To open the SAP HANA cockpit, go to the database overview page in the SAP Cloud Platform cockpit.
2. Choose Persistence Databases & Schemas in the navigation area and select the relevant database
in the list.
3. In the database overview, open the SAP HANA cockpit link under Development Tools.
Caution
At this point, you are still logged on with the SYSTEM user. To work with the SAP HANA Web-based Development Workbench as your new database user, you must first log out of the SAP HANA cockpit; otherwise, you are automatically logged on to the SAP HANA Web-based Development Workbench with the SYSTEM user instead. Therefore, choose Logout before you continue, and then log on to the SAP HANA Web-based Development Workbench with the new database user.
Result: You have created a database user and assigned the user the required roles.
Start the SAP HANA Web-based Development Workbench (Web IDE) from the cockpit and log on with your new
database user and password. Use the Editor to create an SAP HANA XS project and start it.
1. In the SAP Cloud Platform cockpit, choose Persistence Databases & Schemas in the navigation area.
2. Select the relevant database in the list.
3. In the overview that is shown in the lower part of the screen, click the SAP HANA Web-based Development
Workbench link under Development Tools.
Note
Use the Logout button in the header to log on with a different user.
6. To create a new package, choose New Package from the context menu for the Content folder.
7. Enter a package name.
The package appears under the Content folder node.
8. From the context menu for the new package node, choose File Create Application .
9. Select HANA XS Hello World as template and choose Create.
When you click the files under the new package in the hierarchy, they open in the editor.
10. To deploy the program, select the logic.xsjs file from the new package and choose Run.
The program is deployed and displayed in the browser: Hello World from User <Your User>.
You want to work with the application. Deploy the Persistence with JDBC sample in the cockpit, create a binding,
and start the application.
You have downloaded and set up your Eclipse IDE, SAP HANA Tools for Eclipse, and SDK.
For more information, see Installing SAP HANA Tools for Eclipse [page 68].
Note
Do not choose Start. If you choose Start, a default schema and binding will be created for the database.
1. In the cockpit, choose Persistence Databases & Schemas in the navigation area.
2. Select the database in the list.
3. Choose New Binding.
4. Leave the default settings for the data source (<DEFAULT>).
5. Select your Java application.
6. Enter your user for the database and your password.
7. Save your entries.
The binding appears in the list.
To verify that the data you entered in the table is available, use the Web IDE.
1. To view the table in the Web IDE, you have the following options:
○ If the Web IDE is still open, choose Navigation Links Catalog .
○ If you need to reopen the Web IDE, proceed as described in 3. Start and Work with the Web IDE [page
833], and on the entry page choose the Catalog entry point.
2. In the tree, choose Catalog/YourUser/Tables/T_PERSONS.
3. In the table view, choose Open Content to view the table entries.
Related Information
This step-by-step tutorial shows how you create a database in an SAP HANA database system with multitenant
database container support enabled, using SAP Cloud Platform Console Client commands.
Context
In the console client command line, you execute the command to create a database. Once the database is
available, you use the console client command to create a binding between the database and an existing Java
application. You use the commands to deploy the Java application and run it. You can view the application in the
browser, enter first names and last names in the table, and check in SAP HANA Client that the names you entered
are available in the database.
Note
To be able to use this functionality, please order an SAP HANA database system enabled for SAP HANA
multitenant database containers.
Please contact SAP for details at the SAP Support Portal as described at Get Support [page 1444].
Caution
You should not use SAP Cloud Platform beta features in productive accounts, as any productive use of the beta
functionality is at the customer's own risk, and SAP shall not be liable for errors or damages caused by the use
of beta features.
Table 299:
Steps Tools
1. Create a Database Using Database System mdc1 [page 837] Console client, SDK
3. Create a Database User and Assign a Role [page 839] Console client, SDK, Database tunnel
4. Bind Java Application to the Database [page 841] Console client, SDK
5. Start Java Application and Add Person Data with Servlet [page 841] Console client, SDK, Browser
Prerequisites
● You have downloaded and set up your SDK and SAP HANA client. For more information, see https://
tools.hana.ondemand.com/#cloud.
Note
The tutorial is based on the SDK for Java Web.
● You have installed an SAP HANA database system enabled for multitenant database container support. This
system is assigned to an account.
● You have a user with the administrator role for the account.
● You have installed Maven.
Output Code
Create Database
Note
To create a tenant database on a trial landscape, use -trial- instead of the ID of an SAP HANA tenant database.
To access the SAP HANA database, provide the SYSTEM user password.
If the console client response shows that the status is CREATING, repeat the command until the status is STARTED.
Output Code
You need the tunnel to connect to your database. You can use the connection details you obtain from the tunnel response to connect database clients, for example, Eclipse Data Tools Platform (DTP).
Note
The database tunnel must remain open while you work on the remote database instance. Only close the tunnel
once you have completed the session.
Tip
Only use this command window for the tunnel command.
Output Code
Note
You can also create a database user with SAP HANA studio in Eclipse IDE. For more information, see Creating
an SAP HANA Database from the Cockpit [page 830].
Open a new command window and navigate to the <SAP>/hdbclient folder. Start the client to work in
interactive mode.
\hdbclient>hdbsql
Output Code
Output Code
Password:
Connected to localhost:30015
Output Code
0 rows affected (overall time 286,192 msec; server time 11,370 msec)
If the database has a password policy that requires users to change their password after the initial logon, you need to provide a new password; otherwise, you cannot work with the servlet.
Use the quit command to log off from the hdbsql client.
hdbsql NEO_MULTID...=> \q
\hdbclient>hdbsql
Output Code
Output Code
Password:
You have to change your password.
Enter new Password:
Confirm new Password:
Connected to localhost:30015
Output Code
Output Code
Output Code
Copy the URL from the status command into the address field of your browser and add /persistence-with-
jdbc/. Start the servlet in the browser and add person data.
Output Code
2 rows selected (overall time 291,603 msec; server time 156 usec)
Related Information
SAP Cloud Platform account administrators can create databases on database management systems in their
account. Developers can bind databases to applications running on SAP Cloud Platform.
A database is associated with a particular account and is available to applications in this account. You can create
databases, bind them to applications, and delete them using the console client or the cockpit. You can bind the
same database to multiple applications, and the same application to multiple databases.
You can work with different database systems on SAP Cloud Platform, each of which has different capabilities and may be better suited to a trial or to a productive scenario. Read the following explanation and choose the one that best fits your scenario.
Terminology
We use the term “database” to refer collectively to all database types and systems currently in use with SAP Cloud Platform. Note that more specific names might be used to refer to databases in the context of the corresponding technology: SAP Adaptive Server Enterprise (SAP ASE), for example, speaks of user databases, while SAP HANA speaks of multitenant database containers (MDC), also called tenant databases.
A database management system (DBMS) is a computer system that enables administrators, developers, and
applications to interact with one or more databases and provides access to the data contained in the database. It
runs on a hardware host (or several hosts for distributed database systems) and has a version. Examples for
DBMSs are SAP HANA and SAP ASE.
A database is an organized collection of data that can be backed up and restored separately. The database is the technical unit that contains the data, whereas the DBMS is the service that enables users to define, create, query, update, and administer the data. The term “database” is therefore not equivalent to the term “database system”, even though “database” is often used to refer to both a database and the DBMS used to access and manage it.
Table 300:
You want to use an SAP HANA database on a productive landscape.
You can use an SAP HANA database in productive mode.
The productive SAP HANA database provides you with a database reserved for your exclusive use, enabling you to develop with SAP HANA as with an on-premise system. You have full control of user management and can use a range of tools.
For more information, see Using a Productive SAP HANA Database System [page 1080].
You want to use an SAP HANA database on the trial landscape.
You can try out working with an SAP HANA database on the trial landscape.
The trial SAP HANA database provides you with a single database schema or repository package on a shared HANA database, enabling you to work with SAP HANA in a managed environment. Your SAP HANA packages or schemas (and therefore your data) might be distributed across different databases. Restrictions apply to ensure user and data isolation. Developers have limited access rights. You use predefined scripts to grant additional rights and privileges.
You can create SAP ASE databases on database management systems in your account and bind databases to
applications running in the cloud.
You receive a database reserved for you that resides in a multiple-container system (MDC, tenant database). Multiple SAP HANA databases can be hosted on a single SAP HANA database system when support for multitenant database containers is enabled for the SAP HANA database system (currently a beta feature). All tenant databases in the same system share the same system resources (memory and CPU cores), but each tenant database is fully isolated, with its own database users, catalog, repository, persistence (data files and log files), and services.
Restriction
To be able to use this functionality, please order an SAP HANA database system enabled for SAP HANA
multitenant database containers. It is not possible to enable SAP HANA multitenant database containers for
existing SAP HANA database systems.
Please contact SAP for details at the SAP Support Portal as described at Get Support [page 1444].
You want to use an SAP HANA tenant database on a productive landscape.
You can use a tenant database reserved for you in productive mode. However, some restrictions apply.
Caution
You should not use SAP Cloud Platform beta features in productive accounts, as any productive use of the beta functionality is at the customer's own risk, and SAP shall not be liable for errors or damages caused by the use of beta features.
Restriction
Backup
● When you delete tenant databases, data and log backups are also deleted, so that the database cannot be recovered.
● When you stop a tenant database for several days, it may not be possible to recover the database. It is important to keep databases running without long downtimes.
Monitoring
● The availability of SAP HANA databases enabled for multitenant database container support is not monitored, and no alerts are sent when a database is not available.
● The registration of availability checks for HANA native applications is not supported yet.
Memory Management
● Memory allocation limits must be set manually per tenant database using HANA tools such as SAP HANA studio or the HANA Web IDE. The sum of the specified allocation limits must not exceed the memory available for tenant databases. There is no overview at database system level of actual memory consumption and specified memory limits.
● If the specified memory limit for a tenant database is exceeded, it may no longer be possible to connect to the tenant database until it is restarted or the limit is increased by SAP Cloud Platform operators.
● Be aware that setting tight memory limits for tenant databases may lead to failing backups, and a recovery may not always be possible.
Connectivity
You want to use an SAP HANA tenant database on the trial landscape.
You can try out working with a tenant database on the trial landscape.
The trial tenant database offers you the same user experience as the productive tenant database. You create a trial tenant database in the same way, the only difference being that you select the database system HANA MDC (<trial>).
Restriction
● You can create your own trial database on a shared HANA MDC system. The persistence service determines which database system the tenant is assigned to.
● You can create only one trial tenant database in the account.
● Trial databases are configured using a fixed quota for RAM and CPU.
● You can use the trial tenant database for 12 hours. It will be shut down automatically after this period to free resources.
● If you do not use the tenant database for 7 days, it will be deleted automatically to free the consumed disk space.
● Backup is not enabled and no recovery is possible.
● There are further restrictions on which HANA features can and cannot be used in the trial scenario.
Related Information
You can manage the database systems available in your account on SAP Cloud Platform.
Prerequisites
A database management system (DBMS) is a computer system that enables administrators, developers, and
applications to interact with one or more databases and provides access to the data contained in the database. It
runs on a hardware host (or several hosts for distributed database systems) and has a version. Examples for
DBMSs are SAP HANA and SAP ASE.
A database is an organized collection of data that can be backed up and restored separately. The database is the technical unit that contains the data, whereas the DBMS is the service that enables users to define, create, query, update, and administer the data. The term “database” is therefore not equivalent to the term “database system”, even though “database” is often used to refer to both a database and the DBMS used to access and manage it.
SAP Cloud Platform account administrators can create databases on database management systems in their
account. You can use the cockpit or the console client to manage the database systems in the cloud. Typical tasks
that you perform for database management systems include installing and updating database systems, monitoring, and restarting.
Note
We do not offer database systems on the trial landscape.
You can view all the information related to database systems in the cockpit. Start on the dashboard for a selected
account by checking the number of available database systems. Navigate to Persistence Database Systems
and drill down to the level of individual database systems to trigger actions like restart, install, or update.
The following sections are about tasks you perform related to database systems in the cloud.
Related Information
Learn about the activities that you need to perform to update your SAP HANA database systems.
Prerequisites
Basic authentication must be enabled for SAP HANA Application Lifecycle Management to be able to update SAP
HANA XS-based components. You can check and enable basic authentication using the SAP HANA XS
Administration Tool. Navigate to the sap/hana/xs/lm package and add Basic in the Authentication section.
Context
To update your SAP HANA database systems, you have the following options:
● Update the software components installed on your SAP HANA database system to a higher version
● Apply a single Support Package on top of an existing SAP HANA database system
Remember
Make sure that you read the SAP Notes listed in the UI before the update. Apply all the steps required before or
after the update.
Recommendation
We recommend always using the latest available version. For more information about the availability of new
HANA revisions for the update, please refer to the release notes of SAP Cloud Platform. To ensure that you
can use a new HANA revision for productive use, check whether it is marked as production-ready in SAP Note
2021789 - SAP HANA Revision and Maintenance Strategy.
Please expect a temporary downtime for the SAP HANA database or SAP HANA XS Engine when updating SAP
HANA. You might not be able to work with SAP HANA studio, SAP HANA Web-based Development Workbench,
and cockpit UIs that depend on SAP HANA XS.
Procedure
1. Log on to the cockpit with the administrator role on the productive landscape.
2. Select an account.
All database systems available in the account are listed with their details, including the database type, version,
memory size, state, and the number of associated databases.
4. To select the entry for the relevant database system in the list, click the link on its name.
The overview of the database system shows details, including the database version and state, and the number
of associated databases.
5. To update an SAP HANA database system, choose Check for updates.
All versions available for the specified productive SAP HANA database system are listed.
6. Select a version to update.
Remember to read the corresponding release note if you select the option to update to a higher version.
Note
You can select SAP HANA revisions approved for use in SAP Cloud Platform only. If you want to update to
another revision, please contact SAP Support.
Updating an SAP HANA database system to a maintenance revision can result in upgrade path limitations. See SAP Note 1948334 for details.
7. (Optional) Specify if you would like the update process to stop and prompt for confirmation before the update
of the SAP HANA database system is applied and the system downtime is started.
This option is selected by default. If you deselect it, the update is performed without any user interaction.
8. Choose Continue/Update.
The system begins preparing to update. The update process will take some time and is executed
asynchronously. The update dialog box remains on the screen while the update is in progress. It is safe to
close the dialog box and reopen it later.
9. (Optional) If you chose to be prompted for confirmation, the update process stops after the preparation
phase and asks you to confirm the start of the update.
While preparing the update, the SAP HANA database system is not modified, so it is safe to cancel the update
process.
10. Choose Update.
The update starts and takes about 20 minutes.
Results
Note
For more information, see the SAP HANA Developer Guides listed below. Refer to the SAP Cloud Platform
Release Notes to find out which HANA SPS is currently supported by SAP Cloud Platform.
If your databases are not working properly, you can try to solve the issues by restarting the corresponding SAP
HANA or SAP ASE database system. The restart is done for the whole database system.
Procedure
1. Log on to the cockpit and select the account that owns the SAP HANA or SAP ASE database system you
would like to restart.
Note
If security OS patches are pending for the database system you have restarted, the host of the database
system will also be restarted.
Results
If you triggered the restart of an SAP HANA database system, you can also monitor the system status during the
restart using the HANA tools. Connected applications and database users cannot access the system until it is
restarted. The restart for the SAP HANA database system is complete when HANA tools like SAP HANA cockpit
are available again.
● To restart an SAP HANA database system from the console client, use the restart-hana [page 258] command.
● To restart a single tenant database instead of the whole database system, use the stop-db-hana [page 286]
and start-db-hana [page 281] commands or the cockpit.
If your database system is corrupt, you can perform a point-in-time restore by creating a service request in the
cockpit.
Procedure
1. Log on to the cockpit with the administrator role and select an account.
Caution
If you restore a database system, all databases within this system will be restored. If you want to
restore a single database only, see Restoring Databases [page 874].
b. Select the Database System you want to restore from the dropdown box.
c. Use the Restore To field to specify a specific point in time to which you want to restore the database
system.
Caution
You will lose all data stored in the databases in the database system between the time you specify in
the New Service Request screen and the time at which you create the service request. For example, if you
create a restore request at 3 p.m. to restore your database system to 9 a.m. on the same day, all data
stored between 9 a.m. and 3 p.m. will be lost.
d. Choose Save.
A template for opening an incident in the SAP Support Portal is displayed.
e. Select the text in the template between the two dashed lines and copy it to your clipboard.
Tip
Navigate to Persistence Service Requests and choose the Display icon next to your request to
find the template for opening a ticket at any time.
f. Choose Close.
Note
You need the authorization to create an incident. Contact a user administrator in your company to
request this authorization.
Tip
You can find detailed step-by-step instructions for creating an incident in the Report an Incident - Help .
6. Once you have reached the Enter Incident view, enter the following data:
a. In the Classification panel, enter the component for persistency.
Note
For a complete list of SAP Cloud Platform components, see SAP Note 1888290 .
b. In the Problem Details panel, enter the title Database System Restore Request in the Short Text
field.
c. Paste the template text you copied to your clipboard into the Long Text field.
d. Choose Send Incident.
Results
You have created a request for restoring a database system and sent the request to SAP Support for processing.
As soon as your database system is restored, the state of your request will be set to Finished in the cockpit and
the incident you created will be set to Completed. You can see the state of your request in the cockpit by
navigating to Persistence Service Requests . The state is displayed next to your service request. In the
meantime, SAP Support might contact you in case they need further clarification. You will be notified by e-mail if
you need to take any further action.
Note
Your database system is available to all users immediately after the restore has completed successfully.
Note
To cancel your restore request, go to Persistence Service Requests , choose your restore request and
select the Delete icon. Note that your request can only be cancelled if it has the state New.
Prerequisites
Basic authentication must be enabled for SAP HANA Application Lifecycle Management to be able to install SAP
HANA XS-based components. You can check and enable basic authentication using the SAP HANA XS
Administration Tool. Navigate to the sap/hana/xs/lm package and add Basic in the Authentication section.
Context
You can install the following types of SAP HANA components:
● SAP HANA platform components, which are installed on the SAP HANA database system at operating
system level
● SAP HANA XS applications, which are deployed on the SAP HANA database system
Note
You can install only SAP HANA components that are enabled in your account.
Restriction
Installation of SAP HANA XS-based components on SAP HANA database systems that are configured to
support SAP HANA multitenant database containers is currently not supported.
Installation of SAP HANA XS-based components is supported on SAP HANA database systems with version
SPS09 or higher.
Recommendation
We recommend always using the latest available version.
Please expect a temporary downtime for the SAP HANA database or SAP HANA XS Engine when installing some
SAP HANA components. You might not be able to work with SAP HANA studio, SAP HANA Web-based
Development Workbench, and cockpit UIs that depend on SAP HANA XS.
1. Log on to the cockpit with the administrator role on the productive landscape and select an account.
All database systems available in the account are listed with their details, including the database type, version,
memory size, state, and the number of associated databases.
Tip
To view the details of a database, for example, its state and the number of existing bindings, select a
database in the list and click the link on its name. On the overview of the database, you can perform further
actions, for example, delete the database.
3. In the list, click the name of the relevant database system.
The overview of the database system shows details, including the database version and state, and the number
of associated databases.
4. To install an SAP HANA component for the selected productive database system, choose Install components.
All solutions that are available for installation are listed.
5. Select a solution to install.
If you have a license for the solution in your account, all SAP HANA components that are part of the
solution are listed.
6. Select the target version for all listed components.
7. (Optional) Specify if you would like the installation process to stop and prompt for confirmation before the
SAP HANA components are installed and the system downtime is started.
This option is selected by default. If you deselect it, the installation is performed without any user interaction.
8. Choose Continue/Install.
The system begins preparing to install. The installation process will take some time and is executed
asynchronously. The installation dialog box remains on the screen while the installation is in progress. It is
safe to close the dialog box and reopen it later.
9. (Optional) If you chose to be prompted for confirmation, the installation process stops after the
preparation phase and asks you to confirm the start of the installation.
While preparing the installation, the SAP HANA database system is not modified, so it is safe to cancel the
installation process.
10. Choose Install.
The installation starts and takes about 20 minutes.
Results
SAP HANA components are installed on your SAP HANA database system.
Note
For more information, see the SAP HANA Developer Guides listed below. Refer to the SAP Cloud Platform
Release Notes to find out which HANA SPS is supported by SAP Cloud Platform.
The persistence service provides a set of console client commands for managing database systems. For example,
you can list database systems available for an account or restart a whole SAP HANA database system.
Related Information
You can create databases, bind them to applications running on SAP Cloud Platform, and delete them.
Note
This section explains working with SAP HANA multitenant database containers (MDC - also called tenant
databases), and SAP ASE databases.
For more information about working with SAP HANA database systems (using schemas instead of tenant
databases), see Administering Database Schemas [page 901] and SAP HANA: Development [page 1078].
Create
You can create databases on database management systems in your account and assign properties like database
size. The database is independent of any single application and has to be explicitly bound.
You can use a freely definable database ID. Which characters you are allowed to use depends on the database
type that you create. A database ID must be unique among the databases in an account. Remember that the
physical database name is not the same as the database ID.
Bind
Bindings are identified by a data source name, which must be unique within an application. You can bind
databases to applications based on an explicitly named data source or using the default data source.
You can share a database between applications by binding the same database to more than one application.
Remember the following when binding databases to applications:
● An application’s bindings are based on either named data sources or the default data source. An application
cannot use a combination of the two types of bindings.
● When named data sources are used, binding names must be unique within an application.
When you bind the database to an application, you specify a custom logon, consisting of a database user
name and a password, which the application then uses to access the database.
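Once a binding is in place, the application obtains connections through the data source it names. The sketch below shows the usual JNDI lookup pattern in a Java application; the jdbc/ prefix and the DefaultDB name for the default data source reflect common platform conventions, and the class name is hypothetical, so verify both against your own binding configuration:

```java
import javax.naming.InitialContext;
import javax.naming.NamingException;
import javax.sql.DataSource;
import java.sql.Connection;
import java.sql.SQLException;

/** Sketch of how a bound data source is typically consumed in application code. */
public class DataSourceLookup {

    /** Builds the JNDI name under which a bound data source is exposed to the application. */
    static String jndiName(String dataSourceName) {
        return "java:comp/env/jdbc/" + dataSourceName;
    }

    /** Looks up a data source bound to this application and opens a connection through it. */
    static Connection connect(String dataSourceName) throws NamingException, SQLException {
        InitialContext ctx = new InitialContext();
        DataSource ds = (DataSource) ctx.lookup(jndiName(dataSourceName));
        // The connection is opened with the custom logon supplied when the binding was created.
        return ds.getConnection();
    }

    public static void main(String[] args) {
        // The default data source is conventionally named "DefaultDB";
        // a named binding is looked up under the data source name chosen for it.
        System.out.println(jndiName("DefaultDB"));
    }
}
```

Note that the lookup itself only succeeds inside a container that provides the JNDI environment; outside the server, only the name-building helper runs.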
Delete
You should drop a database if it is no longer required, or if you want to redeploy an application from scratch
and discard its old data.
Before deleting a database, you should explicitly remove any bindings that still exist between the database and an
application. You can also remove all bindings by enforcing deletion of the database by executing the
corresponding console client command.
Restart
If your databases are not working properly, you can try to solve the issues by restarting either the whole SAP
HANA database system, or a single tenant database.
For more information about restarting an SAP HANA database system, see Restarting Database Systems [page
850].
Related Information
Use the cockpit to create databases on database management systems in your account and assign properties like
database size.
Context
In the cockpit, you can create databases at the account and the database system level. The procedures listed
below describe how to create a database at the account level. To create a database at the database system level,
choose Persistence Database Systems in the navigation area at the account level. Select a database
system in the list. Choose Databases in the navigation area at the database system level. Then choose New
Database and enter the required details.
The number of SAP HANA tenant databases and SAP ASE user databases you can create is limited. You receive
an error message once the maximum number of databases is reached. The default limits for SAP HANA and SAP
ASE database systems are shown in the tables below.
Note
Depending on your database system configuration, the number of tenant/user databases you can create might
differ from the limits shown below.
SAP HANA Memory Size Number of Tenant Databases You Can Create
24GB 1
32GB 1
64GB 4
128GB 10
256GB 24
512GB 50
Table 303:
SAP ASE T-Shirt Size Number of User Databases You Can Create
120MB 5
40GB 200
80GB 200
160GB 200
320GB 200
640GB 200
Related Information
Use the cockpit to create an SAP HANA database enabled for multitenant database container support (beta) on
an SAP HANA database management system in your account.
Context
The procedure below describes how to create a database at the account level.
Procedure
Tip
To view the details of a database, for example, its state and the number of existing bindings, select a
database in the list and click the link on its name. On the overview of the database, you can perform further
actions, for example, delete the database.
5. Select a Database System from the dropdown box. The list shows the IDs of all database systems deployed in
your account.
6. Specify the SYSTEM user password to access the database.
7. (For accounts on the trial landscape only) Turn on the Configure User for SHINE switch to create a user for the
SAP HANA Interactive Education (SHINE) demo application.
a. In the SHINE User Name field, provide a user name for the SHINE user.
Note
The user name can only contain uppercase and lowercase letters ('a' - 'z', 'A' - 'Z'), numbers ('0' - '9'),
and underscores ('_').
b. Provide a password for the SHINE user in the SHINE User Password field and repeat the password in the
Repeat Password field.
Note
The password must contain at least one uppercase and one lowercase letter ('a' - 'z', 'A' - 'Z') and one
number ('0' - '9'). It can also contain special characters (except ", ' and \).
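As an illustration only, the stated character rules can be mirrored in code. The sketch below implements exactly the rules quoted here and nothing more (the platform may enforce additional constraints, such as a minimum length), and the class name is hypothetical:

```java
/** Illustrative checks mirroring the SHINE user name and password rules stated in the text. */
public class ShineCredentialCheck {

    // User name rule: uppercase and lowercase letters, digits, and underscores only.
    static boolean validUserName(String name) {
        return name.matches("[A-Za-z0-9_]+");
    }

    // Password rule: at least one uppercase letter, one lowercase letter, and one digit;
    // special characters are allowed except double quote, single quote, and backslash.
    static boolean validPassword(String pw) {
        return pw.matches("(?=.*[A-Z])(?=.*[a-z])(?=.*[0-9])[^\"'\\\\]+");
    }

    public static void main(String[] args) {
        System.out.println(validUserName("SHINE_USER1")); // letters, digits, underscore: valid
        System.out.println(validPassword("Secret123!"));  // mixed case plus digit: valid
        System.out.println(validPassword("secret123"));   // no uppercase letter: invalid
    }
}
```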
Note
The number of databases you can create is limited. You receive an error message once the maximum
number of databases is reached. For more information on tenant database limits, see Creating Databases
[page 857].
Results
The Events page is displayed. It shows the progress of the database creation. Wait until the tenant database is in
state Started.
Next Steps
You can perform further actions for the newly created database, for example, configure or delete it. Proceed as
follows:
● To create bindings for the database, choose Data Source Bindings in the navigation area.
● To monitor the progress of the database creation in detail, choose Events in the navigation area.
● To delete a database, first delete all existing bindings to the database. In the overview of the database, choose
the Delete button. It is only enabled if a database does not have any bindings.
Use the cockpit to create an SAP ASE database on an SAP ASE database management system in your account
and assign properties like database size.
Context
The procedure below describes how to create a database at the account level.
Procedure
Tip
To view the details of a database, for example, its state and the number of existing bindings, select a
database in the list and click the link on its name. On the overview of the database, you can perform further
actions, for example, delete the database.
This parameter sets the maximum database size. The minimum database size is 24 MB. An error message
appears if you enter a database size that exceeds the quota for this database system.
The size of the transaction log will be at least 25% of the database size you specify. For example, if you specify 1 GB, at least 256 MB is reserved for the log.
7. Specify a database user.
The user is created for you on the database and enables you to access the database.
8. Specify the database user password to access the database.
9. Choose Save.
Note
The number of databases you can create is limited. You receive an error message once the maximum
number of databases is reached. For more information on user database limits, see Creating Databases
[page 857].
Results
The Events page is displayed. It shows the progress of the database creation. Wait until the database is in state
Started.
Next Steps
You can perform further actions for the newly created database, for example, configure or delete it. Proceed as
follows:
● To create bindings for the database, choose Data Source Bindings in the navigation area.
● To monitor the progress of the database creation in detail, choose Events in the navigation area.
● To delete a database, first delete all existing bindings to the database. In the overview of the database, choose
the Delete button. It is only enabled if a database does not have any bindings.
Related Information
Prerequisites
If you want to bind your application to a database that is owned by another account of your global account, you
need permission to use the database. See Adding New Cross-Account Permissions [page 881].
Context
You can bind your applications to databases that are owned by your own account or by other accounts of your
global account.
Note
To bind your databases to accounts that do not belong to your global account, see Sharing Databases with
Other Accounts [page 886].
In the cockpit, you can create and delete database bindings at both the database and application level:
● To create bindings by database, use the Data Source Bindings panel at the database level.
● To create bindings by application, use the Data Source Bindings panel at the application level.
Procedure
Note
The application must be deployed in the selected account.
3. Enter a database user name and a password in the Custom Logon section.
Caution
The initial password of this database user needs to be changed before binding
the application to an SAP HANA database, since the application will otherwise
throw an exception.
4. Select the checkbox Verify credentials to verify the validity of the custom logon
data.
5. Save your entries.
By application:
1. Choose Applications Java Applications in the navigation area and select the
relevant application in the application list.
2. Choose Configuration Data Source Bindings in the navigation area.
The overview shows the bindings available for the specific application.
3. Choose the New Binding button.
4. In the New Binding screen, enter the following details:
1. Enter a data source name.
2. Select the database that you want the application to be bound to.
3. (Optional) If the database does not already exist, create it first. For more information, see Creating Databases [page 857].
4. Enter a database user name and a password in the Custom Logon section.
Caution
The initial password of this database user needs to be changed before binding
the application to an SAP HANA database, since the application will otherwise
throw an exception.
5. Select the checkbox Verify credentials to verify the validity of the custom logon
data.
6. Save your entries.
Next Steps
The state of an application determines when a newly bound database becomes effective. If an application is
already running (Started state), it continues using the old database until restarted. A restart is also required
if additional databases have been bound to the application.
Note
To unbind a database from an application, simply delete the binding. The application will maintain access to the
database until restarted.
Related Information
Procedure
Note
By default, the user has the permissions required to use the new schema. You can assign the user
additional permissions or remove permissions, as necessary.
5. Log off and then reconnect to the SAP HANA system using the database user and password you just created.
6. Change the initial password when prompted.
Caution
Make sure you change the initial password when prompted and before binding the HANA database to the
application, since the application will otherwise throw an exception.
7. In the Systems view, expand the Catalog node. You should see a schema with the same name as your
database user.
Related Information
Procedure
In the console client command line, execute the create-db-user-ase command. This command creates a user
for an SAP ASE database.
The database user you specify when you create the binding determines which schema an application is able to
access. Typically the application uses the database user’s default schema, but since a database user may have
access to more than one schema, it could potentially also use any of these non-default schemas.
Default Schemas
The default schema is the schema whose name is identical to that of the database user. It is created automatically
when a database user is created.
We recommend working with a database user’s default schema. If you require multiple schemas, simply create
separate appropriately named database users and then bind each of their default schemas to the application
using named data sources. If you choose to use non-default schemas, be aware that this is more error prone and
requires greater care with the application code.
Non-default Schemas
An application can access a non-default schema in its program code by adding the schema name as a prefix to the
table name as follows: <schema name>.<table name>
When programming with JPA, you add the schema prefix to the table annotation in the JPA entity class.
Example
Table T_PERSON in the schema COMPANYDATA:
@Entity
@Table(name = "COMPANYDATA.T_PERSON")
For JDBC, all occurrences of the table names in SQL statements require the schema prefix.
Example
Table T_PERSONS in the schema COMPANYDATA:
Table 304:
INSERT "INSERT INTO COMPANYDATA.T_PERSONS (ID, FIRSTNAME, LASTNAME) VALUES (?, ?, ?)"
CREATE "CREATE TABLE COMPANYDATA.T_PERSONS (ID VARCHAR(255) PRIMARY KEY NOT NULL,
FIRSTNAME VARCHAR (255), LASTNAME VARCHAR (255))"
Note
When you retrieve database metadata in order to check whether a table already exists, bear in mind that you
might also need to specify the schema parameter, in particular if you have multiple schemas containing tables
with identical names.
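As an illustration of such a check, here is a minimal JDBC sketch; the class and method names are hypothetical, and the schema and table names are the example values used above:

```java
import java.sql.Connection;
import java.sql.DatabaseMetaData;
import java.sql.ResultSet;
import java.sql.SQLException;

/** Sketch of a schema-aware table existence check via JDBC metadata. */
public class TableLookup {

    /** Builds the schema-qualified table name, e.g. COMPANYDATA.T_PERSONS. */
    static String qualified(String schema, String table) {
        return schema + "." + table;
    }

    /**
     * Returns true if the given table exists in the given schema. Passing the
     * schema explicitly avoids a false positive when another schema contains
     * a table with the same name.
     */
    static boolean tableExists(Connection conn, String schema, String table)
            throws SQLException {
        DatabaseMetaData meta = conn.getMetaData();
        try (ResultSet rs = meta.getTables(null, schema, table, new String[] { "TABLE" })) {
            return rs.next();
        }
    }

    public static void main(String[] args) {
        System.out.println(qualified("COMPANYDATA", "T_PERSONS"));
    }
}
```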
Java applications deployed on SAP Cloud Platform can be assigned one or more database schemas. For
developing Java applications on productive SAP HANA databases, custom logons allow you to control which
schemas an application is able to access.
Prerequisites
● You have installed the required tools. See Installing SAP HANA Tools for Eclipse [page 68].
● You have connected to the productive SAP HANA database from Eclipse. See Connecting to SAP HANA
Databases via the Eclipse IDE [page 932].
● You have set up the console client. See Setting Up the Console Client [page 52].
● You have created a database user that you use to access the database. See Creating a Database
Administrator User [page 1084].
Context
Productive SAP HANA databases are designed for developing with SAP HANA in a productive environment and
provide you with a database reserved for your exclusive use. When you bind Java applications to a productive SAP
HANA database, you specify a custom logon, which consists of an SAP HANA database user, in effect the relevant
schema owner, and a password. The database user is then used by the application to access the SAP HANA database.
1. Create an SAP HANA database user. You can use Eclipse (as described below) or the SAP HANA Web-based
Development Workbench.
2. Bind the HANA database to the Java application using the new database user. You can use the cockpit or
console client.
Procedure
Note
By default, the user has the permissions required to use the new schema. You can assign the user
additional permissions or remove permissions, as necessary.
5. Log off and then reconnect to the SAP HANA system using the database user and password you just created.
6. Change the initial password when prompted.
7. In the Systems view, expand the Catalog node. You should see a schema with the same name as your
database user.
Related Information
In the cockpit, you can create bindings at both the account and application level, that is, by HANA database or by
Java application.
Procedure
○ Persistence Databases & Schemas : In the list, select the relevant SAP
HANA database.
Note
For productive SAP HANA databases, the ID is identical to the database system
name.
Note
○ The specified application must be deployed in the selected account.
○ To create a binding to the default data source, enter the data source name
<DEFAULT>.
Caution
The initial password of this database user needs to be changed before binding the
application to an SAP HANA database, since the application will otherwise throw an
exception.
6. Select the checkbox Verify credentials to verify the validity of the custom logon data.
7. Save your entries.
By Java application:
1. Choose Applications Java Applications in the navigation area and select the
relevant application in the application list.
Caution
The initial password of this database user needs to be changed before binding
the application to an SAP HANA database, since the application will otherwise
throw an exception.
7. Select the checkbox Verify credentials to verify the validity of the custom logon data.
8. Save your entries.
An application’s state influences when a newly bound SAP HANA database becomes effective. If an application is
already running (Started state), it will not have access to the newly bound HANA database until it has been
restarted.
Related Information
Procedure
1. Open the command window in the <SDK>/tools folder and enter the following command, replacing the
values as appropriate:
Example:
Note that in this example a data source name has not been specified and the application therefore uses the
default data source.
Caution
The initial password of the database user needs to be changed before binding the application to an SAP
HANA database, since the application will otherwise throw an exception.
For the example above, the output should show the following:
Related Information
The database user you specify when you create the binding determines which schema an application is able to
access. Typically the application uses the database user’s default schema, but since a database user may have
access to more than one schema, it could potentially also use any of these non-default schemas.
Default Schemas
The default schema is the schema whose name is identical to that of the database user. It is created automatically
when a database user is created.
We recommend working with a database user’s default schema. If you require multiple schemas, simply create
separate appropriately named database users and then bind each of their default schemas to the application
using named data sources. If you choose to use non-default schemas, be aware that this is more error prone and
requires greater care with the application code.
Non-default Schemas
An application can access a non-default schema in its program code by adding the schema name as a prefix to the
table name as follows: <schema name>.<table name>
When programming with JPA, you add the schema prefix to the table annotation in the JPA entity class.
Example
Table T_PERSON in the schema COMPANYDATA:
@Entity
@Table(name = "COMPANYDATA.T_PERSON")
Example
Table T_PERSONS in the schema COMPANYDATA:
Table 306:
INSERT "INSERT INTO COMPANYDATA.T_PERSONS (ID, FIRSTNAME, LASTNAME) VALUES (?, ?, ?)"
CREATE "CREATE TABLE COMPANYDATA.T_PERSONS (ID VARCHAR(255) PRIMARY KEY NOT NULL,
FIRSTNAME VARCHAR (255), LASTNAME VARCHAR (255))"
Note
When you retrieve database metadata in order to check whether a table already exists, bear in mind that you
might also need to specify the schema parameter, in particular if you have multiple schemas containing tables
with identical names.
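To make the prefixing rule concrete, here is a minimal JDBC sketch built around the example INSERT statement from Table 306; the class and method names are hypothetical, and the connection is assumed to come from the application's bound data source:

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;

/** Sketch of executing a schema-prefixed SQL statement over JDBC. */
public class PersonInsert {

    // The statement text from Table 306; the schema prefix appears on every table reference.
    static String insertSql(String schema) {
        return "INSERT INTO " + schema + ".T_PERSONS (ID, FIRSTNAME, LASTNAME) VALUES (?, ?, ?)";
    }

    /** Executes the prefixed INSERT over an existing JDBC connection. */
    static void insertPerson(Connection conn, String id, String first, String last)
            throws SQLException {
        try (PreparedStatement ps = conn.prepareStatement(insertSql("COMPANYDATA"))) {
            ps.setString(1, id);
            ps.setString(2, first);
            ps.setString(3, last);
            ps.executeUpdate();
        }
    }

    public static void main(String[] args) {
        System.out.println(insertSql("COMPANYDATA"));
    }
}
```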
If your database is corrupt, you can perform a point-in-time restore by creating a service request in the cockpit.
Procedure
1. Log on to the cockpit with the administrator role and select an account.
d. Choose Save.
A template for opening an incident in the SAP Support Portal is displayed.
e. Select the text in the template between the two dashed lines and copy it to the clipboard.
Tip
Navigate to Persistence Service Requests and choose the Display icon to find the template for
opening a ticket at any time.
f. Choose Close.
4. Log on to the SAP Support Portal with your S-user ID and password and create a new incident by choosing
Report an Incident.
Note
You need the authorization to create an incident. Contact a user administrator in your company to
request this authorization.
Note
You can find detailed step-by-step instructions for creating an incident in the Report an Incident - Help .
6. Once you have reached the Enter Incident view, enter the following data:
a. In the Classification panel, enter the component for persistency.
Note
For a complete list of SAP Cloud Platform components, see SAP Note 1888290 .
b. In the Problem Details panel, enter the title Database Restore Request in the Short Text field.
c. Paste the template text you copied to your clipboard into the Long Text field.
d. Choose Send Incident.
Results
You have created a request for restoring a database and sent the request to SAP Support for processing. As soon
as your database is restored, the state of your request will be set to Finished in the cockpit and the incident you
created will be set to Completed. You can see the state of your request in the cockpit by navigating to
Persistence Service Requests . The state is displayed next to your service request. In the meantime, SAP
Support might contact you in case they need further clarification. You will be notified by e-mail if you need to
take any further action.
Note
Your database is available to all users immediately after the restore has completed successfully.
Note
To cancel your restore request, go to Persistence Service Requests , choose your restore request and
select the Delete icon. Note that your request can only be cancelled if it has the state New.
Related Information
Before you delete an SAP Cloud Platform account in which an SAP ASE database is deployed, you can export the
data from that database.
Procedure
1. To open a tunnel to the SAP ASE database, follow the steps described in Opening a Database Tunnel [page
921].
2. Connect to your database using the Eclipse Data Tools Platform (DTP) as described in Connecting to the
Remote SAP ASE Database [page 930].
Note
Instead of Eclipse DTP, you can also use any other JDBC client that offers export or extraction
functionality.
3. In Eclipse, select all tables that contain data you would like to export in the Data Source Explorer view and
choose Data Export on each table from the context menu.
4. Select the location of the file and choose a file format for the data export by selecting .csv or .data from the
dropdown list.
5. Choose Save.
You can share productive SAP HANA and SAP ASE databases that are provisioned in an account with other
accounts.
When you provision a database in an SAP Cloud Platform account, only this account has access to it. You can
change this by using the cockpit and/or the console client to give other accounts controlled access to
productive SAP HANA databases and SAP ASE databases that are owned by a different account. You can also
allow other accounts to bind their Java applications to a database in a different account.
Table 307:
Method: Sharing Databases in the Same Global Account [page 878]
Description: This method allows you to give an account permission to use a database that is owned by a
different account. You can add and revoke this permission using the cockpit or the console client. See
Managing Cross-Account Permissions [page 880].
Restriction
The account providing the permission and the account receiving the permission must be part of the same
global account. For more information on global accounts, see Accounts [page 13].
The account receiving the permission can bind its applications and/or open a tunnel to the database in the
different account. See Binding Applications to Databases in the Same Global Account [page 885] and
Opening a Database Tunnel [page 921].
Method: Sharing Databases with Other Accounts [page 886]
Description: This method allows you to give any account permission to use a database that is owned by a
different account. You can add and revoke this permission using the console client. See Managing Access to
Databases for Other Accounts [page 889].
The account receiving the permission uses an access token to bind a Java application or to open a tunnel to
a database in the other account.
You can share productive SAP HANA or SAP ASE databases that have been provisioned in an account with other
accounts of your global account.
Note
The following explanations only apply to accounts that belong to the same global account. If you want to share
a database with an account that is not part of your global account, see Sharing Databases with Other Accounts
[page 886].
You can give accounts controlled access to a database owned by another account by adding a cross-account
permission for the accounts requesting access. Depending on the type of permission you provide, the owners of
the accounts receiving the permission can bind their applications to the database [page 863] and/or open a
tunnel to the database [page 921] that is owned by another account.
Note
Sharing databases in the same global account is only possible on the productive landscape, not on the trial
landscape.
To give cross-account permissions to other accounts in your global account, you log on to the account in which
the database you want to share is provisioned. Then you use the SAP Cloud Platform cockpit or the console client
to give permissions to other accounts. Owners of the accounts receiving the permission will be able to see the
database listed in the cockpit and in the console client, and use it in accordance with the permissions given.
The table below lists the tasks and the person responsible for sharing databases with other accounts in the same
global account:
Table 308:
Task | Responsible Role | Console Client Command
Adding New Cross-Account Permissions [page 881] | Administrator in the account that owns the database | grant-db-access [page 191]
Revoking Cross-Account Permissions [page 884] | Administrator in the account that owns the database | revoke-db-access [page 260]
Binding Applications to Databases in the Same Global Account [page 885] | Member of the account that has requested permission to use a database owned by another account | bind-db [page 115]
Opening a Database Tunnel [page 921] | Member of the account that has requested permission to use a database owned by another account | open-db-tunnel [page 246]
After the cross-account permissions have been given, members of account C can see the databases owned by
account A and B in the console client and in the cockpit. As shown in the picture below, account C binds two of its
Java applications to the database in Account A. The cross-account permission for data source bindings provided
to account C by account A is not restricted to a single application. All members of account C can bind multiple
Java applications to the database in account A. Due to the cross-account permission for opening database
tunnels provided to account C by account B, all members of account C can also open a tunnel to the database in
account B.
As an account member with the administrator role, you can add, change, and revoke cross-account permissions
for accounts in your global account by using the cockpit or the console client.
Caution
If you want to share a database with an account that is not part of your global account, follow the steps
described in Sharing Databases with Other Accounts [page 886].
You use the cockpit or the console client to create a new cross-account permission, allowing an account to use a
database that is owned by another account.
Prerequisites
● The database you would like to share has been provisioned in an account. See Creating Databases [page
857].
● You have the administrator role in that account.
● (For the console command only) You have set up the console client. See Setting Up the Console Client [page
52] and Using the Console Client [page 102].
Context
As an account member with the administrator role, you use the cockpit or the console client to give accounts
permission to use a productive SAP HANA or SAP ASE database that is owned by another account.
Restriction
The account providing the permission to use the database and the account receiving the permission must be
part of the same global account.
Procedure
Using the Cockpit
1. Log on to the cockpit with the administrator role and select the account that owns the database you would like to share.
Using the Console Client
1. Open the command window in the <SDK>/tools folder and enter the following command:
Note
For an example, see grant-db-access [page 191].
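Purely as an illustration, a grant-db-access call could look like the following; the account names and database ID are placeholders, and the exact parameter names are assumptions to be checked against the grant-db-access reference page:

```
neo grant-db-access --host hana.ondemand.com --account myaccount --user myuser \
  --id mydb --permissions BINDING,TUNNEL --to-account otheraccount
```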
2. Optional: Check that permission has been given successfully by entering the following
command:
Note
For an example, see list-db-access-permissions [page 220].
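A hypothetical check, with placeholder names (verify the exact parameters on the list-db-access-permissions reference page):

```
neo list-db-access-permissions --host hana.ondemand.com --account myaccount \
  --user myuser --id mydb
```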
Results
You have given an account permission to use a database that is owned by another account. In the account that
owns the database, the Shared icon is displayed in the Databases & Schemas list in the cockpit next to all
databases that can be used by other accounts.
Related Information
You use the cockpit to change the type of an existing cross-account permission.
Prerequisites
● You have the administrator role in the account that owns the database.
● You have given an account permission to use a database that is owned by another account. The account
providing the permission and the account receiving the permission are part of the same global account. See
Adding New Cross-Account Permissions [page 881].
Procedure
1. Log on to the cockpit with the administrator role and select the account that owns the database for which you
would like to change permissions.
Results
Related Information
You use the cockpit or the console client to revoke a cross-account permission.
Prerequisites
● You have the administrator role in the account that owns the database.
● You have given an account permission to use a database that is owned by another account. The account
providing the permission and the account receiving the permission are part of the same global account. See
Adding New Cross-Account Permissions [page 881].
● (For the console command only) You have set up the console client. See Setting Up the Console Client [page
52] and Using the Console Client [page 102].
Procedure
Using the Cockpit
1. Log on to the cockpit with the administrator role and select the account that owns the database for which you would like to revoke permissions.
2. Choose Persistence > Databases & Schemas in the navigation area.
3. Choose the required database.
4. In the navigation area, choose Cross-Account Permissions.
5. Choose the Delete icon next to the account that you want to revoke the permission for.
Caution
Choosing the Delete icon will revoke all cross-account permissions for this account. To
change the type of permission for an account, from Tunnel to Binding for example, see
Changing Cross-Account Permission Types [page 883].
Using the Console Client
Open the command window in the <SDK>/tools folder and enter the following command:
Note
For an example, see revoke-db-access [page 260].
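For illustration only (placeholder names; the parameter names are assumptions, see the revoke-db-access reference page for the authoritative syntax):

```
neo revoke-db-access --host hana.ondemand.com --account myaccount --user myuser \
  --id mydb --to-account otheraccount
```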
You have revoked the permission to access a database for another account.
Related Information
You use the cockpit or the console client to bind a Java application that you deployed in one account to a
productive SAP HANA or SAP ASE database that is owned by another account.
Prerequisites
● You have deployed a Java application to SAP Cloud Platform. See Deploying and Updating Applications [page
1043].
● (For the console commands only) You have set up the console client. See Setting Up the Console Client [page
52] and Using the Console Client [page 102].
● The account that owns the database and the account in which the Java application has been deployed are
part of the same global account. The account that owns the database has given the account in which the Java
application has been deployed permission to bind the application to the database. See Managing Cross-
Account Permissions [page 880].
Procedure
Using the Cockpit
Log on to the cockpit and choose the account in which the application you would like to bind has been deployed. Follow the steps described in Binding Databases [page 863]. When prompted to select the database that you want to bind the application to, select the database that is owned by another account.
Note
To unbind the database from an application, simply delete the binding. The application will maintain access to the database until restarted.
Using the Console Client
Open the command window in the <SDK>/tools folder and enter the command for binding an application to a database in another account (same global account) described in bind-db [page 115].
Note
To unbind the database from an application, open the command window in the
<SDK>/tools folder and enter the following command:
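A sketch of such an unbind-db call; the account and application names are placeholders and the parameter names are assumptions:

```
neo unbind-db --host hana.ondemand.com --account myaccount --application myapp \
  --user myuser
```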
Results
You have bound a Java application to a database that is owned by another account in your global account.
You can share a productive SAP HANA or SAP ASE database that is owned by an account with other accounts.
Note
We recommend using this method if you want to share your database with accounts that do not belong to your
global account. To share your database in the same global account, see Sharing Databases in the Same Global
Account [page 878].
You can allow an account to access a database that is owned by another account by generating an access token
with the console client. A member of the account requesting access to the database can use the access token to
bind a Java application [page 897] and/or to open a tunnel [page 898] to the database in question.
Note
Sharing databases with other accounts is only possible on the productive landscape, not on the trial landscape.
The access token uniquely identifies the access permission based on the following:
● It always applies to one database (and one application, if the permission allows for a data source binding) and is not transferable.
● It has an unlimited validity period.
● (For application bindings only) It can be used for as long as application bindings exist or until the permission is
revoked. It can be revoked whenever you wish, irrespective of whether the target application has already been
bound to the database.
The table below lists the tasks and the person responsible for sharing databases with other accounts:
Table 309:
Task | Responsible Role | Console Client Command
Giving Applications in Other Accounts Permission to Access a Database [page 890] | Administrator in the account that owns the database | grant-schema-access [page 193]
Revoking Database Access Permissions for Applications in Other Accounts [page 892] | Administrator in the account that owns the database | revoke-schema-access [page 262]
Giving Other Accounts Permission to Open a Database Tunnel [page 893] | Administrator in the account that owns the database | grant-db-tunnel-access [page 192]
Revoking Tunnel Access to Databases for Other Accounts [page 895] | Administrator in the account that owns the database | revoke-db-tunnel-access [page 261]
Binding Applications to Databases in Other Accounts [page 897] | Member of the account that has requested permission to use a database owned by another account | bind-db [page 115] (for SAP HANA MDC and SAP ASE databases); bind-hana-dbms [page 118] (for productive SAP HANA database systems)
Opening Tunnels to Databases in Other Accounts [page 898] | Member of the account that has requested permission to use a database owned by another account | open-db-tunnel [page 246]
In addition, a member of account C has requested to open a tunnel to a database in account B. An administrator in
account B hence generates an access token and creates a database user with the appropriate roles and privileges.
The administrator provides the credentials of that user together with the access token to at least one member of
account C.
As shown in the picture below, the access token provided by account A is used by a member of account C to bind
Java application 1 to the database in account A. The token only applies to Java application 1; it is not
possible to bind other Java applications in account C to the database in account A. The access token provided by
account B is used by a member of account C to open a tunnel to the database in account B. All members of
account C can open tunnels to the database in account B if they are in possession of the access token.
As an account member with the administrator role, you can manage access to databases for other accounts.
Caution
If you want to share a database with an account that is part of your global account, we recommend you follow
the steps described in Managing Cross-Account Permissions [page 880].
You can give a Java application in another account permission to access a productive SAP HANA or SAP ASE
database in your account.
Prerequisites
● The database you would like to share has been provisioned in an account. See Creating Databases [page
857].
● You have the administrator role in that account.
● You have set up the console client. See Setting Up the Console Client [page 52] and Using the Console Client
[page 102].
Context
To give a Java application permission to access a database in your account, you generate an access token using
the grant-schema-access command. A member of the account in which the application has been deployed uses the
token to create a data source binding. The access token has the following characteristics:
● It always applies to one database and one application, and is not transferable
● It has an unlimited validity period
● It can be revoked whenever you wish, irrespective of whether the target application has already been bound to
the database
● It can be used for as long as application bindings exist or until the permission is revoked
Procedure
Open the command window in the <SDK>/tools folder and enter the following command:
Note
Specify the requesting application in the format <account>:<application>.
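An illustrative call using the <account>:<application> format described above (all names are placeholders; confirm the parameters on the grant-schema-access reference page):

```
neo grant-schema-access --host hana.ondemand.com --account myaccount --user myuser \
  --id myschema --application otheraccount:theirapp
```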
Next Steps
To give a Java application in another account access to your database, create a database user and a password
and provide it, together with the access token, to a member of the account receiving the permission.
Related Information
You can revoke the permission to access a productive SAP HANA or SAP ASE database in your account for
applications in other accounts.
Prerequisites
● You have given an application in another account permission to use a database in your account. See Giving
Applications in Other Accounts Permission to Access a Database [page 890].
● You have the administrator role in that account.
● You have set up the console client. See Setting Up the Console Client [page 52] and Using the Console Client
[page 102].
Context
Note
You can revoke the permission to use a database in your account for applications in other accounts at any time,
irrespective of whether the applications have already been bound to the database.
Procedure
1. Open the command window in the <SDK>/tools folder and enter the following command to list all
permissions for the specified database:
Table 310:
Access Token | Provided To | Bound
2. To revoke the permission, enter the following command and copy across the access token obtained in the
previous step:
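As a sketch, assuming the token is passed via an --access-token parameter (check the revoke-schema-access reference page for the authoritative syntax):

```
neo revoke-schema-access --host hana.ondemand.com --account myaccount --user myuser \
  --access-token <access_token>
```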
Caution
We strongly recommend also deleting the database user and password you provided to the other account
requesting the access to your database.
If the access token has already been used to bind the database, revoking the access permission will also
unbind the database. If the application is running, it will continue to use the database until it is restarted.
3. Optional: Check that the access token has been revoked by listing all permissions again as described in step 1
or using the display-schema-info command.
Related Information
You can allow other accounts to open a tunnel to a productive SAP ASE or SAP HANA database in your account.
Prerequisites
● The database you would like to share has been provisioned in an account. See Creating Databases [page
857].
Context
To give another account permission to open a tunnel to your database, you create a database user for that
account and provide these credentials, together with an access token, to a member of the account that requested
permission to open a database tunnel. This allows this account member to open a database tunnel to the
database in your account. All members of the account receiving the permission can access the database in your
account.
Provide the following information to a member of the account that requested permission to open a database
tunnel:
● The token is simply a random string, for example,
31t0dpim6rtxa00wx5483vqe7in8i3c1phv759w9oqrutf638l, which remains valid until the provider account
revokes it again.
● To check if the database access has been given successfully, you can view a list of all currently active
database access permissions to other accounts, which exist for a specified account, by using the
list-db-tunnel-access-grants command.
● You can revoke the database access permission at any point in time using the revoke-db-tunnel-access
command. See Revoking Tunnel Access to Databases for Other Accounts [page 895].
Note
Only the provider account can revoke the access permission. When you revoke the access permission, we
highly recommend that you disable the database user and password created for the access permission on
the database itself and that you close any open sessions on the SAP HANA database.
If an account member has already used the access token and there are open database tunnels, they remain open
until they are closed, even though the user has been disabled.
We highly recommend that you create a dedicated database user on the database for each access permission.
Procedure
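A hypothetical grant-db-tunnel-access call; the names are placeholders and the --to-accounts parameter is an assumption (see the grant-db-tunnel-access reference page for the exact syntax):

```
neo grant-db-tunnel-access --host hana.ondemand.com --account myaccount --user myuser \
  --id mydb --to-accounts otheraccount
```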
Related Information
You can revoke the permission to open database tunnels to a productive SAP HANA database in your account
for other accounts.
Prerequisites
● You have given another account permission to use a database in your account. See Giving Other Accounts
Permission to Open a Database Tunnel [page 893].
● You have the administrator role in that account.
● You have set up the console client. See Setting Up the Console Client [page 52] and Using the Console Client
[page 102].
Context
Note
You can revoke the permission to use a database in your account for other accounts at any time.
Procedure
1. Open the command window in the <SDK>/tools folder and enter the following command to list all
permissions for the specified database:
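For example (placeholder names; verify the parameters against the list-db-tunnel-access-grants command reference):

```
neo list-db-tunnel-access-grants --host hana.ondemand.com --account myaccount \
  --user myuser --id mydb
```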
Example output:
Table 311:
Database ID | Granted to | Access Token
2. To revoke the permission, enter the following command and copy across the access token obtained in the
previous step:
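A sketch with placeholder names, assuming the token is passed via --access-token (see the revoke-db-tunnel-access reference page):

```
neo revoke-db-tunnel-access --host hana.ondemand.com --account myaccount --user myuser \
  --id mydb --access-token <access_token>
```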
Note
Only the provider account can revoke the access permission. When you revoke the access permission, we
highly recommend that you disable the database user and password created for the access permission on
the database itself and that you close any open sessions on the SAP HANA database.
You have revoked the permission to open tunnels to a database in your account for other accounts.
3. Optional: Check that the access token has been revoked by listing all permissions again as described in step 1.
Related Information
To bind applications to productive SAP HANA and SAP ASE databases in other accounts, you use a remote
access token that indicates that access to the database has been permitted.
Prerequisites
You have set up the console client. For more information, see Setting Up the Console Client [page 52].
Context
When you bind Java applications to the specified database in other accounts, you provide a database user and
password and an access token that you have received from the database owner. You can use this token for as
long as application bindings exist, or until the permission is revoked.
Note
The token is not transferrable to other applications in your account. The owner account can revoke access to
the database at any point in time.
Procedure
1. Open the command window in the <SDK>/tools folder and enter the following command:
SAP HANA or SAP ASE database:

neo bind-db --account salescorp --application salesapp --host hana.ondemand.com
--user salesuser --access-token vm6431dhjcr2e3dbt0fk6jpzm2w7oo3q48yumf1c6uu8b9pt9z
--db-user <HANA_database_user> --db-password <database_user_password>

Note that you use the access-token parameter instead of the database ID parameter.
You have bound your application to the database in the other account.
Related Information
If you want to open a tunnel to a database that is owned by another account, you request permission from that
account. If your request is approved, the account that owns the database in question provides you with an access
token and database credentials.
Prerequisites
● You have set up the console client. For more information, see Setting Up the Console Client [page 52].
● The account that owns the database has given you an access token and a database user and password. See
Giving Other Accounts Permission to Open a Database Tunnel [page 923].
Context
Once you have received the token and the database credentials, you can open the database tunnel. You use the
access token parameter for the open-db-tunnel command instead of the database ID parameter. Then you can
use a database tool of your choice to connect to the database in another account. Log on to the database with the
user and password that you received from the provider. You can then work on the remote database instance. This
works just like the open-db-tunnel command, except that you use the access token instead of the database ID.
Note
All members of the consumer account have permission to access the database in the provider account.
Procedure
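As described above, the call mirrors a normal open-db-tunnel invocation with the access token in place of the database ID parameter; the names here are placeholders:

```
neo open-db-tunnel --host hana.ondemand.com --account myaccount --user myuser \
  --access-token <access_token>
```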
Next Steps
Once you have opened the tunnel, you can connect to the database. See:
Related Information
To learn how to access your SAP HANA or SAP ASE database remotely, please refer to Accessing Databases
Remotely [page 919] in the programming guide.
Related Information
The persistence service provides a set of console client commands for managing databases. These allow you to
create databases with specific properties, bind and unbind databases, delete databases, and display information
about databases.
Related Information
Each application deployed on SAP Cloud Platform can be assigned one or more database schemas. A schema is
associated with a particular account and is available solely to applications within this account. A schema can be
bound to multiple applications.
Creation
You can create schemas explicitly with a freely definable name and assign them certain properties, such as a
specific database type. The schema is independent of any application and has to be explicitly bound.
Schemas can also be created automatically for applications. If you have not explicitly bound a schema to an
application when it is deployed and started for the first time, a schema is created and bound implicitly. This is the
fallback behavior on SAP Cloud Platform.
Note that a schema ID is unique within an account. When a schema is created automatically, an ID is also created
based on a combination of the account and application names and the suffix web.
Binding
Schemas can be bound to applications based on an explicitly named data source or using the default data source.
The main differences are as follows:
You can share a schema between applications by binding the same schema to more than one application. Bear in
mind the following when binding schemas to applications:
● An application’s bindings are based on either named data sources or the default data source. An application
cannot use a combination of the two types of bindings.
● When named data sources are used, binding names must be unique per application.
In the overview below, applications 1 and 2 have been explicitly bound to the associated schemas, while
application 3 uses a schema that was automatically created and bound:
Note that applications can also use schemas belonging to other accounts if they are explicitly granted access
permission.
Unbind a schema from an application if the application no longer needs it. It can still be used by other applications
to which it is still bound. Before a schema can be deleted, it has to be unbound from all applications. Schemas can
only be deleted if they no longer have any bindings.
If an application is undeployed but was not unbound from the schema beforehand, the schema will still be listed as
bound to the application and will therefore still be bound if the application is redeployed.
Deletion
You should drop a schema when it is no longer required or if you want to redeploy an application from scratch.
Before deleting a schema, you should explicitly remove any bindings that still exist between the schema and an
application. You can also remove all bindings by enforcing the deletion of the schema.
JNDI Lookup
When using explicitly named data sources to create bindings between schemas and applications, make sure that
the data source names are the same as the JNDI names used in the applications.
Data sources are defined as resources in the web.xml file, or as JTA or non-JTA data sources in the
persistence.xml file in the normal manner. Data sources can be referenced in the application code using a
context.lookup or annotations (@Resource, @PersistenceUnit, @PersistenceContext).
When using explicitly named data sources in the Java EE 6 Web Profile runtime environment, you need to create
two additional bindings:
● A binding between the application and schema using a data source named jdbc/
defaultManagedDataSource
● A binding between the application and schema using a data source named jdbc/
defaultUnmanagedDataSource
Related Information
You create schemas for a selected account. Schemas have properties, such as a database type and database
version, and are identified by an ID that is unique within the account. The schema is independent of any
application.
Context
You can create schemas using the cockpit and the console client. The procedure below describes schema
creation using the cockpit.
Procedure
Note
To display a schema’s details, for example, its state and the number of existing bindings, select the relevant
schema in the list and click the link on its name. On the overview of the schema, you can perform further
actions, for example, delete the schema.
3. To create a new schema, choose New on the Databases & Schemas page.
An empty New Database/Schema screen is displayed.
4. Enter the following schema details:
○ Schema ID: A schema ID is freely definable but must start with a letter and contain only uppercase and
lowercase letters ('a' - 'z', 'A' - 'Z'), numbers ('0' - '9'), and the special characters '.' and '-'. Note that the
actual schema ID assigned in the database will be different from this version.
○ Database System: Select an available database (HANA (<shared>) or MaxDB (<shared>)) from the
dropdown box.
To create schemas on your productive HANA instances, you have to use the HANA-specific tools.
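The schema ID rules above can be expressed as a simple pattern check. This is an illustrative shell sketch, not part of the cockpit procedure; the helper name is hypothetical:

```shell
# Hypothetical helper: check a candidate schema ID against the documented
# rules (must start with a letter; only a-z, A-Z, 0-9, '.' and '-').
valid_schema_id() {
  printf '%s' "$1" | grep -Eq '^[A-Za-z][A-Za-z0-9.-]*$'
}

valid_schema_id "my.schema-01" && echo "my.schema-01 is valid"
valid_schema_id "1schema" || echo "1schema is invalid"
```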
5. Save your entries.
The overview of the new schema is displayed with details about its state, quota used, and the number of
existing bindings. You can perform further actions for the newly created schema, for example, delete it.
Related Information
To use a schema, you bind it to an application. Bindings are identified by a data source name, which must be
unique per application. You can bind the same schema to multiple applications, and the same application to
multiple schemas.
Context
In the cockpit, you can create and delete schema bindings at both the schema and application level:
● To create bindings by schema, use the Data Source Bindings panel at the schema level.
● To create bindings by application, use the Data Source Bindings panel at application level.
Procedure
By application
1. Choose Applications > Java Applications in the navigation area and select the relevant application in the application list.
Note
○ To create a binding to the default data source, enter the data source name <DEFAULT>.
○ An application that is bound to the default data source (shown as <DEFAULT>) cannot be bound to
additional schemas. To use additional schemas, first rebind the application using a named data source.
○ Data source names are freely definable but need to match the JNDI data source names used in the
respective applications, as defined in the web.xml or persistence.xml file. For more information,
see the example scenarios.
Next Steps
An application’s state influences when a newly bound schema becomes effective. If an application is already
running (Started state), it will continue to use the old schema until it is restarted. A restart is also required if
additional schemas have been bound to the application.
Note
To unbind a schema from an application, simply delete the binding. The application will retain access to the
schema until it is restarted.
Related Information
Database schemas contain a database property, which determines on which database an application will run.
Each account has a default database system.
Context
The default database system is used when schemas are created automatically. This occurs if an application is
started but has not yet been assigned a schema.
You can change the default database system at any point in time, however, bear in mind the following:
● A new application that has not been explicitly assigned a schema will use whichever default database system
is effective when automatic schema creation is triggered, that is, when the application is started for the first
time.
● When deploying an application from the Eclipse IDE, in contrast to the console client, an application is
deployed and started in one step.
● An application that is already using a default database system will not be affected by any changes. Its schema
remains associated with the default database system effective at the time when it was created.
Procedure
2. Choose the (edit) icon on the tile for the account in question.
3. Select the new default database system from the dropdown box and save your changes.
Related Information
The schema management scenarios outline the steps involved for the most typical use cases of schemas. To
manipulate schemas, the scenarios use the console client together with the schema commands provided by the
persistence service. The scenarios can also be performed from the cockpit in a similar manner.
For the sake of simplicity, the scenarios described in this section use JDBC and web.xml to illustrate the
definition of data sources. Depending on your application and runtime environment, you can obviously use other
options, such as the persistence.xml file and annotations.
Related Information
You can create schemas with a freely definable name and assign them certain properties, such as a specific
database type. This allows you, for example, to create schemas that are associated with a database platform of
your choice, rather than the default database platform assigned to the account. To use a schema, you bind it to an
application.
Prerequisites
You have set up the console client. For more information, see Setting Up the Console Client [page 52].
Context
In this scenario, an application has been deployed with the default database type assigned to the account. You use
the unbind-schema command to first remove the schema already assigned to the application and then create a
schema with the database type you want to use (create-schema) and bind it to the application (bind-schema).
The following example data is used:
● The application myapp runs on the SAP MaxDB database and is bound to a schema that was created
automatically. The application has been stopped.
● Runtime environment: Java Web
● Data source name: jdbc/dshana
● Schema: myhana
Procedure
1. In the application's web.xml file, update the resource definition by replacing the default data source <res-
ref-name>jdbc/DefaultDB</res-ref-name>, or similar, with the named data source <res-ref-
name>jdbc/dshana</res-ref-name>:
<resource-ref>
<res-ref-name>jdbc/dshana</res-ref-name>
<res-type>javax.sql.DataSource</res-type>
</resource-ref>
2. Adjust the JNDI lookup in the application to use the data source you just defined in the web.xml file. You will
later bind the application to the myhana schema using this data source:
// JNDI lookup
InitialContext ctx = new InitialContext();
DataSource ds = (DataSource) ctx.lookup("java:comp/env/jdbc/dshana");
3. Open the command window in the <SDK>/tools folder and enter the following command to create a schema
for the SAP HANA database:
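An illustrative call using this scenario's example data (the --dbtype parameter name is an assumption; see the create-schema command reference):

```
neo create-schema --host hana.ondemand.com --account myaccount --user myuser \
  --id myhana --dbtype hana
```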
Example output:
Schema ID DB Type
myhana hana
5. Unbind the current schema from the application. Since the application has a default binding, you do not need
to specify a data source name:
Example output:
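Taken together, the command sequence for this procedure might look as follows when run with the neo console client script in the <SDK>/tools folder. This is a sketch only: the host, account, and user values are placeholders, and the parameter names should be verified against the console client command reference.

```shell
# Remove the automatically created default schema binding (no data source name needed):
neo unbind-schema --host hana.ondemand.com --account myaccount --user myuser \
  --application myapp

# Create a schema on the SAP HANA database:
neo create-schema --host hana.ondemand.com --account myaccount --user myuser \
  --id myhana --dbtype hana

# Bind the new schema to the application via the named data source:
neo bind-schema --host hana.ondemand.com --account myaccount --user myuser \
  --application myapp --id myhana --data-source jdbc/dshana
```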
Related Information
Multiple schemas allow you to use multiple databases in parallel. You might, for example, want to use SAP MaxDB
for normal transaction processing and the SAP HANA database for analytics.
Prerequisites
You have set up the console client. For more information, see Setting Up the Console Client [page 52].
Context
In this scenario, you use the create-schema command to create two schemas, one associated with SAP MaxDB
and the other with the SAP HANA database. You then use the bind-schema command to bind both schemas to
the application. The following example data is used:
1. In the application's web.xml file, add resource definitions for the two data sources:
<resource-ref>
<res-ref-name>jdbc/dshana</res-ref-name>
<res-type>javax.sql.DataSource</res-type>
</resource-ref>
<resource-ref>
<res-ref-name>jdbc/dsmaxdb</res-ref-name>
<res-type>javax.sql.DataSource</res-type>
</resource-ref>
2. Add JNDI lookups in the application code using the two data sources. This will allow the application to access
both the myhana and mymaxdb schemas:
// JNDI lookups for the two data sources
InitialContext ctx = new InitialContext();
DataSource dsHana = (DataSource) ctx.lookup("java:comp/env/jdbc/dshana");
...
DataSource dsMaxDB = (DataSource) ctx.lookup("java:comp/env/jdbc/dsmaxdb");
Example output:
Schema ID DB Type
myhana hana
mymaxdb maxdb
7. Bind the schemas to the application using the data source names jdbc/dshana and jdbc/dsmaxdb:
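Hypothetical command lines for this step, with placeholder host, account, and user values (parameter names should be checked against the console client reference):

```shell
neo bind-schema --host hana.ondemand.com --account myaccount --user myuser \
  --application myapp --id myhana --data-source jdbc/dshana
neo bind-schema --host hana.ondemand.com --account myaccount --user myuser \
  --application myapp --id mymaxdb --data-source jdbc/dsmaxdb
```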
In both cases, a confirmation is displayed that the schema was successfully bound.
8. Optionally check as follows:
Related Information
You can migrate from an auto-created schema by unbinding the schema currently assigned to your application
and rebinding it to the required one. This step is necessary, for example, if you want to use more than one
database in parallel.
Prerequisites
You have set up the console client. For more information, see Setting Up the Console Client [page 52].
Context
In this scenario you migrate from the auto-bound schema by unbinding and then rebinding the same schema. This
allows you to retain the schema and all its artifacts. The following example data is used:
1. Open the command window in the <SDK>/tools folder and use the list-application-datasources
command to obtain the name of the schema currently assigned to the application (you need the schema ID in
step 3):
Example output:
2. Unbind the current schema from the application. Since the application has a default binding, you do not need
to specify a data source name:
<resource-ref>
<res-ref-name>jdbc/DefaultDB</res-ref-name>
<res-type>javax.sql.DataSource</res-type>
</resource-ref>
Note
If you prefer, you can change this name, but you will then also need to change the JNDI lookup in the
application code and redeploy the application.
4. Rebind the application to the same schema using the data source name from the previous step, for example,
jdbc/DefaultDB:
Example output:
6. The application will continue to use the old schema and default data source until it is restarted. Restart the
application so that it uses the new binding to the schema.
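The unbind and rebind steps of this migration might be sketched with the console client as follows. The host, account, and user values are placeholders, and the schema ID comes from the list-application-datasources output in step 1; verify the parameter names against the console client reference.

```shell
# Unbind the auto-bound schema (default binding, so no data source name):
neo unbind-schema --host hana.ondemand.com --account myaccount --user myuser \
  --application myapp

# Rebind the same schema explicitly under the default data source name:
neo bind-schema --host hana.ondemand.com --account myaccount --user myuser \
  --application myapp --id <schema-ID-from-step-1> --data-source jdbc/DefaultDB
```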
Schemas can normally only be used by applications within the same account. You can, however, allow
applications belonging to other accounts controlled access to your account’s schemas. The other account might
be one of your own accounts or a third-party account.
When an external application, that is, an application that does not belong to your account, requests access to one
or more of your schemas, you can specifically grant access permission to that application by generating an access
token, which uniquely identifies the access permission. The access token:
● Always applies to one schema and one application and is not transferable
● Has an unlimited validity period
● Can be revoked whenever you wish, irrespective of whether the schema has already been bound to the target
application
The access token is used by the consumer account to bind the schema to the application. It can be used once
only. An unbind operation does not require an access token.
Restriction
This functionality is not available for SAP MaxDB.
Related Information
As an account member with the Administrator or Developer role, you can grant applications in other accounts
access to any of your account’s schemas.
Prerequisites
You have set up the console client. For more information, see Setting Up the Console Client [page 52].
Context
To allow access, you generate a one-time access token that permits the requesting application to access your
schema from its account.
Procedure
Open the command window in the <SDK>/tools folder and enter the following command:
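As an illustration, granting access might be done with a command along the following lines. Both the command name and the parameters shown here are assumptions; consult the console client command reference for the authoritative syntax.

```shell
neo grant-schema-access --host hana.ondemand.com --account myaccount \
  --user myuser --id myschema --application consumeraccount:theirapp
```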
Next Steps
The generated access token can now be used by the consumer account to bind the schema to the application.
● When the target application binds the schema to which it has been granted access, a new technical database
user is created automatically (name: DEV_<guid>) that has access permission only for the specified schema
(technical name: NEO_<guid>).
Related Information
To bind a schema contained in another account to your application, you use a remote access token that indicates
that access to this specific schema has been permitted.
Prerequisites
You have set up the console client. For more information, see Setting Up the Console Client [page 52].
Context
To prevent misuse, the remote access token can be used once only and is not transferable to other applications
in your account. Note that the owner account can revoke access to the schema at any point in time.
Procedure
1. Open the command window in the <SDK>/tools folder and enter the following command:
Since the schema does not belong to your account, the schema ID is prefixed with the owner account’s name
(account:schemaID), as shown in the example output below:
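The bind command on the consumer side might then look like this (a sketch: the access token parameter name and the other values are assumptions to be checked against the console client reference):

```shell
neo bind-schema --host hana.ondemand.com --account consumeraccount \
  --user myuser --application myapp --access-token <token-from-the-owner>
```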
Related Information
A grant applies to a specific schema and specific application and is identified by an access token. It is valid until it
is revoked by a member of the owner account.
Context
Procedure
1. Open the command window in the <SDK>/tools folder and enter the following command to list all grants for
the specified schema:
Table 313:
Access Token Schema ID Granted To Bound
2. To revoke the grant, enter the following command, using the access token obtained in the previous
step:
If the access token has already been used to bind the schema, then revoking the access permission will also
unbind the schema. If the application is running, it will continue to use the schema until it is restarted.
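Hypothetical command lines for steps 1 and 2 above. The command and parameter names are assumptions; check the console client command reference for the authoritative syntax.

```shell
# List all grants for the schema:
neo list-schema-access-grants --host hana.ondemand.com --account myaccount \
  --user myuser --id myschema

# Revoke a grant using an access token from the list output:
neo revoke-schema-access --host hana.ondemand.com --account myaccount \
  --user myuser --access-token <access-token>
```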
3. Optionally check that the access token has been revoked by listing all grants again as described in step 1 or
using the display-schema-info command.
Related Information
The persistence service provides a set of console client commands for managing schemas. These allow you to
create schemas with specific properties, bind and unbind schemas, delete schemas, and display information
about schemas.
Related Information
You have two different options for programming with databases: JPA or plain JDBC.
Java Persistence API (JPA) offers two main types of persistence, container-managed persistence and application-
managed persistence, which differ in terms of the management and life cycle of the entity manager.
Although JPA is suited for most application development scenarios and is the recommended approach on SAP
Cloud Platform, there might be cases where the low-level control provided by Java Database Connectivity (JDBC)
is more appropriate.
Database instances in the cloud are protected by a firewall, in other words, they are not directly accessible. Before
you program against a database, you need to connect to it by opening a database tunnel, which
provides a secure connection from your local machine and bypasses the firewall.
If an application uses the default data source and runs locally on Apache Derby, provided as standard for local
development, it can be tested on the local runtime without any further configuration.
The SQL trace provides a log of selected SQL statements with details about when a statement was executed and
its duration, allowing you to identify inefficient SQL statements used in your applications and investigate
performance issues. SQL trace records are integrated in the standard trace log files written at runtime.
Related Information
Database instances in the cloud are protected by a firewall, in other words, they are not directly accessible.
Access to remote database instances is therefore only possible through a database tunnel, which provides a
secure connection from your local machine and bypasses the firewall.
A database tunnel allows you to use database tools, such as the SAP HANA studio or Eclipse Data Tools Platform,
to connect to the remote database instance. It provides you with direct access to a schema and allows you to
manipulate it at database level.
The SAP HANA studio provides the most convenient option for connecting to the remote database, since it
automatically opens the database tunnel for you and closes it when you disconnect. It is therefore the
recommended tool to use. Bear in mind that if you choose to use another tool you will have to explicitly open the
database tunnel yourself.
To connect to the remote database using the SAP HANA studio (Eclipse with appropriate plugins), proceed as
described in:
Connecting to SAP HANA Schemas via the Eclipse IDE [page 935]
In the wizard, select the Trial instances radio button and then select your database schema from the dropdown
box.
For continuous integration and test automation, you can open a database tunnel using scripting or as part of a
Maven build. See Automating the Use of Database Tunnels [page 928].
If you are working with SAP ASE databases, you need to explicitly open a database tunnel:
SAP MaxDB
If you are working with SAP MaxDB, you need to explicitly open a database tunnel:
Restriction
For SAP MaxDB, the functionality described in this section is available as a beta version and can be used on the
trial landscape only.
Related Information
A database tunnel allows you to connect to a remote database instance through a secure connection. To open a
tunnel, use the open-db-tunnel command. When you open the tunnel, you will obtain the connection details
required for the remote database instance, including a user and password.
Prerequisites
You have set up the console client. For more information, see Setting Up the Console Client [page 52].
Procedure
Note
For more information on required parameters, see open-db-tunnel [page 246].
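A sketch of the command, with placeholder host, account, and user values (see the command reference cited above for the required parameters):

```shell
neo open-db-tunnel --host hana.ondemand.com --account myaccount --user myuser \
  --id myhana
```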
Now that you have opened the database tunnel, you can connect to the remote database instance using the
connection details you have just obtained.
Note
The database tunnel must remain open while you work on the remote database instance. Close it only when
you have completed the session.
Related Information
You want to access data from a productive SAP HANA or SAP ASE database in another account and you need the
required permissions. The account providing the permission gives you access by providing you with a token and a
database user, which you use to open a tunnel to the database owned by that account.
The table below lists the tasks and the person responsible for providing access to the database in another
account:
Table 314:
● Task: Giving Other Accounts Permission to Open a Database Tunnel [page 923]
Responsible user: Administrator in the account that owns the database
Command: grant-db-tunnel-access [page 192]
● Task: Opening Tunnels to Databases in Other Accounts [page 924]
Responsible user: Member of the account that has requested permission to open a tunnel to a database owned by another account
Command: open-db-tunnel [page 246]
● Task: Revoking Tunnel Access to Databases for Other Accounts [page 926]
Responsible user: Administrator in the account that owns the database
Command: revoke-db-tunnel-access [page 261]
You can allow other accounts to open a tunnel to a productive SAP ASE or SAP HANA database in your account.
Prerequisites
● The database you would like to share has been provisioned in an account. See Creating Databases [page
857].
● You have the administrator role in that account.
● You have set up the console client. See Setting Up the Console Client [page 52] and Using the Console Client
[page 102].
Context
To give another account permission to open a tunnel to your database, you create a database user for that
account and provide these credentials, together with an access token, to a member of the account that requested
permission to open a database tunnel. This allows this account member to open a database tunnel to the
database in your account. All members of the account receiving the permission can access the database in your
account.
Provide the following information to a member of the account that requested permission to open a database
tunnel:
● To check whether the database access has been given successfully, you can view a list of all currently active
database access permissions to other accounts that exist for a specified account.
Note
Only the provider account can revoke the access permission. When you revoke the access permission, we
highly recommend that you disable the database user and password created for the access permission on
the database itself and that you close any open sessions on the SAP HANA database.
If an account member has already used the access token and there are open database tunnels, they remain open
until they are closed, even though the user has been disabled.
We highly recommend that you create a dedicated database user on the database for each access permission.
Procedure
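The grant command named earlier in this section might be invoked along these lines. The parameter for the receiving account and the other values are assumptions; the grant-db-tunnel-access command reference has the authoritative syntax.

```shell
neo grant-db-tunnel-access --host hana.ondemand.com --account myaccount \
  --user myuser --id mydb --to-account consumeraccount
```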
If the permission has been given successfully, the access token is displayed. As a database administrator, you
create a database user with the needed permissions. Provide the database user and password together with
the access token to a member of the account that has requested permission to open a tunnel to your
database.
Related Information
Prerequisites
● You have set up the console client. For more information, see Setting Up the Console Client [page 52].
● The account that owns the database has given you an access token and a database user and password. See
Giving Other Accounts Permission to Open a Database Tunnel [page 923].
Context
Once you have received the token and the database credentials, you can open the database tunnel, using the
access token parameter of the open-db-tunnel command instead of the database ID parameter. Then you can
use a database tool of your choice to connect to the database in the other account. Log on to the database with the
user and password that you received from the provider. You can then work on the remote database instance.
Note
All members of the consumer account have permission to access the database in the provider account.
Procedure
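A sketch of opening the tunnel with the access token instead of the database ID (the access token parameter name and the other values are assumptions; see the open-db-tunnel command reference):

```shell
neo open-db-tunnel --host hana.ondemand.com --account consumeraccount \
  --user myuser --access-token <token-from-the-provider>
```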
Next Steps
Once you have opened the tunnel, you can connect to the database. See:
Related Information
You can revoke other accounts' permission to open database tunnels to a productive SAP HANA database in your
account.
Prerequisites
● You have given another account permission to use a database in your account. See Giving Other Accounts
Permission to Open a Database Tunnel [page 893].
● You have the administrator role in that account.
● You have set up the console client. See Setting Up the Console Client [page 52] and Using the Console Client
[page 102].
Context
Note
You can revoke the permission to use a database in your account for other accounts at any time.
1. Open the command window in the <SDK>/tools folder and enter the following command to list all
permissions for the specified database:
Example output:
Table 315:
Database ID Granted to Access Token
2. To revoke the permission, enter the following command, using the access token obtained in the previous
step:
Note
Only the provider account can revoke the access permission. When you revoke the access permission, we
highly recommend that you disable the database user and password created for the access permission on
the database itself and that you close any open sessions on the SAP HANA database.
You have revoked the permission to open tunnels to a database in your account for other accounts.
3. Optional: Check that the access token has been revoked by listing all permissions again as described in step 1.
Related Information
For the purposes of continuous delivery and automated tests, the open-db-tunnel command supports a
background mode, which allows a database tunnel to be opened by automated scripts or as part of a Maven build.
The example below shows how to automatically execute an SQL statement on an SAP HANA database via a
database tunnel.
Prerequisites
● You have a continuous integration (CI) server that can execute Bash scripts, for example, Jenkins running on
Linux.
● You have set up the console client on the CI server. For more information, see Setting Up the Console Client
[page 52].
● You have installed the SAP HANA client on the CI server. For more information, see SAP HANA Client
Installation Guide.
Procedure
#!/bin/bash -ex
PATH=$PATH:~/sap/neo/tools:~/sap/hdbclient # add console client and HANA client to PATH
Results
You have set up a CI job that automatically executes an SQL statement on your SAP HANA database instance.
Depending on what you would like to achieve, you could now modify the job to execute different SQL statements.
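A fuller version of such a script might look like the following. This is a sketch under several assumptions: the format of the open-db-tunnel background-mode output, the hdbsql invocation, the tunnel port, and all host, account, and credential values are placeholders, not documented behavior.

```shell
#!/bin/bash -ex
PATH=$PATH:~/sap/neo/tools:~/sap/hdbclient  # console client and HANA client

# Open the tunnel in background mode; its output contains the host, port, user,
# and password for the tunneled database (output format is an assumption).
neo open-db-tunnel --host hana.ondemand.com --account myaccount \
  --user myuser --password "$NEO_PASSWORD" --id myhana --background > tunnel.out

# Extract the connection details from the tunnel output and run a statement
# with the SAP HANA command-line client, then close the tunnel again.
hdbsql -n localhost:30015 -u "$(grep -oP 'User\s*:\s*\K\S+' tunnel.out)" \
  -p "$(grep -oP 'Password\s*:\s*\K\S+' tunnel.out)" "SELECT * FROM DUMMY"

neo close-db-tunnel --host hana.ondemand.com --account myaccount \
  --user myuser --password "$NEO_PASSWORD"
```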
Related Information
Prerequisites
You have set up the console client on the CI server. For more information, see Setting Up the Console Client [page
52].
Procedure
To open or close the database tunnel in a Maven build, use the following goals of the SAP Cloud Platform Maven
plugin:
○ open-db-tunnel
○ close-db-tunnel
Tip
Take a look at the following samples delivered with the SAP Cloud Platform SDK:
○ persistence-with-ejb
○ persistence-with-jpa
Each sample includes a test that opens a database tunnel in background mode within the Maven build and
executes some SQL statements.
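A pom.xml fragment wiring these goals into a build might look as follows. The plugin coordinates, execution phases, and any configuration parameters are assumptions here; the samples named above show the authoritative setup.

```xml
<plugin>
  <groupId>com.sap.cloud</groupId>
  <artifactId>neo-java-web-maven-plugin</artifactId>
  <executions>
    <execution>
      <id>open-tunnel</id>
      <phase>pre-integration-test</phase>
      <goals><goal>open-db-tunnel</goal></goals>
    </execution>
    <execution>
      <id>close-tunnel</id>
      <phase>post-integration-test</phase>
      <goals><goal>close-db-tunnel</goal></goals>
    </execution>
  </executions>
</plugin>
```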
You use the Eclipse Data Tools Platform (DTP) to connect to the SAP ASE database in the cloud. To do this, you
require the connection details you obtained when you opened the database tunnel.
Procedure
Note
Make sure you use the latest version of the SDK for Java Web Tomcat 7 runtime. You can download the
SDK from the tools page.
9. On the Properties tab, change the value for the Driver Class property from com.sybase.jdbc3.jdbc.SybDriver
to com.sybase.jdbc4.jdbc.SybDriver and choose OK.
If the ENABLE_SSL value in the tunnel response is set to true, enter the following on the Other Properties tab: ENABLE_SSL=true. If the value
is set to false or if the parameter does not appear at all in the tunnel response, enter ENABLE_SSL=false.
Next Steps
The new database connection is now shown in the Data Source Explorer view in the database list.
Connect to a dedicated SAP HANA database using SAP HANA Tools via the Eclipse IDE.
Prerequisites
You have installed and set up all the necessary tools. For more information, see Installing SAP HANA Tools for
Eclipse [page 68].
Procedure
Note
Make sure that you specify the landscape host correctly.
b. Specify the account name, e-mail or SCN user name, and your SCN password.
If you have previously entered an account and user name for the selected landscape host, the names are
suggested in dropdown lists.
Note
Make sure that you specify the database user and password correctly.
If you select the Save password box, the entered password for a given user name is remembered and kept
in the secure store.
A dropdown list is displayed for previously entered database user names. Database passwords can likewise be
remembered and kept in the secure store.
Results
Related Information
Follow the procedure below to make a direct connection to a shared SAP HANA schema via the Eclipse IDE, using
SAP HANA Tools.
Prerequisites
You have installed and set up all the necessary tools. For more information, see Installing SAP HANA Tools for
Eclipse [page 68].
Procedure
Note
○ If you have previously entered an account and user name for your landscape host, these names are
suggested in dropdown lists.
○ A dropdown list is also displayed for previously entered landscape hosts.
○ If you select the Save password box, the entered password for a given user name is remembered
and kept in the secure store.
You must have created a schema previously to be able to select it in this step.
9. Choose Finish.
10. You are now connected to a shared SAP HANA schema.
You use the Eclipse Data Tools Platform (DTP) to connect to the SAP MaxDB database in the cloud. To do this,
you require the connection details you obtained when you opened the database tunnel.
Prerequisites
You have the connection details available that you obtained when you opened the database tunnel.
Restriction
For SAP MaxDB, this functionality is available on the trial landscape only. Do not use it in productive scenarios
and/or with any personal data.
Procedure
Java Web or Java EE 6 Web Profile:
1. Open your locally saved SDK folder and extract \repository\plugins\com.sap.core.persistence.osgi.dbtech<version>.jar to a new folder within your SDK folder.
2. In Eclipse, choose Add JAR/Zip... on the JAR list tab and select the folder to which you just extracted the JAR file. Select the driver JAR lib\com.sap.dbtech-7.8.2.31.jar and open it.
3. Remove the predefined driver JAR.
Note
For more information on SDKs for Java development offered by SAP Cloud Platform, see Installing the SDK
[page 44].
8. Choose OK to confirm.
9. In the URL field, enter the JDBC URL from your connection details.
These are the connection details you obtained when you opened the database tunnel.
10. Enter the user name and password shown in your connection details.
11. Choose Finish.
The new database connection is now shown in the Data Source Explorer view. You can find your schema in the
schema list under your schema user name.
Tip
To locate your schema, filter the list:
1. Select the database connection and from the context menu choose Properties.
2. Select Default Schema Filter and deselect the Disable filter checkbox.
3. In the Name field, enter your user (NEO_<string>) and choose OK.
Open the schema and navigate down to your Web application’s database tables, where you can display their
properties and data and use the SQL Scrapbook editor.
Related Information
JPA offers two main types of persistence, container-managed persistence and application-managed persistence,
which differ in terms of the management and life cycle of the entity manager.
The main features of each scenario are shown in the table below. We recommend that you use container-
managed persistence (Java EE 6 Web Profile runtime), which is the model most commonly used by Web
applications:
Table 316:
JPA Scenario Java Web SDK Java EE 6 Web Profile SDK
You are advised to download the latest version of EclipseLink. Note that EclipseLink versions 2.5 and later contain the
SAP HANA database platform.
Table 317:
JPA Scenario SDK EclipseLink JARs
For details about importing the files into your Web application project and specifying the JPA implementation
library EclipseLink, see the tutorials Adding Application-Managed Persistence with JPA (Java Web SDK) [page
807] and Adding Container-Managed Persistence with JPA (Java EE 6 Web Profile SDK) [page 795].
The SAP HANA database platform is not part of EclipseLink versions prior to 2.5. If you use an earlier EclipseLink
version, you should bear in mind that additional steps are required if you want to deploy applications on the SAP
HANA database.
Note
In individual cases, issues have been observed with the SAP HANA database version SPS6 in combination with
EclipseLink versions prior to 2.5. If you experience problems, you are advised to consider switching to
EclipseLink 2.5 or later.
Note
The SAP HANA database is available in the cloud only. The persistence service does not provide the SAP HANA
database for local deployment.
EclipseLink versions prior to 2.5 do not contain the SAP HANA database platform. To deploy applications on the
SAP HANA database, you need to specify it as the target database and, for application-managed persistence,
import the corresponding JAR file into your project.
Container-Managed Persistence
<properties>
<property name="eclipselink.target-database"
value="com.sap.persistence.platform.database.HDBPlatform"/>
</properties>
Application-Managed Persistence
Specify the target database as shown above or directly in the servlet code, as shown in the example below:
ds = (DataSource) ctx.lookup("java:comp/env/jdbc/DefaultDB");
connection = ds.getConnection();
Map properties = new HashMap();
properties.put(PersistenceUnitProperties.NON_JTA_DATASOURCE, ds);
properties.put("eclipselink.target-database",
"com.sap.persistence.platform.database.HDBPlatform");
General Points
Set the target database property before you deploy the application on the SAP HANA database;
otherwise an error will occur. If this happens, you need to re-create the tables with the correct definitions by setting
the DDL generation type to Drop and Create Tables and then redeploying the application. Afterwards, set it
back to Create Tables so that you do not lose your data the next time you deploy.
A JPA model contains a persistence configuration file, persistence.xml, which describes the defined
persistence units. A persistence unit in turn defines all entity classes managed by the entity managers in your
application and includes the metadata for mapping the entity classes to the database entities.
JPA Provider
The persistence.xml file is located in the META-INF folder within the persistence unit src folder. The JPA
persistence provider used by the persistence service is org.eclipse.persistence.jpa.PersistenceProvider.
Example
In the persistence.xml file in the tutorial Adding Container-Managed Persistence with JPA (Java EE 6 Web
Profile SDK), the persistence unit is named persistence-with-ejb, the transaction type is JTA (default
setting), and the DDL generation type has been set to Create Tables, as shown below:
The persistence service uses the EclipseLink capabilities for generating database tables. The following values are
valid for generating the DDL for the entity specified in the persistence.xml file:
Note
This option will often be used during the development phase, when there are frequent changes to the
schema or data needs to be deleted. Don't forget to change it to create-tables before the application
goes live, since all data is lost when a table is dropped.
Transaction Type
JTA transactions are used for container-managed persistence, and resource-local transactions for application-
managed persistence. Note that the Java Web SDK supports resource-local transactions only.
Related Information
Adding Container-Managed Persistence with JPA (Java EE 6 Web Profile SDK) [page 795]
Container-managed entity managers are the model most commonly used by Web applications. Container-
managed entity managers require JTA transactions and are generally used with stateless session beans and
transaction-scoped persistence contexts, which are thread-safe.
Context
The scenario described in this section is based on the Java EE 6 Web Profile runtime. You use a stateless EJB
session bean into which the entity manager is injected using the @PersistenceContext annotation.
1. Configure the persistence units in the persistence.xml file to use JTA data sources and JTA transactions.
2. Inject the entity manager into an EJB session bean using the @PersistenceContext annotation.
Related Information
To use container-managed entity managers, you need to configure JTA data sources in the persistence.xml
file. JTA data sources are managed data sources and are associated with JTA transactions.
Context
To configure JTA data sources, you set the transaction type attribute (transaction-type) to JTA and specify
the names of the JTA data sources (jta-data-source), unless the application is using the default data source.
Procedure
The example below shows the persistence units defined for two data sources, where each data source is
associated with a different database:
<persistence>
<persistence-unit name="hanadb" transaction-type="JTA">
<provider>org.eclipse.persistence.jpa.PersistenceProvider</provider>
<jta-data-source>jdbc/hanaDB</jta-data-source>
</persistence-unit>
<!-- The second persistence unit, for the other data source, is defined analogously. -->
</persistence>
Related Information
EJB session beans, which typically perform the database operations, can use the @PersistenceContext
annotation to directly inject the entity manager. The corresponding entity manager factory is created
transparently by the container.
Procedure
1. In the EJB session bean, inject the entity manager as follows. Note that a persistence context type has not
been explicitly specified in the example below and is therefore, by default, transaction-scoped:
@PersistenceContext
private EntityManager em;
To use an extended persistence context, the value of the persistence context type has to be set to EXTENDED
(@PersistenceContext(type=PersistenceContextType.EXTENDED)) and the session bean declared as
stateful. An extended persistence context allows a session bean to maintain its state across multiple JTA
transactions. Bear in mind that an extended persistence context is not thread-safe.
2. If you have more than one persistence unit, inject the required number of entity managers by specifying the
persistence unit name as defined in the persistence.xml file:
@PersistenceContext(unitName="hanadb")
private EntityManager em1;
...
@PersistenceContext(unitName="maxdb")
private EntityManager em2;
The persistence context made available is based on JTA and provides automatic transaction management.
Each EJB business method automatically has a managed transaction, unless specified otherwise. The entity
manager life cycle, such as its instantiation and closing, is controlled by the container. Methods designed for
resource-local transactions, such as em.getTransaction().begin(),
em.getTransaction().commit(), and em.close(), must therefore not be used.
Related Information
Application-managed entity managers are created manually using the EntityManagerFactory interface.
Application-managed entity managers require resource-local transactions and non-JTA data sources, which need
to be declared as JNDI resource references.
Context
The scenario described in this section is based on the Java Web runtime, which only supports manual creation of
the entity manager factory.
Procedure
Related Information
An application can use one or more data sources. A data source can be a default data source or an explicitly
named data source. Before a data source can be used, it needs to be declared as a JNDI resource reference in the
web.xml deployment descriptor.
Procedure
<resource-ref>
<res-ref-name>jdbc/DefaultDB</res-ref-name>
<res-type>javax.sql.DataSource</res-type>
</resource-ref>
<resource-ref>
<res-ref-name>jdbc/datasource1</res-ref-name>
<res-type>javax.sql.DataSource</res-type>
</resource-ref>
<resource-ref>
<res-ref-name>jdbc/datasource2</res-ref-name>
<res-type>javax.sql.DataSource</res-type>
</resource-ref>
○ The data source name is the JNDI name used for the lookup.
○ The same name must be used for the schema binding.
4. Save the file.
Related Information
To use application-managed entity managers, you need to configure resource-local transactions in the
persistence.xml file. Resource-local transactions are associated with non-JTA data sources (that is,
unmanaged data sources) and are explicitly controlled by the application through the EntityTransaction
interface of the entity manager.
Context
To use resource-local transactions, the transaction type attribute has to be set to RESOURCE_LOCAL, indicating
that the entity manager factory should provide resource-local entity managers. When you work with a non-JTA
data source, the non-JTA data source element also has to be set in the persistence unit properties in the
application code.
Procedure
The example below shows the persistence units defined for two data sources, where each data source is
associated with a different database:
<persistence>
<persistence-unit name="hanadb" transaction-type="RESOURCE_LOCAL">
<provider>org.eclipse.persistence.jpa.PersistenceProvider</provider>
<class>com.sap.cloud.sample.persistence.Person</class>
<properties>
<property name="eclipselink.ddl-generation" value="create-tables"/>
</properties>
</persistence-unit>
<persistence-unit name="maxdb" transaction-type="RESOURCE_LOCAL">
<provider>org.eclipse.persistence.jpa.PersistenceProvider</provider>
<class>com.sap.cloud.sample.persistence.Person</class>
<properties>
<property name="eclipselink.ddl-generation" value="create-tables"/>
</properties>
</persistence-unit>
</persistence>
Related Information
In the application code, you can obtain an initial JNDI context by creating a javax.naming.InitialContext
object, and then retrieve the data source by looking up the naming environment through the InitialContext.
Alternatively, you can directly inject the data source.
Procedure
1. To create an initial JNDI context and look up the data source, add the following code to your application and
make sure that the JNDI name matches the one specified in the web.xml file:
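A minimal sketch of such a lookup is shown below; the resource name assumes the jdbc/DefaultDB reference declared in the web.xml example above:

```java
import javax.naming.InitialContext;
import javax.naming.NamingException;
import javax.sql.DataSource;

public class DataSourceLookup {
    // Per the Java EE Specification, the lookup name is the java:comp/env
    // prefix plus the <res-ref-name> declared in web.xml.
    static String lookupName(String resRefName) {
        return "java:comp/env/" + resRefName;
    }

    // Looks up the default data source declared in web.xml.
    public static DataSource lookupDefault() throws NamingException {
        InitialContext ctx = new InitialContext();
        return (DataSource) ctx.lookup(lookupName("jdbc/DefaultDB"));
    }
}
```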
Note that according to the Java EE Specification, the prefix java:comp/env should be added to the JNDI
resource name (as specified in the web.xml) to form the lookup name. For more information about defining
and referencing resources according to the Java EE standard, see the Java EE Specification.
2. If the application uses multiple data sources, create the lookup in a similar manner:
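For example, for the named references declared in the web.xml above (a sketch; jdbc/datasource1 and jdbc/datasource2 are the example names from that file):

```java
import javax.naming.InitialContext;
import javax.naming.NamingException;
import javax.sql.DataSource;

public class NamedDataSourceLookup {
    // Builds the JNDI lookup name for an explicitly named data source.
    static String lookupName(String resRefName) {
        return "java:comp/env/" + resRefName;
    }

    // resRefName is e.g. "jdbc/datasource1" or "jdbc/datasource2".
    public static DataSource lookup(String resRefName) throws NamingException {
        return (DataSource) new InitialContext().lookup(lookupName(resRefName));
    }
}
```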
3. Alternatively, to directly inject the data source, use the @Resource annotation:
○ Default data source
@Resource
private javax.sql.DataSource ds;
@Resource(name="jdbc/datasource1")
private javax.sql.DataSource ds1;
@Resource(name="jdbc/datasource2")
private javax.sql.DataSource ds2;
Related Information
Java EE Specification
You use the EntityManagerFactory interface to manually create and manage the entity managers in your Web
application.
Procedure
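1. Create the entity manager factory, setting the non-JTA data source in the persistence unit properties. The sketch below illustrates this; the lookup name and the persistence unit name "persistence-unit" are placeholders that must match your web.xml and persistence.xml, and the JPA API is assumed to be provided by the runtime:

```java
// Sketch: register the non-JTA data source in the persistence unit
// properties before creating the factory.
DataSource ds = (DataSource) new InitialContext()
        .lookup("java:comp/env/jdbc/DefaultDB");
Map<String, Object> properties = new HashMap<>();
properties.put("javax.persistence.nonJtaDataSource", ds);
EntityManagerFactory emf =
        Persistence.createEntityManagerFactory("persistence-unit", properties);
```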
In the code above, the non-JTA data source element has been set in the persistence unit properties, and the
persistence unit name is the name of the persistence unit declared in the persistence.xml file.
Note
You are advised to include the above code in the servlet init() method, as illustrated in the tutorial
Adding Application-Managed Persistence with JPA (Java Web SDK), since this method is called only once
during initialization when the servlet instance is loaded.
2. If the application uses multiple data sources, create an entity manager factory for each data source:
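A sketch, using the hanadb and maxdb persistence units and the data source references defined earlier (the JPA API is assumed to be provided by the runtime):

```java
// One entity manager factory per persistence unit / data source.
InitialContext ctx = new InitialContext();

DataSource ds1 = (DataSource) ctx.lookup("java:comp/env/jdbc/datasource1");
Map<String, Object> props1 = new HashMap<>();
props1.put("javax.persistence.nonJtaDataSource", ds1);
EntityManagerFactory emf1 = Persistence.createEntityManagerFactory("hanadb", props1);

DataSource ds2 = (DataSource) ctx.lookup("java:comp/env/jdbc/datasource2");
Map<String, Object> props2 = new HashMap<>();
props2.put("javax.persistence.nonJtaDataSource", ds2);
EntityManagerFactory emf2 = Persistence.createEntityManagerFactory("maxdb", props2);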
3. Use the entity manager factory obtained above to create an entity manager as follows:
EntityManager em = emf.createEntityManager();
Next Steps
Application-managed entity managers are always extended and therefore retain the entities beyond the scope of
a transaction. You should therefore close an entity manager when it is no longer needed by calling
EntityManager.close() or alternatively EntityManager.clear() wherever appropriate, such as at the end
of a transaction. Bear in mind that an entity manager must not be used concurrently by multiple threads, so
design your entity manager handling in such a way that concurrent access of entity managers is prevented.
Related Information
When working with a resource-local entity manager, the transaction boundaries need to be set manually in your
application code using the EntityTransaction API. You can obtain the entity transaction attached to the entity
manager by calling EntityManager.getTransaction().
To create and update data in the database, you require an active transaction. The EntityTransaction API provides
the begin() method for starting a transaction, and the commit() and rollback() methods for ending a
transaction. When a commit is executed, all changes are synchronized with the database.
Example
The tutorial code (Adding Application-Managed Persistence with JPA (Java Web SDK)) shows how to create and
persist an entity:
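The essential pattern is sketched below; the Person entity and its setters are assumptions based on the entity used elsewhere in this guide, not the verbatim tutorial code:

```java
EntityManager em = emf.createEntityManager();
try {
    em.getTransaction().begin();        // start the resource-local transaction
    Person person = new Person();       // assumed entity from the tutorial
    person.setFirstName("John");
    person.setLastName("Smith");
    em.persist(person);
    em.getTransaction().commit();       // synchronizes all changes with the database
} finally {
    em.close();                         // application-managed entity managers must be closed
}
```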
Related Information
Adding Application-Managed Persistence with JPA (Java Web SDK) [page 807]
The data source is determined dynamically at runtime and does not need to be defined beforehand in the
web.xml or persistence.xml file. This allows you to bind additional schemas to an application and obtain the
corresponding data source, without having to modify the application code or redeploy the application.
Context
A dynamic JNDI lookup is applied as follows, depending on whether you are using an unmanaged or a managed
data source:
● Unmanaged
This is supported in the Java Web, Java EE 6 Web Profile, and Java Web Tomcat 7 runtimes.
● Managed
Note
For the Java Web and Java EE 6 Web Profile runtimes (but not for Java Web Tomcat 7), you can
continue to use the earlier variants of the JNDI lookup:
● Unmanaged
● Managed
The steps described below are based on JPA application-managed persistence using the Java Web runtime.
1. Create the persistence unit to be used for the dynamic data source lookup:
a. In the Project Explorer view, select <project>/Java Resources/src/META-INF/persistence.xml,
and from the context menu choose Open With Persistence XML Editor .
b. Switch to the Source tab of the persistence.xml file and create a persistence unit, as shown in the
example below. Note that the corresponding data source is not defined in either the persistence.xml
or web.xml file:
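Such a persistence unit might look as follows (a sketch; the unit name mypersistenceunit matches the later steps, and deliberately no non-jta-data-source element is declared):

```xml
<persistence-unit name="mypersistenceunit" transaction-type="RESOURCE_LOCAL">
    <provider>org.eclipse.persistence.jpa.PersistenceProvider</provider>
    <class>com.sap.cloud.sample.persistence.Person</class>
    <properties>
        <property name="eclipselink.ddl-generation" value="create-tables"/>
    </properties>
</persistence-unit>
```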
2. In the servlet code, implement a JNDI data source lookup. In the example below, the data source name is
"mydatasource":
ds = (DataSource) context.lookup("unmanageddatasource:mydatasource");
3. Create an entity manager factory in the normal manner. In the example below, the persistence unit is named
"mypersistenceunit", as defined in the persistence.xml file:
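A sketch (ds is the data source obtained through the dynamic lookup in step 2; the JPA API is assumed to be provided by the runtime):

```java
Map<String, Object> properties = new HashMap<>();
// Hand the dynamically looked-up data source to the factory.
properties.put("javax.persistence.nonJtaDataSource", ds);
EntityManagerFactory emf =
        Persistence.createEntityManagerFactory("mypersistenceunit", properties);
```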
4. Use the console client to create a schema binding with the same data source name. To do this, open the
command window in the <SDK>/tools folder and enter the bind-schema [page 120] command:
Note
Note that you need to use the same data source name you have defined in step 2.
To declare a class as an entity and define how that entity maps to the relevant database table, you can either
decorate the Java object with metadata using Java annotations or denote it as an entity in the XML descriptor.
The Dali Java Persistence Tools provided as part of the Eclipse IDE for Java EE Developers allow you to use a JPA
diagram editor to create, edit, and display entities and their relationships (your application’s data model) in a
graphical environment.
package com.sap.cloud.sample.persistence;
import javax.persistence.*;
@Entity
@Table(name = "T_PERSON")
@NamedQuery(name = "AllPersons", query = "select p from Person p")
public class Person {
    @Id
    @GeneratedValue
    private long id;
    @Basic
    private String firstName;
    @Basic
    private String lastName;
    // getters and setters omitted
}
Related Information
Adding Application-Managed Persistence with JPA (Java Web SDK) [page 807]
Dali Java Persistence Tools User Guide
The SAP HANA database allows tables to be created with row-based or column-based storage. By default,
tables are created with row-based storage, but you can change the storage type of a table if necessary.
The example below shows the SQL syntax used by the SAP HANA database to create different table types. The
first two SQL statements both create row-store tables, the third a column-store table, and the fourth changes the
table type from row-store to column-store:
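A sketch of the four statements (table and column names are illustrative):

```sql
-- 1. Row-store table (row store is the default)
CREATE TABLE T1 (ID INTEGER PRIMARY KEY, NAME VARCHAR(255));
-- 2. Row-store table, stated explicitly
CREATE ROW TABLE T2 (ID INTEGER PRIMARY KEY, NAME VARCHAR(255));
-- 3. Column-store table
CREATE COLUMN TABLE T3 (ID INTEGER PRIMARY KEY, NAME VARCHAR(255));
-- 4. Convert an existing row-store table to column store
ALTER TABLE T1 COLUMN;
```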
When using EclipseLink JPA for data persistence, the table type applied by default in the SAP HANA database is
row-store. To create a column-store table or alter an existing row-store table, you can manually modify your
database using SQL DDL statements, or you can use open source tools, such as Liquibase (with plain SQL
statements), to handle automated database migrations.
Due to the limitations of the EclipseLink schema generation feature, you will need to use one of the above options
anyway to handle the life cycle management of your database objects.
This section shows how you can use the ALTER TABLE statement to change a row-store table created by default
in the SAP HANA database to a column-store table. The example is based on the Adding Application-Managed
Persistence with JPA (Java Web SDK) tutorial and provides a solution designed specifically for this tutorial and use
case.
The example allows you to take advantage of the automatic table generation feature provided by JPA EclipseLink.
You merely alter the existing table at an appropriate point, when the schema containing the relevant table has just
been created. The applicable code snippet is added to the init() method of the servlet
(PersistenceWithJPAServlet). The main changes to the servlet code are outlined below:
1. Since the table must already exist when the ALTER statement is called, a small workaround is introduced in
the init() method. An entity manager is created at an earlier stage than in the original version of the tutorial
in order to trigger the generation of the schema:
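The workaround can be sketched as follows (assumed to run in the servlet's init() method; the persistence unit name and properties map are placeholders matching the tutorial setup):

```java
// Creating and closing an entity manager up front forces EclipseLink to
// run DDL generation, so the table exists before ALTER TABLE is called.
emf = Persistence.createEntityManagerFactory("persistence-unit", properties);
EntityManager em = emf.createEntityManager();
em.close();
```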
2. The SAP HANA database table SYS.M_TABLES contains information about all row and column tables in the
current schema. A new method is added to the servlet which uses this table to check that T_PERSON is not
already a column-store table.
3. Another new method alters the table using the SQL statement ALTER TABLE <table name> COLUMN.
To apply the solution, replace the entire servlet class PersistenceWithJPAServlet with the following content:
package com.sap.cloud.sample.persistence;
import java.io.IOException;
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import javax.naming.InitialContext;
import javax.naming.NamingException;
import javax.persistence.EntityManager;
import javax.persistence.EntityManagerFactory;
import javax.persistence.Persistence;
import javax.servlet.ServletException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
import javax.sql.DataSource;
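The full servlet body is not reproduced here, but the two helper methods described in steps 2 and 3 can be sketched as follows. The method names are illustrative, not the tutorial's; SYS.M_TABLES with its IS_COLUMN_TABLE column is the SAP HANA monitoring view mentioned in step 2:

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;

public class ColumnStoreHelper {
    // DDL that converts a row-store table to column store on SAP HANA.
    static String alterToColumnStoreSql(String tableName) {
        return "ALTER TABLE " + tableName + " COLUMN";
    }

    // Step 2: check SYS.M_TABLES to see whether the table is already column store.
    static boolean isColumnStore(Connection conn, String tableName) throws SQLException {
        String sql = "SELECT IS_COLUMN_TABLE FROM SYS.M_TABLES WHERE TABLE_NAME = ?";
        try (PreparedStatement ps = conn.prepareStatement(sql)) {
            ps.setString(1, tableName);
            try (ResultSet rs = ps.executeQuery()) {
                return rs.next() && "TRUE".equals(rs.getString(1));
            }
        }
    }

    // Step 3: alter the table only if it is still row store.
    static void convertToColumnStore(Connection conn, String tableName) throws SQLException {
        if (!isColumnStore(conn, tableName)) {
            try (Statement stmt = conn.createStatement()) {
                stmt.executeUpdate(alterToColumnStoreSql(tableName));
            }
        }
    }
}
```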
Related Information
Adding Application-Managed Persistence with JPA (Java Web SDK) [page 807]
EclipseLink provides weaving as a means of enhancing JPA entities and classes for performance optimization. At
present, SAP Cloud Platform supports static weaving only. Static weaving occurs at compile time and is available
in both the Java Web and Java EE 6 Web Profile environments.
Note that dynamic weaving is currently not supported on SAP Cloud Platform.
Prerequisites
For static weaving to work, the entity classes have to be listed in the persistence.xml file.
EclipseLink Library
To use the EclipseLink weaving options in your web applications, you need to add the EclipseLink library to the
classpath:
Java EE 6 Web Profile SDK: Adding the EclipseLink Library to the Classpath
1. In the Eclipse IDE in the Project Explorer view, select the web application and from the context menu choose
Properties.
2. In the tree, select JPA.
3. In the Platform section, select the correct EclipseLink version from the dropdown list. It should match the
version available in the SDK.
1. In the Eclipse IDE in the Project Explorer view, select the web application and from the context menu choose
Properties.
2. In the tree, select JPA EclipseLink .
3. In the Static weaving section, select the Weave classes on build checkbox.
4. Leave the default values for the source classes, target classes, and persistence XML root. You might need to
adapt them if you have a non-standard web application project layout. Choose OK to complete the step.
Note
If you change the target class settings, make sure you deploy these classes.
Your web application project will be rebuilt so that the JPA entity class files contain weaving information. This will
also occur on each (incremental) project build. The woven entity classes will be used whenever you publish the
web application to the cloud.
More Information
For information about using an ant task or the command line to perform static weaving, see the EclipseLink User
Guide .
Although JPA is suited for most application development scenarios and is the recommended approach on SAP
Cloud Platform, there might be cases where the low-level control provided by JDBC is more appropriate.
Bear in mind that working with JDBC entails manually writing SQL statements to read and write objects from and
to the database.
An application can use one or more data sources. A data source can be a default data source or an explicitly
named data source. Before a data source can be used, it needs to be declared as a JNDI resource reference.
You declare a JNDI resource reference to a JDBC data source in the web.xml deployment descriptor located in
the WebContent/WEB-INF directory as shown below. Note that the resource reference name is just an example:
<resource-ref>
<res-ref-name>jdbc/DefaultDB</res-ref-name>
<res-type>javax.sql.DataSource</res-type>
</resource-ref>
● Name: The JNDI name of the resource. The Java EE Specification recommends that the data source reference
be declared in the jdbc subcontext (jdbc/NAME).
● Type: The type of resource that will be returned during the lookup.
The <resource-ref> elements should be added after the <servlet-mapping> elements in the deployment
descriptor.
If the application uses multiple data sources, you need to add a resource reference for each data source:
<resource-ref>
<res-ref-name>jdbc/datasource1</res-ref-name>
<res-type>javax.sql.DataSource</res-type>
</resource-ref>
<resource-ref>
<res-ref-name>jdbc/datasource2</res-ref-name>
<res-type>javax.sql.DataSource</res-type>
</resource-ref>
You can obtain an initial JNDI context from Tomcat by creating a javax.naming.InitialContext object, and
then consume the data source by looking up the naming environment through the InitialContext, as follows:
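For example (a sketch; jdbc/DefaultDB is the resource reference declared in the web.xml above):

```java
import javax.naming.InitialContext;
import javax.naming.NamingException;
import javax.sql.DataSource;

public class JdbcDataSourceLookup {
    // The java:comp/env prefix plus the <res-ref-name> forms the lookup name.
    static String lookupName(String resRefName) {
        return "java:comp/env/" + resRefName;
    }

    public static DataSource defaultDataSource() throws NamingException {
        InitialContext ctx = new InitialContext();
        return (DataSource) ctx.lookup(lookupName("jdbc/DefaultDB"));
    }
}
```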
Note that according to the Java EE Specification, the prefix java:comp/env should be added to the JNDI
resource name (as specified in web.xml) to form the lookup name. For more information about defining and
referencing resources according to the Java EE standard, see the Java EE Specification.
If the application uses multiple data sources, the lookup is constructed in a similar manner:
You can directly inject the data source using annotations as shown below:
@Resource
private javax.sql.DataSource ds;
● If the application uses explicitly named data sources, these must be declared in the web.xml file and injected
as shown in the example below:
@Resource(name="jdbc/datasource1")
private javax.sql.DataSource ds1;
@Resource(name="jdbc/datasource2")
private javax.sql.DataSource ds2;
JDBC Connection
The data source that you have retrieved in the section above allows you to create a JDBC connection to the
database. You can use the resulting Connection object to instantiate a Statement object and execute SQL
statements, as shown in the example below.
private static final String STMT_SELECT_ALL = "SELECT ID, FIRSTNAME, LASTNAME FROM
" + TABLE_NAME;
Connection conn = dataSource.getConnection();
try {
PreparedStatement pstmt = conn.prepareStatement(STMT_SELECT_ALL);
ResultSet rs = pstmt.executeQuery();
...
Database Tables
You use plain SQL statements to create the tables you require. Since there is currently no tool support available,
you have to manually maintain the table life cycles. The exact syntax to be used may differ depending on the
underlying database. The Connection object provides metadata about the underlying database and its tables and
fields, which can be accessed as shown in the code below:
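A sketch of such a metadata check (method names are illustrative; the table name is uppercased because Derby, SAP MaxDB, and SAP HANA store unquoted identifiers in upper case):

```java
import java.sql.Connection;
import java.sql.DatabaseMetaData;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.util.Locale;

public class TableInspector {
    // Unquoted identifiers are stored in upper case by these databases.
    static String normalize(String tableName) {
        return tableName.toUpperCase(Locale.ROOT);
    }

    // Uses JDBC metadata to check whether the table already exists.
    public static boolean tableExists(Connection conn, String tableName) throws SQLException {
        DatabaseMetaData meta = conn.getMetaData();
        try (ResultSet rs = meta.getTables(null, null, normalize(tableName), null)) {
            return rs.next();
        }
    }
}
```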
To create a table in the Apache Derby database, you could use the following SQL statement executed with a
PreparedStatement object:
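For the T_PERSONS table used in the tutorial, a Derby statement might look like this (a sketch; column sizes are illustrative):

```sql
CREATE TABLE T_PERSONS (
  ID BIGINT GENERATED ALWAYS AS IDENTITY PRIMARY KEY,
  FIRSTNAME VARCHAR(255),
  LASTNAME VARCHAR(255)
)
```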
Note that the equivalent statement for SAP MaxDB differs as follows:
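On SAP MaxDB the main difference is the identity-column syntax; a sketch, to be verified against the SAP MaxDB SQL Reference Manual:

```sql
CREATE TABLE T_PERSONS (
  ID FIXED(19) DEFAULT SERIAL PRIMARY KEY,
  FIRSTNAME VARCHAR(255),
  LASTNAME VARCHAR(255)
)
```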
See the tutorial Adding Persistence Using JDBC for an example of how to execute SQL statements and apply the
Data Access Object (DAO) design pattern in your Web application.
Note
Remember that the persistence service only supports SAP MaxDB and the SAP HANA database in the cloud. If
you use Apache Derby for local development, bear in mind that the syntax of the SQL statements is not
identical on these databases.
Related Information
If an application uses the default data source and runs locally on Apache Derby, provided as standard for local
development, it can be tested on the local runtime without any further configuration. To use explicitly named data
sources or a different database, you need to configure the connection.properties file appropriately.
Related Information
To test an application on the local server, you need to define any data sources the application uses as connection
properties for the local database. This step is not necessary if the application uses the default data source.
Prerequisites
The local server has already been started at least once (with or without the application), otherwise the relevant
folder won’t exist.
Procedure
1. In the Project Explorer view, open the folder Servers/SAP Cloud Platform local runtime/
config_master/connection_data and select connection.properties.
2. From the context menu, choose Open With Properties File Editor .
3. Add the connection parameter com.sap.cloud.persistence.dsname to the block of connection
parameters for the local database you are using, as shown in the example below:
com.sap.cloud.persistence.dsname=jdbc/datasource1
javax.persistence.jdbc.driver=org.apache.derby.jdbc.EmbeddedDriver
javax.persistence.jdbc.url=jdbc:derby:memory:DemoDB;create=true
javax.persistence.jdbc.user=demo
javax.persistence.jdbc.password=demo
eclipselink.target-database=Derby
If the application has been bound to the data source based on an explicitly named data source instead of
using the default data source, ensure the following:
○ Provide a data source name in the connection properties that matches the name used in the data source
binding definition.
○ Add prefixes before each property in a property group for each data source binding you define. If an
application is bound only to the default data source, this configuration is considered the default no matter
which name you specified in the connection properties. The application can address the data source by
any name.
4. Repeat this step for all data sources that the application uses.
5. For the Java EE 6 Web Profile runtime, add the connection parameter
com.sap.cloud.persistence.dsname twice, once for the managed data source and once for the
unmanaged data source, with the names given below. Each entry has to be added to its own block of
connection properties:
com.sap.cloud.persistence.dsname=jdbc/defaultManagedDataSource
com.sap.cloud.persistence.dsname=jdbc/defaultUnmanagedDataSource
6. To indicate that a block of parameters belongs together, add a prefix to each parameter in the block, as shown
in the example below. Note that the prefix is freely definable and the dot is not mandatory:
1.com.sap.cloud.persistence.dsname=jdbc/datasource1
1.javax.persistence.jdbc.driver=org.apache.derby.jdbc.EmbeddedDriver
1.javax.persistence.jdbc.url=jdbc:derby:memory:DemoDB;create=true
You have the option of replacing your local embedded Derby instance with SAP MaxDB.
Context
An application developed for SAP Cloud Platform may be executed in different environments, where development
and testing typically occur on a developer's PC, regression testing on a build server, and deployment in the cloud.
The persistence service allows an application to abstract from the different execution environments by
externalizing the connection data and automatically establishing the connections to the relevant databases.
Procedure
1. In the Project Explorer view, open the folder Servers/SAP HANA Cloud local runtime/
config_master/connection_data and select connection.properties.
2. From the context menu, choose Open With Properties File Editor .
3. Comment out the connection parameters for the local Derby database connection and uncomment those for
SAP MaxDB. (This also changes the target database for EclipseLink.)
Note
Since the SAP Cloud Platform SDK includes the MaxDB JDBC driver, you do not need to explicitly add the
JDBC driver JAR to the WEB-INF/lib folder of your Web application project.
Related Information
The SQL trace provides a log of selected SQL statements with details about when a statement was executed and
its duration, allowing you to identify inefficient SQL statements used in your applications and investigate
performance issues. SQL trace records are integrated in the standard trace log files written at runtime.
Context
The SQL trace is disabled by default. Generally, you enable it when you require SQL trace information for a
particular application and disable it again once you have completed your investigation. It is not intended for
general performance monitoring.
You can use the cockpit to enable the SQL trace by setting the log level of the logger
com.sap.core.persistence.sql.trace to the log level DEBUG in the application’s log configuration. SQL
trace information can subsequently be viewed in the log files.
Procedure
1. Log onto the cockpit and choose Applications Java Applications in the navigation area.
2. Click the relevant application to go to the dashboard.
Note
You can only set log levels when an application is running. Loggers are not listed if the relevant application
code has not been executed.
The new log setting takes effect immediately. Note that log settings are saved permanently and do not revert
to their initial values when an application is restarted.
See the application's trace logs, which contain the SQL trace records, either in the Most Recent Logging panel on
the application dashboard or on the Logging page by navigating to Monitoring Logging in the navigation
area.
Procedure
To display the contents of a particular log file, choose (Show). Note that you can also download the file by
choosing (Download).
In the log file, you can identify the SQL trace information by the logger name
com.sap.core.persistence.sql.trace. The entries written by the logger include the following details:
○ Date and time when written
○ System time in nanoseconds
○ The name of the interface and method that produced the log entry, for example,
java.sql.Connection.prepareStatement (sql)
○ The status of the method call (begin and end)
○ The database connection ID, for example, conn=[3d194ab9]
○ The text of the SQL statement, for example, "INSERT INTO T_PERSONS (ID, FIRSTNAME, LASTNAME)
VALUES (?, ?, ?)". Note that for security reasons parameter values are not shown.
Example
The SQL-specific information from the default trace is shown below in plain text format:
Besides the cockpit, the SQL trace can be enabled from the Eclipse IDE and using the console client. Whichever
tool you use, you need to set the log level of the logger com.sap.core.persistence.sql.trace to the log
level DEBUG.
Eclipse
You can set the log level for applications deployed locally or in the cloud.
Console Client
You can use the console client to set the log level as a logging property for one or more loggers. To do so, use the
command neo set-log-level with the log parameters logger <logger_name> and level <log_level>.
Related Information
This page shows lists of commands for different tasks and database types.
Table 318:
Task | Commands for SAP ASE databases | Commands for SAP HANA databases | Commands for SAP HANA schemas
Listing databases for a specific account | list-dbs [page 222] | list-dbs [page 222] | -
Listing database systems for a specific account | list-dbms [page 221] | list-dbms [page 221] | -
1.4.9.8.2 Create
Table 319:
Task | Commands for SAP ASE databases | Commands for SAP HANA databases | Commands for SAP HANA schemas
Task | Commands for SAP ASE databases | Commands for SAP HANA databases
Task | Commands for SAP ASE databases | Commands for SAP HANA databases
Table 323:
Task | Commands for SAP ASE databases | Commands for SAP HANA databases | Commands for SAP HANA schemas
1.4.9.8.7 Delete
Table 324:
Task | Commands for SAP ASE databases | Commands for SAP HANA databases | Commands for SAP HANA schemas
Table 325:
Task | Commands for SAP ASE databases | Commands for SAP HANA databases
Giving another account in the same global account access to a database | grant-db-access [page 191] | grant-db-access [page 191]
Listing all database access permissions given to another account in the same global account | list-db-access-permissions [page 220] | list-db-access-permissions [page 220]
Revoking database access for another account in the same global account | revoke-db-access [page 260] | revoke-db-access [page 260]
Giving any other account access to a database | grant-schema-access [page 193] | grant-schema-access [page 193]
Listing all database access permissions given to another account with the grant-schema-access [page 193] command | list-schema-access-grants [page 236] | list-schema-access-grants [page 236]
Revoking database access given to another account with the grant-schema-access [page 193] command | revoke-schema-access [page 262] | revoke-schema-access [page 262]
Table 326:
Task | Commands for SAP ASE databases | Commands for SAP HANA databases
Giving another account permission to open a database tunnel | grant-db-tunnel-access [page 192] | grant-db-tunnel-access [page 192]
Listing all tunnel permissions given to other accounts | list-db-tunnel-access-grants [page 224] | list-db-tunnel-access-grants [page 224]
Revoking tunnel access given to other accounts | revoke-db-tunnel-access [page 261] | revoke-db-tunnel-access [page 261]
Answers to some of the most commonly asked questions about the persistence service.
SAP Cloud Platform offers SAP ASE and SAP HANA databases. For more information, see Overview of Database
Systems and Databases [page 843].
How often does a backup occur? How much data can I lose in the worst case?
For productive databases, a full data backup is done once a day. Log backup is triggered at least every 30
minutes. The corresponding data or log backups are replicated to a secondary location every two hours. Backups
are kept (complete data and log) on a primary location for the last two backups and on a secondary location for
the last 14 days. Backups are deleted afterwards. Recovery is therefore only possible within a time frame of 14
days. Restoring the system from files on a secondary location might take some time depending on the availability.
For more information, see Restoring Database Systems [page 851] and Restoring Databases [page 874].
SAP backs up and recovers shared and dedicated database systems only as a whole.
For new database offerings such as SAP ASE and SAP HANA databases with multitenant database container
(MDC) support (beta), you can operate several databases in the same database system and recover them
individually. Thus, when binding applications to databases, you can achieve a fine grained control of the backup
and recovery.
No. Backup and restore activities are currently handled by SAP Operations.
Due to the EclipseLink bug 317597, the @Lob annotation is ignored when the corresponding table column is
created in the database. To enforce the creation of a CLOB column, you have to additionally specify
@Column(length=4001) for the property concerned. In fact, any value may be chosen as long as it is at least 4001
for SAP MaxDB or 2001 for the SAP HANA database.
I tested my app locally with the Apache Derby database, so why do I run into
SQL exceptions when deploying it in the cloud?
Different database systems use different system tables and reserved words. As an application developer, make
sure that the application does not use any of these reserved words for its own table and column names.
JPA also does not shield the application from this. If, for example, your entity class contains an attribute named
"date", this will clash with the reserved word DATE on SAP MaxDB and cause schema creation to fail upon
deployment. In such a case, the attribute should either be renamed to something else, or be mapped to another
column name in the database. This can be done using the @Column annotation like this:
@Column(name="THEDATE")
private String date;
Tips:
● Check the root cause in the application log. (A link to the log is provided in the application overview in the
cockpit. For more information, see Using Logs in the Cockpit [page 1177].)
● For a complete list of reserved words, refer to the relevant database documentation (SAP MaxDB SQL
Reference Manual , Apache Derby Documentation ).
Context
The Remote Data Sync service provides bi-directional synchronization of complex structured data between many
remote databases at the edge and SAP Cloud Platform databases at the center. The service is based on SAP SQL
Anywhere and its MobiLink technology.
● Using Remote Data Sync, you can create occasionally-connected applications at the edge. These include
applications for which a permanent connection is not suitable or economical, and applications that must
continue to operate in the face of unexpected network failures.
A single cloud database may have hundreds of thousands of data collection and action endpoints that operate in
the real world over sometimes unreliable networks. Remote Data Sync provides a way to connect all of these
remote applications and to synchronize all databases at the edge into a single cloud database.
The figure below illustrates a typical IoT scenario using the Remote Data Sync service: sensors or smart meters
create data that is sent to and stored decentrally in small embedded databases, such as SQL Anywhere or SQL
Anywhere UltraLite. To get a consolidated view of the data from all remote locations, the data is synchronized
into an SAP HANA database in the cloud using the following components:
● SQL Anywhere MobiLink clients, which run on the edge devices;
● SQL Anywhere MobiLink servers, which are provided in the cloud by the Remote Data Sync service.
New insights can later be gained through analytics and data mining on the consolidated data in the cloud.
Sizing
Before you start working with the service, you might want to check its sizing requirements in order to choose
suitable hardware for running your applications smoothly. For more information, see Performance and
Scalability of the MobiLink Server [page 995].
Prerequisites
● You have an account in a productive SAP Cloud Platform landscape (e.g. hana.ondemand.com,
us1.hana.ondemand.com, ap1.hana.ondemand.com, eu2.hana.ondemand.com).
● Your SAP Cloud Platform account has an SAP HANA instance associated to it. The Remote Data Sync service
is currently only supported with SAP HANA database as target database in the cloud.
● On the edge side, you need to install the SAP SQL Anywhere Remote Database Client version 16. You can
get a free Developer Edition. See also the existing production packages: Overview
Context
The Remote Data Sync service is not available for your SAP Cloud Platform account by default. To make it
available, fulfill the prerequisites above and then follow the procedure below to request the service for your
account.
Note
Before you start working with the service, you might want to check its sizing requirements in order to choose
suitable hardware for running your applications smoothly. For more information, see Performance and
Scalability of the MobiLink Server [page 995].
To get access to the Remote Data Sync service, you need to extend your standard SAP Cloud Platform license
with an a-la-carte license for Remote Data Sync in one of two flavors:
1. Remote Data Sync, Standard: MobiLink server on 2 cores / 4 GB RAM (price list material number: 8003943)
2. Remote Data Sync, Premium: MobiLink server on 4 cores / 8 GB RAM (price list material number: 8003944)
Next Steps
Prerequisites
● You have received the needed licenses and have enabled the Remote Data Sync service for your account. For
more information, see Getting Access to the Remote Data Sync Service [page 977].
● You have installed and configured the console client. For more information, see Using the Console Client
[page 102].
Context
To use the Remote Data Sync service, a MobiLink server must be started and bound to the SAP HANA database of
your account. This involves the following steps, which are described in detail in the procedure below:
1. Deploy the MobiLink server on a compute unit of your account using the console client.
2. Bind the MobiLink server to your SAP HANA database to connect the MobiLink server to the database.
3. Start the MobiLink server within the console client.
Note
To provision a MobiLink server in your account, you need a free compute unit of your quota. The Remote Data
Sync service license includes an additional compute unit for the MobiLink server.
Procedure
1. Deploy the MobiLink server on a compute unit of your account using the deploy command. You can
configure the MobiLink server to start with customized server options (see MobiLink Server Options ),
either during deployment using the --ev parameter, or later on using the set-application-property
command. You can also specify the compute unit by using the --size parameter of the deploy command.
○ Example: configuring the MobiLink options during deployment and starting the MobiLink server on a
premium compute unit:
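The sample configuration itself is not reproduced in this extract. A deployment along these lines might look as follows; the host, account, application, and user names, the environment variable name, and the MobiLink server options are illustrative assumptions, not verified values:

```shell
# Hypothetical sketch only: all names, the ML_SERVER_OPTIONS variable, and the
# --size value are placeholders; check the deploy command reference for your landscape.
neo deploy --host hana.ondemand.com --account myaccount \
    --application mymlserver --user myuser \
    --size premium \
    --ev "ML_SERVER_OPTIONS=-v+ -zf"
```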
2. Bind the MobiLink server to your SAP HANA database. This is needed to connect the MobiLink server to the
database.
Note
Prerequisite: You have created an SAP HANA database user dedicated to the MobiLink server instance. For
more information, see Guidelines for Creating Database Users [page 1083].
Hint: If your SAP HANA instance is configured to create database users with a temporary password
(the user is forced to reset it on first logon), reset the password before creating the binding.
Note
If you find the log message below, the binding step was missed or executed unsuccessfully:
5. You can stop or undeploy your MobiLink server. For more information, see stop [page 284] or undeploy [page
295].
Next Steps
Prerequisites
● An SQL Anywhere version 16 installation is available on the client side. For more information, see Getting
Access to the Remote Data Sync Service [page 977].
● A MobiLink server is running in your account. For more information, see Provisioning a MobiLink Server in
Your Account [page 978].
Context
This page provides a simple example that demonstrates how to synchronize data from a remote SQL Anywhere
database into the SAP HANA database, using the Remote Data Sync service and the underlying SQL Anywhere
MobiLink technology. For more information on MobiLink synchronizations, see Quick start to MobiLink
(Synchronization) .
Tip
The SQL Anywhere database running on the client side is called remote database. The central SAP HANA
database running on SAP Cloud Platform is called consolidated database.
Procedure
Sample Code
4. Create a publication
4. Choose the Back button in the toolbar menu to get back to the root task level.
9. Run a synchronization
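The sample code for these steps is not reproduced in this extract. A client-initiated synchronization with the dbmlsync tool might be triggered along these lines; the connection string and the MobiLink server host name are illustrative placeholders:

```shell
# Hypothetical sketch: connect to the remote SQL Anywhere database and
# synchronize over HTTPS against the MobiLink server running in the cloud.
dbmlsync -c "DBF=remote.db;UID=DBA;PWD=sql" \
    -e "ctp=https;adr='host=mymlserver.hana.ondemand.com;port=443'"
```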
Next Steps
Related Information
Context
You can access the MobiLink server logs both in the cockpit and the console client.
Procedure
4. In the Most Recent Logging section, click the icon to view the logs, or the icon to download them.
Related Information
This page helps you to achieve end-to-end traceability of all synchronizations done via the Remote Data Sync
service of SAP Cloud Platform. This way, you can track who made what changes during work on the SAP HANA
target database in the cloud.
To monitor and record which users performed selected actions on the SAP HANA database, you can use the SAP
HANA Audit Activity with Database Table as the trail target. To use this feature, it must first be activated for your
SAP HANA database, which can be done via SAP HANA Studio by a database user with the role HCP_SYSTEM.
● Using an SAP HANA database table as the trail target makes it possible to query and analyze auditing
information quickly. It also provides a secure and tamper-proof storage location.
● Audit entries are only accessible through the public system view AUDIT_LOG. Only SELECT operations can be
performed on this view by users with the system privilege AUDIT OPERATOR or AUDIT ADMIN.
For more information about how to configure audit policy, see SAP HANA Administration Guide and SAP HANA
Security Guide.
Note
These links point to the latest release of SAP HANA Administration Guide and SAP HANA Security Guide. Refer
to the SAP Cloud Platform Release Notes to find out which HANA SPS is supported by SAP Cloud Platform.
Find the list of guides for earlier releases in the Related Links section below.
In addition to the SAP HANA audit logs, you might want to use the MobiLink server logs to achieve end-to-end
traceability.
● We recommend that you set the log level of the MobiLink server to a value that produces logs at a granularity
useful for end-to-end traceability of the performed synchronization operations, for example, the log level
-vtRU. For more information about this log level configuration, see the -v parameter documentation .
● To configure the log level, use the deploy command in the console client. For more information, see
Provisioning a MobiLink Server in Your Account [page 978].
Remember
SAP Cloud Platform retains the MobiLink server log files for only a week. To fulfill legal requirements regarding
the retention of audit log files, make sure you download the log files regularly (at least once a week) and keep
them for as long as your local laws require.
Related Information
Context
This section provides information about security-related operations and configurations you can perform in a
Remote Data Sync scenario.
Currently, as part of SAP Cloud Platform, the MobiLink servers support only basic authentication. For more
information, see User Authentication Architecture .
Tasks
There are different options for configuring the HTTPS connection, depending on the SQL Anywhere
synchronization tool used to trigger synchronizations:
○ When using the SQL Anywhere dbmlsync command line tool to trigger client-initiated synchronizations,
trusted certificates can be specified using the trusted_certificates parameter as described here .
○ When using the Sybase Central UI to trigger client-initiated synchronizations, you can specify Trusted
certificates as described here .
Related Information
MobiLink Users
MobiLink Security
Prerequisites
● An SQL Anywhere version 16 installation is available on the client side. For more information, see Getting
Access to the Remote Data Sync Service [page 977].
● A MobiLink server is running in your account. For more information, see Provisioning a MobiLink Server in
Your Account [page 978].
Context
This page describes how the existing SQL Anywhere tools (SQL Anywhere Monitor and MobiLink Profiler)
can be connected to and used with the Remote Data Sync service running on SAP Cloud Platform.
Related Information
MobiLink Profiler
Context
SQL Anywhere Monitor comes as part of the standard SQL Anywhere installation. You can find it under the
Administrative Tools of SQL Anywhere 16. The tool provides basic information about the health and availability of
an SQL Anywhere and MobiLink landscape. It also gives basic performance information and overall
synchronization statistics of the MobiLink server.
Procedure
1. To start the SQL Anywhere Monitor tool, open the SQL Anywhere 16 installation and go to Administrative
Tools.
2. Open the SQL Anywhere Monitor dashboard via URL: http://<host_name>:4950, where <host_name> is
the host of the computer where SQL Anywhere Monitor is running.
3. Log in with the default credentials: user admin, password admin.
○ MobiLink server:
○ As Host, specify the fully qualified domain name of the MobiLink server running in your SAP Cloud
Platform account.
○ As Port, specify 8443.
○ As Connection Type, specify HTTPS. Leave the rest unchanged.
Next Steps
SQL Anywhere Monitor also allows you to configure e-mail alerts for synchronization problems. For more
information, see Alerts .
Related Information
Context
MobiLink Profiler comes as part of the standard SQL Anywhere installation. You can find it under Administrative
Tools of SQL Anywhere 16. The tool collects statistical data about all synchronizations during a profiling session,
and provides performance details of the single synchronizations, down to the detailed level of a MobiLink event. It
also provides access to the synchronization logs of the MobiLink server. Therefore, the tool is mostly used to
troubleshoot failed synchronizations or performance issues, and during the development phase to further analyze
synchronizations, errors or warnings.
Procedure
1. Start the MobiLink Profiler under Administrative Tools of SQL Anywhere 16. The tool is a desktop client and
does not run in a Web browser.
2. Open File > Begin Profiling Session to connect to the MobiLink server of your cloud account.
3. In the Connect to MobiLink Server window, provide the appropriate connection details, such as:
○ Host: specify the fully qualified domain name of the MobiLink server running in your SAP Cloud Platform
account.
Next Steps
To learn more about the UI of the MobiLink Profiler, see MobiLink Profiler Interface .
Prerequisites
● An SQL Anywhere version 16 installation is available on the client side. For more information, see Getting
Access to the Remote Data Sync Service [page 977].
● A MobiLink server is running in your account. For more information, see Provisioning a MobiLink Server in
Your Account [page 978].
Context
This page describes how you can configure an availability check for your MobiLink server and subscribe recipients
to receive alert e-mail notifications when your server is down or responds slowly. It also lists recommended
actions in case of issues.
Procedure
Example:
Example:
Tip
To add multiple e-mail addresses, separate them with commas. We recommend that you use distribution
lists rather than personal e-mail addresses. Keep in mind that you remain responsible for handling personal
e-mail addresses in accordance with the applicable data privacy regulations.
Next Steps
● Check the logs. In case of synchronization errors, use the MobiLink Profiler tool to drill down into the
problem for root cause analysis.
● If the server startup parameters are incorrect, reset the MobiLink server.
● If your MobiLink server hangs, restart it.
Related Information
Configuring Availability Checks for Java Applications from the Console Client [page 1195]
This page provides sizing information for applications using the Remote Data Sync service.
Although the only realistic answers to optimal resource planning are “It depends” and “Testing will show what you
need”, this section aims to help you choose the right hardware parameters.
Synchronization Phases
The figure below shows the major phases of a synchronization session. Though not complete, it covers many
common use cases.
1. Synchronization is initiated by a remote database client. It uploads any changes made at the remote database
to the server.
2. MobiLink applies the changes to the database.
Roughly, the MobiLink server uses two thread pools – one for database connections, and one for the network side.
These can be controlled by command-line options, although, by default, the Remote Data Sync service
dynamically tunes the size of the worker thread pool to accommodate load changes.
Database Capacity
When the Remote Data Sync server applies changes to the consolidated database and prepares changes to be
sent to the remote database client, it typically does so by executing SQL statements or stored procedures that are
invoked by MobiLink events. For example, to apply an upload MobiLink may execute insert, update, and delete
statements for each table being synchronized; to prepare a download MobiLink may execute a query for each
table being synchronized.
Database tuning is outside the scope of this document, but the load on the database can be substantial. Think of
MobiLink as a concentrator of database load. All the operations that are carried out against the remote database
while disconnected, in addition to the requests for updates to be downloaded to the remote database, are
executed in two transactions (1 upload, 1 download) against the consolidated database. This can place a heavy
load on the database.
As a starting point, you should know the number of concurrent synchronizations, and from there, calculate
back to the required resources. Typically, this number is limited by RAM requirements. To estimate it, you need
typical upload and download data volumes as a starting point.
A machine with N MB of RAM can have C clients each with about V MB of upload or download data volume, where
C = N/V.
Remote Data Sync servers are not typically CPU intensive, and typically require less than half the processing that
is required by the consolidated database. When selecting the appropriate compute units for MobiLink, memory is
more likely to limit the maximum sustainable throughput for a Remote Data Sync server than CPU.
Example:
1. Let's assume the database can process the target load of L synchronizations per second (and that is a matter
for testing).
2. At this throughput, one database thread will become available every 1/L seconds. To keep throughput high, a
synchronization request should be ready, with data uploaded and available to pass to the database thread.
3. To keep the database busy, if a synchronization request takes t seconds to upload (which will depend on
network speed and data volume, and which should be determined by testing), then the Remote Data Sync
server must be able to hold (L x t) client data uploads in memory.
4. The Remote Data Sync server must also be able to download the data to the client to prevent the database
threads from having to wait for a network connection to download. If this volume is similar to that of the
uploads, we end up with: MobiLink should be able to support (2 x L x t) simultaneous synchronizations to
maintain a throughput of L synchronizations per second.
Note
For example, to support a peak sustained throughput of 50 synchronizations per second, with a client that
takes 0.5 seconds to upload and download data, then the Remote Data Sync server should be able to support
50 simultaneous synchronizations in RAM to sustain this rate as a peak throughput. Assuming data transfer
volumes per client are less than 80 MB (which is a very high number for data synchronization), a Standard
machine would be a good choice to start with.
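The arithmetic in the example above can be sketched as follows; all numbers are illustrative assumptions, not measured values:

```shell
# Illustrative sizing arithmetic only; replace the values with your own measurements.
N=4096       # available RAM in MB (assumption)
V=80         # typical upload/download volume per client, in MB (assumption)
echo "max concurrent clients: $((N / V))"               # C = N/V

L=50         # target synchronizations per second (assumption)
t_tenths=5   # upload time per synchronization, in tenths of a second (0.5 s)
echo "simultaneous syncs: $((2 * L * t_tenths / 10))"   # 2 x L x t
```

With these numbers, a machine with 4096 MB of RAM supports about 51 concurrent clients, and sustaining 50 synchronizations per second at 0.5 s per upload requires 50 simultaneous synchronizations, matching the Note above.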
The SAP Cloud Platform Git service can be used to store and version source code of applications, for example
HTML5 and Java applications, in Git repositories.
Git is a widely used open source system for revision management of source code that facilitates distributed and
concurrent large-scale development workflows.
Features
● Highly distributed. Every clone of a repository contains the complete version history.
● Cheap and simple creation and merging of branches supporting a multitude of development styles.
● Almost all operations are performed on a local clone of a repository and therefore are very fast.
● No need to be permanently online, only when synchronizing with the Git service.
● Only differences between versions are recorded allowing for very compact storage and efficient transport.
● Widely used and supported by many tools.
Restrictions
While Git can manage and compare text files very efficiently, it was not designed for processing large files or files
with binary content, such as libraries, build artifacts, multimedia files (images or movies), or database backups.
Consider using the document service or some other suitable storage service for storing such content.
To ensure best possible performance and health of the service, the following restrictions apply:
● The size of an individual file must not exceed 20 MB. Pushes of changes that contain a larger file will be
rejected.
● The overall size of the bare repository stored in the Git service must not exceed 500 MB.
● The number of repositories per account is not currently limited. Note, however, that SAP may take measures
to protect the Git service against misuse.
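A quick local pre-push check against the 20 MB file limit might look like this; the sketch creates a temporary demo directory and an oversized demo file only to be self-contained:

```shell
# Sketch: scan a working tree for files above the 20 MB push limit.
workdir=$(mktemp -d)
dd if=/dev/zero of="$workdir/big.bin" bs=1048576 count=21 2>/dev/null  # 21 MB demo file
limit_mb=20
big_files=$(find "$workdir" -type f -size +"${limit_mb}M" -print)
if [ -n "$big_files" ]; then
  echo "files exceeding ${limit_mb} MB:"
  echo "$big_files"
fi
```

In a real repository, run the find over your working tree (excluding the .git directory) before pushing.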
This product makes use of the Git-Icon-1788C image made available by Git (https://git-scm.com/downloads/
logos ) under the Creative Commons Attribution 3.0 Unported License (CC BY 3.0) http://
creativecommons.org/licenses/by/3.0 .
Related Information
In the cockpit, you can create and delete Git repositories, as well as lock and unlock repositories for write
operations. In addition, you can monitor the current disk consumption of your repositories and perform garbage
collections to clean up and compact repository content.
Related Information
In the SAP Cloud Platform cockpit, you can create Git repositories for your accounts.
Prerequisites
Context
Note
To create a repository for the static content of an HTML5 application, see Creating an HTML5 Application [page
1115].
1. Log on to the SAP Cloud Platform cockpit, and select the required account.
Table 327:
Field Entry
Name Mandatory. Enter a unique name starting with a lowercase letter, followed by digits and lowercase
letters. The name is restricted to 30 characters.
Description Optional. Enter a descriptive text for the repository. You can change this description later on.
Create empty commit Select this checkbox if you want to have an initial empty commit in the history of the repository.
This might be useful if you want to import the content of another repository.
4. Choose OK.
Results
The URL of the Git repository is displayed under Source Location on the detail page of the repository. You can use
this URL to access the repository with a standard-compliant Git client. Note that you cannot use this URL in a
browser to access the Git repository.
Related Information
Permissions for Git repositories are granted based on the account member roles of users. To grant an account
member access to a Git repository, assign one of these roles: Administrator, Developer, or Support User.
Prerequisites
Context
For details about the permissions associated with the individual roles, see Security [page 1008].
Procedure
Make sure that you assign at least one of these roles: Administrator, Developer, or Support User.
Related Information
In the SAP Cloud Platform cockpit, you can change the state of a Git repository temporarily to READ ONLY to
block all write operations. Read access will still be possible.
Prerequisites
1. Log on to the SAP Cloud Platform cockpit, and select the required account.
2. In the list of Git repositories, locate the repository you want to work with and follow the link on the repository's
name.
3. On the details page of the repository, choose Set Read Only.
Results
The state flag of the repository changes from ACTIVE to READ ONLY and all further write operations on this
repository are prohibited.
Note
To unlock the repository again and allow write access, choose Set Active on the details page of the repository.
In the SAP Cloud Platform cockpit, you can delete a Git repository unless it is associated with an HTML5
application. In that case, delete the HTML5 application instead.
Prerequisites
Context
Caution
Be very careful when using this operation. Deleting a Git repository also permanently deletes all its data and the
complete history. In case you need to restore the content later on, clone the repository to some other storage
before deleting it from the Git service.
1. Log on to the SAP Cloud Platform cockpit, and select the appropriate account.
In the SAP Cloud Platform cockpit, you can trigger a garbage collection for a repository to clean up unnecessary
objects and compact the repository content aggressively.
Prerequisites
Context
Perform this operation from time to time to ensure the best possible performance for all Git operations. In
addition, the Git service runs normal garbage collections periodically.
Note
This operation might take a considerable amount of time and might impact the performance of some Git
operations while it is running.
Procedure
1. Log on to the SAP Cloud Platform cockpit, and select the required account.
The garbage collection runs in the background. You can use the Git repository without restrictions while the
process is running.
The following assumes that you are familiar with the concepts of Git and that you have access to a suitable Git
client, for example SAP Web IDE, to perform Git operations.
If you are new to Git, we strongly recommend that you read a text book about Git and consult the Best Practices
guide before using the Git service. The Troubleshooting Guide helps solve some common issues you may
encounter when working with the Git service.
Related Information
The URL of the Git repository is displayed under Source Location on the details page of the repository. You can
use this URL to access the repository using a Git client.
Prerequisites
In the account where the repository resides, you are an account member with the role Administrator, Developer,
or Support User.
Procedure
1. Log on to the SAP Cloud Platform cockpit, and select the required account.
You need to clone the Git repository of your application to your development environment.
Procedure
1. In the cockpit, copy the link to the Git repository of your application.
a. Log on with a user (who is an account member) to the SAP Cloud Platform cockpit.
b. Choose Applications > HTML5 Applications in the navigation area.
c. Click your newly created application.
d. Switch to the Versioning tab.
e. Under Source Location, copy the link that points to the Git repository of your application.
2. You can either use Eclipse or the Git command line tool to execute this step.
○ To use Eclipse:
1. Start the Eclipse IDE.
2. Open the Git Repositories view in the JavaScript perspective.
3. Choose the Clone a Git repository icon.
4. Paste the link that points to the Git repository of your application.
5. If prompted, enter your SCN user and password.
6. Choose Next.
○ To use the Git command line tool:
1. Enter the following line:
$ git clone <repository URL>
2. If prompted, enter your SCN user ID and password.
Related Information
EGit/User Guide
Web IDE: Cloning a Repository
The Git fetch operation transfers changes from the remote repository to your local repository.
Prerequisites
● You are an account member with the role Administrator, Developer, or Support User.
● You have cloned the repository to your workspace, see Cloning a Repository [page 1005].
Context
Refer to the SAP Web IDE documentation if you want to fetch changes to SAP Web IDE. Otherwise, see the
documentation of your Git client to learn how to fetch changes from a remote Git repository.
Procedure
Related Information
The Git push operation transfers changes from your local repository to a remote repository.
Prerequisites
Refer to the SAP Web IDE documentation if you want to push changes from SAP Web IDE. Otherwise, see the
documentation of your Git client to learn how to push changes to a remote Git repository.
Procedure
Related Information
The Git service offers a web-based repository browser that allows you to inspect the content of a repository.
Prerequisites
In the account where the repository resides, you are an account member with the role Administrator, Developer,
or Support User.
Context
The repository browser gives read-only access to the full history of a Git repository. This includes its branches and
tags as well as the content of the files. Moreover, it allows you to download specific versions as ZIP files.
The repository browser automatically renders *.md Markdown files into HTML to make it easier to create
documentation.
Procedure
1. Log on to the SAP Cloud Platform cockpit, and select the required account.
1.4.11.3 Security
Access to the Git service is protected by SAP Cloud Platform roles and granted only to members of an account.
Restrictions
The Git service cannot be used to host public repositories or repositories with anonymous access.
Authentication
Access to a Git repository is only granted to users authenticated by the SAP ID service. When sending requests,
users must authenticate with SAP ID service credentials.
Permissions
The permitted operations depend on the account member role of the user.
Read access is granted to all users with the Administrator, Developer, or Support User role. They have permission
to:
● Clone a repository.
● Fetch commits and tags.
Write access is granted to all users with the Administrator or Developer role. They have permission to:
● Create repositories.
● Push commits.
● Push tags.
Note
If the repository is associated with an HTML5 application, pushing a tag defines a new version for the
HTML5 application. The version name will be the same as the tag name.
● Delete repositories.
● Run garbage collection on repositories.
● Lock and unlock repositories.
● Delete remote branches.
● Delete tags.
● Push commits committed by other users (forge committer identity).
● Forcefully push commits, for example to rewrite the history of a Git repository.
● Forcefully push tags, for example to move the version of an HTML5 application to a different commit.
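Expressed as Git commands, the write operations above correspond, for example, to the following; the remote, branch, and tag names are placeholders:

```shell
git push origin master               # push commits (Developer or Administrator)
git push origin v1.0.0               # push a tag; for an HTML5 app this defines a version
git push origin --delete mybranch    # delete a remote branch (Administrator only)
git push origin :refs/tags/v1.0.0    # delete a tag (Administrator only)
git push --force origin master       # rewrite history (Administrator only)
```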
Related Information
If you are new to Git, we strongly recommend reading a text book about Git, searching the Internet for the
documentation and guides available online, or getting in touch with the large worldwide community of developers
working with Git.
Note
The only valid exception to this rule is if you accidentally pushed a secret, for example a password, to the
Git service.
1.4.11.5 Troubleshooting
While working with the Git service, you might encounter these common problems and error messages. Note that
the actual error messages and their presentation depend on the Git client you are using for communication with
the Git service.
General Issues
● All remote operations on a repository fail with Authentication failed for ....
Make sure that you enter your correct SAP ID credentials. Check that you can log on to the SAP Cloud
Platform, for example to the cockpit. If that fails as well, your account may have been locked temporarily due
to too many failed logon attempts. If the problem persists, contact SAP Support for help.
● A remote operation on a repository fails with Git access forbidden.
You don't have permission to access the repository at all or to perform the requested Git operation. Ensure
that you are member of the account that owns the repository. For read access (clone, fetch, pull), you
must have the role Administrator, Developer, or Support User. For write access (push, push tags), you must
have the Administrator or Developer role. For more information about required roles for certain Git
operations, see Security [page 1008].
● Pushes of changes fail with a message similar to this one: You are not allowed to perform this
operation. To push into this reference you need 'Push' rights. ... HEAD -> master
(prohibited by Gerrit).
You don't have the account member role Developer or Administrator or the repository is currently locked for
write operations. Check your roles in the SAP Cloud Platform cockpit or ask an account administrator to
assign the necessary roles. Check the state of the repository in the cockpit and unlock it to enable write
operations.
● Pushes of changes fail with You are not committer ....
The Git service verifies that the e-mail address of the committer associated with a commit matches the e-mail
address you registered with the SAP ID service.
Users with the account member role Developer are not allowed to submit changes in the name of other users
(forge committer identity). This error might indicate that your Git client is not properly configured to use the
e-mail address registered with the SAP ID service. To check your client configuration, use the git config
command:
$ git config -l
...
user.name=John Doe
user.email=john.doe@example.com
...
To submit changes in the name of another user, for example when transferring changes between different
repositories, you must have the account member role Administrator.
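As a sketch, the committer e-mail address can be aligned with the address registered at the SAP ID service as follows; the example uses a temporary demo repository, and the name and address are placeholders:

```shell
# Set the committer identity in a (temporary, demo) repository; to set it for
# all repositories, use git config --global instead.
repo=$(mktemp -d)
git -C "$repo" init -q
git -C "$repo" config user.name  "John Doe"                # placeholder name
git -C "$repo" config user.email "john.doe@example.com"    # placeholder address
git -C "$repo" config user.email
```

Reading the value back (last line) confirms the configured address.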
● Deleting a tag or remote branch fails.
Users with the account member role Developer are not allowed to delete or move tags or to delete remote
branches. You must have the account member role Administrator to do this.
● Pushes of changes fail with Pack exceeds the limit of ..., rejecting the pack.
This error message indicates that the maximum size of your Git repository would be exceeded by accepting
this change. The Git service imposes a hard limit of 500 MB as the maximum size of repositories to ensure the
best possible performance and health of the service. You can see this limit in the SAP Cloud Platform cockpit
as well as your current repository size.
Run a garbage collection in the SAP Cloud Platform cockpit to clean up unnecessary objects and compact the
repository content. If this does not significantly reduce the size of the repository, this usually indicates that
the repository contains build artifacts or some other binary data that cannot be compressed efficiently and
not just source code. Remove such files from the history of the repository and consider storing them outside
the Git service.
Related Information
Using YaaS together with SAP Cloud Platform, you can develop business services that are consumed in your
cloud applications.
Overview
With YaaS you can develop business services, publish and sell them through the YaaS Market, and consume them
in your cloud applications. The core design principle behind YaaS is a microservice architecture, which enables
you to build a flexible and scalable platform. A microservice architecture is another method of bundling
components into services. The approach is to develop a single application as a suite of small services, each
running in its own process and communicating with lightweight mechanisms, often an HTTP resource API. These
services are built around business capabilities and are independently deployable by fully automated deployment
machinery.
In YaaS, you develop the following component types when providing new services or consuming existing ones:
● Business services
A business service is simply a microservice that provides a specific business functionality, such as products,
loyalty, or orders. A set of business services is grouped into a package that gets published on the YaaS
Market.
A business service is a Web application with a RESTful API that exposes the resources and functions of the
service. The service implementation should follow the guidelines for microservices so that the service has a
clearly defined scope, is highly scalable, resilient against failures, and self-contained. We recommend using
the YaaS Service SDK to create new business services, as it provides a lightweight framework that helps
with the API definition and implementation of the service. The SDK uses RAML (RESTful API Modeling
Language) as an API modeling language and provides code generators to create the JAX-RS compliant Java code.
● Builder modules
A builder module is a user interface in the YaaS Builder application, in which the backoffice functionality of a
business service is managed; for example, an administration UI for a service published in YaaS.
Builder modules are the backoffice clients of YaaS. They allow users to manage the service data from the user
interface. Typically, a builder module is an HTML5 application calling the service APIs. As such, it is easy to
deploy as a Java Web Tomcat 7 application on SAP Cloud Platform: create a builder module according to
the tutorial on the YaaS Dev Portal , add a Cross-Origin Resource Sharing (CORS) configuration to the web.xml,
and build a WAR file.
● Applications
An application is able to consume the business services to which it is subscribed. Subscribing to existing
packages is possible via the YaaS Market.
Table 328:
This is where YaaS and SAP Cloud Platform stand in the whole picture.
The Services are the building blocks of YaaS. They are small, isolated applications that are responsible for one
single piece of functionality.
The Builder SDK helps you create this UI. It is a command-line interface that runs the Builder in developer
mode, which lets you implement a builder module faster and more efficiently.
To try out an example business service, follow the steps in the Tutorial: Creating a Wishlist Service [page 1015].
For example, once subscribed to the Wishlist package, you can use the wishlist service API to create and manage
wishlists. As a Dev Team member, you can develop and register the Wishlist package.
The Wishlist package contains the services and the builder modules, and you can make them available in the YaaS
Market. Other users can subscribe to this package and use these services and builder modules.
Context
Using YaaS, you can build business services and builder modules that run on SAP Cloud Platform. Then, you can
use those services in cloud applications, which again can run on SAP Cloud Platform. The example used here
refers to the Wishlist service example described in the YaaS Dev Portal .
Procedure
In the first step of this tutorial, you will learn how to create and start a business service, and then register it in the
YaaS Builder.
Prerequisites
● You have set up Maven in order to use the YaaS Service SDK.
The YaaS Service SDK uses Maven to resolve all additional software dependencies that are necessary to
create, build, test, run, and debug your new service. See Set up Maven .
Note
If you work in a proxy environment, make sure you set the proxy host and port correctly using the following
command:
Procedure
1. Create the Wishlist service and test it locally using, for example, the Jetty Web server. See Create a Wishlist
Service.
2. (Optional) Import an existing Wishlist service.
If you want to use existing SAP Cloud Platform services, you can import the Wishlist service you have already
created in Eclipse.
a. Run the mvn eclipse:m2eclipse command before importing the project.
b. Import the project in Eclipse. To do that, choose File > Import > Existing Maven Projects . In the Root
Directory field, browse to your project and choose OK. Then, choose Finish.
3. Use a HANA database on SAP Cloud Platform to persist the Wishlist service data.
You can use the persistence service to store data in the HANA database on SAP Cloud Platform, use the
document service to store and retrieve BLOBs, or use the connectivity service to fetch or push data to an on-
premise system. You can find an implementation of the Wishlist service that uses a HANA database on SAP
Cloud Platform at GitHub . The most important parts of the implementation are the following:
a. Open the pom.xml file, copy all the code inside the <dependencies> tags, and replace the corresponding
section in the pom.xml file of your project in Eclipse. The logging libraries are provided by default in the
SAP Cloud Platform runtime environments. You also have to add the following code in the pom.xml file:
<plugin>
<artifactId>maven-war-plugin</artifactId>
<configuration>
<packagingExcludes>
WEB-INF/lib/logback-classic-*.jar,
WEB-INF/lib/logback-core-*.jar,
WEB-INF/lib/slf4j-api-*.jar
</packagingExcludes>
</configuration>
</plugin>
b. Configure the persistence using JPA, specify the database connection and the Wishlist and Wishlist item
entity classes.
First you need to create a persistence.xml file in your project. You can automatically do that by adding
the JPA 2.0 facet in the project from Properties > Project Facets . Then, open the META-INF/
persistence.xml in GitHub and copy and paste the code in your persistence.xml file in Eclipse.
c. Configure the Spring framework that is used in the Wishlist service implementation.
For each of the RESTful resources, which have been defined in the RAML definition of the service, a
separate method has been generated into the API implementation class. The implementation of the
service, that is the wiring with the persistence service in this example, goes in here. Please note that the
generated methods get a parameter of type YaasAwareParameters as input. This class contains
methods to retrieve information propagated from YaaS to the service, such as getHybrisTenant(),
which is used to provide a multitenant enabled service implementation. Open the
com.sample.wishlist.api.generated package and copy and paste the code into the respective files.
4. Build the project.
Build the project using the mvn clean install command in the console client. This creates a WAR file in
the target directory of the project.
5. Deploy the WAR file created in step 4.
Deploy the WAR file into your SAP Cloud Platform account using the deploy UI of the SAP Cloud Platform
cockpit or the neo command in the console client. Choose Java Web Tomcat 7 as Java runtime for the service.
6. Start the Wishlist service.
Start the service in a Web browser via the application URL shown in the SAP Cloud Platform cockpit. This
opens the built-in RAML API Console in a browser, which shows the REST API documentation, and provides a
console that allows you to interact with your API from within that documentation.
7. Register the Wishlist service.
Once the implementation is finished, register the service in the YaaS Builder. See Register a Service in the
Builder .
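The multitenant handling described in step 3 can be sketched in plain Java. This is a schematic illustration only: YaasAwareParameters is modeled here as a minimal stand-in class, whereas in a real project it is generated by the YaaS Service SDK from the RAML definition, and the in-memory map stands in for the persistence service.

```java
// Schematic sketch of a tenant-aware service method. YaasAwareParameters is
// a stand-in; the real class is generated by the YaaS Service SDK.
import java.util.HashMap;
import java.util.Map;

public class TenantAwareWishlists {

    /** Stand-in for the SDK-generated parameter holder. */
    static class YaasAwareParameters {
        private final String hybrisTenant;
        YaasAwareParameters(String hybrisTenant) { this.hybrisTenant = hybrisTenant; }
        String getHybrisTenant() { return hybrisTenant; }
    }

    // One wishlist store per tenant, so data of different tenants never mixes.
    private final Map<String, Map<String, String>> storePerTenant = new HashMap<>();

    public void createWishlist(YaasAwareParameters yaasAware, String id, String owner) {
        storePerTenant
            .computeIfAbsent(yaasAware.getHybrisTenant(), t -> new HashMap<>())
            .put(id, owner);
    }

    public String getWishlistOwner(YaasAwareParameters yaasAware, String id) {
        return storePerTenant
            .getOrDefault(yaasAware.getHybrisTenant(), new HashMap<>())
            .get(id);
    }

    public static void main(String[] args) {
        TenantAwareWishlists service = new TenantAwareWishlists();
        // Two tenants create a wishlist with the same ID; data stays separated.
        service.createWishlist(new YaasAwareParameters("tenantA"), "w1", "alice");
        service.createWishlist(new YaasAwareParameters("tenantB"), "w1", "bob");
        System.out.println(service.getWishlistOwner(new YaasAwareParameters("tenantA"), "w1"));
        System.out.println(service.getWishlistOwner(new YaasAwareParameters("tenantB"), "w1"));
    }
}
```

The essential point is that every generated service method receives the tenant via getHybrisTenant() and must scope all data access by it.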
Next Steps
In the second step of this tutorial, you will learn how to create a Builder module.
Context
To create a builder module for the Wishlist service, follow these steps. You can find a simple builder module for
the Wishlist service at GitHub .
1. Following the Create a Builder Module tutorial, create a module with default content using the Builder
SDK: builder createModule wishlistModule.
2. Enable CORS requests.
To use CORS, a simple option is to enable the built-in CORS servlet filter that comes with Java Web Tomcat 7.
To enable static content as well, configure the Default servlet, because servlet filters are only applied to
servlets configured in the web.xml. A typical wishlistModule/WEB-INF/web.xml looks like this:
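As a sketch, such a web.xml could look like the following, based on Apache Tomcat's built-in CorsFilter; the allowed origin shown is an illustrative assumption, not a value prescribed by YaaS.

```xml
<?xml version="1.0" encoding="UTF-8"?>
<web-app xmlns="http://java.sun.com/xml/ns/javaee" version="3.0">
    <!-- Built-in Tomcat CORS filter; the allowed origin is illustrative. -->
    <filter>
        <filter-name>CorsFilter</filter-name>
        <filter-class>org.apache.catalina.filters.CorsFilter</filter-class>
        <init-param>
            <param-name>cors.allowed.origins</param-name>
            <param-value>https://builder.example.com</param-value>
        </init-param>
    </filter>
    <filter-mapping>
        <filter-name>CorsFilter</filter-name>
        <url-pattern>/*</url-pattern>
    </filter-mapping>
    <!-- Map the Default servlet explicitly so the filter also applies to static content. -->
    <servlet-mapping>
        <servlet-name>default</servlet-name>
        <url-pattern>/</url-pattern>
    </servlet-mapping>
</web-app>
```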
Next Steps
In the third step of this tutorial, you will learn how to build and deploy an application using the YaaS Storefront
template.
Context
The YaaS Storefront is a ready-to-use template that is integrated with the Commerce service package and other
third-party services such as search, payment, and tax. The Storefront application template is a pure HTML5
application. The easiest way to deploy and run this application on SAP Cloud Platform is to create a WAR file. See
Set Up a Storefront Application .
2. Create the WAR file.
Add the content of the dist/public folder to a ZIP file and name the archive ROOT.WAR so that the application is
deployed to the root context. Otherwise, the application interprets the first path segment as a tenant.
3. Deploy the application.
You can deploy the application to SAP Cloud Platform using one of the following options:
1.5 Applications
Table 329:
To learn about | See
How to develop SAP Cloud Platform applications | Develop Applications [page 1020]
How to operate SAP Cloud Platform applications | Operate Applications [page 1136]
Table 330:
To learn about | See
How to develop, deploy and manage Java applications in a cloud environment | Java: Development [page 1021]
How to create comprehensive analytical models and build applications with SAP HANA's programmatic interfaces and integrated development environment | SAP HANA: Development [page 1078]
How to develop and run lightweight HTML5 applications in a cloud environment | HTML5: Development [page 1111]
How to use the UI development toolkit for HTML5 (SAPUI5) to build and adapt client applications based on SAP Cloud Platform | UI development toolkit for HTML5 (SAPUI5)
Note
If your application uses a platform service and that service becomes temporarily unavailable due to a restart or
a temporary problem, develop your application so that it automatically resumes its normal running state when
the service becomes available again.
You can achieve this by wrapping the calls to the service so that an erroneous state is expected and the calls can
be retried later. Applications should not fall into an unrecoverable state, as this mandates an application restart.
In addition, applications can mitigate temporarily missing functionality by displaying data in their user interface
only partially.
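Such wrapping can be absorbed with a small retry helper, as in the following sketch; the names, attempt count, and backoff policy are illustrative, not a platform API.

```java
// Minimal retry sketch for calls to a platform service that may be briefly
// unavailable (e.g. during a restart). Wrap the real service call in the Callable.
import java.util.concurrent.Callable;

public class RetryingCall {

    public static <T> T callWithRetry(Callable<T> call, int maxAttempts, long waitMillis)
            throws Exception {
        Exception last = null;
        for (int attempt = 1; attempt <= maxAttempts; attempt++) {
            try {
                return call.call();
            } catch (Exception e) {
                last = e;                                    // remember the failure
                if (attempt < maxAttempts) {
                    Thread.sleep(waitMillis);                // back off before retrying
                }
            }
        }
        throw last;  // still failing: surface the error instead of hanging
    }

    public static void main(String[] args) throws Exception {
        // Simulated service: fails twice, then succeeds.
        int[] calls = {0};
        String result = callWithRetry(() -> {
            if (++calls[0] < 3) throw new IllegalStateException("service unavailable");
            return "OK";
        }, 5, 10);
        System.out.println(result + " after " + calls[0] + " attempts");
    }
}
```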
Note
For information about platform services, go to Services [page 307].
SAP Cloud Platform enables you to develop, deploy and use Java applications in a cloud environment.
Applications run on a runtime container where they can use the platform services APIs and Java EE APIs
according to standard patterns.
The SAP Cloud Platform Runtime for Java enables the provisioning and running of applications on the platform. The
runtime is represented by Java Virtual Machine, Application Runtime Container and Compute Units. Cloud
applications interact at runtime with the containers and services via the platform APIs.
During and after development, you can configure and operate an application using the cockpit and the console
client.
Appropriate for
● Developing and running Java Web applications based on standard JSR APIs
● Executing Java Web applications which include third-party Java libraries and frameworks supporting standard
JSR APIs
● Supporting Apache Tomcat Java Web applications.
Related Information
The SAP Cloud Platform Runtime for Java comprises the components which create the environment for
provisioning and running applications on SAP Cloud Platform. The runtime is represented by Java Virtual
Machine, Application Runtime Container and Compute Units. Cloud applications can interact at runtime with the
containers and services via the platform APIs.
Components
Related Information
SAP Cloud Platform infrastructure runs on SAP's own implementation of a Java Virtual Machine - SAP Java Virtual
Machine (JVM).
SAP JVM
The SAP JVM is a standard compliant certified JDK, supplemented by additional supportability and developer
features and extensive monitoring and tracing information. All these features are designed as interactive, on-
demand facilities of the JVM with minimal performance impact. They can be switched on and off without having to
restart the JVM (or the application server that uses the JVM).
Debugging on Demand
With SAP JVM debugging on demand, Java developers can activate and deactivate Java debugging directly –
there is no need to start the SAP JVM (or the application server on top of it) in a special mode. Java debugging in
the SAP JVM can be activated and deactivated using the jvmmon tool, which is part of the SAP JVM delivery. This
feature does not lower performance if debugging is turned off. The SAP JVM JDK is delivered with full source code
providing debugging information, making Java debugging even more convenient.
Profiling
To address the root cause of all performance and memory problems, the SAP JVM comes with the SAP JVM
Profiler, a powerful tool that supports the developer in identifying runtime bottlenecks and reducing the memory
footprint. Profiling can be enabled on-demand without VM configuration changes and works reliably even for very
large Java applications.
The user interface – the SAP JVM Profiler – can be easily integrated into any Eclipse-based environment by using
the established plugin installation system of the Eclipse platform. It allows you to connect to a running SAP JVM
and analyze collected profiling data in a graphical manner. The profiler plug-in provides a new perspective similar
to the Debug and Java perspectives.
A number of profiling traces can be enabled or disabled at any point in time, resulting in snapshots of profiling
information for the exact points of interest. The SAP JVM Profiler helps with the analysis of this information and
provides views of the collected data with comprehensive filtering and navigation facilities.
● Memory Allocation Analysis – investigates the memory consumption of your Java application and finds
allocation hotspots
● Performance Analysis – investigates the runtime performance of your application and finds expensive Java
methods
The SAP JVM provides comprehensive statistics about threads, memory consumption, garbage collection, and
I/O activities. For solving issues with SAP JVM, a number of traces may be enabled on demand. They provide
additional information and insight into integral VM parts such as the class loading system, the garbage collection
algorithms, and I/O. The traces in the SAP JVM can be switched on and off using the jvmmon tool, which is part of
the SAP JVM delivery.
Further Information
Thread dumps not only contain a Java execution stack trace, but also information about monitors or locks,
consumed CPU and memory resources, I/O activities, and a description of communication partners (in the case
of network communication).
Related Information
SAP Cloud Platform applications run on a modular and lightweight application runtime container where they can
use the platform services APIs and Java EE APIs according to standard patterns.
Depending on the runtime type and corresponding SDK you are using, SAP Cloud Platform provides the following
profiles of the application runtime container:
Java Web | Some of the standard Java EE 6 APIs (Servlet, JSP, EL, WebSocket) | Java 7 (default); 6 | If you need a small standalone Java Web container.
Java Web Tomcat 7 | Some of the standard Java EE 6 APIs (Servlet, JSP, EL, WebSocket) | Java 7 (default); 8 | If you need a simplified Java Web application runtime container based on Apache Tomcat 7.
Java EE 6 Web Profile | Java EE 6 Web Profile APIs | Java 7 (default); 6 | If you need an application runtime container together with all containers defined by the Java EE 6 Web Profile specification.
Java Web Tomcat 8 | Some of the standard Java EE 7 APIs (Servlet, JSP, EL, WebSocket) | Java 8 (default); 7 | If you need a simplified Java Web application runtime container based on Apache Tomcat 8.
For the complete list of supported APIs, see Supported Java APIs [page 1031].
Related Information
Java Web is a minimalistic application runtime container in SAP Cloud Platform that offers a subset of Java EE
standard APIs typical for a standalone Java Web Container.
This runtime container is suitable for SAP Cloud Platform applications that need a small, low memory consuming
container. The default supported Java version for Java Web is 6; you can also use Java version 7.
The current version 1 of the Java Web application runtime container (neo-java-web 1.x) provides implementation
for the following set of Java Specification Requests (JSRs):
Table 332:
Specification | Version | JSR
Development Process
The Java Web enables you to easily create your applications for SAP Cloud Platform utilizing standard defined
APIs suitable for a Web Container in addition to SAP Cloud Platform services APIs.
For more information, see SAP Cloud Platform SDK Java Docs.
Related Information
Java Web Apache Tomcat 7 (Java Web Tomcat 7) is the next simplified edition of Java Web application runtime
container providing optimized performance particularly in the area of startup time and memory footprint.
This container leverages Apache Tomcat 7 without modifications and adds a subset of SAP Cloud Platform
services client APIs. Applications running in the Apache Tomcat 7 container are portable on Java Web Tomcat 7.
Existing applications running on the first edition of the Java Web application runtime container can run unmodified
on Java Web Tomcat 7, provided they share the same set of enabled APIs.
The default supported Java version for Java Web Tomcat 7 is 7; you can also use Java version 8.
The current version of the Java Web Tomcat 7 application runtime container (neo-java-web 2.x) provides an
implementation of the following Java Specification Requests (JSRs):
Table 333:
Specification | Version | JSR
The Java EE 6 Web Profile application runtime container of SAP Cloud Platform is Java EE 6 Web Profile certified.
The lightweight Web Profile of Java EE 6 is targeted at next-generation Web applications. Developers benefit from
productivity improvements with more annotations and less XML configuration, more Plain Old Java Objects
(POJOs), and simplified packaging.
The default supported Java version for Java EE 6 Web Profile is 7; you can also use Java version 6.
The current version 2 of Java EE 6 Web Profile application runtime container (neo-javaee6-wp 2.x) provides
implementation for the following Java Specification Requests (JSRs):
Contexts and Dependency Injection for the Java EE platform | 1.0 | JSR 299
For more information about the differences between EJB 3.1 and EJB 3.1 Lite, see the Java EE 6 specification, JSR
318: Enterprise JavaBeans, section 21.1.
The Java EE 6 Web Profile enables you to easily create your applications for SAP Cloud Platform.
For more information, see Using Java EE 6 Web Profile [page 1036].
Related Information
Java EE at a Glance
Java Web Apache Tomcat 8 (Java Web Tomcat 8) is the next edition of the Java Web application runtime
container that has all characteristics and features of its predecessor Java Web Tomcat 7.
This container leverages Apache Tomcat 8.5 Web container without modifications and also adds the already
established set of SAP Cloud Platform services client APIs. Applications running in the Apache Tomcat 8.5 Web
container are portable to Java Web Tomcat 8. Existing applications running in the Java Web and Java Web Tomcat 7
application runtime containers can run unmodified in Java Web Tomcat 8, provided they share the same set of
enabled APIs.
Restriction
The HTTP/2 protocol is not supported on SAP Cloud Platform.
The default supported Java version for Java Web Tomcat 8 is 8; you can also use Java version 7.
The current version of the Java Web Tomcat 8 application runtime container (neo-java-web 3.x) provides an
implementation of the following Java Specification Requests (JSRs):
Table 334:
Specification | Version | JSR
The following subset of APIs of SAP Cloud Platform services are available within Java Web Tomcat 8: document
service APIs, mail service APIs, connectivity service APIs (destination configuration and authentication header
provider), persistence service JDBC APIs, and security APIs.
A compute unit comprises the virtualized hardware resources used by an SAP Cloud Platform application.
After being deployed to the cloud, the application is hosted on a compute unit with certain central processing unit
(CPU), main memory, disk space and an installed OS.
SAP Cloud Platform offers four standard sizes of compute units according to the provided resources.
Depending on their needs, customers can choose from the following compute unit configurations:
Table 335:
Compute Unit Configuration | Size | Parameter Value
The third column in the table shows what value of the -z or --size parameter you need to use for a console
command.
Note
For developer accounts, only the Lite edition is available. So on the trial landscape, you can run only one
application at a time.
For customer accounts, all sizes of compute units are available. During deployment, customers can specify the
compute unit on which they want their application to run.
Related Information
The basic tools of the SAP Cloud Platform development environment, the SAP Cloud Platform Tools, comprise the
SAP Cloud Platform Tools for Java and the SAP Cloud Platform SDK.
The focus of the SAP Cloud Platform Tools for Java is on the development process and enabling the use of the
Eclipse IDE for all necessary tasks: creating development projects, deploying applications locally and in the cloud,
and local debugging. It makes development for the platform convenient and straightforward and allows short
development turn-around times.
The SDK, on the other hand, contains everything you need to work with the platform, including a local server
runtime and a set of command line tools. The command line capabilities enable development outside of the
Eclipse IDE and allow modern build tools, such as Apache Maven, to be used to professionally produce Web
applications for the cloud. The command line is particularly important for setting up and automating a headless
continuous build and test process.
Related Information
When you develop applications that run on SAP Cloud Platform, you can rely on certain Java EE standard APIs.
These APIs are provided with the runtime of the platform. They are based on standards and are backward
compatible as defined in the Java EE specifications. Currently, you can make use of the APIs listed below:
● javax.activation
● org.slf4j.Logger
● org.slf4j.LoggerFactory
If you are using the SAP Cloud Platform SDK for Java EE 6 WebProfile, you also have access to the following Java
EE APIs:
● javax.faces
● javax.validation
● javax.inject
● javax.ejb
● javax.interceptor
● javax.transaction
● javax.enterprise
● javax.decorator
The table below summarizes the Java Specification Requests (JSRs) supported in the two SAP Cloud Platform
SDKs for Java.
Table 336:
Supported Java EE 6 Specification | SAP Cloud Platform SDK for Java Web | SAP Cloud Platform SDK for Java EE 6 WebProfile
The table below summarizes the Java Specification Requests (JSRs) supported in the SAP Cloud Platform SDK
for Java Web Tomcat 8.
Table 337:
Supported Java EE 7 Specification | SAP Cloud Platform SDK for Java Web Tomcat 8
In addition to the standard APIs, SAP Cloud Platform offers platform-specific services that define their own APIs
that can be used from the SAP Cloud Platform SDK. The APIs of the platform-specific services are listed in the
table below.
The SAP Cloud Platform SDK contains a platform API folder for compiling your Web applications. It contains all
standard and third-party API JARs (for legal reasons provided "as is", meaning they may also contain non-API
content on which you should not rely) as well as the platform APIs of the SAP Cloud Platform services.
You can add additional (pure Java) application programming frameworks or libraries and use them in your
applications. For example, you can include Spring Framework in the application (in its application archive) and use
it in the application. In such cases, the application should handle all dependencies on such additional frameworks
or libraries, and you are responsible for the whole assembly of these frameworks or libraries inside the
application itself.
SAP Cloud Platform also provides numerous other capabilities and APIs that might be accessible for applications.
However, you should rely only on the APIs listed above.
Related Information
You can develop applications for SAP Cloud Platform just like for any application server. SAP Cloud Platform
applications can be based on the Java EE Web application model. You can use programming logic that is well
known to you, benefit from the advantages of Java EE for building the application frontend, and embed the
services provided by the platform inside your application.
Development Environment
SAP Cloud Platform development environment is designed and built to optimize the process of development and
deployment.
It includes the SAP Cloud Platform Tools for Java, which integrate the standard capabilities of Eclipse IDE with
some extended features that allow you to deploy on the cloud. You can choose among the following types of SAP
Cloud Platform SDK for Java:
● SDK for Java Web - provides support for some of the standard Java EE 6 APIs (Servlet, JSP, EL, Websocket)
● SDK for Java Web Tomcat 7 - provides support for some of the standard Java EE 6 APIs (Servlet, JSP, EL,
Websocket)
● SDK for Java EE 6 Web Profile - certified to support Java EE 6 Web Profile APIs
● SDK for Java Web Tomcat 8 - provides support for some of the standard Java EE 7 APIs (Servlet, JSP, EL,
Websocket)
In the Eclipse IDE, create a simple HelloWorld application with basic functional logic wrapped in a Dynamic Web
Project and a Servlet. You can do this with any of the SDKs.
For more information, see Creating a HelloWorld Application [page 56] or watch the Creating a HelloWorld
application video tutorial.
SAP Cloud Platform is Java EE 6 Web Profile certified so you can extend the basic functionality of your application
with Java EE 6 Web Profile technologies. If you are working with the SDK for Java EE 6 Web Profile, you can equip
the basic application with additional Java EE features, such as EJB, CDI, JTA.
For more information, see Using Java EE 6 Web Profile [page 1036].
Create a fully-fledged application benefiting from the capabilities and services provided by SAP Cloud Platform. In
your application, you can choose to use:
● Authentication [page 1324] - by default, SAP Cloud Platform is configured to use SAP ID service as identity
provider (IdP), as specified in SAML 2.0. You can configure trust to your custom IdP, to provide access to the
cloud using your own user database.
● UI development toolkit for HTML5 (SAPUI5) - use the platform's official UI framework.
● Persistence Service [page 793] - provide relational persistence with JPA and JDBC via our persistence
service.
● Connectivity Service [page 313] - use it to connect Web applications to the Internet, make on-demand to on-
premise connections to Java and ABAP on-premise systems, and configure destinations to send and fetch e-
mail.
● Document Service [page 609] - use the service to store unstructured or semistructured data in your
application.
● Logging [page 1168] - implement a logging API if you want to have logs produced at runtime.
● Cloud Environment Variables [page 1040] - use system environment variables that identify the runtime
environment of the application.
Deploy
First, deploy and test the ready application on the local runtime and then make it available on SAP Cloud Platform.
For more information, see Deploying and Updating Applications [page 1043].
You can speed up your development by applying and activating new changes on the already running application.
Use the hot-update command.
Manage all applications deployed in your account from a single dedicated user interface - SAP Cloud Platform
cockpit.
Monitor
SAP Cloud Platform is certified to support Java EE 6 Web Profile. If you want to use it in your applications, you
have to develop them using SAP Cloud Platform SDK for Java EE 6 Web Profile.
Prerequisites
● You have downloaded SAP Cloud Platform Tools. Make sure you download the SDK for Java EE 6 Web Profile.
For more information, see Setting Up the Tools and SDK [page 43].
● If you have a previously installed version of SAP Cloud Platform Tools, make sure you update them to the
latest version. For more information, see Updating the Tools and SDK [page 53].
● The SDK brings all required libraries. In case you get an error with the import of a library, for example,
javax.ejb.LocalBean, make sure you have set up the SAP Cloud Platform Tools and the Web Project
correctly.
Procedure
Create a servlet
1. On the HelloWorld project node, open the context menu and choose New > Servlet . The Create Servlet
window opens.
2. Enter hello as the Java package and HelloWorldServlet as the class name. Choose Next.
3. In the URL mappings field, select /HelloWorldServlet and choose Edit.
4. In the Pattern field, replace the current value with just "/" and choose OK. In this way, the servlet will be
mapped as a welcome page for the application.
5. Choose Finish to generate the servlet. The Java Editor with the HelloWorldServlet opens.
6. Change the doGet(…) method so that it contains:
response.getWriter().println("Hello World!");
Create a JSP
1. On the HelloWorld project node, open the context menu and choose New > JSP file . The New JSP file window
opens.
2. Enter the name of your JSP file and choose Finish.
1. On the HelloWorld project node, choose File > New > Other > EJB > Session Bean . Choose Next.
2. In the Create EJB session bean wizard, enter test as the Java package and HelloWorldBean as the name of
your new class. Choose Finish.
package test;
import javax.ejb.LocalBean;
import javax.ejb.Stateless;
/**
* Session Bean implementation class HelloWorldBean
*/
@Stateless
@LocalBean
public class HelloWorldBean {
/**
* Default constructor.
*/
public HelloWorldBean() {
// TODO Auto-generated constructor stub
}
}
To use the session bean in another class, inject it with the @EJB annotation, for example:
@EJB
private HelloWorldBean helloWorldBean;
<%@page import="javax.naming.InitialContext"%>
<%@ page language="java" contentType="text/html; charset=ISO-8859-1"
pageEncoding="ISO-8859-1"%>
<!DOCTYPE html PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN" "http://
www.w3.org/TR/html4/loose.dtd">
<%@ page import = "test.HelloWorldBean" %>
<%@ page import = "javax.ejb.EJB" %>
<html>
<head>
<meta http-equiv="Content-Type" content="text/html; charset=ISO-8859-1">
<title>Insert title here</title>
You can test the application on the local runtime and then deploy it on SAP Cloud Platform.
For more information, see Deploying an Application on SAP HANA Cloud [page 1043].
You can now use JPA together with EJB to persist data in your application.
For more information, see Adding Container-Managed Persistence with JPA (Java EE 6 Web Profile SDK) [page
795]
Overview
SAP Cloud Platform runtime sets several system environment variables that identify the runtime environment of
the application. Using them, an application can get information about its application name, account and URL, as
well as information about the landscape it is deployed on and landscape-specific parameters. All SAP Cloud
Platform specific environment variable names start with the common prefix HC_.
The following SAP Cloud Platform environment variables are set to the runtime environment of the application:
Table 338:
Key | Sample Value | Description
HC_REGION | EU_1 / US_1 | Region of the data center where the application is deployed
SAP Cloud Platform environment variables are accessed as standard system environment variables of the Java
process - for example via System.getenv("...").
Note
Environment variables are not set when deploying locally with the console client or Eclipse IDE.
Example
<html>
<head>
<title>Display SAP Cloud Platform Environment Platform variables</title>
</head>
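The variables can also be read anywhere in plain Java via System.getenv. A minimal sketch, which falls back to a default value when running locally (where HC_ variables are not set; the fallback value is illustrative):

```java
// Reading SAP Cloud Platform environment variables. All platform-specific
// variables start with the HC_ prefix; when deploying locally they are not
// set, so the code must handle null.
public class CloudEnv {

    public static String regionOrDefault(String fallback) {
        String region = System.getenv("HC_REGION");   // e.g. EU_1 or US_1 in the cloud
        return region != null ? region : fallback;
    }

    public static void main(String[] args) {
        System.out.println("Region: " + regionOrDefault("local"));
    }
}
```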
Related Information
Prerequisites
In the Eclipse IDE you have developed or imported a Java application that is running on a cloud server.
Context
In the Server editor of your local Eclipse IDE, you can use the Advanced tab and the Environment Variables table to
add, edit, select and remove environment variables for the cloud virtual machine.
Note
The Advanced tab is only available for cloud servers.
Procedure
1. In the Eclipse IDE go to the Servers view and select the cloud server you want to configure.
2. Double-click it to open the Server Editor.
3. Open the Advanced tab.
4. (Optional) Add an environment variable.
Note
The changes made by someone else will be loaded once you reopen the editor.
Table 339:
Content
Deploying Applications
After you have created your Java application, you need to deploy and run it on SAP Cloud Platform. We
recommend that you first deploy and test your application on the local runtime before deploying it on the cloud.
Use the tool that best fits your scenario:
Eclipse IDE | Deploying Locally from Eclipse IDE [page 1045] | You have developed your application using SAP Cloud Platform Tools in the Eclipse IDE.
Console Client | Deploying Locally with the Console Client [page 1051] | You want to deploy an application in the form of one or more WAR files.
Cockpit | Deploying on the Cloud with the Cockpit [page 1055] | You want to deploy an application in the form of a WAR file.
Application properties are configured during deployment with a set of parameters. To update these properties,
use one of the following approaches:
Table 341:
Console Client | deploy [page 166] | Deploy the application with new WAR file(s) and make changes to the configuration parameters. Command: deploy
Console Client | set-application-property [page 269] | Change some of the application properties you defined during deployment without redeploying the application binaries. Command: set-application-property
Cockpit | Deploying on the Cloud with the Cockpit [page 1055] | Update the application with a new WAR file or make changes to the configuration parameters.
If you want to quickly see your changes while developing an application, use the following approaches:
Table 342:
● Eclipse IDE, Deploying on the Cloud from Eclipse IDE [page 1047]: Republish the application. The cloud server is not restarted, and only the application binaries are updated.
● Console Client, hot-update [page 210]: Apply and activate changes. Use the command to speed up development and not for updating productive applications. Command: hot-update
If you are an application operator and need to deploy a new version of a productive application or perform
maintenance, you can choose among several approaches:
Table 343:
● Zero Downtime, Updating Applications with Zero Downtime [page 1160] and rolling-update [page 264]: Use when the new application version is backward compatible with the old version. Deploy a new version of the application and disable and enable processes in a rolling manner, or do it at one go with the rolling-update command.
● Planned Downtime (Maintenance Mode), Using Maintenance Mode for Planned Downtimes [page 1162]: Use when the new application version is backward incompatible. Enable maintenance mode for the time of the planned downtime.
● Soft Shutdown, Soft Shutdown [page 1165]: Supports zero downtime and planned downtime scenarios. Disable the application or individual processes in order to shut down the application or processes gracefully.
Related Information
Follow the steps below to deploy your application on a local SAP Cloud Platform server.
Prerequisites
● You have set up your runtime environment in Eclipse IDE. For more information, see Setting Up the Runtime
Environment [page 48].
Procedure
1. Open the servlet in the Java editor and, from its context menu, choose Run As > Run on Server.
2. The Run On Server window opens. Make sure that the Manually define a new server option is selected.
3. Expand the SAP node and, as a server type, choose between:
○ Java Web Server
○ Java Web Tomcat 7 Server
○ Java Web Tomcat 8 Server
○ Java EE 6 Web Profile Server
4. Choose Finish.
5. The local runtime starts up in the background and your application is installed, started and ready to serve
requests.
Note
If this is the first server you run in your IDE workspace, a Servers folder is created and appears in the
Project Explorer navigation tree. It contains configurable folders and files that you can use, for example, to
change your HTTP or JMX port.
6. The Internal Web Browser opens in the editor area and shows the application output.
7. Optional: If you try to delete a server with an application running on it, a dialog appears allowing you to choose
whether to only undeploy the application, or to completely delete it together with its configuration.
Next Steps
After you have deployed your application, you can additionally check your server information. In the Servers view,
double-click on the local server and open the Overview tab. Depending on your local runtime, the following data is
available:
● If you have run your application in Java Web or Java EE 6 Web Profile runtime, you see the standard
server data (General Info, Publishing, Timeouts, Ports).
● If you have run your application in Java Web Tomcat 7 or Java Web Tomcat 8 runtime, you see some
additional Tomcat sections, default Tomcat ports, and an extra Modules page, which shows a list of all
applications deployed by you.
Related Information
Prerequisites
● You have set up your runtime environment in the Eclipse IDE. For more information, see Setting Up the
Runtime Environment [page 48].
● You have developed or imported a Java Web application in the Eclipse IDE. For more information, see Developing
Java Applications [page 1034] or Importing Samples as Eclipse Projects [page 62].
● You have an active Developer Account. For more information, see Signing Up for a Developer Account [page
17].
Procedure
1. Open the servlet in the Java editor and, from its context menu, choose Run As > Run on Server.
2. The Run On Server dialog box appears. Make sure that the Manually define a new server option is selected.
Note
○ If you have previously entered an account and user name for your landscape host, these names are offered in dropdown lists.
○ A dropdown list is also displayed for previously entered landscape hosts.
○ If you select the Save password box, the password entered for a given user name is remembered and kept in the secure store.
9. Choose Finish. This triggers the publishing of the application on SAP Cloud Platform.
Note
You cannot deploy multiple applications on the same application process. Deployment of a second
application on the same application process overwrites any previous deployments. If you want to deploy
several applications, deploy each of them on a separate application process.
Next Steps
● If you need to redeploy your application during development, choose Run on Server or Publish again. The
cloud server is not restarted; only the application binaries are updated.
You can see all applications deployed in your account within the Eclipse Tools, or change the current runtime. For
more information, see Advanced Application Configurations [page 1049].
Related Information
SAP Cloud Platform Tools provide options for advanced server and application configurations from the Eclipse
IDE, as well as direct reference to the cockpit UI.
Prerequisites
You have developed or imported a Java Web application in the Eclipse IDE. For more information, see Developing
Java Applications [page 1034] or Importing Samples as Eclipse Projects [page 62].
Alternatives
There are alternative ways to open the cockpit (1) and the application URLs (2).
1. In the Servers view, open the context menu and choose Show In > Cockpit.
2. In the Servers view, expand the cloud server node and, from the context menu of the relevant application,
choose Application URL > Open. The URL opens in a new browser tab.
Tip
● If the application is published on the cloud server, besides the Open option you can also choose Copy to
Clipboard, which only copies the application URL.
● If the application has not been published but only added to the server, Copy to Clipboard will be disabled.
The Open option though will display a dialog which allows you to publish and then open the application in a
browser.
● If the cloud server is not in Started status, both Application URL options will be disabled.
After you have deployed your application, you can check and also change the server runtime. Proceed as follows:
Note
When you change the Runtime value so that it differs from the one in Runtime in use, after saving your
change, a link appears prompting you to republish the server.
From the server editor, you can configure additional application parameters, such as compute unit size, JVM
arguments, and others.
Note
If you make your configurations on a started server, the changes take effect after a server restart. You
can use the Restart link to apply the changes.
Related Information
The console client allows you to install a server runtime in a local folder and use it to deploy your application.
Procedure
neo install-local
3. To start the local server, enter the following command and press ENTER :
neo start-local
This starts a local server instance in the default local server directory <SDK installation folder>/
server. Again, use the following optional command argument to specify another directory:
4. To deploy your application, enter the following command as shown in the example below and press ENTER :
This deploys the WAR file on the local server instance. If necessary, specify another directory as in step 3.
5. To check that your application is running, open a browser and enter the URL, for example:
http://localhost:8080/hello-world
Note
The HTTP port is normally 8080. However, the exact port configurations used for your local server,
including the HTTP port, are displayed on the console screen when you install and start the local server.
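The check in step 5 can also be scripted. The sketch below issues an HTTP GET and verifies the status code; a throwaway JDK HttpServer stands in for the local runtime so the example is self-contained (it uses an ephemeral port rather than the usual 8080). Against a real local server you would target http://localhost:8080/hello-world instead:

```java
import com.sun.net.httpserver.HttpServer;
import java.io.IOException;
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.InetSocketAddress;
import java.net.URL;

public class LocalServerCheck {
    // Returns the HTTP status code for a GET on the given URL.
    static int responseCode(String url) throws IOException {
        HttpURLConnection conn = (HttpURLConnection) new URL(url).openConnection();
        conn.setRequestMethod("GET");
        int code = conn.getResponseCode();
        conn.disconnect();
        return code;
    }

    // Starts a stub "hello-world" server on an ephemeral port, performs the
    // check against it, shuts the stub down, and returns the observed code.
    static int checkStub() throws IOException {
        HttpServer server = HttpServer.create(new InetSocketAddress(0), 0);
        server.createContext("/hello-world", exchange -> {
            byte[] body = "Hello World".getBytes();
            exchange.sendResponseHeaders(200, body.length);
            try (OutputStream out = exchange.getResponseBody()) {
                out.write(body);
            }
        });
        server.start();
        try {
            int port = server.getAddress().getPort();
            return responseCode("http://localhost:" + port + "/hello-world");
        } finally {
            server.stop(0);
        }
    }

    public static void main(String[] args) throws IOException {
        System.out.println("HTTP " + checkStub());
    }
}
```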
6. To stop the local server instance, enter the following command from the <SDK installation folder>/
tools folder and press ENTER :
neo stop-local
Related Information
Deploying an application publishes it to SAP Cloud Platform. During deploy, you can define various specifics of the
deployed application using the deploy command optional parameters.
Prerequisites
● You have downloaded and configured SAP Cloud Platform console client. For more information, see Setting
Up the Console Client [page 52]
● Depending on your account type, deploy the application on the respective landscape. For more information,
see Landscape Hosts [page 41]
Procedure
1. In the open command line console, execute the neo deploy command with the appropriate parameters.
You can define the parameters of commands directly in the command line as in the example below, or in the
properties file. For more information, see Using the Console Client [page 102].
2. Enter your password if requested.
3. Press ENTER and deployment of your application will start. If deployment fails, check if you have defined the
parameters correctly.
Note
The size of an application deployed on SAP Cloud Platform can be up to 1.5 GB. If the application is
packaged as a WAR file, the size of the unzipped content is taken into account.
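Because the limit applies to the unzipped content, you can verify a WAR before deploying by summing the uncompressed sizes of its entries. A minimal sketch; the archive here is a generated placeholder rather than a real WAR, and the 1.5 GB threshold is encoded as bytes:

```java
import java.io.File;
import java.io.IOException;
import java.nio.file.Files;
import java.util.Enumeration;
import java.util.zip.ZipEntry;
import java.util.zip.ZipFile;
import java.util.zip.ZipOutputStream;

public class WarSizeCheck {
    static final long LIMIT_BYTES = 1_610_612_736L; // 1.5 GB

    // Sums the uncompressed sizes of all entries in a zip/WAR archive.
    static long uncompressedSize(File war) throws IOException {
        long total = 0;
        try (ZipFile zip = new ZipFile(war)) {
            Enumeration<? extends ZipEntry> entries = zip.entries();
            while (entries.hasMoreElements()) {
                long size = entries.nextElement().getSize();
                if (size > 0) total += size; // -1 means the size is unknown
            }
        }
        return total;
    }

    // Creates a small throwaway archive so the sketch is self-contained.
    static File sampleArchive(int bytes) throws IOException {
        File f = File.createTempFile("sample", ".war");
        f.deleteOnExit();
        try (ZipOutputStream out = new ZipOutputStream(Files.newOutputStream(f.toPath()))) {
            out.putNextEntry(new ZipEntry("payload.bin"));
            out.write(new byte[bytes]);
            out.closeEntry();
        }
        return f;
    }

    public static void main(String[] args) throws IOException {
        long bytes = uncompressedSize(sampleArchive(1024));
        System.out.println("Uncompressed size: " + bytes + " bytes");
        System.out.println("Within limit: " + (bytes <= LIMIT_BYTES));
    }
}
```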
Example
Next Steps
To make your deployed application available for requests, you need to start it by executing the neo start
command.
Then, you can manage the application lifecycle (check the status; stop; restart; undeploy) using dedicated
console client commands.
By using the delta deployment option, you can apply changes in a deployed application faster without uploading
the entire set of files to SAP Cloud Platform.
Context
The delta parameter allows you to deploy only the changes between the provided source and the previously
deployed content - new content is added; missing content is deleted; existing content is updated if there are
changes. The delta parameter is available in two commands – deploy and hot-update.
Note
Use it to save time for development purposes only. For updating productive applications, deploy the whole
application.
Procedure
To upload only the changed files from the application WARs, use one of the two approaches:
Related Information
The cockpit allows you to deploy Java applications as WAR files and supports a number of deployment options for
configuring the application.
Procedure
○ Start: Start the application to activate its URL and make the application available to your end users.
○ Close: Simply close the dialog box if you do not want to start the application immediately.
You can update or redeploy the application whenever required. To do this, choose Update application, which opens
the same dialog box in update mode. You can update the application with a new WAR file or change the
configuration parameters.
To change the name of a deployed application, deploy a new application under the desired name, and delete the
application whose name you want to change.
Related Information
After you have created a Web application and tested it locally, you may want to inspect its runtime behavior and
state by debugging the application on SAP Cloud Platform. The local and cloud scenarios are analogous.
Context
The debugger enables you to detect and diagnose errors in your application. It allows you to control the execution
of your program by setting breakpoints, suspending threads, stepping through the code, and examining the
contents of the variables. You can debug a servlet or a JSP file on a SAP Cloud Platform server without losing the
state of your application.
Note
Currently, it is only possible to debug Web applications in SAP Cloud Platform that have exactly one application
process (node).
Tasks
Related Information
In this section, you can learn how to debug a Web application on SAP Cloud Platform local runtime in the Eclipse
IDE.
Prerequisites
You have developed a Web application using the Eclipse IDE. For more information, see Developing Java
Applications [page 1034].
Procedure
Related Information
In this section, you can learn how to debug a Web application on SAP Cloud Platform depending on whether you
have deployed it in the Eclipse IDE or in the console client.
Prerequisites
● You have developed a Web application using the Eclipse IDE. For more information, see Developing Java
Applications [page 1034].
● You have deployed your Web application either using the Eclipse IDE or via the console client. For more
information, see Deploying and Updating Applications [page 1043].
Note
Debugging can be enabled if there is only one VM started for the requested account or application.
Procedure
Note
Since cloud servers are running on SAP JVM, switching modes does not require restart and happens in real
time.
1. Deploy your Web application in the console client and start it.
2. Go to the Eclipse IDE, open the Servers view, and choose New > Server.
3. Choose SAP > SAP Cloud Platform.
4. Enter the correct landscape host, according to your location. (For more information, see Landscape Hosts
[page 41].)
6. On page SAP Cloud Platform Application in the wizard, provide the same application data which you have
previously entered in the console client.
7. Choose Finish.
8. A new server is created and attached to your application. It should be in Started mode if your application is
started.
9. From the server's context menu, choose Restart in Debug. (This should not restart the application.)
10. Request your application.
11. Open the Debug perspective for your server.
12. Set breakpoints in your application.
Note
● If you have deployed an application on a running server, we recommend that you do not use Debug on
Server or Run on Server, as this will republish (redeploy) your application.
● Also, bear in mind that if you have deployed two or more WAR files, only the debugged one will remain after
that.
● If the sources are not attached (for example, the application is deployed from the CLI or you need to attach
additional sources), you may attach them as described here.
With SAP Cloud Platform you can develop and run multitenant (tenant-aware) applications, that is, applications
running on a shared compute unit that can be used by multiple consumers (tenants). Each consumer accesses
the application through a dedicated URL.
You can read about the specifics of each platform service with regards to multitenancy in the respective section
below:
● Isolate data
● Save resources by sharing them among tenants
● Perform updates efficiently, that is, in one step
Currently, you can trigger the subscription via the console client for testing purposes. For more information, see
Providing Subscriptions to Provider Applications for Testing [page 35].
When an application is accessed via a consumer-specific URL, the application environment is able to identify the
current consumer. The application developer can use the tenant context API to retrieve and distinguish the tenant
ID, which is the unique ID of the consumer. When developing tenant-aware applications, data isolation for different
consumers is essential. It can be achieved by distinguishing the requests based on the tenant ID. There are also
some specifics in the usage of different services when you develop your multitenant application.
● Shared in-memory data such as Java static fields will be available to all tenants
● Avoid any possibility that an application user can execute custom code in the application JVM, as this may
give them access to other tenants' data
● Avoid any possibility that an application user can access a file system, as this may give them access to other
tenants' data.
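The isolation guidelines above boil down to keying every read and write by the tenant ID. The sketch below illustrates the pattern; in a real application the ID would come from the tenant context API (tenantContext.getTenant().getId()) rather than being passed in directly, and the store is deliberately an instance field, not a shared static field:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Request-level data isolation: every access is keyed by the tenant ID,
// so one consumer can never see another consumer's data. Kept as an
// instance field in line with the warning about shared static fields.
public class TenantIsolatedStore {
    private final Map<String, Map<String, String>> byTenant = new ConcurrentHashMap<>();

    public void put(String tenantId, String key, String value) {
        byTenant.computeIfAbsent(tenantId, id -> new ConcurrentHashMap<>()).put(key, value);
    }

    public String get(String tenantId, String key) {
        Map<String, String> data = byTenant.get(tenantId);
        return (data == null) ? null : data.get(key);
    }

    public static void main(String[] args) {
        TenantIsolatedStore store = new TenantIsolatedStore();
        store.put("tenant-a", "greeting", "Hello A");
        store.put("tenant-b", "greeting", "Hello B");
        System.out.println(store.get("tenant-a", "greeting")); // Hello A
        System.out.println(store.get("tenant-b", "greeting")); // Hello B
    }
}
```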
For more information, see Multitenancy in SAP Cloud Platform Connectivity [page 464].
Persistence Service
Multitenant applications on SAP Cloud Platform have two approaches available to separate the data of the
different consumers:
Document Service
The document service automatically separates the documents according to the current consumer of the
application. When an application connects to a document repository, the document service client automatically
propagates the current consumer of the application to the document service. The document service uses this
information to separate the documents within the repository. If an application wants to connect to the data of a
dedicated consumer instead of the current consumer (for example in a background process), the application can
specify the tenant ID of the corresponding consumer when connecting to the document repository.
The Keystore Service provides a repository for cryptographic keys and certificates to tenant-aware applications
hosted on SAP Cloud Platform. Because the tenant defines a specific configuration of an application, you can
configure an application to use different keys and certificates for different tenants.
For more information about the Keystore Service, see Keys and Certificates [page 1358].
Access rights for tenant-aware applications are usually maintained by the application consumer, not by the
application provider. An application provider may predefine roles in the web.xml when developing the application.
By default, predefined roles are shared with all application consumers, but could also be made visible only to the
provider account. Once a consumer is subscribed to this application, shared predefined roles become visible in
the cockpit of the application consumer. Then, the application consumer can assign users to these roles to give
them access to the provider application. In addition, application consumer accounts can add their own custom
roles to the subscribed application. Custom roles are visible only within the application consumer account where
they are created.
For more information about managing application roles, see Managing Roles [page 1394].
Trust configuration regarding authentication with SAML2.0 protocol is maintained by the application consumer.
For more information about configuring trust, see ID Federation with the Corporate Identity Provider [page 1406].
Related Information
Context
● Application Provider - an organizational unit that uses SAP Cloud Platform to build, run and sell
applications to customers, that is, the application consumers.
To use SAP Cloud Platform, both the application provider and the application consumer need to have an account.
The account is the central organizational unit in SAP Cloud Platform. It is the central entry point to SAP
Cloud Platform for both application providers and consumers. It may consist of a set of applications, a set of
account members and an account-specific configuration.
Account members are users who must be registered via the SAP ID service. Account members may have different
privileges regarding the operations which are possible for an account (for example, account administration,
deploy/start/stop applications). Note that the account belongs to an organization and not to an individual.
Nevertheless, the interaction with the account is performed by individuals, the members of the account. The
account-specific configuration allows application providers and application consumers to adapt their account to
their specific environment and needs.
An application resides in exactly one account, the hosting account. It is uniquely identified by the account name
and the application name. Applications consume SAP Cloud Platform resources, for instance, compute units,
structured and unstructured storage and outgoing bandwidth. Costs for consumed resources are billed to the
owner of the hosting account, who can be an application provider, an application consumer, or both.
Related Information
Overview
In a provider-managed application scenario, each application consumer gets its own access URL for the provider
application. To be able to use an application with a consumer-specific URL, the consumer must be subscribed to
the provider application. When an application is launched via a consumer-specific URL, the tenant runtime is able
to identify the current consumer of the application. The tenant runtime provides an API to retrieve the current
application consumer. Each application consumer is identified by a unique ID which is called tenantId.
Since the information about the current consumer is extracted from the request URL, the tenant runtime can only
provide a tenant ID if the current thread has been started via an HTTP request. In case the current thread was not
started via an HTTP request (for example, a background process), the tenant context API only returns a tenant if
the current application instance has been started for a dedicated consumer. If the current application instance is
shared between multiple consumers and the thread was not started via an HTTP request, the tenant runtime
throws an exception.
Note
The tenant context API is of interest to application providers only.
The tenant context API is provided by the com.sap.cloud.account.TenantContext class. To consume it, declare the following resource reference in the web.xml:
<resource-ref>
<res-ref-name>TenantContext</res-ref-name>
<res-type>com.sap.cloud.account.TenantContext</res-type>
</resource-ref>
To get an instance of the TenantContext API, use resource injection the following way:
@Resource
private TenantContext tenantContext;
Note
When you use WebSockets, the TenantId and AccountName parameters, provided by the TenantContext
API, are correct only during processing of WebSocket handshake request. This is because what follows after
Account API
The Account API provides methods to get account ID, account display name, and attributes. For more
information, see the Javadoc.
Sample Code
Sample Code
Related Information
The following tutorials describe end-to-end scenarios with multitenant demo applications:
Table 345:
● To create a general demo application (servlet), see Exemplary Provider Application (Servlet) [page 1068].
● To create a general demo application (JSP file), see Exemplary Provider Application (JSP) [page 1071].
● To create a connectivity demo application, see Creating a Multitenant Connectivity Application [page 1073].
This tutorial explains how to create a sample application which makes use of the multitenancy concept. That is,
you can enable your application to be consumed by users, members of a tenant which is subscribed to this
application in a multitenant flavor.
Prerequisites
● You have downloaded and set up your Eclipse IDE, SAP Cloud Platform Tools for Java, and SAP HANA SDK. For
more information, see Setting Up the Development Environment [page 43].
● You are an application provider. For more information, see Multitenancy Roles [page 1063].
Procedure
5. Choose Finish so that the TenantContext.java servlet is created and opened in the Java editor.
6. Go to /TenantContextApp/WebContent/WEB-INF and open the web.xml file.
7. Choose the Source tab page.
8. Add the following code block to the <web-app> element:
<resource-ref>
<res-ref-name>TenantContext</res-ref-name>
<res-type>com.sap.cloud.account.TenantContext</res-type>
</resource-ref>
9. Replace the entire servlet class with the following sample code:
package tenantcontext.demo;
import java.io.IOException;
import java.io.PrintWriter;
import javax.naming.Context;
import javax.naming.InitialContext;
import javax.servlet.ServletException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
import com.sap.cloud.account.TenantContext;
/**
* Servlet implementation class TenantContextServlet
*/
public class TenantContextServlet extends HttpServlet {
private static final long serialVersionUID = 1L;
/**
* @see HttpServlet#HttpServlet()
*/
public TenantContextServlet() {
super();
}
/**
* @see HttpServlet#doGet(HttpServletRequest request, HttpServletResponse
response)
*/
protected void doGet(HttpServletRequest request, HttpServletResponse
response) throws ServletException, IOException {
try {
    InitialContext ctx = new InitialContext();
    Context envCtx = (Context) ctx.lookup("java:comp/env");
    TenantContext tenantContext = (TenantContext) envCtx.lookup("TenantContext");
    // Retrieve the current tenant ID and write it to the response
    String currentTenantId = tenantContext.getTenant().getId();
    PrintWriter writer = response.getWriter();
    writer.println("The application was accessed on behalf of a tenant with an ID: " + currentTenantId);
} catch (Exception e) {
    throw new ServletException(e);
}
}
}
10. Save the Java editor. The project compiles without errors.
You have successfully created a Web application containing a sample servlet and connectivity functionality.
To learn how to deploy your application, see Deploying on the Cloud from Eclipse IDE [page 1047].
Result
You have created a sample application that can be requested in a browser. Its output depends on the tenant
context.
Next Steps
● To test the access to your multitenant application, go to a browser and request it on behalf of your account.
Use the following URL pattern: https://
<application_name><provider_account>.<landscape_host>/<application_path>
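The URL pattern can also be assembled programmatically, for example in a smoke test. All values in the usage example below are placeholders, not real application or account names:

```java
// Builds a consumer-specific access URL following the pattern
// https://<application_name><provider_account>.<landscape_host>/<application_path>.
public class ConsumerUrl {
    static String providerUrl(String app, String providerAccount,
                              String landscapeHost, String path) {
        return "https://" + app + providerAccount + "." + landscapeHost + "/" + path;
    }

    public static void main(String[] args) {
        // Placeholder values for illustration only.
        System.out.println(providerUrl("tenantcontextapp", "p0123456trial",
                "hanatrial.ondemand.com", "TenantContextApp"));
    }
}
```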
Related Information
This tutorial explains how to create a sample application which makes use of the multitenancy concept. That is,
you can enable your application to be consumed by users, members of a tenant which is subscribed to this
application in a multitenant flavor.
Prerequisites
● You have downloaded and set up your Eclipse IDE, SAP Cloud Platform Tools for Java, and SAP HANA SDK. For
more information, see Setting Up the Development Environment [page 43].
● You are an application provider. For more information, see Multitenancy Roles [page 1063].
Procedure
<resource-ref>
<res-ref-name>TenantContext</res-ref-name>
<res-type>com.sap.cloud.account.TenantContext</res-type>
</resource-ref>
1. Under the TenantContextApp project node, choose New > JSP File in the context menu.
2. Enter index.jsp as the File name and choose Finish.
3. Open the index.jsp file using the text editor.
4. Replace the entire JSP file content with the following sample code:
<%@page import="javax.naming.InitialContext,javax.naming.Context,com.sap.cloud.account.TenantContext" %>
<%@ page language="java" contentType="text/html; charset=ISO-8859-1"
pageEncoding="ISO-8859-1"%>
<!DOCTYPE html PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN" "http://
www.w3.org/TR/html4/loose.dtd">
<html>
<head>
<meta http-equiv="Content-Type" content="text/html; charset=ISO-8859-1">
<title>SAP Cloud Platform - Tenant Context Demo Application</title>
</head>
<body>
<h2> Welcome to the SAP Cloud Platform Tenant Context demo application</h2>
<br></br>
<%
try {
InitialContext ctx = new InitialContext();
Context envCtx = (Context) ctx.lookup("java:comp/env");
TenantContext tenantContext = (TenantContext) envCtx
.lookup("TenantContext");
String currentTenantId = tenantContext.getTenant().getId();
out.println("<p><font size=\"5\"> The application was accessed on behalf of a tenant with an ID: <b>"
+ currentTenantId + "</b></font></p>");
} catch (Exception e) {
out.println("error at client");
}
%>
</body>
</html>
To learn how to deploy your application, see Deploying on the Cloud from Eclipse IDE [page 1047].
Result
You have successfully created a Web application containing a JSP file and tenant context functionality.
Next Steps
● To test the access to your multitenant application, go to a browser and request it on behalf of your account.
Use the following URL pattern: https://
<application_name><provider_account>.<landscape_host>/<application_path>
● If you want to test the access to your multitenant application on behalf of a consumer account, follow the
steps in page: Consuming a Multitenant Connectivity Application [page 1077]
Related Information
Prerequisites
● You have downloaded and set up your Eclipse IDE, SAP Cloud Platform Tools for Java and SAP HANA SDK.
For more information, see Setting Up the Development Environment [page 43].
● You are an application provider. For more information, see Multitenancy Roles [page 1063].
Context
This tutorial explains how you can create a sample application which is based on the multitenancy concept, makes
use of the connectivity service, and can later be consumed by other users. That is, you can enable your
application to be consumed by users of accounts that are subscribed to it.
The application code is the same as for a standard HelloWorld application consuming the connectivity service, as the latter
manages the multitenancy with no additional actions required by you. The users of the consumer account, which
is subscribed to this application, can access the application using a tenant-specific URL. This would lead the
application to use a tenant-specific destination configuration. For more information, see Multitenancy in SAP
Cloud Platform Connectivity [page 464].
Note
As a provider, you can set your destination configuration on application and account level. They are the default
destination configurations in case a consumer has not configured tenant-specific destination configuration (on
subscription level).
Procedure
<resource-ref>
<res-ref-name>search_engine_destination</res-ref-name>
<res-type>com.sap.core.connectivity.api.http.HttpDestination</res-type>
</resource-ref>
1. Under the MultitenantConnectivity project node, choose New > JSP File in the context menu.
2. Enter index.jsp as the File name and choose Finish.
3. Open the index.jsp file using the text editor.
4. Replace the entire JSP file content with the following sample code:
<%@page import="javax.naming.InitialContext,javax.naming.Context,com.sap.core.connectivity.api.http.HttpDestination,java.util.Arrays"%>
<%@ page language="java" contentType="text/html; charset=ISO-8859-1"
pageEncoding="ISO-8859-1"%>
<!DOCTYPE html PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN" "http://
www.w3.org/TR/html4/loose.dtd">
<html>
<head>
<meta http-equiv="Content-Type" content="text/html; charset=ISO-8859-1">
<title>SAP Cloud Platform - Multitenant Connectivity Demo Application</title>
</head>
<body>
<h2>Welcome to SAP Cloud Platform - multitenant connectivity demo
application</h2>
<br></br>
<%
try {
Context context = (Context) new InitialContext()
.lookup("java:comp/env");
// In this case you don't need to explicitly use the TenantContext API
// because the Connectivity service handles the tenancy by itself.
// The retrieved HttpDestination object will be tenant-specific.
String destinationName = "search_engine_destination";
HttpDestination destination = (HttpDestination) context
.lookup(destinationName);
out.println("<p><font size=\"5\"> Retrieved destination with name <i>"
+ destination.getName() + "</i> and URI <b>"
+ destination.getURI() + "</b></font></p>");
} catch (Exception e) {
out.println("<b>An exception has been thrown: <i>" + e.getMessage()
+ "</i></b>");
out.println(Arrays.toString(e.getStackTrace()));
}
%>
</body>
</html>
You have successfully created a Web application containing a sample JSP file and consuming the connectivity
service via looking up a destination configuration.
To learn how to deploy your application, see Deploying on the Cloud from Eclipse IDE [page 1047].
You, as application provider, can configure a default destination, which is then used at runtime when the
application is requested in the context of the provider account. In this case, the URL used to access the
application is not tenant-specific.
Example:
Name=search_engine_destination
URL=https://www.google.com
Type=HTTP
ProxyType=Internet
Authentication=NoAuthentication
TrustAll=true
For more information on how to define a destination for provider account, see:
Result
You have created a sample application which can be requested in a browser. Its output depends on the tenant
name.
Next Steps
● To test the access to your multitenant application, go to a browser and request it on behalf of your account.
Use the following URL pattern: https://
<application_name><provider_account>.<landscape_host>/<application_path>
● If you want to test the access to your multitenant application on behalf of a consumer account, follow the
steps in page: Consuming a Multitenant Connectivity Application [page 1077]
Related Information
Prerequisites
Note
This tutorial assumes that your account is subscribed to the following exemplary application (deployed in a
provider account): Creating a Multitenant Connectivity Application [page 1073]
Context
This tutorial explains how you can consume a sample connectivity application based on the multitenancy concept.
That is, you are a member of an account which is subscribed to applications provided by other accounts. The
output of the application you are about to consume displays a welcome page showing the URI of the tenant-
specific destination configuration. This means that the administrator of your consumer account may have
previously set a tenant-specific configuration for this application. If no such configuration has been set, the
application uses a default one, set by the administrator of the provider account.
Users of a consumer account that is subscribed to an application can access the application using a tenant-
specific URL. This causes the application to use a tenant-specific destination configuration. For more
information, see Multitenancy in SAP Cloud Platform Connectivity [page 464].
Note
As a consumer, you can set a tenant-specific destination configuration on subscription level.
Procedure
You can consume a provider application if your account is subscribed to it. In this case, administrators of your
consumer account can configure a tenant-specific destination configuration, which can later be used by the
provider application.
To illustrate the tenant-specific consumption, the URL used in this example is different from the one in the
exemplary provider application tutorial.
Name=search_engine_destination
URL=http://www.yahoo.com
Type=HTTP
ProxyType=Internet
Authentication=NoAuthentication
TrustAll=true
Tip
The destination name depends on the provider application.
For more information on how to configure a destination for a provider account, see:
Go to a browser and request the application on behalf of your account. Use the following URL pattern:
https://<application_name><provider_account>-<consumer_account>.<landscape_host>/<application_path>
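The tenant-specific URL pattern above can likewise be assembled mechanically from its parts. A hypothetical sketch (not a platform API; all names are illustrative):

```java
public class ConsumerTenantUrl {
    /**
     * Builds the consumer-context URL following the pattern
     * https://<application_name><provider_account>-<consumer_account>.<landscape_host>/<application_path>.
     * Parameter names mirror the placeholders in the pattern above.
     */
    public static String build(String applicationName, String providerAccount,
                               String consumerAccount, String landscapeHost,
                               String applicationPath) {
        return "https://" + applicationName + providerAccount + "-" + consumerAccount
                + "." + landscapeHost + "/" + applicationPath;
    }

    public static void main(String[] args) {
        System.out.println(build("myapp", "provider", "consumer", "hana.ondemand.com", "index.jsp"));
        // https://myappprovider-consumer.hana.ondemand.com/index.jsp
    }
}
```
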
Result
The application is requested in a browser. Its output is relevant to your tenant-specific destination configuration.
Related Information
With SAP Cloud Platform, you can use the SAP HANA development tools to create comprehensive analytical
models and build applications with SAP HANA's programmatic interfaces and integrated development
environment.
Related Information
You can open your SAP HANA XS applications in a Web browser directly from the cockpit.
Procedure
1. Log on to the cockpit, select an account and choose Applications > HANA XS Applications.
2. In the HANA XS Applications table, click the application URL link to launch the application.
Note
If an HTTP status 404 (not found) error is shown, bear in mind that the cockpit displays only the root of an
application’s URL path. This means that you might have to either:
○ Add the application name to the URL address in the browser, for example, hello.xsjs.
○ Specify a default file in the application's .xsaccess file, for example:
{
"exposed" : true,
"default_file": "hello.xsjs"
}
Related Information
SAP Cloud Platform provides SAP HANA database systems designed for developing with SAP HANA in a
productive environment.
Prerequisites
You have an account on the productive landscape. For more information, see Purchasing a Customer Account
[page 18].
Performance/Scalability Recommendation
Before going live with an application for which a significant number of users and/or significant load is expected,
you should do a performance load test. This is best practice in the industry and we strongly recommend it for
HANA XS applications.
SAP Cloud Platform creates four users that it requires to manage the database: SYSTEM, BKPMON, CERTADM,
and PSADBA. These users are reserved for use by SAP Cloud Platform.
Caution
Do not delete or deactivate these users or change their passwords.
Each productive SAP HANA database system has a technical database user NEO_<guid>, which is created
automatically when the database system is assigned to an account. A technical database user is not the same as a
normal database user and is provided purely as a mechanism for enabling schema access.
Caution
Take care not to delete or change the technical database user in any way (password, roles, permissions, and so
on).
Features
A productive SAP HANA database system provides you with a database system reserved for your exclusive use,
allowing you to work with SAP HANA as with an on-premise system. You have full control of user management
and can use a range of tools. There are some obvious restrictions, such as no access to the operating system. See
the overview below for details about available features:
Note
Some of the links below point to the Administration, Developer, or Security Guide for the latest release of SAP
HANA. Refer to the SAP Cloud Platform Release Notes to check which HANA SPS is supported by SAP
Cloud Platform. You can find the link to Guides for earlier releases of SAP HANA in the Related Information
section at the bottom of the page.
Table 346:
Feature Description
Connectivity destinations: Connectivity for SAP HANA XS (Productive Version) [page 466]; Maintaining HTTP Destinations
Monitoring: Configuring Availability Checks for SAP HANA XS Applications from the Cockpit [page 1088]
Launch SAP HANA XS applications: Launching SAP HANA XS Applications [page 1079]
Note
For security reasons, some of the configuration properties of the SAP HANA database systems running on the
productive landscape cannot be changed.
Related Information
Developer Guide for SAP HANA Studio for the latest release of SAP HANA
Developer Guide for SAP HANA Web Workbench for the latest release of SAP HANA
Administration Guide for the latest release of SAP HANA
Security Guide for the latest release of SAP HANA
Guides for earlier releases of SAP HANA
As an account administrator on SAP Cloud Platform, you can create your own SAP HANA database user and
then set up user accounts in SAP HANA for the members of your development team.
Create your own SAP HANA database user using the database user feature in the cockpit. For more information,
see Creating a Database Administrator User [page 1084].
You will be assigned a database user with extensive rights, including system administration and monitoring. The
user ID is identical to your SCN user, and the password shown is an initial password and must be changed when
you log onto an SAP HANA system for the first time. You are responsible for choosing a strong password and
keeping it secure.
Your database user is initially assigned a minimal set of roles, which includes HCP_PUBLIC, HCP_SYSTEM, and
PUBLIC. The HCP_SYSTEM role contains the USER ADMIN and ROLE ADMIN system privileges, allowing you to
create database users and grant additional roles to your own and other database users.
Note
The initial set of roles also contains the sap.hana.xs.ide.roles::Developer role, allowing you to work with the SAP
HANA Web-based Development Workbench, but not the SAP HANA XS Administration tool.
Note
There may be some roles that you cannot assign to your own database user. In this case, we recommend that
you create a second database user (for example, ROLE_GRANTOR) and assign it the HCP_SYSTEM role. Then
log onto the SAP HANA system with that user and grant your database user the roles you require.
Open the cockpit on the productive landscape and assign the required development team members to your
account. For more information, see Managing Members [page 26].
In the SAP HANA system, create database users for the members of your account and assign them the required
developer roles. For more information, see the following:
Related Information
As an account administrator, you can use the database user feature provided in the cockpit to create your own
database user for your SAP HANA database.
Procedure
All database systems available in the account are listed with their details, including the database type, version,
memory size, state, and the number of associated databases.
3. To select a database system, click its name in the list.
The overview of the database system shows details, including the database version and state, and the number
of associated databases.
4. Choose Databases in the navigation area.
5. To go to the overview for a database, click its name in the list.
6. In the Development Tools section, click Database User.
A message confirms that you do not yet have a database user.
7. Choose Create User.
Your user (identical to your SCN user) and initial password are displayed. Change the initial password when
you first log on to an SAP HANA system, for example the SAP HANA Web-based Development Workbench.
Note
Your database user is assigned a set of permissions for administering the HANA database system,
including user and role administration. For security reasons, only the role that provides access to the SAP
HANA Web-based Development Workbench is assigned by default. To be able to use other HANA tools, you
must assign the required roles to your database user.
8. To log on to the SAP HANA Web-based Development Workbench and change your initial password now
(recommended), copy your initial password and then close the dialog box.
You do not have to change your initial password immediately. You can open the dialog box again later to
display both your database user and initial password. Since this poses a potential security risk, however, you
are strongly advised to change your password as soon as possible.
9. In the Development Tools section, click SAP HANA Web-based Development Workbench.
10. On the SAP HANA logon screen, enter your database user and initial password.
11. Change your password when prompted. You are responsible for choosing a strong password and keeping it
secure. SAP cannot provide forgotten passwords.
Next Steps
In the SAP HANA system, you can now create database users for the members of your account and assign them
the required developer roles.
Related Information
SAP Cloud Platform supports the following Web-based tools: SAP HANA Web-based Development Workbench,
SAP HANA Cockpit, and SAP HANA XS Administration Tool.
Prerequisites
● You have a database user. See Guidelines for Creating Database Users [page 1083].
● Your database user is assigned the roles required for the relevant tool. See Roles Required for Web-based
Tools [page 1087].
You can access the SAP HANA Web-based tools using the cockpit or the tool URLs. The following table
summarizes what each supported tool does and how to access it.
Table 347: Supported Web-Based Tools for SAP HANA Development and Administration
SAP HANA Web-based Development Workbench
Includes an all-purpose editor tool that enables you to maintain and run design-time objects in the SAP HANA repository. It does not support modeling activities.
Access: Development Tools section: SAP HANA Web-based Development Workbench, or https://<database instance><account>.<landscape host>/sap/hana/xs/ide/
SAP HANA Cockpit
Provides you with a single point of access to a range of Web-based applications for the online administration of SAP HANA. See Administration Guide for the latest release of SAP HANA or Administration Guides for earlier releases of SAP HANA.
Note
It is not possible to use the SAP HANA database lifecycle manager (HDBLCM) with the cockpit.
Access: Administration Tools section: SAP HANA Cockpit, or https://<database instance><account>.<landscape host>/sap/hana/xs/admin/cockpit
SAP HANA XS Administration Tool
Allows you, for example, to configure security options and HTTP destinations. See Administration Guide for the latest release of SAP HANA or Administration Guides for earlier releases of SAP HANA.
Access: Administration Tools section: SAP HANA XS Administration, or https://<database instance><account>.<landscape host>/sap/hana/xs/admin/
Related Information
To use the SAP HANA Web-based tools, you require specific roles.
Table 348:
Role Description
sap.hana.xs.ide.roles::EditorDeveloper (or parent role sap.hana.xs.ide.roles::Developer): Use the Editor component of the SAP HANA Web-based Development Workbench.
sap.hana.xs.admin.roles::TrustStoreViewer: Read-only access to the trust store, which contains the server's root certificate or the certificate of the certification authority that signed the server's certificate.
sap.hana.xs.admin.roles::TrustStoreAdministrator: Full access to the SAP HANA XS application trust store to manage the certificates required to start SAP HANA XS applications.
Related Information
In the cockpit, you can configure availability checks for the SAP HANA XS applications running on your productive
SAP HANA database system.
Procedure
1. In the cockpit, choose Applications > HANA XS Applications in the navigation area of the account and
open the application list of the productive SAP HANA database system.
2. Select an application from the list and in the Application Details panel choose the Create Check button.
3. In the dialog that appears, select the URL you want to monitor from the dropdown list and fill in values for
warning and critical thresholds if you want them to be different from the default ones. Choose Save.
Your availability check is created. You can view your application's latest HTTP response code and response
time, as well as a status icon showing whether your application is up or down. If you want to receive alerts
when your application is down, you need to configure alert recipients from the console client. For more
information, see the Subscribe recipients to notification alerts step in Configuring Availability Checks for SAP
HANA XS Applications from the Console Client [page 1088].
Related Information
In the console client you can configure an availability check for your SAP HANA XS application and subscribe
recipients to receive alert e-mail notifications when it is down or responds slowly.
Prerequisites
Procedure
1. Open the command prompt and navigate to the folder containing neo.bat/sh (<SDK installation
folder>/tools).
2. Create the availability check.
Execute:
○ Replace "myaccount", "myhana:myhanaxsapp" and "myuser" with the names of your account,
productive SAP HANA database name and application, and user respectively.
○ The availability URL (/heartbeat.xsjs in this case) is not provided by default by the platform. Replace it
with a suitable URL that is already exposed by your SAP HANA XS application or create it. Keep in mind
the limitations for availability URLs. For more information, see Availability Checks [page 1204].
Note
In case you want to create an availability check for a protected SAP HANA XS application, you need to
create a sub-package, in which to create an .xsaccess file with the following content:
{
"exposed": true,
"authentication": null,
"authorization": null
}
○ The check will trigger warnings "-W 4" if the response time is above 4 seconds and critical alerts "-C 6" if
the response time is above 6 seconds or the application is not available.
○ Use the respective landscape host for your account type.
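The warning and critical thresholds described above (-W 4, -C 6) amount to a three-way classification of the measured response time. A sketch of that logic; the class and method names are illustrative, not part of the console client:

```java
public class AvailabilityCheck {
    public enum Status { OK, WARNING, CRITICAL }

    /**
     * Classifies a response time against warning/critical thresholds,
     * mirroring the semantics of -W and -C described above. An unavailable
     * application (modeled here as a negative response time) is CRITICAL.
     */
    public static Status classify(double responseSeconds, double warnAt, double criticalAt) {
        if (responseSeconds < 0 || responseSeconds > criticalAt) {
            return Status.CRITICAL;
        }
        if (responseSeconds > warnAt) {
            return Status.WARNING;
        }
        return Status.OK;
    }

    public static void main(String[] args) {
        System.out.println(classify(3.2, 4, 6)); // OK
        System.out.println(classify(5.0, 4, 6)); // WARNING
        System.out.println(classify(7.5, 4, 6)); // CRITICAL
    }
}
```
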
3. Subscribe recipients to notification alerts.
Execute:
○ Replace "myaccount", "myhana" and "myuser" with the names of your account, productive SAP HANA
database name, and user respectively.
○ Replace "alert-recipients@example.com" with the email addresses that you want to receive alerts.
Separate email addresses with commas. We recommend that you use distribution lists rather than individual email addresses.
Note
Setting an alert recipient for an application sends all alerts for this application to the configured email
addresses. Once the recipients are subscribed, you do not need to subscribe them again after every new
check you configure. You can also set the recipients at account level by skipping the -b parameter, so that
they receive alerts for all applications and for all the metrics you are monitoring.
Related Information
Configuring Availability Checks for SAP HANA XS Applications from the Cockpit [page 1088]
Landscape Hosts [page 41]
Availability Checks Commands
list-availability-check [page 214]
create-availability-check [page 128]
delete-availability-check [page 147]
Alert Recipients Commands
list-alert-recipients [page 217]
set-alert-recipients [page 267]
clear-alert-recipients [page 122]
In the cockpit, you can view the current metrics of a selected database system to get information about its health
state. You can also view the metrics history of a productive database to examine the performance trends of your
database over different intervals of time or investigate the reasons that have led to problems with it. You can view
the metrics for all types of databases.
Procedure
1. In the cockpit, navigate to the Database Systems page either by choosing Persistence from the navigation
area or from the Overview page.
All database systems available in the selected account are listed with their details, including the database
version and state, and the number of associated databases.
2. Select the entry for the relevant database system in the list.
3. Choose Monitoring from the navigation area to get detailed information about the current state and the
history of metrics for a selected productive database system.
The Metrics History panel shows the metrics history of your database. You can view the graphics of the
different metrics and zoom in by clicking and dragging horizontally or vertically to get further details. If you
zoom in on a graphic horizontally, all other graphics zoom to the same level of detail. Press Shift and drag
to scroll all graphics simultaneously to the left or right. Double-click to zoom out to the initial state.
You can select different time intervals for viewing the metrics. Depending on the selected interval, data is
aggregated as follows:
○ last 12 or 24 hours - data is collected each minute
○ last 7 days - data is aggregated from the average values for 10 minutes
○ last 30 days - data is aggregated from the average values for an hour
You can also select a custom time interval when you are viewing the history of metrics. Note that if you select
an interval in which the database has not been running, the graphics will not contain any data.
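The aggregation rules above can be summarized as a mapping from the selected viewing interval to a sampling granularity. A hypothetical sketch (class and method names are illustrative, not a platform API):

```java
public class MetricsGranularity {
    /**
     * Returns the aggregation granularity in minutes for a selected
     * viewing interval, mirroring the rules listed above:
     * up to 24 hours -> per-minute samples; up to 7 days -> averages
     * over 10 minutes; up to 30 days -> averages over an hour.
     */
    public static int granularityMinutes(int intervalHours) {
        if (intervalHours <= 24) return 1;       // last 12 or 24 hours
        if (intervalHours <= 7 * 24) return 10;  // last 7 days
        return 60;                               // last 30 days
    }

    public static void main(String[] args) {
        System.out.println(granularityMinutes(12));      // 1
        System.out.println(granularityMinutes(7 * 24));  // 10
        System.out.println(granularityMinutes(30 * 24)); // 60
    }
}
```
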
Related Information
You can debug SAP HANA server-side JavaScript with the SAP HANA Tools plugin for Eclipse only as of release
7.4. If you are working with a lower plugin version, use the SAP HANA Web-based Development Workbench to
perform your debugging tasks.
Prerequisites
1. Log on to the cockpit on the production landscape and choose Applications > HANA XS Applications.
Note
We recommend that you use the Google Chrome browser.
2. In the HANA XS Applications table, select the application to display its details.
3. In the Application Details section, click Open in Web-based Development Workbench. Note that the SAP HANA
Web-based Development Workbench can also be opened directly at the following URL:
https://<database instance><account>.<host>/sap/hana/xs/ide/
4. Depending on whether you want to debug a .xsjs file or a more complex scenario (set a breakpoint in
a .xsjs file and run another file), do the following:
○ .xsjs file:
1. Set the breakpoints and then choose the Run on server (F8) button.
○ Complex scenario:
1. Set the breakpoint in the .xsjs file you want to debug.
2. Open a new tab in the browser and then open the other file on this tab by entering its URL (https://
<database instance><account>.<host>/<package>/<file>).
Note
If you synchronously call the .xsjs file in which you have set a breakpoint and then open the other file
in the SAP HANA Web-based Development Workbench and execute it by choosing the Run on server
(F8) button, you will block your debugging session. You will then need to terminate the session by
closing the SAP HANA Web-based Development Workbench tab.
Note
If you leave your debugging session idle for some time once you have started debugging, your session will
time out. An error in the WebSocket connection to the backend will be reported and your WebSocket
connection for debugging will be closed. If this occurs, reopen the SAP HANA Web-based Development
Workbench and start another debugging session.
Valid for SAP HANA instances running SP8 or lower only. Use this procedure to configure your HANA XS
applications to use Security Assertion Markup Language (SAML) 2.0 authentication. This is necessary if you want
to implement identity federation with your corporate identity providers.
Prerequisites
● You have the SAP HANA Tools installed in your Eclipse IDE. See Installing SAP HANA Tools for Eclipse [page
68].
● You have a user on the productive landscape of SAP Cloud Platform. See Purchasing a Customer Account
[page 18]
● You have a SAP HANA database user on the productive landscape of SAP Cloud Platform. See Creating a
Database Administrator User [page 1084].
● You have a corporate identity provider (IdP) configured with its own trust settings (key pair and certificates).
See the identity provider vendor’s documentation for more information.
Note
To establish successful trust with SAP HANA XS Engine on SAP Cloud Platform, the identity provider must
have the following features:
○ Supports unsigned SAML requests
○ Sends its signing certificate when sending a SAML response
● You have a SAP HANA XS engine configured with its key pair and certificates. See the SAP HANA
Administration Guide.
Context
Restriction
This procedure is valid for productive HANA instances running SAP HANA SP8 or lower. For SAP HANA SP9
instances, see the Configure SSO with SAML Authentication for SAP HANA XS Applications section in the SAP
HANA Administration Guide.
Use this procedure to configure your HANA XS applications to use Security Assertion Markup Language (SAML)
2.0 authentication. This is necessary if you want to implement identity federation with your corporate identity
providers. See Identity and Access Management [page 1318].
Procedure
1. Download the identity provider metadata. See the identity provider vendor’s documentation for more
information.
2. Store the IdP signing certificate in a valid PEM or DER file, enclosing the certificate content in -----BEGIN
CERTIFICATE----- and -----END CERTIFICATE-----.
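Step 2 requires the signing certificate content to be enclosed in the standard PEM markers. A minimal sketch of producing such a file body from base64 certificate content (the class and method names are illustrative):

```java
public class PemWriter {
    /**
     * Wraps base64-encoded certificate content in the PEM markers
     * required in step 2 above.
     */
    public static String toPem(String base64Content) {
        return "-----BEGIN CERTIFICATE-----\n"
                + base64Content.trim() + "\n"
                + "-----END CERTIFICATE-----\n";
    }

    public static void main(String[] args) {
        // The argument here is a placeholder, not a real certificate.
        System.out.print(toPem("MIIB<base64-certificate-content>"));
    }
}
```
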
3. Upload the PEM or DER file to SAP Cloud Platform using the upload-hanaxs-certificates command.
Tip
If you get an error message while uploading the certificates, try to fix the problem using the
reconcile-hanaxs-certificates command. See reconcile-hanaxs-certificates [page 250].
4. Restart the SAP HANA XS service so the upload takes effect. This is done using the restart-hana console
command.
Procedure
○ sap.hana.xs.admin.roles::HTTPDestAdministrator
○ sap.hana.xs.admin.roles::HTTPDestViewer
○ sap.hana.xs.admin.roles::RuntimeConfAdministrator
○ sap.hana.xs.admin.roles::RuntimeConfViewer
CREATE SAML PROVIDER <idp name> WITH SUBJECT '<certificate subject>' ISSUER
'<certificate issuer>' ENABLE USER CREATION;
Tip
Get the certificate subject and issuer from the IdP certificate. If you don’t have direct access to the
certificate, use a proper file viewer tool to view the certificate contents from the PEM or DER file.
Note
With this statement, you also enable automatic creation of a corresponding SAP HANA database user at
first login. Otherwise, you have to create the user manually if it does not exist. See the SAP HANA
Administration Guide.
b. To create a destination:
<uppercase idp name> Create a short name for this IdP in uppercase.
Note
You need to configure all four endpoints, executing all four statements.
5. Open the SAP HANA XS Administration tool (see SAP HANA Administration Guide). For the required
applications, configure SAML authentication to use this identity provider:
a. Select the application.
b. Go to the SAML section.
c. Choose Identity Provider and set this identity provider as value.
Procedure
1. Download the SAP HANA service provider metadata from the following URL:
https://<SAP HANA url>/sap/hana/xs/saml/info.xscfunc
Tip
You can get the SAP HANA URL from the HANA XS Applications section in the cockpit.
2. Import the SAP HANA service provider metadata in the identity provider. See the identity provider vendor’s
documentation for more information.
4. Test
Open the required application and check if SAML authentication with the required identity provider works. You
should be redirected to the identity provider and prompted to log in. After successful login, you are shown the
application.
To be able to call SAP Cloud Platform services from SAP HANA XS applications, you need to assign a predefined
trust store to the HTTP destination that defines the connection details for a specific service. The trust store
contains the certificate required to authenticate the calling application.
Prerequisites
In the SAP HANA repository, you have created the HTTP destination (.xshttpdest file) to the service to be
called. The file must have the .xshttpdest extension and be located in the same package as the application that
uses it or in one of the application's subpackages.
Procedure
Related Information
SAP Cloud Platform provides the option to create and use SAP HANA databases in a trial environment.
You can use SAP HANA multitenant database containers (MDC, tenant database) on the trial landscape. Creating
trial SAP HANA instances is no longer possible because the support for the trial SAP HANA instances has ended.
You can continue to use the existing trial SAP HANA instances for a limited period of time.
For more information about using tenant databases in the trial landscape, see Overview of Database Systems and
Databases [page 843].
Caution
You should not use SAP Cloud Platform beta features in productive accounts, as any productive use of the beta
functionality is at the customer's own risk, and SAP shall not be liable for errors or damages caused by the use
of beta features.
Related Information
SAP Cloud Platform, streaming analytics is an SAP HANA component that provides the ability to build
applications that process streams of incoming event data in real time, and to collect and act on incoming
information.
Streaming analytics is ideally suited for situations where data arrives as events happen, and where there is value
in collecting, understanding, and acting on this data right away. Some examples of data sources that produce
streams of events in real time include:
● Sensors
● Smart devices
● Web sites (click streams)
● IT systems (logs)
● Financial markets (prices)
● Social media
You can actively monitor data arriving from various sources, and set alerts to be triggered when immediate
attention is warranted. For example, you can alert operations staff to imminent equipment failure, or target
marketing offers to customers based on context.
Caution
SAP Cloud Platform, streaming analytics is the cloud-based version of the on-premise product, SAP HANA
smart data streaming. Any references to "smart data streaming" refer to components located outside the SAP
Cloud Platform. Smart data streaming documentation fully applies to streaming analytics, unless otherwise
stated in this section, or in a smart data streaming topic.
Restrictions
● You must have an SAP HANA instance with a minimum size of 256GB associated with your SAP Cloud
Platform account. If you are using SP 10, it must be at least revision 102.04; if you are using SP 11, it must be
at least revision 112.05.
● For an SAP HANA SP 12 instance, SAP Cloud Platform, streaming analytics must be at least SP 11 revision
112.08, or SP 12 revision 122.07.
● Any on-premise smart data streaming components must be the same version as the streaming server on the
SAP Cloud Platform.
● SAP Cloud Platform, streaming analytics only supports single-tenant databases. You cannot use any version
of streaming analytics with a multi-tenant SAP HANA database on the SAP Cloud Platform.
● You can connect to streaming analytics on the SAP Cloud Platform using only one of two methods: through
the Streaming Web Service, or through the Web Services Provider (using REST connections). Each one is
responsible for different tasks. See Streaming Analytics Connectivity [page 1106].
● The Streaming Web Service and the Web Services Provider are preconfigured for you during setup. You can
customize their configuration properties through the SAP HANA cockpit. However, you cannot change the
preconfigured port numbers, as connections to the SAP Cloud Platform would otherwise no longer work.
● The Web Services Provider uses REST connections. In this implementation, it does not accept SOAP
requests.
● Only certain adapters can connect from an on-premise environment to the streaming analytics component.
See Adapters [page 1108] for more information.
● The streaming analytics web server does not support guaranteed delivery. If a project stops or rejects a
message for any reason, the message is not delivered, and there is no indication that the message is lost.
● Log stores are currently not backed up, and you cannot set a custom path for a log store. In the event of a disk
failure, all data in log stores is lost and cannot be recovered.
Related Information
Before you can enable streaming analytics on the SAP Cloud Platform, you need to create an SDSADMIN
database user.
Procedure
Note
Ensure that you name this user SDSADMIN. If you do not create an SDSADMIN user, the streaming
analytics component cannot be enabled on your account.
Next Steps
Prerequisites
● You have created an SAP Cloud Platform account. See SAP HANA: Getting Started [page 67].
● You have installed and provisioned an SAP HANA instance with a minimum size of 256GB, and associated this
instance with your SAP Cloud Platform account. For SP 10, it must be at least revision 102.04; for SP 11, it
must be at least revision 112.05; for SP 12, it must be at least revision 122.07. This SAP HANA instance cannot
be a multi-tenant system.
● Your account has the Administrator role.
● You have created a database user named SDSADMIN.
Context
SAP Cloud Platform, streaming analytics is an SAP HANA component. You can install it directly through the SAP
Cloud Platform cockpit.
Procedure
1. Contact your account executive to receive an SAP Cloud Platform, streaming analytics license.
2. Enable streaming analytics for your SAP Cloud Platform system. Go to Installing SAP HANA Components
[page 853] for instructions on installing any SAP HANA component.
Next Steps
Download and install [page 1103] smart data streaming for SAP HANA studio, and the smart data streaming
client package.
Although the streaming analytics server is located on the SAP Cloud Platform, you need to download and install
some on-premise components to connect to the server from the client side.
Prerequisites
● You have created an SAP Cloud Platform account. See SAP HANA: Getting Started [page 67].
● You have enabled the streaming analytics component on your SAP Cloud Platform account. See Enabling
Streaming Analytics as an SAP HANA Component [page 1102].
Context
To use SAP Cloud Platform, streaming analytics, you need to download two installation packages:
● The smart data streaming client package, which contains the set of provided adapters for connecting to other
data sources, the SDK, the streaming ODBC driver and driver manager, and the streaming command-line
tools.
● The smart data streaming studio package, which contains the smart data streaming plugin for the SAP HANA
studio. This plugin lets you develop streaming projects visually, or through a CCL editor.
If you do not already have SAP HANA studio installed, you need to download that as well. All of these packages
must correspond to the SAP HANA instance version.
Procedure
1. From the SAP Service Marketplace: Support Packages and Patches page, download:
○ The SAP HANA smart data streaming studio package
○ The SAP HANA smart data streaming client package
○ The SAP HANA studio package
○ The SAPCAR utility
Save all of these downloads in the same folder.
2. Unzip all packages using the SAPCAR utility:
a. From the command line, navigate to the location of the downloaded files.
b. Run the utility once for each package you are unzipping. For example, to unzip the streaming client
package:
<download-directory-filepath> SAPCAR -xvf streaming_client_1.0.112.07_winx64.sar
There are no further steps for the client package; you only need to unzip it. For the studio packages, go on to
the next step.
Next Steps
Related Information
Set the STREAMING_HOME environment variable so that you can use smart data streaming utilities, and run
streaming projects from SAP HANA studio.
Prerequisites
You have downloaded and installed the smart data streaming client package.
Procedure
set STREAMING_HOME=<streaming-client-directory>
Replace <streaming-client-directory> with the path where you saved the smart data streaming client files.
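The set command above is Windows (cmd) syntax; on a Linux or macOS shell the equivalent is export. A sketch with an example path:

```shell
# Windows (cmd):  set STREAMING_HOME=C:\sap\streaming\client
# Linux/macOS (bash) equivalent; the path is an example:
export STREAMING_HOME=/opt/sap/streaming/client

# The streaming command-line tools then resolve under $STREAMING_HOME/bin:
echo "$STREAMING_HOME/bin"    # prints /opt/sap/streaming/client/bin
```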
Next Steps
Create users and roles, and grant them permissions [page 1105].
Control a user's access to and control over streaming analytics by providing the permissions necessary to
complete specific tasks.
Prerequisites
● You have created an SAP Cloud Platform account. See SAP HANA: Getting Started.
● You have installed the SAP HANA smart data streaming on-premise components. See Downloading and
Installing Smart Data Streaming Components [page 1103].
● You have set the STREAMING_HOME environment variable.
Context
You need to grant permissions to users before they can connect to any web services, or use streaming in SAP
HANA studio.
When you enable streaming analytics, a database user named SDSADMIN is created. Use this database
user to perform policy administration functions, such as granting and revoking privileges.
Because the SDSADMIN user is intended to set up user authorization policies, the standard streaming analytics
user authorization commands do not work on SDSADMIN. For example, get users, which lists all users granted
authorization to use streaming analytics, will not list SDSADMIN because it was created at installation time with a
predefined set of permissions.
Procedure
1. Log in to the SAP HANA cockpit, and open the Assign Roles to Users tile.
2. Select the user SDSADMIN and click Edit.
3. Click Assign Roles, and select the following roles:
○ sap.hana.admin.roles::Monitoring
○ sap.hana.streaming.monitoring.roles::Monitoring
○ sap.hana.uis.db::SITE_DESIGNER
4. Click OK, then Save.
5. Start streamingclusteradmin in interactive mode for your SAP HANA instance:
$STREAMING_HOME/bin/streamingclusteradmin --uri=https://<hana-instance-name>wsp<HCP-account-name>.<landscape-name>.hana.ondemand.com:443 --username=SDSADMIN --password=<password>
So, to grant permission to perform all actions, with no restrictions, to the user SDSADMIN, enter:
For more information on granting permissions in streaming analytics, see User Authorization Policies in the
SAP HANA Smart Data Streaming: Security Guide.
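The cluster URI in step 5 is assembled from the instance name, the account name, and the landscape host. A dry-run sketch that builds the URI from variables and prints the resulting command instead of executing it (all values are placeholders):

```shell
# Placeholders: substitute your own instance, account, and landscape values.
HANA_INSTANCE=sdshana
HCP_ACCOUNT=xyz123
LANDSCAPE=us1

URI="https://${HANA_INSTANCE}wsp${HCP_ACCOUNT}.${LANDSCAPE}.hana.ondemand.com:443"

# Dry run: print the invocation instead of executing it.
echo "streamingclusteradmin --uri=$URI --username=SDSADMIN --password=<password>"
```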
Next Steps
Access the Streaming Web Service and the Web Services Provider [page 1106] to begin administering streaming
analytics, and working with streaming projects.
Related Information
Streaming analytics provides two methods for connecting to the SAP Cloud Platform: the Streaming Web Service,
and the Web Services Provider. Each of these methods is responsible for different tasks.
● Use the Streaming Web Service to publish and subscribe to projects, and for connecting the Streaming Web
Output adapter to streaming analytics on the SAP Cloud Platform.
● Use the Web Services Provider REST connections for administrative and lifecycle management tasks, such as
starting and stopping projects, for monitoring project metadata, and for connecting external adapters to
streaming analytics on the SAP Cloud Platform.
Note
The Web Services Provider does not accept SOAP requests.
When setting up your system, enable autostart on both the Streaming Web Service and the Web Services
Provider. This starts the services automatically with the cluster. All other properties are preconfigured. To enable
autostart, and also customize any service configuration properties, log in to the SAP HANA cockpit, and access
the Streaming Cluster Configuration tile. See the Streaming Web Service and Web Services Provider sections in
the SAP HANA Smart Data Streaming: Adapters Guide for more information.
For example, the service URLs for an instance look like this:
● Streaming Web Service: https://SDSHANAswsxyz123.US1.hana.ondemand.com
● Web Services Provider: https://SDSHANAwspxyz123.US1.hana.ondemand.com
Next, Connect to the Web Services Provider [page 1107] from SAP HANA studio.
Related Information
Connect to the SAP Cloud Platform, streaming analytics using the SAP HANA studio Streaming Run-Test
perspective.
Prerequisites
● You have installed SAP HANA studio with the smart data streaming plugin.
● You have granted the necessary permissions to any required users or roles. See Granting Permissions [page
1105].
Context
You can use streaming perspectives in SAP HANA studio to connect to streaming analytics in the cloud. From
here, you can develop and test streaming projects using the visual editor, the CCL editor, or both.
7. From the studio menu, go to Window > Preferences > SAP HANA smart data streaming.
8. In the Default Server URL field, click Change and select the server from the dialog. Click OK.
9. In the Preferences dialog, click Apply, then OK.
1.5.1.3.4.6.2 Adapters
Streaming projects running on the SAP Cloud Platform can use adapters to connect to the local SAP HANA
database.
You can use the following adapters with the streaming analytics on the SAP Cloud Platform:
● SAP HANA Output adapter: use this adapter to direct the output from any stream or window into an SAP
HANA table.
● Database Input adapter: use this adapter to pull data from the SAP HANA database into a streaming project.
Note
These adapters must be associated with the default SAP HANA data service; they cannot be used to connect to
other databases.
You can also use any toolkit adapter in unmanaged mode to connect from an on-premise environment to
streaming analytics on the SAP Cloud Platform. Toolkit adapters are various preconfigured and ready-to-use
adapters that have been created using the adapter toolkit, which comes in the smart data streaming client
package.
All toolkit adapters use the Web Services Provider to connect to streaming analytics on the SAP Cloud Platform.
See Streaming Analytics Connectivity [page 1106].
Streaming lite projects use a specialized adapter: the Streaming Web Output adapter. You can use this adapter to
connect from a streaming lite project to streaming analytics on the SAP Cloud Platform.
The Streaming Web Output adapter uses the Streaming Web Service to connect to the SAP Cloud Platform. See
Streaming Analytics Connectivity [page 1106].
Related Information
You can create projects in the SAP HANA studio, then deploy them to the cloud.
Once you have connected to the Web Services Provider through SAP HANA studio, you can follow the same
process for developing and running streaming projects as for an on-premise installation.
You have a few options for getting yourself acquainted with streaming analytics projects:
● Follow the hands-on tutorial in the SAP HANA Smart Data Streaming: Developer Guide, which teaches you
how to set up and run a simple project.
● Load one of the sample projects provided with the smart data streaming plugin for SAP HANA studio.
● Look through the CCL examples in the SAP HANA Smart Data Streaming: Examples Guide.
● Watch some video tutorials from the SAP HANA Smart Data Streaming playlist on the SAP HANA Academy
YouTube channel.
Related Information
The SAP HANA database and SAP Cloud Platform, streaming analytics are located on the same host, and share
the host's memory resources. Understanding how they use and manage memory resources is crucial to
understanding your own system setup.
When you're setting up an SAP HANA system with streaming analytics, you need to allocate sufficient memory
resources, and set up various parameters to control memory consumption. SAP HANA provides multiple ways to
do this.
The SAP HANA database preallocates a pool of memory from the operating system over time, up to a predefined
global allocation limit. You can change this limit in the global.ini configuration file by editing the
global_allocation_limit parameter. This parameter limits the total amount of memory that can be used by
the database, and by all installed options.
At install time, SAP HANA removes 16GB of memory from the global allocation limit, and grants it to streaming
analytics. You can raise or lower the allotted memory by changing this parameter. When you're considering
memory allocation for both the SAP HANA database and streaming analytics, set this parameter first, before
handling any other memory settings. See Monitoring Memory Usage in the SAP HANA Administration Guide for
more information.
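For reference, the global_allocation_limit parameter lives in the memorymanager section of global.ini (section name as documented for SAP HANA; the limit value below is an example). The sketch writes a local copy only to show the parameter's shape; on SAP Cloud Platform you do not edit the file by hand:

```shell
# Illustrative only: shows the shape of the global_allocation_limit setting.
# On a real system, global.ini lives in the SAP HANA configuration directory
# and is normally changed through the administration tools, not by hand.
cat > global.ini <<'EOF'
[memorymanager]
# Limit (in MB) on the total memory used by the database and installed options
global_allocation_limit = 460000
EOF

grep 'global_allocation_limit' global.ini
```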
Streaming analytics controls memory on a per-project basis. You can customize memory consumption by setting
various project deployment options in SAP HANA studio. The parameters that affect memory use are:
● memory
● memory-reserve
● java-max-heap
When running multiple projects, ensure that the total memory assigned to the individual projects does not
exceed the total memory allocated for streaming analytics.
To learn more about these options, see Project Deployment Options in the SAP HANA Smart Data Streaming:
Configuration and Administration Guide.
If you are using the SAP HANA Output adapter in a streaming project, you should also be aware of its memory
consumption, and size it according to your project's needs. See Performance and Tuning Tips for the SAP HANA
Adapter in the SAP HANA Smart Data Streaming: Adapters Guide for the specific properties you need to set, both
within the adapter configuration, and within your system setup.
Note
Scaling your SAP Cloud Platform, streaming analytics systems is not currently supported.
You can improve workload management by controlling CPU resources in SAP Cloud Platform, streaming
analytics.
If the physical hardware on a host needs to be shared with other processes, it may be useful to assign a set of
cores to an SAP HANA process. To do this, you can assign affinities to logical cores of the hardware. See
Controlling CPU Consumption in the SAP HANA Troubleshooting and Performance Analysis Guide.
Streaming analytics controls CPU resources on a per-project basis. You can assign CPU affinities in a project's
configuration (.ccr) file, which you can do directly through SAP HANA studio. See Processor Affinities in the SAP
HANA Smart Data Streaming: Configuration and Administration Guide.
Related Information
SAP Cloud Platform enables you to easily develop and run lightweight HTML5 applications in a cloud environment.
HTML5 applications on SAP Cloud Platform consist of static resources and can connect to any existing on-premise or on-demand REST services. Compared to a Java application, there is no need to start a dedicated process.
The static content of the HTML5 applications is stored and versioned in Git repositories. Each HTML5 application
has its own Git repository assigned. For offline editing, developers can interact with the Git service directly using
any Git client of their choice, such as EGit or a native Git implementation. A Git repository is created
automatically when a new HTML5 application is created.
Lifecycle operations, for example, creating new HTML5 applications, creating new versions, activating, starting
and stopping or testing applications, can be performed using the SAP Cloud Platform cockpit. As the static
resources are stored in a versioned Git repository, not only the latest version of an application can be tested, but
the complete version history of the application is always available for testing. The version that is delivered to the
end users of that application is called the "active version". Each application can have only one active version.
Related Information
The developer’s guide introduces the development environment for HTML5 applications, describes how to
create applications, and supplies details on the descriptor file that specifies how dedicated application URLs are
handled by the platform.
Related Information
The development workflow is initiated from the SAP Cloud Platform cockpit.
The cockpit provides access to all lifecycle operations for HTML5 applications, for example, creating new
applications, creating new versions, activating a version, and starting or stopping an application.
The SAP Cloud Platform Git service stores the sources of an HTML5 application in a Git repository.
For each HTML5 application there is one Git repository. You can use any Git client to connect to the Git service. On
your development machine you may, for example, use Native Git or Eclipse/EGit. The SAP Web IDE has a built-in
Git client.
Git URL
With this URL, you can access the Git repository using any Git client.
The URL of the Git repository is displayed under Source Location on the detail page of the repository. You can also
view this URL, together with other details of the Git repository such as the latest commits, by choosing HTML5
Applications in the navigation area and then Versioning.
Authentication
Access to the Git service is only granted to authenticated users. Any user who is a member of the account that
contains the HTML5 application and who has the Administrator, Developer, or Support User role has access to the
Git repository.
Permissions
The permitted actions depend on the account member role of the user:
● Read access is granted to any authenticated user with the Administrator, Developer, or Support User role.
● Write access is granted to users with the Administrator or Developer role.
Related Information
Context
For each new application a new Git repository is created automatically. To view detailed information on the Git
repository, including the repository URL and the latest commits, choose Applications > HTML5 Applications
in the navigation area and then Versioning.
Note
To create the HTML5 application in more than one landscape, create the application in each landscape
separately and copy the content to the new Git repository.
Procedure
1. Log on with a user (who is an account member) to the SAP Cloud Platform cockpit.
If you have already created applications using this account, the list of HTML5 applications is displayed.
3. To create a new HTML5 application, choose New Application and enter an application name.
Note
Adhere to the naming convention for application names:
○ The name must contain no more than 30 characters.
○ The name must contain only lowercase alphanumeric characters.
○ The name must start with a letter.
4. Choose Save.
5. Clone the repository to your development environment.
a. To start SAP Web IDE and automatically clone the repository of your app, choose Edit Online at the
end of the table row of your application.
b. On the Clone Repository screen, if prompted, enter your user name and password (your SCN user and
password), and choose Clone.
Results
Context
Procedure
1. Log on with a user (who is an account member) to the SAP Cloud Platform cockpit.
Results
You can now activate this version to make the application available to the end users.
Related Information
For more information on logging on, see the Logon section in Cockpit [page 97].
As end users can only access the active version of an application, you must create and activate a version of your
application.
Context
The developer can activate a single version of an application to make it available to end users.
Procedure
1. Log on with a user (who is an account member) to the SAP Cloud Platform cockpit.
Results
You can now distribute the URL of your application to the end users.
Related Information
For more information on logging on, see the Logon section in Cockpit [page 97].
Using the application descriptor file you can configure the behavior of your HTML5 application.
This descriptor file is named neo-app.json. The file must be created in the root folder of the HTML5 application
repository and must have a valid JSON format.
With the descriptor file you can set the options listed under Related Links.
{
"authenticationMethod": "saml"|"none",
"welcomeFile": "<path to welcome file>",
"logoutPage": "<path to logout page>",
"sendWelcomeFileRedirect": true|false,
"routes": [
{
"path": "<application path to be mapped>",
"target": {
"type": "destination | service | application",
"name": "<name of the destination> | <name of the service> | <name
of the application or subscription>",
"entryPath": "<path prepended to the request path>",
"version": "<version to be referenced. Default is active version.>"
},
"description": "<description>"
}
],
"securityConstraints": [
{
"permission": "<permission name>",
"description": "<permission description>",
"protectedPaths": [
"<path to be secured>",
...
],
"excludedPaths": [
"<path to be excluded>",
...
]
}
],
"cacheControl": [
{
"path": "<optional path of resources to be cached>",
"directive": "none | public | private",
"maxAge": <lifetime in seconds>
}
],
"headerWhiteList": [
"<header1>",
"<header2>",
...
]
}
All paths in the neo-app.json must be specified as plain paths, that is, paths with blanks or other special
characters must include these characters literally. These special characters must be URI-encoded in HTTP
requests.
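Putting the pieces together, a minimal descriptor might look like the sketch below (the welcome file and route values are illustrative examples, not prescribed by the platform); the shell snippet writes the file and checks that the route is present:

```shell
# Illustrative minimal neo-app.json: a welcome file plus a route to the
# SAPUI5 service. All values are examples.
cat > neo-app.json <<'EOF'
{
  "welcomeFile": "/index.html",
  "authenticationMethod": "saml",
  "routes": [
    {
      "path": "/resources",
      "target": {
        "type": "service",
        "name": "sapui5",
        "entryPath": "/resources"
      },
      "description": "SAPUI5"
    }
  ]
}
EOF

grep '"name": "sapui5"' neo-app.json
```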
Related Information
1.5.1.4.1.5.1 Authentication
Authentication is the process of establishing and verifying the identity of a user as a prerequisite for accessing an
application.
By default an HTML5 application is protected with SAML2 authentication, which authenticates the user against
the configured IdP. For more information, see ID Federation with the Corporate Identity Provider [page 1406].
For public applications the authentication can be switched off using the following syntax:
Example
An example configuration that switches off authentication looks like this:
"authenticationMethod": "none"
Note
Even if authentication is disabled, authentication is still required for accessing inactive application versions.
To protect only parts of your application, set the authenticationMethod to "none" and define a security
constraint for the paths you want to protect. If you want to enforce only authentication, but no additional
authorization, define a security constraint without a permission (see Authorization [page 1120]).
After 20 minutes of inactivity user sessions are invalidated. If the user tries to access an invalidated session, SAP
Cloud Platform returns a logon page, where the user must log on again. If you are using SAML as a logon method,
you cannot rely on the response code to find out whether the session has expired because it is either 200 or 302.
To check whether the response requires a new logon, get the com.sap.cloud.security.login HTTP header
and reload the page. For example:
jQuery(document).ajaxComplete(function(e, jqXHR) {
if(jqXHR.getResponseHeader("com.sap.cloud.security.login")) {
alert("Session is expired, page shall be reloaded.");
window.location.reload();
}
})
To enforce authorization for an HTML5 application, permissions can be added to application paths.
In the cockpit, you can create custom roles and assign them to the defined permissions. If a user accesses an
application path that starts with a path defined for a permission, the system checks if the current user is a
member of the assigned role. If no role is assigned to a defined permission, only account members with the
developer permission or administrator permission have access to the protected resource.
Permissions are only effective for the active application version. To protect non-active application versions, the
default permission NonActiveApplicationPermission is defined by the system for every HTML5 application.
This default permission must not be defined in the neo-app.json file but is available automatically for each
HTML5 application.
If only authentication is required for a path, but no authorization, a security constraint can be added without a
permission.
A security constraint applies to the directory and its sub-directories defined in the protectedPaths field, except
for paths that are explicitly excluded in the excludedPaths field. The excludedPaths field supports pattern
matching. If a specified path ends with a slash character (/), all resources in the given directory and its sub-
directories are excluded. You can also specify the path to be excluded using wildcards; for example, the path
**.html excludes all resources ending with .html from the security constraint.
To define a security constraint, use the following format in the neo-app.json file:
...
"securityConstraints": [
{
"permission": "<permission name>",
"description": "<permission description>",
"protectedPaths": [
"<path to be secured>"
],
"excludedPaths": [
"<path to be excluded>",
...
]
}
]
...
Example
An example configuration that restricts a complete application to the accessUserData permission, with the
exception of all paths starting with "/logout", looks like this:
...
"securityConstraints": [
{
"permission": "accessUserData",
"description": "Access User Data",
"protectedPaths": [
"/"
],
"excludedPaths": [
"/logout/**"
]
}
]
Related Information
By default end users can access the application descriptor file of an HTML5 application.
To do so, they enter the URL of the application followed by the filename of the application descriptor in the
browser.
Tip
For security reasons we recommend that you use a permission to protect the application descriptor from being
accessed by end users.
A permission for the application descriptor can be defined by adding the following security constraint to the
application descriptor:
...
"securityConstraints": [
{
"permission": "AccessApplicationDescriptor",
"description": "Access application descriptor",
"protectedPaths": [
"/neo-app.json"
]
}
]
...
After activating the application, a role can be assigned to the new permission in the cockpit to give users with that
role access to the application descriptor via the browser. For more information about how to define permissions
for an HTML5 application, see Authorization [page 1120].
To access SAPUI5 resources in your HTML5 application, configure the SAPUI5 service routing in the application
descriptor file.
To configure the SAPUI5 service routing for your application, map a URL path that your application uses to access
SAPUI5 resources to the SAPUI5 service:
...
"routes": [
{
"path": "<application path to be mapped>",
"target": {
"type": "service",
"name": "sapui5",
"version": "<version>",
"entryPath": "/resources"
},
"description": "<description>"
}
]
...
Example
This configuration example maps all paths starting with /resources to the /resources path of the SAPUI5
library.
...
"routes": [
{
"path": "/resources",
"target": {
"type": "service",
"name": "sapui5",
"entryPath": "/resources"
},
"description": "SAPUI5"
}
]
...
For more information about using SAPUI5 for your application, see SAPUI5: UI Development Toolkit for HTML5.
Example
This configuration example shows how to reference the SAPUI5 version 1.26.6 using the neo-app.json file.
...
"routes": [
{
"path": "/resources",
"target": {
"type": "service",
"name": "sapui5",
"version": "1.26.6",
"entryPath": "/resources"
},
"description": "SAPUI5"
}
]
...
Related Information
To connect your application to a REST service, configure routing to an HTTP destination in the application
descriptor file.
A route defines which requests to the application are forwarded to the destination. Routes are matched with the
path from a request. All requests with paths that start with the path from the route are forwarded to the
destination.
If you define multiple routes in the application descriptor file, the route for the first matching path is selected.
The HTTP destination must be created in the account where the application is running. For more information on
HTTP destinations, see Creating HTTP Destinations [page 347] and Assigning Destinations for HTML5
Applications [page 1214].
...
"routes": [
{
"path": "<application path to be forwarded>",
"target": {
"type": "destination",
"name": "<name of the destination>"
},
"description": "<description>"
}
]
...
Example
With this configuration, all requests with paths starting with /gateway are forwarded to the gateway
destination.
...
"routes": [
{
"path": "/gateway",
"target": {
"type": "destination",
"name": "gateway"
},
"description": "Gateway System"
}
]
...
The browser sends a request to your HTML5 application to the path /gateway/resource (1). This request is
forwarded by the HTML5 application to the service behind the destination gateway (2). The path is shortened
to /resource. The response returned by the service is then routed back through the HTML5 application so
that the browser receives the response (3).
Destination Properties
In addition to the application-specific setup in the application descriptor, you can configure the behavior of routes
at the destination level. For information on how to set destination properties, see step 9 in Creating HTTP
Destinations [page 347].
Timeout Handling
A request to a REST service can time out when the network or backend is overloaded or unreachable. Different
timeouts apply for initially establishing the TCP connection (HTML5.ConnectionTimeoutInSeconds) and
reading a response to an HTTP request from the socket (HTML5.SocketReadTimeoutInSeconds). When a
timeout occurs, the HTML5 application returns a gateway timeout response (HTTP status code 504) to the
client.
While some long-running requests may require increasing the socket timeout, we do not recommend that you
change the default values. Excessively high timeouts may impact the overall performance of the application by
blocking other requests in the browser or blocking back-end resources.
Redirect Handling
By default all HTML5 applications follow HTTP redirects of REST services internally. This means whenever your
REST service responds with a 301, 302, 303, or 307 HTTP status code, a new request is issued to the redirect
target. Only the response to this second request reaches the browser of the end user. To change this behavior, set
the HTML5.HandleRedirects destination property to false. As a consequence, the 30X responses given above
are directly sent back without following the redirect.
We recommend that you set this property to false. This helps improve the performance of your HTML5
application, because the browser caches redirects and thus avoids round trips. In addition, the automatic
handling of redirects might break your HTML5 application on the browser side if you use relative links. However,
certain service types may not work with a value of false.
● Your application descriptor contains a route that forwards requests starting with the path /gateway, to
the destination named gateway as in the example above.
● The service redirects requests from /resource to the path ./servicePath/resource.
When the browser requests the path /gateway/resource (1), the HTML5 application forwards it to the path /
resource of the service (2). As the service responds with a redirect (3), the HTML5 application sends another
request to the new path /servicePath/resource (4). This second response contains the required resource
and is forwarded back to the browser (5).
With HTML5.HandleRedirects set to false, the same request to the path /gateway/resource (1) is again
forwarded to the path /resource of the service (2). Now the redirect is directly forwarded back to the browser
(3). In this case it is the browser that sends another request to the path /gateway/servicePath/resource
(4), which the HTML5 application forwards to the service path /servicePath/resource (5). The requested
resource is then forwarded back to the browser (6).
The following destination properties have been deprecated and replaced by new properties. If the new and the old
properties are both set, the new property overrules the old one.
Table 350:
Security Considerations
When accessing a REST service from an HTML5 application, a new connection is initiated by the HTML5
application to the URL that is defined in the HTTP destination.
To prevent security-relevant headers or cookies from being returned from the REST service to the client, only
whitelisted headers are returned. While some headers are whitelisted by default, additional headers can be
whitelisted in the application descriptor file. For more information about how to whitelist additional headers, see
Header Whitelisting [page 1133].
Cookies that are retrieved from a REST service response are stored by the HTML5 application in an HTTP session
that is bound to the client request. The cookies are not returned to the client. If a subsequent request is initiated
to the same REST service, the cookies are added to the request by the application. Only those cookies are added
that are valid for the request in the sense of correct domain and expiration date. When the client session is
terminated, all associated cookies are removed from the HTML5 application.
Related Information
To access resources from another HTML5 application or a subscription to an HTML5 application, you can map an
application path to the corresponding application or subscription.
If the given path matches a request path, the resource is loaded from the mapped application or subscription. You
can use this feature to separate reusable resources into a dedicated application.
...
"routes": [
{
"path": "<application path to be mapped>",
"target": {
"type": "application",
"name": "<name of the application or subscription>"
"version": "<version to be referenced. Default is active version>",
},
"description": "<description>"
}
]
...
Example
This configuration example maps all paths starting with /icons to the active version of the application named
iconlibrary.
...
"routes": [
{
"path": "/icons",
"target": {
"type": "application",
"name": "iconlibrary"
},
"description": "Icon Library"
}
]
...
Related Information
The user API service provides an API to query the details of the user that is currently logged on to the HTML5
application.
If you use a corporate identity provider (IdP), some features of the API do not work as described here. The
corporate IdP requires you to configure a mapping from your IdP’s assertion attributes to the principal attributes
usable in SAP Cloud Platform. See Configure User Attribute Mappings [page 1413].
...
"routes": [
{
"path": "<application path to be forwarded>",
"target": {
"type": "service",
"name": "userapi"
}
}
]
...
The route defines which requests to the application are forwarded to the API. The route is matched with the path
from a request. All requests with paths that start with the path from the route are forwarded to the API.
Example
With the following configuration, all requests with paths starting with /services/userapi are forwarded to
the user API.
...
"routes": [
{
"path": "/services/userapi",
"target": {
"type": "service",
"name": "userapi"
}
}
]
...
The user API provides the following endpoints:
● /currentUser
● /attributes
The user API requires authentication. The user is logged on automatically even if the authenticationMethod
property is set to none in the neo-app.json file.
Calling the /currentUser endpoint returns a JSON object that provides the user ID and additional information of
the logged-on user. The table below describes the properties contained in the JSON object and specifies the
principal attribute used to compute this information.
Table 351:
The /currentUser endpoint maps a default set of attributes. To retrieve all attributes, use the /attributes
endpoint as described in User Attributes.
Example
A sample URL for the route defined above would look like this: /services/userapi/currentUser.
{
"name": "p12345678",
"firstName": "John",
"lastName": "Doe",
"email": "john@doeenterprise.com",
"displayName": "John Doe (p12345678)"
}
User Attributes
The /attributes endpoint returns the principal attributes of the current user as a JSON object. These attributes
are received as SAML assertion attributes when the user logs on. To make them visible, define a mapping within
the trust settings of the SAP Cloud Platform cockpit, see Configure User Attribute Mappings [page 1413].
Example
A sample URL for the route defined above would look like this: /services/userapi/attributes.
If the principal attributes firstname, lastname, companyname, and organization are present, an example
response may return the following user data:
{
"firstname": "John",
"lastname": "Doe",
"companyname": "Doe Enterprise",
"organization": "Customer sales and marketing"
}
For some endpoints, you can use query parameters to influence the output behavior of the endpoint. The
following table shows which parameters exist for the /attributes endpoint and how they impact the outputs.
Table 352:
Parameter: multiValuesAsArrays
Type: Boolean
Default value: false
Description: If set to true, multivalued attributes are formatted as JSON arrays. If set to false, only the first
value of the entire value range of the specific attribute is returned and formatted as a simple string.
Note
If set to true for an attribute that is not multivalued, then the value of the attribute is formatted as a simple
string and not a JSON array.
You can either display the default Welcome file or specify a different file as the Welcome file.
If the application is accessed only with the domain name in the URL, that is, without any additional path
information, the index.html file located in the root folder of your repository is delivered by default. If you want
to deliver a different file, configure it in the neo-app.json file using the welcomeFile parameter.
With the additional sendWelcomeFileRedirect parameter, you specify whether a redirect is sent to the Welcome
file or whether the Welcome file is delivered without a redirect. If this option is set, then instead of serving the
Welcome file directly under /, the HTML5 application sends a redirect to the welcomeFile location. With that,
relative links in a Welcome file that is not located in the root directory work correctly.
To configure the Welcome file, add a JSON string with the following format to the neo-app.json file:
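A sketch of the generic format, inferred from the example that follows and from the analogous logoutPage configuration later in this section (sendWelcomeFileRedirect is optional):

```json
"welcomeFile": "<path to welcome file>",
"sendWelcomeFileRedirect": true | false
```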
Example
An example configuration, which forwards requests without any path information to an index.html file in
the /resources folder would look like this:
"welcomeFile": "/resources/index.html",
"sendWelcomeFileRedirect": true
To trigger a logout of the logged-in user, you can configure a logout page in the application descriptor.
When executing a request to the configured logout page, the server triggers a logout. This results in a response
containing a logout request that is sent to the identity provider (IdP) to invalidate the user's session on the IdP.
After the user is logged out from the IdP, the configured logout page is called again. Now, the content of the logout
page is served. The logout page is always unprotected, independent of the authentication method of the
application and independent of additional security constraints. In case additional resources, for example, SAPUI5,
are referenced from the logout page, those resources have to be unprotected as well.
For information on how to configure certain paths as unprotected, see Authentication [page 1119] and
Authorization [page 1120].
Because non-active application versions always require authentication, a logout is only triggered for the active
application version. For non-active application versions the logout page is served without triggering a logout.
To configure a logout page for your application, use the following format in the neo-app.json file:
...
"logoutPage": "<path to logout page>"
...
Example
An example configuration of a logout page looks like this:
...
"logoutPage": "/logout.html"
...
To improve the performance of your application you can control the Cache-Control headers, which are returned
together with the static resource of your application.
You can configure caching for the complete application, for dedicated paths, or resources of the application. If the
path you specify ends with a slash character (/) all resources in the given directory and its sub-directories are
matched. You can also specify the path using wildcards, for example, the path **.html matches all resources
ending with .html. Only the first caching directive that matches an incoming request is applied. For example, an
earlier directive with the path **.css hides more specific paths such as /resources/custom.css.
With the directive property, you specify whether public proxies can cache the resources. The possible values
for the directive property are:
● public
The resource can be cached regardless of your response headers.
● private
Your resource is stored by end-user caches, for example, the browser's internal cache only.
● none
This is the default value; no additional directive is sent.
...
"cacheControl": [
{
"path": "<optional path of resources to be cached>",
"directive": "none | public | private",
"maxAge": <lifetime in seconds>
}
]
...
Example
An example configuration that caches all static resources for 24 hours looks like this:
...
"cacheControl": [
{
"maxAge": 86400
}
]
...
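As a further sketch combining the path and directive properties from the format above, a configuration that lets only end-user caches store CSS resources for one hour could look like this (the path and values are illustrative, not prescribed):

```json
"cacheControl": [
  {
    "path": "**.css",
    "directive": "private",
    "maxAge": 3600
  }
]
```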
For security reasons not all HTTP headers are forwarded from the application to a backend or from the backend
to the application.
The following HTTP headers are forwarded automatically without any additional configuration because they are
part of the HTTP standard:
● Accept
● Accept-Charset
● Accept-Encoding
● Accept-Language
● Accept-Range
● Age
● Allow
● Authorization
● Cache-Control
● Content-Language
● Content-Location
● Content-Range
● Content-Type
Additionally the following HTTP headers are transferred automatically because they are frequently used by Web
applications and (SAP) servers:
● Content-Disposition
● Content-MD5
● DataServiceVersion
● DNT
● MaxDataServiceVersion
● Origin
● RequestID
● Sap-ContextId
● Sap-Message
● Sap-Metadata-Last-Modified
● SAP-PASSPORT
● Slug: For more information, see Atom Publishing Protocol .
● X-CorrelationID
● X-CSRF-TOKEN
● X-Forwarded-For
● X-HTTP-Method
● X-Requested-With
If you need additional HTTP headers to be forwarded to or from a backend request or backend response, add the
header names in the following format to the neo-app.json file:
Example
An example configuration that forwards the additional headers X-Custom1 and X-Custom2 looks like this:
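The example snippet itself is not shown above. Assuming the descriptor property for additional forwarded headers is headerWhiteList (a name consistent with the note below that Content-Length cannot be whitelisted, but one you should verify against your platform version), it could look like this:

```json
"headerWhiteList": ["X-Custom1", "X-Custom2"]
```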
Excluded Headers
● Cookie
● Cookie2
● Content-Length
Cookies are used for user session identification and therefore should not be shared. The system stores cookies
sent by a backend in the session and removes them from the response before forwarding to the user. With the
next request to the backend the stored cookies are added again.
The Content-Length header cannot be whitelisted as the value is re-calculated on demand matching the
content of the given request or response.
This document contains references to API documentation to be used for development with SAP Cloud Platform.
REST APIs
Monitoring API
Table 353:
To learn about | See
How to configure and operate your deployed Java applications | Java: Application Operations [page 1136]
How to monitor your SAP HANA applications | SAP HANA: Application Operations [page 1202]
How to monitor the current status of the HTML5 applications in your account | HTML5: Application Operations [page 1209]
How to securely operate and monitor your cloud applications connected to on-premise systems | Cloud Connector Operator's Guide [page 566]
How to change the default SAP Cloud Platform application URL by configuring custom or platform domains | Configuring Application URLs [page 1221]
How to enable transport of SAP Cloud Platform applications via the CTS+ | Change Management with CTS+ [page 1237]
After you have developed and deployed your Java application on SAP Cloud Platform, you can configure and
operate it using the cockpit, the console client, or the Eclipse IDE.
Table 354:
Content
Table 355:
Console Client | Updating Application Properties [page 1141] | Specify various configurations using commands.
Eclipse IDE | Advanced Application Configurations [page 1049] | Use the options for advanced server and application configurations as well as direct reference to the cockpit UI.
Table 356:
Cockpit | Defining Application Details (Java Apps) [page 1149] | Start, stop, and undeploy applications, as well as start, stop, and disable individual application processes.
Console Client | start [page 280]; stop [page 284]; restart [page 257] | Manage the lifecycle of a deployed application or individual application processes by executing the respective command.
Monitoring
Table 357:
Cockpit | Viewing Monitoring Metrics of Java Applications [page 1152] | View the current metrics of a selected process to check the runtime behavior of your applications.
Console Client | Configuring Availability Checks for Java Applications from the Console Client [page 1195] | To monitor whether a deployed application is up and running, register availability checks and JMX checks to receive notifications if the application goes down.
Profiling
Table 358:
Eclipse IDE | Profiling Applications [page 1181] | Analyze resource-related problems in your application.
Logging
Table 359:
Cockpit | Using Logs in the Cockpit [page 1177] | View the logs and change the log settings of any applications deployed in your account.
Eclipse IDE | Using Logs in the Eclipse IDE [page 1170] | View the logs and change the log settings of the applications deployed in your account or on your local server.
Table 360:
Cockpit | Using Maintenance Mode for Planned Downtimes [page 1162]; Soft Shutdown [page 1165] | Supports zero downtime and planned downtime scenarios. Disable the application or individual processes in order to shut down the application or processes gracefully.
Console Client | Updating Applications with Zero Downtime [page 1160] | Deploy a new version of a productive application or perform maintenance.
As an operator, you can configure an SAP Cloud Platform application according to your scenario.
When you are deploying the application using SAP Cloud Platform console client, you can specify various
configurations using the deploy command parameters:
You can scale an application to ensure its ability to handle more requests.
Using the cockpit, you can perform the following identity and access management configuration tasks:
Using the cockpit and the console client, you can configure HTTP, Mail and RFC destinations to make use of them
in your applications:
Using the cockpit and the console client, you can view and download log files of any applications deployed in your
account:
Related Information
You can update a property of an application running on SAP Cloud Platform without redeploying it.
Context
Application properties are configured during deployment with a set of deploy parameters in the SAP Cloud
Platform console client. If you want to change any of these properties (Java version, runtime version,
compression, VM arguments, compute unit size, URI encoding, minimum and maximum application processes)
without the need to redeploy the application binaries, use the set-application-property command. Execute
the command separately for each property that you want to set.
Procedure
1. Open the command prompt and navigate to the folder containing neo.bat/sh (<SDK installation folder>/
tools).
2. Execute set-application-property specifying the new value of one property that you want to change.
For example, to change the compute unit size to premium, execute:
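The command itself is not shown above; a sketch of it could look like this (host, account, application, and user values are placeholders to be replaced with your own, and the exact parameter names should be checked against the set-application-property command reference):

```
neo set-application-property --host hana.ondemand.com --account mysubaccount --application myapp --user myuser --size premium
```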
3. For the change to take effect, restart your application using the restart command.
Related Information
Applications deployed on SAP Cloud Platform are always started on the latest version of the application runtime
container. This version contains all released fixes, critical patches, and enhancements and is therefore the
recommended option for applications. In some special cases, you can choose the version of the runtime container
your application uses by specifying it with the parameter <--runtime-version> when deploying your
application. To change this version, you need to redeploy the application without specifying this parameter.
You have downloaded and configured SAP Cloud Platform console client. For more information, see Setting Up
the Console Client [page 52].
Context
If you want to choose the version of the application runtime container, follow the procedure.
Procedure
1. Open the command prompt and navigate to the folder containing neo.bat/sh (<SDK installation folder>/
tools).
2. In the console client command line, execute the <list-runtime-versions> command to display all
recommended versions. We recommend that you choose the latest available version.
3. Redeploy your application with parameter <--runtime-version> set to the selected version number.
Caution
By selecting an older version of the application runtime, you do not get the latest released fixes, critical
patches, and enhancements, which may affect the smooth operation and supportability of your application.
Consider updating the selected version periodically. Plan updates to the latest version of the application
runtime and apply them in your test environment first. Older application runtime versions will
be deprecated and expire. Refer to the <list-runtime-versions> command for information.
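Step 3 above could be sketched as follows (host, account, application, source, and user values are placeholders; the version number must be one of the values returned by list-runtime-versions):

```
neo deploy --host hana.ondemand.com --account mysubaccount --application myapp --source myapp.war --user myuser --runtime-version 2
```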
Related Information
You can choose the Java Runtime Environment (JRE) version used for an application.
Prerequisites
You have downloaded and configured SAP Cloud Platform console client.
For more information, see Setting Up the Console Client [page 52]
Context
The JRE version depends on the type of SAP Cloud Platform SDK you are using. By default the version is:
If you want to change this default version, you need to specify the --java-version parameter when deploying the
application using the SAP Cloud Platform console client. Only the version number of the JVM can be specified.
You can use JRE 8 with the Java Web Tomcat 7 runtime (neo-java-web version 2.25 or higher) in productive
accounts.
For applications developed using the SDK for Java Web Tomcat 7 (2.X), the default JRE is 7. If you are developing
a JSP application using JRE 8, you need to add a configuration in the web.xml that sets the compiler target VM
and compiler source VM versions to 1.8.
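A sketch of such a web.xml configuration, assuming the standard Apache Tomcat JspServlet init parameters compilerSourceVM and compilerTargetVM (verify the servlet name and class used by your runtime):

```xml
<servlet>
  <servlet-name>jsp</servlet-name>
  <servlet-class>org.apache.jasper.servlet.JspServlet</servlet-class>
  <init-param>
    <param-name>compilerSourceVM</param-name>
    <param-value>1.8</param-value>
  </init-param>
  <init-param>
    <param-name>compilerTargetVM</param-name>
    <param-value>1.8</param-value>
  </init-param>
</servlet>
```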
Procedure
1. Open the command prompt and navigate to the folder containing neo.bat/sh (<SDK installation
folder>/tools).
2. Deploy the application specifying --java-version. For example, to use JRE 7, execute the following command:
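The command referenced in step 2 could be sketched as follows (host, account, application, source, and user values are placeholders):

```
neo deploy --host hana.ondemand.com --account mysubaccount --application myapp --source myapp.war --user myuser --java-version 7
```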
Related Information
Using gzip response compression can optimize response time and improve interaction with an application, as it
reduces the traffic between the Web server and browsers. Enabling compression configures the server to
return zipped content for the specified MIME type and size of the response.
Prerequisites
You have downloaded and configured SAP Cloud Platform console client.
For more information, see Setting Up the Console Client [page 52]
Context
You can enable and configure gzip using some optional parameters of the deploy command in the console client.
When deploying the application, specify the following parameters:
Procedure
If you enable compression but do not specify values for --compressible-mime-type or --compression-min-size,
then the defaults are used: text/html, text/xml, text/plain and 2048 bytes, respectively.
If you specify values for --compressible-mime-type or --compression-min-size but do not enable compression,
then the operation passes, compression is not enabled and you get a warning message.
If you want to enable compression for all responses independently of MIME type and size, use only
--compression force.
Example
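A deploy command enabling compression could be sketched like this (placeholder host, account, application, source, and user values; on is assumed to be the enabling value, by analogy with the off and force values mentioned in this section, and the MIME type and minimum size override the defaults described above):

```
neo deploy --host hana.ondemand.com --account mysubaccount --application myapp --source myapp.war --user myuser --compression on --compressible-mime-type text/css --compression-min-size 1024
```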
Once enabled, you can disable the compression by redeploying the application without the compression options
or with parameter --compression off.
Related Information
Using SAP Cloud Platform console client, you can configure the JRE by specifying custom VM arguments.
Prerequisites
For more information, see Setting Up the Console Client [page 52]
Context
● System properties - they are used when starting the application process. For example,
-D<key>=<value>
● Memory arguments - use them to define custom memory settings of your compute units. The supported
memory settings are:
-Xms<size> - set initial Java heap size
-Xmx<size> - set maximum Java heap size
-XX:PermSize - set initial Java Permanent Generation size
-XX:MaxPermSize - set maximum Java Permanent Generation size
Note
We recommend that you use the default memory settings. Change them only if necessary and note that this
may impact the application performance or its ability to start.
1. Open the command prompt and navigate to the folder containing neo.bat/sh (<SDK installation
folder>/tools).
2. Deploy the application, specifying your desired configurations. For example, if you want to specify a currency
and maximum heap size 1 GiB, then execute the deploy with the following parameters:
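The deploy call in step 2 could be sketched as follows (placeholder host, account, application, source, and user values; the quotation marks group both VM arguments into a single parameter value):

```
neo deploy --host hana.ondemand.com --account mysubaccount --application myapp --source myapp.war --user myuser --vm-arguments "-Dcurrency=EUR -Xmx1024m"
```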
Note
If you are deploying using the properties file, note that you have to use double quotation marks twice: vm-
arguments=""-Dcurrency=EUR -Xmx1024m"".
This will set the system properties -Dcurrency=EUR and the memory argument -Xmx1024m.
To specify a value that contains spaces (for example, -Dname=John Doe), note that you have to use single
quotation marks for this parameter when deploying.
Related Information
Each application is started on a dedicated SAP Cloud Platform Runtime. One application can be started on one or
many application processes, according to the compute unit quota that you have.
Prerequisites
● You have downloaded and configured SAP Cloud Platform console client. For more information, see Setting
Up the Console Client [page 52].
● Your application can run on more than one application process.
Scaling an application ensures its ability to handle more requests, if necessary. Scalability also provides failover
capabilities - if one application process crashes, the application will continue to work. First, when deploying the
application, you need to define the minimum and maximum number of application processes. Then, you can scale
the application up and down by starting and stopping additional application processes. In addition, you can also
choose the compute unit size, which provides a certain central processing unit (CPU), main memory, and disk
space.
Procedure
1. Open the command prompt and navigate to the folder containing neo.bat/sh (<SDK installation
folder>/tools).
2. Deploy the application, specifying --minimum-processes and --maximum-processes. The --minimum-
processes parameter defines the number of processes on which the application is started initially. Make sure
it is at least 2.
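The deploy call in step 2 could be sketched as follows (placeholder host, account, application, source, and user values; the maximum of 4 is illustrative and must fit within your compute unit quota):

```
neo deploy --host hana.ondemand.com --account mysubaccount --application myapp --source myapp.war --user myuser --minimum-processes 2 --maximum-processes 4
```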
3. Start the application by executing the start command.
4. You can now scale the application up by executing the start command again. Each new start starts another
application process. You can repeat the start until you reach the maximum number of application processes you
defined, within the quota you have purchased.
5. If for some reason you need to scale the application down, you can stop individual application processes by
using soft shutdown. Each application process has a unique process ID that you can use to disable and stop
the process.
a. List all application processes with their attributes (ID, status, last change date) by executing neo status
and identify the application process you want to stop.
b. Execute neo disable for the application process you want to stop.
You can also scale your application vertically by choosing the compute unit size on which it will run after the
deploy. You can choose the compute unit size by specifying the --size parameter when deploying the
application.
For example, if you have a productive account and have purchased a package with Premium edition compute
units, then you can run your application on a Premium compute unit size, by executing
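Such a deploy call could be sketched as follows (placeholder host, account, application, source, and user values; premium is the compute unit size named in the example above):

```
neo deploy --host hana.ondemand.com --account mysubaccount --application myapp --source myapp.war --user myuser --size premium
```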
Related Information
For an overview of the current status of the individual applications in your account, use the cockpit. It provides key
information in a summarized form and allows you to initiate actions, such as starting, stopping, and undeploying
applications.
Related Information
You can view details about your currently selected Java application. By adding a suitable display name and a
description, you can identify the application more easily.
Context
In the overview of a Java application in the cockpit, you can add and edit the display name and description for the
Java application as needed.
● Display name - a human-readable name that you can specify for your Java application and change it later on, if
necessary.
● Description - a short descriptive text about the Java application, typically stating what it does.
Procedure
You can directly start, stop, and undeploy applications, as well as start, stop, and disable individual application
processes.
Context
An application can run on one or more application processes. The use of multiple processes allows you to
distribute application load and provide failover capability. The number of processes that you can start depends on
the compute unit quota available to your account and how an individual application has been configured.
Note
While an application name is assigned manually and is unique in an account, an application process ID is
generated automatically whenever a new process is started and is unique across the cloud platform.
1. Open the account in the cockpit and choose Applications Java Applications in the navigation area.
2. Select the relevant application in the list and proceed as follows:
Table 361:
To... Choose...
The application’s state continues to be shown as Started and an additional process ap
pears in the Processes panel.
Note
By default, an application is started on one application process and is allowed to run on a maximum of one
process. To use multiple processes, an application must be deployed with the minimum-processes and
maximum-processes parameters set appropriately.
The running process is stopped and a new process started. A new process ID is gener
ated automatically.
The process state changes to Started (disabled). The process continues to handle
working sessions, but does not accept new connections, which allows you to shut it
down gracefully.
The process is stopped and removed from the list. If the application has no further
processes, it transitions to the Stopped state.
All running processes are stopped and the application transitions to the Stopped state.
The application is deleted from your account and disappears from the application list.
This also removes all data related to the application, such as configuration settings and
logs.
Data source bindings are not deleted. To delete all data source bindings created for this
application, select the checkbox.
Note
Bound databases and schemas will not be deleted. You can delete database and
schema bindings using the Databases & Schemas panel.
Related Information
The status of an individual process is based on values that reflect the process run state and its monitoring
metrics.
Procedure
1. In the cockpit, choose Applications Java Applications in the navigation area and then select an
application in the application list.
The Processes panel shows a status value about the current state of the available processes and the overall
state for the metrics as follows:
State
○ Started
○ Started (Disabled)
○ Starting
○ Stopping
Metric
○ OK
○ Warning (also shown for intermediate states)
○ Critical
○ Pending
2. Select the relevant process to go to the process overview to view the status summary and further details:
Table 362:
Panel Description
Status Summary Displays the current values of the two status categories and the runtime version. A short text
summarizes any problems that have been detected.
State Indicates whether the process has been started or is transitioning between the Started and
Stopped states. The Error state indicates a fault, such as server unavailability, timeout, or VM
failure.
Runtime Shows the runtime version on which the application process is running and its current status:
○ OK: Still within the first three months since it was released
○ No longer recommended: Has exceeded the initial three-month period
○ Expired: 15 months since its release date
Related Information
In the cockpit, you can view the current metrics of a selected process to check the runtime behavior of your
applications. You can also view the metrics history of an application or a process to examine the performance
trends of your application over different intervals of time or investigate the reasons that have led to problems with
it.
Metric | Value
Used Disc Space | What percent of the whole disc space is currently used.
Requests per Minute | The number of HTTP requests processed by the Java application for the last minute.
CPU Load | What percent of the CPU is used on average over the last minute.
Disk I/O | How many bytes per second are currently read or written to the disc.
Heap Memory Usage | What percent of the heap memory is currently used.
Average Response Time | The average response time in milliseconds of all requests processed for the last minute.
Busy Threads | The current number of threads that are processing HTTP requests.
Procedure
1. To view the current metrics for a process, open Applications Java Applications in the navigation area
for the account.
2. Choose an application in the list.
In the overview of the Java application, charts allow you to get a quick overview of the following metrics:
○ The number of HTTP requests processed by the Java application per hour over the last 24 hours
○ The maximum CPU consumption of the Java application per hour over the last 24 hours
3. In the overview of the application, choose the relevant process to go to the process dashboard.
4. Choose Monitoring in the navigation area.
Alternatively, choose the Metrics Details link in the Metrics tile of Status Summary.
The Current Metrics panel shows the current state of the metrics for the selected process. Details about two
groups of metrics are shown – those registered by the platform (default) like CPU usage or Average Response
Time and the custom ones registered by the user (user-defined). You can use the Metrics dropdown list to
select which group of metrics to be displayed in the panel.
5. To view the history of monitoring metrics, depending on whether you want to view them on an application or
process level, proceed as follows:
○ Application level - open the application whose history of metrics you want to see and choose
Monitoring Application Monitoring in the navigation area. All application processes, including those
that are currently stopped, are visualized on the same charts so you can compare them.
○ Process level - open an application and in the Processes section, choose a process and open Monitoring in
the navigation area. You can navigate to the metrics history of the whole application using the Display all
processes link.
You can select different time intervals for viewing the metrics. Depending on the selected interval, data is
aggregated as follows:
○ last 12 or 24 hours - data is collected each minute
○ last 7 days - data is aggregated from the average values for 10 minutes
○ last 30 days - data is aggregated from the average values for an hour
You can also select a custom time interval when you are viewing history of metrics both on application and
process level. Note that if you select an interval in which the application has not been running, the graphics
will not contain any data.
Related Information
Context
This page describes the format of the Default Trace file. You can view this file for your Web applications via the
cockpit and the Eclipse IDE.
For more information, see Investigating Performance Issues Using the SQL Trace [page 965] and Using Logs in
the Eclipse IDE [page 1170]
Parameter | Description
RECORD_SEPARATOR | ASCII symbol for separating the log records. In our case, it is "|" (ASCII code: 124)
ESC_CHARACTER | ASCII symbol for escape. In our case, it is "\" (ASCII code: 92)
SEVERITY_MAP | Mapping of log levels to severities: FINEST|Information|FINER|Information|FINE|Information|CONFIG|Information|DEBUG|Information|PATH|Information|INFO|Information|WARNING|Warning|ERROR|Error|SEVERE|Error|FATAL|Error
HEADER_END | Marks the end of the header
Besides the main log information, the Default Trace logs information about the tenant users that have
accessed a relevant Web application. This information is provided in the new Tenant Alias column parameter,
which is automatically logged by the runtime. The Tenant Alias is:
● A human-readable string;
● For new accounts, it is shorter than the tenant ID (8-30 characters);
● Unique for the relevant SAP Cloud Platform landscape;
● Equal to the account name (for new accounts); might be equal to the tenant ID (for old accounts).
Example
In this example, the application has been accessed on behalf of two tenants - with identifiers 42e00744-
bf57-40b1-b3b7-04d1ca585ee3 and 5c42eee4-d5ad-494e-9afb-2be7e55d0f9c.
FILE_TYPE:DAAA96DE-B0FB-4c6e-AF7B-A445F5BF9BE2
FILE_ID:1391169413918
ENCODING:[UTF8|NWCJS:ASCII]
RECORD_SEPARATOR:124
COLUMN_SEPARATOR:35
ESC_CHARACTER:92
COLUMNS:Time|TZone|Severity|Logger|ACH|User|Thread|Bundle name|JPSpace|JPAppliance|
JPComponent|Tenant Alias|Text|
SEVERITY_MAP:FINEST|Information|FINER|Information|FINE|Information|CONFIG|
Information|DEBUG|Information|PATH|Information|INFO|Information|WARNING|Warning|
ERROR|Error|SEVERE|Error|FATAL|Error
HEADER_END
2014 01 31 12:07:09#
+00#INFO#com.sap.demo.tenant.context.TenantContextServlet##anonymous#http-bio-8041-
exec-1##myaccount#myapplication#web#null#null#myaccount#The app was accessed on
behalf of tenant with ID: '42e00744-bf57-40b1-b3b7-04d1ca585ee3'|
2014 01 31 12:08:30#
+00#INFO#com.sap.demo.tenant.context.TenantContextServlet##anonymous#http-bio-8041-
Related Information
SAP Cloud Platform provides two productive application runtimes based on the set of supported Java EE APIs.
These are Java Web and Java EE 6 Web Profile.
Context
The runtime is assigned either by default or explicitly set when an application is deployed. If a version is not
specified during deployment, the major runtime version is determined automatically based on the SDK that is
used to deploy the application. By default, applications are deployed with the latest minor version of the
respective major version.
You are strongly advised to use the default version, since this contains all released fixes and critical patches,
including security patches. Override this behavior only in exceptional cases by explicitly setting the version, but
note that this is not recommended practice.
Procedure
1. In the cockpit, choose Java Applications in the navigation area and then select the relevant application in the
application list.
The Runtime panel provides the following information:
○ The exact runtime version on which the process has been started (major, minor, micro, and nano
versions).
Related Information
In the cockpit, information about the resources available to your account and their current and past usage is
provided at both account and application level. At account level, the values are aggregated for all applications in
the account.
Resource consumption is presented in the form of aggregate values, which depend on the resource type:
To view the resource consumption for the selected account, open the account in the cockpit and choose Resource
Consumption in the navigation area.
Note
By default, resource consumption is displayed for the current month. You can select an earlier month from the
dropdown box.
Each resource type is listed with the associated platform service and the measurements recorded for the selected
month, as well as the quota actually assigned to the account:
Table 364:
Service | Resource | Description
Runtime | Compute Unit Size | Maximum number of VMs used of the given size. Sizes:
Network | Data Transfer | Total size of outgoing HTTP traffic (egress bandwidth)
Connectivity | Connections to non-SAP on-premise systems | Maximum number of connections to non-SAP on-premise systems
Storage | Content Size | Maximum amount of user data for stored documents and versions
Storage | Metadata Size | Maximum amount of metadata for stored documents (for example, properties, folders, ACLs)
Persistence | DB Space (HANA DB) | Maximum schema size, including table data and indexes
Persistence | DB Space (MaxDB) | Maximum schema size, including table data and indexes
To view the resource consumption for a specific application, open the account in the cockpit and choose
Applications Java Applications in the navigation area. Select the relevant application in the list and then
choose Resource Consumption in the navigation area.
Note
The same information is displayed as at the level of the account, except for the account quota.
Related Information
If you are an application operator and need to deploy a new version of a productive application or perform
maintenance, you can choose among several approaches.
Note
In all cases, first test your update in a non-productive environment. The newly deployed version of the
application overwrites the old one and you cannot revert to it automatically. You have to redeploy the old
version to revert the changes, if necessary.
Zero Downtime
Use: When your new application version is backward compatible with the old version - that is, the new version of
the application can work in parallel with the already running old application version.
Steps: Deploy a new version of the application and disable and enable processes in a rolling manner. For an
automated execution of the same procedure, use the rolling-update command.
See Updating Applications with Zero Downtime [page 1160] and rolling-update [page 264].
Planned Downtime
Description: Shows a custom maintenance page to end users. The application is automatically disabled.
Use: When the new version is backward incompatible - that is, running the old and the new version in parallel may
lead to inconsistent data or erroneous output.
Steps: Enable maintenance mode to redirect new connections to the maintenance application. Deploy and start
the new application version and then disable maintenance mode.
Soft Shutdown
Description: Supports zero downtime and planned downtime scenarios. Disabled applications/processes stop
accepting new connections from users, but continue to serve already running connections.
Use: As part of the zero downtime scenario or to gracefully shut down your application during a planned downtime
(without maintenance mode).
Steps: Disable the application (console client only) or individual processes (console client or cockpit) in order to
shut down the application or processes gracefully.
Related Information
The platform allows you to update an application in a manner in which the application remains operable all the
time and your users do not experience downtime.
Prerequisites
Context
Each application runs on one or more dedicated application processes. You can start one or many application
processes at any given time, according to the compute unit quota that you have. Each process has a unique
process ID that you can use to stop it. To update an application non-disruptively for users, you handle individual
processes rather than the application as a whole. The procedure below describes the manual steps of a zero
downtime update. Use it if you want more control over the individual steps, for example, to apply a different
timeout to different application processes before stopping them. For an automated execution of the same
procedure, use the rolling-update command. For more information, see rolling-update [page 264].
Note
Not applicable to hanatrial.ondemand.com.
Procedure
1. Open the command prompt and navigate to the folder containing neo.bat/sh (<SDK installation
folder>/tools).
2. List the status of the application, which shows all its processes with their attributes (ID, status, last change
date), by executing <neo status>. Identify and make a note of the application process IDs, which you will
need to stop in the following steps. Application processes are listed chronologically by their last change date.
3. Deploy the new version of your application on SAP Cloud Platform by executing <neo deploy> with the
appropriate parameters.
Note that to execute the update, you need to start one additional application process with the new version.
Therefore, make sure you have configured a high enough maximum number of processes for the application.
4. Start a new application process which is running the new version of the application by executing <neo
start>.
5. Use soft shutdown for the application process running the old version of the application:
a. Execute <neo disable> using the ID you identified in Step 2. This command stops the creation of new
connections to the application from new end users, but keeps the already running ones alive.
b. Wait for some time so that all working sessions finish. You can monitor user requests and used resources
by configuring JMX checks, or, you can just wait for a given time period that should be enough for most of
the sessions to finish.
c. Stop the application process by executing <neo stop> using the <application-process-id>
parameter.
6. (Optional) Make sure the application process is stopped by checking its status using the <application-
process-id> parameter.
7. If the application is running on more than one application process, repeat steps 4 and 5 until all the
processes running the old version are stopped and the corresponding number of processes running the new
version are started.
Example
For example, if your application runs on two application processes, you need to perform the following steps:
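The two-process case can be sketched as a dry-run shell script. This is only a sketch: the neo() stub below just echoes the command it would run (replace it with the real neo.sh from the SDK tools folder), and the host, account, user, and process IDs are hypothetical placeholders.

```shell
#!/bin/sh
# Dry-run sketch of a manual zero downtime update for an application with two
# processes. The neo() stub only echoes the command line it would execute;
# host, account, user, and process IDs are hypothetical placeholders.
neo() { echo "neo $*"; }

ARGS='-h hana.ondemand.com -a myaccount -b myapp -u p1234'

neo deploy $ARGS -s example.war          # deploy the new version (not running yet)

# Roll over each process running the old version (IDs taken from `neo status`).
for OLD_PID in old-process-1 old-process-2; do
    neo start $ARGS                                        # start a process with the new version
    neo disable $ARGS --application-process-id "$OLD_PID"  # soft shutdown: no new connections
    sleep 1                                                # in practice: wait for sessions to finish
    neo stop $ARGS --application-process-id "$OLD_PID"     # then stop the old process
done
```

Running the script only prints the command sequence; with the real client, each call additionally prompts for the password and takes the deploy parameters described in the SDK documentation.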
Related Information
An operator can start and stop planned application downtime, during which a customized maintenance page for
that application is shown to end users.
Prerequisites
To redirect an application, you require a maintenance application. A maintenance application replaces your
application for a temporary period and can be as simple as a static page or have more complex logic. You need to
provide the maintenance application yourself and ensure that it meets the following conditions:
● It is a Java application.
● It is deployed in the same account as your application.
● It has been started, that is, it is up and running.
● It must not be in maintenance itself.
Context
Note
Not applicable to hanatrial.ondemand.com.
You can enable the maintenance mode for an application from the application dashboard. An application can be
put into maintenance mode only if it is not being used as a maintenance application itself and is running (Started
state).
Procedure
1. Log on to the cockpit, select an account and choose Applications > Java Applications in the navigation
area.
2. Click the application's name in the list to open the application dashboard and in the Application Maintenance
panel choose (Start Maintenance).
3. In the dialog box, select the application that will serve as the maintenance application and choose Set
Selected Application. In the application list, the application’s state is now shown as Started (In Maintenance).
From this point on, new connections will be redirected to the maintenance application. All active connections
will still be handled until the application is stopped.
4. Optional: To view the details in the State panel, select your application in the list.
The following details confirm that your application is in maintenance mode:
○ In Maintenance
○ A link to the assigned maintenance application: Click the link to open the application dashboard for this
application.
Results
The temporary redirect to the maintenance application remains effective until you take your application out of
maintenance. To disable the maintenance mode, choose (Switch maintenance mode off). Before doing so,
you should ensure that your application is up and running to avoid end users experiencing HTTP errors.
Console Client
Procedure
1. Open the command prompt and navigate to the folder containing neo.bat/sh (<SDK installation
folder>/tools).
4. Stop the planned application downtime by executing <neo stop-maintenance> in the command line. This
resumes traffic to the application and the maintenance page application stops handling incoming requests.
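The whole console flow can be sketched as follows. stop-maintenance is named in the step above; start-maintenance and its --maintenance-app parameter are assumed names here, so check `neo help` in your SDK for the exact syntax. The neo() stub only echoes commands instead of executing them.

```shell
#!/bin/sh
# Dry-run sketch of planned downtime from the console client. The neo() stub
# only echoes commands; start-maintenance and --maintenance-app are ASSUMED
# names (stop-maintenance is confirmed by the text above).
neo() { echo "neo $*"; }

ARGS='-h hana.ondemand.com -a myaccount -u p1234'

neo start-maintenance $ARGS -b myapp --maintenance-app maintenancepage  # redirect new traffic
neo deploy $ARGS -b myapp -s new-version.war                            # update the application
neo restart $ARGS -b myapp                                              # bring the new version up
neo stop-maintenance $ARGS -b myapp                                     # resume normal traffic
```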
Related Information
Soft shutdown enables an operator to stop an application or application process in a way that no data is lost.
Using soft shutdown gives sufficient time to finish serving end user requests or background jobs.
Prerequisites
Context
Using soft shutdown, an operator can restart the application (for example, in order to update it) in a way that
does not disturb end users. First, the application process is disabled. This means that requests from users who
already have open connections to this process are still processed, but new requests no longer reach this
application process. After the application process is disabled and the remaining sessions are processed, it can
be stopped by the operator.
Cockpit
Context
You can disable application processes in the Processes panel on the application dashboard or the State panel on
the process dashboard.
Procedure
1. Log on to the cockpit, select an account and choose Applications > Java Applications in the navigation
area.
2. Select an application in the application list.
3. In the Processes panel, choose (Disable process) in the relevant row. The process state changes to Started
(disabled).
Note
You can also select the process and disable it from the process dashboard.
4. Wait for some time so that all working sessions finish and then stop the process.
Console Client
Procedure
1. Open the command prompt and navigate to the folder containing neo.bat/sh (<SDK installation
folder>/tools).
2. Disable processing of requests from new users to the application by executing <neo disable> with the
appropriate parameters. If you want to stop requests to a specific application process only and not to the
whole application, add the <--application-process-id> parameter.
If you disable the entire application, or all processes of the application, then new users requesting the
application will not be able to access it and will get an error.
3. Wait for some time so that all working sessions finish.
You can monitor user requests and used resources by configuring JMX checks, or, you can just wait for a
given time period that should be enough for most of the sessions to finish.
4. Stop the application by executing <neo stop> with the appropriate parameters. If you want to terminate a
specific application process only and not the whole application, add the <--application-process-id>
parameter.
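Steps 2 to 4 can be sketched as a short script; the neo() stub echoes the command it would run, and the process ID is a hypothetical value taken from neo status.

```shell
#!/bin/sh
# Dry-run sketch of a soft shutdown of a single application process.
neo() { echo "neo $*"; }               # stub; use the real neo.sh from <SDK>/tools

ARGS='-h hana.ondemand.com -a myaccount -b myapp -u p1234'
PID=process-1234                       # hypothetical ID from `neo status`

neo disable $ARGS --application-process-id "$PID"  # step 2: no new user connections
sleep 1                                            # step 3: wait for sessions to finish
neo stop $ARGS --application-process-id "$PID"     # step 4: stop the process
```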
Related Information
In the event of unplanned downtime when there is no application process able to serve HTTP requests, a default
error is shown to users. To prevent this, an operator can configure a custom downtime page using a downtime
application, which takes over the HTTP traffic if an unplanned downtime occurs.
Prerequisites
Note
Not applicable to hanatrial.ondemand.com.
● You have downloaded and configured the console client. We recommend that you use the latest SDK. For
more information, see Setting Up the Console Client [page 52].
● You have deployed and started your own downtime application in the same SAP Cloud Platform account as
the application itself.
● The downtime application must be developed so that it returns HTTP response code 503. This is especially
important if availability checks are configured for the original application, so that unplanned downtimes are
properly detected.
Procedure
1. Open the command prompt and navigate to the folder containing neo.bat/sh (<SDK installation
folder>/tools).
2. Configure the downtime application by executing neo set-downtime-app in the command line.
3. (Optional) If the downtime page is no longer needed (for example, if the original application has been
undeployed), you can remove it by executing the clear-downtime-app command.
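The two commands can be sketched as below; the neo() stub echoes instead of executing, and the --downtime-app parameter name is an assumption, so consult `neo help set-downtime-app` for the exact syntax.

```shell
#!/bin/sh
# Dry-run sketch of configuring and removing a custom downtime page. The neo()
# stub echoes instead of executing; --downtime-app is an ASSUMED parameter name.
neo() { echo "neo $*"; }

ARGS='-h hana.ondemand.com -a myaccount -b myapp -u p1234'

neo set-downtime-app $ARGS --downtime-app mydowntimeapp  # takes over on unplanned downtime
neo clear-downtime-app $ARGS                             # remove when no longer needed
```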
Related Information
Overview
To produce logs that you can use for analysis and troubleshooting at runtime, use a logging API in your cloud
application.
For cloud applications, we support the Simple Logging Facade for Java (SLF4J) API. Logging is done using the
Logger class. All logs are placed in the default trace file of the server and can be viewed at runtime in the
cockpit.
Note
The log file is rotated according to the following:
In both cases, the log file is archived into a GZ file, and it starts over collecting logs. The name of the newly
archived file contains the date it is created.
Prerequisites
● Create an application for SAP Cloud Platform. For more information, see Creating a HelloWorld Application
[page 56].
● Ensure that you are assigned a Developer or Administrator role. For more information, see Account Member
Roles [page 30].
Note
Cloud applications can directly access the SLF4J API without adding any references or packaging the library in
the application archive. For more information, see SLF4J API .
Note
SAP Cloud Platform provides a logging framework implementation that cannot be changed. Including an
slf4j-api library in a WAR causes conflicts. Exclude this library from your application and all its dependencies
recursively.
To construct a parameterized message, you can use one of the following ways:
● Check whether the log level is enabled and concatenate the message:
if (logger.isInfoEnabled()) {
    logger.info("Message logged for name " + name + " with level info");
}
● Pass the parameter as an argument to the respective methods (info, error, and so on):
logger.info("Message logged for name {} with level info", name);
Example
You can add an error log in your application using the following code:
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class YourClass {
    public static void main(String[] args) {
        Logger logger = LoggerFactory.getLogger(YourClass.class);
        logger.error("message");
    }
}
Log Retention
Log records are kept on the central log server for only seven days. For archival purposes, you can download any
kind of log file using any of the SAP Cloud Platform tools (Eclipse IDE, console client, cockpit).
Note
After the logs have been written by the application runtime, they are transported to the central log server. If,
however, the application is restarted during this transfer, part of the logs may be lost.
Level | Description
ERROR | Error events that might still allow the application to continue running.
OFF | This level has the highest possible rank and is intended to turn off logging.
Related Information
Context
After deploying your Web applications, you can check their logs and configure their logger settings. This
section describes the following logging tasks you can perform in the Eclipse IDE:
You can perform these operations both in the cloud and on a local server.
Persistence for loggers is also enabled both in the cloud and on a local server. Logger level settings are kept
and restored on a server restart, so you do not need to set them again.
● You have downloaded and set up your Eclipse IDE, SAP Cloud Platform Tools for Java, and SDK.
For more information, see Setting Up the Development Environment [page 43].
● You have created and deployed a Web application that uses logging functionality on SAP Cloud Platform.
For more information, see Logging in Applications [page 1168].
● You are assigned a Developer or Administrator role. For more information about the roles, see Account
Member Roles [page 30].
1. After deploying an application in the Eclipse IDE on SAP Cloud Platform or SAP Cloud Platform local runtime,
open the Servers view and double-click the server.
2. Choose the Loggers tab.
3. When the server is in [Started] state, all the available loggers are listed in the Loggers table. If the server is
in [Starting], [Stopping], or [Stopped] state, the table is empty.
○ You can use the filter field to find particular loggers you need. You can filter by both the Name and the
Level columns.
○ You can also sort the loggers table by both the Name and the Level column. The Level column sorts the
fields by effective level, not alphabetically.
Note
● You can only set log levels when an application is running. Loggers are not listed if the relevant application
code has not been executed.
● If you set a new log level for a parent logger, such as com.sap.core.js.admin.operations, the child
loggers, for example, com.sap.core.js.admin.operations.AdminOperations and
com.sap.core.js.admin.operations.internal.ErrorQueueHandler, automatically inherit the
same log level.
If you have changed the effective level of some or all loggers of an application running on a particular server,
you can reset the logger levels.
1. Make sure you can restart your server without causing data loss.
2. Click the Reset all loggers link. A dialog box appears, warning you that the resetting operation requires server
restart.
3. Choose Reset and Restart.
1. In the Servers view, go to the context menu of your server and choose Show In > Server Logs.
Note
If the server has never been started, no logs are available and the Server Logs view is empty.
2. When the server is started, the Server Logs view displays all available Default Trace and HTTP Access
logs of the applications that you are running on this server.
Note
You can also reach the Server Logs view if you expand the server and double-click on the Server Logs node.
3. If you have more than one running server, select the one whose logs you want to view from the Server
dropdown box.
Context
After you have deployed and started an application on SAP Cloud Platform, you can manage some of its logging
configurations using SAP Cloud Platform console client. For easier troubleshooting, you can use the commands
from the logging group to:
Persistence for loggers is enabled on both local and cloud level. Logger level settings are kept and restored on a
server restart, so you do not need to set them again.
Prerequisites
● You have created and deployed a Web application which uses logging functionality on SAP Cloud Platform.
For more information, see Logging in Applications [page 1168].
● You have downloaded and set up the SAP Cloud Platform console client. For more information, see Setting Up
the Console Client [page 52].
● You are assigned a Developer or Administrator role. For more information about the roles, see Account
Member Roles [page 30].
Procedure
For more information about argument values usage, see Console Client [page 102].
You can list all log files of your application sorted by date in a table format, starting with the latest modified.
You can also use the --overwrite command argument so that in case a file with the same name already exists, it
will be overwritten. If a file with the same name already exists and you do not explicitly include --overwrite, you
will be notified and asked if you want to overwrite it.
If the directory you have specified in the command line does not exist, it will be created.
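The listing and download behavior described above can be sketched as follows; the command and parameter names (list-logs, get-log, --file, --directory) are assumptions to verify with `neo help`, and the neo() stub only echoes what it would run.

```shell
#!/bin/sh
# Dry-run sketch of listing and downloading application log files. Command and
# parameter names (list-logs, get-log, --file, --directory) are ASSUMPTIONS;
# the neo() stub only echoes what it would run.
neo() { echo "neo $*"; }

ARGS='-h hana.ondemand.com -a myaccount -b myapp -u p1234'

neo list-logs $ARGS                                            # table of files, latest first
neo get-log $ARGS --file ljs_trace.log --directory ./logs --overwrite
```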
Listing Loggers
To list available loggers and their log levels, execute the following command:
Note
You can only list loggers when an application is running. Loggers are not listed if the relevant application code
has not been executed.
To set a log level for a single logger or for multiple loggers, execute the following command:
Note
● You can only set log levels when an application is running.
● If you set a new log level for a parent logger, such as com.sap.core.js.admin.operations, the child
loggers, for example, com.sap.core.js.admin.operations.AdminOperations and
com.sap.core.js.admin.operations.internal.ErrorQueueHandler, automatically inherit the
same log level.
To reset all logger levels of your application to their initial state, execute the following command:
Note
In order for the changes to take effect, restart your running application.
Example
Setting Log Levels
You can deploy a WAR file on SAP Cloud Platform and then change its loggers level to INFO.
1. Deploy the example.war file on SAP Cloud Platform, using the example_war.properties file.
2. Then execute the following command:
3. Request the example application in the browser and then download and open the ljs_trace.log file.
As a result, a new info message is logged indicating that the logger level has been changed successfully.
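The logger commands referred to in this section can be sketched as a dry run; list-loggers, set-log-level, and reset-log-levels are assumed command names, and --loggers/--level assumed parameters, so verify them with `neo help`.

```shell
#!/bin/sh
# Dry-run sketch of the logging command group. All command and parameter names
# here are ASSUMPTIONS to be checked against `neo help`; the stub only echoes.
neo() { echo "neo $*"; }

ARGS='-h hana.ondemand.com -a myaccount -b example -u p1234'

neo list-loggers $ARGS                               # works only while the app runs
neo set-log-level $ARGS --loggers '*' --level INFO   # as in the example above
neo reset-log-levels $ARGS                           # restart the app to take effect
```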
Related Information
You can view the logs and change the log settings of any applications deployed in your account. The cockpit
provides the following types of logs: default trace logs, HTTP access logs, and garbage collection logs.
Context
● If you are interested in the latest logs only, view the logs in the Most Recent Logging panel in the application
overview.
● To check the logs over the past few days, go to the Monitoring > Logging page for a more comprehensive
listing.
● To debug applications, use the log level configuration option to switch the relevant loggers to debug mode.
For that operation, choose the Configure Loggers button.
View Logs
Procedure
1. Log on to the cockpit and go to the Applications > Java Applications page of the account.
2. Choose the relevant application to go to the overview.
The latest logs are listed in the Most Recent Logging panel with the following information: type; process ID;
date and time of the last modification; and size.
3. To view a more extensive list of logs, choose Monitoring > Logging in the navigation area.
This page lists all log files by log file type with the following information: process ID; date and time of the last
modification; and size.
4. To display the contents of a particular log file, choose (Display). Note that you can also download the file
by choosing (Download).
Set the log levels of the relevant loggers used by your application. Loggers include platform loggers and, if
configured, the application logger, named as follows: <package name>.<class name>, for example,
com.sap.cloud.sample.persistence.PersistenceWithJDBCServlet.
Prerequisites
You are assigned a Developer or Administrator role. For more information about the roles, see Account Member
Roles [page 30].
Procedure
Note
You can only set log levels for the default trace.
In the logger configuration dialog, all loggers used since the application was started are listed with the log
levels that are currently applicable.
Note
You can only set log levels when an application is running. Loggers are only listed if the relevant application
code has been executed.
3. Optionally filter the list by logger name to select only the loggers in which you are interested.
4. To set the log level for a logger, locate the relevant logger and in that row select the new log level from the
dropdown list.
5. To change the log level for all loggers contained in the list, enter the new log level in the Set log level to all
loggers in the list to: field and choose Set.
The log settings take effect immediately. Since log settings are saved permanently, they do not revert to their
initial values when the application is restarted.
Note
If you set a new log level for a parent logger, such as com.sap.core.js.admin.operations, the child
loggers, for example, com.sap.core.js.admin.operations.AdminOperations and
com.sap.core.js.admin.operations.internal.ErrorQueueHandler, automatically inherit the
same log level. Override this mechanism, if necessary, by explicitly assigning a new log level to the child
loggers.
The cockpit provides dedicated log viewers for showing default trace and HTTP access logs.
The log viewer comprises a header section with filter and search options and a content area with a table that
enables you to filter and sort the data for some of the columns.
Header Section
You can filter log entries based on the values of certain log fields:
● Default trace
○ Levels dropdown
Filters the log entries contained in the table according to log level.
○ Search text field
Filters by Logger, Tenant, and Text columns
● HTTP access log
○ Method dropdown:
Filters the log entries based on the HTTP method (OPTIONS, GET, HEAD, POST, PUT, DELETE, TRACE).
○ Status dropdown
Filters the log entries based on the HTTP status code: 1xx informational, 2xx success, 3xx redirection, 4xx
client error, 5xx server error.
○ Search text field
Filters by Client, User, Method, Resource, Status, Size, and Duration columns.
The log entries are, by default, not filtered (all log entries are selected). For some of the columns in the table,
you can filter the data by selecting the column header and entering the filter value in the text field.
Log Traffic
This section provides a log traffic overview and a slider for adjusting the time range:
● Log traffic
The log volume over the selected period is represented graphically, allowing you to identify time intervals with
high levels of activity. You can specify the time range after you choose Show Time Filter.
● Time range slider
Table 365:
Field | Description
Time | Date and time when the log entry was written
Table 366:
Field | Description
Time | Date and time when the log entry was written
Related Information
The SAP JVM Profiler helps you analyze resource-related problems in your Java application, regardless of
whether the JVM is running locally or on the cloud.
Typically, you first profile the application locally. You can then also profile it on the cloud. The basic
procedure is the following:
Features
Table 367:
Allocation Trace | Shows the number, size, and type of the allocated objects and the methods allocating them.
Performance Hotspot Trace | Shows the most time-consuming methods and execution paths
Garbage Collection Trace | Shows all details about the processed garbage collections
Synchronization Trace | Shows the most contended locks and the threads waiting for or holding them
File I/O Trace | Shows the number of bytes transferred from or to files and the methods transferring them
Table 368:
Heap Dump | Shows a complete snapshot of the Java Heap
Class Statistic | Shows the classes, and the number and size of their objects currently residing in the Java Heap generations
Tasks
Related Information
Overview
After you have created a Web application and verified that it is functionally correct, you may want to inspect its
runtime behavior by profiling the application. This helps you to:
● You have developed and deployed a Web application using the Eclipse IDE. For more information, see
Deploying and Updating Applications [page 1043].
● You have installed SAP JVM as the runtime for the local server. For more information, see Setting Up SAP
JVM in Eclipse IDE [page 50].
Procedure
Note
Since profiling only works with SAP JVM, if another VM is used, going to Profile will result in opening a
dialog that suggests two options - editing the configuration or canceling the operation.
Result
You have successfully started a profiling run of a locally deployed Web application. You can now trigger your work
load, create snapshots of the profiling data and analyze the profiling results.
When you have finished with your profiling session, you can stop it either by disconnecting the profiling session
from the Profile view or by restarting the server.
Related Information
Refer to the SAP JVM Profiler documentation for details about the available analysis options. The documentation
is available as part of the SAP JVM Profiler plugin in the Eclipse IDE and can be found via Help > Help Contents >
SAP JVM Profiler.
After you have created a Web application and verified that it is functionally correct, you may want to inspect its
runtime behavior by profiling the application on the cloud. It is best if you first profile the Web application locally.
Prerequisites
● You have developed and deployed a Web application using the Eclipse IDE. For more information, see
Deploying and Updating Applications [page 1043]
● Optional: You have profiled your Web application locally. For more information, see Profiling Applications
Locally [page 1182]
Note
Currently, it is only possible to profile Web applications on the cloud that have exactly one application process
(node).
Procedure
○ From the server context menu, choose Profile (if the server is stopped) or Restart in Profile (if the server is
running).
○ Go to the application source code and from its context menu, choose Profile As > Profile on Server.
3. Open the Profiling perspective.
Results
You have successfully initiated a profiling run of a Web application on the cloud. Now, you can trigger your
workload, create snapshots of the profiling data and analyze the profiling results.
When you have finished with your profiling session, you can stop it either by disconnecting the profiling session
from the Profile view or by restarting the server.
Refer to the SAP JVM Profiler documentation for details about the available analysis options. The documentation
is available as part of the SAP JVM Profiler plugin in the Eclipse IDE and you can find it via Help > Help Contents >
SAP JVM Profiler.
Context
This page describes the format of the Default Trace file. You can view this file for your Web applications via the
cockpit and the Eclipse IDE.
For more information, see Investigating Performance Issues Using the SQL Trace [page 965] and Using Logs in
the Eclipse IDE [page 1170].
Parameter | Description
RECORD_SEPARATOR | ASCII symbol for separating the log records. In our case, it is "|" (ASCII code: 124)
ESC_CHARACTER | ASCII symbol for escape. In our case, it is "\" (ASCII code: 92)
Besides the main log information, the Default Trace logs information about the tenant users that have
accessed a relevant Web application. This information is provided in the new Tenant Alias column parameter,
which is automatically logged by the runtime. The Tenant Alias is:
● A human-readable string;
● For new accounts, it is shorter than the tenant ID (8-30 characters);
● Unique for the relevant SAP Cloud Platform landscape;
● Equal to the account name (for new accounts); might be equal to the tenant ID (for old accounts).
Example
In this example, the application has been accessed on behalf of two tenants - with identifiers
42e00744-bf57-40b1-b3b7-04d1ca585ee3 and 5c42eee4-d5ad-494e-9afb-2be7e55d0f9c.
FILE_TYPE:DAAA96DE-B0FB-4c6e-AF7B-A445F5BF9BE2
FILE_ID:1391169413918
ENCODING:[UTF8|NWCJS:ASCII]
RECORD_SEPARATOR:124
COLUMN_SEPARATOR:35
ESC_CHARACTER:92
COLUMNS:Time|TZone|Severity|Logger|ACH|User|Thread|Bundle name|JPSpace|JPAppliance|
JPComponent|Tenant Alias|Text|
SEVERITY_MAP:FINEST|Information|FINER|Information|FINE|Information|CONFIG|
Information|DEBUG|Information|PATH|Information|INFO|Information|WARNING|Warning|
ERROR|Error|SEVERE|Error|FATAL|Error
HEADER_END
2014 01 31 12:07:09#
+00#INFO#com.sap.demo.tenant.context.TenantContextServlet##anonymous#http-bio-8041-
exec-1##myaccount#myapplication#web#null#null#myaccount#The app was accessed on
behalf of tenant with ID: '42e00744-bf57-40b1-b3b7-04d1ca585ee3'|
2014 01 31 12:08:30#
+00#INFO#com.sap.demo.tenant.context.TenantContextServlet##anonymous#http-bio-8041-
exec-3##myaccount#myapplication#web#null#null#subscriberaccount#The app was
accessed on behalf of tenant with ID: '5c42eee4-d5ad-494e-9afb-2be7e55d0f9c'|
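Since COLUMN_SEPARATOR is '#' (ASCII 35), a record can be split with standard tools. A minimal sketch using the first record above; the field positions ($3 severity, $4 logger, $14 tenant alias) are counted from this sample record, not from the COLUMNS line:

```shell
#!/bin/sh
# Split one Default Trace record on the '#' column separator (ASCII 35) and
# print severity, logger, and tenant alias. Field numbers are counted from
# this sample record.
RECORD="2014 01 31 12:07:09#+00#INFO#com.sap.demo.tenant.context.TenantContextServlet##anonymous#http-bio-8041-exec-1##myaccount#myapplication#web#null#null#myaccount#The app was accessed on behalf of tenant with ID: '42e00744-bf57-40b1-b3b7-04d1ca585ee3'|"

echo "$RECORD" | awk -F'#' '{ print $3, $4, $14 }'
# prints: INFO com.sap.demo.tenant.context.TenantContextServlet myaccount
```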
Related Information
To monitor whether your deployed application is up and running, you can register an availability check and JMX
checks for it and configure email recipients who will receive notification if the application goes down. For the email
recipients configuration, you use the SAP Cloud Platform console client. You can also generate a report of metrics
that shows performance statistics of the CPU, DB, and response times.
Table 369:
Content
Introductory Video
Availability Checks
Table 373:
Cockpit | Configuring Availability Checks for Java Applications from the Cockpit [page 1194]
Console Client | Configuring Availability Checks for Java Applications from the Console Client [page 1195]
Table 374:
Monitoring Service
Related Information
There is one availability check per Java or SAP HANA XS application, and it is executed every minute. You can
configure an availability check for an application either from the cockpit or from the console client. If your
application is not available or its response time is too high, you will receive an e-mail notification. If you stop the
application yourself, you will not receive a notification: in this case, alerting is suppressed and enabled again
when you start the application. However, this does not apply to productive SAP HANA databases, because you
cannot stop them. In this case, the availability check starts running the moment you create it and does not stop
until you delete it. An e-mail alert is triggered if the application is not in state OK for two consecutive checks.
There are five types of notifications:
Table 377:
Notification Description
You can also set an availability check for Java applications on account level using a relative URL. Each application
started in your account then immediately receives an availability check that requests
application_url/configured_relative_url. This option is useful if you start multiple instances of
the same application (applications with the same relative health check URL) in your account, because it lets you
configure the check only once for all of them. You can configure availability checks on account level only from the
console client. If one check is configured on account level and another on application level, the one on application
level has higher priority. For example, if your account contains ten applications with the /health_check relative
URL and one multitenant application with the /myapp/health_check relative URL, you can configure one
availability check on account level for all applications and one availability check for the multitenant application to
override the account-level one.
Limitations
Availability monitoring in SAP Cloud Platform is done by running HTTP GET requests against a URL provided by
the application operator. The HTTP/HTTPS ping does not parse the response body; it relies only on the HTTP
response code.
Currently there are two limitations that need to be considered when designing your availability URL:
● The monitoring infrastructure does not support authorization for the checks. This means that you cannot
pass a user and password or a client certificate when configuring the availability check. Therefore, you must
design the availability URL without authentication or authorization. This ensures that your application can be
accessed in any case, that the correct response code is returned (for example 200, 404, 500, and so on), and
that the response time reflects only your application. If your application responds with 302, the ping follows
the redirect.
Caution
If you design the availability URL as a protected resource, the check will consider 401 and 403 response
codes as 200 OK. Note that these response codes may come from Identity Authentication and not from
your application, in case of an authenticated application.
Currently, the response codes accepted by the HTTP/HTTPS ping are 200, 302, 401, and 403. This covers all
the different types of URLs that can be monitored. Make sure that when something does not work as
expected, your application does not return one of these four codes, because then you will not get an alert.
● The monitoring infrastructure supports only one availability check per Java or SAP HANA XS application. This
means that if you have multiple web applications deployed together as one application in your account, or an
application with multiple end points you want to check, you need to design one common availability URL to be
able to monitor them all together. If one of the applications fails, you will get an alert and then have to check
which one exactly is failing by opening the availability URL.
Recommendation
We recommend that the response be simple, plain HTML that just states which web application is OK and
which is not. Whether the availability URL only reports that a web application is available, or also checks
that it is working as expected, depends on its implementation. If you plan to develop and operate multiple
applications in your account, it is a good idea to use identical availability URLs for the different applications
(for example /availability). This allows you to configure the availability check only once, on account level.
Caution
Note that an availability URL designed according to the above recommendations is unprotected and can be
accessed by everyone. We recommend not putting sensitive information about your application there (for
example, error stack traces).
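A quick way to see which response code the ping would observe is a plain unauthenticated HTTP request. The following is a sketch; the host name and path are placeholders:

```shell
# Show only the final HTTP response code, following redirects as the
# availability ping does. The host and path are placeholders.
curl -s -o /dev/null -w "%{http_code}\n" -L "https://myapp.example.com/availability"
```

If this prints 401 or 403, the URL is protected and the check would treat it as OK, masking real failures.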
Related Information
Configuring Availability Checks for Java Applications from the Cockpit [page 1194]
Configuring Availability Checks for Java Applications from the Console Client [page 1195]
Configuring Availability Checks for SAP HANA XS Applications from the Cockpit [page 1088]
Configuring Availability Checks for SAP HANA XS Applications from the Console Client [page 1088]
Availability Checks Commands
list-availability-check [page 214]
create-availability-check [page 128]
delete-availability-check [page 147]
JMX Checks [page 1196]
In the cockpit, you can configure availability checks for the applications deployed in your account. If you have
configured an availability check on account level, you can override it by creating one on application level. You can
manage the checks on account level from the console client only.
Prerequisites
Procedure
1. In the cockpit, choose Applications > Java Applications in the navigation area for the account and then
choose an application in the application list.
2. In the Availability panel, choose the Create Check button.
3. Select the URL you want to monitor from the dropdown list and fill in values for warning and critical thresholds
if you want them to be different from the default ones. Choose Save.
Your availability check is created. You can view your application's latest HTTP response code and response
time, as well as a status icon showing whether your application is up or down. If you want to receive alerts when
your application is down, you need to configure alert recipients from the console client. For more information,
see the Subscribe recipients to notification alerts step in Configuring Availability Checks for Java Applications
from the Console Client [page 1195].
Related Information
Prerequisites
Procedure
1. Open the command prompt and navigate to the folder containing neo.bat and neo.sh (<SDK
installation folder>/tools).
2. Create the availability check.
Execute:
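The command itself is not reproduced here. As a rough sketch (the host and the flag names for the URL and thresholds are assumptions; see create-availability-check [page 128] for the exact syntax):

```shell
# Sketch only: create an availability check for a Java application.
# The -U/-W/-C flag names and the host are assumptions; consult the
# create-availability-check command reference for the exact parameters.
neo create-availability-check -a myaccount -b myapp -u myuser \
  -h hana.ondemand.com -U /health_check -W 2000 -C 4000
```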
Note
The availability check will be visible in the SAP Cloud Platform cockpit in around 2 minutes.
3. Subscribe recipients to notification alerts.
Execute:
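As a rough sketch of the subscription command (flag names are assumptions; see set-alert-recipients [page 267] for the exact syntax):

```shell
# Sketch only: subscribe alert recipients for the application.
# The -e flag name and the host are assumptions.
neo set-alert-recipients -a myaccount -b myapp -u myuser \
  -h hana.ondemand.com -e alert-recipients@example.com
```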
Note
Setting an alert recipient for an application will trigger sending all alerts for this application to the
configured email(s). Once the recipients are subscribed, you do not need to subscribe them again after
every new check you configure. You can also set the recipients on account level if you skip the -b
parameter so that they receive alerts for all applications and for all the metrics you are monitoring.
Caution
If you stop the application by yourself, you will not receive a notification alert. Alerting is suppressed with
the manual stop of an application. Alerting is automatically enabled once again when you start the
application.
Related Information
Registering JMX checks allows alerting on any metric that is based on a JMX MBean attribute.
The checks support attributes of type java.lang.String, java.lang.Number, or CompositeDataSupport. In the case
of CompositeDataSupport, the objects mapped to the keys must again be java.lang.String or java.lang.Number;
otherwise, an error is thrown. For more information, see CompositeDataSupport.
The MBean can be registered either by the application runtime (for example, standard JVM MBeans like
java.lang:type=Memory) or by the application itself (application-specific MBeans). The MBeans registered by the
application runtime can be inspected using jconsole and connecting to the local server from the SDK.
You can set multiple JMX checks per application. They are executed every minute. If a JMX check fails due to an
error in the MBean execution (for example, a wrong ObjectName or Attribute, or an MBean that is not registered),
you receive a CRITICAL notification:
Table 378:
Notification  Description
CRITICAL      The JMX check fails due to an error in the MBean execution, or the attribute value is not within the defined CRITICAL threshold.
UNSTABLE      Your application does not behave consistently. For example, the attribute is OK upon check n, then is CRITICAL upon check n+1, then is again OK on check n+2, and so on.
You can also set JMX checks on account level. This means that each application started in your account
immediately receives all the JMX checks configured on account level, in addition to the checks configured on
application level. If a check configured on account level and a check configured on application level have the
same name, the one on application level has higher priority and only it is assigned to the started application.
Related Information
This topic shows how you can configure a JMX check for your application and subscribe recipients to receive alert
e-mail notifications when your application is down or responds slowly.
Prerequisites
1. Open the command prompt and navigate to the folder containing neo.bat/sh (<SDK installation
folder>/tools).
2. Create the JMX check.
Execute:
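As a rough sketch of the command, using the placeholders explained below (the flag names for the check name, MBean object and attribute, thresholds, and unit are assumptions; see the "JMX Checks Commands" document for the exact syntax):

```shell
# Sketch only: create a JMX check on an MBean attribute.
# All flag names besides -a/-b/-u/-h are assumptions.
neo create-jmx-check -a myaccount -b myapp -u myuser -h hana.ondemand.com \
  -n myCheckName \
  -O 'Catalina:type=ThreadPool,name="http-bio-8041"' \
  -A currentThreadsBusy \
  -W myWarningThreshold -C myCriticalThreshold --unit ms
```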
○ Replace "myapp", "myaccount" and "myuser" with the names of your application, account, and user,
respectively.
○ Replace "myMBeanObjectName" and "myMBeanAttributeName" with the object name and attribute name of
the MBean that you want to monitor. You can use an existing standard MBean from the runtime (for
example, a runtime MBean like Catalina:type=ThreadPool,name=\"http-bio-8041\" with an attribute like
currentThreadsBusy) or your own MBean, which should be part of your application and be registered by
it in the MBean server. For more information about the command, see the "JMX Checks Commands"
document in the Related Links section below.
○ Replace "myCheckName" with the name you want to see the check with in the cockpit.
○ Replace "myWarningThreshold" and "myCriticalThreshold" with suitable thresholds for the attribute
you want to check. If the actual value is above the threshold, is outside the threshold range (in case you
use a range), or is a different string (in case your metric has a string value), you receive a warning or
critical notification, respectively. For more details on how to set a threshold, see the "JMX Check
Commands" document.
○ Replace "unit" with the unit you want to be displayed next to the value of your MBean attribute, for
example MBs or ms.
○ Use the respective landscape host for your account type. For more information, see Related Links section
below.
3. Subscribe recipients to notification alerts.
Execute:
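As a rough sketch of the subscription command (flag names are assumptions; see set-alert-recipients [page 267] for the exact syntax):

```shell
# Sketch only: subscribe alert recipients for the application.
# The -e flag name and the host are assumptions.
neo set-alert-recipients -a myaccount -b myapp -u myuser \
  -h hana.ondemand.com -e alert-recipients@example.com
```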
○ Replace "myapp", "myaccount" and "myuser" with the names of your application, account, and user,
respectively.
○ Replace "alert-recipients@example.com" with the email addresses that you want to receive alerts.
Separate email addresses with commas. We recommend that you use distribution lists rather than
personal email addresses. Keep in mind that you remain responsible for handling personal email
addresses in accordance with the applicable data privacy regulations.
○ Use the respective landscape host for your account type.
Note
Setting an alert recipient for an application will trigger sending all alerts for this application to the
configured emails. Once the recipients are subscribed, you do not need to subscribe them again after every
new check you configure. You can also set the recipients on account level if you skip the -b parameter, so
that they receive alerts for all applications and for all the metrics you are monitoring.
The JMX console available in the cockpit enables you to monitor and manage the performance of the JVM and
your Java applications running on the platform.
Prerequisites
Context
The JMX console in the cockpit is based on the Java Management Extensions (JMX) specification. It exposes all
the MBeans registered in the platform runtime and allows you to execute operations on them and view their
attributes to monitor and manage the performance of the JVM and your applications. The MBeans visible in the
JMX console are standard JVM MBeans, SAP-specific MBeans and MBeans registered by your application
runtime. The usage of a few specific MBeans that can be dangerous in a cloud environment is restricted.
Procedure
You can do this by choosing the Java application under Applications > Java Applications or by navigating
from the Overview page.
The MBean attributes and operations are populated in the respective fields.
6. Depending on your needs, you can do the following:
○ Execute an MBean operation using (Execute) and check the results in the Operation Results section.
Related Information
SAP Cloud Platform allows you to achieve isolation between the different application life cycle stages
(development, testing, productive) by using multiple accounts.
Prerequisites
● You have developed an application. For more information, see Developing Java Applications [page 1034].
● You have a customer or partner account. For more information, see Account Types [page 14].
Context
Using multiple accounts ensures better stability, as in the productive account you only deploy tested versions of
the application. It also improves security for productive applications, because permissions are granted per
account.
For example, you can create three different accounts for one application and assign the necessary amount of
compute unit quota to them:
● dev - use for development purposes and for testing the increments in the cloud, you can grant permissions to
all application developers
● test - use for testing the developed application and its critical configurations to ensure quality delivery
(integration testing and testing in a productive-like environment prior to making it publicly available)
● prod - use for running the tested, productive version of the application
You can create multiple accounts and assign quota to them either using the console client or the cockpit.
Procedure
Next, you can deploy your application in the newly created account using the Eclipse IDE or the console client.
Then, you can test your application and make it ready for productive use.
You can transfer the application from one account to another by redeploying it in the respective account.
Procedure
1. Open the command prompt and navigate to the folder containing neo.bat/sh (<SDK installation
folder>/tools).
2. Create a new account.
Execute:
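As a rough sketch (the exact flag names of the account-creation command are assumptions; see the console client command reference):

```shell
# Sketch only: create a new "dev" account. Flag names are assumptions.
neo create-account -a mymasteraccount --display-name dev -u myuser \
  -h hana.ondemand.com
```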
Execute:
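The second command most likely assigns compute unit quota to the new account, as described above. As a sketch (the set-quota command name and amount syntax are assumptions):

```shell
# Sketch only: assign one "lite" compute unit to the new account.
# The --amount <type>:<value> syntax is an assumption.
neo set-quota -a dev --amount lite:1 -u myuser -h hana.ondemand.com
```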
Next, you can deploy your application in the newly created account by executing neo deploy -a
<account> -h <landscape host> -b <application name> -s <file location> -u <user
name or email>. Then, you can test your application and make it ready for productive use.
You can transfer the application from one account to another by redeploying it in the respective account.
After you have developed and deployed your SAP HANA application, you can then monitor it.
To monitor whether your deployed SAP HANA XS application is up and running, you can register an availability
check for it and configure email recipients who will receive a notification if the application goes down. For the
email recipients configuration, you use the SAP Cloud Platform console client. Furthermore, you can view the
metrics of a database system of any type.
Table 380: Content
Availability Checks
Cockpit: Configuring Availability Checks for SAP HANA XS Applications from the Cockpit [page 1088]
Console Client: Configuring Availability Checks for SAP HANA XS Applications from the Console Client [page 1088]
Monitoring Metrics
In the cockpit, you can configure availability checks for the SAP HANA XS applications running on your productive
SAP HANA database system.
Procedure
1. In the cockpit, choose Applications > HANA XS Applications in the navigation area of the account and
open the application list of the productive SAP HANA database system.
2. Select an application from the list and in the Application Details panel choose the Create Check button.
3. In the dialog that appears, select the URL you want to monitor from the dropdown list and fill in values for
warning and critical thresholds if you want them to be different from the default ones. Choose Save.
Your availability check is created. You can view your application's latest HTTP response code and response
time, as well as a status icon showing whether your application is up or down. If you want to receive alerts when
your application is down, you need to configure alert recipients from the console client. For more information,
see the Subscribe recipients to notification alerts step in Configuring Availability Checks for SAP HANA XS
Applications from the Console Client [page 1088].
Related Information
Prerequisites
Procedure
1. Open the command prompt and navigate to the folder containing neo.bat/sh (<SDK installation
folder>/tools).
2. Create the availability check.
Execute:
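As a rough sketch, using the placeholders explained below (the flag names for the URL and thresholds are assumptions; see create-availability-check [page 128] for the exact syntax):

```shell
# Sketch only: create an availability check for an SAP HANA XS application.
# -b combines the database and application names; the -U/-W/-C flag names
# are assumptions.
neo create-availability-check -a myaccount -b myhana:myhanaxsapp -u myuser \
  -h hana.ondemand.com -U /heartbeat.xsjs -W 2000 -C 4000
```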
○ Replace "myaccount", "myhana:myhanaxsapp" and "myuser" with the name of your account, the names of
your productive SAP HANA database and application, and your user, respectively.
○ The availability URL (/heartbeat.xsjs in this case) is not provided by default by the platform. Replace it
with a suitable URL that is already exposed by your SAP HANA XS application or create it. Keep in mind
the limitations for availability URLs. For more information, see Availability Checks [page 1204].
Note
In case you want to create an availability check for a protected SAP HANA XS application, you need to
create a sub-package, in which to create an .xsaccess file with the following content:
{
"exposed": true,
"authentication": null,
"authorization": null
}
3. Subscribe recipients to notification alerts.
Execute:
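As a rough sketch of the subscription command (flag names are assumptions; see set-alert-recipients [page 267] for the exact syntax):

```shell
# Sketch only: subscribe alert recipients for the productive SAP HANA database.
# The -e flag name and the host are assumptions.
neo set-alert-recipients -a myaccount -b myhana -u myuser \
  -h hana.ondemand.com -e alert-recipients@example.com
```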
○ Replace "myaccount", "myhana" and "myuser" with the names of your account, your productive SAP HANA
database, and your user, respectively.
○ Replace "alert-recipients@example.com" with the email addresses that you want to receive alerts.
Separate email addresses with commas. We recommend that you use distribution lists rather than
personal email addresses. Keep in mind that you remain responsible for handling personal email
addresses in accordance with the applicable data privacy regulations.
○ Use the respective landscape host for your account type.
Note
Setting an alert recipient for an application will trigger sending all alerts for this application to the
configured email(s). Once the recipients are subscribed, you do not need to subscribe them again after
every new check you configure. You can also set the recipients on account level if you skip the -b
parameter so that they receive alerts for all applications and for all the metrics you are monitoring.
Related Information
Configuring Availability Checks for SAP HANA XS Applications from the Cockpit [page 1088]
Landscape Hosts [page 41]
Availability Checks Commands
list-availability-check [page 214]
create-availability-check [page 128]
delete-availability-check [page 147]
Alert Recipients Commands
list-alert-recipients [page 217]
set-alert-recipients [page 267]
clear-alert-recipients [page 122]
In the cockpit, you can view the current metrics of a selected database system to get information about its health
state. You can also view the metrics history of a productive database to examine the performance trends of your
database over different intervals of time or investigate the reasons that have led to problems with it. You can view
the metrics for all types of databases.
1. In the cockpit, navigate to the Database Systems page either by choosing Persistence from the navigation
area or from the Overview page.
All database systems available in the selected account are listed with their details, including the database
version and state, and the number of associated databases.
2. Select the entry for the relevant database system in the list.
3. Choose Monitoring from the navigation area to get detailed information about the current state and the
history of metrics for a selected productive database system.
The Current Metrics panel shows the current state of the metrics for the selected database system. When a
threshold is reached, the metric health status changes to warning or critical.
The Metrics History panel shows the metrics history of your database. You can view the graphics of the
different metrics and zoom in by clicking and dragging horizontally or vertically to get further details. If you
zoom in on a graphic horizontally, all other graphics zoom in to the same level of detail too. You can press
Shift and then drag to scroll all graphics simultaneously to the left or right. You can zoom out to the initial
state with a double-click.
You can select different time intervals for viewing the metrics. Depending on the selected interval, data is
aggregated as follows:
○ last 12 or 24 hours - data is collected each minute
○ last 7 days - data is aggregated from the average values for 10 minutes
○ last 30 days - data is aggregated from the average values for an hour
You can also select a custom time interval when you are viewing the history of metrics. Note that if you select
an interval in which the database has not been running, the graphics will not contain any data.
Related Information
For an overview of the current status of the individual HTML5 applications in your account, use the SAP Cloud
Platform cockpit.
It provides key information in a summarized form and allows you to initiate actions, such as starting or stopping.
Table 386: Content
Managing Destinations
Logging
Related Information
You can export HTML5 applications either with their active version or with an inactive version.
Procedure
1. Choose Applications > HTML5 Applications in the navigation area, and then the link to the application you
want to export.
Procedure
1. Choose Applications > HTML5 Applications in the navigation area, and then the link to the application you
want to export.
2. Choose Versioning in the navigation area, and then choose Versions under History.
3. In the table row of the version you want to export, choose the export icon ( ).
4. Save the zip file.
You can import HTML5 applications either by creating a new application or by creating a new version for an
existing application.
Note
When you import an application or a version, the version is not imported into the master branch of the repository.
Therefore, the version is not visible in the history of the master branch. You have to switch to Versions in the
navigation area.
Procedure
1. To upload a zip file, choose Applications > HTML5 Applications in the navigation area, and then Import
from File ( ).
2. In the Import from File dialog, browse to the zip file you want to upload.
3. Enter an application name and a version name.
4. Choose Import.
The new application you created by importing the zip file is displayed in the HTML5 Applications section.
5. To activate this version, see Activating a Version [page 1117].
Procedure
1. Choose Applications > HTML5 Applications in the navigation area, and then the application for which you
want to create a new version.
2. Choose Versioning in the navigation area.
3. To upload a zip file, choose Versions under History and then Import from File ( ).
4. In the Import from File dialog, browse to the zip file you want to upload.
5. Enter a version name.
6. Choose Import.
The new version you created by importing the zip file is displayed in the History table.
7. To activate this version, select the Activate this application version icon ( ) in the table row for this version.
8. Confirm that you want to activate the application.
On the Application Details panel, you can add or change a display name and a description for the selected HTML5
application.
Context
If a display name is maintained, this display name is also shown in the list of HTML5 applications and in the list of
HTML5 subscriptions instead of the application name.
1. Log on with a user (who is an account member) to the SAP Cloud Platform cockpit.
2. Choose Applications > HTML5 Applications in the navigation area, and select the application for which to
add or change a display name and description.
3. Under Application Details of the Overview section, choose Edit.
4. Enter a display name and a description for the HTML5 application.
Table 391:
Field Comment
Display Name Human-readable name that you can specify for your HTML5 application.
Description Short descriptive text about the HTML5 application, typically stating what it
does.
An HTML5 application can have multiple versions, but only one of these can be active. This active version is then
available to end-users of the application.
However, developers can access all versions of an application using unique URLs for testing purposes.
The Versioning view in the cockpit displays the list of available versions of an HTML5 application. Each version is
marked either as active or inactive. You can activate an inactive version using the activation button.
For every version, the required destinations are displayed in a details table. To assign a destination from your
account global destinations to a required destination, choose Edit in the details table. By default, the destination
with the same name as the name you defined for the route in the application descriptor is assigned. If this
destination does not exist, you can either create the destination or assign another one.
When you activate a version, the destinations that are currently assigned to this version are copied to the active
application version.
Related Information
If an HTML5 application requires connectivity to one or more back-end systems, destinations must be created or
assigned.
Prerequisites
Context
For the active application version the referenced destinations are displayed in the HTML5 Application section of
the cockpit. For a non-active application version the referenced destinations are displayed in the details table in
the Versioning section. HTML5 applications use HTTP destinations, which can be defined on the account level of
your account.
By default, the destination with the same name as the name you defined for the route in the application descriptor
is assigned. If this destination does not exist, you can create the destination with the same name as described in
Configuring Destinations from the Cockpit [page 344]. Then you can assign this newly created destination.
Alternatively, you can assign another destination that already exists in your account. To assign a destination,
follow the steps below.
Procedure
1. Log on with a user (who is an account member) to the SAP Cloud Platform cockpit.
2. Choose Applications > HTML5 Applications in the navigation area, and choose the application for which
you want to assign a different destination (than the default one) from your account global destinations.
3. Choose Edit in the Required Destinations table.
4. In the Mapped Account Destinations column, choose an existing destination from the dropdown list.
End users can only access an application if the application is started. As long as an application is stopped, its end
user URL does not work.
Context
The first start of the application usually occurs when you activate a version of the application. For more
information, see Activating a Version.
Procedure
1. Log on with a user (who is an account member) to the SAP Cloud Platform cockpit.
The end user URL for the application is displayed under Active Version.
Related Information
Resources of an HTML5 application can be protected by permissions. The application developer defines the
permissions in the application descriptor file.
To grant a user the permission to access a protected resource, you can either assign a custom role or one of the
predefined virtual roles to such a permission. The following predefined virtual roles are available:
AccountDeveloper and AccountAdministrator require SAP IdP to be configured as identity provider. If you
want to use the AccountDeveloper or AccountAdministrator role together with a custom IDP, create those
roles as custom roles and assign the corresponding user manually.
The role assignments are only effective for the active application version. To protect non-active application
versions, the default permission NonActiveApplicationPermission is defined by the system for every application.
As long as no other role is assigned to a permission, only account members with developer or administrator
permission have access to the protected resource. This is also true for the default permission
NonActiveApplicationPermission.
You can create roles in the cockpit using either of these panels:
Note
An HTML5 application’s own permissions also apply when the application is reached from another HTML5
application (see Accessing Application Resources [page 1127]). Previously, only the permissions of the HTML5
application that was accessed first were considered. If you need time to assign the proper roles, you can
temporarily switch back to the previous behavior by unchecking Always Apply Permissions in the cockpit.
Related Information
You can manage roles and permissions for the HTML5 applications or subscriptions using the HTML5 Applications
panel.
You create roles that are assigned to HTML5 applications or HTML5 applications subscriptions. The roles are
available for all HTML5 applications and all subscriptions to HTML5 applications.
Procedure
Prerequisites
● If you want to use groups, you have configured the groups for your identity provider as described in ID
Federation with the Corporate Identity Provider [page 1406].
Context
Since all HTML5 applications and all HTML5 application subscriptions use the same roles, changing a role affects
all applications that use this role.
Procedure
Once you have created the required roles, you can assign the roles to the permissions of your HTML5 application
or of your HTML5 application subscription to an HTML5 application.
Procedure
You can manage roles and permissions for the HTML5 applications or subscriptions using the Subscriptions
panel.
You create roles that are assigned to HTML5 applications or HTML5 applications subscriptions. The roles are
available for all HTML5 applications and all subscriptions to HTML5 applications.
Procedure
Prerequisites
● If you want to use groups, you have configured the groups for your identity provider as described in ID
Federation with the Corporate Identity Provider [page 1406].
Context
Since all HTML5 applications and all HTML5 application subscriptions use the same roles, changing a role affects
all applications that use this role.
Procedure
Once you have created the required roles, you can assign the roles to the permissions of your HTML5 application
or of your HTML5 application subscription to an HTML5 application.
Procedure
You can view logs for any HTML5 application running in your account or for subscriptions to these applications. Currently,
only the default trace log file is written. The file contains error messages caused by missing back-end connectivity,
for example, a missing destination, or logon errors caused by your account configuration.
Context
One log file is written per day. The logs are kept for 7 days before they are deleted. If the application is deleted, the logs
are deleted as well. A log is a virtual file consisting of the aggregated logs of all processes. Currently, the following
data is logged:
● The time stamp (date, time in milliseconds, time zone) of when the error occurred
● A unique request ID
● The log level (currently only ERROR is available)
● The actual error message text
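As a rough illustration of these four fields, the following sketch splits a hypothetical trace line with awk. The '#' separator and the field order are assumptions made for this example only, not the documented format of the trace file.

```shell
# Hypothetical trace-log line; the '#' separator and field order are
# illustrative assumptions, not the documented format of the trace file.
line='2017-05-04 10:15:30.123 CET#f3a9c1#ERROR#Destination "backend" not found'

# Split the line into the four logged fields described above
printf '%s\n' "$line" | awk -F'#' \
    '{ printf "time=%s request=%s level=%s message=%s\n", $1, $2, $3, $4 }'
```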
Procedure
1. Log on to the SAP Cloud Platform cockpit with a user who is an account member.
Related Information
Depending on your needs, you can change the default application URL by configuring application domains different
from the default one: custom domains or platform domains.
You can configure application domains using SAP Cloud Platform console client.
Note that you can use either platform domains or custom domains.
Custom Domains
Use custom domains if you want to make your applications accessible on your own domain different from
hana.ondemand.com - for example, www.myshop.com. When a custom domain is used, the domain name as well
as the server certificate for this domain are owned by the customer.
Platform Domains
Caution
You can configure different platform domains only for Java applications.
By default, applications accessible on hana.ondemand.com are available on the Internet. Platform domains enable
you to use additional features by using a platform URL different from the default one.
For example, you can use svc.hana.ondemand.com to hide the application from the Internet and access it only
from other applications running on SAP Cloud Platform, or cert.hana.ondemand.com if you want an application to use client certificate authentication.
Related Information
SAP Cloud Platform allows account owners to make their SAP Cloud Platform applications accessible via a
custom domain that is different from the default one (hana.ondemand.com) - for example www.myshop.com.
Prerequisites
To use a custom domain for your application, you need to fulfil a number of preliminary steps.
Scenario
After fulfilling the prerequisites, you can configure the custom domain on your own using SAP Cloud Platform
console client commands.
First, set up secure SSL communication to ensure that your domain is trusted and all application data is
protected. Then, route the traffic to your application:
1. Create an SSL Host [page 1225] - the host holds the mapping between your chosen custom domain and the
application on SAP Cloud Platform as well as the SSL configuration for secure communication through this
custom domain.
2. Upload a Certificate [page 1226] - it will be used as a server certificate on the SSL host.
3. Bind the Certificate to the SSL Host [page 1228].
4. Add the Custom Domain [page 1228] - this maps the custom domain to the application URL.
5. Configure DNS [page 1229]- you can create a CNAME mapping.
6. Configure Single Sign-On [page 1230] - if you have a custom trust configuration in your account, you need to
enable single logout.
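The console client part of the sequence above (steps 1 to 4) can be sketched as follows. This is a dry run, not a definitive recipe: the neo() stub below only echoes each command so the flow can be reviewed without the SDK installed, and all account, user, certificate, domain, and SSL host names are placeholders. In practice, run the real neo.sh/neo.bat from <SDK installation folder>/tools with your own values.

```shell
# Dry-run sketch of steps 1-4 (steps 5 and 6 happen outside the console client).
# The neo() stub only echoes the command line; replace it with the real CLI.
neo() { echo "neo $*"; }

ACCOUNT=myaccount; USER=p0123456; HOST=hana.ondemand.com   # placeholders

# 1. Create the SSL host that will serve the custom domain
neo create-ssl-host --name mysslhost --account "$ACCOUNT" --user "$USER" --host "$HOST"

# 2. Upload the CA-signed server certificate
neo upload-domain-certificate --name mycert --location signed.pem \
    --account "$ACCOUNT" --user "$USER" --host "$HOST"

# 3. Bind the certificate to the SSL host
neo bind-domain-certificate --certificate mycert --ssl-host mysslhost \
    --account "$ACCOUNT" --user "$USER" --host "$HOST"

# 4. Map the custom domain to the URL of the (started) application
neo add-custom-domain --custom-domain www.myshop.com --ssl-host mysslhost \
    --account "$ACCOUNT" --user "$USER" --host "$HOST"
```

The exact parameter names accepted by each command may differ between SDK versions; check each command's `--help` output before running it against your account.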
The configuration of custom domains has different setups related to the subscriptions of your account. For more
information about custom domains for applications that are part of a subscription, see Custom Domains for
Multitenant Applications [page 1232].
Before configuring SAP Cloud Platform custom domains, you need to complete some preliminary steps and fulfil a
number of prerequisites.
You need to have a quota for domains configured for your account. One domain corresponds to one SSL host that
you can use. For more information, see Purchasing a Customer Account [page 18].
The following two steps involve external service providers - domain name registrar and certificate authority.
Note
The domain name and the server certificate for this domain are issued by external authorities and owned by the
customer.
You need to come up with a list of custom domains and applications that you want to be served through them. For
example, you may decide to have three custom domains: test.myshop.com, preview.myshop.com,
www.myshop.com - for test, preview and productive versions of your SAP Cloud Platform application.
The domain names are owned by the customer, not by SAP Cloud Platform. Therefore, you will need to buy the
custom domain names that you have chosen from a registrar selling domain names.
To ensure your domain is trusted and all your application data is protected, you have to get an appropriate SSL
certificate from a Certificate Authority (CA). To sign and issue this certificate, you need a certificate signing
request (CSR), which you will create in the following procedure. Note that we do not support uploading of existing
certificates that are not generated using our generate-csr command.
Before buying a certificate from a provider, you need to decide on the number and type of domains you want to be
protected by this certificate. One certificate can be valid for a number of domains.
● Multiple domain - secures multiple domain names with a single certificate. This type allows you to use any
number of different domain names or common names. For example, one certificate can support:
www.myshop.com, *.test.myshop.com, *.myshop.eu, www.myshop.de.
Note
Choose domain names that are as specific as possible. Also, host all domains in the certificate in one single place
(SAP Cloud Platform).
Caution
The CSR is valid only for the landscape host on which it was generated and cannot be moved or downloaded.
The host represents a regional data center: hana.ondemand.com for Europe; us1.hana.ondemand.com for the
United States; ap1.hana.ondemand.com for Asia-Pacific.
The certificate has to be in Privacy-Enhanced Mail (PEM) format (128 or 256 bits) with a private key
(2048-4096 bits).
Related Information
To make sure your domain is trusted and all application data is protected, you need to first set up secure SSL
communication. The next step will then be to make your application accessible via the custom domain and route
traffic to it.
Context
You have to create an SSL host that will serve your custom domain. This host holds the mapping between your
chosen custom domain and the application on SAP Cloud Platform as well as the SSL configuration for secure
communication through this custom domain.
Procedure
1. Open the command prompt and navigate to the folder containing neo.bat/neo.sh (<SDK installation
folder>/tools).
2. Create an SSL host. In the console client command line, execute neo create-ssl-host. For example:
Note
In the command output, you get the SSL host. For example, "A new SSL host [mysslhost] was
created and is now accessible on 123456.ssl.ondemand.com". Write this SSL host down as
you will need it in the following steps.
You need an SSL certificate to allow secure communication with your application. Once installed, the SSL
certificate identifies the server and authenticates the owner of the site.
Context
The certificate generation process starts with certificate signing request (CSR) generation. A CSR is an encoded
file containing your public key and specific information that identifies your company and domain name.
The next step is to use the CSR to get a server certificate signed by a certificate authority (CA) chosen by you.
Before buying, carefully consider the appropriate type of SSL certificate you need. For more information, see
Prerequisites [page 1223].
Procedure
1. Generate a CSR.
The --name parameter is the unique identifier of the certificate within your account on SAP Cloud Platform
and will be used later. It can contain alphanumeric symbols, '.', '-' and '_'.
Note
For security reasons, you can only upload certificates that are generated using the generate-csr
command.
Note
When sending the CSR to be signed by a CA, keep the following requirements in mind:
The certificate must be in Privacy-Enhanced Mail (PEM) format (128 or 256 bits) with a private key
(2048-4096 bits).
3. Upload the SSL certificate you received from the CA to SAP Cloud Platform:
Note
Note that some CAs issue chained root certificates that contain an intermediate certificate. In such cases,
put all certificates in the file for upload starting with the signed SSL certificate.
Caution
Once uploaded, the domain certificate (including the private key) is securely stored on SAP Cloud Platform
and cannot be downloaded for security reasons.
Note that when the certificate expires, you will receive a notification from your CA. You need to take care of
the certificate update. For more information, see Updating an Expired Certificate [page 1233].
Tip
The number of certificates you can have is limited and is calculated based on the number of custom
domains you have multiplied by 3. For example, if you have one custom domain, you can have 3
certificates.
To free up some space for new certificates, execute list-domain-certificates to get the names of
the created ones and then delete-domain-certificate for each certificate you do not need.
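Before uploading, you can sanity-check the PEM file locally with openssl. In the sketch below, a throwaway self-signed certificate stands in for the one returned by your CA, and the file names and domain are illustrative:

```shell
# Stand-in for the CA-signed certificate: a throwaway self-signed cert
openssl req -x509 -newkey rsa:2048 -nodes -days 1 -subj "/CN=www.myshop.com" \
    -keyout key.pem -out signed.pem 2>/dev/null

# A chained file (intermediate/root certificates included) shows more than one block
grep -c 'BEGIN CERTIFICATE' signed.pem

# The subject CN must match the custom domain
openssl x509 -in signed.pem -noout -subject

# The private key must be 2048-4096 bits
openssl x509 -in signed.pem -noout -text | grep 'Public-Key'
```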
You need to bind the uploaded certificate to the created SSL host so that it can be used as SSL certificate for
requests to this SSL host.
Procedure
To make your application on the SAP Cloud Platform accessible via the custom domain, you need to map the
custom domain to the application URL.
Context
Note
After you configure an application to be accessed over a custom domain, its default URL hana.ondemand.com
will no longer be accessible. It will only remain accessible for applications that are part of a subscription -
https://<application_name><provider_account>-<consumer_account>.<domain>.
Procedure
1. In the console client command line, execute neo add-custom-domain with the appropriate parameters.
Note that you can only do this for a started application.
To route the traffic for your custom domain to your application on SAP Cloud Platform, you also need to configure
it in the Domain Name System (DNS) that you use.
Context
You need to make a CNAME mapping from your custom domain to the created SSL host for each custom domain
you want to use. This mapping is specific for the domain name provider you are using. Usually, you can modify
CNAME records using the administration tools available from your domain name registrar.
Procedure
1. Sign in to the domain name registrar's administrative tool and find the place where you can update the
domain DNS records.
2. Locate and update the CNAME records for your domain to point to the DNS entry you received from us
(*.ssl.ondemand.com) - the one that you got as a result when you created the SSL host using the create-
ssl-host command. For example, 123456.ssl.ondemand.com. You can check the SSL host by executing the
list-ssl-hosts command.
For example, if you have two DNS records: myhost.com and www.myhost.com, you need to configure them
both to point to the SSL host 123456.ssl.ondemand.com.
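In DNS zone-file notation, the two CNAME records from the example would look roughly like this (host names are illustrative):

```
www.myhost.com.    IN  CNAME  123456.ssl.ondemand.com.
myhost.com.        IN  CNAME  123456.ssl.ondemand.com.
```

Note that many DNS providers do not allow a CNAME record at the zone apex (myhost.com itself); in that case, use the ALIAS/ANAME-style record your provider offers instead.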
After you configure the custom domain, make sure that the setup is correct and your application is accessible on
the new domain.
Procedure
1. Log on to the cockpit, select an account and go to your Application Dashboard. In Application URLs, check if
the new custom URL has replaced the default one.
2. Open the new application URL in a browser. Make sure that your application responds as expected.
3. Check that there are no security warnings in the browser. View the certificate in the browser. Check the
Subject and Subject Alternative Name fields - the domain names there must match the custom domain.
4. Perform a small load test - request the application from different browser sessions making at least 15
different requests.
Results
After this procedure, your application will be accessible on the custom domain, and you will be able to log on
(single sign-on) successfully. Single logout, however, may not work yet. If you have a custom trust configuration in
your account, you will need to perform an additional configuration to enable single logout.
Next Steps
Configure single logout. For more information, see Configure Single Logout [page 1230].
To enable single logout, you need to configure the Custom Domain URLs, and, optionally, the Central Redirect URL
for the SAML single sign-on flow. Even if single sign-on works successfully with your application at the custom
domain, you will need to follow the current procedure to enable single logout.
Prerequisites
● You are logged on with a user with administrator role. See Account Member Roles.
● You are aware of the productive landscape that hosts your account. See Landscape Hosts.
Context
Central Redirect URL is the central node that facilitates assertion consumer service (ACS) and single logout (SLO)
service. By default, this node is provided by SAP Cloud Platform and has the URL authn.<productive landscape
host> (for example, authn.hana.ondemand.com). If you want to use your application’s root URL as
the ACS, instead of the central node, you will need to maintain the Central Redirect URL.
For Java applications, you can follow the procedure described in the current document. For HANA XS
applications, create an incident in component BC-IAM-IDS.
Procedure
1. In your Web browser, open the SAP Cloud Platform cockpit and choose Security Trust in the navigation
area.
2. Choose the Custom Application Domains Settings subtab.
3. Choose Edit. The custom domains properties become editable.
4. Select the Use Custom Application Domains option.
5. In Central Redirect URL, enter the URL of your application process that will serve as the central node.
Tip
The Central Redirect URL value has to be the same as the ACS endpoint value in the metadata of the
service provider.
Note
Make sure you do not stop the application VM specified as the Central Redirect URL. Otherwise, SAML
authentication will fail for all applications in your account.
6. The values in Custom Domain URLs are used for SLO. Enter the required values (all custom domain URLs) in
Custom Domain URLs.
7. Save your changes. The system generates the respective SLO endpoints. Test them in your Web browser and
make sure they are accessible from there.
A subscription means that there is a contract between an application provider and a tenant that authorizes the
tenant to use the provider's application. As the consumer account, you do not own, deploy, or operate these
applications yourself. Subscriptions allow you to configure certain features of the applications and launch them
through consumer-specific URLs.
When you configure custom domains for such applications that are part of a subscription, the following scenarios
are possible:
● The custom domain is owned by the application provider who uses an SSL host from their account quota. The
provider also does the configuration and assignment of the custom domain. The provider can assign a
subdomain of its own custom domain to a particular subscription URL. To do this, the provider needs to have
rights in both the provider and consumer account.
● The customer (consumer) uses an SSL host from the consumer account quota. In this case, the customer
(consumer) owns the custom domain and the SSL host and is therefore able to do the necessary configuration
on their own.
Related Information
When the SSL certificate you configured for the custom domain expires, you have to perform the same procedure
with the new certificate and remove the old one.
Context
If you configured the certificate using the console client commands, follow these steps:
Procedure
1. Generate a new CSR by executing the neo generate-csr command with the appropriate parameters:
2. In the command line output, you get the generated new CSR. To sign your certificate, copy and send the text
to your trusted CA.
3. When you receive a signed SSL certificate from the CA, upload it to SAP Cloud Platform by executing:
5. Assign the new certificate to your existing SSL host by executing neo bind-domain-certificate with the
appropriate parameters.
6. If you want to list your custom domain certificates, execute: neo list-domain-certificates.
Related Information
If you do not want to use the custom domain any longer, you can remove it using the console client commands. As
a result, your application will be accessible only on its default hana.ondemand.com domain.
Procedure
Related Information
Using platform domains, you can configure the application network availability or authentication policy. You can
achieve that by configuring the appropriate platform domain which will change the URL on which your application
will be accessible.
Prerequisites
You have installed and configured SAP Cloud Platform console client. For more information, see Setting Up the
Console Client.
Context
● hana.ondemand.com - any application is accessible on this default domain after being deployed on SAP Cloud
Platform
● cert.hana.ondemand.com - enables client certificate authentication
● svc.hana.ondemand.com - provides access within the same landscape; for internal communication and not
open on the Internet or other networks
You can configure the platform domains using the application-domains group of console client commands:
Procedure
1. Open the command prompt and navigate to the folder containing neo.bat/neo.sh (<SDK installation
folder>/tools).
2. Configure the platform domain you have chosen by executing the add-platform-domain command.
As a result, the specified application will be accessible on cert.hana.ondemand.com and on the default
hana.ondemand.com domain.
Procedure
1. To make sure the new platform domain is configured, execute the list-application-domains command:
2. Check if the returned list of domains contains the platform domain you set.
Procedure
1. When you no longer want the application to be accessible on the configured platform domain, remove it by
executing the remove-platform-domain command:
2. Repeat the step for each platform domain you want to remove.
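The add/verify/remove cycle above can be sketched in the same dry-run style. The neo() stub below only echoes each command so the sequence can be reviewed without the SDK; account, user, and application names are placeholders, and the exact parameter names should be checked against each command's `--help` output.

```shell
# Dry-run of the platform-domain commands; neo() just echoes the invocation.
neo() { echo "neo $*"; }

ARGS='--account myaccount --user p0123456 --host hana.ondemand.com'

# Make the application additionally available on cert.hana.ondemand.com
neo add-platform-domain --application myapp --platform-domain cert.hana.ondemand.com $ARGS

# Verify that the new platform domain is listed
neo list-application-domains --application myapp $ARGS

# Remove the platform domain when it is no longer needed
neo remove-platform-domain --application myapp --platform-domain cert.hana.ondemand.com $ARGS
```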
Related Information
You can enable transport of SAP Cloud Platform applications using the enhanced Change and Transport System
(CTS+) tool.
Prerequisites
To be able to transport an application, you have to package it in a Multi-Target Application (MTA) archive as
described in Multi-Target Applications [page 1239].
Context
Use CTS+ to transport and promote your applications, for example, from development to a test or production
environment. You can also deploy one or several MTA archives to your account in one go.
Procedure
Trigger the import of an SAP Cloud Platform application as described in How To... Configure SAP Cloud Platform
for CTS.
Caution
SAP Cloud Platform applications cannot be exported to CTS+. You need to manually add them to a transport
request.
1.5.2.5.1 Troubleshooting
While transporting SAP Cloud Platform applications using the CTS+ tool, or while deploying solutions using the
cockpit, you might encounter one of the following issues. This section provides troubleshooting information about
correcting them.
Error message:
"Technical error [Invalid MTA archive [<mtar archive>]. MTA deployment descriptor (META-INF/mtad.yaml) could not be parsed. Check the troubleshooting guide for guidelines on how to resolve descriptor errors. Technical details: <…>]"
This error can occur if the MTA archive is not consistent. There are several possible reasons:
● The MTA deployment descriptor META-INF/mtad.yaml cannot be parsed because it is syntactically incorrect according to the YAML specification. For more information, see the publicly available YAML specification. Make sure that the descriptor is compliant with the specification. Validate the descriptor syntax, for example, by using an online YAML parser.
Note
Ensure that you do not submit any confidential information to the online YAML parser.
● The MTA deployment descriptor might contain data that is not compatible with SAP Cloud Platform. Make sure the MTA deployment descriptor complies with the specification at Multi-Target Applications [page 1239].
● The archive might not be suitable for deployment to SAP Cloud Platform. This might happen if, for example, you attempt to deploy an archive built for XSA to SAP Cloud Platform. The technical details might contain information similar to the following:
"Unsupported module type "<module type>" for platform type "HCP-CLASSIC""

Error message:
"Technical error [Invalid MTA archive [<MTA name>]: Missing MTA manifest entry for module [<module name>]]"
The archive is inconsistent, for example, when a module referenced in the META-INF/mtad.yaml is not present in the MTA archive or is not referenced correctly. Make sure that the archive is compliant with the MTA specification available at The Multi-Target Application Model.

Error message:
"Technical error [MTA extension descriptor(s) could not be parsed. Check the troubleshooting guide for guidelines on how to resolve descriptor errors. Technical details: <…>]"
This error can occur if one or more extension descriptors are not consistent. There are several possible reasons:
● One or more extension descriptors might not be syntactically compliant with the YAML specification. Validate the descriptor syntax, for example, by using an online YAML parser.
Note
Ensure that you do not submit any confidential information to the online YAML parser.
● One or more extension descriptors might contain data that is not compatible with SAP Cloud Platform. Make sure all extension descriptors comply with the specification at Multi-Target Applications [page 1239].

Error message:
"Technical error [MTA deployment descriptor (META-INF/mtad.yaml) from archive [<mtar archive>] and some of extension descriptors [<extension descriptor>] could not be processed. Check the troubleshooting guide for guidelines on how to resolve descriptor errors. Technical details: <…>]"
This error can occur if the MTA archive, or one or more extension descriptors, are not consistent. There are several possible reasons:
● The MTA deployment descriptor or an extension descriptor might contain data that is not compatible with SAP Cloud Platform. Make sure the MTA deployment descriptor and all extension descriptors comply with the specification at Multi-Target Applications [page 1239].
● The archive might not be suitable for deployment to SAP Cloud Platform. This might happen if, for example, you attempt to deploy an archive built for XSA to SAP Cloud Platform. The technical details might contain information similar to the following:
"Unsupported module type "<module type>" for platform type "HCP-CLASSIC""
Complex business applications are composed of multiple parts developed with focus on micro-service design
principles, API-management, usage of the OData protocol, increased usage of application modules developed
with different languages, IDEs, and build methodologies. Thus, development, deployment, and configuration of
separate elements introduce a variety of lifecycle and orchestration challenges. To address these challenges, SAP
introduces the multi-target application (MTA) concept. It addresses the complexity of continuous deployment by
employing a formal target-independent application model.
An MTA comprises multiple modules created with different technologies and deployed to different target runtimes,
but sharing a common lifecycle. Initially, developers describe the modules of the application, their
interdependencies with other modules, MTAs, and services, and the required and exposed interfaces. Afterward, an
MTA-aware application lifecycle management framework validates, orchestrates, and automates the deployment
of the MTA.
You can use MTAs, for example, to transport applications or to deploy solutions on the SAP Cloud Platform using
the cockpit.
Using the YAML data serialization language, an MTA is described by a deployment descriptor that contains the
following:
The MTA deployment descriptor (mtad.yaml) and module binaries are packaged in a single archive (MTA
archive). There could be more than one module of the same type in an MTA archive.
For more information about the Multi-Target Application model, see the Multi-Target Application Model
specification.
Read the following sections for more details about which parts of the MTA specification are currently supported in
SAP Cloud Platform.
Related Information
General information
You define deployment prerequisites and dependencies of a multi-target application in an MTA deployment
descriptor. It contains the following sections:
● (Mandatory) Global Model Elements - schema version, application name and version
● Global Parameters
○ (Mandatory) Parameters - deployer version (currently 1.1.0)
○ (Optional) Parameters - provider url
● (Optional) modules - a list of the application modules contained in the MTA deployment archive
● (Optional) resources - a list of the resources that the modules require
Note
Security data such as passwords must be added in the MTA extension descriptor.
For each module and for each resource, the following attributes are mandatory:
Depending on the type of the module or the resource, additional parameters may be specified in the parameters
or properties subsections.
● requires - used to define a dependency within the MTA. Starts an optional section that contains a list of
required resources, or required parts of other modules.
● provides - used to define a dependency within the MTA. Starts an optional section that contains
configuration data, which can be required by other modules within the same MTA.
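A minimal deployment descriptor along these lines might look as follows. The ID, module name, and type are illustrative, and the exact schema keys should be checked against the Multi-Target Application Model specification:

```yaml
_schema-version: '2.0.0'
ID: com.example.myapp
version: 0.1.0
parameters:
  hcp-deployer-version: '1.1.0'   # mandatory deployer version (see above)
modules:
  - name: myapp
    type: com.sap.java            # Java application module
    parameters:
      name: myapp
      runtime: neo-java-web
```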
● name (mandatory) - HTML5 application name, which has to be unique within the current account.
Note
The display-name and name parameters belong to an application level that is different from the one of the
application versions. If another application version is defined in the MTA deployment descriptor, then its
display name has to be identical to the display names of other already defined versions of the application or
has to be omitted.
Note
HTML5 modules with the same version can be deployed only once. In the version parameter, the usage of a
<timestamp> read-only variable is supported. Thus, a new version string is generated with every deploy. For
example, version: '0.1.0-${timestamp}'
● active (optional) - This flag indicates whether the related version of the application should be activated or
not. The default value is true.
● sfsf-access-point (optional) - If true, the application is activated for the SuccessFactors system. The
default value is false.
● sfsf-tiles (optional) - Registers SuccessFactors Employee Central (EC) home page tiles. For more
information, see tiles.json [page 1298]. Ensure that each tile name is unique within the current account.
Table 394: Supported parameters - used for deploying Java applications
● name (mandatory) - Java application name, which has to be unique within the current account.
● runtime (mandatory) - One of the following values, depending on the runtime used:
○ neo-java-web
○ neo-javaee6-wp
● runtime-version (optional) - Use if a specific runtime version needs to be pinned, for example 1.88 or 2.70.
● role-provider (Beta, optional) - Defines the application that provides the roles for the Java application.
One of:
○ sfsf
○ hcp
● roles (Beta, optional) - Maps Java application predefined roles to the groups they have to be assigned to.
● sfsf-access-point (Beta, optional) - If true, the application is activated for the SuccessFactors system.
The default value is false.
● sfsf-idp-access (Beta, optional) - If true, the extension application is registered as an authorized
assertion consumer service for the SuccessFactors system to enable the application to use the
SuccessFactors identity provider (IdP) for authentication.
This module type creates a connection to the SuccessFactors system. It creates the required HTTP destination
and registers an OAuth client for the Java application in SuccessFactors. An SFSF connection can only be
created after the corresponding Java application has been deployed and started, so a module of this type
depends on a com.sap.java module. Possible values:
● default
● technical-user
● sfsf-tiles (Beta, optional) - A YAML dictionary with one element with key resource and value <path to
resource>. The resource is a descriptor file that defines the SuccessFactors tiles and has to be in JSON
format. For more information, see tiles.json [page 1298]. Ensure that each tile name is unique within the
current account.
● destinations (Beta, optional) - A YAML list comprised of one or more connectivity destinations. For more
information, see Destinations as Multi-Target Application Entities (Beta) [page 1249] and Destination
Parameters [page 1256].
Note
○ If you have sensitive data, all destination parameters have to be moved to the extension descriptor.
○ When you redeploy a destination, any parameter changes performed after deployment of the destination
are removed. Your custom changes have to be performed again.
java.tomcat - used for deploying Java applications in the Java Web Tomcat runtime
Table 395: Supported parameters
● name (mandatory) - Java application name, which has to be unique within the current account.
● html5-app-name (mandatory) - SAP Fiori application name, which has to be unique within the current
account.
Note
The html5-app-display-name and html5-app-name parameters belong to an application level that is
different from the one of the application versions. If another application version is defined in the MTA
deployment descriptor, then its display name has to be identical to the display names of other already
defined versions of the application or has to be omitted.
Note
The same rules apply as for the sap.com.hcp.html5 version parameter, with the difference that this
parameter is optional. Default value: '${timestamp}'
● html5-app-active (optional) - This flag indicates whether the related version of the application should be
activated or not. The default value is true.
name SAP Fiori custom role name, which has to be unique within the current yes
account.
Table 398:
Supported Parameter Parameter Description Mandatory
services List of OData services. Parameters required for an OData service are: yes
Table 399:
Supported Parameter Parameter Description Mandatory
services List of OData services. Parameters required for an OData service are: yes
Note
If a service with the same name/namespace/version combination already exists but has a different description, the import will fail.
Note
If a service with the same name/namespace/version combination already exists but has a different model-id, the import will fail.
Note
If a service with the same name/namespace/version combination already exists but has a different default destination, the import will fail.
com.sap.hcp.sfsf-roles (Beta) - used for uploading and importing SuccessFactors HCM suite roles.
Uploads and imports SuccessFactors HCM suite roles from the SAP Cloud Platform system repository into the SuccessFactors customer instance. The role definitions must be described in a JSON file. For more information about how to create a roles.json file, see Create the Resource File with Role Definitions [page 1287].
com.sap.hcp.group (Beta) - used for modelling the SAP Cloud Platform groups.
Table 400:
Supported Parameter Parameter Description Mandatory
name Group name, which has to be unique within the current account. yes
To see the available parameters and values, go to Destinations as Multi-Target Application Entities (Beta) [page 1249] and Destination Parameters [page 1256].
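A minimal module of this type might look like the following sketch (the module and group names are illustrative):

modules:
- name: example-group
  type: com.sap.hcp.group
  parameters:
    name: ExampleGroup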
Resource types
<untyped> Used for adding any properties that you might require and
which you define. It does not have a lifecycle.
Note
The untyped resource is unclassified, that is, it does not
have a type.
Table 401:
Note
● For a proper binding, the standard data source jdbc/DefaultDB has to be set up at the stage of the Java application development.
● The binding is always performed to the default data source (data source name <DEFAULT>).
Note
We recommend you place this parameter in the MTA extension descriptor, if you are using one.
Note
The provider account must meet the following criteria:
Note
Always wrap any numeric values, for example product version and IDs, in single quotes to ensure that they are
not automatically interpreted as numbers.
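For example, the global elements of the deployment descriptors shown later in this document wrap their numeric values in single quotes:

_schema-version: '2.1'
version: '0.1.0'
parameters:
  hcp-deployer-version: '1.0'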
Related Information
In addition to creating connectivity destinations as described in Destinations [page 324], you can also deploy new destinations as a part of a Multi-Target Application, described either as a module type or as parameters of the com.sap.java module. Using a common set of parameters, you describe the destinations needed or offered in the mtad.yaml MTA deployment descriptor, and additionally in the extension descriptor if required. How you model these destinations depends on whether the providing and consuming sides are part of a given MTA, or whether the providing side is an external entity. In the following cases, the resources that the destinations point to already exist.
Currently, the following destination types are supported with MTA deployment:
● Account-level destinations
● Application-level destinations
Depending on whether the destination is a part of your account or not, destinations are also classified as internal
or external.
Related Information
Destinations to external resources lead to services or applications that are not part of the current MTA.
Account-Level Destinations
When you want to describe account-level destinations to external resources, you model each of them as a module of type com.sap.hcp.destination. In this type of destination relation, you first declare that a module requires the dependency using a requires element, and then you provide the dependency details as module type parameters. An account-level destination has a lifecycle that is independent from the applications that use it.
Note that if you need your Java application to have more than one destination, you have to model each account-level destination in a separate module, and add a requires entry for each of them in the java module.
modules:
- name: nwl
type: com.sap.java
requires:
- name: examplewebsite-connect
parameters:
name: networkinglunch
...
- name: examplewebsite-connect
type: com.sap.hcp.destination
parameters:
name: ExampleWebsite
type: HTTP
description: Connection to ExampleWebsite
url: http://www.examplewebsite.com
proxy-type: Internet
authentication: BasicAuthentication
user: John
password: Abcd1234
Application-Level Destinations
Application-level destinations to external resources are modeled as items within the destinations parameter of
the com.sap.java module type. This means that the lifecycle of such a destination is bound to the lifecycle of the
corresponding application. See the following example:
modules:
- name: abc
type: com.sap.java
parameters:
name: networking
...
destinations:
- name: ExampleDestination
type: MAIL
user: John
password: abcd1234
- name: ExampleDestination_02
type: HTTP
url: http://www.examplewebsite02.com
proxy-type: Internet
authentication: NoAuthentication
Result: The networking Java application is deployed and then the MAIL destination ExampleDestination as
well as the HTTP destination ExampleDestination_02 are created.
In case some of the destination parameters are security-sensitive, for example user credentials, we recommend that you specify all destination parameters in an extension descriptor to ensure their secure handling. This means that the destinations parameter in the mtad.yaml needs to be empty for this approach to function, as described in the following example:
modules:
- name: abc
type: com.sap.java
parameters:
name: networking
...
destinations:
The following example of an accompanying extension descriptor contains the destination parameters:
modules:
- name: abc
parameters:
destinations:
- name: ExampleWebsite
type: HTTP
description: Connection to ExampleWebsite
url: http://www.examplewebsite.com
proxy-type: Internet
authentication: BasicAuthentication
user: John
password: abcd1234
Destinations to internal applications are destinations of type HTTP that point to a Java application that is a part of
the current MTA. These destinations require more complex MTA modeling where MTA placeholders and MTA
references are used.
Account-Level Destinations
Account-level destinations to internal Java applications within the same MTA are modeled using MTA modules
and two pairs of provides and requires dependencies, that is, one between the providing application module
and the destination module, and another one between the destination module and the consuming application
module.
● On the providing side, the service-providing application declares a provides dependency containing the
application-url as a property.
● The destination itself is represented by a module of type com.sap.hcp.destination. It has a requires dependency linked to the application-url property of the above provides dependency.
● On the consuming side, the service-consuming application declares a requires dependency to the above
module.
In the following example, the HTML5 application networkingui (module abc-ui) uses the account-level
destination NetworkingLunchBackend (module abc-destination), which represents a connection to the
backend Java application networking (module abc).
- name: abc
type: com.sap.java
provides:
- name: abc
properties:
application-url: ${default-url}
parameters:
name: networkinglunch
...
- name: abc-destination
type: com.sap.hcp.destination
requires:
- name: abc
parameters:
name: NetworkingLunchBackend
type: HTTP
url: ~{abc/application-url}
proxy-type: Internet
authentication: AppToAppSSO
- name: abc-ui
type: com.sap.hcp.html5
requires:
- name: abc-destination
parameters:
name: networkingui
Result: The Java application networkinglunch is deployed, then the account-level destination NetworkingLunchBackend of type HTTP is created, and finally the HTML5 application networkingui is deployed.
If the destination is assigned to the HTML5 application, the application will communicate with the Java back-end
system through the destination. For more information, see Assigning Destinations for HTML5 Applications [page
1214].
Application-Level Destinations
Application-level destinations to internal applications within the same MTA are modeled by a pair of provides
and requires dependencies between the respective application modules.
● On the providing side, the service-providing application declares a provides dependency containing the
application-url as a property.
● On the consuming side, the service-consuming application declares a requires dependency linked to the above provides dependency application-url.
If the same application needs to declare destinations to multiple other internal applications, it has to use multiple
requires dependencies. See the following example:
- name: abc
type: com.sap.java
provides:
Result: The Java application javaapp1 is deployed and then the Java application javaapp2 is deployed with
application level destination JavaApp1Backend that points to javaapp1.
You can use a natural syntax on the consuming side of a destination by placing the destination parameters directly on the module level and using the reference syntax ~{abc/url} to refer to the URL property provided by the provides dependency abc.
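This pattern might look like the following sketch (the application names javaapp1 and javaapp2 and the destination name JavaApp1Backend come from the result described in this section; the module names and remaining parameters are assumptions):

modules:
- name: javaapp1-module
  type: com.sap.java
  provides:
  - name: javaapp1-module
    properties:
      application-url: ${default-url}
  parameters:
    name: javaapp1
- name: javaapp2-module
  type: com.sap.java
  requires:
  - name: javaapp1-module
  parameters:
    name: javaapp2
    destinations:
    - name: JavaApp1Backend
      type: HTTP
      url: ~{javaapp1-module/application-url}
      proxy-type: Internet
      authentication: AppToAppSSO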
Placeholders are strings that are resolved depending on the scope in which they are used. They have the syntax $
{<name>}.
Only a certain set of placeholders can be resolved; unsupported placeholders are processed incorrectly and might cause errors. The supported placeholders are:
● ${default-url} - used to resolve a default URL of a Java application when it is successfully deployed and
started.
Note
This placeholder can be part only of the property application-url, which serves as a provided dependency of the com.sap.java module type.
Example
The following example shows the usage of the ${default-url} placeholder. The modeled java-module
provides the application-url dependency, which can be consumed by every HTTP destination in the
MTA:
modules:
- name: java-module
type: com.sap.java
provides:
- name: java-module
properties:
application-url: ${default-url}
Result: if the current account is named myaccount and the landscape is us1.hana.ondemand.com, the
application-url can be http://myappmyaccount.us1.hana.ondemand.com/myapp.
● ${landscape-url} - used to resolve a current landscape URL where the deployment is running.
● ${account-name} - used to resolve a current account name where the deployment is running.
Note
The placeholders ${landscape-url} and ${account-name} can be used only in the property url for a
destination and property application-url that serves as a provided dependency of the module type
com.sap.java.
Example
The following example shows the usage of ${landscape-url} and ${account-name} placeholders. Two
HTTP destinations are modeled, where the first is account-level and the second is application-level:
modules:
- name: abc-destination
type: com.sap.hcp.destination
parameters:
name: ExampleApplicationBackend
type: HTTP
url: http://myapp.${landscape-url}/${account-name}
...
- name: abc-java
type: com.sap.java
parameters:
destinations:
- name: ExampleWebsite
type: HTTP
url: http://myapp.${landscape-url}/${account-name}
....
Result: if the account is named myaccount and the landscape is us1.hana.ondemand.com, both destinations
have a URL that equals to http://myapp.us1.hana.ondemand.com/myaccount.
Related Information
References are used in conjunction with required dependencies to refer to properties provided by a linked provided dependency.
As references are resolved during deployment, their values are converted to the values of the referred properties of the corresponding provided dependency. References have the syntax ~{<dependency>/<name>}.
Note
The name of the provided dependency of a module has to be equal to the module name.
Example
In the following example, html5-module references the java-module, and requests the provided-name
property from its provided dependency.
- name: java-module
type: com.sap.java
provides:
- name: java-module
properties:
provided-name: example
parameters:
name: example
- name: html5-module
type: com.sap.hcp.html5
requires:
- name: java-module
parameters:
name: ~{java-module/provided-name}ui
Result: The Java application example is deployed, then the HTML5 application exampleui is deployed.
Related Information
An untyped resource is an unclassified resource type that is used to group properties that are specified only once, so that they can be referred to by multiple destinations using the MTA references mechanism. Untyped resources do not have a lifecycle.
In the following example, the Java application ExampleApplication (module abc) uses the application-level
destination examplewebsite, which represents an examplewebsite connection. Some of the destination
parameters, including the url, are represented by the untyped resource examplewebsite-connect. As some
destinations parameters are specific to the target landscape, they have to be specified in an extension descriptor.
Deployment Descriptor
modules:
- name: abc
type: com.sap.java
requires:
- name: examplewebsite-connect
parameters:
name: ExampleApplication
destinations:
- name: examplewebsite
type: ~{examplewebsite-connect/type}
description: ~{examplewebsite-connect/description}
url: ~{examplewebsite-connect/url}
proxy-type: ~{examplewebsite-connect/proxy-type}
authentication: ~{examplewebsite-connect/authentication}
user: ~{examplewebsite-connect/user}
password: ~{examplewebsite-connect/password}
resources:
- name: examplewebsite-connect
properties:
type: HTTP
description: Connection to examplewebsite
url: http://www.examplewebsite.com
proxy-type: Internet
authentication: BasicAuthentication
user:
password:
Extension Descriptor
resources:
- name: examplewebsite-connect
properties:
user: John
password: abcd1234
Result: the Java application ExampleApplication is deployed. Afterwards, the HTTP destination examplewebsite for the application, with BasicAuthentication and the credentials John and abcd1234, is created.
Related Information
description String Use:
● if BasicAuthentication is the authentication type
● if MAIL or RFC is the destination type
jco-client String Yes 3 digits Use with the RFC destination type. Mandatory only for this type.
jco-r3name String 3 letters or digits Use with the RFC destination type, if jco-mshost is specified.
Note
In the case of account-level destinations, we recommend you place any sensitive parameters in the MTA
extension descriptor. In the case of the com.sap.java module type, we recommend that you move all
destination parameters to the extension descriptor.
This mtad.yaml is an example of an MTA deployment descriptor consisting of one Java, one HTML5 module, and
one ODP module.
In this particular example the HTML5 module has a dependency towards the Java module. At deploy time, this
results in deploying the Java application first, and then the HTML5 application. The Java application itself requires
a persistence resource.
_schema-version: '2.1'
ID: com.sap.example
version: '0.1.0'
parameters:
hcp-deployer-version: '1.1.0'
modules:
- name: html5-module
type: com.sap.hcp.html5
requires:
- name: example-service
parameters:
name: example
display-name: example application
version: 'version2'
active: false
- name: java-module
type: com.sap.java
provides:
- name: example-service
requires:
- name: example-database
parameters:
name: example
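The java-module above requires the persistence resource example-database, which is not shown in the snippet. Such a resource might be declared as in the following sketch (the type and id are assumptions, modeled on the com.sap.hcp.persistence resource used in the SuccessFactors example in this document):

resources:
- name: example-database
  type: com.sap.hcp.persistence
  parameters:
    id: E01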
This mtad.yaml is an example of an MTA deployment descriptor intended for a SuccessFactors extension.
_schema-version: '2.1'
ID: com.sap.hana.cloud.samples.benefits
version: 0.1.0
parameters:
hcp-deployer-version: '1.0'
modules:
- name: benefits-app
type: com.sap.java
requires:
- name: benefits-db
parameters:
name: &benefits-app-name benefits
jvm-arguments: -server
java-version: JRE 7
runtime: neo-java-web
runtime-version: 1
sfsf-idp-access: true
sfsf-connections:
- type: default
role-provider: sfsf
sfsf-tiles:
resource: resources/tiles.json
- name: benefits-roles
type: com.sap.hcp.sfsf-roles
resources:
- name: benefits-db
type: com.sap.hcp.persistence
parameters:
id: B01
MTA archives are created in a way that is compatible with the JAR specification. This allows common tools for creating, manipulating, signing, and handling such archives to be reused. As the deployment descriptor does not contain any information about the location of modules within the archive, this information is added by an archive manifest. The archive manifest (MANIFEST.MF) has to be located within the META-INF folder of the archive.
The file MANIFEST.MF has to contain a name section for each MTA module contained in the archive that has a file
content. In the name section the following information has to be added:
● Name - the path within the MTA archive, where the corresponding module can be found
● Content-Type - the type of the file that is used to deploy the corresponding module
● MTA-module - the name of the module as it has been defined in the deployment descriptor
Manifest-Version: 1.0
Name: examplejava.war
Content-Type: application/zip
MTA-Module: java-module
Name: examplehtml5.zip
Content-Type: application/zip
MTA-Module: html5-module
Name: resources/roles.json
Content-Type: application/json
MTA-Module: benefits-roles
Related Information
Note
This is a beta feature available for all SAP Cloud Platform accounts. For more information about the beta
features, see Using Beta Features in Accounts [page 26].
Note
Using an extension descriptor is optional.
The purpose of an extension descriptor is to provide additional configuration information required when deploying
a specific multi-target application (MTA) to a concrete target platform. It can extend the mtad.yaml deployment
descriptor contained in the MTA, or another extension descriptor. You can provide it during the initial deployment.
● The extension descriptor can contain information that is purposefully not included in the mtad.yaml. As an
MTA archive and its deployment descriptor should not be changed after packaging and vendor signing, the
extension descriptor could be used to provide deployment configurations for the actual MTA deployment.
Note
○ Security data, such as passwords, must not be added to the MTA deployment descriptor.
○ To ensure secure handling of sensitive parameters and passwords contained in the extension
descriptor, we recommend you specify them as Base64-encoded strings using the !!binary tag.
● As an extension descriptor can also come from a third party, several extension descriptors can be used to provide properties to a certain MTA archive without interfering with the main content.
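In an extension descriptor, the recommended !!binary encoding of a sensitive parameter might look like the following sketch (the resource name is illustrative; YWJjZDEyMzQ= is the Base64 encoding of the sample password abcd1234 used elsewhere in this document):

resources:
- name: examplewebsite-connect
  properties:
    password: !!binary 'YWJjZDEyMzQ='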
An extension descriptor has to adhere to a structure identical to the mtad.yaml, and it can contain the following
sections:
● (Mandatory) Global - contains the schema version used by the particular extension descriptor, the extension descriptor's own ID, and an extends element containing the ID of the MTA archive that is being extended.
○ (Optional) parameters section - it can contain a title, description and a logo. The logo has to be a
Base64-encoded image in jpeg, gif, or png file format, with size no larger than 100 kb.
● (Optional) modules - a list of the application modules contained in the MTA deployment archive
● (Optional) resources - a list of the resources that the modules require
Note
Only sections originally contained in the mtad.yaml may be extended. Original MTA data cannot be overwritten or deleted by using an extension, that is, you cannot add new modules, resources, or dependencies.
_schema-version: '2.1'
ID: com.sap.hana.cloud.samples.benefits.config
extends: com.sap.hana.cloud.samples.benefits
parameters:
title: SAP Employee Benefits Management Sample Application
description: This is a sample extension application for SuccessFactors Employee
Central.
modules:
Related Information
You can create, access and analyze application logs in your Cloud Foundry environment.
SAP Cloud Platform uses the open source logging platform Elasticsearch, Logstash, Kibana (ELK) to store, parse, and visualize the application log data coming from Cloud Foundry applications. For more information about ELK, see the ELK product documentation at https://www.elastic.co.
You can have both application logs that originate from the Cloud Foundry router (you get such logs by default)
and logs explicitly issued by the application itself. You can generate logs by creating requests to your application.
Note
The underlying Cloud Foundry environment does not provide a reliable logging pipeline, that is, it may happen
that logs are dropped, for example in case of too many logs being issued in parallel.
Table 402:
To learn about See
How to access and analyze application logs Accessing and Analyzing Application Logs [page 1264]
Use this procedure to produce application logs and forward them to the Elasticsearch, Logstash, Kibana (ELK)
stack for further processing.
Context
Application logs in Cloud Foundry can originate from the Cloud Foundry router (you get such logs by default) or
they can be explicitly issued by the application itself.
While you can rely on the standard logging mechanisms, SAP Cloud Platform also offers dedicated support
libraries for Java applications running in the Cloud Foundry environment. The libraries serve the following
purposes:
Procedure
You can access the support libraries and find more information about how to use them in GitHub at https://
github.com/SAP/cf-java-logging-support for Java applications and https://github.com/SAP/cf-nodejs-
logging-support for NodeJS applications.
Results
You can generate logs by creating requests to your application. The related logs are produced automatically and
forwarded to the ELK stack for further processing.
You can access and analyze application logs in your Cloud Foundry environment.
Prerequisites
You have generated some logs. For more information, see Producing Logs [page 1264].
SAP Cloud Platform uses the open source data visualization platform Kibana to visualize the application log data coming from Cloud Foundry applications. For more information about Kibana, see the Kibana product documentation at https://www.elastic.co.
You can access and analyze your application logs produced in the last few days. The exact timeframe depends on
the overall log volume which is produced in the system.
Note
Even without any specific application logs, you can analyze your applications based on the automatically issued
logs from the Cloud Foundry router.
Procedure
To view the logs of your application, open , and sign in using your credentials.
Results
Kibana displays a set of pre-built dashboards that help you analyze your application, as follows:
● Use the Overview dashboard (default) to understand the evolution of logs and basic KPIs regarding failures,
response time, and response size.
● Use the Usage dashboard to investigate the actual requests, their URLs, user and component information.
● Use the Performance and Quality dashboard to investigate failures and response times.
● Use the Network and Load dashboard to investigate network traffic and payloads.
● Use the Requests and Logs dashboard to analyze the overall set of logs, and requests as well as their involved
components.
● Use the Statistics dashboard to see how many logs were shipped for each of your components, as well as how many logs were dropped by the pipeline due to quota limitations.
Note
Multitenancy at UI level such as restricting which UI content (dashboards, visualizations and saved searches) is
accessible to a group of users is not yet part of our application logging offering. Therefore:
You can now navigate across the dashboards, and set and pin filters in order to flexibly explore what you are
interested in. For example, you can focus your search on a specific application, with specific response codes
producing unexpected high response times.
1.6 Solutions
In the context of SAP Cloud Platform, a solution is comprised of various application types and configurations, designed to serve a certain scenario or task flow. A solution consists of the following artifacts:
● A Multi-Target Application (MTA) archive file, which contains one or multiple applications created with different technologies and deployed to different target container runtimes, but with the same lifecycle.
● Optionally, an extension descriptor, which contains any additional configuration information.
Note
Using MTA extension descriptors is supported only when you deploy solutions using the cockpit. For more information, see MTA Extension Descriptors (Beta) [page 1261].
You can compose a solution by yourself, or you can acquire one from a third-party solution vendor.
Table 403:
To learn more about See
Provisioning a solution using the cockpit (Beta) - by using the cockpit, you can deploy all of the solution artifacts to your account in one go. After deployment, you can use the cockpit to monitor the state of each solution either in detail or overall. See Deploying Solutions Using the Cockpit (Beta) [page 1267].
Transporting a solution using CTS+ - you can use CTS+ to transport and promote a solution among different environments or landscapes. See Change Management with CTS+ [page 1237].
Deploying a solution using a console client command - you can use the deploy-mta command to deploy one or several solutions in one go. See deploy-mta [page 172].
Related Information
Prerequisites
Note
This is a beta feature available for all SAP Cloud Platform accounts. For more information about the beta
features, see Using Beta Features in Accounts [page 26].
Ensure that the MTA archive containing your solution is created as described in Multi-Target Applications [page
1239].
Make sure that you do not select the Provider deploy checkbox. If you select it, you will provide your solution for a
subscription. For more information, see .
Procedure
Results
Your newly deployed solution appears in the Standard Solutions category in the Solutions page in the cockpit.
Each solution component originates from a certain MTA module, and one MTA module can in turn result in several solution components.
Note
This is a beta feature available for all SAP Cloud Platform accounts. For more information about the beta
features, see Using Beta Features in Accounts [page 26].
Proceed as follows to see a status overview of an individual solution or solution components in your account:
Procedure
○ Overview - it displays the solution name and status. For more information about the solution states, see
Solution (BETA) page help in the cockpit.
○ Description - a short descriptive text about the solution, typically stating what it does.
○ Solution components - a list of the components that are part of the solution, the states of these
components and their types.
The solution components types that you can monitor are the following:
○ Java Application
○ HTML5 Application
○ Data Source Binding
○ SuccessFactors Connection
○ SuccessFactors Role
○ SuccessFactors Homepage Tile
○ SuccessFactors Application Access
○ SuccessFactors Role Provider
○ Role
○ OData Service
○ Destination
For more information about the possible states of a solution component and what they mean, see your
solution page help in the cockpit.
Prerequisites
Ensure that the related HTML5 or Java applications are started, if your solution contains one or more of the following components:
● SuccessFactors Connection
● SuccessFactors Homepage Tile
● SuccessFactors Application Access
Context
Note
This is a beta feature available for all SAP Cloud Platform accounts. For more information about the beta
features, see Using Beta Features in Accounts [page 26].
Procedure
The solution undeployment dialog remains on the screen during the process. A confirmation appears when the undeployment is completed.
If you close the dialog while the process is running, you can open it again by choosing Check Progress of the
corresponding operation, located in the Ongoing Operations table in the solution overview page.
Note
○ SFSF Roles are not deleted.
○ Custom application destinations and account destinations are also deleted.
SAP Cloud Platform is the extension platform for SAP. It enables developers to develop loosely coupled extension
applications securely, thus implementing additional workflows or modules on top of the existing SAP cloud
solution they already have.
SAP Cloud Platform provides a secure application container which decouples the extension applications from the
extended SAP solution via a public API layer. This container ensures that extension applications have no impact
on the stability of the extended solutions. It also ensures that data access is governed through the same roles and
permission checks as those of any other SAP interface. SAP Cloud Platform simplifies many of the system integration challenges, handling aspects such as identity propagation, account onboarding, dynamic theming and branding, and installation automation and provisioning.
Technical aspects
● Extensions and extended SAP cloud solutions co-located in the same data center, where possible
In most of the cases the extensions that are being developed are co-located in the same data center as the
SAP product that is being extended. The co-location ensures that the complete solution is using one
infrastructure and is operated by one and the same team on this infrastructure. It also improves the response
time for API calls.
● Integration with SAP Cloud product toolset
This integration allows SAP solution administrators to have a consistent experience in managing extensions
as an integral part of the product they are responsible for, including but not limited to software lifecycle
management, administration of roles, permissions and visibility groups.
● Dynamic UI branding and theming
The tight integration between the SAP product and SAP Cloud Platform allows extension users to get the
same seamless user experience as the native product modules. It also allows the delivery of SAP solution-
specific artifacts, such as navigation exit points, tiles, widgets or external business objects.
● Security integration
The integration between the SAP product and SAP Cloud Platform also allows you to manage the extension
you are building by using all the authentication and authorization capabilities of the SAP product you want to
extend.
Development options
● Custom development
As a customer of an SAP cloud solution, you can create your own extension applications using SAP Cloud
Platform. SAP provides access to all the required integration and implementation materials describing how
SAP Cloud Platform is connected to the corresponding SAP cloud solution. Furthermore, for some of the SAP
Extension concept
SAP Cloud Platform serves as a dedicated and isolated secure application container (hosting Java or HTML5
applications, or both). On one hand, it provides the API-level access to the extended SAP solution. On the other
hand, it takes care of the lifecycle management and the initial configuration of the extension applications. There
are several levels of extension integration:
● Application customization
Usually, every SAP cloud solution comes with certain customization capabilities. Depending on the
technology stack, this might range from fully fledged customization of existing business objects, through
creating custom business objects, up to generating native user interfaces based on the customized
objects. Some of the SAP technology stacks allow implementation teams to even do some simple coding,
which is then executed natively as part of the customized product. Regardless of how feature-rich the
extended solution is, SAP Cloud Platform adds much more to the extension capabilities and enables you to
build a large number of extension scenarios and interact with on-premise and cloud systems.
● Loosely coupled applications
At a minimum, extension applications need Single Sign-On (SSO) configured with the extended SAP solution.
All SAP cloud solutions provide the means for such a configuration: you can either leverage the solution's
locally integrated SAML 2.0-compliant identity provider or use the SAP Cloud Platform Identity
Authentication service as a central trust point in the landscape. As a rule of thumb, if you want to integrate a
number of different SAML 2.0-compliant solutions in your landscape, a central trust management point such
as Identity Authentication will significantly simplify the management of additional trusts. Furthermore, SAP
Cloud Platform comes pre-integrated with Identity Authentication.
Another aspect of the loosely coupled applications is that you have to ensure the end-to-end user identity
propagation going across all the extension application layers. This means that if, for example, a user has
logged on to an HTML5 application, it has to be the same user on behalf of which all the underlying backend
calls are performed. To achieve this, you leverage the SAML 2.0 bearer assertion authentication flow, which
is the default way of accessing any SAP cloud solution API from SAP Cloud Platform. You use the same
approach for Java applications.
Related Information
Extension account
An extension account is a customer or partner SAP Cloud Platform account which is configured to interact with a
particular SAP solution through standardized destinations, usually with identity propagation turned on.
Tip
For extension accounts, we recommend that you change the default SAP Cloud Platform role provider to the
one of the extended SAP solution. Thus you channel all role assignment calls to the underlying extended SAP
system. For more information about changing the default role provider, see: Changing the Default Role
Provider [page 1404]
An extension application consists of several layers. It usually has a front-end UI layer decoupled from the back end
by OData, REST, or JSON services.
To achieve smooth retheming and rebranding, you can use SAPUI5 for implementing the UI layer. However, you
can also use any HTML5 or JavaScript UI framework.
The extension application back end includes existing SAP solution services, or can expose custom services
delivered with the extension application on SAP Cloud Platform.
Related Information
An extension application usually consists of several layers. There is a front-end UI layer decoupled from the back
end by OData, REST, or JSON services.
To achieve smooth retheming and rebranding, you implement the front end UI layer using SAPUI5. You can also
use any HTML5 or JavaScript UI framework.
SAP Cloud Platform offers various tools and capabilities to help you create, customize, and integrate your
extension front-end components.
The following artifacts are part of the UI package and delivered with the extension:
The following graphic provides an overview of the building blocks of the extension application front end:
Extensions usually aggregate data from multiple different business systems by combining multiple application
widgets on one or multiple pages. If you have to combine data and apply additional security checks, you usually
define a higher-level back-end service in Java or XS that aggregates the required data and exposes it to the UI
tier through a new REST, JSON, or OData API.
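As an illustration of this aggregation pattern, here is a minimal sketch in Python (on SAP Cloud Platform you would typically implement it as a Java servlet or an XS service; all function and field names below are invented for the example):

```python
def aggregate_view(user_id, is_authorized, fetch_hr, fetch_payroll):
    """Combine data from two backend systems into a single payload.

    The security check runs in this back-end tier, never in the UI:
    `is_authorized`, `fetch_hr`, and `fetch_payroll` stand in for the
    real authorization check and backend API calls.
    """
    if not is_authorized(user_id):
        return {"status": 403, "body": None}
    body = {}
    body.update(fetch_hr(user_id))       # e.g. name, department
    body.update(fetch_payroll(user_id))  # e.g. cost center
    return {"status": 200, "body": body}

# Usage with stubbed backends:
result = aggregate_view(
    "P000001",
    is_authorized=lambda uid: True,
    fetch_hr=lambda uid: {"name": "Jane Doe", "department": "HR"},
    fetch_payroll=lambda uid: {"costCenter": "CC-42"},
)
```

The point of the sketch is that the aggregated endpoint returns one merged payload to the UI tier and refuses unauthorized callers before any backend data is touched.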
Native customization
There are different native customization options available with the SAP solutions. Most commonly, you can adjust
the user interface by changing the initial product configuration, by adjusting object metadata, by manipulating
field and operation visibility, or by defining custom business objects. These customization options do not require
any coding on the front-end tier since the resulting UI is generated natively in the extended solution.
SAPUI5 UI
To achieve smooth retheming and rebranding, you leverage SAPUI5 for the extension UI. SAPUI5 allows smooth
subsequent embedding of the custom UIs in the extended SAP solutions. The built-in extension and
customization mechanisms of SAPUI5 make it easy to replace standard views, customize i18n resource texts,
add new navigation paths or customize existing ones, and even override existing code. Using SAPUI5 is a good
practice, but you can also use other popular UI frameworks.
To achieve dynamic branding and retheming of extension UIs, we recommend that you use Portal service sites
configured with a corresponding template to mimic the look and feel of the extended SAP solution. Furthermore,
the Portal service allows dynamic redesign of pages leveraging the Portal authoring environment.
If you decide to go beyond pure configuration and customize the UI using SAPUI5, a natural choice would be SAP
Web IDE. SAP Web IDE helps you develop, test, and deploy SAPUI5 applications in your SAP Cloud Platform
account, and expose your applications as widgets. It offers various extension templates such as SAPUI5
templates which you can use to start with. Based on the OData services of the extended solution and on their
metadata, you can start creating and adjusting the new user interface. SAP Web IDE comes with a source code
editor that helps you fine-tune the generated HTML code on your own, leveraging code completion.
Related Information
The extension application back end includes existing SAP solution services, or it can expose custom services
delivered with the extension application on SAP Cloud Platform. Usually, the back end is decoupled from the front
end by OData, REST, or JSON services. It typically comprises the following:
● Active business logic, including both the content and the security checks
● Persistency layer
● Connectivity to one or more back-end systems
Business logic
The clearly decoupled business logic makes it easier to develop, test and operate extension applications on SAP
Cloud Platform. It also enables the implementation of concepts such as zero-downtime updates and A/B testing
for the UI. It ensures that all security checks are performed at the right level, leaving no room for the error of
putting business logic in the UI tier. Extension applications can leverage any available SAP Cloud Platform
runtime. However, the level of integration of the different runtimes may vary. The list of features whose support
may vary depending on the runtime includes but is not limited to automatic application provisioning, roles and
identity propagation, auto-discovery of different application-bundled artifacts.
Extension applications benefit from the security model provided by both SAP Cloud Platform and the extended
SAP solution. The security framework comprises automatic import of roles and permissions, usage of SAP
solution-native admin tools, transparency of role and permission assignment, and a consistent administration
experience.
By leveraging all the available platform services, extension applications benefit from the account-level
Single Sign-On with the extended solution. For some of the SAP solutions (for example, SAP SuccessFactors), it is
possible to turn on native management of permissions and roles using the solution-native administration tools.
This is implemented by changing the default SAP Cloud Platform role provider. Essentially, extension applications
use the available runtime-specific standard mechanisms to check for role assignment and SAP Cloud Platform
transparently performs the assignment check in the underlying extended SAP solution.
In the scenario where the extended solution does not come with an embedded identity provider (IdP), we use the
SAP Cloud Platform Identity Authentication service as a central point for managing trust and user authentication.
By using the IdP-proxy feature of Identity Authentication, you can define your own identity provider.
The following graphic provides an overview of the business logic of the extension application back end:
The persistency layer is an essential aspect that needs to be considered when developing an extension
application. There are several options for storing data offered by SAP Cloud Platform, including both relational
(for example, SAP HANA and Sybase ASE as offered by persistence service) and unstructured (document
service) data storage options. Thus, the various storage needs of the extension applications can be covered.
It is also possible to store data in the extended SAP solution in the form of custom fields or custom business
objects. This option varies for the different extended solutions. Custom business objects, however, are usually
limited both in volume and in number.
Connectivity
One of the most critical layers for the SAP Cloud Platform extension concept is the connectivity layer. It connects
an extension application to the extended SAP solution and to other required backend systems. The connectivity is
accomplished through a set of standardized destinations. All back-end calls are performed on behalf of the user
who is logged on to the extension front-end layer. To implement that, SAP Cloud Platform leverages SAML 2.0
bearer assertion authentication flow. The standardized destination names allow the portability of partner
applications - partner extension applications can expect to be installed in an environment where the required
destinations are in place and can be used. For more information about the standardized destinations, see
the solution-specific section.
It is also possible to have destinations configured to use basic authentication or other authentication means.
However, we do not recommend the use of service users or a hard-coded user for back-end calls because the
back-end systems will not be able to perform user-based authorization checks. Furthermore, using service users
makes the end-to-end traceability very hard to achieve.
Related Information
You can extend the scope of SAP SuccessFactors HCM Suite using SAP Cloud Platform extension applications.
Overview
SAP Cloud Platform, extension package for SAP SuccessFactors allows you to extend your SAP SuccessFactors
scope with applications running on the platform. The extension package makes it quick and easy for companies to
adapt and integrate SuccessFactors cloud applications to their existing business processes, thus helping them
maintain competitive advantage, engage their workforce and improve their bottom line.
The extension package for SAP SuccessFactors delivers the in-memory computing speed of SAP Cloud Platform
and includes capabilities from the SAP SuccessFactors metadata framework (MDF) and SAP Cloud Platform for
extension development. This combination of technologies makes it easier for SAP SuccessFactors customers,
partners, and developers to extend cloud or on-premises applications, build entirely new cloud applications, and
enable new processes that meet unique business needs. Therefore, you can use the SAP Cloud Platform,
extension package for both internal custom development based on the provided SAP SuccessFactors APIs and
for running certified extension applications provided by SAP partner companies.
Extensibility layers
Using MDF, you can develop custom objects, automatically expose them to SAP Cloud Platform, and enable them
for social media and mobile apps. This allows you to quickly define the data layer inside the SuccessFactors HCM
suite. You can then access that data layer and build on top of it by defining complex application logic and creating
a feature-rich user interface in SAP Cloud Platform.
With MDF you can create the precise functionality needed to meet your company's unique business requirements. You
can easily maintain and update the functionality as needed throughout the application lifecycle. You can also
integrate changes into your existing business processes, since every MDF object comes ready with an OData API
that can both read and write data.
Developers can leverage the following HTTP connectivity destinations pointing to SuccessFactors:
Note
You create the destination manually. You use the ConnectivityConfiguration API for accessing the destination
configuration. For more information, see ConnectivityConfiguration API [page 318].
Supported APIs
You can find a list and implementation details of the APIs supported by SuccessFactors HCM Suite on SAP Help
Portal, at http://help.sap.com/hr_api/.
SAP Cloud Platform provides the following options for deploying and configuring SAP SuccessFactors extension
applications. The preferred option depends on your scenario.
● Deploying and configuring an extension application using the SAP Cloud Platform cockpit (preferable for
productive scenarios).
For more information, see Solutions [page 1266].
● Deploying and configuring an extension application using console client commands (preferable for
development scenarios).
For more information, see Installing and Configuring Extension Applications (Beta) [page 1284].
You create an integration token required for the automated configuration of SAP Cloud Platform extension
package for SAP SuccessFactors.
Prerequisites
Note
This functionality is available for SAP SuccessFactors HCM Suite Q2 2016 release and higher.
You have the Administrator role for any of the SAP Cloud Platform accounts associated with the global account to
which the newly created extension account will be assigned during the automated configuration of SAP Cloud
Platform extension package for SAP SuccessFactors.
Context
To initiate the automated configuration of the SAP Cloud Platform extension package for SAP SuccessFactors,
the SAP SuccessFactors administrators with Provisioning access need an integration token. The integration token
determines the SAP Cloud Platform users who will be initially authorized to deploy and administer the extension
applications in the SAP Cloud Platform extension account created during the automated configuration. The token
also determines the SAP Cloud Platform landscape and the global account from which the respective resources
will be consumed.
As an SAP Cloud Platform user with permissions for the respective global account, you create the integration
token using the SAP Cloud Platform cockpit, and then pass it over to the SAP SuccessFactors administrator.
Procedure
1. In your Web browser, open the SAP Cloud Platform cockpit using the URLs given below. Use the relevant URL
for the region with which your customer account is associated:
○ Europe: https://account.hana.ondemand.com/cockpit
To separate the user IDs, use commas, spaces, semicolons, or line breaks.
Your user and the users you have entered will be assigned the Administrator role for the extension account
created during the automated configuration of the SAP Cloud Platform extension package for SAP
SuccessFactors.
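The separator rule above (commas, spaces, semicolons, or line breaks) corresponds to a simple split, sketched here in Python with made-up user IDs:

```python
import re

def split_user_ids(raw):
    """Split a cockpit user ID list on commas, spaces, semicolons,
    or line breaks, discarding empty fragments."""
    return [part for part in re.split(r"[,;\s]+", raw) if part]

ids = split_user_ids("P000001, P000002;P000003\nP000004")
```

Any mix of the four separators yields the same list of user IDs.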
5. Choose Create.
Your newly created token appears in the list of integration tokens and its status is ACTIVE. In the Integration
Tokens panel, you can view details such as the user who has created the token, the creation date and the
expiration date.
Note
The integration token can be used only once. Once the integration token is used, it is no longer valid.
○ To view the integration token value and the SAP user IDs assigned to this token, choose View token in the
Actions column on the row of the respective token.
○ To delete an integration token, choose Delete token in the Actions column on the row of the respective
token.
Results
You have created an integration token which you can use to initiate the automated configuration of the SAP Cloud
Platform extension package for SAP SuccessFactors.
Note
Make sure to use the integration token before its expiration date.
Next Steps
You can now pass over the value of the token to the SAP SuccessFactors administrator who will be triggering the
automated configuration of the SAP Cloud Platform, extension package for SAP SuccessFactors. For more
information, see the Configuring Extension Package for SAP SuccessFactors Automatically section in SAP Cloud
Platform, Extension Package for SAP SuccessFactors Implementation Guide .
As an implementation partner, you install and configure the extension applications that you want to make
available for customers.
Note
This is a beta feature available for SAP Cloud Platform extension accounts. For more information about the
beta features, see Using Beta Features in Accounts [page 26].
You deploy your extension application, configure its connectivity to the SuccessFactors system and map the roles
defined in your extension application to the roles in the corresponding SuccessFactors system.
Prerequisites
● You have an SAP Cloud Platform extension account and the corresponding SuccessFactors customer
instance connected to it. Your account has been onboarded with the SAP Cloud Platform Extension Package
for SuccessFactors. For more information, see the Configuring Extension Package for SuccessFactors
Automatically section in the SAP Cloud Platform, Extension Package for SuccessFactors Implementation
Guide
● You have the quota purchased for the corresponding global account assigned to the SAP Cloud Platform
extension account. See Managing Accounts and Quota [page 19].
● You are an administrator of the SAP Cloud Platform extension account.
● You have a SuccessFactors administrator user with one of the following permission sets assigned to it:
○ General Admin and System Admin permissions
or
○ Company System and Logo Settings permissions
● You have the role-based permissions enabled for the SuccessFactors customer instance.
● When creating the extension application, you have defined the required roles in the web.xml file of the
application.
● In the SuccessFactors system, you have created or imported roles with the same names as those defined in
the application web.xml.
● You have the required permissions grouped into SuccessFactors role definitions.
● You have the WAR file of your application.
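The roles mentioned in the prerequisites are declared in the application's web.xml using the standard Java EE security-role element; a minimal sketch (the role name is illustrative, and must match the role created or imported in the SuccessFactors system):

```xml
<web-app xmlns="http://java.sun.com/xml/ns/javaee" version="3.0">
  <!-- Role names declared here must match the role definitions
       created or imported in the SuccessFactors system. -->
  <security-role>
    <role-name>Benefits Administrator</role-name>
  </security-role>
</web-app>
```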
You deploy your extension application in your SAP Cloud Platform extension account and create the resource file
with role definitions. You also need to configure the application connectivity to SuccessFactors and to enable the
use of the HCM Suite OData API. To ensure that only approved applications are using the SuccessFactors IdP for
authentication, you need to register the extension application as an authorized assertion consumer service in
SuccessFactors. Then you register the extension application home page tiles and import the extension
application roles in the SuccessFactors customer instance connected to the extension account.
To finalize the configuration on SAP Cloud Platform side, you change the default role provider to the
SuccessFactors one. To finalize the configuration on SuccessFactors side, you assign user groups to the
permission roles defined for your extension application.
Table 405:
Task: Description
1. Deploy the Extension Application on the Cloud [page 1286]: Deploy the extension application in your extension
account on SAP Cloud Platform.
2. Create the Resource File with Role Definitions [page 1287]: Create the resource file containing the
SuccessFactors HCM role definitions.
3. Register the Extension Application as an Authorized Assertion Consumer Service [page 1290]: Register the
extension application as an authorized assertion consumer service.
4. Configure the Extension Application's Connectivity to SAP SuccessFactors [page 1292]: Configure the
connectivity between your Java extension application and the SuccessFactors system associated with your SAP
Cloud Platform extension account. Note: This task is relevant for Java extension applications only.
5. Register a Home Page Tile for the Extension Application [page 1295]: Register a home page tile for the extension
application in the extended SuccessFactors system.
6. Import the Extension Application Roles in the SAP SuccessFactors System [page 1299]: Import the
application-specific roles from the SAP Cloud Platform system repository into the SuccessFactors customer
instance connected to your extension account.
7. Assign the Extension Application Roles to Users [page 1301]: Assign the extension application roles you have
imported in the SuccessFactors system to the users to whom you want to grant access to your application.
8. Enable SAP SuccessFactors Role Provider [page 1302]: Change the default SAP Cloud Platform role provider of
your Java application to the SuccessFactors role provider. Note: This task is relevant for Java extension
applications only.
9. Test the Role Assignments [page 1304]: Try to access the application with users with different levels of granted
access to test the role assignments.
You deploy the extension application in your extension account on SAP Cloud Platform so that you can run it and
integrate it in SAP SuccessFactors.
Prerequisites
● You have the WAR file of the extension application you want to deploy.
● The WAR file contains the ZIP archive of the application site, as well as the <application_name>.spec.xml
file describing the corresponding widgets. For an example of a site ZIP archive and structure, see the Get the
Source Code section in https://github.com/SAP/cloud-sfsf-benefits-ext .
● You have downloaded and configured SAP Cloud Platform console client. For more information, see Setting
Up the Console Client.
Context
You deploy the extension applications using the SAP Cloud Platform console client. The applications are deployed
in the customer account on the same production landscape where the SAP Cloud Platform, portal service is
deployed. The production landscape is available on a regional basis, where each region represents the location of
a data center. When deploying applications, bear in mind that a customer is associated with a particular region
and that this region is independent of your own location. You could be located in the United States, for example,
but operate your account in Europe (that is, use a data center that is situated in Europe). For more information
about the available landscape hosts, see Landscape Hosts.
Procedure
1. Open the command prompt and navigate to the folder containing neo.bat/sh (SDK installation folder/tools).
2. To deploy the extension application, execute the following command:
Results
You have deployed the extension application in your extension account on the SAP Cloud Platform.
Deploying Applications
You create the resource file containing the SAP SuccessFactors HCM role definitions.
Prerequisites
● The corresponding SAP SuccessFactors HCM Suite roles exist in the SAP SuccessFactors system.
● You have admin access to the SAP SuccessFactors OData API and have a valid account with user name and
password. For more information, see http://help.sap.com/saphelpiis_cloud4hr/EN/
SF_HCMS_OData_API_User_en/frameset.htm?4006ecf7444e4bc4aaa18c2364519126.html.
Context
To create the resource file with the role definitions required for your application, you use the SAP SuccessFactors
OData API to query the permissions defined for each required role, and you create a roles.json file containing
the role definitions. You use HTTP basic authentication for the OData API call.
Procedure
1. Call the OData API to query the permissions defined for the required role using the following URL:
https://<SAP_SuccessFactors_host_name>/odata/v2/RBPRole?$filter=roleName eq
'<role_name>'&$expand=permissions&$format=json
Where:
○ <SAP_SuccessFactors_host_name> is the fully qualified domain name of the OData API host, depending on the data center
hosting your SuccessFactors instance. For more information about the OData API endpoints, see http://
help.sap.com/saphelpiis_cloud4hr/EN/SF_HCMS_OData_API_User_en/frameset.htm?
03e1fc3791684367a6a76a614a2916de.html.
○ <role_name> is the name of the role as defined in the SAP SuccessFactors system.
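For instance, the query URL above can be assembled programmatically with the $filter expression properly percent-encoded; a small Python sketch (host and role name are placeholders):

```python
from urllib.parse import quote

def rbp_role_query_url(host, role_name):
    """Build the RBPRole OData query URL for a given role name.

    The $filter value is percent-encoded: spaces and quotes in the
    role name would otherwise break the request.
    """
    filter_expr = "roleName eq '{}'".format(role_name)
    return (
        "https://{}/odata/v2/RBPRole?$filter={}"
        "&$expand=permissions&$format=json".format(host, quote(filter_expr))
    )

url = rbp_role_query_url("example.successfactors.com", "Test Role Permissions")
```

The encoded URL can then be issued with any HTTP client using basic authentication, as described in the step above.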
The response is a JSON object containing the following properties for each of the permissions defined for the
specified role:
Example response
{
  "d": {
    "__metadata": {
      "uri": "https://localhost:443/odata/v2/RBPRole(82L)",
      "type": "SFOData.RBPRole"
    },
    "roleId": "82",
    "roleDesc": "Testing role permissions",
    "lastModifiedBy": "admin",
    "lastModifiedDate": "\/Date(1404299328000)\/",
    "roleName": "Test Role Permissions",
    "userType": "null",
    "permissions": {
      "results": [
        {
          "__metadata": {
            "uri": "https://localhost:443/odata/v2/RBPBasicPermission(60L)",
            "type": "SFOData.RBPBasicPermission"
          },
          "permissionId": "60",
          "permissionType": "user_admin",
          "permissionStringValue": "change_info_user_admin",
          "permissionLongValue": "-1"
        },
        {
          "__metadata": {
            "uri": "https://localhost:443/odata/v2/RBPBasicPermission(4L)",
            "type": "SFOData.RBPBasicPermission"
          },
          "permissionId": "4",
          "permissionStringValue": "detail_report",
          "permissionLongValue": "-1",
          "permissionType": "report"
        }
      ]
    }
  }
}
2. Create a roles.json file using the following properties. To list all the available permissions in your SAP
SuccessFactors system, use this OData API call: https://<SAP_SuccessFactors_host_name>/
odata/v2/RBPBasicPermission?$format=json. There you can find the properties that you need to
create the roles.json file.
roleName: Name of the role as defined in the response to the OData API call
roleDesc: Role description as defined in the response to the OData API call
[{
  "roleDesc": "My role description",
  "roleName": "My Application Role Name",
  "permissions": [
    {
      "permissionStringValue": "change_info_user_admin",
      "permissionLongValue": "-1",
      "permissionType": "user_admin"
    },
    {
      "permissionStringValue": "detail_report",
      "permissionLongValue": "-1",
      "permissionType": "report"
    }
  ]
}]
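Deriving a roles.json entry from the step 1 response is mechanical; here is a sketch in Python that keeps only the needed properties and drops __metadata and permissionId (the sample data mirrors the example response above):

```python
import json

def to_role_definition(odata_response):
    """Extract a roles.json entry from an RBPRole OData response (a dict
    parsed from the JSON body), keeping only roleName, roleDesc, and the
    type/string/long values of each permission."""
    role = odata_response["d"]
    return {
        "roleName": role["roleName"],
        "roleDesc": role["roleDesc"],
        "permissions": [
            {
                "permissionType": p["permissionType"],
                "permissionStringValue": p["permissionStringValue"],
                "permissionLongValue": p["permissionLongValue"],
            }
            for p in role["permissions"]["results"]
        ],
    }

# Wrap the definition in a list, matching the roles.json structure:
response = {"d": {"roleName": "Test Role Permissions",
                  "roleDesc": "Testing role permissions",
                  "permissions": {"results": [{
                      "permissionType": "report",
                      "permissionStringValue": "detail_report",
                      "permissionLongValue": "-1"}]}}}
roles = [to_role_definition(response)]
print(json.dumps(roles, indent=2))
```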
Results
Next Steps
Import the role definition resource file in the SAP SuccessFactors system connected to your extension account.
For more information, see Import the Extension Application Roles in the SAP SuccessFactors System [page
1299].
Register the extension application as an authorized assertion consumer service to configure its access to the SAP
SuccessFactors system through the SAP SuccessFactors identity provider (IdP).
Prerequisites
Note
This is a beta feature available for SAP Cloud Platform extension accounts. For more information about the
beta features, see Using Beta Features in Accounts [page 26].
● You have made yourself familiar with the SAP Cloud Platform console client. For more information, see
Console Client
● The extension application is started. For more information about starting an application deployed in an SAP
Cloud Platform account, see start
● The SAP Cloud Platform account in which you configure the connectivity to the SAP SuccessFactors system
is an extension account. For more information about extension accounts, see Basic Concepts
Context
Extension applications deployed in an SAP Cloud Platform extension account are authenticated against the SAP
SuccessFactors identity provider (IdP). To ensure that only approved applications are using the SAP SuccessFactors IdP for
authentication, you need to register the extension application as an authorized assertion consumer service,
configure the application URL, the service provider audience URL, and the service provider logout URL of the
extension application in SAP SuccessFactors Provisioning. To do so, you use the
hcmcloud-enable-application-access console client command.
Procedure
1. Open the command prompt and navigate to the folder containing neo.bat/sh (SDK installation folder/tools).
2. Register the extension application as an authorized assertion consumer service. In the console client
command line, execute: hcmcloud-enable-application-access, as follows:
○ For an application deployed in your account, specify the name of your extension application for the
application parameter.
○ For an application to which your account is subscribed, specify the application provider account and the
name of your extension application for the application parameter in the following format:
<application_provider_account>:<my_application>.
For example, to register a Java extension application to which your account in the US East data center is
subscribed, execute:
3. (Optional) Display the status of an application entry in the list of authorized assertion consumer services for
the SAP SuccessFactors system associated with an extension account. In the console client command line,
execute hcmcloud-display-application-access, as follows:
○ For an application deployed in your account, specify the name of your extension application for the
application parameter.
For example, to display the status of the authorized assertion consumer service entry for an application
deployed in your account in the US East data center, execute:
○ For an application to which your account is subscribed, specify the application provider account and the
name of your extension application for the application parameter in the following format:
<application_provider_account>:<my_application>.
For example, to display the status of the authorized assertion consumer service entry for an application
to which your account in the US East data center is subscribed, execute:
4. (Optional) If your scenario requires it, remove the entry of the extension application from the list of authorized
assertion consumer services for the SAP SuccessFactors system associated with the extension account. In
the console client command line, execute hcmcloud-disable-application-access, as follows:
○ For an application deployed in your account, specify the name of your extension application for the
application parameter.
For example, to remove the authorized assertion consumer service entry for a Java application deployed
in your account in the US East data center, execute:
○ For an application to which your account is subscribed, specify the application provider account and the
name of your extension application for the application parameter in the following format:
<application_provider_account>:<my_application>.
Related Information
Use this procedure to configure the connectivity between your Java extension application and the SuccessFactors
system associated with your SAP Cloud Platform extension account.
Prerequisites
Note
This is a beta feature available for SAP Cloud Platform extension accounts. For more information about the
beta features, see Using Beta Features in Accounts [page 26].
● If you configure access to the HCM Suite OData API, you must have the OData API enabled for your SAP
SuccessFactors company instance in Provisioning. For more information, see the OData API Programmer's
Guide, available on SAP Help Portal at http://help.sap.com/cloud4hr .
● You have made yourself familiar with the SAP Cloud Platform console client. For more information, see
Console Client
● You have the role-based permissions enabled for the SAP SuccessFactors company instance.
● The SAP Cloud Platform account in which you configure the connectivity to the SAP SuccessFactors system
is an extension account. For more information about extension accounts, see Basic Concepts
● Your application runtime supports destinations. For more information about the application runtimes
supported by SAP Cloud Platform, see Application Runtime Container
Note
This procedure is relevant only for Java extension applications.
The extension applications interact with the extended SAP SuccessFactors system using the HCM Suite OData
API. The HCM Suite OData API is a RESTful API based on the OData protocol intended to enable access to data in
the SAP SuccessFactors system. You have the following API access scenarios:
To enable the API access and configure the connectivity between the Java extension applications and the
SuccessFactors system associated with your extension account, you use the hcmcloud-create-connection
console client command. Using the command, you specify the connection details for the remote communication
of the extension application and create the HTTP destinations required for configuring the API access. The
command also creates and configures the corresponding OAuth clients in the SuccessFactors company instance.
The command uses the following predefined destination names for the different connection types:
Table 408:
OData sap_hcmcloud_core_odata
If your scenario requires it, you can create two connections for an extension application as long as the types of the
connections differ.
Depending on whether the extension application is deployed in your account or your account is subscribed to the
extension application, you configure the connectivity on an application level in the account where the application
is deployed, or on a subscription level in the account subscribed to the application.
You can optionally list the connections created for the extension application:
Procedure
1. Open the command prompt and navigate to the folder containing neo.bat/sh (SDK installation folder/tools).
2. Configure the connectivity. In the console client command line, execute hcmcloud-create-connection, as
follows:
○ For an application deployed in your account, specify the name of your extension application for the
application parameter.
○ For an application to which your account is subscribed, specify the application provider account and the
name of your extension application for the application parameter in the following format:
<application_provider_account>:<my_application>.
For example, to configure a connection of the OData type for an application to which your account in the
US East data center is subscribed, execute:
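As an illustrative sketch only, the invocation might look like the following; the subaccount and application names are placeholders, and the parameter names and the US East host us1.hana.ondemand.com are assumptions, so check the hcmcloud-create-connection command reference for the exact syntax:

```shell
# Hypothetical invocation; account, application, and parameter names are assumptions
neo hcmcloud-create-connection --account mysubaccount \
    --application provideracct:myextapp \
    --connection-type odata \
    --host us1.hana.ondemand.com \
    --user p1234567
```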
3. (Optional) List the connections created for the extension application. In the console client command line,
execute hcmcloud-list-connections, as follows:
○ For an application deployed in your account, specify the name of your extension application for the
application parameter.
For example, to list the connections for an application deployed in your account in the US East data
center, execute:
○ For an application to which your account is subscribed, specify the application provider account and the
name of your extension application for the application parameter in the following format:
<app_provider_account>:<my_app>.
For example, to list the connections for an application to which your account in the US East data center is
subscribed, execute:
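As a sketch, listing the connections for a subscribed application might look like this; the names and parameter spellings are assumptions, not the authoritative syntax:

```shell
# Hypothetical invocation; names and parameters are assumptions
neo hcmcloud-list-connections --account mysubaccount \
    --application provideracct:myextapp \
    --host us1.hana.ondemand.com \
    --user p1234567
```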
4. (Optional) If your scenario requires it, remove the connectivity configured between your extension application
and the SAP SuccessFactors systems associated with the extension account. In the console client command
line, execute hcmcloud-delete-connection, as follows:
○ For an application deployed in your account, specify the name of your extension application for the
application parameter.
For example, to remove a connection of the OData type for an application deployed in your account in the
US East data center, execute:
○ For an application to which your account is subscribed, specify the application provider account and the
name of your extension application for the application parameter in the following format:
<app_provider_account>:<my_app>.
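A hedged sketch of the delete call for a subscribed application follows; identifying the connection by its type, as well as the account, application, and parameter names, are assumptions:

```shell
# Hypothetical invocation; names and parameters are assumptions
neo hcmcloud-delete-connection --account mysubaccount \
    --application provideracct:myextapp \
    --connection-type odata \
    --host us1.hana.ondemand.com \
    --user p1234567
```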
Related Information
You register a home page tile for the extension application in the extended SAP SuccessFactors system so that
you can access the application directly from the SAP SuccessFactors Employee Central (EC) home page.
Prerequisites
Note
This is a beta feature available for SAP Cloud Platform extension accounts. For more information about the
beta features, see Using Beta Features in Accounts [page 26].
● You have deployed and started the extension application for which you are registering the home page tile
● You have registered the extension application as an authorized assertion consumer service. For more
information, see Register the Extension Application as an Authorized Assertion Consumer Service [page
1290]
● You have the home page tile provided as part of the application interface
You develop the content of the tile as a dedicated HTML page inside the application and size it according to
the desired tile size. You describe the tiles in a tiles.json descriptor and package them in a ZIP archive.
For more information about the structure of the tiles.json descriptor, see tiles.json [page 1298].
● You have created the tiles.json descriptor.
Context
The SAP SuccessFactors EC home page provides a framework that allows different modules to provide access to
their functionality using tiles. For the extension applications hosted in the SAP Cloud Platform extension account,
you register home page tiles in this framework using the console client.
Procedure
1. Open the command prompt and navigate to the folder containing neo.bat/sh (SDK installation folder/tools).
2. Register the SAP SuccessFactors EC home page tiles in the SAP SuccessFactors company instance linked to
the specified SAP Cloud Platform account. In the console client command line, execute hcmcloud-
register-home-page-tiles, as follows:
○ For an application deployed in your account, specify the name of your extension application for the
application parameter.
For example, to register a home page tile for a Java extension application running in your account in the
US East data center, execute:
○ For an application to which your account is subscribed, specify the application provider account and the
name of the extension application for the application parameter in the following format:
<application_provider_account>:<my_application>.
For example, to register a home page tile for a Java extension application to which your account in the US
East data center is subscribed, execute:
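A sketch of such a registration call might look like the following; the parameter that points to the packaged tiles (--tiles) is an assumption, as are the account and application names:

```shell
# Hypothetical invocation; parameter names and values are assumptions
neo hcmcloud-register-home-page-tiles --account mysubaccount \
    --application provideracct:myextapp \
    --tiles tiles.zip \
    --host us1.hana.ondemand.com \
    --user p1234567
```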
Note
The size of the tile descriptor file must not exceed 100 KB.
3. (Optional) List the extension application home page tiles registered in the SAP SuccessFactors company
instance associated with the extension account. In the console client command line, execute hcmcloud-get-
registered-home-page-tiles, as follows:
○ For an application deployed in your account, specify the name of your extension application for the
application parameter.
For example, to list the tiles registered for a Java extension application deployed in your account in the US
East data center, execute:
○ For an application to which your account is subscribed, specify the application provider account and the
name of the extension application for the application parameter in the following format:
<application_provider_account>:<my_application>.
Note
If you do not specify the application parameter, the command returns all the tiles registered in the SAP
SuccessFactors EC home page of the SAP SuccessFactors company instance linked to the extension
account.
There is no lifecycle dependency between the tiles and the application, so the application may not be
started or may not be deployed anymore.
4. (Optional) If your scenario requires it, unregister the SAP SuccessFactors EC home page tiles registered for
the extension application. In the console client command line, execute hcmcloud-unregister-home-
page-tiles, as follows:
○ For an application deployed in your account, specify the name of your extension application for the
application parameter.
For example, to unregister the SAP SuccessFactors EC home page tiles for a Java application deployed in
your account in the US East data center, execute:
○ For an application to which your account is subscribed, specify the application provider account and the
name of your extension application for the application parameter in the following format:
<application_provider_account>:<my_application>.
For example, to unregister the SAP SuccessFactors EC home page tiles for a Java application to which
your account in the US East data center is subscribed, execute:
Note
There is no lifecycle dependency between the tiles and the application, so the application may not be
started or may not be deployed anymore.
The tiles.json descriptor contains the definition of the home page tiles for the extension application.
Properties
Table 409:
size      Defines the size of the tile
          Required
          Default: 1
          Accepted values:
          ● 1 - medium
          ● 2 - large
          ● 3 - extra large
padding   Defines whether to add padding around the tile and the application tile content
          Default: false
metadata  Defines the localized tile title and description. If you do not define this parameter, the
          framework displays the value of the name parameter to the users.
Table 410:
Optional
Note
The tiles.json descriptor file must use UTF-8 encoding and its size must not exceed 100 KB.
Example
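The following sketch illustrates what a tiles.json descriptor could look like, based on the properties described above; the top-level structure, the path property, and the metadata field names (locale, title, description) are assumptions, so see tiles.json [page 1298] for the authoritative format:

```json
{
  "tiles": [
    {
      "name": "MyExtensionTile",
      "path": "/tiles/home.html",
      "size": 1,
      "padding": false,
      "metadata": [
        {
          "locale": "en_US",
          "title": "My Extension",
          "description": "Opens the extension application from the EC home page"
        }
      ]
    }
  ]
}
```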
To complete the authorization configuration of your extension application, you import the application-specific
roles into the SAP SuccessFactors company instance connected to your extension account.
Prerequisites
Note
This is a beta feature available for SAP Cloud Platform extension accounts. For more information about the
beta features, see Using Beta Features in Accounts [page 26].
● You have created the resource file with the required role definitions. For more information, see Create the
Resource File with Role Definitions [page 1287].
● You have downloaded and configured SAP Cloud Platform console client. For more information, see Setting
Up the Console Client.
Using the hcmcloud-import-roles console client command, you import the required role definitions into the SAP
SuccessFactors company instance connected to this account.
Procedure
1. Open the command prompt and navigate to the folder containing neo.bat/sh (SDK installation folder/tools).
2. Execute the following command:
Note
The size of the file containing the role definitions must not exceed 500 KB.
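For illustration only, the import call might be sketched like this; the parameter that points to the role definitions file (--location) and the account details are assumptions:

```shell
# Hypothetical invocation; parameter names and values are assumptions
neo hcmcloud-import-roles --account myaccount \
    --application myextapp \
    --location roles.json \
    --host us1.hana.ondemand.com \
    --user p1234567
```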
Results
You have imported the application-specific roles in the SuccessFactors company instance connected to your
account. Now you need to assign users to these roles.
Related Information
To complete the authorization configuration for your extension application, you assign the extension application
roles you have imported in the SAP SuccessFactors systems to the user to whom you want to grant access to
your application.
Prerequisites
● You have a role-based permission environment for your SAP SuccessFactors company instance.
● You have either a Super Administrator or a Security Admin user for SAP SuccessFactors and have access to
the functionality on the SAP SuccessFactors Admin page.
● You have deployed the extension application.
Context
Procedure
1. In your browser, open the login URL of the SAP SuccessFactors system:
https://<SAP_SuccessFactors_landscape>/login
Where <SAP_SuccessFactors_landscape> is the fully qualified domain name of the host on which the SAP
SuccessFactors company is running.
2. Navigate to Manage Permission Roles, as follows:
○ If Version 12 UI Framework (Revolution) is not enabled, navigate to: Admin Center > Manage Security >
Manage Permission Roles.
○ If Version 12 UI Framework (Revolution) is enabled, navigate to: Admin Center > Manage Employees >
Set User Permissions > Manage Permission Roles.
3. Locate the role you want to manage, and from the Take Action dropdown box next to the role, select Edit.
4. On the Permission Role Detail page, scroll down to the Grant this role to... section, and then choose Add. The
system opens the Grant this role to... page.
5. On the Grant this role to... page, define whom you want to grant this role to, and specify the target population
accordingly.
6. To navigate back to the Permission Role Detail page, choose Done.
7. Save your entries.
If you have SAP Cloud Platform extension package for SAP SuccessFactors configured for your account, you can
change the default SAP Cloud Platform role provider of your Java application to the SAP SuccessFactors role
provider.
Prerequisites
Note
This is a beta feature available for SAP Cloud Platform extension accounts. For more information about the
beta features, see Using Beta Features in Accounts [page 26].
● You have an SAP Cloud Platform extension account. For more information about extension accounts, see
Basic Concepts
● You are an administrator of your SAP Cloud Platform account
● You have configured the Java extension application's connectivity to the SAP SuccessFactors system
associated with the extension account. For more information, see Configure the Extension Application's
Connectivity to SAP SuccessFactors [page 1292].
● In the SAP SuccessFactors system, you have created or imported roles with the required permissions and
these roles are with the same names as those defined in the web.xml file of the extension application.
For more information about importing roles, see Import the Extension Application Roles in the SAP
SuccessFactors System [page 1299].
For more information about creating permission roles in SAP SuccessFactors, see the How do you create a
permission role? section in Role-Based Permissions Administration Guide.
● In the SAP SuccessFactors system, you have assigned the required roles to the corresponding users and
groups. For more information, see Assign the Extension Application Roles to Users [page 1301].
● When creating the extension application, you have defined the required roles in the web.xml file of the
application and these roles are the same as the ones you have for the application in the SAP SuccessFactors
system. For more information about how to define roles in the web.xml file of the application, see Enabling
Authentication.
Context
A role provider is the component that retrieves the roles for a particular user. By default, the role provider used for
SAP Cloud Platform applications and services is the SAP Cloud Platform role provider. For Java extension
applications, however, you have to change the default role provider to the provider of the corresponding system.
For Java extension applications for SAP SuccessFactors you change the default role provider to the SAP
SuccessFactors role provider. To change the role provider for a Java extension application for SAP
SuccessFactors automatically, use the hcmcloud-enable-role-provider console client command.
Procedure
1. Open the command prompt and navigate to the folder containing neo.bat/sh (SDK installation folder/tools).
2. Enable the SAP SuccessFactors role provider for your Java extension application. Execute: hcmcloud-
enable-role-provider, as follows:
○ For an application deployed in your account, specify the name of your extension application for the
application parameter.
For example, to enable the SAP SuccessFactors role provider for a Java extension application running in
your account in the US East data center, execute:
○ For an application to which your account is subscribed, specify the application provider account and the
name of your extension application for the application parameter in the following format:
<application_provider_account>:<my_application>.
For example, to enable the SAP SuccessFactors role provider for a Java extension application to which
your account in the US East data center is subscribed, execute:
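A hedged sketch of this call for a subscribed application follows; the account and application names are placeholders and the parameter spellings are assumptions:

```shell
# Hypothetical invocation; names and parameters are assumptions
neo hcmcloud-enable-role-provider --account mysubaccount \
    --application provideracct:myextapp \
    --host us1.hana.ondemand.com \
    --user p1234567
```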
Related Information
To test the role assignments, you first start the deployed extension application to make it available for requests,
and then try to access it with users who have different levels of access to the application.
Prerequisites
● You have downloaded and configured SAP Cloud Platform console client. For more information, see Setting
Up the Console Client.
● You have made yourself familiar with the SAP Cloud Platform cockpit concepts. For more information, see
Cockpit
Procedure
1. Open the command prompt and navigate to the folder containing neo.bat/sh (<SDK installation
folder>/tools).
2. Start the deployed application using the following command:
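For example, a start call might look like this sketch; the account, application, and host values are placeholders:

```shell
# Hypothetical invocation; account, application, and host are placeholders
neo start --account myaccount \
    --application myextapp \
    --host us1.hana.ondemand.com \
    --user p1234567
```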
3. Access the application using users with different roles assigned to them.
To access the application, use the application URL. To get the login URL of an application deployed in your
extension account, open the SAP Cloud Platform cockpit, and navigate to Account <account_name> >
Java Applications > <name_of_your_extension_application> > Application URLs.
You can use the SAP Cloud Platform virtual machines to install and maintain your own applications in scenarios
not covered by the platform.
Note
Virtual machines are currently available only in the data center in Europe. SAP Cloud Platform systems in other
data centers can communicate with virtual machines only via public Internet, so you have to architect your
applications accordingly.
Note
As the owner of a virtual machine, you are responsible for applying patches on the operating system and for
triggering backups via snapshots.
Building Components
An SAP Cloud Platform virtual machine provides the virtualized hardware resources (CPU, RAM, disk space,
installed OS) on which the installed software runs. The virtual machines come with SUSE Linux Enterprise Server
installed. For more information about the currently supported version, see Patching the OS Image [page 1310].
You create virtual machines on which you install your own software, beyond what the programming models
supported by the platform cover.
Each virtual machine has a volume - the storage behind the file system and all software installed on it.
Depending on your needs, you can choose from the following sizes with predefined configurations:
Table 411:
Virtual Machine Size   CPU Cores   RAM (GB)   Disk Storage (GB)
xs                     1           2          20
s                      2           4          40
m                      4           8          80
l                      8           16         160
xl                     16          32         320
Capabilities
Limitations
● An SAP Cloud Platform virtual machine is running in a private network and cannot be accessed from another
customer account.
● Virtual machines are not exposed directly to the Internet. Outbound communication to the Internet and other
systems is allowed, but inbound communication has to be configured by registering an access point.
● Communication to the virtual machines is allowed only via an SSH tunnel using the console client. Additionally,
communication from an SAP Cloud Platform Java compute unit or an SAP HANA system can be configured.
You need to create and start a virtual machine using the console client. Then, you establish a secure
communication channel to it over Secure Shell (SSH) protocol. You open an SSH tunnel and get all the
communication details you need to log in to the virtual machine and install and maintain your software.
Prerequisites
● Your account has quota for virtual machines. You can view and assign quota to virtual machines when you
open the SAP Cloud Platform cockpit and navigate to Quota Management.
● You have downloaded the latest SAP Cloud Platform SDK to make sure the console client contains the latest
changes. For more information, see Installing the SDK [page 44].
● You have set up the SAP Cloud Platform console client. For more information, see Setting Up the Console
Client [page 52].
Procedure
1. Open the console client - in the command prompt, navigate to the folder containing neo.bat/neo.sh (<SDK
installation folder>/tools).
2. Create a virtual machine. This also starts it.
Note that in this step you can generate the key pair, which will be used to log in to the virtual machine. When
generating the key pair, the file name is auto-generated and the file is saved under the following file path:
<directory where the command is executed>/<landscape host>/<account>.
For security reasons, the private key in the generated key pair can be encrypted with a passphrase. You will
need the private key passphrase when you connect to the virtual machine using an SSH client. Provide and
confirm the passphrase with the --private-key-passphrase and --private-key-passphrase-
confirmation parameters in the command line, or when prompted.
Note
Encryption with a passphrase is supported if you are using OpenSSH, but may not work with other SSH
clients.
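As a sketch, the create step might look like the following; the command name create-vm mirrors the list-vms naming used below, but the exact parameters, the VM name, and the size value are assumptions (virtual machines are currently available only in the Europe data center, hence the hana.ondemand.com host):

```shell
# Hypothetical invocation; parameter names and values are assumptions
neo create-vm --account myaccount \
    --name myvm \
    --size m \
    --host hana.ondemand.com \
    --user p1234567
```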
3. Check if the virtual machine was created by executing list-vms, which shows all the virtual machines in an
account:
You can also view this information when you open the SAP Cloud Platform cockpit and navigate to Virtual
Machines. To view details about a specific virtual machine, choose its name in the list.
4. Open an SSH tunnel to the virtual machine.
You can provide a port on which you will connect to the virtual machine once the tunnel is opened. If you do
not provide a port, one is assigned automatically. Execute:
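A hedged sketch of the tunnel call; the VM name, the local port, and the parameter spellings are assumptions:

```shell
# Hypothetical invocation; names and parameters are assumptions
neo open-ssh-tunnel --account myaccount \
    --name myvm \
    --port 2222 \
    --host hana.ondemand.com \
    --user p1234567
```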
After opening an SSH tunnel, the virtual machine is available on localhost on the respective port.
Note
Instead of opening an SSH tunnel, you can use the Service Channel option in the SAP Cloud Platform cloud
connector to connect to the virtual machine. For more information, see Configuring a Service Channel for
Virtual Machine [page 539].
5. Check if the tunnel was opened by listing the currently opened SSH tunnels:
neo list-ssh-tunnels
Results
You are now the owner of this virtual machine and can install your software on it. To do that, or to apply an OS
patch to your virtual machine, you need access to the SUSE Linux Enterprise Server (SLES) repositories. SLES
repositories, like any other repositories, are storage locations from which you can retrieve and install software
packages. To configure your access to them, execute the following commands:
You can manage the lifecycle of the created virtual machine - check its status and delete it when no longer
needed.
You can create another virtual machine with the same file system by using volumes and volume snapshots.
Related Information
When you create a virtual machine and thus become its owner, you have to take care to apply patches and
updates on its operating system (OS). Whenever there is a new OS image with security patches available, the
infrastructure will change the default image used and new virtual machines will be started with the new image.
However, the virtual machines you have previously created will still use the old image and you need to update it.
You need to apply security patches directly from the SUSE Linux Enterprise Server (SLES) repositories.
Prerequisites
You have configured your access to the SLES repositories by executing the following commands:
Procedure
1. Refresh the SLES repositories:
zypper refresh
2. List the available security patches using the --category security parameter.
If you do not specify the --category security parameter, the command lists all the available patches.
3. Install the selected patches.
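On the virtual machine itself, the steps above can be sketched with standard zypper calls (run as root, and assuming the repository access described in the prerequisites is already configured):

```shell
# Refresh repository metadata
zypper refresh
# List only the available security patches (omit --category to list all patches)
zypper list-patches --category security
# Install the available security patches
zypper patch --category security
```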
Results
Note
If you have created a snapshot of a virtual machine before the update and start another virtual machine from
that snapshot, you need to install the security patches on that new virtual machine too as described above.
You can allow communication with SAP Cloud Platform virtual machines from other systems by managing
security group rules using console client commands. Communication between virtual machines within the same
account is available by default.
Prerequisites
You have created a virtual machine. For more information, see Managing Virtual Machines [page 1308].
After you create a virtual machine, it is secure and communication to it is allowed only via SSH using the console
client. You can define the allowed ports on which another SAP Cloud Platform system can connect to the specific
virtual machine by configuring a security group rule for it.
Procedure
For an SAP HANA system, the --source-id is the SAP HANA database system name. You can find your SAP
HANA database system name in the cockpit, where you navigate to Persistence > Database Systems. For
a Java application, it is the application name.
The type of the system is specified in the --source-type field. The acceptable --source-type values are
HANA and JAVA.
2. Check the security group rules for the virtual machine:
3. (Optional) When you no longer need the configured communication, delete the security group rule:
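The three steps might be sketched as follows; the command names (create-security-rule, list-security-rules, delete-security-rule) and most parameter spellings are hypothetical, while --source-id and --source-type come from the text above:

```shell
# Hypothetical command and parameter names; the rule lets a HANA system reach port 8080
neo create-security-rule --account myaccount --name myvm \
    --from-port 8080 --to-port 8080 \
    --source-id HDB --source-type HANA \
    --host hana.ondemand.com --user p1234567

neo list-security-rules --account myaccount --name myvm \
    --host hana.ondemand.com --user p1234567

neo delete-security-rule --account myaccount --name myvm \
    --from-port 8080 --to-port 8080 \
    --source-id HDB --source-type HANA \
    --host hana.ondemand.com --user p1234567
```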
Related Information
You can make your software running on a virtual machine accessible from the Internet if your scenario requires it.
Context
Using console client commands, you can enable an access point of your application via which end users can
access the application over HTTPS. Alternatively, you can do that from the SAP Cloud Platform cockpit.
Note
SAP Cloud Platform supports communication over HTTPS only. So Internet traffic will be directed over HTTPS
to a software process running on your virtual machine and listening on port 8041. For such communication, you
need to have a valid server certificate in place.
Procedure
You can check the access point with the list-vms command.
Alternatively, you can enable Internet access to a virtual machine from the cockpit. Open Virtual Machines in
the navigation, click the name of the particular virtual machine and choose Expose to Web.
2. (Optional) When you no longer need the access point, remove it:
Alternatively, you can disable Internet access to a virtual machine from the cockpit. Open Virtual Machines in
the navigation, click the name of the particular virtual machine and choose Hide from Web.
Related Information
A volume is the persistent storage that is created automatically when a virtual machine is created.
Context
Each virtual machine has a volume that stores the file system and all software installed on it. You can create a new
virtual machine with the same volume; delete a volume; create a snapshot of a volume.
Procedure
The output shows all volumes with their ID, status, size as well as ID of the virtual machine they are attached
to. You can choose a volume from which you want to create another virtual machine and take its ID. The
volume must be in status available.
2. Create a new virtual machine from the volume.
3. (Optional) Delete a volume when you no longer need it to free some resources.
You cannot delete a volume that has snapshots or is in use by a virtual machine.
Next Steps
You can create a snapshot of the volume of a virtual machine. This snapshot contains everything that was
installed on the file system, but does not keep any running processes and runtime configurations. See Managing
Volume Snapshots [page 1315].
Related Information
You can take a snapshot of an existing virtual machine volume in your account and use it to create a new virtual
machine with the same file system thus saving any manual installation.
Prerequisites
Context
Each virtual machine has a volume – the storage behind the file system and all software installed on it. Using
console client commands, you can create a snapshot of the volume of a virtual machine. This snapshot contains
everything that was installed on the file system, but does not keep any running processes and runtime
configurations. Then, you create a new virtual machine from this volume snapshot.
1. List virtual machines in your account to find out the volume of which you want to take a snapshot.
The command output includes all virtual machines with their volume IDs. Copy the ID of the volume you need.
2. Create a snapshot of the specified VM volume.
3. Check the status of the volume snapshot creation. You can find the snapshot ID in the output of the create-
volume-snapshot command.
4. Create a new virtual machine from the volume snapshot.
5. (Optional) List all volume snapshots in your account. This will give you more information about each
snapshot, such as ID, name, status, volume ID.
6. (Optional) Delete a snapshot when you no longer need it. This will free some quota to use for new volume
snapshots.
You can consume an SAP HANA database from a virtual machine using JDBC.
Prerequisites
● You have created a virtual machine and connected to it over SSH protocol as a root user so that you can
install your software. For more information, see Managing Virtual Machines [page 1308].
● Your account is configured with a dedicated SAP HANA database.
1. Install the runtime necessary to run your application on the virtual machine (for example, Java, Node.js).
2. Get a valid JDBC driver for your application.
3. To get the details required to connect to the database, go to the overview of the database in the SAP Cloud
Platform cockpit.
a. In the cockpit, select an account and choose Databases & Schemas.
b. Select a database that you want to connect to. This opens the overview for the selected database.
4. Create a database user to get access to the SAP HANA database. See Binding SAP HANA Databases to
Java Applications [page 868] (section: Create an SAP HANA Database User).
5. To connect to the database, specify the following details:
a. The user name and password that you defined earlier for the database.
b. The host and port, which you take from the JDBC URL (for example, jdbc:sap://localhost:30015)
displayed in the cockpit in the overview for the selected database.
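The connection steps above can be sketched in Java; the host, port, user, and password are placeholders for the details taken from the cockpit, and the example assumes the SAP HANA JDBC driver (ngdbc.jar) is on the classpath:

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class HanaJdbcExample {
    public static void main(String[] args) throws Exception {
        // Host and port come from the JDBC URL shown in the cockpit overview;
        // user and password are the database user created in step 4.
        String url = "jdbc:sap://localhost:30015/";
        try (Connection conn = DriverManager.getConnection(url, "MYUSER", "MyPassword1");
             Statement stmt = conn.createStatement();
             ResultSet rs = stmt.executeQuery("SELECT CURRENT_USER FROM DUMMY")) {
            if (rs.next()) {
                System.out.println("Connected as: " + rs.getString(1));
            }
        }
    }
}
```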
1.9 Security
This section describes how to secure your applications for SAP Cloud Platform.
Related Information
SAP Cloud Platform supports identity federation and single sign-on with external identity providers. The current
section provides an overview of the supported scenarios.
Contents
Overview
To enable you to seamlessly integrate SAP Cloud Platform applications with existing on-premise identity
management infrastructures, SAP Cloud Platform introduces single sign-on (SSO) and identity federation
features. In SAP Cloud Platform, identity information is provided by identity providers (IdP), and not stored on
SAP Cloud Platform itself. You can have a different IdP for each account you own, and this is configurable using
the Cockpit.
The following graphic illustrates the high-level architecture of identity management in SAP Cloud Platform.
If you don't have a corporate identity management infrastructure, you can use SAP ID Service. It is the default
identity provider for SAP Cloud Platform, and you can use it out of the box, without having to configure SSO and
identity federation.
SAP Cloud Platform also allows you to implement applications protected with the OAuth protocol.
SAP Cloud Platform applications can delegate authentication and identity management to an existing corporate
IdP that can, for example, authenticate your company's employees. It aims at providing a simple and flexible
solution: your employees (or customers, partners, and so on) can single sign-on with their corporate user
credentials, without a separate user store and account in SAP Cloud Platform. All information required by SAP
Cloud Platform about the employee can be passed securely with the logon process, based on a proven and
standardized security protocol. There is no need to manage additional systems that take care of complex user
account synchronization or provisioning between the corporate network and SAP Cloud Platform. Only the
configuration of already existing components on both sides is needed, which simplifies administration and lowers
total cost of ownership significantly. Even existing applications can be "federation-enabled" without changing a
single line of code.
You can use Identity Authentication as an identity provider for your applications. Identity Authentication is a
cloud solution for identity lifecycle management. Using it, you can benefit from features such as user base, user
provisioning, corporate branding or logo, and social IdP integration. See Identity Authentication.
Identity Authentication provides an easy way for your applications to delegate authentication and identity
management and keep developers focused on the business logic. It allows authentication decisions to be removed
from the application and handled in a central service.
SAP Cloud Platform offers solid integration with Identity Authentication. When you request an Identity
Authentication tenant for your SAP Cloud Platform account, you can automatically use it as a trusted IdP.
SAP ID Service is the place where you have to register to get initial access to SAP Cloud Platform. If you are a new
user, you can use the self-service registration option at the SAP website or SAP ID Service. SAP ID Service
manages the users of official SAP sites, including the SAP developer and partner community. If you already have
such a user, then you are already registered with SAP ID Service.
In addition, you can use SAP ID Service as an identity provider for your identity federation scenario, or if you do
not want to use identity federation. Trust to SAP ID Service is pre-configured on SAP Cloud Platform by default, so
you can use it out of the box. SAP ID Service provides:
● A central user store for all your identities that require access to protected resources of your application(s)
● A standards-based Single Sign-On (SSO) service that enables users to log on only once and get seamless
access to all your applications deployed using SAP Cloud Platform
The following graphic illustrates the identity federation with SAP ID Service scenario.
Managing Roles
Roles allow you to control the access to application resources in SAP Cloud Platform, as specified in Java EE. In
SAP Cloud Platform, you can assign groups or individual users to a role. Groups are collections of roles that allow
the definition of business-level functions within your account. They are similar to the actual business roles existing
in an organization.
The following graphic illustrates a sample scenario for role, user and group management in SAP Cloud Platform. It
shows a person, John Doe, with corporate role: sales representative. On SAP Cloud Platform, all sales
representatives belong to group Sales, which has two roles: CRM User and Account Owner. On SAP Cloud
Platform, John Doe inherits all roles of the Sales group, and has an additional role: Administrator.
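The role resolution described above (a user's effective roles are the roles of all groups the user belongs to, plus any directly assigned roles) can be sketched in plain Java. Names mirror the John Doe example; the helper is illustrative, not a platform API:

```java
import java.util.Map;
import java.util.Set;
import java.util.TreeSet;

public class EffectiveRoles {
    // Computes a user's effective roles: the union of the roles of all
    // groups the user belongs to and the user's directly assigned roles.
    static Set<String> effectiveRoles(Map<String, Set<String>> groupRoles,
                                      Set<String> userGroups,
                                      Set<String> directRoles) {
        Set<String> roles = new TreeSet<>(directRoles);
        for (String group : userGroups) {
            roles.addAll(groupRoles.getOrDefault(group, Set.of()));
        }
        return roles;
    }

    public static void main(String[] args) {
        // John Doe: member of group Sales, plus a direct Administrator role
        Map<String, Set<String>> groupRoles =
                Map.of("Sales", Set.of("CRM User", "Account Owner"));
        Set<String> roles = effectiveRoles(groupRoles,
                Set.of("Sales"), Set.of("Administrator"));
        System.out.println(roles); // [Account Owner, Administrator, CRM User]
    }
}
```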
OAuth 2.0 is a widely adopted security protocol for protection of resources over the Internet. It is used by many
social network providers and by corporate networks. It allows an application to request authentication on behalf of
users with third-party user accounts, without users having to share their credentials with the application. SAP Cloud
Platform provides an API for developing OAuth-protected applications. You can configure the required scopes and
clients using the Cockpit.
The following graphic illustrates protecting applications with OAuth on SAP Cloud Platform.
● Authorization code grant - there is a human user who authorizes a mobile application to access resources on
his or her behalf. See Protecting Applications with OAuth 2.0 [page 1340]
● Client credentials grant - there is no human user but a device instead. In that case, the access token is
granted on the basis of client credentials only. See Enabling OAuth 2.0 Client Credentials Grant [page 1346]
You can use a user store from an on-premise system for user authentication scenarios. SAP Cloud Platform
supports two types of on-premise user stores:
In this section, you can find information relevant for securing SAP HANA applications running on SAP Cloud
Platform.
● General security concepts for SAP HANA applications: see the SAP HANA Security Guide.
● Specific security concepts for SAP HANA applications running on SAP Cloud Platform: see Configuring SAML 2.0 Authentication [page 1093].
● Setting up SAML authentication for SAP HANA XS applications: see How to Set Up SAML Authentication For Your SAP Cloud Platform Trial Instance.
SAP Cloud Platform uses the Security Assertion Markup Language (SAML) 2.0 protocol for authentication and
single sign-on.
By default, SAP Cloud Platform is configured to use SAP ID service as identity provider (IdP), as specified in SAML
2.0. You can configure trust to your custom IdP, to provide access to the cloud using your own user database.
SAP ID Service provides Identity and Access Management for Java EE Web applications hosted on SAP Cloud
Platform through the mechanisms described in Java EE Servlet specification and through dedicated APIs.
Cross-site Scripting (XSS) is one of the most common types of malicious attacks on Web applications. To help
protect against this type of attack, SAP Cloud Platform provides a common output encoding library to be used
by applications.
Cross-Site Request Forgery (CSRF) is another common type of attack to Web applications. You can protect
applications running on SAP Cloud Platform from CSRF, based on the Tomcat Prevention Filter.
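As a sketch, wiring Apache Tomcat's CSRF prevention filter into a web.xml might look as follows; the entry point value is an assumption for illustration, and the filter class and parameter names are taken from the Apache Tomcat documentation:

```xml
<filter>
    <filter-name>CsrfFilter</filter-name>
    <filter-class>org.apache.catalina.filters.CsrfPreventionFilter</filter-class>
    <init-param>
        <!-- URLs reachable without a pre-existing nonce (illustrative value) -->
        <param-name>entryPoints</param-name>
        <param-value>/index.jsp</param-value>
    </init-param>
</filter>
<filter-mapping>
    <filter-name>CsrfFilter</filter-name>
    <url-pattern>/*</url-pattern>
</filter-mapping>
```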
This section describes how you can implement security in your applications.
SAP Cloud Platform provides the following APIs for user management and authentication:
● com.sap.security.um - the user management API, which can be used to create and delete users or update user information
● com.sap.security.um.user
● com.sap.security.um.service
● Authentication API
Related Information
Prerequisites
● You have installed the SAP Cloud Platform Tools for Java. See Setting Up the Development Environment
[page 43].
● You have created a simple HelloWorld application. See Creating a HelloWorld Application [page 56].
● If you want to use Java EE 6 Web Profile features in your application, you have downloaded the SAP Cloud
Platform SDK for Java EE 6 Web Profile. See Using Java EE 6 Web Profile [page 1036].
Context
Note
User names in SAP Cloud Platform are case insensitive.
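Because of this, any user-name comparison in application code should itself be case-insensitive; a minimal illustrative sketch:

```java
public class UserNames {
    // Compares user names the way the platform treats them: ignoring case.
    // Purely an illustrative helper, not a platform API.
    static boolean sameUser(String a, String b) {
        return a != null && a.equalsIgnoreCase(b);
    }

    public static void main(String[] args) {
        System.out.println(sameUser("P1234567", "p1234567")); // true
    }
}
```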
Context
The Java EE servlet specification allows the security mechanisms for an application to be declared in the web.xml
deployment descriptor.
Method: FORM, Application-to-Application SSO
Credentials: trusted SAML 2.0 identity provider
Description: FORM authentication implemented over the Security Assertion Markup Language (SAML) 2.0 protocol. Authentication is delegated to SAP ID service or a custom identity provider.
Use case: you want to delegate authentication to your corporate identity provider.

Method: BASIC
Credentials: user name and password
Description: HTTP BASIC authentication delegated to SAP ID service or an on-premise SAP NetWeaver AS Java system. Web browsers prompt users to enter a user name and password. By default, SAP ID service is used. (Optional) If you configure a connection with an on-premise user store, the authentication is delegated to an on-premise SAP NetWeaver AS Java system. See Using an SAP System as an On-Premise User Store [page 1421].
Use case, example 1: you want to delegate authentication to SAP ID service. Users will log in with their SCN user name and password.
Use case, example 2: you have an on-premise SAP NetWeaver AS Java system used as a user store. You want users to log in using the user name and password stored in AS Java.

Method: CERT
Credentials: client certificate
Description: used for authentication only with a client certificate. See Enabling Client Certificate Authentication [page 1368].
Use case: users log in using their corporate client certificates.

Method: BASICCERT
Credentials: user name and password, or client certificate
Description: used for authentication either with a client certificate or with user name and password. See Enabling Client Certificate Authentication [page 1368].
Use case: within the corporate network, users log in using their client certificates. Outside that network, users log in using user name and password.

Method: OAUTH
Credentials: OAuth 2.0 token
Description: authentication according to the OAuth 2.0 protocol with an OAuth access token. See Protecting Applications with OAuth 2.0 [page 1340].
Use case: you have a mobile application consuming REST APIs using the OAuth 2.0 protocol. Users log in using an OAuth access token.
If you need to configure the default options of an authentication method, or define new methods, see Configuring
Authentication for Your Application [page 1392].
Tip
We recommend using the FORM authentication method.
Note
By default, any other methods (DIGEST, CLIENT-CERT, or custom) that you specify in the web.xml are
executed as FORM. You can configure those methods using the Authentication Configuration section at Java
application level in the Cockpit. See Configuring Authentication for Your Application [page 1392].
Results
● When FORM authentication is used, you are redirected to SAP ID service or another identity provider, where
you are authenticated with your user name and password. The servlet content is then displayed.
● When BASIC authentication is used, you see a popup window and are prompted to enter your credentials. The
servlet content is then displayed.
Example
Example 1: Using FORM Authentication
The following example illustrates using FORM authentication. It requires all users to authenticate before
accessing the protected resource. It does not, however, manage authorizations according to the user roles - it
authorizes all authenticated users.
<login-config>
    <auth-method>FORM</auth-method>
</login-config>
<security-constraint>
    <web-resource-collection>
        <web-resource-name>Protected Area</web-resource-name>
        <url-pattern>/index.jsp</url-pattern>
        <url-pattern>/a2asso.jsp</url-pattern>
    </web-resource-collection>
    <auth-constraint>
        <!-- Role Everyone will not be assignable -->
        <role-name>Everyone</role-name>
    </auth-constraint>
</security-constraint>
<security-role>
    <description>All SAP Cloud Platform users</description>
    <role-name>Everyone</role-name>
</security-role>
If you want to manage authorizations according to user roles, you should define the corresponding constraints
in the web.xml. The following example defines a resource available for users with role Developer, and another
resource for users with role Manager:
<security-constraint>
    <web-resource-collection>
        <web-resource-name>Developer Page</web-resource-name>
        <url-pattern>/developer.jsp</url-pattern>
    </web-resource-collection>
    <auth-constraint>
        <role-name>Developer</role-name>
    </auth-constraint>
</security-constraint>
<security-constraint>
    <web-resource-collection>
        <web-resource-name>Manager Page</web-resource-name>
        <url-pattern>/manager.jsp</url-pattern>
    </web-resource-collection>
    <auth-constraint>
        <role-name>Manager</role-name>
    </auth-constraint>
</security-constraint>
<login-config>
    <auth-method>FORM</auth-method>
</login-config>
Remember
If you define roles in the web.xml, you need to manage the role assignments of users after you deploy your
application on SAP Cloud Platform. See Managing Roles [page 1394].
Context
With programmatic authentication, you do not need to declare constrained resources in the web.xml file of your
application. Instead, you declare the resources as public, and you decide in the application logic when to trigger
authentication. In this case, you have to invoke the authentication API explicitly before executing any application
code that should be protected. You also need to check whether the user is already authenticated, and should not
trigger authentication if the user is logged on, except for certain scenarios where explicit re-authentication is
required.
If you trigger authentication in an SAP Cloud Platform application protected with FORM, the user is redirected to
SAP ID service or custom identity provider for authentication, and is then returned to the original application that
triggered authentication.
package hello;

import java.io.IOException;

import javax.security.auth.login.LoginContext;
import javax.security.auth.login.LoginException;
import javax.servlet.ServletException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

import com.sap.security.auth.login.LoginContextFactory;

public class HelloWorldServlet extends HttpServlet {
    ...
    protected void doGet(HttpServletRequest request,
            HttpServletResponse response) throws ServletException, IOException {
        String user = request.getRemoteUser();
        if (user != null) {
            response.getWriter().println("Hello, " + user);
        } else {
            LoginContext loginContext;
            try {
                loginContext = LoginContextFactory.createLoginContext("FORM");
                loginContext.login();
                response.getWriter().println("Hello, " + request.getRemoteUser());
            } catch (LoginException e) {
                e.printStackTrace();
            }
        }
    }
    ...
}
In the example above, you create LoginContext and call its login() method.
Note
All the steps below are described using the FORM authentication method, but they can also be applied to
BASIC.
Procedure
1. Open the source code of your HelloWorldServlet class. Add the code for programmatic authentication to the
doGet() method.
2. Make the doPost() method invoke programmatic authentication. This is necessary because the SAP ID
service always returns the SAML2 response over an HTTP POST binding, and in order to be processed
correctly, the LoginContext login must be called during the doPost() method. The authentication framework
is responsible for restoring the original request using GET after successful authentication. Another alternative
is that your doPost() method simply calls your doGet() method.
When BASIC authentication is used, you should see a popup window prompting you to provide credentials to
authenticate. Once these are entered successfully, the servlet content is displayed.
You can configure session timeout using the web.xml. Default value: 20 minutes. For example:
<session-config>
<session-timeout>15</session-timeout> <!-- in minutes -->
</session-config>
After the specified timeout, user sessions are invalidated. If the user tries to access an invalidated session, SAP
Cloud Platform returns a login page in its response, so the user can enter credentials again. If you are using
SAML as the login protocol, you cannot rely on the response code to find out that your session has expired, because it will
be 200 or 302. To check whether the response is meant to trigger a new login, read the
com.sap.cloud.security.login HTTP header, and reload the page. For example:
jQuery(document).ajaxComplete(function(e, jqXHR){
    if(jqXHR.getResponseHeader("com.sap.cloud.security.login")){
        alert("Session is expired, page shall be reloaded.");
        window.location.reload();
    }
});
1.9.3.1.1.4 Troubleshooting
When testing in the local scenario, if your application has Web-ContextPath: /, you might experience the
following problem with Microsoft Internet Explorer:
Output Code
HTTP Status 405 - HTTP method POST is not supported by this URL
If you see such issues, add the following code to your protected resource:
@Override
protected void doPost(HttpServletRequest req, HttpServletResponse resp)
        throws ServletException, IOException {
    doGet(req, resp);
}
Basic Authentication
Tip
Even though basic authentication is usually used for technical users to consume REST services (stateless
communication), we recommend that the client leverage the security session instead of sending credentials
with every call. This means the client needs to make sure it preserves and re-sends all HTTP cookies it receives.
Thus, authentication happens only once, which can improve performance.
Next Steps
You can now test the application locally. See Security Testing Locally [page 1381].
After testing, you can proceed with deploying the application to SAP Cloud Platform. See Deploying and Updating
Applications [page 1043].
After deploying on SAP Cloud Platform, you need to configure the role assignments users and groups will have for
this application. See Managing Roles [page 1394].
Optionally, you can configure the authentication options applied in the authentication method that you defined in
the web.xml or programmatically. See Configuring Authentication for Your Application [page 1392].
Example
To see the end-to-end scenario of managing roles on SAP Cloud Platform, watch the complete video tutorial
Managing Roles in SAP Cloud Platform .
if(!request.isUserInRole("Developer")){
response.sendError(403, "Logged in user does not have role Developer");
return;
} else {
out.println("Hello developer");
}
}
You can now test the application locally. For more information, see Security Testing Locally [page 1381].
After testing, you can proceed with deploying the application to SAP Cloud Platform. For more information, see
Deploying and Updating Applications [page 1043].
After deploying on SAP Cloud Platform, you need to configure the role assignments users and groups will have for
this application. For more information, see Managing Roles [page 1394].
The Authorization Management API is a REST API that allows you to manage role and group assignments of users
for Java and HTML5 applications and subscriptions.
Context
The Authorization Management API is protected with the OAuth 2.0 Client Credentials flow.
For detailed description of the available methods, see the Authorization Management API.
Note
HTML5 applications use a more feature-rich authorization model, which allows you to assign permissions to
various URI paths. Those permissions are then mapped to SAP Cloud Platform custom roles. Since all HTML5
applications are run via a central app called dispatcher from the services account, all of them share the same
custom roles and mappings. This is the reason why, when you are managing roles of HTML5 applications, you
need to use dispatcher for appName and services for providerAccount name in the API calls.
Context
To obtain an OAuth access token via the OAuth Client Credentials flow, you first need to create an OAuth client in
the Cockpit. The OAuth client is identified by a client ID and protected with a client secret. In a later step, those are
used to obtain the OAuth API access token from the OAuth access token endpoint.
1. In your Web browser, open the Cockpit. See Cockpit [page 97].
Caution
Make sure you save the generated client credentials. Once you close the confirmation dialog, you cannot
retrieve the generated client credentials from SAP Cloud Platform.
Context
Once you have the client credentials, you need to send an HTTP POST request to the OAuth access token endpoint
and use the client ID and client secret as user name and password for HTTP Basic authentication. You will receive the
access token as a response. By default, the access token received in this way is valid for 1500 seconds (25 minutes).
You can configure its validity length.
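The Basic credentials mentioned above are the client ID and client secret joined with a colon and Base64-encoded. A minimal sketch of building that header value (the helper name and sample credentials are illustrative):

```java
import java.nio.charset.StandardCharsets;
import java.util.Base64;

public class BasicAuthHeader {
    // Builds the value of the HTTP Authorization header for Basic
    // authentication from an OAuth client ID and client secret.
    static String basicAuth(String clientId, String clientSecret) {
        String pair = clientId + ":" + clientSecret;
        return "Basic " + Base64.getEncoder()
                .encodeToString(pair.getBytes(StandardCharsets.UTF_8));
    }

    public static void main(String[] args) {
        // Hypothetical client credentials, for illustration only
        System.out.println(basicAuth("myClientId", "mySecret"));
    }
}
```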
Procedure
1. Send a POST request to the OAuth access token endpoint. The URL is landscape specific, and looks like this:
https://api.<landscape_host>/oauth2/apitoken/v1?grant_type=client_credentials
The parameter grant_type=client_credentials notifies the endpoint that the Client Credentials flow is used.
2. Get and save the access token from the received response from the endpoint.
The response is a JSON object whose access_token parameter contains the access token. The token is valid for the
time (in seconds) specified in the expires_in parameter (default value: 1500 seconds).
Example
Retrieving an access token on the trial landscape will look like this:
POST https://api.hanatrial.ondemand.com/oauth2/apitoken/v1?
grant_type=client_credentials
Output Code
{
"access_token": "51ddd94b15ec85b4d54315b5546abf93",
"token_type": "Bearer",
"expires_in": 1500,
"scope": "hcp.manageAuthorizationSettings hcp.readAuthorizationSettings"
}
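A minimal sketch of pulling the access_token value out of a response body shaped like the sample above; a production application would use a real JSON parser rather than this illustrative regex:

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class TokenResponse {
    // Extracts the value of "access_token" from a JSON body like the
    // sample response above. Returns null if no token is present.
    static String accessToken(String json) {
        Matcher m = Pattern.compile("\"access_token\"\\s*:\\s*\"([^\"]+)\"")
                .matcher(json);
        return m.find() ? m.group(1) : null;
    }

    public static void main(String[] args) {
        String body = "{ \"access_token\": \"51ddd94b15ec85b4d54315b5546abf93\", "
                + "\"token_type\": \"Bearer\", \"expires_in\": 1500 }";
        System.out.println(accessToken(body)); // prints the token value
    }
}
```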
Procedure
In the requests to this API, include the access token in a header with the name Authorization and the value Bearer
<token value>.
Example
GET https://api.hanatrial.ondemand.com/authorization/v1/accounts/p1234567trial/
users/roles/?userId=myUser
Headers:
Authorization: Bearer 51ddd94b15ec85b4d54315b5546abf93
Related Information
You can access user attributes using the User Management Java API (com.sap.security.um.user). It can be
used to get and create users or to read and update their information.
<resource-ref>
<res-ref-name>user/Provider</res-ref-name>
<res-type>com.sap.security.um.user.UserProvider</res-type>
</resource-ref>
Then look up UserProvider via JNDI in the source code of your application. For example:
Note
If you are using the SDK for Java EE 6 Web Profile, you can look up UserProvider via annotation (instead of
embedding JNDI lookup in the code). For example:
@Resource
private UserProvider userProvider;
try {
// Read the currently logged in user from the user storage
return userProvider.getUser(request.getRemoteUser());
} catch (PersistenceException e) {
throw new ServletException(e);
}
import com.sap.security.um.user.User;
import com.sap.security.um.user.UserProvider;
import com.sap.security.um.service.UserManagementAccessor;
...
// Check for a logged in user
if (request.getUserPrincipal() != null) {
try {
// UserProvider provides access to the user storage
UserProvider users = UserManagementAccessor.getUserProvider();
// Read the currently logged in user from the user storage
User user = users.getUser(request.getUserPrincipal().getName());
// Print the user name and email
response.getWriter().println("User name: " + user.getAttribute("firstname")
        + " " + user.getAttribute("lastname"));
response.getWriter().println("Email: " + user.getAttribute("email"));
} catch (Exception e) {
// Handle errors
}
}
Next Steps
You can now test the application locally. For more information, see Security Testing Locally [page 1381].
After testing, you can proceed with deploying the application to SAP Cloud Platform. For more information, see
Deploying and Updating Applications [page 1043].
This topic describes how to enable users to log out from your applications.
Context
You can provide a logout operation for your application by adding a logout button or logout link.
When logout is triggered in an SAP Cloud Platform application, the user is redirected to the identity provider to be
logged out there, and is then returned to the original application URL that triggered the logout request.
The following code provides a sample servlet that handles logout operations. When loginContext.logout() is
used, the system automatically redirects the logout request to the identity provider, and then returns the user to
the logout servlet again.
import javax.security.auth.login.LoginContext;
import javax.security.auth.login.LoginException;

import com.sap.security.auth.login.LoginContextFactory;
...
public class LogoutServlet extends HttpServlet {
    ...
    // Call logout if the user is logged in
    LoginContext loginContext = null;
    if (request.getRemoteUser() != null) {
        try {
            loginContext = LoginContextFactory.createLoginContext();
            loginContext.logout();
        } catch (LoginException e) {
            // Servlet container handles the login exception
            // It throws it to the application for its information
            response.getWriter().println("Logout failed. Reason: " + e.getMessage());
        }
    } else {
        response.getWriter().println("You have successfully logged out.");
    }
    ...
}
CSRF is a common Web hacking attack. For more information, see Cross-Site Request Forgery (CSRF) (non-SAP link). You might consider protecting the logout operations of your applications from CSRF, to prevent your users from CSRF-related problems (for example, XSRF denial of service on single logout).
Note
Although SAP Cloud Platform provides ready-to-use support for CSRF filtering, you cannot use it with logout operations. The reason is that users are sent to the logout servlet twice: first, when they trigger logout by clicking a button or link, and second, when the identity provider has logged them out and redirected them back to the application. You cannot tell the system to apply the CSRF filter the first time and skip it the second time.
We add a logout link to the HelloWorld servlet, which references this logout servlet:
response.getWriter().println("<a href=\"LogoutServlet\">Logout</a>");
Source Code
try {
HttpSession session = request.getSession(false);
if(session != null){
long tokenValue = 0L;
if(session.getAttribute("csrf-logout") != null){
For logout to work, the servlet handling logout must not be protected in the web.xml. Otherwise,
requesting logout will result in a login request. The following example illustrates how to successfully unprotect a
logout servlet. The additional <security-constraint>...</security-constraint> section explicitly enables access to
the logout servlet.
<security-constraint>
    <web-resource-collection>
        <web-resource-name>Start Page</web-resource-name>
        <url-pattern>/*</url-pattern>
    </web-resource-collection>
    <auth-constraint>
        <role-name>Everyone</role-name>
    </auth-constraint>
</security-constraint>
<security-constraint>
    <web-resource-collection>
        <web-resource-name>Logout</web-resource-name>
        <url-pattern>/LogoutServlet</url-pattern>
    </web-resource-collection>
</security-constraint>
Avoid mapping a servlet to resources using a wildcard (<url-pattern>/*</url-pattern>) in the web.xml. This may
lead to an infinite loop. Instead, map the servlet to particular resources, as in the following example:
<servlet>
    <servlet-name>Logout Servlet</servlet-name>
    <servlet-class>test.LogoutServlet</servlet-class>
</servlet>
<servlet-mapping>
    <servlet-name>Logout Servlet</servlet-name>
    <url-pattern>/LogoutServlet</url-pattern>
</servlet-mapping>
Next Steps
You can now test the application locally. For more information, see Security Testing Locally [page 1381].
This section describes the error messages you may encounter when using BASIC authentication with SAP ID
Service as an identity provider.
For more information about using BASIC authentication, see Enabling Authentication [page 1326].
Error message: "Your account is temporarily locked. It will be automatically unlocked in 60 minutes."
Explanation: SAP ID service has registered five unsuccessful login attempts for this account in a short time. For security reasons, your account is disabled for 60 minutes.

Error message: "Password authentication is disabled for your account. Log in with a certificate."
Explanation: the owner of this account has disabled password authentication using their user profile settings in SAP ID service.

Error message: "Inactive account. Activate it via your account creation confirmation email."
Explanation: this is a new account and you haven't activated it yet. You will receive an e-mail confirming your account creation and containing an account activation link.

Error message: "Login failed. Contact your administrator."
Explanation: you cannot log in for a reason different from all others listed here.
SAP Cloud Platform supports the OAuth 2.0 protocol as a reliable way to protect application resources. The
current document describes the specifics of implementing an OAuth-protected application (resource server) for
SAP Cloud Platform.
Overview
OAuth 2.0
OAuth has taken off as a standard way and a best practice for applications and websites to handle authorization.
OAuth defines an open protocol for allowing secure API authorization of desktop, mobile and web applications
through a simple and standard method.
In this way, OAuth mitigates some of the common concerns with authorization scenarios.
The following table shows the roles defined by OAuth, and their respective entities in SAP Cloud Platform:
● Authorization server: the SAP Cloud Platform infrastructure. The server that manages the authentication and authorization of the different entities involved.
If you want to implement a login based on credentials in the form of an OAuth token, you can do that by using
OAuth as a login method in your application web.xml. For example:
<login-config>
<auth-method>OAUTH</auth-method>
</login-config>
<security-constraint>
<web-resource-collection>
<web-resource-name>Protected Area</web-resource-name>
<url-pattern>/rest/get-photos</url-pattern>
</web-resource-collection>
<auth-constraint>
<!-- Role Everyone will not be assignable -->
<role-name>Everyone</role-name>
</auth-constraint>
</security-constraint>
<security-role>
<description>All SAP Cloud Platform users</description>
<role-name>Everyone</role-name>
</security-role>
In your protected application you can acquire the user ID and attributes as described in Working with User Profile
Attributes [page 1335].
There are two additional user attributes you can use to retrieve token specific information:
Handling Sessions
The Java EE specification requires session support on the client side. Sessions are maintained with a cookie, which
the client receives during authentication and then passes along to the server on every request. The OAuth
specification, however, does not necessarily require the client to support such a session mechanism; that is,
support of cookies is not mandatory. On every request, the client passes along to the server only the token
instead of cookies. Using the OAuth login module described in the Protecting Resources Declaratively
section, you can implement a user login based on an access token. The login, however, occurs on every request,
and thus implies the risk of creating too many sessions in the Web container.
To use requests that do not hold a Web container session, use a filter with the proper configuration, as described
in the following example:
<filter>
    <display-name>OAuth scope definition for viewing a photo album</display-name>
    <filter-name>OAuthViewPhotosScopeFilter</filter-name>
    <filter-class>
        com.sap.cloud.security.oauth2.OAuthAuthorizationFilter
    </filter-class>
    <init-param>
        <param-name>scope</param-name>
        <param-value>view-photos_upload-photos</param-value>
    </init-param>
    <init-param>
        <param-name>no-session</param-name>
        <param-value>true</param-value>
    </init-param>
</filter>
One of the ways to enforce scope checks for resources is to declare the resource protection in the web.xml. This is
done by specifying the following elements:
Element: initial parameters
Description: with these, you specify the scope, user principal, and HTTP method:
● scope
● http-method
● user-principal - if set to "yes", you will get the user ID
● no-session - if you set this to "true", the session will be destroyed when you finish using the filter. This means
The following example shows a sample web.xml for defining and configuring OAuth resource protection for the
application.
<filter>
    <display-name>OAuth scope definition for viewing a photo album</display-name>
    <filter-name>OAuthViewPhotosScopeFilter</filter-name>
    <filter-class>
        com.sap.cloud.security.oauth2.OAuthAuthorizationFilter
    </filter-class>
    <init-param>
        <param-name>scope</param-name>
        <param-value>view-photos</param-value>
    </init-param>
    <init-param>
        <param-name>http-method</param-name>
        <param-value>get post</param-value>
    </init-param>
</filter>
In this code snippet you can observe how the PhotoAlbumServlet is mapped to the previously specified OAuth
scope filter:
<filter-mapping>
<filter-name>OAuthViewPhotosScopeFilter</filter-name>
<servlet-name>PhotoAlbumServlet</servlet-name>
</filter-mapping>
If you would like to use URL pattern instead, simply specify the pattern that should apply here:
<filter-mapping>
<filter-name>OAuthViewPhotosScopeFilter</filter-name>
<url-pattern>/photos/*.jpg</url-pattern>
</filter-mapping>
In the second case, all files with the *.jpg extension that are served from the /photos directory will be protected
by the OAuth filter.
For more information regarding possible mappings, see the filter-mapping element specification.
Alternatively to the declarative approach with the web.xml (described above), you can use the OAUTH login
module programmatically. For more information, see Programmatic Authentication [page 1329].
When a resource protected by OAuth is requested, your application must pass the access token using the HTTP
"Authorization" request header field. The value of this header must be the token type and access token value. The
currently supported token type is "bearer".
When the protected resource access check is performed, the filter calls the API, and the API calls the authorization
server to check the validity of the access token and retrieve the token's scopes.
The table below presents the result handling between the authorization server and the resource server, between the
resource server and the API, and between the resource server and the filter.
If user-principal=true, request.getUserPrincipal().getName() returns the user_id. On failure, the reason returned is either "access_forbidden" or "missing_access_token".
Next Steps
1. You can now deploy the application on SAP Cloud Platform. For more information, see Deploying and
Updating Applications [page 1043]
2. After you deploy, you need to configure clients and scopes for the application. For more information, see
Configuring OAuth 2.0 [page 1425].
SAP Cloud Platform supports the client credentials grant flow from the OAuth 2.0 specification. This flow enables
the grant of an OAuth access token based on the client credentials only, without user interaction. You can use this
flow to enable system-to-system communication (with a service user), for example, in device communication in
an Internet of Things scenario.
Context
The current procedure is for application developers who need their SAP Cloud Platform applications to be enabled
for the OAuth 2.0 client credentials grant.
Procedure
1. Register a new OAuth client of type Confidential. See Registering an OAuth Client [page 1426].
2. Using that client, you can get an access token with a REST call to the endpoints shown in the cockpit under
Security > OAuth > Branding .
Create a REST call containing grant_type=client_credentials, the client ID, and the password.
Tip
You can use the client ID returned as remote user to assign Java EE roles to clients, and use them for
role-based authorizations. See:
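The token request in step 2 can be sketched as follows (the endpoint path and host are placeholders; use the actual endpoints shown in the cockpit, and your own client ID and password in the Basic authorization header):

```
POST /oauth2/api/v1/token HTTP/1.1
Host: oauthasservices-myaccount.hana.ondemand.com
Authorization: Basic <base64 of clientID:password>
Content-Type: application/x-www-form-urlencoded

grant_type=client_credentials
```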
Cross-site request forgery (CSRF or XSRF) is also known as one-click attack or session riding. The key step of the
attack is that a malicious user tricks the victim’s browser into executing an HTTP request on behalf of the valid
user. As a result, a security-sensitive action is performed on the server side. If the victim has already logged in to
the attacked site, the browser has valid session cookies and sends them automatically with subsequent requests. The
server trusts these requests based on the valid cookies sent by the browser and confirms that the action has been
initiated by the victim.
The predictability of the HTTP request is a prerequisite for the attacker to be able to insert a request in advance in
order to make the browser execute it. Therefore, the common prevention to this attack is to embed a secret
unpredictable token into the request, unique for each session or request.
Table 417:
URL encoding approach
● Description: Based on the CSRF Prevention Filter provided by Apache Tomcat 7. The prevention mechanism
is based on a token (a nonce value) generated on each request and stored in the session. The token is used to
encode all URLs on the entry point sites. Upon a request to a protected URL, the existence and value of the
token are checked. The request is allowed to proceed only if the nonce from the token equals the one stored in
the session. The prevention mechanism is applied for all URLs mapped to the filter except for specially defined
entry points.
● When to use: This is the most common CSRF protection. Use it for protecting resources that are supposed to
be accessed via some sort of navigation, for example, if there is a reference to them in an entry point page
(included in links, post forms, and so on).
● How to use: See Using the Apache Tomcat CSRF Prevention Filter [page 1349].

Custom header approach
● Description: Based on a secret token (a nonce value) generated on the server side and stored in the session,
but unlike the first approach, here the token is transported as a custom header of the HTTP requests.
● When to use: Use it when URL encoding is not suitable, for example, when protecting resources that are
requested only as REST APIs (one-time requests that should be served independently from previous requests
and are not included in links and HTML forms). The same approach is implemented in other SAP web
application servers such as AS ABAP and SAP HANA XS, and is supported by SAPUI5. Common scenarios
that can benefit from this approach are those using OData services, REST, AJAX, and so on.
● How to use: See Using Custom Header Protection [page 1351].

Custom CSRF filtering implementation
● Description: If you cannot use URL encoding or custom header protection, you can implement your own
custom CSRF filtering.
● When to use: Use it when implementing single logout (SLO) for SAP Cloud Platform applications. Due to
redirects to the SAML 2.0 identity provider, you cannot use the out-of-the-box approaches listed here (custom
header protection or URL encoding).
● How to use: See Enabling Logout [page 1337].
Note
These approaches cannot be applied together to protect one and the same web resource.
Prerequisites
You have created a working Web application and have enforced authentication for it. See Enabling Authentication
[page 1326]
For the purposes of this tutorial, an example application consisting of the following URLs will be used:
● /home - displays home page, and has links to /doActionA and /doActionB
● /doActionA - executes a security sensitive action A, and also has a link to /doActionB
● /doActionB - executes a security sensitive action B
Entry points are URLs used as a starting point for the navigation across the application. They are not protected
against CSRF as requests to them will not be tested for the presence of a valid nonce. Entry points should meet
the following criteria:
Considering the example application, /doActionA and /doActionB are not suitable entry points since they
are state-changing URLs; they should be protected against CSRF. Following the rules above, you can easily
conclude that /home is best suited to be the entry point.
The CSRF Prevention Filter should be defined in the web.xml configuration file. Important init parameters are
entryPoints and nonceCacheSize. The first parameter's value is a comma-separated list of the entry points
identified in the previous step; in this case, /home.
The second parameter, nonceCacheSize, should be used in case parallel requests cause a new nonce to be
generated before an encoded URL is validated. It defines the number of previous nonce values stored. The
default number is 5.
The definition below will protect all URLs except for the entry point /home.
<filter>
<filter-name>CsrfFilter</filter-name>
<filter-class>org.apache.catalina.filters.CsrfPreventionFilter</filter-class>
<init-param>
<param-name>entryPoints</param-name>
<param-value>/home</param-value>
</init-param>
</filter>
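If parallel requests make a larger nonce cache necessary, the nonceCacheSize parameter can be added to the same filter definition (the value here is illustrative):

```xml
<init-param>
    <param-name>nonceCacheSize</param-name>
    <param-value>10</param-value>
</init-param>
```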
The general recommendation is to enable the filter for all URLs using the pattern /*:
<filter-mapping>
<filter-name>CsrfFilter</filter-name>
<url-pattern>/*</url-pattern>
</filter-mapping>
In the example application the URLs that should be encoded are /protected/doActionA and /protected/
doActionB in /protected/home, and the /protected/doActionB URL in /protected/doActionA. To
encode the URLs use HttpServletResponse#encodeRedirectURL(String) or
HttpServletResponse#encodeURL(String).
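A sketch of what this looks like in a JSP rendering the /protected/home page (the paths follow the example application; with the CSRF Prevention Filter active, the standard servlet encodeURL call appends the nonce):

```jsp
<%-- Links to CSRF-protected actions must carry the nonce, so encode them --%>
<a href="<%= response.encodeURL("/protected/doActionA") %>">Do action A</a>
<a href="<%= response.encodeURL("/protected/doActionB") %>">Do action B</a>
```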
If a new URL needs to be added to the application later, for example, /newlink, you should evaluate whether it
needs CSRF protection. For example, if it executes a state-changing action, it certainly should be protected.
Depending on the case, there are two possibilities:
All CSRF-protected links that are used in the new page should be encoded, as described in step 4.
Context
Custom header protection is one of the possible approaches for CSRF protection. It is based on adding a servlet
filter that inspects state-modifying requests for the presence of a valid CSRF token. The CSRF token is transferred
as a custom header and is valid during the user session. This kind of protection specifically addresses REST
scenarios.
In a nutshell, the REST CSRF protection mechanism consists of the following communication steps:
1. The REST CLIENT obtains a valid CSRF token with an initial non-modifying "Fetch" request to the application.
2. The SERVER responds with the valid CSRF token mapped to the current user session.
3. The REST CLIENT includes the valid CSRF token in the subsequent modifying REST requests in the frame of
the same user session.
4. The SERVER rejects all modifying requests to protected resources that do not contain the valid CSRF token.
Custom header CSRF protection mechanism requires adoption both in the client (JavaScript) and server (REST)
parts of the Web applications.
To better illustrate the mechanism we’ll use an example web application exposing the following REST APIs. We’ll
use the same example application throughout the document.
Table 418:
Number | REST API | Exposed with HTTP methods | Description | Type
Prerequisites
You have created a working Web application and have enforced authentication for it, as described in Enabling
Authentication [page 1326]. All CSRF protected resources should be protected with an authentication
mechanism.
Procedure
In the application's web.xml, protect all REST APIs using the out-of-the-box CSRF filter available with the SAP
Cloud Platform SDK.
Note
You must have at least one non-modifying REST operation listed.
Identify all web application resources that have to be CSRF protected and map them to
org.apache.catalina.filters.RestCsrfPreventionFilter (this class represents the out-of-the-box filter).
Note
If you are using an older version of the SAP Cloud Platform runtime for Java, use the
com.sap.core.js.csrf.RestCsrfPreventionFilter class instead. It delivers the same implementation
as the other one. Namely, use that class with the following runtime versions:
As a result, all modifying HTTP requests matching the given url-pattern are CSRF validated, that is, checked
for the presence of a valid CSRF token.
Applications should expose at least one non-modifying REST operation to enable the CSRF token fetch mechanism.
To obtain a valid CSRF token, the clients need to make an initial fetch request. That is why the non-modifying
REST API is necessary. Requirements for the non-modifying REST API:
○ Any GET/HEAD/OPTIONS request to the URL shall not cause state modification.
○ The URL should be mapped to the RestCsrfPreventionFilter.
○ The URL should be protected with an authentication mechanism.
Example
The following example illustrates mapping a set of modifying REST APIs and one non-modifying REST API to
the CSRF protection filter in the application’s web.xml deployment descriptor:
<filter>
    <filter-name>RestCSRF</filter-name>
    <filter-class>org.apache.catalina.filters.RestCsrfPreventionFilter</filter-class>
</filter>
<filter-mapping>
    <filter-name>RestCSRF</filter-name>
    <!-- modifying REST APIs -->
    <url-pattern>/services/customers/removeCustomer</url-pattern>
    <url-pattern>/services/customers/addCustomer</url-pattern>
    <url-pattern>/services/customers/initCustomers</url-pattern>
    <!-- non-modifying REST API -->
    <url-pattern>/services/customers/list</url-pattern>
</filter-mapping>
2. In REST Clients
Procedure
1. As a first step, the REST client should obtain a valid CSRF token for the current session. To do this, it makes a
non-modifying request that includes the custom header "X-CSRF-Token: Fetch". The returned [sessionid
– csrf token] pair should be cached by the client.
Client Request:
GET /restDemo/services/customers/list HTTP/1.1
X-CSRF-Token: Fetch
Authorization: Basic dG9tY2F0OnRvbWNhdA==
Host: localhost:8080
Server Response:
HTTP/1.1 200 OK
Set-Cookie: JSESSIONID=4BA3D75B73B8C4591F1D915BA9C2B660; Path=/restDemo/;
HttpOnly
X-CSRF-Token: 5A44B387B75E54417F6C64FF3D485141
..
2. Use the cached [sessionid – csrf token] pair for subsequent REST requests.
Subsequent modifying REST requests to the same application should include the valid jsessionid cookie and
the valid X-CSRF-Token header.
Client Request:
POST /restDemo/services/customers/removeCustomer HTTP/1.1
Cookie: JSESSIONID=4BA3D75B73B8C4591F1D915BA9C2B660
X-CSRF-Token: 5A44B387B75E54417F6C64FF3D485141
Authorization: Basic dG9tY2F0OnRvbWNhdA==
Host: localhost:8080
Server Response:
HTTP/1.1 200 OK
..
If a modifying request does not contain a valid CSRF token, the server rejects it:
403 Forbidden
X-CSRF-Token: Required
Exceptional Cases
Context
In a small number of use cases, the client is not able to insert custom headers in its calls to a REST API, for
example, file uploads via a POST HTML form consuming a REST API. Only for such use cases, there is an
additional option: accepting the CSRF token as a request parameter for explicitly configured paths.
Tip
For security reasons we strongly recommend the following:
● Use this approach only when the header approach cannot be applied.
● Use only a hidden POST parameter with the name X-CSRF-Token, not query parameters.
<filter>
    <filter-name>CSRF</filter-name>
    <filter-class>org.apache.catalina.filters.RestCsrfPreventionFilter</filter-class>
    <init-param>
        <param-name>pathsAcceptingParams</param-name>
        <param-value>/services/customers/acceptedPath1.jsp,/services/customers/acceptedPath2.jsp</param-value>
    </init-param>
</filter>
<filter-mapping>
    <filter-name>CSRF</filter-name>
    <url-pattern>/services/customers/*</url-pattern>
</filter-mapping>
This document describes how to protect SAP Cloud Platform applications from XSS attacks.
Cross-site Scripting (XSS) is the name of a class of security vulnerabilities that can occur in Web applications. It
summarizes all vulnerabilities that allow an attacker to inject HTML Markup and/or JavaScript into the affected
Web application's front-end.
XSS can occur whenever the application dynamically creates its HTML/JavaScript/CSS content, which is passed
to the user's Web browser, and attacker-controlled values are used in this process. In case these values are
included into the generated HTML/JavaScript/CSS without proper validation and encoding, the attacker is able to
include arbitrary HTML/JavaScript/CSS into the application's frontend, which in turn is rendered by the victim's
Web browser and, thus, interpreted in the victim's current authentication context.
For more information about the security measures implemented by SAPUI5, see Securing SAPUI5 Applications.
Note
Using the XSS output encoding library is given as an option that you can use for your applications. You can
successfully use your custom or third-party XSS protection libraries that you have available.
SAP Cloud Platform provides an output encoding library that helps protect against XSS vulnerabilities. It is a
central library that implements several encoding methods for the different contexts.
The interface provides methods for retrieving parameters or attributes, and for encoding and decoding data.
It also has various methods for different data types that should be encoded:
To use the XSS output encoding API, you need to add it as a library to the Dynamic Web Project. This is done with
the following steps:
In the following example, we demonstrate the use of the XSS Output Encoding API. The example has one HTML
form that retrieves user input, which can contain malicious code:
Even though the attacker might attempt to inject malicious code in both parameters - firstname and lastname, the
firstname is protected, since it uses the output encoding library to neutralize all special symbols. However, the
attack attempt will be successful for the lastname parameter since it is printed directly to the output. This is
unsafe behavior and should be avoided.
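Since the example source itself is not reproduced here, the following sketch illustrates the pattern. Note that encodeHtml is a simplified stand-in for the platform library's HTML encoding method, not its actual API:

```java
// Illustration of XSS output encoding; encodeHtml is a simplified stand-in
// for the platform's output encoding library, not its actual API.
public class XssEncodingExample {

    // Replaces HTML metacharacters with their character references so that
    // attacker-controlled input is rendered as text, not markup.
    static String encodeHtml(String input) {
        StringBuilder sb = new StringBuilder(input.length());
        for (char c : input.toCharArray()) {
            switch (c) {
                case '<':  sb.append("&lt;");   break;
                case '>':  sb.append("&gt;");   break;
                case '&':  sb.append("&amp;");  break;
                case '"':  sb.append("&quot;"); break;
                case '\'': sb.append("&#x27;"); break;
                default:   sb.append(c);
            }
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        // Attacker-controlled form input, as in the firstname/lastname example
        String firstname = "<script>alert(1)</script>";

        // Safe: special symbols are neutralized before reaching the browser
        System.out.println("Hello " + encodeHtml(firstname));

        // Unsafe: printing the raw value would let the injected script run
        // out.println("Hello " + firstname); // do NOT do this
    }
}
```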
1.9.3.1.10 Cryptography
The Keystore Service provides a repository for cryptographic keys and certificates to the applications hosted on
SAP Cloud Platform.
If you want to use cryptography with unlimited strength in an SAP Cloud Platform application, you need to enable
it by installing the necessary Java Cryptography Extension (JCE) Unlimited Strength Jurisdiction Policy Files on
SAP JVM.
Overview
The Keystore Service provides a repository for cryptographic keys and certificates to the applications hosted on
SAP Cloud Platform. By using the Keystore Service, applications can easily retrieve keystores and use them
in various cryptographic operations, such as signing and verifying digital signatures, encrypting and decrypting
messages, and performing SSL communication.
The Keystore Service stores and provides keystores encoded in the following formats:
Configuring Keystores
The keystore service works with keystores available on the following levels:
● Subscription level
Keystores available for a certain application provided by another account.
● Application level
Keystores available for a certain application in a particular consumer account.
● Account level
Keystores available for all applications in a particular consumer account.
When searching for a keystore with a certain name, the Keystore Service searches the levels in the following
order: subscription level, application level, account level.
Once a keystore with the specified name has been found at a certain level, further levels are not searched.
To consume the Keystore Service, you need to add the following reference to your web.xml file:
<resource-ref>
<res-ref-name>KeyStoreService</res-ref-name>
<res-type>com.sap.cloud.crypto.keystore.api.KeyStoreService</res-type>
</resource-ref>
Then, in the code you can look up Keystore Service API via JNDI:
import com.sap.cloud.crypto.keystore.api.KeyStoreService;
...
KeyStoreService keystoreService = (KeyStoreService) new
InitialContext().lookup("java:comp/env/KeyStoreService");
For more information, see Tutorial: Using the Keystore Service for Client Side HTTPS Connections.
Related Information
The keystore console commands are called from the SAP Cloud Platform console client and allow users to list,
upload, download, and delete keystores. To be able to use them, the user must have administrative rights for that
account. The console supports the following keystore commands: list-keystores, upload-keystore, download-
keystore, and delete-keystore.
Related Information
SAP JVM, used by SAP Cloud Platform, trusts the certificate authorities (CAs) listed below by default. This means
that external HTTPS services using X.509 server certificates issued by those CAs are trusted by default on
SAP Cloud Platform, and no trust needs to be configured manually.
For SSL connections to services that use different certificate issuers, you need to configure trust using the
keystore service of the platform. For more information, see Tutorial: Using the Keystore Service for Client Side
HTTPS Connections [page 1363].
Properties
Table 419:
Certificate Alias Certificate Name Certificate SHA1
Related Information
Prerequisites
● You have downloaded and configured the SAP Eclipse platform. For more information, see Setting Up the
Development Environment [page 43].
● You have created a HelloWorld Web application as described in the Creating a HelloWorld Application tutorial.
For more information, see Creating a HelloWorld Application [page 56].
● You have an HTTPS server hosting a resource which you would like to access in your application.
● You have prepared the required key material as .jks files in the local file system.
Note
File client.jks contains a client identity key pair trusted by the HTTPS server, and cacerts.jks
contains all issuer certificates for the HTTPS server. The files are created with the keytool from the
standard JDK distribution. For more information, see Key and Certificate Management Tool .
Context
This tutorial describes how to extend the HelloWorld Web application to use SAP Cloud Platform Keystore
Service. It tells you how to make an SSL connection to an external HTTPS server by using the JDK and Apache
HTTP Client. For more information about the HelloWorld Web application, see Creating a HelloWorld Application
[page 56].
You test and run the application on your local server and on SAP Cloud Platform.
Procedure
To enable the look-up of the Keystore Service through JNDI, you need to add a resource reference entry to
the web.xml descriptor.
<resource-ref>
<res-ref-name>KeyStoreService</res-ref-name>
<res-type>com.sap.cloud.crypto.keystore.api.KeyStoreService</res-type>
</resource-ref>
package com.sap.cloud.sample.keystoreservice;
import java.io.BufferedReader;
import java.io.BufferedWriter;
import java.io.IOException;
import java.io.InputStreamReader;
import java.io.OutputStreamWriter;
import java.io.PrintWriter;
import java.security.KeyStore;
import javax.naming.Context;
import javax.naming.InitialContext;
import javax.naming.NamingException;
import javax.net.ssl.KeyManager;
import javax.net.ssl.KeyManagerFactory;
import javax.net.ssl.SSLContext;
import javax.net.ssl.SSLSocket;
import javax.net.ssl.SSLSocketFactory;
import javax.net.ssl.TrustManager;
import javax.net.ssl.TrustManagerFactory;
import javax.servlet.ServletException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
import com.sap.cloud.crypto.keystore.api.KeyStoreService;
public class SSLExampleServlet extends HttpServlet {
    private static final long serialVersionUID = 1L;

    /**
     * @see HttpServlet#doGet(HttpServletRequest request, HttpServletResponse response)
     */
    protected void doGet(HttpServletRequest request, HttpServletResponse response)
            throws ServletException, IOException {
        // get Keystore Service
        KeyStoreService keystoreService;
        try {
            Context context = new InitialContext();
            keystoreService = (KeyStoreService) context.lookup("java:comp/env/KeyStoreService");
        } catch (NamingException e) {
            response.getWriter().println("Error:<br><pre>");
            e.printStackTrace(response.getWriter());
            response.getWriter().println("</pre>");
            throw new ServletException(e);
        }

        String host = request.getParameter("host");
        if (host == null || (host = host.trim()).isEmpty()) {
            response.getWriter().println("Host is not specified");
            return;
        }
        String port = request.getParameter("port");
        if (port == null || (port = port.trim()).isEmpty()) {
            port = "443";
        }
        String path = request.getParameter("path");
        if (path == null || (path = path.trim()).isEmpty()) {
            path = "/";
        }
        String clientKeystorePassword = request.getParameter("client.keystore.password");
        if (clientKeystorePassword == null
                || (clientKeystorePassword = clientKeystorePassword.trim()).isEmpty()) {
            response.getWriter().println("Password for client keystore is not specified");
            return;
        }

        // keystore names as uploaded to the Keystore Service (client.jks and
        // cacerts.jks from the prerequisites, without the file extension)
        String clientKeystoreName = "client";
        String trustedCAKeystoreName = "cacerts";

        // get a named keystore with password for integrity check
        KeyStore clientKeystore;
        try {
            clientKeystore = keystoreService.getKeyStore(clientKeystoreName,
                    clientKeystorePassword.toCharArray());
        } catch (Exception e) {
            response.getWriter().println("Client keystore is not available: " + e);
            return;
        }

        // get a named keystore without integrity check
        KeyStore trustedCAKeystore;
        try {
            trustedCAKeystore = keystoreService.getKeyStore(trustedCAKeystoreName, null);
        } catch (Exception e) {
            response.getWriter().println("Trusted CAs keystore is not available: " + e);
            return;
        }
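The tutorial continues by building an SSL connection from the two retrieved keystores. As an illustration of that step with plain JDK APIs (a sketch, assuming the JDK default key and trust manager algorithms; passing null as the trust store falls back to the JVM default trust material):

```java
import java.security.KeyStore;
import javax.net.ssl.KeyManagerFactory;
import javax.net.ssl.SSLContext;
import javax.net.ssl.TrustManagerFactory;

public class SslContextSketch {

    // Builds an SSLContext that presents the client identity from clientKeystore
    // and validates the server against trustedCAKeystore.
    public static SSLContext buildSslContext(KeyStore clientKeystore, char[] keyPassword,
            KeyStore trustedCAKeystore) throws Exception {
        KeyManagerFactory kmf =
                KeyManagerFactory.getInstance(KeyManagerFactory.getDefaultAlgorithm());
        kmf.init(clientKeystore, keyPassword);

        TrustManagerFactory tmf =
                TrustManagerFactory.getInstance(TrustManagerFactory.getDefaultAlgorithm());
        // null falls back to the JVM default trust store
        tmf.init(trustedCAKeystore);

        SSLContext sslContext = SSLContext.getInstance("TLS");
        sslContext.init(kmf.getKeyManagers(), tmf.getTrustManagers(), null);
        return sslContext;
    }

    public static void main(String[] args) throws Exception {
        // Empty in-memory keystore just to show the wiring works end to end
        KeyStore client = KeyStore.getInstance(KeyStore.getDefaultType());
        client.load(null, null);
        SSLContext ctx = buildSslContext(client, new char[0], null);
        System.out.println(ctx.getSocketFactory() != null ? "ok" : "fail");
    }
}
```

An SSLSocket to the remote server can then be created with ctx.getSocketFactory().createSocket(host, port).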
f. Save the Java editor and make sure that the project compiles without errors.
3. Deploy and Test the Web Application
Procedure
1. Add the required .jar files of the Apache HTTP Client (version 4.2 or higher) to the build path of your project.
2. Add the following imports:
import org.apache.http.HttpEntity;
import org.apache.http.HttpResponse;
import org.apache.http.client.methods.HttpGet;
import org.apache.http.conn.scheme.Scheme;
import org.apache.http.conn.scheme.SchemeSocketFactory;
import org.apache.http.conn.ssl.SSLSocketFactory;
import org.apache.http.impl.client.DefaultHttpClient;
import org.apache.http.util.EntityUtils;
3. Replace the callHTTPSServer() method with one that uses the Apache HTTP client.
Related Information
Procedure
1. To deploy your Web application on the local server, follow the steps for deploying a Web application locally as
described in Deploying Locally from Eclipse IDE [page 1045].
2. To upload the required keystores, copy the prepared client.jks and cacerts.jks files into <local
server root>\config_master\com.sap.cloud.crypto.keystore subfolder.
3. To test the functionality, open the following URL in your Web browser: http://localhost:<local server
HTTP port>/HelloWorld/SSLExampleServlet?host=<remote HTTPS server host
name>&port=<remote HTTPS server port number>&path=<remote HTTPS server
resource>&client.keystore.password=<client identity keystore password>.
Related Information
Procedure
1. To deploy your Web application on the cloud, follow the steps for deploying a Web application to SAP Cloud
Platform as described in Deploying on the Cloud with the Console Client [page 1053].
Example
Assuming you have myAccount account, myApplication application, myUser user, and the keystore files in
folder C:\Keystores, you need to execute the following commands in your local <SDK root>\tools
folder:
For more information about the keystore console commands, see Keystore Console Commands [page
1360].
3. To test the functionality, open the application URL shown in the SAP Cloud Platform cockpit with the following
options: <SAP Cloud Platform Application URL>/SSLExampleServlet?host=<remote HTTPS
server host name>&port=<remote HTTPS server port number>&path=<remote HTTPS server
resource>&client.keystore.password=<client identity keystore password>.
For more information, see Starting and Stopping Applications [page 1149].
Related Information
You can enable the users for your Web application to authenticate using client certificates. This corresponds to
the CERT and BASICCERT authentication methods supported in Java EE.
Overview
Prerequisites
(For the mapping modes requiring certificate authorities) You have a keystore defined. See Keys and Certificates
[page 1358].
Using information in the client certificate, SAP Cloud Platform will map the certificate to a user name using the
mapping mode you specify.
Context
By default, SAP Cloud Platform supports SSL communication for Web applications through a reverse proxy that
does not request a client certificate. To enable client certificate authentication, you need to configure the reverse
proxy to request a client certificate.
For more information about the trusted certificate authorities (CAs) for SAP Cloud Platform, see Trusted
Certificate Authorities for Client Certificate Authentication [page 1374].
In your Web application, use declarative or programmatic authentication to protect application resources.
Use one of the following two methods for client certificate authentication:
If you use the declarative approach, you need to specify the authentication method in the application web.xml file.
See Declarative Authentication [page 1326].
If you use the programmatic approach, specify the authentication method as a parameter for the login context
creation. For more information, see Programmatic Authentication [page 1329].
The user mapping defines how the user name is derived from the received client certificate. You configure user
mapping using Java system properties.
com.sap.cloud.crypto.clientcert.keystore_name
Defines the name of the keystore used during the user mapping process. It is mandatory for the mapping modes
that use the keystore.
Note
Use a keystore that is available in the Keystore Service. See Keys and Certificates [page 1358].
Note
Use the keystore name without the keystore file extension (jks, for example).
Note
Depending on the value of the com.sap.cloud.crypto.clientcert.mapping_mode property, setting the
com.sap.cloud.crypto.clientcert.keystore_name property may be mandatory.
For more information about how to set the value of the system property, see Configuring VM Arguments [page 1145].
For more information about the particular values you need to set, see the table below.
CN
● Description: The user name equals the common name (CN) of the certificate's subject.
● Configuration: Set the com.sap.cloud.crypto.clientcert.mapping_mode property with value CN.
● Example: A client certificate with cn=myuser,ou=security as a subject is mapped to a myuser user name.
● Note: The client certificate is not accepted if its issuer is not in the keystore or is not in a chain trusted by this
keystore, and then the authentication fails. For more information about the Keystore Service, see Keys and
Certificates [page 1358].

CN@Issuer
● Description: The user name is defined as <CN of the certificate's subject>@<keystore alias of the
certificate's issuer>. Use this mapping mode when you have certificates with identical CNs. The issuer is
trusted if it is in the keystore or is part of a trusted certificate chain. A certificate chain is trusted if at least one
of its issuers exists in the keystore.
● Configuration: Set com.sap.cloud.crypto.clientcert.mapping_mode with value CN@Issuer, and
com.sap.cloud.crypto.clientcert.keystore_name with the name of the keystore containing the
trusted issuers.
● Example: A client certificate with CN=john, C=DE, O=SAP, OU=Development as a subject and CN=SSO CA,
O=SAP as an issuer is received. The specified keystore with trusted issuers contains the same issuer, CN=SSO
CA, O=SAP, with an sso_ca alias. Then the user name is defined as john@sso_ca.
● Note: The client certificate is not accepted if its issuer is not in the keystore or is not in a chain trusted by this
keystore, and then the authentication fails. For more information about the Keystore Service, see Keys and
Certificates [page 1358].

wholeCert
● Description: The whole client certificate is compared with each entry in the specified keystore, and the user
name is defined as the alias of the matching entry.
● Configuration: Set com.sap.cloud.crypto.clientcert.mapping_mode with value wholeCert, and
com.sap.cloud.crypto.clientcert.keystore_name with the name of the keystore containing the
respective user certificates.
● Example: The following client certificate is received: Subject: CN=john.miller, C=DE, O=SAP,
OU=Development; Validity Start Date: March 19 09:04:32 2013 GMT; Validity End Date: March 19 09:04:32
2018 GMT; … The specified keystore contains the same certificate with an alias john. Then the user name is
defined as john.
● Note: The client certificate is not accepted if no exact match is found in the specified keystore, and then the
authentication fails. For more information about the Keystore Service, see Keys and Certificates [page 1358].

subjectAndIssuer
● Description: Only the subject and issuer fields of the received client certificate are compared with those of
each keystore entry, and the user name is defined as the alias of the matching entry. Use this mapping mode
when you want authentication by validating only the certificate's subject and issuer.
● Configuration: Set com.sap.cloud.crypto.clientcert.mapping_mode with value subjectAndIssuer,
and com.sap.cloud.crypto.clientcert.keystore_name with the name of the keystore containing the
respective user certificates.
● Example: A certificate with CN=john.miller, C=DE, O=SAP, OU=Development as a subject and CN=SSO CA,
O=SAP as an issuer is received. The specified keystore contains a certificate with alias john that has the same
subject and issuer fields. Then the user name is defined as john.
● Note: The client certificate is not accepted if an entry with the same subject and issuer is missing in the
specified keystore, and then the authentication fails. For more information about the Keystore Service, see
Keys and Certificates [page 1358].
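For example, the CN@Issuer mapping mode could be configured with VM arguments like the following (the keystore name trusted_issuers is a placeholder for a keystore you have uploaded to the Keystore Service):

```
-Dcom.sap.cloud.crypto.clientcert.mapping_mode=CN@Issuer
-Dcom.sap.cloud.crypto.clientcert.keystore_name=trusted_issuers
```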
To enable client certificate authentication in your application, users need to present client certificates issued by
some of the certificate authorities (CAs) listed below.
Trusted CAs
Table 422:
Subject DN and Issuer DN are identical for these self-signed root CA certificates.

● CN=Go Daddy Root Certificate Authority - G2, O="GoDaddy.com, Inc.", L=Scottsdale, ST=Arizona, C=US
SHA1: 47:BE:AB:C9:22:EA:E8:0E:78:78:34:62:A7:9F:45:C2:54:FD:E6:8B
● CN=SAP Passport CA, O=SAP Trust Community, C=DE
SHA1: 8D:71:8C:B5:F4:21:9D:5D:39:0C:79:04:8A:EA:21:85:54:37:F4:57
● CN=thawte Primary Root CA, OU="(c) 2006 thawte, Inc. - For authorized use only", OU=Certification
Services Division, O="thawte, Inc.", C=US
SHA1: 91:C6:D6:EE:3E:8A:C8:63:84:E5:48:C2:99:29:5C:75:6C:81:7B:81
● CN=VeriSign Class 1 Public Primary Certification Authority - G3, OU="(c) 1999 VeriSign, Inc. - For
authorized use only", OU=VeriSign Trust Network, O="VeriSign, Inc.", C=US
SHA1: 20:42:85:DC:F7:EB:76:41:95:57:8E:13:6B:D4:B7:D1:E9:8E:46:A5
● CN=VeriSign Class 2 Public Primary Certification Authority - G3, OU="(c) 1999 VeriSign, Inc. - For
authorized use only", OU=VeriSign Trust Network, O="VeriSign, Inc.", C=US
SHA1: 61:EF:43:D7:7F:CA:D4:61:51:BC:98:E0:C3:59:12:AF:9F:EB:63:11
● CN=VeriSign Class 3 Public Primary Certification Authority - G3, OU="(c) 1999 VeriSign, Inc. - For
authorized use only", OU=VeriSign Trust Network, O="VeriSign, Inc.", C=US
SHA1: 13:2D:0D:45:53:4B:69:97:CD:B2:D5:C3:39:E2:55:76:60:9B:5C:C6
● CN=VeriSign Class 3 Public Primary Certification Authority - G4, OU="(c) 2007 VeriSign, Inc. - For
authorized use only", OU=VeriSign Trust Network, O="VeriSign, Inc.", C=US
SHA1: 22:D5:D8:DF:8F:02:31:D1:8D:F7:9D:B7:CF:8A:2D:64:C9:3F:6C:3A
● CN=VeriSign Class 3 Public Primary Certification Authority - G5, OU="(c) 2006 VeriSign, Inc. - For
authorized use only", OU=VeriSign Trust Network, O="VeriSign, Inc.", C=US
SHA1: 4E:B6:D5:78:49:9B:1C:CF:5F:58:1E:AD:56:BE:3D:9B:67:44:A5:E5
● CN=VeriSign Class 4 Public Primary Certification Authority - G3, OU="(c) 1999 VeriSign, Inc. - For
authorized use only", OU=VeriSign Trust Network, O="VeriSign, Inc.", C=US
SHA1: C8:EC:8C:87:92:69:CB:4B:AB:39:E9:8D:7E:57:67:F3:14:95:73:9D
● OU=Go Daddy Class 2 Certification Authority, O="The Go Daddy Group, Inc.", C=US
SHA1: 27:96:BA:E6:3F:18:01:E2:77:26:1B:A0:D7:77:70:02:8F:20:EE:E4
By default, SAP JVM provides the Java Cryptography Extension (JCE) with limited cryptographic strength. If you want
to use unlimited-strength cryptography in an SAP Cloud Platform application, you need to enable it by installing
the JCE Unlimited Strength Jurisdiction Policy Files on SAP JVM. To do that, follow the procedure below.
Prerequisites
You have the appropriate Java Cryptography Extension (JCE) Unlimited Strength Jurisdiction Policy Files that enable
unlimited-strength cryptography.
Procedure
1. Pack the encryption policy files (JCE Unlimited Strength Jurisdiction Policy Files) in the META-INF/ext_security/jre7 folder of the
Web application (see the Example below).
Results
The encryption policy files (Java Cryptography Extension (JCE) Unlimited Strength Jurisdiction Policy Files) are
installed on the application's JVM before it starts. As a result, the application can use unlimited-strength
encryption.
Example
The WAR file of the application must have the following file entries:
META-INF/ext_security/jre7/local_policy.jar
META-INF/ext_security/jre7/US_export_policy.jar
Context
Using the password storage API, you can securely persist passwords and key phrases, such as passwords for
keystore files. Before transport and persistence, passwords are encrypted with an encryption key that is specific to the
application that owns the password.
Note
Each password is identified by an alias. For the rules and constraints on password aliases (permitted
characters and length), see the security javadoc.
To use the password storage API, you need to add a resource reference to PasswordStorage in the web.xml file
of your application, which is located in the \WebContent\WEB-INF folder as shown below:
<resource-ref>
    <res-ref-name>PasswordStorage</res-ref-name>
    <res-type>com.sap.cloud.security.password.PasswordStorage</res-type>
</resource-ref>
Note that according to the Java EE Specification, the prefix java:comp/env should be added to the JNDI resource
name (as specified in the web.xml file) to form the lookup name.
Below is a code example of how to use the API to set, get or delete passwords. These methods provide the option
of assigning an alias to the password.
import javax.naming.InitialContext;
import javax.naming.NamingException;
import com.sap.cloud.security.password.PasswordStorage;
import com.sap.cloud.security.password.PasswordStorageException;
.......
// Method names below follow the password storage javadoc; verify them against your SDK version.
PasswordStorage storage = (PasswordStorage) new InitialContext().lookup("java:comp/env/PasswordStorage");
storage.setPassword("keystore.password", "secret".toCharArray()); // set a password under an alias
char[] password = storage.getPassword("keystore.password");       // read it back
storage.deletePassword("keystore.password");                      // remove it
Note
It is recommended to cache the obtained value, because reading passwords is an expensive operation that involves
several internal remote calls to the central storage and audit infrastructure.
When you run applications on SAP Cloud Platform local runtime, you can use a local implementation of the
password storage API. Keep in mind, however, that the passwords are stored unencrypted in a local file. Therefore,
use only test passwords for local testing.
Related Information
This section describes how you can test the security you have implemented in your Java applications.
First, test your application on your local runtime. If you use the Eclipse Tools, you can easily test with
local users. This is useful if you are implementing role-based identity management in your application.
Then, if everything works on the local runtime, deploy your application on SAP Cloud Platform and
test how the application works in the cloud with your local SAML 2.0 identity provider. This is useful if you are
implementing SAML 2.0 identity federation.
Related Information
When you add user authentication to your application, you can test it first on the local server before uploading it to
SAP Cloud Platform.
Note
On the local server, authentication is handled locally, that is, not by the SAP ID service. When you try to access
a protected resource on the local server, you see a local login page (not the SAP ID service's or another identity
provider's login page). User access is then granted or denied based on a local JSON (JavaScript Object
Notation) file (<local_server_dir>/config_master/com.sap.security.um.provider.neo.local/neousers.json),
which defines the local set of user accounts, along with their roles and attributes. This is for testing
purposes only. When you deploy to the cloud, user authentication is handled by the SAP ID service.
User attributes provide additional information about a user account. Applications can use attributes to distinguish
between users or to customize behavior for specific users. To add a new attribute, proceed as follows:
Roles are used by applications to define access rights. By default, each user is assigned the User.Everyone role.
This role is read-only, which means you cannot remove it. To add a new role, proceed as follows:
1. From the list of JSON files, select the user you want to export.
Tip
The default name of the exported file is localusers.json. You can rename it to something more
meaningful to you.
If you prefer using the console client instead of the Eclipse IDE, you have to manually find and edit the JSON file
that configures local test users. It is located at <local_server_dir>/config_master/
com.sap.security.um.provider.neo.local/neousers.json.
The following example shows a sample configuration of a JSON file with two users, along with their attributes and
roles:
{
"Users": [
{
"UID": "P000001",
"Password": "{SSHA}OA5IKcTJplwLLaXCjmbcV+d3LQVKey+bEXU\u003d",
"Roles": [
"Employee",
"Manager"
],
"Attributes": [
{
"attributeName": "firstname",
"attributeValue": "John"
},
{
"attributeName": "lastname",
"attributeValue": "Doe"
},
{
"attributeName": "email",
"attributeValue": "john.doe@yourcompany.com"
}
]
},
{
"UID": "P000002",
"Password": "{SSHA}OA5IKcTJplwLLaXCjmbcV+d3LQVKey+bEXU\u003d",
"Roles": [
"SomeRole"
],
"Attributes": [
{
"attributeName": "firstname",
"attributeValue": "Boris"
},
{
"attributeName": "lastname",
"attributeValue": "Boykov"
},
{
"attributeName": "email",
"attributeValue": "b.boykov@anothercompany.com"
}
]
}
]
}
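The Password values in the sample file above appear to follow the salted SHA-1 ({SSHA}) scheme known from LDAP: the Base64 payload is the SHA-1 digest of the password concatenated with a salt, followed by the salt itself. This is an assumption about the local test store's format, not documented behavior; the sketch below only illustrates how such values can be produced and verified:

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.security.SecureRandom;
import java.util.Arrays;
import java.util.Base64;

public class SshaDemo {
    // Encode: "{SSHA}" + base64( SHA1(password || salt) || salt ).
    static String sshaHash(String password, byte[] salt) throws Exception {
        MessageDigest sha1 = MessageDigest.getInstance("SHA-1");
        sha1.update(password.getBytes(StandardCharsets.UTF_8));
        sha1.update(salt);
        byte[] digest = sha1.digest();
        byte[] out = new byte[digest.length + salt.length];
        System.arraycopy(digest, 0, out, 0, digest.length);
        System.arraycopy(salt, 0, out, digest.length, salt.length);
        return "{SSHA}" + Base64.getEncoder().encodeToString(out);
    }

    // Verify: extract the salt from the stored value and re-compute the digest.
    static boolean sshaVerify(String password, String stored) throws Exception {
        byte[] raw = Base64.getDecoder().decode(stored.substring("{SSHA}".length()));
        byte[] salt = Arrays.copyOfRange(raw, 20, raw.length); // SHA-1 digest is 20 bytes
        return stored.equals(sshaHash(password, salt));
    }

    public static void main(String[] args) throws Exception {
        byte[] salt = new byte[8];
        new SecureRandom().nextBytes(salt);
        String stored = sshaHash("test123", salt);
        System.out.println(sshaVerify("test123", stored)); // true
        System.out.println(sshaVerify("wrong", stored));   // false
    }
}
```

Because the salt is appended to the digest, a verifier can check any candidate password without storing the salt separately.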
When stopping your local server, you might see error logs like the following:
#ERROR#org.apache.catalina.core.ContainerBase##anonymous#System Bundle
Shutdown###ContainerBase.removeChild: stop:
org.apache.catalina.LifecycleException: Failed to stop component
[StandardEngine[Catalina].StandardHost[localhost].StandardContext[/idelogin]]
This error is harmless, and you do not need to take any action.
Next Steps
● After testing, you can proceed with deploying the application to SAP Cloud Platform. For more information,
see Deploying and Updating Applications [page 1043].
● After deploying on the cloud, you may need to perform configuration steps using the cockpit. For more
information, see Security Configuration [page 1392].
You can use a local test identity provider (IdP) to test single sign-on (SSO) and identity federation of an SAP Cloud
Platform application end-to-end.
This scenario offers simplified testing: developers establish trust between an application deployed in the cloud
and an easy-to-use local test identity provider.
For more information about the identity provider concept in SAP Cloud Platform, see ID Federation with the
Corporate Identity Provider [page 1406].
Contents:
● You have set up and configured the Eclipse IDE for Java EE Developers and SAP Cloud Platform Tools for
Java. For more information, see Setting Up the Tools and SDK [page 43].
● You have developed and deployed your application on SAP Cloud Platform. For more information, see
Creating an SAP Cloud Platform Application [page 1036].
Procedure
The usage of the local test identity provider involves the following steps:
1. In a Web browser, open the cockpit and navigate to Security Trust Local Service Provider .
2. Choose Edit.
3. For Configuration Type, choose Custom.
4. Choose Generate Key Pair to generate a new signing key and self-signed certificate.
5. For the rest of the fields, leave the default values.
6. Choose Save.
7. Choose Get Metadata to download and save the SAML 2.0 metadata identifying your SAP Cloud Platform
account as a service provider. You will have to import this metadata into the local test IdP to configure trust to
SAP Cloud Platform in the procedure that follows.
You need to configure your local IdP name if you want to use more than one local IdP. Default local IdP name:
localidp.
The trust settings on SAP Cloud Platform for the local test IdP are configured in the same way as with any other
productive IdP.
1. During the configuration, use the local test IdP metadata, which can be requested at the following link:
http://<idp_host>:<idp_port>/saml2/localidp/metadata,
where <idp_host> and <idp_port> are the local server host and port.
To find the <idp_port>, go to Servers, double-click the local server, and choose Overview Ports
Configuration .
For more information, see ID Federation with the Corporate Identity Provider [page 1406]
3. Configure the User Attributes.
Assertion-based attributes are used to define a mapping between attributes in the SAML assertion issued by the
local test IdP and user attributes on the Cloud.
This allows you to pass essentially any attribute exposed by the local test IdP to an attribute used by your
application in the cloud.
Define user attributes in the local test IdP by using the Eclipse IDE Users editor for SAP Cloud Platform, as
described in Setting up the local test IdP.
To add an assertion-based attribute, proceed as follows:
1. Open the cockpit in a Web browser, navigate to Security Trust Application Identity Provider .
2. From the table, choose the entry localidp, open the Attributes tab page, and click on Add Assertion-Based
Attribute.
5. Generate a self-signed key pair and certificate for the local test IdP (optional)
If an error occurs while requesting the IdP metadata and the metadata cannot be generated, you can do the
following:
1. Generate a localidp.jks keyfile manually. The key and certificate are needed for signing the information that
the local test IdP will exchange with SAP Cloud Platform.
2. Go to the <JAVA_HOME>/jre/bin directory, which contains the keytool executable.
3. Open a command line and execute the following command:
where <fullpath_dir_name> is the directory path where the jks will be saved after the creation.
4. Under the Server directory, go to config_master\com.sap.core.jpaas.security.saml2.cfg and
create a directory with name localidp.
5. Copy the localidp.jks file under localidp directory.
1. In the Eclipse IDE, go to the already set up local test IdP Server.
2. Copy the file with the metadata describing SAP Cloud Platform as a service provider under the local server
directory config_master/com.sap.core.jpaas.security.saml2.cfg/localidp. To get this
metadata, in the cockpit, choose Security Trust Local Service Provider Get Metadata .
You can now access your application, deployed on the cloud, and test it against the local test IdP and its defined
users and attributes.
When you have implemented security in your application, you need to perform a few configuration tasks using the
Cockpit to enable the scenario to work successfully on SAP Cloud Platform.
Related Information
This is an optional procedure that you can perform to configure the options for the authentication methods you
defined for your application.
Prerequisites
● You have an application with authentication defined in its web.xml or source code. See Enabling
Authentication [page 1326] .
Context
The following table describes the available authentication options. For each authentication method, you can select
a custom combination of options. You may need to select more than one option if you want to enable more than
one way for users to authenticate for this application.
If you select more than one option, SAP Cloud Platform delegates authentication to the relevant login modules
consecutively, in a stack. As soon as one login module succeeds in authenticating the user, authentication ends with
success. If no login module succeeds, authentication fails.
Trusted SAML 2.0 identity provider — Authentication is implemented over the Security Assertion
Markup Language (SAML) 2.0 protocol, and delegated to SAP ID service or a custom identity provider (IdP).
The credentials users need to present depend on the IdP settings. See ID Federation with the Corporate
Identity Provider [page 1406].
Note
When you select Trusted SAML 2.0 identity provider, Application-to-Application SSO becomes enabled
automatically.
User name and password — HTTP BASIC authentication with user name and password. The user name and
password are validated either by SAP ID service (default) or by an on-premise SAP NetWeaver AS Java. See
Using an SAP System as an On-Premise User Store [page 1421].
OAuth 2.0 token — Authentication is implemented over the OAuth 2.0 protocol. Users need to present an
OAuth access token as credential. See Protecting Applications with OAuth 2.0 [page 1340].
Procedure
1. In your Web browser, log on to the cockpit, and select an account. See Cockpit [page 97].
Make sure that you have selected the relevant global account to be able to select the right account.
Example
You have a Web application that users access using a Web browser. You want users to log in using a SAML
identity provider. Hence, you define the FORM authentication method in the web.xml of the application.
Related Information
In SAP Cloud Platform, you can use Java EE roles to define access to the application resources.
Context
Role — Roles allow you to diversify user access to application resources (role-based authorizations).
Note
Role names are case-sensitive.
Predefined roles — Predefined roles are roles defined in the web.xml of an application.
After you deploy the application to SAP Cloud Platform, the role becomes visible in the Cockpit, and
you can assign groups or individual users to that role. If you undeploy your application, these roles are
removed.
● Shared - predefined roles are shared by default. A shared role is visible and accessible within all accounts
subscribed to this application.
● Restricted - an application administrator can restrict a shared role. A restricted role is visible
and accessible only within the account that deployed the application, and not to accounts subscribed
to the application.
Note
If you restrict a shared role, you hide it from new assignments from subscribed accounts, but all
existing assignments continue to take effect.
Custom roles — Custom roles are roles defined using the Cockpit. SAP Cloud Platform interprets custom roles
in the same way as predefined roles: they differ only in the way they are created, and in their scope.
You can add custom roles to an application to configure additional access permissions to it without
modifying the application's source code.
Custom roles are visible and accessible only within the account where they are created. That is why
different accounts subscribed to the same application can have different custom roles.
User — Users are principals managed by identity providers (SAP ID service or others).
Note
SAP Cloud Platform does not have a user database of its own. It maps the users authenticated
by identity providers to groups, and groups to roles.
Note
When a user logs in, the user's roles are stored in the current browser session. They are not updated
dynamically, and they are removed only when the session is terminated or invalidated. This means that if
you change the set of roles of a currently logged-in user, the change takes effect only after logout or
session invalidation.
Group — Groups are collections of roles that allow the definition of business-level functions within your account.
They are similar to the actual business roles existing in an organization, such as "manager", "employee",
or "external". They help you achieve better alignment between technical Java EE roles
and organizational roles.
Note
Group names are case-insensitive.
For each identity provider (IdP) for your account, you define a set of rules specifying the groups a user
for this IdP belongs to.
Context
This can be done in two ways: using predefined roles in the web.xml at development time, or using custom roles in
the UI.
Tip
If you need to do mass role or group assignment to a very large number of users simultaneously, we
recommend using the Authorization Management API instead of the cockpit UI. See Using the Authorization
Management API [page 1333].
● Predefined Roles
a. In the web.xml of the required application, define the roles authorized to access the application resources.
See Enabling Authentication [page 1326].
b. Deploy the application to SAP Cloud Platform.
See Deploying and Updating Applications [page 1043].
c. Optionally, if you want to restrict the roles to the current application only, deselect the Share option for
them in the Cockpit.
● Custom roles with applications from the same account
Context
Groups allow you to easily manage the role assignments to collections of users instead of individual users.
Procedure
Context
You can assign individual users to the roles or, more conveniently, assign groups for collective role management.
You can do it in either of the two ways: using the Security Roles section for the application, or using the
Security Authorizations section for the account.
Procedure
Tip
You can use regular expressions to narrow the groups found.
Context
For each different IdP, you then define a set of rules specifying to which groups a user logged by this IdP belongs.
Note
You must have defined groups in advance before you define default or assertion-based groups for this IdP.
Default groups are the groups all users logged by this IdP will have. For example, all users logged by the company
IdP can belong to the group "Internal".
Assertion-based groups are groups determined by values of attributes in the SAML 2.0 assertion. For example, if
the assertion contains the attribute "contract=temporary", you may want all such users to be added to the
group "TEMPORARY".
Procedure
a. In the cockpit, navigate to Security Authorizations Groups , and choose Add Default Group.
b. From the dropdown list that appears, choose the required group.
● Defining Assertion-Based Groups
a. In the cockpit, navigate to Security Authorizations Groups , and choose Add Assertion-Based
Group. A new row appears and a new mapping rule is now being created.
b. Enter the name of the group to which users will be mapped. Then define the rule for this mapping.
c. In the first field of the Mapping Rules section, enter the SAML 2.0 assertion attribute name to be used as
the mapping source. In other words, the value of this attribute will be compared with the value you specify
(in the last field of Mapping Rules).
d. Choose the comparison operator.
Table 425:
Equals — Choose Equals if you want the value of the SAML 2.0 assertion attribute to match exactly the
string you specify. Note that if you want to use more sophisticated relations, such as "starts with" or
"contains", you need to use the Regular expression option.
Regular expression — Choose Regular expression to match the attribute value against a pattern, for
example:
.*@sap.com$
^(admin).*
e. In the last field of Mapping Rules, enter the value with which you compare the specified SAML 2.0
assertion attribute.
f. You can specify more than one mapping rule for a specific group. Use the plus button to add as many
rules as required. In this case, mapping is based on a logical OR of all rules; that is, if any of
your rules applies, the user is added to the group.
In the image below, all users logged in through this IdP are added to the group Government. Users that have
an attribute corresponding to their department name are also assigned to the respective department
groups.
When you open the Groups tab page of the Authorizations section, you can see the identity provider
mappings for this group.
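The two comparison operators can be illustrated with a small sketch. The matches helper below is hypothetical; it only mimics how the Equals and Regular expression operators evaluate a mapping rule against a SAML attribute value, using the example patterns from the table above:

```java
import java.util.regex.Pattern;

public class GroupMappingDemo {
    // Hypothetical rule check: does an assertion attribute value satisfy a mapping rule?
    static boolean matches(String operator, String ruleValue, String attributeValue) {
        if ("Equals".equals(operator)) {
            return ruleValue.equals(attributeValue);        // exact string match
        }
        return Pattern.matches(ruleValue, attributeValue);  // "Regular expression" operator
    }

    public static void main(String[] args) {
        // "Equals" requires the full, exact string:
        System.out.println(matches("Equals", "temporary", "temporary"));                     // true
        // The regex examples from the table: mail addresses ending in @sap.com,
        // and values starting with "admin":
        System.out.println(matches("Regular expression", ".*@sap.com$", "jane@sap.com"));    // true
        System.out.println(matches("Regular expression", "^(admin).*", "admin42"));          // true
        System.out.println(matches("Regular expression", "^(admin).*", "superadmin"));       // false
    }
}
```

Note that several rules for one group combine with logical OR, so in a faithful model a user would be added to the group as soon as any one rule returns true.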
Try to access the required application, logging on with users that have the required roles and with users that do not.
Context
You can use the following steps to configure the default role caching settings. This may be required if you have
automated test procedures for role assignments in your applications; such tests may not work properly with the
default account settings.
Tip
You can take one of the following approaches:
● Increase the time in which the requests are counted to more than the default 2 minutes
● Increase the number of requests – instead of the default 20, set 100 or 200, for example.
The table below shows the VM system properties available for configuring role caching:
Table 426:
Set the required values to the required VM system properties as described in Configuring VM Arguments [page
1145].
If you have SAP Cloud Platform extension package for SuccessFactors configured for your account, you can
change the default SAP Cloud Platform role provider to another one.
Prerequisites
● You have an SAP Cloud Platform partner or customer account. For more information about account types,
see Account Types [page 14]
● You have an SAP Cloud Platform extension package for SuccessFactors and the extension package is
configured for your SAP Cloud Platform account. For more information, see the Configuring Extension
Package for SuccessFactors section in the http://help.sap.com/disclaimer?site=http://service.sap.com/
%7Esapidb/012002523100013621492014E
● You are an administrator of your SAP Cloud Platform account.
● Your application runtime supports destinations. For more information about the application runtimes
supported by SAP Cloud Platform, see Application Runtime Container
● You have configured the HTTP destination required to ensure your application's connectivity to
SuccessFactors. For more information, see the Configuring Destinations for Extension Applications section in
SAP Cloud Platform, Extension Package for SuccessFactors: Implementation Guide .
● In the SuccessFactors system, you have roles with the required permissions, and these roles have the
same names as those defined in the web.xml file of the extension application. For more information about
creating permission roles in SuccessFactors, see the How do you create a permission role? section in the Role-
Based Permissions Administration Guide.
● In the SuccessFactors system, you have assigned the required roles to the corresponding users and groups.
For more information, see the How can you grant permission roles? section in the Role-Based Permissions
Administration Guide.
● When creating the extension application, you have defined the required roles in the web.xml file of the
application and these roles are the same as the ones you have for the application in the SuccessFactors
system. For more information about how to define roles in the web.xml file of the application, see Enabling
Authentication [page 1326]
Context
A role provider is the component that retrieves the roles for a particular user. By default, the role provider used for
SAP Cloud Platform applications and services is the SAP Cloud Platform role provider. For extension applications,
however, you can change the default role provider to another one, for example, a SuccessFactors role provider.
Procedure
Make sure that you have selected the relevant global account to be able to select the right account.
2. Navigate to the application for which you want to change the role provider. To do so, proceed as follows:
○ For a Java application running in your account, choose Applications Java Applications , and then
choose the link of the application.
○ For a Java application to which your account is subscribed, choose Applications Subscriptions , and
then choose the link of the application.
5. (Optional) To view the role provider for an SAP Cloud Platform service, in the cockpit navigate to Services
<service_name> , and then choose Configure Roles.
The system displays the role provider in the Role Provider panel in a read-only mode.
Note
For an account with SAP Cloud Platform extension package for SuccessFactors, the role provider for SAP
Cloud Platform Portal is SuccessFactors.
Results
The changes take effect after 5 minutes. If you want the changes to take effect immediately, restart the
application (valid only for applications running in your account).
You can delegate user authentication for your applications to your corporate identity provider. This is called
identity federation. SAP Cloud Platform supports Security Assertion Markup Language (SAML) 2.0 for identity
federation.
Contents
Prerequisites
● You have a key pair and certificate for signing the information you exchange with the IdP on behalf of SAP
Cloud Platform. This ensures the privacy and integrity of the data exchanged. You can use your pre-generated
ones or use the generation option in the cockpit.
● You have provided the IdP with the above certificate. This allows the IdP administrator to configure its trust
settings.
● You have the IdP signing certificate to enable you to configure the cloud trust settings.
● You have negotiated with the IdP administrator which information the SAML 2.0 assertion will contain for
each user. For example, this could be a first name, last name, company, position, or an e-mail.
● You know the authorizations and attributes the users logged by this IdP need to have on SAP Cloud Platform.
Tip
You can configure your SAP Cloud Platform account for identity federation with more than one identity
provider. In that case, make sure all user identities are unique across all identity providers and that no user
exists in more than one identity provider. Otherwise, security roles might be wrongly assigned at
SAP Cloud Platform.
Context
In the SAML 2.0 communication, each SAP Cloud Platform account acts as a service provider. For more
information, see Security Assertion Markup Language (SAML) 2.0 protocol specification.
Tip
Each SAP Cloud Platform account is a separate service provider. If you need each of your applications to be
represented by its own service provider, you must create and use a separate account for each application. See
Creating Accounts [page 20].
Note
In this documentation and SAP Cloud Platform user interface, we use the term local service provider to
describe the SAP Cloud Platform account as a service provider in the SAML 2.0 communication.
You need to configure how the local service provider communicates with the identity provider. This includes, for
example, setting a signing key and certificate to verify the service provider’s identity and encrypt data. You can
use the configuration settings described in the table that follows.
Table 427:
Default — The local provider's own trust settings inherit the SAP Cloud Platform default configuration
(which is trust to SAP ID service). Use this for testing and exploring the scenario.
None — The local provider has no trust settings and does not participate in any identity federation
scenario. Use this for disabling identity federation for your account.
Custom — The local provider settings have a specific configuration, different from the default
configuration for SAP Cloud Platform. Use this for identity federation with a corporate identity provider
or Identity Authentication tenant.
Table 428:
Force authentication — If you set this option to Enabled, you enable force authentication for
your application (despite SSO, users have to re-authenticate each time they access it). Otherwise, set this
option to Disabled.
Procedure
1. In your Web browser, log on to the cockpit, and select an account. See Cockpit [page 97].
Make sure that you have selected the relevant global account to be able to select the right account.
Note
It is recommended to use a URI as the local provider name.
7. In Signing Key and Signing Certificate, place the Base64-encoded signing key and certificate. You can use a pair
generated with the cockpit (using the Generate Key Pair button) or an externally generated one.
Note
Certificates generated using the cockpit have a validity of 10 years. If you want your identifying certificate to
have a different validity, generate the key and certificate pair using an external tool, and copy the contents into
the Signing Key and Signing Certificate fields in the cockpit.
Note
For more information about how to use an externally generated key and certificate pair, see (Optional) Guidelines
for Using External Key and Certificate [page 1409].
8. Choose the required value of the Principal Propagation and Force authentication option.
9. Save the changes.
10. Choose Get Metadata to download the SAML 2.0 metadata describing SAP Cloud Platform as a service
provider. You will have to import this metadata into the IdP to configure trust to SAP Cloud Platform.
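For orientation, SAML 2.0 service provider metadata generally follows the OASIS metadata schema. The skeleton below is illustrative only: the entityID, endpoint URL, and certificate value are placeholders, not actual SAP Cloud Platform output:

```xml
<!-- Illustrative SAML 2.0 SP metadata skeleton; all values are placeholders. -->
<md:EntityDescriptor xmlns:md="urn:oasis:names:tc:SAML:2.0:metadata"
    entityID="https://myaccount.example.com">
  <md:SPSSODescriptor protocolSupportEnumeration="urn:oasis:names:tc:SAML:2.0:protocol">
    <md:KeyDescriptor use="signing">
      <ds:KeyInfo xmlns:ds="http://www.w3.org/2000/09/xmldsig#">
        <ds:X509Data><ds:X509Certificate>MIIC...</ds:X509Certificate></ds:X509Data>
      </ds:KeyInfo>
    </md:KeyDescriptor>
    <md:AssertionConsumerService index="0"
        Binding="urn:oasis:names:tc:SAML:2.0:bindings:HTTP-POST"
        Location="https://myaccount.example.com/saml2/sp/acs"/>
  </md:SPSSODescriptor>
</md:EntityDescriptor>
```

The IdP reads the entityID, signing certificate, and assertion consumer service endpoint from this file when you import it to configure trust.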
Use the following guidelines if you want to use, for the local service provider, a signing key and certificate generated
with an external tool (such as OpenSSL):
Example
You want to use OpenSSL as a tool for key pair generation.
Convert the private key file spkey.pem into the unencrypted PKCS#8 format using the following command:
openssl pkcs8 -nocrypt -topk8 -inform PEM -outform PEM -in spkey.pem -out
spkey.pk8
Now open the file spkey.pk8 in a text editor and copy all contents except for the tags -----BEGIN PRIVATE
KEY----- and -----END PRIVATE KEY----- into the Signing Key text field in the cockpit. Then open the file
spcert.pem in a text editor and copy all contents except for the tags -----BEGIN CERTIFICATE----- and
-----END CERTIFICATE----- into the Signing Certificate text field in the cockpit.
After you click Save, you should see a message confirming that you can proceed with configuring your trusted
identity provider settings.
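The example assumes spkey.pem and spcert.pem already exist. Under that assumption, a complete OpenSSL sequence (the file names and subject are illustrative) could look like this:

```shell
# 1. Generate a 2048-bit RSA key and a self-signed certificate valid for 10 years
#    (the subject CN is a placeholder):
openssl req -x509 -newkey rsa:2048 -nodes -days 3650 \
  -subj "/CN=example-local-provider" \
  -keyout spkey.pem -out spcert.pem

# 2. Convert the private key to unencrypted PKCS#8 PEM, as required by the cockpit:
openssl pkcs8 -nocrypt -topk8 -inform PEM -outform PEM -in spkey.pem -out spkey.pk8

# 3. Print the Base64 body without the BEGIN/END tags, ready to paste:
grep -v "PRIVATE KEY" spkey.pk8
```

The same tag-stripping step applies to spcert.pem before pasting it into the Signing Certificate field.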
Context
Note
To benefit from fully-featured identity federation with SAML identity providers, you need to have chosen the
Custom configuration type in the Local Service Provider section.
For Default configuration type, you have non-editable trust to SAP ID Service as default identity provider. You
can add other identity providers but they can be used for IdP-initiated single sign-on (SSO) only.
Procedure
1. In your Web browser, log on to the cockpit, and select an account. See Cockpit [page 97].
Make sure that you have selected the relevant global account to be able to select the right account.
Assertion Consumer Service The SAP Cloud Platform endpoint type (application root or
assertion consumer service). The IdP will send the SAML
assertion to that endpoint.
Single Sign-on URL The IdP's endpoint (URL) to which the SP's authentication
request will be sent.
Single Sign-on Binding The SAML-specified HTTP binding used by the SP to send
the authentication request.
Single Logout URL The IdP's endpoint (URL) to which the SP's logout request
will be sent.
Note
If there is no single logout (SLO) end point specified, no
request to the IdP SLO point will be sent, and only the
local session will be invalidated.
Signing Certificate The X.509 certificate used by the IdP to digitally sign the
SAML protocol messages.
User ID Source Location in the SAML assertion from which the user's
unique name (ID) is taken when logging into the cloud. If
you choose subject, the ID is taken from the name identifier
in the assertion's subject (<saml:Subject>) element. If
you choose attribute, the user's name is taken from a
SAML attribute in the assertion.
Source Value Name of the SAML attribute that defines the user ID on the
cloud.
Note
If nothing else is specified, the default IdP is used for
authentication. Alternatively, you can use a different IdP
using a URL parameter. See Using an IdP Different from
the Default [page 1416].
Only for IDP-initiated SSO If this checkbox is marked, this identity provider can be
used only for IdP-initiated single sign-on scenarios. The
applications deployed at SAP Cloud Platform cannot use it
for user authentication from their login pages, for example.
Note
This checkbox is always marked if you have selected
Default configuration type in the Local Service Provider
section.
5. In the Attributes tab, configure the user attribute mappings for this identity provider.
User attributes can contain any other information in addition to the user ID.
Default attributes are user attributes that all users logged by this IdP will have. For example, if we know that
"My IdP" is used to authenticate users from MyCompany, we can set a default user attribute for that IdP
"company=MyCompany".
Assertion-based attributes define a mapping between user attributes sent by the identity provider (in the
SAML assertion) and user attributes consumed by applications on SAP Cloud Platform (principal attributes).
This allows you to easily map the user information sent by the IdP to the format required by your application
without having to change your application code. For example, the IdP sends the first name and last name user
information in attributes named first_name and last_name. You, on the other hand, have a cloud
application that retrieves user attributes named firstName and lastName. You need to define the relevant
mapping in the Assertion-Based Attributes section so the application uses the information from that identity
provider properly.
Note
○ There are no default mappings of assertion attributes to principal attributes. You need to define those
if you need them.
○ The attributes are case sensitive.
○ You can specify that all assertion attributes will be mapped to the corresponding principal attributes
without a change, by specifying mapping * to *.
For more information about using user attributes in your application, see Enabling Authentication [page
1326].
6. In the Groups tab, configure the groups associated with this IdP's users.
Groups that you define on the cloud are later mapped to Java EE application roles. As specified in Java EE, in
the web.xml, you define the roles authorized to access a protected resource in your application. You therefore
define the groups that exist there and the roles to which each group is mapped via the Groups tab in the
cockpit. For each different IdP, you then define a set of rules specifying to which groups a user logged by this
IdP belongs.
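A minimal sketch of such a web.xml declaration follows; the role name and URL pattern are illustrative placeholders, not values prescribed by the platform:

```xml
<!-- Declares a protected resource and the role authorized to access it.
     A cockpit group can then be mapped to this role in the Groups tab. -->
<security-constraint>
    <web-resource-collection>
        <web-resource-name>Protected Area</web-resource-name>
        <url-pattern>/protected/*</url-pattern>
    </web-resource-collection>
    <auth-constraint>
        <role-name>Administrator</role-name>
    </auth-constraint>
</security-constraint>
<security-role>
    <role-name>Administrator</role-name>
</security-role>
```

In the cockpit, you would then map a group (for example, "Internal") to the Administrator role declared here.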
For more information about configuring groups, see Managing Groups and Roles [page 1394].
Note
You must define groups before you define default or assertion-based groups for this IdP.
Default groups are the groups all users logged by this IdP will have. For example, all users logged by the
company IdP can belong to the group "Internal".
Assertion-based groups are groups determined by values of attributes in the SAML 2.0 assertion. For
example, if the assertion contains the attribute "contract=temporary", you may want all such users to be
added to the group "TEMPORARY".
1. On the GROUPS tab page, choose Add Assertion-Based Group. A new row appears, in which the new mapping rule is created.
In the image above, all users logged by this IdP are added to the group Citizens. All users from the ITSupport
department and the user with e-mail admin@mokmunicipality.org are added to group
MOKMunicipalityAdmins for this account. The rest of the employees at MOKMunicipality (having an e-mail
address in the mokmunicipality.org domain) are assigned to group Government.
Context
You can define more than one identity provider for your account. There is always the default IdP. Initially, SAP ID
service is the default IdP but you can change that after you add another IdP.
If you want to use an IdP different from the default one, request your application with the special request parameter saml2idp, whose value is the name of the desired IdP (for example, by appending ?saml2idp=<IdP name> to the application URL).
You can register a tenant for Identity Authentication service as an identity provider for your account.
Prerequisites
● You have defined service provider settings for the SAP Cloud Platform account. See Configure SAP Cloud
Platform as a Local Service Provider [page 1407].
● You have chosen a custom local provider configuration type for this account (in the cockpit: Trust > Local Service Provider > Configuration Type: Custom).
Context
Identity Authentication service provides identity management for SAP Cloud Platform applications. You can
register a tenant for Identity Authentication service as an identity provider for the applications in your SAP Cloud
Platform account.
Note
If you add a tenant for Identity Authentication service already configured for trust with the same service
provider name, the existing trust configuration on the tenant for Identity Authentication service side will be
updated. If you add a tenant for Identity Authentication configured for trust with SAP Cloud Platform with a
different service provider name, a new trust configuration will be created on the tenant for Identity
Authentication service side.
Procedure
1. In your Web browser, log on to the cockpit, and select an account. See Cockpit [page 97].
Make sure that you have selected the relevant global account to be able to select the right account.
○ You have a tenant for Identity Authentication service registered for your current SAP customer user (s-user). You want to add the tenant as an identity provider.
1. Click Add Identity Authentication Tenant.
2. Choose the required Identity Authentication tenant and save the changes.
In this case, the trust will be established automatically upon registration on both the SAP Cloud Platform
and the tenant for Identity Authentication service side. See Getting Started with Identity Authentication
○ You want to add a tenant for Identity Authentication service not related to your SAP user.
In this case, you need to register the tenant for Identity Authentication service as any other type of
identity provider. This means you need to set up trust settings on both the SAP Cloud Platform and the
Identity Authentication tenant side. See Integration.
The tenant for Identity Authentication appears in the list of SAML identity providers. You can now further administer the Identity Authentication tenant by opening the Identity Authentication Admin Console (hover over the registered tenant for Identity Authentication and click Identity Authentication Admin Console). You can manage the registered tenant for Identity Authentication like any other registered identity provider.
Note
It will take about 2 minutes for the trust configuration with the tenant for Identity Authentication to become
active.
Note
Each SAP Cloud Platform account is a separate service provider in the tenant for Identity Authentication.
Tip
If you need each of your SAP Cloud Platform applications to be represented by its own service provider, you
must create and use a separate account for each application. See Creating Accounts [page 20].
Related Information
If you already have an existing on-premise system with a populated user store, you can configure SAP Cloud
Platform applications to use that on-premise user store. This approach is similar to implementing identity
federation with a corporate identity provider. In that way, applications do not need to keep the whole user
database, but request the necessary information from the on-premise system.
Context
Applications can use the on-premise user store to:
● check credentials
● search for users
● SAP Single Sign-On with a SAP NetWeaver Application Server for Java System - the applications on SAP
Cloud Platform connect to the SAP on-premise system using Destination API (and, if necessary, SAP HANA
Cloud Connector), and make use of the user store there.
● Microsoft Active Directory - this is an LDAP server that can serve as an on-premise user store. The
applications on SAP Cloud Platform connect to the LDAP server using SAP HANA cloud connector, and make
use of the user store there.
Related Information
Overview
You can configure applications running on SAP Cloud Platform to use a user store of an SAP NetWeaver (7.2 or
higher) Application Server for Java system and a SAP Single Sign-On system. That way SAP Cloud Platform does
not need to keep the whole user database, but requests the necessary information from an on-premise system.
Prerequisites
When deploying the application, you have to set system properties of the application VM. For more information,
see Configuring VM Arguments [page 1145].
Table 429:
System Property Value Description
Note
The WAR file that you are using as a source during the deployment has to be protected declaratively or
programmatically. For more information, see Enabling Authentication [page 1326].
Note
The VM arguments passed using this command will have effect only until you re-deploy the application.
Context
The on-premise system is an AS Java with a deployed SCA from SAP Single Sign-On (SSO) 2.0. For the
configuration of the on-premise AS Java system, proceed as follows:
Procedure
For more information about the role assignment process, see Assigning Principals to Roles or Groups.
2. If necessary, set the policy configuration to use the appropriate authentication method.
○ Basic authentication
The on-premise AS Java system is configured to use basic authentication by default. That means the
sap.com/tc~sec~scim~server*scim_v1 policy configuration is set to use basic Web template by
default.
○ Client certificate authentication
For client certificate authentication, you need to update the sap.com/tc~sec~scim~server*scim_v1
policy configuration to use client_cert Web template. In addition, you have to configure the on-premise
system to use the client certificate properly. For more information about the on-premise system
configuration, see Using X.509 Client Certificates on the AS Java.
For more information about the policy configuration, see Editing the Authentication Policy of AS Java
Components.
3. If your user does not exist in the on-premise system, create a technical user.
For the proper communication with the on-premise AS Java system, you need to configure the destination of the
Java application on SAP Cloud Platform. For more information, see Configuring Destinations from the Cockpit
[page 344].
You have to set the following properties for the destination of the cloud application:
Table 430:
Destination Property Value Description
URL: https://<AS Java Host>:<AS Java HTTPS Port>/scim/v1/ (if the on-premise AS Java system is exposed via a reverse proxy), or http://<Virtual host configured in Cloud Connector>:<Virtual port>/scim/v1/ (the virtual URL configured in the Cloud Connector, if the on-premise system is exposed via the Cloud Connector). In the latter case, the configured protocol should be HTTP, as the connectivity service uses secure tunneling to the on-premise system.
You can use Microsoft Active Directory as an on-premise LDAP server providing a user store for your SAP Cloud
Platform applications.
Prerequisites
Context
When deploying the application, you have to set system properties of the application VM. For more information,
see Configuring VM Arguments [page 1145].
Table 431:
System Property Value Description
Note
The WAR file that you are using as a source during the deployment has to be protected declaratively or
programmatically. For more information, see Enabling Authentication [page 1326].
Example
Note
The VM arguments passed using this command will have effect only until you re-deploy the application.
Create the required destination and configure the SAP HANA Cloud Connector as described in Configuring User Store in the Cloud Connector [page 533].
Register clients, manage access tokens, configure scopes and perform other OAuth configuration tasks.
Prerequisites
● You have an account with administrator role in SAP Cloud Platform. See Account Member Roles [page 30].
● You have developed an OAuth-protected application (resource server). See Protecting Applications with
OAuth 2.0 [page 1340].
● You have deployed the application on SAP Cloud Platform. See Deploying and Updating Applications [page
1043].
Procedure
1. In your Web browser, log on to the cockpit, and select an account. See Cockpit [page 97].
Field Description
Subscription: The application for which you are registering this client. To be able to register for a particular application, this account
Note
The client ID must be globally unique within the entire SAP Cloud Platform.
Confidential: If you mark this box, the client ID will be protected with a password. You will need to supply the password here, and provide it to the client.
Redirect URI: The application URI to which the authorization server will connect the client with the authorization code.
Token Lifetime: The token lifetime. This value applies to the access token and authorization code.
Results
Define scopes for your OAuth-protected application to fine-grain the access rights to it.
Procedure
1. In your Web browser, log on to the cockpit, and select an account. See Cockpit [page 97].
By revoking access tokens, you can immediately reject access rights you have previously granted. You may wish to revoke an access token if you believe the token has been stolen, for example.
You can revoke tokens through:
● The cockpit - an administrator user can use the cockpit to revoke tokens on behalf of different end users.
● The end user UI - an end user can access their own tokens (and no other user's) and revoke the required ones using that UI.
1. In the Cockpit, choose Security > OAuth, and go to the Branding tab.
2. Click the End User UI link. The end user UI opens in a new browser window, showing all access tokens issued for the current user.
3. Choose Revoke for the tokens you want to revoke.
Use a QR code for easier copying of the OAuth authorization code on mobile devices.
Context
When your account is configured for trust with a corporate identity provider (IdP), it is often impossible to connect
to the IdP directly using a personal mobile device. The corporate IdP is often part of a protected corporate
network, which does not allow personal devices to access it. To facilitate OAuth authentication on mobile devices,
you can use the end user UI's QR code generation option. It provides as a scannable QR code the authorization
code sent by the OAuth authorization server.
Procedure
You can customize the look and feel of the authorization page displayed to end users with your corporate branding.
This will make it easier for them to recognize your organization.
Procedure
1. In your Web browser, log on to the cockpit, and select an account. See Cockpit [page 97].
Results
The authorization page that end users see contains the company logo and colors you specify. The following image
shows an example of a customized authorization page.
Propagate users from external applications with SAML identity federation to OAuth-protected applications
running on SAP Cloud Platform. Exchange the user ID and attributes from a SAML assertion for an OAuth access
token, and use the access token to access the OAuth-protected application.
Prerequisites
● You have an application external to SAP Cloud Platform. The application is integrated with a third-party library
or system functioning as a SAML identity provider. That application has a SAML assertion for each
authenticated user.
Note
How the external application and its SAML identity provider work together and communicate is outside the
scope of this documentation. They can be separate applications, or the external application may be using a
library integrated in it.
Note
If you are using a separate third-party identity provider system for this scenario, make sure you have correctly configured the trust between the external application and the identity provider system. Refer to the identity provider vendor's documentation for details.
Context
This scenario follows the SAML 2.0 Profile for OAuth 2.0 Client Authentication and Authorization Grants
specification. The scenario is based on exchanging the SAML (bearer) assertion from the third-party identity
provider for an OAuth access token from the SAP Cloud Platform authorization server. Using the access token,
the external application can access the OAuth-protected application.
The graphic below illustrates the scenario implemented in terms of SAP Cloud Platform.
1. An external application has a SAML assertion on behalf of a successfully logged-on user. The application needs to propagate that user and the relevant information (attributes, privileges, and so on) to the OAuth-protected application running on SAP Cloud Platform.
Procedure
1. Configure SAP Cloud Platform for trust with the SAML identity provider. See Configure Trust to the SAML
Identity Provider [page 1410].
2. Register the external application as an OAuth client in SAP Cloud Platform. See Registering an OAuth Client
[page 1426].
3. Make sure the SAML (bearer) assertion that the external application presents contains the following
information:
Table 432:
<saml:NameID
    Format="urn:oasis:names:tc:SAML:1.1:nameid-format:unspecified"
    xmlns:saml="urn:oasis:names:tc:SAML:2.0:assertion">p12356789</saml:NameID>
Table 433:
(The table's description cell, which refers to the signing certificate, is only partially recoverable. The attribute examples read:)
<Attribute Name="…">
    <AttributeValue
        xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
        xsi:type="xs:string">test@sap.com</AttributeValue>
</Attribute>
<Attribute Name="first_name">
    <AttributeValue
        xmlns:xs="http://www.w3.org/2001/XMLSchema"
        xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
        xsi:type="xs:string">Jon</AttributeValue>
</Attribute>
4. In the code of the OAuth-protected application, you can retrieve the user attributes using the relevant SAP
Cloud Platform API. See Working with User Attributes [page 1335].
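Step 4 can be sketched as follows. The sketch uses only the standard Java Principal interface; on SAP Cloud Platform you would obtain the principal from HttpServletRequest.getUserPrincipal() inside the protected application, and retrieve further attributes through the user attribute API referenced above. The stub principal and user ID below are illustrative:

```java
import java.security.Principal;

public class UserIdDemo {

    // Resolves the authenticated user's ID; the name corresponds to the
    // NameID carried in the SAML bearer assertion.
    static String resolveUserId(Principal principal) {
        if (principal == null) {
            throw new IllegalStateException("request is not authenticated");
        }
        return principal.getName();
    }

    public static void main(String[] args) {
        // Stub standing in for HttpServletRequest.getUserPrincipal(),
        // so the sketch runs outside a servlet container.
        Principal stub = new Principal() {
            @Override
            public String getName() {
                return "p12356789";
            }
        };
        System.out.println(resolveUserId(stub)); // prints the propagated user ID
    }
}
```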
Change the default user base for access to the cockpit of your SAP Cloud Platform account.
Prerequisites
You have an Identity Authentication tenant configured. See the Identity Authentication documentation.
Context
By default, the cockpit is configured to use SAP ID Service as an identity provider for user authentication. Identity
Authentication, however, uses the SAP user base and default tenant settings. If you want to use your custom user
base or custom Identity Authentication tenant settings (such as two-factor user authentication, or corporate user
store, for example), you can use a custom Identity Authentication tenant as an identity provider.
Once you configure cockpit trust with an Identity Authentication tenant, you access the cockpit using the following URL:
https://account-<account-name>.hana.ondemand.com
If you open the default cockpit URL (see Cockpit [page 97]), SAP ID Service will be used for user authentication.
Note
This is a Beta feature and must not be used productively. The current implementation is for cockpit access to
your productive SAP Cloud Platform account only. The SAP Cloud Platform console client, Eclipse tools and
other tools still use only the user base and trust settings of SAP ID Service.
Tip
Once you configure the cockpit to use an Identity Authentication tenant, we recommend removing most account members coming from the default user base, SAP ID Service. Keeping one or several SAP ID Service
users with administrator role can be useful for restoring account access in case the trust with the Identity
Authentication tenant stops working properly.
Note
Changing the identity provider for the cockpit does not affect the identity provider settings for your
applications in this account.
1. In your Web browser, open the cockpit. See Cockpit [page 97].
After you complete these steps, the cockpit is configured to trust and use the Identity Authentication tenant
as an identity provider for user access to this account.
The Identity Authentication tenant’s Administration Console, in turn, displays the SAP Cloud Platform cockpit
as a registered application. The application has <Identity Authentication tenant ID> as display
name, and https://account.hana.ondemand.com/<account name>/admin as SP name.
You can now configure the cockpit user access for this account. Go to the Members tab in the cockpit. You can
see all cockpit users, with their IDs, roles and user base, listed here. To add a new member, choose Add Members
and configure the member users from the respective user base (Identity Authentication tenant). See also
Managing Members [page 26].
To configure the Identity Authentication tenant, choose the Administration Console button.
The security guide provides an overview of the security-relevant information that applies to HTML5 applications.
Related Information
1.9.4.1 Authentication
SAP Cloud Platform uses the Security Assertion Markup Language (SAML) 2.0 protocol for authentication and
single sign-on.
By default, the SAP Cloud Platform is configured to use the SAP ID service as identity provider (IdP), as specified
in SAML 2.0. You can configure a trust relationship to your custom IdP to provide access to the cloud using your
own user database. For information, see ID Federation with the Corporate Identity Provider [page 1406].
HTML5 applications are protected with SAML2 authentication by default. For publicly accessible applications, the
authentication can be switched off. For information about how to switch off authentication, see Authentication
[page 1119].
Permissions for an HTML5 application are defined in the application descriptor file. For more information about
how to define permissions for an HTML5 application, see Authorization [page 1120].
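As a sketch, a permission for an HTML5 application might be declared in the application descriptor roughly as follows. The descriptor file name (neo-app.json), the permission name, and the protected path are assumptions here; Authorization [page 1120] describes the authoritative format:

```json
{
  "authenticationMethod": "saml",
  "securityConstraints": [
    {
      "permission": "accessSensitiveData",
      "description": "Access sensitive data",
      "protectedPaths": [ "/sensitive/" ]
    }
  ]
}
```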
Permissions defined in the application descriptor are only effective for the active application version. To protect
non-active application versions, the default permission NonActiveApplicationPermission is defined by the
system for every HTML5 application.
To assign users to a permission of an HTML5 application, a role must be assigned to the corresponding
permission. As a result, all users who are assigned to the role get the corresponding permission. Roles are not
application-specific but can be reused across multiple HTML5 applications. For more information about creating
roles and assigning roles to permissions, see Managing Roles and Permissions [page 1215].
Note
HTML5 application permissions can only protect access to the REST service through the HTML5 application. If the REST service is otherwise accessible on the Internet or a corporate network, it must implement its own authentication and authorization concept.
To access a system that is running in an on-premise network, you can set up an SSL tunnel from your on-premise
network to the SAP Cloud Platform using the SAP Cloud Platform Cloud Connector.
For more information about setting up the Cloud connector, see the Cloud Connector Operator's Guide.
Related Information
Cross-site scripting (XSS) is one of the most common types of malicious attacks on web applications.
If an HTML5 application is connected to a REST service, the corresponding REST service must take measures to protect the application against this type of vulnerability. For REST services implemented on the SAP Cloud Platform, a common output encoding library may be used to protect applications. For more information about XSS protection on the SAP Cloud Platform, see Protecting from Cross-Site Scripting (XSS) [page 1355].
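As an illustration of output encoding (a hand-rolled sketch, not the common output encoding library the text refers to), a REST service could HTML-encode user-supplied data before writing it into a response:

```java
public class HtmlEncode {

    // Replaces the characters that enable HTML/script injection with
    // their HTML entity equivalents.
    static String encodeForHtml(String input) {
        StringBuilder out = new StringBuilder(input.length());
        for (char c : input.toCharArray()) {
            switch (c) {
                case '<':  out.append("&lt;");   break;
                case '>':  out.append("&gt;");   break;
                case '&':  out.append("&amp;");  break;
                case '"':  out.append("&quot;"); break;
                case '\'': out.append("&#x27;"); break;
                default:   out.append(c);
            }
        }
        return out.toString();
    }

    public static void main(String[] args) {
        // A script payload is neutralized into inert text.
        System.out.println(encodeForHtml("<script>alert('x')</script>"));
    }
}
```

A real service should prefer a maintained encoding library over a hand-rolled helper like this one.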
Cross-Site Request Forgery (CSRF) is another common type of attack on web applications.
If an application connects to a REST service, the corresponding REST service must take measures to protect against CSRF. For REST services implemented on the SAP Cloud Platform, a CSRF prevention filter may be used in the corresponding REST service. For more information about CSRF protection on the SAP Cloud Platform, see Protecting from Cross-Site Request Forgery [page 1347].
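As an illustration of the synchronizer-token idea behind such a filter (a sketch, not SAP's CSRF prevention filter), a service can issue a random token with the session and reject state-changing requests whose token does not match, using a constant-time comparison:

```java
import java.security.MessageDigest;
import java.security.SecureRandom;
import java.util.Base64;

public class CsrfToken {

    private static final SecureRandom RNG = new SecureRandom();

    // Issues an unguessable token to be stored in the session and
    // echoed back by the client with each state-changing request.
    static String issueToken() {
        byte[] raw = new byte[32];
        RNG.nextBytes(raw);
        return Base64.getUrlEncoder().withoutPadding().encodeToString(raw);
    }

    // Constant-time comparison of the session token with the one
    // presented in the request.
    static boolean isValid(String sessionToken, String requestToken) {
        if (sessionToken == null || requestToken == null) {
            return false;
        }
        return MessageDigest.isEqual(sessionToken.getBytes(), requestToken.getBytes());
    }

    public static void main(String[] args) {
        String token = issueToken();
        System.out.println(isValid(token, token));    // matching token accepted
        System.out.println(isValid(token, "forged")); // non-matching token rejected
    }
}
```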
Related Information
If you have questions or encounter an issue while working with SAP Cloud Platform, you can address them as
described below.
Depending on your account, you can use the following support media:
Table 434:
Developer Accounts Customer and Partner Accounts
To report an incident (issue) in the SAP Support Portal, follow the steps below.
Before reporting an incident, check the availability of the platform at SAP Cloud Platform Status Page .
Note
When you specify the correct product, installation and system, the correct support SLA will be applied to
your case.
Please note that not choosing the appropriate product, installation, and system may negatively affect the
processing of the incident. For more information on product, installation, and system values, see KBA
2379404 .
1. Select language, set priority of the incident and enter a subject. Note that if you set a high or very high priority,
you have to also describe the business impact of the incident.
2. To help the support staff process your issue as fast as possible, please provide the following information in
the Description field:
○ Landscape and account name. In the cockpit, open the affected account, and copy the URL.
○ Java application name and URL (only when the problem is related to Java applications). In the cockpit,
open the respective Java application’s Overview page.
○ Database or schema ID (only when the problem is related to a database system or schema). In the
cockpit, see the ID column by navigating to Persistence Databases & Schemas .
3. From the Component dropdown list, select the component name of the area that best fits your issue. Selecting the right component will direct your issue to the corresponding support team. To check the complete list of components, see SAP Note 1888290.
4. Enter the steps to reproduce the issue and if necessary, add some attachments.
5. Optionally, define contact(s) apart from the reporter, who is filled in automatically.
6. When ready, choose Submit to create the incident.
Additional Resources
The Eclipse tools come with a wizard for gathering support information in case you need help with a feature or
operation (during deploying/debugging applications, logging, configurations, and so on).
Context
The wizard collects the information in a ZIP file, which can later be sent to SAP support. This way, SAP support developers can get a better understanding of your environment and process the issue faster.
Procedure
Note
If you select Screenshot, your currently open Eclipse windows and views will be captured as a picture and added to the ZIP file. Make sure you don't reveal sensitive information.
3. In the File Name field, specify the ZIP file name and location.
4. Choose Finish.
You can create a support ticket, attach the ZIP file to it and send it to the relevant OSS component. For more
information, see Get Support [page 1444].
SAP Cloud Platform is a dynamic product with continuous production releases (updates). To stay aware of the new features delivered with each takt (release cycle), check the Release Notes regularly.
● Bi-weekly updates (standard) - each second Thursday, aligned with the contractual obligations of SAP Cloud
Platform to customers and partners. Such updates normally do not affect productive applications but most
often affect the ability to deploy to the platform. Exact details are sent as notification in advance, together
with a notification on completion of the update. During the update, a new platform release is deployed.
● Immediate updates - in case of fixes required for bugs that affect productive application operations, or due to
urgent security fixes. In some cases, this might lead to downtime or application restart, for which the
application groups will receive a notification.
● Major upgrades - these happen rarely, in a bigger maintenance window, up to 4 times per year, usually on Saturdays from 8:00 to 14:00 CET. When such an upgrade is needed, it is communicated one week in advance.
You can follow the availability of the platform, as well as announcements about upcoming updates and downtimes, at https://sapcp.statuspage.io/ .
To receive regular information about landscape downtimes and news, you need to subscribe to the mailing list:
https://listserv.sap.com/mailman/listinfo/hanacloud-announce
An operating model clearly defines the separation of tasks between SAP and the customer during all phases of an
integration project.
SAP Cloud Platform and its services have been developed on the assumption that specific processes and tasks
will be the responsibility of the customer. The following table contains all processes and tasks involved in
Changes to the operating model defined for the services in scope are published using the What's New (release
notes) section of the platform. Customers and other interested parties must review the product documentation
on a regular basis. If critical changes are made to the operating model, which require action on the customer side,
an explicit notification is sent by e-mail to the affected customers.
It is not the intent of this document to supplement or modify the contractual agreement between SAP and the
customer for the purchase of any of the services in scope. In the event of a conflict, the contractual agreement
between SAP and the customer as set out in the Order Form, the General Terms and Conditions of SAP Cloud
Services, the supplemental terms and conditions, and any resources referenced by those documents always
takes precedence over this document.
The responsibilities for operating SAP Cloud Platform are listed in the service catalog below.
The task descriptions below are recovered from the service catalog table; the columns marking whether SAP or the customer is responsible for each task are not preserved in this extract.
● Subscribe to the communication channels offered by SAP for receiving prompt information about any service disruptions, critical maintenance activities affecting the customer system, and change requests requiring action on the customer side.
● Protect IT assets such as systems, network, and data from threats that arise from unauthorized physical access or physical influence on those assets.
● Provisioning: provisioning of resources and systems to customers in accordance with the ordered package and requirements. This includes the allocation and provisioning of technical (physical and virtual) resources, such as storage, network, compute units, systems, and database hosts, the deployment of the application software, and the proper initial configuration of quotas, service subscriptions, permissions, and trust configuration.
● Enable resources and services (for example, SAP HANA components) provisioned according to the ordered package and requirements.
● Perform upgrades of the infrastructure, systems, and services in a bi-weekly cycle. Emergency changes, for example, triggered by Incident Management processes, have accelerated testing, approval, and implementation.
● Consume the latest version of provisioned infrastructure, systems, and services (for example, Java runtime, operating system) to run the application in the customer account.
● Collaborate with SAP to ensure timely processing of change requests affecting the resources in the customer account.
● Prompt delivery of patches for security vulnerabilities in the operating system and database hosted by the application. This includes reviewing the priority of the relevant patches, assessing the risk, and finally implementing the patch via the Change Management process.
● Confirm incident resolution in the incident tracking system (BCP).
● Confirm service request completion in the service request tracking system (BCP).
● Restore previously backed-up data to recover to a consistent state. Verify the completeness of the restored data based on log files created during the recovery and smoke tests to verify the system's consistency.
● Collaborate with SAP to ensure timely processing of data restores if required.
● Validate logical integrity and consistency of the restored data.
● Provide infrastructure, tools, and application programming interfaces for the lifecycle management and operations of the application in the customer account.
● Regularly adopt the latest versions of the tools for lifecycle management and operations offered at the SAP Development Tools site.
This page summarizes how we changed SAP Cloud Platform after receiving your comments from our feedback
form.
Other important changes in the software and documentation are described in the Release Notes.
Table 436:
Documentation | Description
Managing Java Subscriptions [page 33] | Added more information about the consumer-provider model, and how HTML5 and Java applications are subscribed.
subscribe [page 288], unsubscribe [page 297], list-subscribed-accounts [page 240], and list-subscribed-applications [page 241] | Added a reference to the documentation describing how to manage HTML5 subscriptions.
Table 437:
Documentation | Description
Accessing the Document Service from an HTML5 Application [page 625] | Added instructions on how to access the document service from an HTML5 application.
February 2, 2017
Table 438:
Documentation | Description
Table 439:
Documentation | Description
Enabling SAP HANA Interactive Education (SHINE) [page 82] | Added instructions for enabling the SAP HANA Interactive Education (SHINE) demo application.
Table 440:
Documentation | Description
Table 441:
Documentation | Description
Creating an SAP HANA XS Hello World Application Using SAP HANA Studio [page 73] | Eclipse / SAP HANA Studio.
Table 442:
Documentation | Description
SAP Cloud Platform Cloud Connector [page 480] | Added a new section Tasks, listing the procedures required to assign the Cloud connector to an SAP Cloud Platform account.
November 3, 2016
Table 443:
Documentation | Description
Storing Passwords [page 1379] | Added a note recommending that you cache obtained values in the code example Using the Password Storage API.
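The note mentioned for Storing Passwords recommends caching values obtained from the password storage rather than fetching them on every request. A minimal sketch of that pattern follows; the class and method names here are illustrative placeholders, not the actual Password Storage API.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Function;

// Hypothetical caching wrapper: the first getPassword(alias) call performs
// the (expensive) lookup; later calls for the same alias reuse the cached value.
class CachingPasswordStore {
    private final Function<String, char[]> lookup;                // stands in for the real retrieval call
    private final Map<String, char[]> cache = new ConcurrentHashMap<>();

    CachingPasswordStore(Function<String, char[]> lookup) {
        this.lookup = lookup;
    }

    // Returns the cached value if present; otherwise performs the lookup
    // exactly once and stores the result for subsequent calls.
    char[] getPassword(String alias) {
        return cache.computeIfAbsent(alias, lookup);
    }
}
```

The same idea applies to any store whose reads are remote calls: keep the cache local to the application and invalidate it when the stored value is changed.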
Table 444:
Documentation | Description
Configuring a Service Channel for HANA Database [page 535] | Added the information that service channels can now also be used with an MDC (Multitenant Database Container) trial instance. See also: Using a Trial SAP HANA Instance [page 1098].
HttpDestination API and DestinationFactory [page 316] | Added the information that this procedure requires a successful setup of the Java development environment as a prerequisite to display all API packages.
October 6, 2016
Table 445:
Documentation | Description
Creating RFC Destinations [page 348] | Step 7 (additional properties): added a note with a link to the detailed description of RFC-specific JCo properties.
Creating HTTP Destinations [page 347] | Step 9 (additional properties): added a note with a link to the detailed description of WebIDE-specific properties.
Initial Configuration [page 504] | Added some detailed information about proxy settings for the Cloud connector setup.
September 8, 2016
Table 446:
Documentation | Description
Consuming SAP Cloud Platform Connectivity (HANA XS) [page 466] | Updated the Related Links, now pointing to the correct topic: Connectivity for SAP HANA XS (Trial Version) [page 477].
Table 447:
Documentation | Description
Deleting Accounts [page 23] | Added a note to explain that the Delete (trashcan) icon appears on the tile for the account in question only if you created the account yourself.
Using XS Destinations for Internet Connectivity [page 468] | Updated the response after executing a HANA XS application (step 5 in the procedure) to Google Maps response values.
Table 448:
Documentation | Description
Table 449:
Documentation | Description
Table 450:
Documentation | Description
Table 451:
Documentation | Description
Deleting a Repository (Cockpit) [page 659] | This topic now includes a link to the reset-ecm-key command, with which you can request a new repository key. This helps avoid deleting a repository just because you forgot the key.
Table 452:
Documentation | Description
HTML5: Getting Started [page 84] | The link to Setting up SAP Web IDE now points directly to the SAP Web IDE documentation. The link to Installing Eclipse IDE now points directly to the Eclipse topic.
April 7, 2016
Table 453:
Function | Description
Connecting to SAP HANA Schemas via the Eclipse IDE [page 935] | Added a note that you must have created a schema previously to be able to select it in this step.
Table 454:
Documentation | Description
Get Support [page 1444] | Added SAP Note 560499. You can use it when you need to contact the 24/7 phone hotlines.
Table 455:
Function | Description
Monitoring and Logging in the cockpit | A filter in the Configure Loggers dialog box in Java applications allows you to filter the list by logger name and thereby show only the loggers that you are interested in. For more information, see Using Logs in the Cockpit [page 1177].
Table 456:
Documentation | Description
Creating an SAP HANA Database from the Cockpit [page 830] | Added examples for both productive and trial multitenant database containers (MDC).
Connecting to SAP HANA Databases via the Eclipse IDE [page 932] | Added a screenshot for the step where you enter your database user name and password.
Connecting to the Remote SAP MaxDB Database [page 936] | Added the prerequisite that you need the connection details, which you obtained when you opened the database tunnel, to perform the procedure.
Creating an SAP HANA XS Hello World Application Using SAP HANA Studio [page 73] | Added a note that newly created SAP HANA XS applications are only visible in the cockpit once you have activated them.
(Optional) Installing SAP JVM [page 44] | Updated information on the Visual C++ Runtime. If you use Microsoft Windows as your operating system, you need to install the Visual C++ 2010 Runtime before you can use SAP JVM.
Installing SAP Development Tools for Eclipse [page 46] | Added the field name where you have to enter the URL when you install new software in Eclipse.
Creating a Project [page 87] | The procedure now reflects the changed behavior of the SAP Web IDE.
Using Custom Header Protection [page 1351] | Added an improved explanation about where to find the XSRF protection filter class and how to use it (no need to instantiate or extend).
Table 457:
Function | Description
Connectivity Service | We now display the error code and the reason for connection failure when you use the Check Connection button in the connectivity destination editor (in the Cockpit). For more information, see Checking the Availability of a Destination (Cockpit) [page 350].
Table 458:
Documentation | Description
Business Services for YaaS [page 1012] | Added information about YaaS and SAP Cloud Platform and where business services reside in the whole picture. There is also an illustration showing how the different components interact.
Table 459:
Documentation | Description
Assigning Destinations for HTML5 Applications [page 1214] | Reorganized the topic to contain prerequisites and provide step-by-step instructions.
Using Java EE 6 Web Profile [page 1036] | A new prerequisite has been added to ensure the correct setup of the SAP Cloud Platform Tools.
Installing the SDK [page 44] | Explained in more detail how to proceed after the SDK archive file is downloaded and extracted.
Table 460:
Documentation | Description
Managing Members [page 26] | Added information about new features in the member list: The names of members are now displayed in addition to their user IDs (note: the name of a member is displayed only after the member visits the account for the first time). The e-mail option is displayed only after the member visits the account for the first time; sending e-mails to members is only possible after the recipient has logged on to the account.
Table 461:
Documentation | Description
Adding Container-Managed Persistence with JPA (Java EE 6 Web Profile SDK) [page 795] | A section about deploying applications using persistence on the cloud from the Eclipse IDE was included.
Enabling Logout [page 1337] | A new section dedicated to protecting logout resources from cross-site request forgery (CSRF) was added. A confusing section about JavaScript resource protection was removed.
Table 462:
Documentation | Description
Adding Container-Managed Persistence with JPA (Java EE 6 Web Profile SDK) [page 795] | A section was added for configuring the persistence.xml file.
Table 463:
Documentation | Description
ID Federation with the Corporate Identity Provider [page 1406] | The entire content is reworked in several ways: it is now clearer what a local service provider is and what is required to configure it, and the separate parts of the content are accessible from the tree.
Using the Authorization Management API [page 1333] | The scope parameters are removed from the example. Scopes are redundant for this scenario and are ignored (based on the OAuth 2.0 client credentials flow).
Using Java EE 6 Web Profile [page 1036] | The code sample in section Call the EJB from the JSP was improved.
Setting Up the Development Environment [page 43] | Links to Updating Java Tools for Eclipse and SDK [page 53] and SAP Development Tools for Eclipse have been included.
Setting Up SDK Location and Landscape Host in Eclipse [page 47] | A new step was added to the procedure describing that you need to select the directory where you have downloaded the JVM.
Adding Container-Managed Persistence with JPA (Java EE 6 Web Profile SDK) [page 795]; Adding Application-Managed Persistence with JPA (Java Web SDK) [page 807] | A new step was added to the Create a Dynamic Web Project and Servlet procedure describing that you need to select the Generate web.xml deployment descriptor checkbox in the Web Module configuration settings.
Configuring a Service Channel for HANA Database [page 535] | A note was added stating that this procedure requires a production SAP HANA instance and cannot be performed using a trial instance.
Coding Samples
Any software coding and/or code lines / strings ("Code") included in this documentation are only examples and are not intended to be used in a productive system
environment. The Code is only intended to better explain and visualize the syntax and phrasing rules of certain coding. SAP does not warrant the correctness and
completeness of the Code given herein, and SAP shall not be liable for errors or damages caused by the usage of the Code, unless damages were caused by SAP
intentionally or by SAP's gross negligence.
Accessibility
The information contained in the SAP documentation represents SAP's current view of accessibility criteria as of the date of publication; it is in no way intended to be a
binding guideline on how to ensure accessibility of software products. SAP in particular disclaims any liability in relation to this document. This disclaimer, however, does
not apply in cases of willful misconduct or gross negligence of SAP. Furthermore, this document does not result in any direct or indirect contractual obligations of SAP.
Gender-Neutral Language
As far as possible, SAP documentation is gender neutral. Depending on the context, the reader is addressed directly with "you", or a gender-neutral noun (such as "sales person" or "working days") is used. If, when referring to members of both sexes, the third-person singular cannot be avoided or a gender-neutral noun does not exist, SAP reserves the right to use the masculine form of the noun and pronoun. This is to ensure that the documentation remains comprehensible.
Internet Hyperlinks
The SAP documentation may contain hyperlinks to the Internet. These hyperlinks are intended to serve as a hint about where to find related information. SAP does not
warrant the availability and correctness of this related information or the ability of this information to serve a particular purpose. SAP shall not be liable for any damages
caused by the use of related information unless damages have been caused by SAP's gross negligence or willful misconduct. All links are categorized for transparency
(see: http://help.sap.com/disclaimer).