
Preface

As a ServiceNow professional, I believe that there is no harm in learning other applications because, very often, we come across technologies that need to be managed with ServiceNow. The genesis of this book lies in the fact that several ServiceNow professionals have struggled when automating other applications with ServiceNow. This book is my humble attempt to address this. It provides a step-by-step approach for setting up ServiceNow for the automation of other applications.
Who this book is for
This book targets IT professionals and administrators who have some
experience of working with ITSM or ServiceNow already and are looking
to move into ServiceNow automation. It is advisable to have a basic level
of administration experience with ServiceNow. Familiarity with JavaScript
is assumed.
What this book covers
Chapter 1, The IT Challenges and Power of Automation, covers various challenges of IT infrastructure and the motives behind the automation of infrastructure. Furthermore, this chapter also touches on industry CI discovery methods with the ServiceNow Discovery product.

Chapter 2, ServiceNow Configuration, covers the ServiceNow automation, or rather orchestration, process setups that are essential, such as the single MID Server, multiple MID Servers, and Discovery probes and sensors.

Chapter 3, Manage Virtual Machine with ServiceNow, covers various virtual machine products, such as Amazon EC2, VMware, and Microsoft Azure, which can be automated with ServiceNow; the chapter includes a step-by-step process.

Chapter 4, Automation with Puppet, covers the basics of Puppet, including the architecture of Puppet and the installation of the Puppet master and Puppet nodes. In the second section, you will learn about the ServiceNow Puppet application and what can be automated within the Puppet environment.

Chapter 5, Automation with Chef, covers the basics of Chef, including the architecture of Chef and the installation of the Chef server, Chef workstation, and Chef nodes. In the second section, you will learn about the ServiceNow Chef application and what can be automated within the Chef environment.
To get the most out of this
book
ServiceNow is a cloud-hosted enterprise-level application and requires
only a standard browser (Internet Explorer/Firefox/Safari/Google Chrome)
to access ServiceNow. You can even claim a personal development
instance by registering at https://developer.servicenow.com/app.do#!/home.
Download the color images
We also provide a PDF file that has color images of the screenshots/diagrams used in this book. You can download it at http://www.packtpub.com/sites/default/files/downloads/ServiceNowAutomation_ColorImages.pdf.
Conventions used
There are a number of text conventions used throughout this book.

CodeInText indicates code words in text, database table names, folder names, filenames, file extensions, pathnames, dummy URLs, user input, and Twitter handles. Here is an example: "Status displays all PIDs, or rather processes, that are running, such as chef-server-webui and chef-solr."

Bold indicates a new term, an important word, or words that you see
onscreen. For example, words in menus or dialog boxes appear in the text
like this. Here is an example: "On the same page, you should be able to
see the Configure Network button as well, as we saw in the previous
sections."

Warnings or important notes appear like this.

Tips and tricks appear like this.


Get in touch
Feedback from our readers is always welcome.

General feedback: Email feedback@packtpub.com and mention the book title


in the subject of your message. If you have questions about any aspect of
this book, please email us at questions@packtpub.com.

Errata: Although we have taken every care to ensure the accuracy of our
content, mistakes do happen. If you have found a mistake in this book, we
would be grateful if you would report this to us. Please visit www.packtpub.com/submit-errata, selecting your book, clicking on the Errata Submission Form link, and entering the details.

Piracy: If you come across any illegal copies of our works in any form on
the Internet, we would be grateful if you would provide us with the
location address or website name. Please contact us at copyright@packtpub.com
with a link to the material.

If you are interested in becoming an author: If there is a topic that you


have expertise in and you are interested in either writing or contributing to
a book, please visit authors.packtpub.com.
Reviews
Please leave a review. Once you have read and used this book, why not
leave a review on the site that you purchased it from? Potential readers can
then see and use your unbiased opinion to make purchase decisions, we at
Packt can understand what you think about our products, and our authors
can see your feedback on their book. Thank you!

For more information about Packt, please visit packtpub.com.


The IT Challenges and Power
of Automation
Automation might not be a new term; you will probably have heard it when talking about proofs of concept (POC), in business meetings, and so on. If you have prior experience with ServiceNow, then you should be able to correlate effortlessly. In my experience, I have come across many scenarios where meeting a client's requirement for automation was vital. However, while recommending automation is very simple, implementing it is a different task. In this chapter, we are going to learn about IT challenges, the power of automation, and where automation is important.

This chapter will cover the following points:

Understanding IT challenges

Why you should automate

Skills required for automation

Understanding the power of automation

Introducing configuration management tools


Understanding IT challenges
Although there are many challenges in IT, this book is mainly focused on
IT infrastructure challenges. Since cloud computing has been introduced,
many organizations have moved into the cloud, but many are still hesitant.
Every organization, infrastructure, and culture is unique, so a reluctance to
change is understandable. Let's explore some of the major challenges
organizations face and where automation might be advisable:

Service delivery and availability often encourage automation.


Take a very simple scenario, where a virtual machine is required
in a project environment. Now just imagine how good it would be
if the delivery time of a service could be reduced to enrich user
experience, and enhance service delivery and availability. Manual
work also encourages automation, as some processes are very
complex and tough to maintain from an auditing perspective; here,
it is important to remember that a non-IT organization may not
have enough skilled IT associates.

Compliance is a term you should be familiar with. Compliance is necessary for conforming to internal or external policies, where an internal policy is governed by the organization, and an external policy is governed by the government, standards bodies, and so on. By implementing automation, IT best practice can be followed to fulfill auditing requirements.
Why you should automate
Why should you automate? This is a very open-ended question and can
only be answered by an organization and its needs. So, let's understand the
customer's or organization's point of view. Here, we will take an example
of an insurance company where there are few IT staff; automation can help
them in process governance, costing, response time, and so on. Now let's
explore some very common reasons for automation.
Return on investment
Return on investment (ROI) is key for any business owner or stakeholder when assessing benefits like cost saving, governance, quality, operational efficiency, and so on. Orchestration, which is also known as automation, provides the following automation packs in workflows once the plugin is activated:

Microsoft Active Directory

Exchange

F5 network management

Infoblox DDI

Microsoft System Center Configuration Management (SCCM)

Secure shell (SSH)

Workday

SFTP file transfer

There are other components that come under the orchestration umbrella, such as password reset, client software distribution, and so on, but they are beyond the scope of this book. If one of the preceding technologies is being used in your organization, then you may recommend automation through the ServiceNow application. In terms of costing and licensing, orchestration is part of the IT Operations Management (ITOM) suite and its cost is based on the number of nodes in a customer's environment. It's important to remember that a node can be a virtual or physical server, so any node orchestration that is done directly by ServiceNow, or a third party, requires an orchestration license. You will also have to create an ROI matrix to determine the benefits of automation.
Response and delivery time
Many non-IT organizations use a rate card system to determine the performance of IT teams, or rather services. I personally like that kind of arrangement as, all in all, a happy customer is the key to any business. Let's take an example of a traditional IT process: if John Snow wants to access a shared drive to upload a file, he will probably have to follow these steps:

1. Log in to the ServiceNow production instance
2. Search for the shared drive catalog item
3. Fill in the appropriate fields
4. Submit the form

Once the form is submitted, a REQ number, an RITM number, and, at last, a Task number will be generated; the task will be assigned to a support team, where a support group member will pick it up and work on it. If automation is implemented, then the new process will skip the required manual activities and the workflow will be extended to an external system. This will enhance response and delivery time as well. If we consider the service level agreements, or in short, SLAs, with the business units, then by implementing automation, the SLA time with the business units can be decreased to a great extent, which will enhance the user experience:
Request, request item, task generation process
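For completeness, the same request can also be raised from a server-side script. The following is a minimal sketch, assuming the Service Catalog scoped API (sn_sc.CartJS) is available on your instance; the catalog item sys_id is a hypothetical placeholder:

// Order a catalog item programmatically; this generates the same
// REQ and RITM records as a form submission.
var cart = new sn_sc.CartJS();
var request = {
    sysparm_id: 'SHARED_DRIVE_ITEM_SYS_ID', // hypothetical catalog item sys_id
    sysparm_quantity: '1'
};
var result = cart.orderNow(request);
gs.info('Generated request: ' + result.request_number); // for example, REQ0010001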
Consistency and best practice
If a task is assigned to you that requires manual effort, as an individual
you may have your own choice of execution order. Each technician has
their own way of performing the task at hand, and in the absence of
consistency the desired output may vary, which can lead to critical issues
in the production environment. I have often observed that most IT teams
define their own process guide, which is a good practice for maintaining
the quality of the system. In the automation world, you can define the
process flow considering the best practice and that process will be
executed every time - without manual intervention.
Process audit
Global organizations are very particular about their processes. Many organizations follow a standards body against which half-yearly or yearly audits are performed; that's why many stick to their process and continue to implement it. But what about the governance of the process? That's why organizations have a process owner. With automation, however, you can achieve better control over process governance.
Skills required for automation
ServiceNow administration experience is required before moving into
automation. If you want to learn about ServiceNow administration and
development then you can go through my book ServiceNow Cookbook,
which will guide you through the ServiceNow platform, as well as its
administration and development. It is important to mention that if you are
experienced in ServiceNow, automation may still be difficult to grasp at
first. So, let's unpack some of the technologies which will help you during
common automation activities:

Scripting knowledge of SSH: SSH is a network protocol and a


secure way of accessing a remote computer. It provides strong
authentication and secure encrypted data communications between
two systems.

Scripting knowledge of PowerShell: PowerShell is provided by


Microsoft to automate system tasks like batch processing, and to
create system management tools for implementing processes.

Experience in VMware administration: VMware allows users to create multiple virtual computer systems on a single computer or server.

Experience in Windows/Linux: I assume that the


Windows/Linux operating systems don't need any introduction. If
you need to, you can refer to any standard website to learn more
about these operating systems.

Experience in desktop management: Desktop management


software is used for managing the desktop, laptop, and so on
within the company.

Experience in Puppet: Puppet is server management software that follows a client-server architecture, and it can utilize the ServiceNow CMDB to bring computers into their desired state.

Experience in Chef: Chef is also server management software, like Puppet; it follows a client-server architecture and can likewise utilize the ServiceNow CMDB to bring computers into their desired state.

Automation with VMware, Puppet, and Chef will be discussed in detail in Chapter 3, Manage Virtual Machine with ServiceNow, Chapter 4, Automation with Puppet, and Chapter 5, Automation with Chef, respectively.


Understanding the power of
automation
Although more detailed explanations of automation components will be given later on in the chapter, let's look over the ServiceNow workflow:

You will have probably heard of ServiceNow's service catalog


before, as it is available on CMS or front-end to business users or
end users for logging a request. Navigate to the Service Catalog
application | Catalog Definitions | Maintain Items | select any
catalog item | open Workflow of the catalog item.

If you open the workflow, you should be able to view a similar


workflow with different types of activities that will be followed
during the execution of the workflow post-request or form
submission by the end user. The following is a graphical
representation of the workflow:
Item workflow or process flow

If you want to call it a generic workflow then yes, you can. The
ServiceNow workflow is very powerful and capable of interacting
with external systems. Once the orchestration plugin is activated,
you should see additional tabs such as Packs, Custom, and Data in
the workflow, which are orchestration-activated tabs. You can
simply drag and drop these on your existing workflow canvas to
extend it to a desired supported system:

New tabs post orchestration plugin activation

Let's now look at orchestration briefly, with an example. If some new members have joined your team, they need to be added to the support group that resides on the AD server. If we follow a typical IT process, then a request is logged on ServiceNow to add the new members to the group, which spawns an RITM number and then a task number. Finally, the task is assigned to a support group member, who goes to the external system to add the users to the group. This manual piece of work can be automated by a tiny Active Directory activity without manual intervention; once this activity is completed by orchestration and the new members have been added to the group, the task will be auto-closed, as follows:

Orchestration workflow

Let's understand what's happening here. The requester is adding a new member to the support group with a simple catalog item request and, post-submission, it kicks off the attached workflow. When the workflow executes the Add User to Group activity, the required action is performed on the AD system, which returns the result as a success.
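How the result is checked afterwards depends on your workflow design. As a minimal sketch, a Run Script activity placed after the Add User to Group activity could verify the membership before the task is closed; the scratchpad variable names used here are hypothetical:

// Workflow Run Script: confirm the user now appears in the group.
// sys_user_grmember is the standard group membership table.
var gm = new GlideRecord('sys_user_grmember');
gm.addQuery('user', workflow.scratchpad.user_sys_id);   // hypothetical scratchpad value
gm.addQuery('group', workflow.scratchpad.group_sys_id); // hypothetical scratchpad value
gm.query();
workflow.scratchpad.membership_confirmed = gm.hasNext();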
Introducing configuration
management tools
Although this book addresses automation by ServiceNow, there is no harm
in understanding independent technologies or terms. If you are from an
infrastructure management background, you will have heard about
management tools or worked with them. Although configuration
management tools will be explained in Chapter 2, ServiceNow
Configuration, we will look at a couple of things now. Configuration
management applications are divided into two segments; the first is an
agent-based application and the other is agentless. Let's briefly take a look.
Agent-based software
An agent is a small piece of software that is installed on a target machine by IT desktop management personnel. Technically, an agent communicates with the server when performing tasks.

In a corporate network, there may be millions of devices, but this doesn't mean that each device is important and needs to be maintained. With agent-based software, you can have greater control over specific devices without affecting the organization network's bandwidth. Before we get into more detail, let's explore one of the configuration management tools, Microsoft SCCM, which works on client-server technology. Here, you need to understand two terms: client computer and configuration manager. Typically, client software is installed on machines that need to be monitored in the corporate network of an organization, and client computers are managed by configuration manager sites regardless of VPN, dial-up, and so on. An SCCM configuration manager has a collection of discovery methods to search with, like heartbeat discovery, network discovery, and so on.

SCCM stores all CIs in the SQL Server, which can be integrated with
ServiceNow to import data through scheduled jobs. Out of the box,
ServiceNow provides SCCM plugins, Microsoft SCCM 2007, and
Microsoft SCCM 2012 v2:

1. Navigate to System Definition | Plugins | Search for *SCCM and


activate the plugin:
SCCM plugins

2. After activating the plugin, ServiceNow places the SCCM


application in the application navigation section, as follows:

ServiceNow SCCM application

3. Navigate to Integration - Microsoft SCCM 2012 v2. Click on the


Setup module to open the configuration form. For SCCM
integration, you need the help of SCCM admins, as only they
would be able to provide you with the Database Name, Database
User ID, Database User Password, and Table schema prefix;
through that access ServiceNow will pull the data from the SQL
server:
SCCM integration

4. Now, it's time to connect to our precious CI database, which will play a crucial role in automation. In order to do that, click on the Test data source connections button, which directs you to a UI page. In the background, ServiceNow opens a JDBC connection via a MID Server and runs a series of SQL queries or commands defined in the data source. If the test is successful, then ServiceNow is connected to the SCCM SQL database; otherwise, you will see an error, as follows:

SCCM testing error

5. Import Set is a very powerful inbuilt application of ServiceNow that is key for performing a data import from a file or network. By default, ServiceNow supports many types of file sources, such as Excel, XML, and CSV; furthermore, for network data retrieval, it supports HTTP, FTP, and JDBC. A JDBC driver is required to connect with the database, and out of the box, ServiceNow supports the MySQL database on port 3306, Microsoft SQL Server on port 1433, and Oracle on port 1521. SCCM data sources are available in the plugin only. Navigate to Integration - Microsoft SCCM 2012 v2 | Data Sources and you will be given the following screen, where the data source type is JDBC and the format is SQL Server:

SCCM data source

6. If you open a Data Source, then you will notice an SQL statement
that allows SQL queries to run on a target system. It is important
to note that you have the privilege to modify this query out of the
box:
Data source SQL queries

Let's understand how SCCM integration works. I would like to introduce a couple of terms, or rather pillars, for better understanding: the transform map and the import schedule. Firstly, a transform map is used to map the import set table to the target table. Whenever you want to import data from external sources, like JDBC, FTP, CSV, and so on, the data is stored in a temporary staging table, called an import set table, and from there the data is mapped into the target tables. It is also important to note that your target table may be a task table, such as incident (incident), change (change_request), problem (problem), and so on, or a data table like user (sys_user), group (sys_user_group), CMDB (cmdb_ci), and so on. Navigate to Integration - Microsoft SCCM 2012 v2 | Scheduled Import, which stores the import schedule configurations. A couple of fields are important here, like Time and Data Source. The Time field stores the import time and is a standard ServiceNow Time field that can be customized based on your needs. The Data Source field references the data source that supplies the imported data. The scheduled import runs at specific times and stores the data in the SCCM computer identity table (imp_sccm_computer_identity) in the ServiceNow instance; the data is imported from the following tables of SCCM (a minimal transform script sketch follows this list):

v_GS_COMPUTER_SYSTEM
v_GS_SYSTEM

v_GS_OPERATING_SYSTEM

v_GS_SYSTEM_ENCLOSURE

v_GS_WORKSTATION_STATUS

v_GS_PC_BIOS

v_GS_COMPUTER_SYSTEM_PRODUCT

v_GS_BASEBOARD
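To make the transform map concrete, here is a minimal onBefore transform script sketch; the source column names are hypothetical, but source and target are the standard GlideRecords that ServiceNow passes to every transform script:

// onBefore transform script: runs for each row as it moves from the
// import set table to the target table.
(function runTransformScript(source, map, log, target) {
    // Skip rows that have no serial number (hypothetical column name).
    if (source.u_serial_number.nil()) {
        ignore = true; // standard flag to skip the current row
        return;
    }
    // Copy a value across (hypothetical column name).
    target.name = source.u_name + '';
})(source, map, log, target);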

You might be interested to know that the SCCM computer identity table (imp_sccm_computer_identity) is the first table that imports the data; based on the transform logic, the data is processed and new CI items are created in the ServiceNow environment. Once the data import is completed, the data is further processed into other relevant tables. We already know that SCCM is an agent-based system that stores the devices based on client software, so let's move on a little further and look at what details are being stored in ServiceNow:

Operating system: ram, os_version, os_service_pack

Processor: cpu_core_thread, cpu_speed, os_address_width, cpu_manufacturer,


cpu_core_count, cpu_name, cpu_type, cpu_count

Disk: disk_type, short_description, manufacturer, device_id,


last_discovered, name, computer, disk_space

Network: dhcp_enabled, mac_address, name, cmdb_ci, netmask,


last_discovered, ip_address, default_gateway

Software: Software packages installed like Microsoft Office,


Adobe Photoshop, iTunes, and so on
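As a quick way to see these attributes once they have been imported, the computer class can be queried directly; this is a sketch only, using cmdb_ci_computer, the standard computer CI table:

// List a few imported hardware attributes from the computer class.
var comp = new GlideRecord('cmdb_ci_computer');
comp.setLimit(3);
comp.query();
while (comp.next()) {
    gs.info(comp.name + ': ' + comp.ram + ' MB RAM, OS version ' + comp.os_version);
}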
Let's now take a look at how an SCCM import is configured. Here, there
are a couple of things that you will need to be mindful of, such as the Run
field and Time, as these may impact the production environment's
performance:

SCCM data import schedule

From an automation point of view, a custom JDBC connection is important. Out of the box, ServiceNow provides a JDBC connection table (jdbc_connection) to store the connections. In order to create a new JDBC connection, navigate to Orchestration | Credentials & Connections | JDBC Connections and click New. This should direct you to a JDBC connection configuration page, as shown in the following screenshot:
New JDBC connection
Agentless software
In an agentless approach, you don't need to install any software on the target machine, as it works based on network protocols. SNMP and WMI are two very common protocols that are used in agentless monitoring. It is important to note that network bandwidth consumption is higher in agentless environments than in agent-based ones. You might be interested to know that the Discovery product of ServiceNow is an agentless product, with a separate subscription, that launches different types of probes to collect information from the network. Although Discovery and the MID Server will be discussed in the next chapter, let's look at the Discovery product briefly.

We know that Discovery is an agentless system, but how does Discovery interact with a network device? Through the Management, Instrumentation, and Discovery (MID) Server. This acts as a Windows service or UNIX daemon and resides in the corporate network to enable secure communication between the ServiceNow instance and the corporate network. The MID Server communicates over HTTPS and uses SOAP web services. It uses the network protocols UDP and TCP to establish communication with network devices. You might be interested to know that under UDP's umbrella, the Simple Network Management Protocol (SNMP) exists, which can establish communication with network routers, switches, printers, and so on. Under TCP's umbrella, SSH, WMI, and PowerShell are available. It is also important to remember that credentials will be required to access the network devices.

Here, I would like to introduce two major pillars of Discovery: the first one is the probe and the second is the sensor. A probe is used to collect data from network devices, and a sensor is used for processing the collected data. Out of the box, ServiceNow provides different types of probes, such as the CIM Probe, Multiprobe, SNMP Probe, and so on, and similarly sensors, such as classifier, Java, JavaScript, XML, and so on.

It is time to introduce another term: the External Communication Channel, or in short, the ECC queue (ecc_queue). Discovery's probes and sensors get their instructions from the ECC queue only. It is important to note that Discovery's probes and sensors work in synergy, and for each and every probe there must be a sensor to process the collected data. Furthermore, being an agentless system, whenever Discovery finds a device, it explores its configuration, which is then stored in the ServiceNow CMDB.
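Because every probe and sensor passes through the ECC queue, the queue itself is a handy debugging window. The following is a minimal background script sketch that lists recent outbound work; the ecc_queue fields used here (queue, topic, name, and state) are standard columns:

// List the five most recent records sent from the instance to the
// MID Server ('output' queue); probe results come back as 'input'.
var ecc = new GlideRecord('ecc_queue');
ecc.addQuery('queue', 'output');
ecc.orderByDesc('sys_created_on');
ecc.setLimit(5);
ecc.query();
while (ecc.next()) {
    gs.info(ecc.topic + ' | ' + ecc.name + ' | ' + ecc.state);
}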

Let's take a closer look at an agentless system to understand it better. Out of the box, ServiceNow Discovery works in the following four phases:

Scanning phase: Whenever Discovery is initiated, port scanning (the Shazzam probe) is launched. This is a brute-force scan and it runs on the MID Server to detect active devices.

Classification phase: Once active devices are detected, ServiceNow launches another probe, a classifier, such as Unix Classify, SNMP Classify, or Windows Classify. As mentioned earlier, a sensor should be there to process the collected information so it can determine the next probe for the device.

Identification phase: By virtue of the classified device information, the identity sensor processes the information and queries the ServiceNow CMDB for a matching CI before ServiceNow launches the exploration probe.

Exploration phase: Finally, the MID Server starts a probe to gather more information from the device and send it back to ServiceNow. Exploration sensors process the result and update the CMDB accordingly.

Let's now take a look at all four of the phases step-by-step:


Discovery with probe and sensor (high level process)
Help the Help Desk
You will have probably seen this on a ServiceNow instance; it is a limited
edition feature of Discovery and applicable to one machine only. Let's
explore the functions that Help the Help Desk can provide us:

Automatic Discovery on user login

Manually-initiated Discovery

Windows workstation

Windows server

Open the ServiceNow instance in Internet Explorer and navigate to Self Service | Help the Help Desk. Click on Start the Scan to Help the Help Desk, which installs a Discovery.hta file on the local machine. Once you open the file, it auto-populates the fields, as shown in the following screenshot:
Discovery Help the Help Desk

Information is populated by executing a simple file, but how is it done? In the background, a Windows Management Instrumentation (WMI) script runs on the machine to collect information and installed software; the script is available in a ServiceNow instance as helpthehelpdesk.js. It is important to mention that in ServiceNow there are a couple of ways of performing authentication, but in this book, we are going to look at SOAP authentication only. In order to perform SOAP authentication, you need a user account, which can be created by navigating to the User Administration application | Users | Create, filling in the form with a User ID, Name, and Password, and clicking on the Submit button. After having created the user, assign the soap_ecc role. You can also use an existing account.
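If you prefer scripts to forms, the account and role assignment can also be created from a background script. This is a sketch only; the user details are examples, and the password is best set on the user form afterwards:

// Create the SOAP account on sys_user.
var user = new GlideRecord('sys_user');
user.initialize();
user.user_name = 'soap.guest'; // example User ID
user.first_name = 'SOAP';
user.last_name = 'Guest';
var userSysId = user.insert();

// Grant the soap_ecc role via the sys_user_has_role table.
var role = new GlideRecord('sys_user_role');
if (role.get('name', 'soap_ecc')) {
    var hasRole = new GlideRecord('sys_user_has_role');
    hasRole.initialize();
    hasRole.user = userSysId;
    hasRole.role = role.getUniqueValue();
    hasRole.insert();
}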

Now, navigate to the System Definition application and click on Help the
Help Desk. Configure the form shown as follows and click on the Save
button:

Help the Help Desk

Refer to the Help the Help Desk screen, which now clearly states all of the machine's information. This information is passed to the ServiceNow instance through the MID Server. Your account (soap.guest) saves the record in the ECC queue (ecc_queue), which you can view by navigating to ECC | Queue. Filter it by Created by = soap.guest, as shown in the following screenshot:

ECC queue

Once records are saved in the ECC queue, ServiceNow then processes them and updates the CMDB with the asset number. Moreover, to view Help the Help Desk's status, you can navigate to System Definition | Help the Help Desk Status, click on it, and you should be directed to the page shown in the following screenshot:

Help the Help Desk status

A standard Discovery product provides more functionality and capabilities


for your corporate networks as compared to Help the Help Desk; they are
as follows:

Automatic Discovery by schedule

Manually-initiated Discovery

Windows workstations
Linux systems

Unix systems (Solaris, AIX, HP-UX, Mac (OS X))

Network devices (switches, routers, UPS, and so on)

Printers

Automatic Discovery of computers and devices

Automatic Discovery of relationships between processes running


on servers

We have just learnt about the capabilities of Discovery, but I believe it is not enough. Let's understand Discovery in more detail. You might be interested to know that two types of Discovery are available; they are as follows:

You will have seen monitoring centers that monitor all the services at the organization level. Here, I would like to introduce a term: vertical Discovery. Vertical Discovery serves the monitoring purpose better, as it creates relationships with other CIs; service mapping is the best example of this. Technically, service mapping uses horizontal Discovery for its first two phases (scanning and classification) and then works from the top down to build a business service map.

We have just mentioned another term: horizontal Discovery. The disadvantage of horizontal Discovery is that it doesn't pull relationships with other CIs for business service maps. From a network probe point of view, horizontal Discovery searches for devices on the network and their related attributes as well.
In the previous sections, we have seen the Discovery probes and sensors that are used to collect and process the information before storing it in the CMDB. This is why there are multiple probes and sensors in the ServiceNow platform. However, probes and sensors are replaceable with patterns; instead of the identification and exploration processes, patterns can be used. Out of the box, there are many patterns available that can be utilized, and new patterns can even be created by navigating to Pattern Designer | Discovery Pattern, as shown in the following screenshot:

Pattern designer

In an agentless system, a whole network can be discovered, even though not all the information is important to capture. Instead of discovering the whole network, you can stick to specific IPs, which will help you to balance the network traffic, CMDB design, and maintenance. Let's look at a couple of key modules in the Discovery application. Navigate to Discovery, as follows:
Discovery application

The Dashboard module of Discovery is like a standard ServiceNow dashboard, but specific to Discovery. Out of the box, ServiceNow provides pre-defined reports like Active Discovery, Discovery schedule, total discovered applications, and so on, but these can obviously be modified as per your needs. To view it, navigate to the Discovery | Dashboard module, which should direct you to the Discovery homepage.

In an agentless environment, gaining access to a target system is important, so here I would like to introduce the Credentials (discovery_credentials) module; this holds the credentials of the target machines to log in to. Navigate to Discovery | Credentials, click on it, and that should direct you to a Credentials UI page. This UI page has many options for creating credentials, such as AWS, SSH, VMware, Windows, and so on. It is important to note that, when first creating a credential, you should choose the MID Server carefully.

The second module is Discovery Schedules, which determines the runtime of Discovery, like any other scheduled job that runs at a specific time. Navigate to Discovery | Discovery Schedule, click on it, and that should direct you to the Discovery Schedule configuration page. A couple of fields are important here, like Shazzam batch size, MID server selection method, and Max run time. Let's take an example to understand the Shazzam batch size. Discovery provides this field to improve performance; it is used during port-scanning (Shazzam processing) to divide IP addresses into defined batches. It has a default value of 5000 and can be set as low as 256. For MID Server selection, ServiceNow provides four options by default: Auto select MID server, Specific MID cluster, Specific MID server, and Use Behavior. Finally, regarding Max run time, Discovery should be scheduled outside of business hours to minimize the load on the ServiceNow production instance, and if Discovery is utilizing too many hours, we can use the Max run time option. It's important to note that if a schedule is not complete in that time-frame, ServiceNow will cancel all remaining tasks:

Discovery schedule part 1


Discovery schedule part 2
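To see what the Shazzam batch size means in practice, consider a schedule that covers a /16 network, which contains 65,536 IP addresses. At the default batch size of 5,000, Shazzam port-scans the range in 14 batches (65,536 / 5,000, rounded up); at the minimum value of 256, the same range would take 256 batches.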

Quick ranges are very useful as, as discussed in previous sections, it may not be necessary to discover an entire network. Quick range supports IP address ranges, IP networks, and individual IPs as well. If you click on the Quick ranges option, it will open a pop-up where you can add your IPs; by virtue of this, the Discovery schedule will consider those IPs only. After creating the Discovery schedule and opening the Discovery record again, you should be able to see the following Related Links. A quick range can be defined by simply clicking on the Quick ranges Related Link:

Quick ranges of IP

After defining quick ranges, click on the Discover now Related Link, which kicks off the Discovery process. After its completion, a Discovery status record should be created.

Once Discovery is complete, how can you view the discovered devices? In order to view them, navigate to Discovery | Status. If you scroll down, you should be able to view a couple of related lists that store relevant information, such as the Discovery log, devices, and the ECC queue:

Discovery status

What do these related lists store? The devices related list stores devices, as the name states, but in the CMDB CI column you can also refer to classes such as switch and Windows server. The ECC queue stores all the records that were saved during the Discovery schedule, like Shazzam, WMI Runner, Multiprobe, and so on, along with the input or output parameters. The Show Discovery timeline button should direct you to a UI page that fetches data from the ECC queue to draw a timeline of the Discovery execution.

In essence, both agent-based and agentless systems have their own advantages and disadvantages, but most organizations use a balance of both to have greater control over their infrastructure.
Summary
Before the deep dive into ServiceNow automation, in this chapter, we learnt about the various challenges that are faced in IT infrastructure management. After that, we saw how you can leverage automation for a better IT rating and achieve better control over process governance and auditing. Furthermore, we learnt about agent-based and agentless systems, including Microsoft SCCM and Discovery, which populate the CMDB; the CMDB is key to automation, as without CI information, automation can't be performed. If you know the basics of the external system with which the ServiceNow workflow interacts, your automation maintenance job is made a lot easier. It is not recommended to automate all possible processes, because sometimes the required effort is larger than the output; in such circumstances, I don't recommend automation. In the next chapter, we shall learn about the various ServiceNow automation applications, or rather plugins, that are available for automation, the automation process, MID Server installation and configuration, probe technologies, and the CMDB.
ServiceNow Configuration
In this book, you are going to learn the basics of the automation capabilities of ServiceNow and the important pillars of automation. The purpose of this chapter is to give you a general idea of all the infrastructure components that are supported by the platform and what kind of basic setup is essential for performing automation tasks.

This chapter will cover the following points:

ServiceNow automation applications

Configuration management database

Configuration management tools

Understanding the MID Server and its architecture

Discovery and Probe technologies

Understanding the ServiceNow automation process


ServiceNow automation
applications
In the ServiceNow platform, automation applications come under operations management. Through the orchestration application, ServiceNow has the capability to interact with all infrastructure components, such as applications, databases, and hardware. With this, you get the privilege to extend the catalog item workflow beyond your ServiceNow environment. Orchestration helps in automating the following components of an organization's infrastructure (VMware, Amazon EC2, Windows Active Directory, Microsoft Exchange, and so on), which come under an automation service request and can be requested via a catalog item.
VMware
There may be very few IT professionals who don't know about VMware; if you are one of them, then you can simply type some keywords into Google to learn about it. Before moving on to the automation of VMware, let's quickly look at an example. Most of the time, if a new virtual machine is required by a project team or operations team, then a request is logged on the help desk tool, and then an orthodox process begins: a task is assigned to a technician after approval, and he/she goes to the ODC or the requester's place. Just imagine how good it would be if a virtual machine could be set up automatically through ServiceNow, by extending the service catalog's workflow to VMware through APIs.
Amazon EC2
Amazon Elastic Compute Cloud (EC2) is Amazon's cloud computing platform, by which individuals or organizations can obtain virtual machines on lease to run their applications. Now, with ServiceNow, the lifecycle of Amazon EC2 instances (provisioning, deletion, and so on) can be managed through ServiceNow service catalog requests without going to the Amazon EC2 interface.
Windows Active Directory
Windows Active Directory can be automated as well, through the ServiceNow service catalog orchestration workflow. With the Active Directory activities, Active Directory objects such as users, groups, and so on can be created, deleted, and so on. It is important to understand that a MID Server must be configured to use PowerShell scripts.
Microsoft Exchange
Active Directory and Exchange Server work very closely in an organization's infrastructure. All the group mailboxes, individual mailboxes, and distribution lists reside in the Exchange server, so Exchange's manual work can be automated. However, it is important to note that orchestration activities can't be performed on Microsoft Exchange Online.
Puppet
Although Puppet will be discussed in Chapter 4, Automation with Puppet, I would like to talk briefly about Puppet here. Puppet is a server management application, and ServiceNow provides a Puppet plugin to manage the Puppet server and its nodes. If we talk about Puppet's architecture, Puppet works on the client-server principle, or rather agent and master. The Puppet master is installed on the server, and the Puppet agents, or nodes, are installed on the machines that need to be managed.
Chef
Although Chef will be discussed in Chapter 5, Automation with Chef, I would like to talk briefly about it. Chef is a server management application, like Puppet, that utilizes a client-server architecture as well. Out of the box, ServiceNow provides a Chef plugin to manage the Chef application and its nodes.
Web services
Web services should not be new to you. Through web services, applications communicate with each other, and ServiceNow provides both inbound and outbound web services. Through ServiceNow orchestration, automation can be performed on third-party applications as well.
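As a taste of what outbound web services look like, here is a minimal sketch using the standard sn_ws.RESTMessageV2 API; the endpoint URL is a hypothetical placeholder for a third-party system:

// Call an external REST endpoint from a server-side script.
var rm = new sn_ws.RESTMessageV2();
rm.setHttpMethod('get');
rm.setEndpoint('https://example.com/api/status'); // hypothetical endpoint
var response = rm.execute();
gs.info('HTTP ' + response.getStatusCode() + ': ' + response.getBody());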
Configuration management
database
Configuration management databases, or CMDBs, are widely spoken
about in the IT service industry. Many organizations consider their CMDB
as an asset, but the big question is, how many organizations can take
advantage of them? The answer would probably be very few. If you are
reading this book, then you are most probably part of a ServiceNow
project and will have seen CMDB tables where various kinds of
configurations of devices are stored. As you will have heard, a CMDB is a
set of tables that store all the asset's configurations and relationships in the
environment. A CMDB can store computers, devices on the network,
software contracts and licenses, business services, and so on. A strong
CMDB will provide adequate information to track the state of assets and
to understand the relationship.

Before we get deeper into the CMDB, we must understand that, for a
greater return on investment, a strong CMDB is essential for automation.
So, if an automation project is being planned, then effective CMDB
implementation should be part of it to gain the maximum output of
automation.
A few concepts
If you have not worked closely with CMDBs before reading this book, we are going to see some major pillars of CMDBs that will help you to understand them. Explicitly, CMDB data modelling is crucial for a successful CMDB implementation; it holds the logical grouping of configuration items, or in short, CI types. Very often, CIs are grouped into hardware, software, and network, and then into different types of classes, but there may be other custom CMDB classes as well.
CMDB tables
As we know, a CMDB acts as a central repository of all the configuration items. From a technical point of view, it is important to note that ServiceNow provides a base class (cmdb_ci) that stores the basic attributes of CIs, and a CI relationship class (cmdb_rel_ci) that holds the relationships between CIs. Out of the box, ServiceNow has many tables to store such information, and you can even extend the CMDB by creating a new table and extending the base table:

CMDB data model, Source: ServiceNow
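A short sketch may help here: the two tables named above can be queried like any other ServiceNow table. The CI name used below is an example only:

// Look up a CI in the base class and walk its relationships.
var ci = new GlideRecord('cmdb_ci');
if (ci.get('name', 'my-app-server-01')) { // example CI name
    var rel = new GlideRecord('cmdb_rel_ci');
    rel.addQuery('parent', ci.getUniqueValue());
    rel.query();
    while (rel.next()) {
        gs.info(rel.type.getDisplayValue() + ' -> ' + rel.child.getDisplayValue());
    }
}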


Configuration item attributes
Although a CI is an individual item, a CI may have many attributes, listed
as follows, that can be stored in a CMDB:

Identification code

Tag name

Description

Ownership
Configuration management
tools
In this book, ServiceNow automation with Puppet and Chef will be
discussed in Chapter 4, Automation with Puppet, and Chapter 5, Automation
with Chef, respectively. There are many vendors in the industry that
provide automation tools, some of which are listed as follows, for
reference only:

IBM Rational ClearCase

CFEngine

SaltStack Enterprise DevOps

Chef

Puppet

Vagrant

ServiceNow

BMC Atrium CMDB


Populating the ServiceNow
CMDB
We just spoke about configuration management software, but how will the organization's CIs be populated in the ServiceNow CMDB, and how will the automation process work? The answer is that, to bring data into the CMDB tables, ServiceNow Discovery can be used, which is available as a separate subscription. Alternatively, you can use an external CMDB to populate the CIs in the CMDB, or you can even import the information using other data sources.

Before going deeper into populating a CMDB, we must understand that


ServiceNow is capable of auto-populating CIs in the CMDB tables, but to
align with a CMDB data model you should be more careful while creating
a technical design.
Populating the data with
discovery
ServiceNow discovery is capable of automatically populating a CMDB.
There are three major pillars of this process, which are MID Server,
probes, and sensors. The MID Server resides in an organization's network
to collect the configuration information using probes and sensors, as
shown in the following diagram:

Populate ServiceNow CMDB, Source: ServiceNow


Populating a CMDB by
importing information from
another source
If Discovery is not being used in the organization, then, as an alternative, you can choose the manual import option, which provides the same functionality but requires more effort. If you have any prior experience with ServiceNow, then you will be able to correlate this with the Load Data option (which supports XML, Excel, and CSV formats), where you can import the data into a ServiceNow import set table and transform it into the target CMDB table:

Manual import
Populating CMDB by existing
external CMDB
At the beginning of this chapter, we saw that many CMDB products are available on the market, so if an organization prefers to stay on an external CMDB, then you have the option to integrate with that external CMDB. Microsoft SCCM is one of the popular CMDBs. Out of the box, ServiceNow provides an SCCM plugin, with which you can simply integrate with an SCCM database:

External CMDB integration, Source: ServiceNow


Understanding the MID Server
If you have any previous experience with the ServiceNow platform, then you have probably heard of the MID Server before, but might not have configured it. If a VPN connection is not being used, then, as an alternative, a MID Server is the best option for establishing communication and moving data securely between the customer's network and the ServiceNow instance. The Management, Instrumentation, and Discovery (MID) Server is a Java-based application that resides in the customer's network as a Windows service or a Unix daemon to facilitate communication.
MID Server minimum
requirements
Every server installation has some basic requirements. From an
automation point of view, the requirements for installing a MID Server are
as follows:

8 GB of available RAM per MID Server

2 GHz multicore CPU

4 GB of disk space per MID Server

1,024 MB of memory
Configuring the MID Server
To configure the MID Server, you first need an admin role; then, you need a host machine on which the MID Server will run as a Windows service or Linux daemon. The major pillars for configuring a MID Server are as follows:

Download the MID Server from the ServiceNow instance

Configure the MID Server on the ServiceNow instance

Configure the MID Server Config file on the host machine

Configure the wrapper-override file on the host machine

Start the MID Server Windows service from the host machine

Validation

You can download the MID Server files from the MID Server application
by simply logging into the ServiceNow instance. Let's see the process in
more detail:

1. Navigate to the MID Server application. Now, select the


Downloads module, as shown in the following screenshot:
MID Server download

2. Now, you will be directed to the MID Server download page, from
where you can download the MID Server files based on the
configuration of the host machine. By default, ServiceNow
provides the Windows and Linux operating systems to host the
MID Server:

MID Server download option

3. I have taken the Windows operating system for demonstration purposes. Once the MID Server files are downloaded on the host machine, create a folder on the drive where the operating system is hosted. For explaining the installation, I have named it MID Server, as shown in the following screenshot:
Create MID Server folder in C drive

4. Once the MID Server files are downloaded on the host machine, copy the server files into the MID Server folder and give them proper names based on your organization's standard. In the following screenshot, you can see that I have placed the MID Server files (MIDServer_dev15570) in the MID Server folder, as the host machine is my laptop only:

Rename MID Server original folder

5. Now, some technical work starts. At first glance, the MIDServer_dev15570 folder structure may look very complicated, as no executable (.exe) file is available. But take a deep breath, because you don't need to worry about all the files and other folders. Focus on the config file, which is in XML format. You can open the config file in any text editor, such as Notepad or Notepad++, but I have opted for Notepad++:


MID Server config file

6. If you don't know about XML, then don't worry, as it hardly matters during MID Server configuration. So, what do we need to do with this config file? We need to configure this file with the ServiceNow instance address, username, and password, as follows. Oops, we don't know the username and password, so what do we do now?

Config file

7. As we don't know the username and password, we must create a user account, but this must not be an AD account, to avoid Active Directory-related issues. This account is going to be a manual account, so let's create one from the ServiceNow User Administration application, as shown in the following screenshot:

MID Server account

8. Hooray! An account has been created. But wait a second, are we missing anything here? Probably, yes. A user account without any role is an end user account, and end users can't perform any administrative activities, so such an account does not make any sense for the MID Server. So, out of the box, a mid_server role is available in the role table (sys_user_role), which our newly created account must have. Now, let's quickly add the role to the MID Server account. It is important to note that the moment the mid_server role is granted to a user account, 11 additional roles are also assigned to the account, such as soap, soap_query, soap_update, soap_delete, and so on, so this account can be used for other purposes as well:
Assign MID Server role

9. Great news: our config file of the MID Server is ready to configure, as we now have the instance address, username, and password. Let's quickly configure it. Replace https://YOUR_INSTANCE.service-now.com with your instance address; here, I am using my personal instance address of https://dev15570.service-now.com. Replace YOUR_INSTANCE_USER_NAME_HERE with your newly created user account's user ID, which is MID_Server_Connect, and replace YOUR_INSTANCE_PASSWORD_HERE with mid@connect, as shown in the following screenshot:

Config file configuration

10. You may have noticed that, in the preceding screenshot, just after the encrypt label, true is written, which means your password will be encrypted and can't be decrypted again. ServiceNow recommends keeping it this way. You have the option to change it to false, but this adds a disadvantage. Remember, the host machine doesn't belong to you, so if any host machine administrator opens the config file, the password should not be visible to them.
11. The MID Server account is configured in the config file, but the
configuration is not completed yet. The config file has a
YOUR_MIDSERVER_NAME_GOES_HERE parameter, so it is time to create a MID

Server on the instance:

MID Server name

12. Remember, so far we haven't done anything on the ServiceNow


instance; so, let's create a MID Server on the ServiceNow instance
by clicking on Servers under the MID Server application, as
shown in the following screenshot:

MID Servers module


13. The Servers module opens a MID Server configuration form. But wait a second: I can see that many fields are read-only, so how can the configuration be completed? Have I done something wrong? Well, time to take a deep breath again, because nothing went wrong; this is what a MID Server configuration form looks like. You need to enter a name, which is MID Server in my case, but you can choose your own name as well. Now, simply click Submit to create this record:

MID Server configuration on ServiceNow instance

14. Now, it's time to complete the config file configuration. Simply
replace YOUR_MIDSERVER_NAME_GOES_HERE with MID Server, as it is our MID
Server name, as shown in the following screenshot, and save it:

Add MID Server in config file
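Putting steps 9 to 14 together, the relevant parameters of the config file now look something like the following sketch; verify the exact parameter names against the config.xml you downloaded, as they can vary between MID Server versions:

<!-- config.xml excerpt (sketch) with the values used in this walk-through -->
<parameter name="url" value="https://dev15570.service-now.com"/>
<parameter name="mid.instance.username" value="MID_Server_Connect"/>
<parameter name="mid.instance.password" value="mid@connect" encrypt="true"/>
<parameter name="name" value="MID Server"/>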

15. Hooray! The config file configuration has been completed. However, the MID Server configuration is not complete yet. Now, I would like to introduce another file, the wrapper-override configuration file, in which you need to configure a few lines only, as shown in the following screenshot:

Wrapper file configuration

16. You need to replace wrapper.name and wrapper.displayname with your MID Server name, which is MID Server in our case, as shown in the following screenshot; after replacing the text, simply save the file:

Configured wrapper file

17. We are done with the MID Server file configuration, and it's time to start the MID Server service on our host machine. To do this, you need a little help from the Command Prompt. Open the Command Prompt as administrator, as follows:
Open command prompt

18. Wait a second, where do we go on this black screen? You have to go to the MID Server folder that you created during the MID Server installation, and then into our dear MID Server folder. If you are not sure how to do it, simply use the cd command to change the directory. Once you are in MIDServer_dev15570 (remember, this folder was renamed), type dir to view the files under this folder, as shown in the following screenshot:
MID Server structure on CMD

19. At first glance, this might look like a busy screen, but you don't need to care about all the files; just focus on the start.bat file, which will start your MID Server service on the host machine. So, simply type start.bat in the Command Prompt, as shown in the following screenshot, and after a couple of seconds, you will see a message saying MID Server started. Hooray, the MID Server has been started:

Start MID Server

20. To validate the MID Server service on the host machine, type services.msc in the Run window (Windows + R), as follows:

Run window

21. Furthermore, this command will open all the services of the host machine, as shown in the following screenshot. Carefully search for the MID Server and check that its status is Running:

Host machine service

22. As you can see in the preceding screenshot, the MID Server is in the Running status on the host machine, so let's view the MID Server in the ServiceNow instance. You will notice that the read-only fields are now populated on the MID Server form, but the MID Server is not validated yet, as shown in the following screenshot:

MID Server without validation

23. To validate the MID Server, you need to click on the Validate
button that is available under Related Links on the MID Server
form, as shown in the following screenshot:

Related Links - Validate

24. After having clicked on the Validate button, wait for some time for the Validated field to turn green. The end status of the MID Server configuration will be as follows. You probably know that ServiceNow releases new versions on a regular basis, so a question may be asked here: do we need to update the MID Server on a regular basis along with new version upgrades? The answer to this will be covered in later sections:

MID Server up and validated

25. If you open the MID Server record and scroll down, you will be able to see the related lists of the MID Server, such as MID Server Issues, Configuration Parameters, Supported Applications, IP Ranges, Capabilities, and so on. We are going to explore some of the important related lists (IP ranges, properties, and so on):

MID Server related list

26. An Internet Protocol (IP) address is assigned to each device for communication purposes within the network. From an automation point of view, if a MID Server does not have an IP range, then by default it will be ALL (all IP ranges), as shown in the following screenshot:

IP range
27. Although installing multiple MID Servers will be discussed in the following topic, from a load balancing point of view, multiple MID Servers are recommended for automation activities, and out of the box, ServiceNow provides auto-assignment. You need to navigate to the MID Server range auto-assignment IP ranges. This will direct you to a configuration page where a slush bucket is available; by simply moving the available MID Servers into the selected box, you can complete this task. As an output of this, subnet discovery is executed and the MID Server is assigned automatically:

Auto-assignment

28. The behavior of the MID Server is controlled by the MID Server properties. An important point to note here is that MID Server properties can override MID Server parameters as well. MID Server properties are used to control the behavior of the probes and the payload of the MID Server. You can even create a new MID Server property by navigating to MID Server | Properties | New. But please note, for JDBC connection properties, you should have knowledge of PL/SQL:

MID Server properties

29. The configuration parameters control the behavior of a MID Server. It is important to note that parameters have less weighting than properties. Out of the box, ServiceNow provides many parameters, such as the MID Server CIM parameter, MID Server connection parameter, MID Server credentials parameter, MID Server debug parameter, MID Server DNB parameter, MID Server FTP connection parameter, SSH discovery parameter, and so on. Any one of these can be used to control the behavior of the MID Server:
MID Server configuration parameter
Multiple MID Servers
installation criteria
After having installed the MID Server, you might wonder why we need multiple MID Servers. You may also be asking, if we need them, how do we determine how many MID Servers will be enough for our corporate network? The answer to this is performance. As we know, the MID Server facilitates communication between the ServiceNow instance and external applications, data sources, and services, but what about network performance, traffic, and so on? That's where the multiple MID Server installation concept comes into the picture. So, let's see a couple of scenarios where multiple servers will be required in a corporate network from an automation point of view:

1. Orchestration: By and large, we understand that automation can be performed with orchestration activities. But, there are a couple of elements that can force us to install multiple MID Servers:

1. ServiceNow launches the probes to search for devices in the network, but a single probe type is not enough to search every kind of device, such as Windows OS, Linux OS, routers, switches, and so on. So, there are different types of probes to search for devices, such as WMI for Windows or SSH for Linux. In such cases, you may require additional MID Servers for dedicated probes.

2. The demilitarized zone, or in short, DMZ, is solely for adding a security layer in corporate networks; it may be a physical or logical subnetwork that is exposed to external services, so you may need to install MID Servers in the demilitarized zone as well.

3. In a wide area network, or WAN, bandwidth plays a critical role in determining the number of MID Servers. Let's take a simple case; if a WAN is slow, then it is not a good idea to launch the probes over it. It is advisable to install MID Servers on each LAN to discover local devices.

2. Capacity: The MID Server and discovery work in synergy. Millions of devices should not be surprising in an enterprise-scale network, but gathering information on such a large number of CIs, or rather devices, for the ServiceNow CMDB may be challenging due to capacity. So, to overcome the capacity issue, multiple MID Servers will be required.
3. Others: There may be other elements that can force you to install multiple MID Servers, such as load balancing, security, integrations, and so on. It is worth noting that performance testing is a crucial part of MID Server deployment.
MID Server architecture
By now, we understand that the MID Server behaves like a middle layer that facilitates communication. I would like to introduce some terms here—monitor, worker, ECC, and PPS—that will help you understand the MID Server architecture:

Monitor: This is a time-bounded process that runs periodically to execute tasks. After the execution, the result of the monitor is sent to the ECC queue.

External Communication Channel (ECC): The ECC is just a task placeholder for the MID Server. The ECC saves all the tasks that need to be performed by the MID Server in the ecc_queue queue. Here, you may ask a question about how the MID Server determines whether a task is pending for execution. Before getting deeper into this, I would like to introduce one term, the asynchronous message bus (AMB), which updates the MID Server about pending tasks based on a sequence number. The MID Server checks for the messages published by the AMB client, which informs the MID Server of pending tasks. Before getting deeper into the AMB, let's conclude what we have learned so far. If we have to conclude with one statement, then we can say that the AMB holds messages for the MID Server.

Now, let's see how it is done. For a better understanding, let's introduce some key terms here, which are the sysId of the MID Server and the polling time. As we know, the MID Server resides in the customer's network and runs as a Windows service/Unix daemon, so the Windows service (MID Server) checks the ECC queue every 40 seconds by default and connects to the instance through the AMB; when a message or record is inserted into the ECC queue, an AMB message is sent to the MID Server:

ECC queue polling process, Source: ServiceNow
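
To see this polling mechanism from the instance side, you can inspect the ECC queue yourself. The following is a minimal sketch, assuming a MID Server named MID Server; it lists output records that are still waiting to be picked up (the agent field stores the name prefixed with mid.server.):

var ecc = new GlideRecord('ecc_queue');
ecc.addQuery('agent', 'mid.server.MID Server'); // agent holds 'mid.server.<name>'
ecc.addQuery('queue', 'output'); // output = instance to MID Server direction
ecc.addQuery('state', 'ready');  // ready = not yet picked up by the MID Server
ecc.query();
while (ecc.next()) {
    gs.info(ecc.getValue('topic') + ' -> ' + ecc.getValue('name'));
}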

Worker: We have seen that the monitor pushes results in the form of tasks into the ECC queue, but a process has to take ownership of the ECC's tasks as well; the worker thread is invoked to process a task after it has been read from the queue.
Installing multiple MID Servers
After having decided that multiple MID Servers are required for the corporate network, you need to decide where you want to install the MID Servers. Out of the box, ServiceNow supports multiple MID Servers on one host and multiple MID Servers on separate hosts as well, regardless of whether it is a physical or virtual server. It is important to note that the process of multiple MID Server installations is the same as the single MID Server installation process, and you should run each Windows service or Unix daemon separately. As it has been advised to use multiple MID Servers to balance the load, what if no MID Server is set for orchestration? That's where setting up a default MID Server comes into the picture. Setting up a default MID Server is explained later in this chapter.
Upgrading the MID Server
Upgrading is an essential part of ServiceNow maintenance in order to include new functionalities and applications in the ServiceNow instance. So, how will a ServiceNow MID Server be upgraded? Do we need to upgrade a MID Server manually? The answer is no, you don't need to upgrade the MID Server, as the MID Server is taken care of automatically during the version upgrade process, although that may take some time to reflect. After that, you will see the new version/patch, for example, Jakarta or Kingston. You don't need to put in any extra effort to upgrade the MID Server, but it is worth noting that after a ServiceNow version upgrade you might face issues with the MID Server and discovery.
Discovery and probe
technologies
Let's begin with some simple questions. Would you be able to log a request without a CI item? Would you be able to get information about your CMDB CIs without data population? Last but not least, how would a third-party automation tool find out which CI (server, application, and so on) should be automated? The answer to all of these questions is probably no.

If you have just begun with ServiceNow automation, then you may want to ask how ServiceNow gets to know the configuration of devices in the enterprise network. For the answer, I would like to reintroduce a couple of terms that we saw in the last chapter, but now we'll take a deeper look into them; the terms are probe, sensor, ECC (External Communication Channel), and PPS (post-processing script).
Probe
The word is self-explanatory; a probe is a method of searching for devices in the corporate network. However, in a corporate network, numerous types of devices may be available, so how do we search for devices in such a huge network? Many protocols are available to collect information on the network, but in this book we are going to discuss a few important ones:

SSH is a network protocol that communicates on port 22 and is used for the Linux operating system. So, if you are wondering how a ServiceNow catalog item workflow interacts with the Linux operating system, the answer is the SSH protocol. ServiceNow orchestration or discovery uses it to access the target machine.

WMI is used by discovery for Windows devices. WMI access may be an issue sometimes, so it is advisable to use a domain/administrator account to run queries. It is important to note that whenever discovery detects activity on port 135, it launches WMI.

SNMP is a protocol that is used for searching for network devices: routers, switches, printers, and so on.

Web-Based Enterprise Management (WBEM) may be very new to you, but from an automation point of view, you need to know that the Shazzam probe launches WBEM port scanning, which detects activity on ports SLP 427, CIM 5989, and 5988:
Orchestration capabilities, Source: ServiceNow
Sensor
Data is collected by the probes that are triggered by the MID Server, but sensors are responsible for processing the collected data. It is important to note that, for each probe, there must be a sensor, so that collected data can be processed. If we move a little further on to the types of sensors, then mainly two types of sensors are used to process collected data: one is JavaScript and the other is XML. Please note that XML data can trigger additional probes to collect information for processing.
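
To make the sensor idea concrete, here is a minimal, hypothetical sensor-style snippet. It assumes the probe's XML result is available in a variable called output (an assumption for illustration) and pulls one value out of it with the platform's XMLDocument2 API:

var xmlDoc = new XMLDocument2();
xmlDoc.parseXML(output); // 'output' is assumed to hold the probe's XML payload
var node = xmlDoc.getFirstNode('//result/hostname'); // hypothetical XPath for this sketch
if (node) {
    gs.info('Discovered hostname: ' + node.getTextContent());
}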
PPS
The script accepts probe results as an input and outputs a JSON string that
is sent back to the ServiceNow instance for a sensor to use as input.
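
As a minimal sketch of this idea, a post-processing script might reduce a bulky probe result to a small JSON string like the following; the probeResult object and its fields are purely illustrative:

var probeResult = { hostname: 'lab-host', os: 'Linux' }; // illustrative values only
// The JSON string below is what would travel back to the instance as the sensor's input
var payload = JSON.stringify(probeResult);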
Understanding the ServiceNow
automation process
Before getting deeper into the automation process, we should make sure that we know whether the orchestration plugins are active or not. If orchestration is not active, then it can be activated via the Hi portal of ServiceNow (https://hi.service-now.com). After activating it, the following workflow activity packs should be activated. It is important to note that each plugin will have some dependencies, and without completing the dependencies, automation may not be feasible:

Orchestration: Active Directory

Orchestration: Asset lease management

Orchestration: Azure Active Directory

Orchestration: Client software distribution

Orchestration: Exchange

Orchestration: F5 network management

Orchestration: Infoblox DDI activity pack

Orchestration: PowerShell

Orchestration: ROI

Orchestration: ROI premium

Orchestration: Runtime

Orchestration: SFTP
Orchestration: SSH

Orchestration: System center configuration management

Orchestration: Workday

Now, let's explore the ServiceNow orchestration application that must be available, alongside all the other applications, after activation of the plugin:

The Workflow Editor must be familiar to you and you have probably used it many times, but an orchestration workflow is a little bit different from a standard workflow. So, how is it different? The answer is that an orchestration workflow looks similar to any standard workflow, but it comes with additional packs that hold orchestration activities.

The Workflow Schedules might be new to you. If you have worked on scheduled jobs, then you will probably be able to correlate this functionality with orchestration. The same functionality can be achieved with it as well, excluding the task and catalog task functionality.

The Activity Definition (Deprecated) holds the definitions of all the activities of the workflow; if you are not sure about any activity, then you can always refer to this repository:
ServiceNow orchestration application

MID Server Configuration becomes crucial when multiple MID Servers are deployed. We know from the Installing multiple MID Servers section that a default MID Server is critical from an automation point of view. So, let's explore some options here. Navigate to Orchestration | MID Server Configuration | MID Server Properties, enter the name of the configured MID Server (you must remember that our configured MID Server's name is MID Server), and click on the Save button:
Setting up default MID Server
ServiceNow automation
workflow
If you have any prior experience with ServiceNow, then you have probably heard of or worked on the workflow graphical canvas for standard service catalog item development. Similarly, if you are creating any virtual machine setup catalog item, then you can embed orchestration activities in the workflow to automate a technician's manual work. I have activated the orchestration plugin on my personal instance, and similarly, you can also activate it on your personal instance. After activation of the orchestration plugin, you will be able to view the orchestration activities in the workflow packs, as shown in the following screenshot, which you can drag and drop into your workflow like any other core activity, such as approval, run script, and so on:

Orchestration activity
ServiceNow automation
process
Just imagine you are trying to set up a virtual machine via a simple request; you might be interested in knowing how that workflow will be executed. Let's explore the hidden process behind the workflow. In the previous topics, we learned about the major pillars of automation: the probe, the MID Server, and the ECC queue. Let's take the example of a virtual machine set up by a catalog item. When any end user logs a virtual machine setup request, a REQ sc_request number, a RITM sc_req_item number, and a task number are generated. Afterwards, the workflow/orchestration activity starts and triggers a probe that writes a record in the ECC queue. From the MID Server architecture, we know that the MID Server subscribes to messages from the AMB, which updates the MID Server about the pending task. Now, the MID Server executes the probe on the target machine to perform the task and sends the result back to the ECC queue as the output of the probe. Finally, the ECC queue passes the result to the workflow for further processing:

Automation process
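
If you want to trace such a request on your own instance, one hedged way is to look up the workflow context that drives the RITM; the sys_id placeholder below is something you would supply yourself:

var ctx = new GlideRecord('wf_context'); // running/finished workflow contexts
ctx.addQuery('id', '<ritm_sys_id>'); // 'id' holds the sys_id of the record the workflow runs on
ctx.query();
while (ctx.next()) {
    gs.info(ctx.getDisplayValue('workflow_version') + ' is ' + ctx.getValue('state'));
}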
Summary
In this chapter, we have learned about the basic setup of automation and how it works both within and outside of the ServiceNow boundaries. We have seen the MID Server architecture, MID Server installation, how to start the MID Server, why the MID Server is important, the significance of multiple MID Servers, and so on. Furthermore, we have also seen the importance of discovery in the automation space and important pillars such as probes, sensors, and orchestration packs (workflow activities). In addition, we also saw how ServiceNow interacts with other systems such as Windows, Linux, routers, and so on. Finally, we learned how the ServiceNow automation process works and its various stages of execution. In the next chapter, we will cover topics such as an introduction to virtual machines, virtual machine configuration, and catalog items for automation with a change request use case.
Manage Virtual Machine with
ServiceNow
Within the IT space, virtual machines are very common and are used or recommended on many occasions. I have assumed that you know that a virtual machine is software that is installed on top of the operating system layer and imitates dedicated hardware. In this chapter, we will discuss virtual machines; virtualization should not be a new term for you. With regard to virtualization, you should understand that it is an abstraction layer that segregates the physical hardware from the operating system and supports more than one type of operating system, such as Windows or Linux. Particularly in the cloud space, ServiceNow provides dedicated plugins for managing the end-to-end lifecycle of cloud machines. You may be aware that in the vendor space there are many giants, such as VMware, Amazon, and Microsoft.

This chapter will cover the following topics:

Introduction to virtual machines

Virtual machine plugins and applications

Basic terms and concepts

Catalog items for automation

The ServiceNow VMware application

A use case with a change request


Introduction to virtual
machines
In the previous section, we saw that virtual machines are very common in the IT space; however, we should understand that when the count of virtual machines increases, their management becomes a challenging task. To overcome such challenges, there are many tools available in the market. However, as this book is focused on ServiceNow, let's look at how you can leverage the ServiceNow platform to manage virtual resources.
The VMware architecture
Before getting deeper into virtual machines, let's look over the important components of the VMware architecture to help you understand VMware. It is important to note that VMware resources are managed by the ServiceNow VMware Cloud application.

VMware ESX Server: This is a virtualization layer running on the physical layer, or rather the hardware, and abstracts the processor, memory, and so on for virtual machines.

Virtual machine filesystem: This is a high-performance cluster filesystem for virtual machines.

VMware Virtual Symmetric Multi-Processing: This enables a virtual machine to use more than one physical processor at the same time.

VirtualCenter Management Server: From the ServiceNow point of view, this is important, as this is the single place where ServiceNow will interact for configuring, provisioning, and managing a virtual machine.

VMware VMotion: This is used for migration purposes only and enables live migration of a running virtual machine without any downtime:
VMware architecture, Source: VMware
The Amazon Web Service
cloud and ServiceNow
You can manage Amazon Web Service (AWS) with the ServiceNow
Amazon AWS cloud, which allows you to manage the following
applications:

The Amazon virtual machine

The Amazon Elastic Block Store (EBS)

The Amazon Virtual Private Cloud (VPC)

The Amazon CloudFormation

Amazon billing

ServiceNow and Amazon Web Services, Source: https://docs.servicenow.com/


Microsoft Azure and
ServiceNow
By integrating the ServiceNow Microsoft cloud application with Microsoft
Azure, you can manage the following services:

The life cycle management for an individual virtual machine

The life cycle management for a resource group

The cost and usage report of the Microsoft Azure data

Microsoft Azure and ServiceNow, Source: https://docs.servicenow.com/


ServiceNow and virtual
machines
We have taken an overview of VMware, AWS, and Microsoft Azure, but this is not the end of the virtual machine world. There are many other vendors in the market as well, and ServiceNow supports them through APIs. Before moving on, let's understand the ServiceNow offering for managing virtual machines. At the application level, ServiceNow provides a cloud management application that enables users to manage the life cycle of virtual machines, and you may be interested to know that, from the end user's point of view, most things can be done via the self-service portal:

Cloud Management, Source: ServiceNow

Furthermore, the cloud management application provides the following features:

Abstraction of virtualization systems: Just imagine you are working in a project development environment and you need a virtual machine to test a process or an entity. How good would it be if you could request a virtual machine through a service catalog interface without knowing the detailed specifications, as all you want is a virtual machine? Out of the box, ServiceNow provides cloud resources under the self-service application to request new virtual machines.

Reuse of virtual machine configuration: ServiceNow utilizes external vendors', or rather third-party, templates to create reusable service catalogs. I would like to introduce some terms: the VMware template and the Amazon image. In the VMware space, the pre-configured value set used to set up a virtual machine is known as a template, and for Amazon EC2, the pre-configured value set used to set up a virtual machine is known as an image.

Service catalogs: You should be very familiar with service catalogs, as they are the most frequent method for logging a request within the ServiceNow environment, and the same applies to virtual machines as well. A catalog item can be created by simply clicking on the template's related links (this will be discussed in the later sections).

Role-based access: Access, or rather roles, are a part of the core ServiceNow system design, ensuring the correct access for users and groups. The cloud management application comes with three roles: cloud administrator, cloud operator, and cloud user. On the group side, the cloud management group comes with virtual machine provisioning groups (this will be discussed in the later sections).

Service portals: In the Jakarta release, the cloud portal became more useful, and the virtual machine can be monitored in the cloud portal by navigating to Cloud Management | Cloud User Portal, which should direct you to the following page. The ServiceNow portal displays the overall status of virtual machines, covering items such as provisioning tasks, SLAs, and the virtual machines themselves:

Cloud User Portal

Lease duration: By default, ServiceNow applies a default end date to all virtual machines.

Discover a virtual machine: In previous chapters, you learned about discovery, which is used to discover CIs within the organization's network and also builds relationships, such as those between VMware components and vCenter.

Modify a virtual machine: Once a virtual machine is provisioned, you can manage it using the Cloud User Portal. Actions such as pause, stop, and terminate are performed with change management.

Others: There are other features as well, such as automatic cost adjustment. They are embedded in the cloud management application to manage the life cycle of a virtual machine within the ServiceNow environment, which delivers excellent product value.
Virtual machine plugins and
applications
In previous sections, we have seen the basics of the VMware and Amazon architectures and the ServiceNow offering for cloud application management. Now, let's move into the ServiceNow space, or rather the plugins, to understand what is required to enable these functionalities within the ServiceNow instance.

Cloud Management: The cloud management application is responsible for managing the end-to-end lifecycle of cloud resources, or rather virtual machines, from the organization's point of view. If you use a personal development instance, then you can activate it by navigating to https://developers.servicenow.com | Manage | Instance | Action | Cloud management. If you want to activate the plugin in your organization's ServiceNow instance, then you can go to https://hi.service-now.com | Activate Plugin | type the name of the plugin, Cloud management. It is important to note that the cloud management application is available as a separate subscription and you will be charged for organizational use.

Orchestration: If you think that cloud management plugin activation is enough for managing cloud resources, then you are wrong, as you need the orchestration plugin along with the cloud management application. It is also available as a separate subscription.

VMware Cloud: In the VMware section, you learned that vCenter is the central point for managing a virtual machine. VMware Cloud is a ServiceNow application for managing a vCenter server. VMware Cloud integrates with vCenter, which allows ServiceNow users to request a virtual machine through the ServiceNow interface. You may be interested to know that VMware Cloud comes as a feature of orchestration and is available as a separate plugin.

Microsoft Azure Cloud: Out of the box, ServiceNow provides the Microsoft Azure Cloud application, which can be integrated with Microsoft Azure to allow management of the Azure cloud.

Amazon Web Service Cloud: Out of the box, ServiceNow provides the AWS Cloud application to manage Amazon cloud resources.
Basic terms and concepts
In previous sections, you learned about the ServiceNow offering, which includes role-based access; after this, we explored the required plugins. Now, let's look at some basics from the cloud management operational point of view:

Cloud management application roles: In the role-based access section of ServiceNow and virtual machines, you learned that there are three roles: cloud administrator, cloud operator, and cloud user. So, let's look over these to understand their purpose:

1. Cloud administrator: In an organizational environment, there may be a designated cloud administrator within the ServiceNow environment. So what do they do? The answer is that they are primarily responsible for two activities—the configuration of cloud management services, and monitoring and managing the cloud services.

2. Cloud operator: Users who are assigned the cloud operator role are enabled to perform operator actions on virtual machines.

3. Cloud user: Users who are assigned the cloud user role are virtual machine requesters. Just imagine a project environment where a virtual machine is needed; a project member must have the cloud user role to request a virtual machine and even to manage it. Furthermore, let's look at the graphical representation of the previously mentioned roles in the operational environment:

Cloud management role, Source: ServiceNow

Moving further, in the previous diagram, we have seen that a couple of terms have been mentioned, such as cloud user, cloud operator, and cloud administrator. These are ServiceNow groups and can be seen by navigating to User Administration | Groups | name contains cloud (filter), and you will be directed to the following screen:

1. Virtual provisioning of a cloud administrator: A cloud administrator is responsible for the end-to-end management of cloud resources and for performing many actions, such as modify VM, update lease end, pause VM, stop VM, start VM, cancel VM, terminate, take snapshot, restore from snapshot, delete snapshot, define vCenter, define catalog offering, set prices, define provisioning rules, define change control parameters, approve change requests, set properties, set up networking information, and monitor requests.

2. Virtual provisioning of a cloud approver: A cloud approver role holder can approve or reject a cloud resource request.

3. Virtual provisioning of a cloud operator: A cloud operator role holder can work on provisioning tasks that are queued in the cloud operations portal and can act on tasks such as modify VM, update lease end, pause VM, stop VM, start VM, cancel, terminate, take snapshot, restore from snapshot, and delete snapshot.

4. Virtual provisioning of a cloud user: A cloud user role holder can request a virtual machine from self-service and can use the My Virtual Assets portal to manage a virtual machine. Furthermore, they can act on tasks such as modify VM (Azure/VMware), update lease end, pause VM, start VM, cancel, terminate, take a snapshot, restore from snapshot, and delete snapshot:
Virtual provisioning of groups
Catalog items for automation
In the previous section, we have seen that the fulfillment of requests is driven by a standard service catalog that is available for end users to request any item within the organization. However, in the virtual machine space, requests are driven by the Cloud Resources catalog in the self-service application, and once a request is submitted, it follows a particular life cycle. In this life cycle, there may be many processes, or rather steps, such as the line manager's approval, group approvals, and specific conditions, but all these activities are executed within the ServiceNow platform boundaries only. However, when you want to automate any manual task, or rather a process, outside the ServiceNow platform boundaries, ServiceNow utilizes web services to communicate with the target machines through the MID Server, as shown in the following diagram:

Communication Process
Creating a catalog item
In previous sections, we saw how a cloud resource catalog item works within a ServiceNow environment and outside of the environment as well. Initially, we discussed the cloud resources catalog item, so a couple of questions may be asked here: how do we create a cloud resource catalog item, and are there any specific guidelines, or rather a process, to create it? The answer is a big no; you don't need to work hard at creating a cloud resource item, as it can be created by simply clicking on the Create Catalog Item button that is available under the related links of the template (Configuration | VMware | Virtual Machine Template | scroll down to view related links), as follows. It is important to note that these templates must be auto-populated by VMware discovery. Furthermore, the detailed steps are stated later in this chapter:

Creating a cloud resource catalog item


The ServiceNow VMware
application
The ServiceNow platform provides an application, or rather a plugin, for managing VMware in the ServiceNow environment. The plugin is called VMware Cloud, as shown in the following screenshot, and as usual, there are many modules available as a part of the VMware Cloud plugin. As we have seen in the previous sections, the sole purpose of ServiceNow cloud management is to manage the organization's virtual machines in a single place. We will explore VMware Cloud in the later sections, but for now, we should understand how VMware CIs are populated in the CMDB, or rather the ServiceNow CMDB, and how new VMs are provisioned:

The ServiceNow VMware application


VMware and ServiceNow
CMDB
As Discovery is a ServiceNow product, it works great with the rest of the platform. In the previous chapters, you learned about Discovery and the MID Server, so we must apply the same knowledge again to understand this. It is important to note that when the VMware plugin is activated, a new VMware section should be added to the Configuration application (in the application navigation pane) to store VMware-related CIs. You can view the related CIs by navigating to Configuration application | VMware, as shown in the following screenshot:

VMware-related CIs

As you might know, the base table of the CMDB is cmdb_ci, and it can be further extended to store new classes of devices, so virtual machine CIs are stored in the same way as well. In Chapter 1, The IT Challenges and Power of Automation, you learned about Discovery, which is used for populating the ServiceNow CMDB through data collection and processing, or rather probes and sensors. Furthermore, you should understand that certain information is needed to populate the VMware-related CIs. Although some additional terms will be discussed in the following sections, for now, I would like to introduce some terms here that may be familiar to you, such as the discovery schedule, the target machine, and credentials. You may be interested to know that once the plugin is activated, no data should be available in the tables, as the plugin is not configured yet. Furthermore, the VMware modules of the Configuration application, such as vCenter cmdb_ci_vcenter, Datacenters cmdb_ci_vcenter_datacenter, Clusters cmdb_ci_vcenter_cluster, ESX Servers cmdb_ci_esx_server, Templates cmdb_ci_vmware_template, and Instances cmdb_ci_vmware_instance, should be auto-populated based on a successful Discovery run. Let's look at what can be discovered by the Discovery product on VMware:

The VMware Discovery collection

Furthermore, the following CIs are discovered as well:

Discovery of vCenter servers

Discovery of ESX servers

Discovery of ESX resource servers

Discovery of VMware virtual machines

Discovery of datastores
The ServiceNow VMware
Discovery setup
You probably know that VMware vCenter is installed on a Windows machine, so to perform discovery on VMware, you will need the credentials of the Windows machine; otherwise, Discovery will result in an error.
Windows credentials
To store the credentials on the ServiceNow instance, you need to navigate
to Discovery application | Credentials. You should be directed to the
following page:

The Discovery Credentials module

Now, click on the New button, and it should direct you to the UI page that offers many options for storing the credentials discovery_credentials, such as AWS, Azure, Windows, JDBC, VMware, and CIM. It is important to mention that these credentials are used during the probe phase of Discovery to retrieve the device, or rather CI, information. You may be interested to know that credentials may be in the form of a username and password or certificates. With regard to credentials, there are a few important topics that I would like to introduce: the security of stored credentials and service credentials.
Security of stored credentials
Firstly, let's look at the security of stored credentials. Apparently, security is the main concern with a cloud application, but ServiceNow has a robust password-securing mechanism: once a password is entered in the credentials discovery_credentials table, it can't be viewed.
Service credentials
Now, let's look at service credentials. Try to recall Chapter 2, ServiceNow Configuration, where you learned about the MID Server. We know that the MID Server facilitates communication with external systems. You might be interested to know that the MID Server utilizes stored credentials only, but service credentials must have domain or local administration rights to avoid access-related issues. Now, we have introduced a new term: service credentials. Try to recall Chapter 2, ServiceNow Configuration, again; in the MID Server configuration, a local administrator or domain account was used as the MID Server user account for connecting with a ServiceNow instance, but service credentials are used by the MID Server to connect with the CIs of the network, as shown in the following diagram:

Service credentials

Moreover, we have just seen that the entered credentials can't be viewed, so how will ServiceNow process them? Let's see how this is done in the ServiceNow space:

Firstly, the credentials are decrypted on the instance with the password2 fixed keys

The credentials are re-encrypted on the instance with the MID Server's public key

The credentials are encrypted on the load balancer with SSL

The credentials are decrypted on the MID Server with SSL

The credentials are decrypted on the MID Server with the MID Server's private key

By now, we should understand service credentials, but if many credentials are stored in ServiceNow, how are they managed? How does ServiceNow know which credential needs to be used and when? How does ServiceNow know the credentials order? The answer is the out-of-the-box Order field that is available on the Windows credentials form and is used for determining the sequence. It is important to note that if the order is not entered on the credentials form, then the ServiceNow platform performs random checks until it finds the perfect match. Once the credentials are matched with a device, an affinity is created in the Credential Affinity dscy_credentials_affinity table, and once it is created, it is used by Discovery and orchestration as well; if the credentials are changed again, then a new affinity will be created. Furthermore, many credentials are stored in the credentials table, which brings importance to the Order field, so let's take the example of Windows servers. Just imagine that 200 Windows servers' credentials are stored in ServiceNow and 20 of them are used for logging into 80% of the machines; these credentials must then have a lower order number for sequence processing.
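
To review the processing sequence on your own instance, a quick background script can list the stored credentials by their Order field; this is a minimal sketch using the discovery_credentials table described above:

var cred = new GlideRecord('discovery_credentials');
cred.orderBy('order'); // lower order values are tried first
cred.query();
while (cred.next()) {
    gs.info(cred.getValue('order') + ': ' + cred.getValue('name') + ' (' + cred.getValue('type') + ')');
}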
Windows Discovery
To create credentials, navigate to Discovery | Credentials and click on Windows Credentials, as the ESX server runs on a Windows machine; this is why you need Windows credentials in the first place. I have used my development environment for demonstration purposes, so click on Windows Credentials on the following page:

The Credentials page

After having clicked on the Windows Credentials button, you should be directed to the configuration page shown in the following screenshot, where you can enter the domain username and password. Moreover, here I would like to bring your attention to the Applies to field, which is applicable to MID Servers. Try to recall Chapter 2, ServiceNow Configuration, in which you learned that multiple MID Servers can be installed at many geographic locations based on the organization's size. You can select any available MID Server, and the selected one will utilize the submitted Windows credentials; as an alternative, you can leave it as all MID Servers:

The credentials configuration

Congratulations! The submitted credentials have been saved successfully. However, what guarantee is there that the entered credentials will work during discovery? There might be a typo, or the entered credentials may not have sufficient privileges for performing activities on the target machine, so this is the sole reason for recommending a credentials test in the first place. Under the related links, you will notice the Test Credential UI action; click on it and ServiceNow will pop up the following dialog box. Moreover, it is important to note that in the Target field, the IP address of the target machine should be entered, because the same IP address will be utilized by the MID Server to discover the host. Furthermore, make sure that your organization's MID Server is up and running and PowerShell is installed on the host machine; otherwise, successful authentication can't be achieved:

Test Credential

We have seen in previous sections that vCenter runs on Windows machines. This means that we should bring the Windows server information, or rather CIs, into the ServiceNow CMDB. In the previous sections, you learned that the IP address of the host machine is required. To find the IP address of the host machine, open the Command Prompt and type ipconfig; this command returns the network information of the machine. Now, find the IPv4 address of the machine and enter this while performing the credential test.

In Chapter 1, The IT Challenges and Power of Automation, you learned about Discovery and the Discovery schedule. If you run a complete Discovery schedule, then all probes will be kicked off, which is quite a time-consuming process. Rather than running a complete Discovery, a specific Discovery schedule should be created to discover only virtual machines. It is important to note here that you should create a dedicated MID Server for discovering virtual machines. Navigate to Discovery | Discovery Schedule and create a dedicated Discovery schedule, as shown in the following screenshot. A lab environment has been created with a dedicated MID Server, VM-MID Server, for discovering virtual machines, as shown in the following screenshot:

VMware Discovery schedule


Discovery options
Furthermore, let's look at the Discovery schedule configuration to understand it better. The Discover field is very critical while creating a new Discovery schedule, as here you can decide what the new schedule will discover. There are a couple of options available, such as Configuration items, IP addresses, and Networks. In previous sections, you learned about the ServiceNow CMDB cmdb_ci, and these options are related to it. If you want to discover the CIs and update the CMDB, then you can select the Configuration items option in the Discover drop-down. Sometimes you do not want to discover and update the CIs in the CMDB, but are rather interested in only searching for active IPs (alive devices); in that case, the IP addresses option can be chosen in the Discover drop-down to scan the devices even without credentials. Furthermore, the Networks (discover routers and switches) option of the Discover drop-down is used to populate the IP network cmdb_ci_network table. Moreover, there are other options available, such as Services, to discover services for the service mapping application that is available as a separate subscription:

The Discover field configuration


Kick off discovery
After having filled in the Discovery configuration form, save the record and click on the Discover now button under the related links; this action must kick off Discovery to populate the Windows server information. How can you monitor the status of Discovery? Out of the box, ServiceNow creates a Discovery status record; you can view it by navigating to Discovery | Status, and it should direct you to the Discovery status page. It is important to note that every time the discovery schedule is kicked off, a new discovery status discovery_status record with the DIS prefix is created, as shown in the following screenshot, showing the respective status, such as Starting, Canceled, and Completed. Moreover, once the discovery status record is created, it creates three related lists underneath it—discovery log, devices, and ECC queue:

The Discovery status record

Now, click on a discovery status record, which provides the discovery summary shown in the following screenshot, where you can view various details, such as the Discovery status Number, Schedule, and State. Furthermore, the started and completed counts indicate the probes started and completed; during my discovery schedule run, 13 probes were started and completed. It is important to note that the started and completed numbers do not necessarily have to match:
The Discovery status summary
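
If you prefer a script to the Status module, the following minimal sketch lists the five most recent discovery runs from the discovery_status table mentioned above:

var dis = new GlideRecord('discovery_status');
dis.orderByDesc('sys_created_on'); // newest runs first
dis.setLimit(5);
dis.query();
while (dis.next()) {
    gs.info(dis.getValue('number') + ' - ' + dis.getValue('state'));
}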

Furthermore, it is important to note that Discovery will not start without an IP address, so for executing the Discovery, I have given my lab environment's IP address, or rather its IPv4 address. Try to recall Chapter 1, The IT Challenges and Power of Automation, which explains quick ranges that take IP ranges as an input, separated by a simple comma (,). As an alternative, IP ranges can be created by clicking on the New button, which should direct you to the Discovery IP Range configuration page. The configuration page is driven by the Type drop-down, which contains three options out of the box—IP Address Range, IP Address List, and IP Network—and each one of them has a different purpose, as described in the following table:

IP Address List: Used for individual addresses that should not be included in any existing IP range for discovery to query

IP Address Range: Supports selected segments of the network for discovery to query

IP Network: Includes the network address and the broadcast address as well for discovery to query

Type of IP ranges
Furthermore, sometimes there might not be adequate information during the creation of the discovery schedule. This shouldn't stop you discovering the CIs, so ServiceNow provides the discover options that we have seen in the previous sections. While creating a new discovery schedule (to navigate, click on Discovery | Discovery Schedule | fill in the form | Discovery IP Ranges), you must choose the Discover type as Networks with the MID Server. For the lab environment, the IP Address Range option was selected, as the IP address of the host machine was known to me:

Discover IP Range

After having submitted the form, an IP address range record must be attached to the discovery schedule, as shown in the following screenshot, and it will be utilized during the discovery execution:

Discovery schedule IP ranges summary


Discovery log, devices, and
ECC queue
In the previous section, you learned that the discovery status record creates three related lists, so let's understand how to read them from a troubleshooting point of view. Let's look at the discovery log discovery_log first; discovery logs provide details on three levels—Information, Warning, and Error—with a created date and ECC queue input details as well. In Chapter 2, ServiceNow Configuration, you learned about the ECC queue and how it works. I have created a lab environment, and since I gave my machine's IP address in the IP range, all probes were launched, as you can see in the following screenshot:

Let's move on to the devices discovery_device_history related list of the discovery status record. It is important to note that during discovery, it tracks the current and completed activities; when the discovery is completed, it returns the results in this table. Moreover, a successful discovery returns an updated or created CI message, and a failed one returns messages such as Active, couldn't classify, with details under the Issues column. In my lab environment, I gave my machine's IP and ran the discovery schedule, and it returned my system name (hp) and Class (Computer), as shown in the following screenshot with the Created CI remark. Let's understand what just happened. The discovery schedule ran against my machine's IP and launched multiple probes to retrieve device information, and finally, a new CI (hp) was created and stored in the CMDB cmdb_ci_computer:

CMDB CI created

If you click on the CMDB CI item (hp in this case) from the devices related list, this should direct you to the actual CI item (form) shown in the following screenshot, where you can view various fields, such as Name, Manufacturer, and Model ID. These are the result of the various probes triggered by discovery through the MID Server:

The newly created CMDB item

If you scroll down the CI page, then you should be able to view the various related lists that hold related information about the CI, such as Network Adapters, Storage Devices, Software Installed, and Running Processes. As this is the lab environment, you can view my machine's network adapter, as given in the following screenshot. Moreover, you can also view all the process execution results for my local machine:

Network Adapters

The software installed cmdb_software_instance is a related list and stores all the software that is installed on the host machine:

Software Installed

The ECC queue related list indicates a series of records that were created during the distinct phases of the discovery execution. Try to remember Chapter 1, The IT Challenges and Power of Automation, where we learned about the four phases of discovery—scanning, classification, identification, and exploration—which can be seen in the Topic column (DNS, PowerShell, WMIRunner, and so on) and in the Queue column ecc_queue.queue. Results are stored as output and input, so what does this mean for us? The answer: a message from the ServiceNow instance to another system is classified as output, and a message from the external system to the ServiceNow instance is classified as input, as you can view in the following screenshot:

ECC queue
Discover VMware vCenter and
process classifier
Coming back to our ECC queue, you will notice that a VMware exploration probe, VMwareCenterProbe, has been triggered by a process classifier during discovery. To view the process classifier record, navigate to Discovery Definition | CI Classification | Processes, and there you should be able to view many out-of-the-box process classifiers, including vCenter, as shown in the following screenshot:

The vCenter process classifier

If you click on vCenter, then it should direct you to the following page, where you can view many details, such as the Name, the Table (VMware vCenter Instance) in which data will be populated, the Relationship type (Runs on::Runs) that is established with the host, and the Condition (Command contains vpxd), which indicates that the actual process that has been found contains vpxd, as shown in the following screenshot:
The process classifier

Furthermore, if the condition is true (the Command contains vpxd), then an additional probe will be triggered, as shown in the following screenshot, to interrogate the vCenter console:

Process classifier triggered probes

We have already discovered the Windows server that runs the vCenter application, but the VMware information hasn't been populated in our CMDB yet. So, to bring the VMware information into the CMDB, we need the VMware credentials so that vCenter can be queried during the discovery scan. So, again, navigate to Discovery | Credentials | New | select VMware Credentials, and you should be directed to the following page, where you need to configure the VMware credentials, as shown in the following screenshot:

vCenter credentials

In the previous sections, we executed the discovery schedule to bring the Windows server information into the ServiceNow CMDB. By now, we know that vCenter is running on the Windows server, but the vCenter information was not retrieved during the last Discovery Schedule run. Now, as we have created a VMware credential, execute the same Discovery Schedule (VMware-Discovery) by navigating to Discovery | Discovery Schedule and clicking on the Discover Now button under the related links. Wait for some time and navigate to the Discovery Schedule's devices related list, where you should notice a new CI, the ESX server.
Populating vCenter details in
the CMDB and dependency
view
Try to remember the VMware section of the Configuration application, which was empty before the Discovery Schedule execution. Now, navigate to Configuration | Virtual Machine Instances, and you will notice that VMware instances have been populated in the VMware Virtual Machine Instances cmdb_ci_vmware_instance table, as shown in the following screenshot. It is important to note that other components, such as ESX servers and templates, have been populated as well:

Configuration-VMware instance
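
A quick, hedged way to confirm the population from a script is to count the records in the table mentioned above:

var ga = new GlideAggregate('cmdb_ci_vmware_instance');
ga.addAggregate('COUNT'); // count the discovered VMware instance CIs
ga.query();
if (ga.next()) {
    gs.info('VMware instances discovered: ' + ga.getAggregate('COUNT'));
}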

Just imagine, if we use ServiceNow to automate the provisioning of VMware, how would it be done? The answer is via a template. So, what is a template and what does it do in a VMware environment? The answer is that a template is a master copy of a virtual machine and is used to create many clones. You may be interested to know that a clone is a copy of a virtual machine. Navigate to Configuration | VMware | Virtual Machine Templates after a successful discovery; they should be auto-populated:
Configuration—VMware Virtual Machine Templates

If you open any virtual machine instance by navigating to Configuration | VMware | Virtual Machine Instances | VM INS3, then you can view all the related information on the form, such as Name, State, and Used for, as shown in the following screenshot:

The VMware virtual machine instance information

On the same page (VMware Virtual Machine Instance VM INS3), if you scroll down, then you should be able to view all the related items, as shown in the following screenshot, which explains all the relationships with other items:
The CMDB items relationships

Now, open the CI item and locate the view relations button, which opens the Dependency view:

The view relations button

Now, click on Dependency view to display an infrastructure view of the configuration items and business services that are associated with the CI; furthermore, the Dependency view also indicates the status of the configuration item and supports access to the CI's related alerts as well. After clicking, it should direct you to the dependency UI page, where you can view all the relationships with the host, such as VMware vCenter, VMware vCenter datacenters, ESX server, VMware Virtual Machine Templates, Disks, and Network, in graphical mode. ServiceNow provides many out-of-the-box visual options, such as vertical, horizontal, radial, force, groups, and finally details, and with each click, the visual representation changes:

Graphical relationship options

Furthermore, I have used the horizontal visualization, as shown in the following diagram, which represents CIs in a horizontal tree pattern based on upstream and downstream relationships:

A graphical dependency view

So far, you have learned about the various stages that are involved in Windows and VMware data population in the ServiceNow CMDB. Hooray!! The VMware virtual machine information has been populated in the ServiceNow CMDB, but what now? What do we need to do with the populated details? The answer is that we will utilize the populated VMware virtual machines and the related components for configuring the cloud resource offering in the following sections of the chapter.
A use case for change
requests
You are probably aware that a CR, or change request, is a standard practice in IT operations; likewise, with virtual machines, a change request is mandatory for making changes in the virtual machine environment. Here, it is important to note that whichever option is chosen, such as terminate VM or stop VM, a change request should be mandatory. Before exploring change requests in more detail, let's understand what a virtual machine catalog item looks like, as catalog items are the standard way of logging requests in ServiceNow.
Virtual machine catalog item
A virtual machine catalog item can be created with a VMware template. In previous sections, we have seen that templates can be populated in the CMDB and stored in the VMware Virtual Machine Template cmdb_ci_vmware_template module. Navigate to Configuration | VMware | Virtual Machine Template, and it should direct you to the list view of VMware machine templates. Open any template and scroll down; under the related links, you should be able to view options such as the Create Catalog Item, View Catalog Items, and Subscribe buttons, as shown in the following screenshot:

Related link To VM

Under the related links of the virtual machine template, click on the Create Catalog Item button; ServiceNow should direct you to the newly created item, as shown in the following screenshot. Moreover, ServiceNow auto-populates fields such as Name, Catalogs, Workflow, and Category, but some fields need your attention, such as VM size and Provisioning mode. We will explore these in the following sections:
VMware catalog item

On the VMware Catalog Item (VMware Instance - Template 1) page, if you scroll down, you should be able to view the VM size and Provisioning mode fields. Provisioning mode is a drop-down field and determines the provisioning type, such as Manual or Automatic. VM size is a reference field and refers to the VMware Sizes Definition vmware_size table. You will learn more about this in the following sections:

The VMware Catalog Item field

VM sizes are controlled by the Sizes module of VMware Cloud and can be found by navigating to VMware Cloud | Sizes, as shown in the following screenshot, where each record explains the configuration of the virtual machine; for example, small has 1 vCPU, 512 MB of memory, and a data disk size of 10 GB. A new size can be created by clicking on the New button, as shown in the following screenshot:
VM size definitions
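
As a hedged sketch, a size record could also be created from a script; the vmware_size table name comes from the text above, but the column names used here (name, vcpus, memory, disk_size) are assumptions for illustration, so check the table's dictionary on your instance before relying on them:

var size = new GlideRecord('vmware_size');
size.initialize();
size.setValue('name', 'Medium');
size.setValue('vcpus', 2);      // assumed column name: number of virtual CPUs
size.setValue('memory', 1024);  // assumed column name: memory in MB
size.setValue('disk_size', 20); // assumed column name: data disk size in GB
size.insert();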

When ServiceNow directs you to the newly created item, the catalog item is in a deactivated state and you need to publish it by clicking on the Publish button. Once the catalog item is published, it should be added to the Cloud Resources catalog. For the cloud-related roles cloud_operator, cloud_admin, and cloud_user, a module and a Cloud Resources catalog should be available under the self-service application to log a virtual machine request. Navigate to Self-Service | Cloud Resources:

The Cloud Resources screen

In previous sections, we created a new catalog item (VMware Instance - Template 1) by clicking on the Create Catalog Item related link button, and this should be added to Cloud Resources | VMware Virtual Machines. Click on VMware Virtual Machines to view all the VMware virtual machines that are available for ordering on the ServiceNow platform. Now, you should be able to view the VMware Instance - Template 1 catalog item, as shown in the following screenshot, and this can be ordered to create a new virtual machine:
A new virtual machine

If you click on VMware Instance - Template 1, then you should be directed to the form shown in the following screenshot, where you will see many fields, such as Lease start date and end date, Virtual Resource Size, Used for, Business purpose, and CMDB attributes such as assigned to, business services, assignment group, and cost center. You might be interested to know that these are out-of-the-box fields and you can even modify the form. Now, fill in the form and click on the Submit button:
The virtual machine catalog item form

A request has been logged, and REQ and RITM numbers are generated. Based on the selected options (manual/automatic), approval should be done; after approval, the new virtual machine should be set up at the VMware vCenter end automatically. It is important to note that if you run the discovery schedule again, you should view the newly created VM within the ServiceNow CMDB. At the ServiceNow end, how can you monitor the status of a virtual machine? The answer is very simple: via the My Virtual Assets portal; we have seen the cloud portal in the previous sections, and the virtual assets portal resides within it. Navigate to Cloud Management | Cloud User Portal, click on View dashboard, and then click on the My Virtual Assets dashboard; there you can view the overall status of your virtual assets, virtual asset requests, metrics, resource optimization, and so on, as shown in the following screenshot:
My Virtual Asset dashboard

Furthermore, you can even manage virtual machine operations, such as Start, Pause, or even Stop, with a simple click. You only need to right-click on the virtual machine name and ServiceNow should display the virtual machine management options, as shown in the following screenshot; the ServiceNow workflow will then be extended to vCenter to perform the task as per the selected option. It is important to note that ServiceNow gives you the facility to perform these tasks on the target machine, but a change request should be mandatory based on the provisioning rule:
VM Management

Furthermore, ServiceNow provides dedicated sub-modules for managing
the virtual machine under VMware Cloud, as shown in the following
screenshot. Navigate to VMware Cloud | Managed Resources | Virtual
Machine Instances and click on any record:

VMware cloud-managed resources

After clicking on Virtual Machine Instances, you should be directed to a
list view of records, where you should choose one virtual machine
instance; after this, you need to scroll down to view the related links of
VM INS3, such as Modify VM, Start VM, Terminate VM, and so on, and
you can click on any option to kick off that workflow within the
ServiceNow platform. To explore a simple example (terminating the
virtual machine), let's click on the Terminate VM button:
VM management options

After clicking on the Terminate VM button, you should see a pop-up
window, as shown in the following screenshot. It is advisable to contact
your VMware administrator before terminating any virtual machine to
avoid any issues; after confirmation from the VMware administration
team, you can simply click on the OK button to start the termination
process:

Terminating VM

After clicking on the OK button in the pop-up, you should be directed to
the change request screen, as follows, where you can view the
Configuration item details, Category, and Assignment group:
Change request for terminating a virtual machine

Furthermore, the change request should follow the Change Advisory
Board (CAB) approval to implement the change, as shown in the
following screenshot:

Change request approval

After the change approval, the change request should move to the
scheduled state, and you simply need to click on the Implement button to
apply the change in the vCenter environment. Having clicked on the
Implement button, you should be directed to the record shown in the
following screenshot, where you should note the VM's terminating state:

Terminating virtual machine

In the previous sections, we explored provisioning, termination, and so on
at the ServiceNow end, but what about at the VMware vCenter end? You
may be interested to know that at the vCenter end, the same action
(terminating VM INS3) should be taken within a couple of minutes.
Finally, you can gain insight into the virtual machine termination
workflow, as shown in the following screenshot, by clicking on Show
workflow (under related links) on the VM termination page:

The virtual machine termination workflow


Summary
In this chapter, you learned the basics of various virtual machine
vendors, such as VMware, Amazon, and Microsoft, and how they interact
with the ServiceNow platform. After this, we explored basic concepts such
as cloud management application roles and related groups from the
operational point of view. We then entered the VMware Cloud
application, where you learned about orchestration and the significance of
discovery, and you learned how to configure the credentials of a Windows
machine, as our vCenter is installed on one. Then, we spoke about the
security of credentials, service credentials, and the order of service
credential execution. You also learned about the process classifier that
classifies vCenter components in the ServiceNow CMDB. We saw how
cloud resource catalog items are created (through the template) and how to
publish them so they are added under Self-Service | Cloud Resources.
Furthermore, you learned about discovery schedules, credentials, and the
various probes and sensors that populate the CMDB CI tables. We also
saw how you can request new VMs from a service catalog and maintain
them from the ServiceNow environment. It is important to note that to
kick off the VMware automation process, you must have the VMware
details in the ServiceNow CMDB, which is made possible by VMware
credentials and discovery. Once the VMware CIs are available in
ServiceNow, you can work with them. Finally, we explored how a new
virtual machine request is generated and the new virtual machine is set up
at the VMware vCenter end, and we saw how you can log change requests
for terminating a virtual machine with a workflow.
Automation with Puppet
This book is meant for ServiceNow professionals, but that doesn't mean
they should only know about the ServiceNow platform. We already know
that ServiceNow is a service management platform; on many occasions,
you may face a situation where you are not sure about the platforms
managed by ServiceNow. In this chapter, we'll learn about Puppet; I don't
expect you to know a lot about it, but it is one of the most widely used
server management tools. This chapter will cover the following topics:

Introduction to Puppet

Puppet installation and configuration

ServiceNow Puppet basic concepts

ServiceNow Puppet menus and modules

A use case for change requests


Introduction to Puppet
Puppet might be a new term for you as a ServiceNow professional, so let's
first understand what it does. Puppet is a configuration management
tool for managing Linux and Windows system configurations. You may be
interested to know that Puppet has its own language, known as the Puppet
declarative language, and it supports a Ruby domain-specific language
(DSL) as well. Although a few terms will be discussed in later sections,
for now let's acquire a basic understanding. Configuration information is
stored in a file known as the Puppet manifest; Puppet discovers Puppet
node information with a utility known as Facter, and the Puppet master
compiles the manifest into a catalog that holds resources and
dependencies, which is applied to the Puppet node (that is, the target
system) so that it can update itself. Before getting deeper into the
concept, let's understand some more basics with regard to the Puppet
application.

Puppet is a company, like ServiceNow, and has a variety of products; in
particular, Puppet comes in two versions, so let's explore them. Firstly,
there is an open-source version, known as open source Puppet, and
secondly, there is an enterprise version. As you would expect, the open-
source version doesn't have official support from the vendor, although
community members do answer questions. When it comes to Puppet
Enterprise, you will have proper support from the vendor (Puppet), but it
is licensed and costs you based on the number of nodes. Furthermore,
Puppet follows the client-server architecture. In the Puppet application
space, the client is called the agent and the server is called the master, or
Puppet master, as shown in the following diagram:
Introduction to Puppet
Push and pull configuration
Coming to the types of configuration, there are two industry standards in
the client-server architecture: push configuration and pull configuration.
So, let's get an understanding of both. Push configuration may not be a
new term for you. As we have seen in previous sections, it follows the
client-server architecture, so you can think of a server as a control room,
or a governing body for all activities. In the push configuration, the
control room contains all the configuration copies and pushes them to the
clients, or nodes. Just imagine that you want to configure three nodes in a
push configuration. The server will push configurations to the nodes, as
shown in the following diagram. You may be interested to know that
Ansible and SaltStack, two other configuration management tools, use the
push configuration:

Push configuration

As the name states, the pull configuration is the opposite of the push
configuration. In the pull configuration, the clients, or nodes, update
themselves dynamically by polling the control room, or centralized server,
periodically for updates, which means that no command execution is
required within the centralized server. You may be interested to know that
both Puppet and Chef utilize the pull configuration. Now, just imagine that
you want to configure three nodes in the pull configuration; the nodes will
poll the server, as shown in the following diagram:
Pull configuration
Puppet architecture overview
The architecture will be discussed in detail in later sections, but for now
let's acquire a basic understanding to help you understand Puppet from a
ServiceNow point of view. In the previous section, we spoke about the
pull configuration, explaining that nodes are responsible for updating
themselves by periodic polling. For a better understanding, I would like
to introduce two terms here: facts and the compiled catalog. Before
getting deeper into an overview of the Puppet architecture, it is important
to note that there is an SSL connection between the Puppet master and the
Puppet node, which will be discussed in later sections. Now, let's
understand Facter. You can imagine the facts it gathers as an overall
status report, or data, of the node that gives the Puppet master
information such as the IP address, hardware, operating system, network
address, and so on. As we know, in the pull configuration, communication
is initiated by the node, and the same approach is followed here as well:
the Puppet node sends the facts to the Puppet master, which holds the
configuration information. Before moving on, I would like to introduce
another term, manifest. Facts are available as variables in the manifests
of the Puppet master. Now, the ball is in the Puppet master's court, so
what does the Puppet master do with the facts? Here again, I would like
to introduce a new term, catalog. We have just learned that the Puppet
master receives client node information in the form of facts; the Puppet
master uses the facts to compile a catalog that specifies how the client
node, or Puppet node, should be configured. The compiled catalog is sent
back to the Puppet node, and finally, the Puppet node sends back a report
that indicates that its configuration has been completed and how it was
configured. Furthermore, a catalog holds the desired states of the Puppet
node's managed resources:
Puppet node update process, Source: Puppet
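
For instance, once Puppet is installed, you can preview the facts a node
would send to its master by running Facter directly on the node (a quick
check, assuming facter is on the PATH):

    # Print individual facts that the agent reports to the master
    facter operatingsystem
    facter ipaddress
    facter memorysize

    # Dump every fact at once
    facter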
Certificates
We have just seen an overview of the Puppet architecture, so moving
on, we should understand how the Puppet master and Puppet nodes
communicate with each other. Communication is established over the
Secure Sockets Layer, or SSL. It is important to note that certificates play
a crucial role, as mutual authorization (between the Puppet master and
Puppet nodes) is mandatory. So, just imagine that you want to add a new
Puppet node to the Puppet master: the Puppet node sends a request to the
Puppet master for the Puppet master's certificate, and after receiving the
request, the Puppet master provides its certificate to the Puppet node.
Then it's the Puppet master's turn, so the Puppet master requests the
Puppet node's certificate; after receiving the certificate request, the
Puppet node sends its certificate. After the mutual authorization, your
Puppet node is on-boarded.

The Puppet node's request for data from the Puppet master is shown in the
following diagram. You will have noticed that we mentioned certificates in
this section, but how are the SSL certificates for mutual authentication
generated? The answer is very simple: both the Puppet master and the
Puppet node have the ability to generate SSL certificates within their
environment, and you don't need to go to any third-party SSL certificate
publisher:
SSL connection between Puppet node and Puppet master
Others
A more detailed explanation will be given in the Puppet installation and
configuration section, but for now let's familiarize ourselves with some
more terms: manifests and classes. In the previous section, we
encountered manifests, but what do they do? A manifest, or rather a
Puppet manifest, is a file with the Puppet-specific .pp extension that
stores configuration information. Furthermore, we are already aware of
Facter, the utility through which the Puppet node's system information is
gathered. You might be interested to know that in the manifest, all
resources that are to be checked or changed are declared. Resources
might be services, packages, and so on. If I were to say that anything that
you wish to change on the client node, or Puppet node, can be considered
a resource, then I probably wouldn't be wrong. Moving on to classes: I
personally don't think a class needs any introduction; if you have any
programming background, then the term class won't be new to you. But, if
you don't have a programming background, then you can think of a class
as a container that holds different resources.
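
To make these terms concrete, here is a minimal sketch that writes a
manifest containing one class with a single package resource; the file
path matches the open source Puppet 3.x default, and the class and
package names are illustrative, not taken from a real deployment:

    # Write a minimal site manifest on the Puppet master
    cat <<'EOF' > /etc/puppet/manifests/site.pp
    # A class is a container for resources
    class webserver {
      # A resource declaring the desired state of the httpd package
      package { 'httpd':
        ensure => installed,
      }
    }

    # Apply the class to every node
    node default {
      include webserver
    }
    EOF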
Puppet installation and
configuration
Now, we are moving out of the ServiceNow space and into the Puppet
space. This might be very different from your current work, but as this
chapter is dedicated to Puppet automation through ServiceNow, we must
have some knowledge of Puppet. So, let's start with some key terms and
the Puppet architecture.
Architecture
In the Puppet space, there are two types of installation available; the first
is monolithic and the second is split. A monolithic installation is easier
than a split installation, as in the monolithic case the master, console, and
PuppetDB are installed on one node only, which makes maintenance
activities, such as installs or upgrades, easier. You may have attended a
Puppet automation meeting and faced situations where you heard the
following terms:

Master of Masters (MoM) is the Puppet master with which
ServiceNow interacts. You can imagine the Puppet master as a
control room that compiles Puppet code to create an agent catalog
and signs and verifies SSL certificates as well. The MoM also holds
the compile master and Puppet server, and if the node count
increases, then additional compile masters can be added for
workload balancing.

With regard to the Puppet server, you may be interested to know
that the Puppet server runs on the Java virtual machine (JVM)
and powers the catalog compiler.

Catalog compiler: you may be interested to know how
managed nodes are configured. The answer is that the Puppet
agent uses the catalog document from the Puppet master to
configure itself. In addition, as we have seen in previous sections,
the catalog also holds the desired state of every resource that is
going to be managed on the node.

File sync is a component that syncs Puppet code across multiple
compile masters.

The certificate authority plays a significant role when adding a
node to the Puppet master. The Puppet certificate authority accepts
certificate signing requests (CSRs) from nodes and accepts
commands to sign or revoke certificates.

The Puppet agent is a background service that runs on the machine
and communicates with the Puppet master. Furthermore, let's
understand how the Puppet agent works. In previous sections, we
saw that the Puppet master is like a control room; the Puppet
agent sends facts to the Puppet master and requests a catalog. To
send the response to the agent, the Puppet master compiles the
catalog using the available information and sends it back. After
receiving the catalog, the agent applies it by checking each
resource. It is important to note that if any resources are not in the
desired state, then the agent corrects them and sends a report to
the master. It is worth noting that agents come with MCollective
and Facter, shown as follows:

MCollective: It provides better control over the
infrastructure. Nodes with MCollective listen to
commands from the message bus and independently take
action on authorized requests from the Puppet agent.

Facter: It is the cross-platform system profiling library in
Puppet. It discovers and reports per-node facts, which are
available in your Puppet manifests as variables. Before
requesting a catalog, the agent uses Facter to collect
system information about the machine it's running on.

Console: It is a web-based interface to manage systems,
and it can manage user access, analyze events and reports,
and so on.

RBAC: It is the role-based access control application of
the Puppet Enterprise console for managing user access in
the Puppet space.

Node classifier: Puppet comes with a node classifier that
is built into the console. Just imagine that you want to
synchronize managed nodes; you can group those nodes in
the node classifier.

PE database: Puppet Enterprise uses PostgreSQL for
database services.

Furthermore, let's explore the MoM architecture:

Puppet enterprise architecture, Source: Puppet


Puppet installation
prerequisites
In the pull configuration, there are servers and clients, so Puppet must be
installed on both machines, where one machine acts as the Puppet
master and the second acts as the client. To set up the lab environment,
you will need the following elements:

Software prerequisites:

Oracle VirtualBox

Linux operating system (to install the Puppet Master)

Internet connectivity

Puppet master installation file (can be downloaded from the
internet during installation)

Puppet agent installation file (can be downloaded from the
internet during installation)

Hardware prerequisites:

A two-core processor with 1 GB RAM (for a basic lab setup)

Two to four processor cores and 4 GB RAM (to serve up to 1,000
nodes)
Puppet installation
You can configure the lab environment on your own machine as well. For
configuring a lab environment, you will need two machines. You can
simply achieve this with virtual machines, where one machine will act as
the Puppet master and the other will act as the Puppet client, or Puppet
node, as we have seen in previous sections:

1. Download Oracle VirtualBox from https://www.virtualbox.org/wiki/Downloads,
choose the option based on your operating system, and install it.
After successful installation, open the application and you should
be able to see the following page. Click on the New button to set
up the virtual machine:

Oracle VM VirtualBox Manager

2. Now, click on the New button and the virtual machine
configuration window should pop up, as shown in the following
screenshot, requesting information such as Name, Type, and
Version. To demonstrate the Puppet master installation, the Linux
OS has been used:
Create virtual machine

3. Click on the Next button, configure the memory and hard disk
size, and click on the Create button. After clicking on the
Create button, the Puppet Master - ServiceNow virtual machine
should be added on the left-hand side, as shown in the following
screenshot. Likewise, you need to create a Puppet client as well.
After creating the virtual machines, you should be able to see both
VMs on the left-hand side, as shown in the following screenshot,
but in a powered-off state:

Installation
4. It's time to start the virtual machines, but before that, you need to
supply the path of the Linux .iso file. In order to do so, right-click
on the virtual machine and then select the Settings button; now
you should be able to view the Settings window containing many
sub-sections. Click on the Storage button, click on the icon
(next to the Optical Drive field) to provide the path of the ISO file,
shown as follows, and finally click on the OK button:

Loading the ISO file on the virtual machine

5. Coming back to the Puppet Client - ServiceNow VM, the same
steps need to be followed; one machine will act as the Puppet
master and the second will act as the Puppet client. Now, click on
the Start button to boot the virtual machine and begin the
operating system installation:
Start VM

6. After clicking on the Start button, the virtual machine should
start booting and you should be able to view the installation
options screen. Here, choose the option as per your convenience,
and after some time you should notice that the installation has
begun. Soon after, you will be asked to provide the hostname of
the virtual machine, as shown in the following screenshot. For
demonstration purposes, I have given the Puppet master machine a
local hostname:

Configuring the hostname

7. Now, it's time to configure the network connection of the virtual
machine. On the same page, click on the Configure Network
button (bottom left), and then you should be able to view the
following screen. Here, you need to check the Connect
automatically option, as shown in the following screenshot:
Configure network

8. Finally, you will be asked to give the Root Password; here you
can type any desired password with which to log in to the virtual
machine. It is important to note that root is the admin account, and
it will be used during the Puppet master installation as well:

Enter the root password

9. Now, after completing the installation, click on the Reboot button;
after the reboot, you will be asked for the admin account and
password, so enter root (the username) and the root password to
log in to the virtual machine:
Installation

10. The same steps can be followed to set up the other virtual machine
(the Puppet node). Now, both our Linux machines are ready. But
keep in mind that the Puppet master and Puppet client must be
able to reach each other; an entry must be made in /etc/hosts on
both nodes to resolve DNS issues, or you can configure DNS to
resolve the IPs. Disable the firewall on both the Puppet master and
the Puppet client, to avoid any issues, by typing the systemctl stop
firewalld and systemctl disable firewalld commands in the Linux
Terminals (see the sketch after this step). Finally, make sure both
the Puppet master and the Puppet client have internet access to
install packages from the Puppet Labs repositories. You may be
interested to know that Puppet has its own repository (Puppet
Forge) from where you can install different packages, code, and
so on.
11. Once Linux is installed, you should be directed to the command
line, or Linux Terminal screen (the Puppet master - Linux
Terminal), where you need to enter all the commands to install the
Puppet master:
Puppet master virtual image

12. In previous sections, we noted that we must make sure internet
connectivity is up-and-running on both the Puppet master and the
Puppet client. We have just installed a new Linux operating system
(the Puppet master), so let's check the internet connectivity by
typing ping www.google.com (for testing purposes), as shown in the
following screenshot:

Internet connection testing

If the ping command returns unknown host www.google.com, then you
must configure the network on the Puppet master.

13. You will have observed that there is no graphical user interface
(GUI), so how can we keep the Terminal clean in the Linux
space? You can use the clear command to clean the screen, as
shown in the following screenshot:

Clear screen
14. Moving on to the network settings, we are now going to give a
static IP to the Puppet master for connectivity purposes. To do so,
type the following command in the Terminal and press Enter:

Static IP configuration command

15. After pressing Enter, you should be directed to the following
screen, where you should be able to view various network details
such as ONBOOT, BOOTPROTO, UUID, and so on; we should modify these,
as shown in the following screenshot. Don't forget to save the new
settings: press the Esc key first, and then type the :wq command to
save the file and regain control:

Static IP configuration
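
In case the screenshots are hard to read, the edit typically looks like
the following; the interface name and the addresses are assumptions for a
lab network:

    # Open the interface configuration file (eth1 in this lab)
    vi /etc/sysconfig/network-scripts/ifcfg-eth1

    # Values to set inside the file (example addresses)
    ONBOOT=yes
    BOOTPROTO=static
    IPADDR=192.168.1.10
    NETMASK=255.255.255.0
    GATEWAY=192.168.1.1

    # Save with :wq, then restart the network service (as in the next step)
    service network restart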

16. After getting control back, you need to restart the network
service by typing the command shown in the following
screenshot:

Network service restart

17. Now, after pressing the Enter key, you should see a series of
messages, as shown in the following screenshot. Wait until the
command execution is completed:

Restart network service

18. Once the previous step is completed, you can ping any web
address from the Linux Terminal and you should be able to view
the response of the ping command, as shown in the following
screenshot:

Internet connection test


Installing the Puppet master
Congratulations! Now, we are ready to install the Puppet master on the
Linux operating system. In this section, you will learn about the Puppet
master installation:

1. In the previous section, we explored some prerequisites. You may
be interested to know that iptables is the firewall that is available on
most Linux systems. We are going to disable the firewall rules
first for a smooth installation of the Puppet master. To do so, type
the iptables -F command, then save the change by typing the
service iptables save command and pressing Enter, as shown in the
following screenshot:

Disable firewall rule
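
The two commands from this step are as follows:

    # Flush the firewall rules, then persist the change
    iptables -F
    service iptables save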

2. The next step is to get the Puppet repository. Open any standard
browser and go to yum.puppetlabs.com, where you can view the
repositories for our OS, such as puppetlabs-release-el-6.noarch.rpm, as
shown in the following screenshot:
Puppet repository

3. Now, copy the link location for future use. Then, open the
Terminal, type the command with the web address, as shown in
the following screenshot, and press Enter:

Install Puppet master 1
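
The command behind this screenshot most likely installs the release
RPM named earlier (the URL is the one copied from yum.puppetlabs.com):

    # Install the Puppet Labs release package for EL 6
    rpm -ivh https://yum.puppetlabs.com/puppetlabs-release-el-6.noarch.rpm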

4. After pressing Enter, you will notice the retrieving message, as
shown in the following screenshot, along with a warning message
(you can ignore the warning):

Installing Puppet master 2

5. Now, we are ready to install the Puppet master. To do so, type the
command shown in the following screenshot and press Enter.
Execution of the command should show you a series of
installation messages:
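
The installation command itself is most likely the following; on EL
systems, the open source Puppet 3.x master package is named
puppet-server (an assumption, as the screenshot is not reproduced here):

    # Install the open source Puppet master package and its dependencies
    yum install puppet-server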

6. Soon after, you will be asked some questions, as shown in the
following screenshot. Type y to continue the installation of the
Puppet master. After entering y, the package installation will
begin:

Puppet master 3

7. Congratulations! The Puppet master has been installed
successfully, as you can see in the following screenshot, along
with all its dependencies:

Puppet master 4
Installing the Puppet agent
In previous sections, we saw that the hostname of the first machine was
configured as puppetmaster; during the installation of the second machine,
the hostname was set to puppetagent. It is worth mentioning that you should
not forget the network configuration, where the Connect automatically
checkbox should be checked. Now, log in to the Puppet agent machine
using root and the root password that was used during the Linux
installation. After successful installation, you should see the following
screen:

Puppet agent Terminal

Now, let's install the Puppet agent on the machine:

1. As a first step, check the internet connectivity of the Puppet agent
by executing the ping command; if it doesn't return any errors,
then we can move on.
2. On the puppetagent, disable the firewall rules by executing the same
commands that were used on the puppetmaster, as shown in the
following screenshot:

Disable firewall
3. Moving on, we must enable the Puppet repository on the
puppetagent machine. To do so, execute the command shown in the
following screenshot in the puppetagent Terminal:

Enabling the Puppet repository

4. You should now be able to view the following screen:

Command execution result

5. Now we are ready to install the Puppet agent on the Linux virtual
machine. To install it, execute the following command in the
puppetagent Terminal:

Install Puppet agent
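
A hedged reconstruction of the repository and installation commands on
the agent is as follows:

    # Enable the same Puppet Labs repository as on the master
    rpm -ivh https://yum.puppetlabs.com/puppetlabs-release-el-6.noarch.rpm

    # Install the Puppet agent package
    yum install puppet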

6. After pressing Enter, you will see a series of messages as part of
the Puppet agent installation process:
Puppet agent installation

7. Congratulations! The Puppet agent has been installed successfully
on the machine, as shown in the following screenshot:

Puppet agent installation


Puppet master and Puppet
agent configuration
In the Puppet master and Puppet agent configuration, we must edit the
hosts files, for which you will need the IP addresses:

1. Open the Puppet master Terminal and type the command shown in
the following screenshot, which will return the IP address of the
machine; under eth1 (the Ethernet interface), you can view the IP
address of the Puppet master:

IP address of the Puppet master

2. Now, we must edit the hosts file in the vi editor to give a
hostname to the puppetmaster machine. Type the command in the
Terminal, as shown in the following screenshot:

Edit host name command

3. After executing the command, you should be able to view the
following screen, and now we are going to give a name to the IP:
Configure DNS

4. To assign the domain name to the IP address, follow the
screenshot, then save and quit:

Assign domain name

5. The next step is to edit the Puppet configuration file, using the
command shown in the following screenshot:

Editing the Puppet configuration file

6. After executing the command, you should be able to view the
following screen. On the screen, you can clearly see that there are
two sections, main and agent. We must add our domain so that the
server can respond:
Executed command screen

7. In the main section, add the DNS and certificate names, as shown
in the following screenshot, and save the file:

Add DNS and certificate name
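
The entries behind this screenshot typically look like the following;
dns_alt_names and certname are real puppet.conf settings, while the
hostnames are examples:

    # /etc/puppet/puppet.conf on the master, [main] section (example values)
    [main]
        dns_alt_names = puppetmaster,puppetmaster.example.com
        certname = puppetmaster.example.com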

8. Congratulations! Now we are ready to start our Puppet master. To
do so, execute the command shown in the following screenshot:
Starting the Puppet master

The Puppet master configuration has been completed; as the next
step, we need to configure the Puppet agent to communicate with
it.

9. In the previous section, we executed a command to find out the
IP address of the Puppet master; we can use the same command
again to get the IP address of the Puppet agent:

Puppet agent IP address

10. On the next screen, you can view the IP address for the eth1
interface, so copy it somewhere so that it can be used while
editing the configuration file.
11. Now, open the hosts file of the Puppet agent by executing the
following command:

Editing the Puppet agent

12. After executing the command, you should be able to view the
following screen. Now, you have to give names to the IP addresses
of the Puppet master and the Puppet agent, as shown in the
following screenshot:
Assigning a domain name to the Puppet agent

13. Now it's time to edit the Puppet agent configuration file. This can
be done by executing the following command:

Editing the Puppet agent configuration file

14. After executing the command, you should be able to view the
following screen, where the main and agent sections are available.
As this is the Puppet agent configuration, we'll make the changes
in the agent section:

Executed command screen

15. In the agent section, you must add the server with which the
Puppet node is going to communicate:
Adding a server name to the Puppet agent

16. Congratulations! It's time to start the Puppet agent, and this can be
done by executing the following command (a command sketch for
these last steps follows the screenshot):

Starting the Puppet agent
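
In case the screenshots are hard to read, the configuration edit and
start-up commands for the agent might look like the following; the
master's hostname is an example carried over from earlier:

    # /etc/puppet/puppet.conf on the agent: point the [agent] section at the master
    [agent]
        server = puppetmaster.example.com

    # Start the Puppet agent service
    service puppet start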


Generating an SSL certificate
In previous sections, we learned about the significance of SSL certificates
for communication between the Puppet master and the Puppet agent, so
now let's look at how certificates can be generated and signed by the
master and agents:

1. Open the Puppet master virtual machine to generate the certificate.
You must stop the Puppet master by executing the following
command:

Stopping the Puppet master

2. Our Puppet master has been stopped successfully, and now it's
time to generate the certificate by executing the following
command:

Generating the certificate

3. Soon after executing the command, you should be able to view the
SSL certificate being generated, as shown in the following
screenshot. After seeing Notice: Starting Puppet master version 3.8.7,
you need to exit the process, as the Puppet master service can't
start while it is running:


Generate SSL certificate

4. Congratulations! A certificate has been generated, so it's time to
start the Puppet master by executing the command shown in the
following screenshot:

Starting the Puppet master
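
Reconstructing the master-side commands from these screenshots for an
EL 6 system running open source Puppet 3.x (a hedged sketch, not an
exact transcript):

    # Stop the Puppet master service
    service puppetmaster stop

    # Run the master in the foreground so that it generates the CA and
    # master certificates; exit with Ctrl + C once you see the
    # "Starting Puppet master version 3.8.7" notice
    puppet master --no-daemonize --verbose

    # Start the Puppet master service again
    service puppetmaster start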

Let's move on to the Puppet agent and generate the certificate as part of
the configuration:

1. First, go to the Puppet agent virtual machine and stop the Puppet
agent by executing the command shown in the following
screenshot:

Stopping the Puppet agent


2. The Puppet agent has been stopped, so now we need to generate a
certificate signing request for the Puppet agent by executing the
command shown in the following screenshot:

Generate Puppet agent certificate

3. Now it's time to go back to our Puppet master to check the
certificate requests; you can view them by executing the following
command, which will show you all pending certificate requests:

View certificate request

4. Now, the Puppet master must sign the certificate request from the
Puppet agent. To sign the request, you should execute the
following command with the name of the Puppet agent:

Signing the Puppet agent certificate

5. The Puppet agent's certificate request has been signed, so now go
to the Puppet agent and turn it on. You can do so by executing the
command shown in the following screenshot:
Restarting the Puppet agent

6. In the previous section, we saw that the Puppet master signed the
Puppet agent's certificate, but when there are many nodes, it is
important to verify that the correct certificate has been signed; you
can validate this by executing the command shown in the
following screenshot:

Verifying the certificate command

7. After executing the command, you will view the certificate, which
must match the signed one.
8. Finally, the Puppet agent must update itself from the Puppet
master to get the recent changes. To do so, use the command
shown in the following screenshot; a consolidated sketch of this
section's commands follows the screenshot:

Updating the Puppet agent
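
Pulling the agent-side and signing steps together, a hedged
reconstruction of this section's commands is as follows (the agent's
certificate name is an example):

    # On the agent: stop the service, then request a certificate from the master
    service puppet stop
    puppet agent --test

    # On the master: list pending requests, sign the agent's request, then verify
    puppet cert list
    puppet cert sign puppetagent.example.com
    puppet cert list --all

    # Back on the agent: start the service and pull the latest configuration
    service puppet start
    puppet agent --test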


Discovery of Puppet
Before moving on, you may want to revisit Chapter 3, Manage Virtual
Machine with ServiceNow, which explains how to set up virtual machines in
the ServiceNow environment. We ran a discovery schedule to discover the
actual CIs with the help of credentials; likewise, we should create a
discovery schedule for the Puppet CIs as well so that our ServiceNow
CMDB can be populated, because without the CI information, no action
can be performed on the Puppet master or nodes. Now, let's see how this
works. First, we'll look at how the discovery of Puppet works; if you
are the single driver of the project, this will help you to a certain
extent. From the previous sections, we know that the Puppet agent can be
installed on Windows and *nix (Linux, Unix, and so on) operating
systems; when it comes to discovery, ServiceNow supports Puppet masters
that run on Linux servers and uses Secure Shell, or SSH, to collect the
CI information. In the ServiceNow space, you can navigate to
Configuration | Automation servers | Puppet Master; if you click on the
module, you should be directed to the list of Puppet master records.
Puppet credentials
In the previous section, we saw that SSH commands are used to collect
Puppet information. There are certain things that are important to keep in
mind while working with Puppet; for example, you should have proper
role access to perform operations in both applications. In the
ServiceNow environment, you must have the puppet_admin role, and in the
Puppet environment, the user must have adequate roles. Let's move on to
the configuration part. Navigate to Orchestration | Credentials, and you
should be directed to the credentials UI page; there, you need to select
the SSH Credentials option, as shown and highlighted in the following
screenshot:

Puppet credential configuration

Furthermore, after clicking on the SSH Credentials option, you should be
directed to the credentials configuration page, which you need to
configure properly. It is important to note that you will have to
coordinate with your Puppet counterpart to get these details, because a
Puppet administrator can assist with the login credentials:

Puppet master credential configuration

When you are done with the credentials configuration, it is advisable to
test it before submitting. As we have seen in previous sections, credentials
can be tested by clicking on the Test Credentials link; after successful
validation, click on the Submit button, which should direct you to the
credentials list view, where you can view your newly submitted
credentials. It is worth noting the importance of the MID Server for
Puppet automation, which will be discussed in upcoming sections.
Puppet discovery schedules
To populate Puppet information in the ServiceNow CMDB, a discovery
schedule can be set up to import information at a regular interval. To
create a new discovery schedule, navigate to Discovery | Discovery
Schedules and click on the New button. Now, complete the discovery
schedule configuration form, as shown in the following screenshot, and
click on the Submit button. In my lab environment, I have created a
separate MID Server for Puppet-related operations from the performance
point of view:

Puppet master-Discovery schedule

After clicking on the Submit button, you should be directed to the
discovery schedule list view, where you can view your newly created
discovery schedule. Now, click on the discovery schedule record and
scroll down to where you can see two options under related links. The
first is Quick ranges, which takes IP ranges as input; discovery begins
its probe process against the entered IPs (quick ranges) only after you
click on the Discover now button, so keep in mind that you must have the
IP addresses of your Puppet masters in the discovery schedule. In my lab
environment, I have installed Puppet on my own machine only; that's why
I have entered my system's IP address, and you can do the same. After
kicking off the discovery schedule, you will notice that the Puppet master
has been discovered in the Devices related list of the discovery schedule.
Furthermore, you may ask: how is the Puppet master discovered by the
discovery schedule? From the previous sections, we know that Discovery
is an agent-less system and works based on probes and sensors, so we
should understand how the Puppet master probes work. From the previous
chapters, we know that, out of the box, there are many probes available in
the Discovery application, and you can find them by navigating to
Discovery Definition | Probes and filtering for UNIX - Active Processes,
which runs behind the discovery of the Puppet master, as shown in the
following screenshot:

UNIX - Active Processes probe for Puppet master

But keep in mind that there are certain conditions that must be met, so
let's explore them: either the name of the process is pe-httpd, or the
process parameters contain puppet master and the name of the process is
ruby. If one of these conditions is met, then a record is inserted into
the ServiceNow Puppet master table, cmdb_ci_puppet_master. Furthermore, we
have seen in previous sections that a series of probes and sensors is
triggered to collect information about the discovered devices; likewise,
once a new CI (Puppet master) record is created, an additional Puppet -
Master Info probe is triggered, as shown in the following screenshot, to
collect more information:

Puppet - Master Info probe
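
Conceptually, the probe's condition resembles the following process
check; this is a rough, hypothetical equivalent, not the probe's actual
implementation:

    # Look for a pe-httpd process, or a ruby process whose arguments
    # mention "puppet master"
    ps -eo comm,args | grep -E 'pe-httpd|ruby.*puppet master'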

Although Puppet configuration management will be discussed later, we
must understand that once the collected information is passed to the
Puppet - Master Info probe, the related sensor triggers two more probes
simultaneously: the first is Puppet - Certificate Requests and the second
is the multiprobe Puppet - Resources. Let's understand what they do. The
sensor of the Puppet - Certificate Requests probe populates the Puppet
certificate request [puppet_certificate_request] table, which holds open
requests. Moving on, the Puppet multiprobe is a group of the following
probes:

The Puppet module probe's sensor populates records in the Puppet
module table, puppet_module

The Puppet manifest probe's sensor populates the Puppet manifest
table, puppet_manifest

We have learned that, by default, the discovery schedule can identify a
Puppet master that is running on a Unix system, but it is important to
note that the credentials, or rather the user account, must have rights
to execute the following commands:

For the Puppet - Master probe, the user must have privileges to
execute the puppet, echo, and hostname commands

For the Puppet - Certificate Requests probe, the user must have
privileges to execute the puppet command

For the Puppet - Manifests probe, the user must have privileges
to execute the puppet, echo, sed, and find commands

For the Puppet - Module probe, the user must have privileges to
execute the puppet command

If you have some experience with Unix, then you will have heard about
the sudo command, which allows users to run programs with the security
privileges of another user, by default the superuser. You might be
interested to know that ServiceNow does support sudo as well, but you'll
have to make some effort to configure it. Navigate to Discovery
Definition | Probes and filter for the Puppet-related probes. For
demonstration purposes, I have taken the Puppet - Master Info probe;
open it and scroll down to view the Probe Parameters related list, as
shown in the following screenshot:

Probe parameter of sudo configuration

Furthermore, click on the New button, configure the page as shown in
the following screenshot, and click on the Submit button. It is important
to note that you must add the must_sudo parameter to each probe that
requires it:

Sudo command configuration


Data collection by Discovery
In previous sections, we ran the discovery schedule against specific IP
addresses or ranges, which means that when the discovery schedule is
kicked off, a series of probes is launched against the target machines.
Out of the box, ServiceNow provides a dedicated plugin for Puppet, Puppet
Configuration Management, and it supports collecting further data
through Discovery:

Name

Path

Inherits class

Selectable

Default value
ServiceNow Puppet menus and
modules
In the previous sections, we learned about Puppet and its architecture,
along with various components, including discovery; now let's move into
the ServiceNow space to understand how Puppet works with ServiceNow.
Puppet plugin
You might be interested to know that the Puppet Configuration
Management plugin is available as a separate subscription, and it is
worth noting that the Orchestration plugin must be activated as well. You
can contact your ServiceNow account manager to get the Puppet
Configuration Management plugin activated on production and non-
production instances, which may take a few days. If you don't have an
account manager, then you can log in to the ServiceNow customer support
portal at https://hi.service-now.com and navigate to Service Catalog | Activate
Plugin, as shown in the following screenshot, and fill in the requested
details:

Activation of Puppet configuration management by Hi Portal

However, if you are using your personal developer instance, then you can
log in at https://developer.servicenow.com and click on Manage | Instance; on
the page, click on Action and activate the Puppet Configuration
Management Core plugin, as shown in the following screenshot:
Activating the plugin on the personal developer instance
ServiceNow Puppet application
Once the plugin is successfully activated, you will notice the Puppet
application on the left-hand side, as shown in the following screenshot,
where you can view related modules such as Puppet Master, Resources,
Setup, and so on. The modules will be discussed in upcoming sections:

ServiceNow Puppet application

For Puppet masters, ServiceNow behaves as an external node classifier,
or ENC for short; you may be interested to know what an ENC is. In the
Puppet space, an ENC is an executable that Puppet can call. It is worth
noting that it does not have to be written in Ruby:

1. To define the Puppet master in the ServiceNow instance, you will
need the IP address of the Puppet master, which we installed on
the Linux virtual machine. Navigate to the ServiceNow Puppet
application and click on Puppet Master, and you should be
directed to the form shown in the following screenshot, where you
can view many fields; Name and IP Address must be entered to
move further. So, enter the name and IP, as shown in the following
screenshot, and click on the Submit button:

ServiceNow Puppet master

2. After clicking on the Submit button, you should be directed to the
list view of Puppet masters, where you can see the newly created
Puppet master. Now, click on the record and scroll down to where
you can view the related links, as shown in the following
screenshot. As an alternative to a scheduled discovery, click on the
Discover Puppet Details related link to kick off discovery:

Discovering Puppet details

3. Once discovery is completed, the related list records, such as
Modules, Manifests, and Classes, should be auto-populated:
Puppet master related lists

In the Resources module, there are the following sub-modules:

Module [puppet_module]: We have seen modules in previous
sections. We know that modules hold resources such as classes,
files, and templates, but from a ServiceNow point of view, it is
important to note that you must not create modules on the
ServiceNow instance; they must be discovered.

Manifests [puppet_manifest]: A manifest holds classes, variables, and
defined types, and is written in the Puppet language only. From a
ServiceNow point of view, you should know that you must not
create any manifest on the ServiceNow instance; it must be
discovered.

Classes [puppet_class]: Like modules and manifests, classes must be
discovered only and should not be created on the ServiceNow
instance. As we have seen in previous sections, a class is a single
logical unit and is applied to a Puppet agent by a node definition.

Node Definitions [puppet_node_definition]: This is the collection of
resources that are applied to the Puppet agent.

Setup section:

ENC Web Services: We have seen in previous sections that the
ENC is a scripted web service and acts as a moderator between the
Puppet master and the ServiceNow instance.

The ENC Script module holds the ENC script that needs to be
executed on the Puppet master so that ServiceNow can act as an
ENC, as shown in the following screenshot. Here, you can easily
notice that there are two scripts that come out of the box,
snc_enc.py and snc_enc_with_proxy.py. To add additional security, you
can use a proxy, snc_enc_with_proxy.py, to send ServiceNow Puppet
requests:

ENC script

Furthermore, click on snc_enc.py and you should be directed to the page
shown in the following screenshot. Take a deep breath and don't worry:
you are not supposed to make many changes in the script, because here we
only need to configure the username and password. It is advisable to
download a copy of the script in the first place, by clicking on the
download related link:

The snc_enc.py script

Moving on, you need to enter the credentials in the try block, as
shown in the following screenshot:

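Once the credentials are in place, the Puppet master has to be told to
call the script as its external node classifier. A minimal sketch
follows, assuming the script has been saved as /etc/puppet/snc_enc.py
(node_terminus and external_nodes are real puppet.conf settings; the
path is an example):

    # /etc/puppet/puppet.conf on the Puppet master, [master] section
    [master]
        node_terminus = exec
        external_nodes = /etc/puppet/snc_enc.py
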
Furthermore, there are two main roles associated with the ServiceNow
Puppet application. The first role is the Puppet user, puppet_user, and the
second is the Puppet administrator, puppet_admin. Now, let's understand
what these roles do. A puppet_user role holder can assign node definitions
to Puppet nodes, view all Puppet records, and request changes to existing
nodes. A puppet_admin role holder can create and modify node definitions,
modify Puppet properties, and perform all Puppet user actions.
ServiceNow and Puppet
interaction
In the next section, we will view the interaction in more detail; for now,
let's understand the main components driving the entire interaction.
ServiceNow and the Puppet master are integrated by a scripted web
service, which can be viewed by navigating to Puppet | Setup | ENC Web
Service, and which acts as an endpoint. So, how does it work? The process
begins by defining the Puppet master and discovering the various
components, such as modules, classes, and so on; whenever any request
comes to the Puppet master from a Puppet node with a fully qualified
domain name (FQDN), the Puppet master invokes a web service call to
ServiceNow and passes the FQDN. ServiceNow looks in the CMDB for a
matching FQDN and responds to the Puppet master, and the response goes
to the Puppet node, as shown in the following diagram:

Interaction of ServiceNow and Puppet master


Integration with Puppet
Now, you may have understood that Puppet and ServiceNow integration is
not as simple as you may have thought. In the previous section, we learned
about the ENC script that is available as a module in the Puppet
application; you can navigate to it via Puppet | ENC Script. It is
important to note that ServiceNow provides an ENC script so that it can be
run on the Puppet master to use the ServiceNow instance as an external
node classifier. Once the Puppet master designates ServiceNow as an
ENC, you can maintain a single source of record and gain the ability to
perform the following tasks:

Classify Puppet nodes: ServiceNow can be used as an external
node classifier.

Create a Puppet workflow: As we already know, orchestration
helps us to extend workflows, and the same behavior can be
achieved with the Puppet workflow activities.

Control changes: Although this will be discussed in later sections,
you may be interested to know that you can govern Puppet change
control from a ServiceNow instance:
Puppet workflow
A use case for change
requests
In global organizations, change management and configuration
management are much more mature than in smaller organizations. Let's
look at some numbers that may help you understand the scale of an
enterprise. Imagine that there are 20,000 nodes. I assume that by now you
understand that 20,000 nodes means 20,000 machines, regardless of
whether they are virtual or physical, so managing such a large-scale
infrastructure is a challenging task; it is rather problematic, as a huge
effort would be required if you wanted to apply a change to all the nodes
manually. In such a case, we should be thankful to have a product such as
Puppet that makes our lives easier. In previous sections, we learned how
Puppet can be integrated with ServiceNow from the governance point of
view.
Managed nodes
We have already seen managed nodes in previous sections. They hold the
node records, or CIs, virtual or physical, that are being managed by
ServiceNow. Navigate to Configuration Automation | Managed Nodes, as
shown in the following screenshot:

Managed nodes

After clicking on Managed Nodes, you should be able to view all the
records that were discovered by the discovery schedule in the discovery
phase, as shown in the following screenshot. It is important to note that
a Puppet agent is installed on each of the nodes, or machines, and, as
per the pull configuration approach, it checks in with its Puppet master,
which in turn checks with ServiceNow to know what configuration should
be applied to the Puppet agent:

1. On the managed node, you can view the node definitions that are
applied to the nodes. A node definition is a configuration template
that is applied to the nodes. You may be interested to know that
Puppet automatically configures these node definitions, and they
can be viewed by navigating to Puppet | Node Definitions:
Node definition

2. Now, if you change a node definition, then all the related servers
with that node definition will be changed. So, let's take a record
from the node definitions, as shown in the following screenshot,
and click on the Checkout Draft button to make changes to the
related servers:

Node definition

3. If you scroll down, then you can view the classes that were
discovered on the Puppet master by the discovery schedule:

Puppet definition class

4. Just imagine that you want to add Ruby to the related servers.
Making this change manually would be a very challenging task,
but through ServiceNow, you can easily add Ruby to all the related
servers that use the same node definition.
5. After checking it out, you should be able to view the node
definition in the draft mode. To add a Ruby class, click on the New
button in the Class Declarations related list and add the Ruby
class. To apply the changes, you need to publish the node
definition, so click on the Publish button; this time, the addition
will pass through the standard change management process. After
clicking on the Publish button, a pop-up should appear on the
screen, as shown in the following screenshot; to proceed with the
change, click on the OK button:

Change request

6. After clicking on the OK button, you should be directed to the
change request form, as shown in the following screenshot, where
you can see that the description box has been filled in
automatically:
Change request

7. Now, this change request will follow the standard approval
process; after that, you can navigate to the node definition to check
its status. It is important to note that after change approval, you
can view two related links, Proceed with Change and Cancel
Change, and you need to click on the Proceed with Change link
only.
8. As we know, Puppet agents are installed on servers and
periodically check for updates. So, when the Puppet agent checks
for updates, Ruby should be applied to the servers.
Summary
This chapter was divided into two sections. The first half focused on the
Puppet architecture and its components (MoM, MCollective, Facter, the
catalog compiler, file sync, the certificate authority, the Puppet agent,
the console, RBAC, the PE database, and so on), which helped us to
develop a basic understanding of Puppet and how it works. The Puppet
master is installed on a Linux machine, and the Puppet agent can be
installed on Red Hat, Ubuntu, Microsoft Windows, and so on.
Furthermore, we saw that Puppet is a configuration management
application and that two versions are available for installation. As a
ServiceNow professional working on Puppet-ServiceNow automation
projects, you should understand the Puppet architecture and the various
components that work along with Puppet and ServiceNow. We explored
the configuration management approaches, push and pull, and we learned
that Puppet works on the pull configuration. Furthermore, we explored
the installation of two Linux operating systems on Oracle VirtualBox
virtual machines, where one Linux virtual machine acted as the Puppet
master and the second acted as the Puppet agent. After the virtual
machine installation, we saw how to install the Puppet master and Puppet
agent on the Linux machines, along with generating SSL certificates and
signing Puppet node certificate requests from the Puppet master. We
should make sure both machines meet the prerequisites before beginning
the installation process. Furthermore, we explored the ServiceNow Puppet
plugin and discovering Puppet master details via Discovery, along with
the ENC scripts. Finally, we saw that Puppet can leverage the ServiceNow
change management application to deploy changes to managed nodes in
one go.
Automation with Chef
In Chapter 4, Automation with Puppet, we learned about the push and pull
approaches of configuration management software tools. As we already
know, Puppet works on the pull configuration, where Puppet nodes update
themselves from the central server, or Puppet master. Likewise, following
the path of Puppet, Chef follows a pull configuration as well, and it is
used to bring configuration items to the desired state. In this chapter,
the following topics will be covered:

Introduction to Chef

Chef installation and configuration

ServiceNow Chef basic concepts

ServiceNow Chef menus and modules

Use case for change requests


Introduction to Chef
Chef may be a new term for you as a ServiceNow professional, and I
don't expect you to know much about it, so let's understand what Chef is.
Chef is a configuration automation application that is used to bring the
configuration items of infrastructure components, such as Windows or
Linux machines, to the desired state. Furthermore, it provides a way to
turn infrastructure into code, which means managing infrastructure by
writing code so that IT personnel don't need to perform manual
processes. You may be interested to know that Chef uses a pure-Ruby
domain-specific language (DSL) for writing the configuration of systems,
and you can manage infrastructure, application deployment, and
configurations across the network as well. Moving on, let's take a look
at the types of Chef server.

The Chef product comes in an open source edition, known as open
source Chef, and an enterprise edition, known as Chef enterprise;
obviously, the enterprise product comes with official support.
Chef architecture
The Chef architecture mainly revolves around three major components.
Here, I would like to introduce some key terms that will help you to
understand Chef: the Chef server, the workstation, and nodes. Let's
explore these terms and understand what role they play:

Chef server: If you are given the additional responsibility of managing
the Chef server, then, from the Chef admin point of view, you should
understand that the Chef server holds, or rather manages, all the
connected nodes, and it does so from the Chef server only. Chef nodes
pull their configuration from the Chef server. Furthermore, there are
other components, such as the Chef store, cookbooks, knife commands,
APIs, the supermarket, and so on.

Workstation: As we have seen, the Chef server manages all the
connected nodes, and all Chef nodes pull their configuration from it
only. But you may ask where all the Chef development items reside.
The answer is the workstation. Furthermore, a text editor and the Chef
Development Kit (DK) should be installed on your workstation.

Nodes: These are the virtual or physical servers that are maintained by
the Chef server and, as per the pull configuration, Chef nodes pull the
updated configurations from the Chef server periodically:
Chef architecture
ServiceNow and Chef
automation
In previous sections, we saw that the Chef architecture helped us to
understand the basic components of Chef. You will be interested to
know that Chef can be integrated with ServiceNow instances to achieve
a high degree of control over the infrastructure components, and the
process begins when you activate the Chef plugin. To activate the Chef
plugin on the ServiceNow instance, open the customer service
management portal (https://hi.service-now.com/) and click on Service
Catalog, which should direct you to a page where you can view all the
available service catalogs that are part of ServiceNow's offering. You
should select the Activate Plugin service catalog, as shown in the
following screenshot:

Activate plugin service catalog

Soon after, a standard service catalog form will be displayed, and you
need to complete it as shown in the following screenshot. ServiceNow
requires at least two working days to activate the Chef plugin on your
instance:
Chef configuration management plugin activation

With regard to the plugin, you need to know that Chef configuration
management is available as a separate subscription; along with that, you
also need the Chef orchestration activities and the Orchestration plugin,
which again are available as a separate subscription. We have learned
about Chef plugin activation, so, moving on, let's understand how
ServiceNow and Chef communicate with each other to achieve better
control over the infrastructure:

As ServiceNow is only acting as a service manager, the process


begins from Chef only where the administrator or developer
defines their resources such as cookbook and recipes and so on.

From the ServiceNow point of view, the process begins from


defining the Chef server and Chef user in the ServiceNow
environment and setting up a user account on the Chef server as
well. We will cover configuration details in the next sections.

As a ServiceNow administrator, you may be only concerned about


the ServiceNow and Chef management part. By and large, we
should understand that most Chef components such as cookbook,
recipes, and so on are discovered by the Discover Chef workflow.

Once the plugin is activated, out of box ServiceNow provides new


roles and group for the users so let's look on it. With regard to the
user role, two roles are created; the first is Chef user chef_user and
the second is Chef administrator, chef_admin. A Chef user role
holder can assign node definitions to nodes and request changes to
existing node definition assignments. Furthermore, a Chef
administrator role holder can create a modify node definition
record and can perform all Chef user actions as well:

Chef process
Chef installation and
configuration
In previous sections, we saw the architecture of Chef, which helped us
to understand some Chef basics. Now, let's move on to the installation
of Chef. There are three configurations available for installing Chef:

Standalone, where, as the name states, everything is on a single
machine, which comes with the Chef management console, Chef push
jobs, and reporting features. Moreover, if you plan to use more than 25
nodes, then a license will be required. You may be interested to know
that standalone systems are mainly configured for proof of concept
(POC), development, or testing on a virtual machine.

High availability should not be a new term; it is used for the highest
level of operational agreement, or uptime, of the system. In the Chef
space, backend and frontend systems are configured to allow failover on
the backend and load balancing on the frontend. Furthermore, you may
be interested to know that two types of high-availability configuration
are available for Chef: the first is high availability using Amazon Web
Services, and the second is high availability using DRBD. It is
important to note that both configurations have unique requirements and
setups.

Tiered also follows in high availability's footsteps, but there is a single
backend system and multiple frontend machines.
In the following screenshot, you can view the various solutions of Chef as
a product:

Chef automation solution, Source: https://docs.chef.io/chef_overview.html

We have just seen the Chef automation solution, which helped us to
understand what can be achieved through Chef. Moving on, at the
beginning of the chapter, we saw that the Chef solution is a combination
of three entities: the workstation, the Chef node, and the Chef server.
But if we drill down a little bit more, then we find a larger picture of the
solution, as you can see in the following screenshot:
Chef components, Source: https://docs.chef.io/chef_overview.html

So, let's explore some critical components of Chef:

Ohai
Knife
Chef
Chef supermarket
In previous sections, we saw that three machines are required; in our
lab environment, three virtual machines are being used. The process
begins with installing VirtualBox. I have used Oracle VirtualBox, and
you can download it by visiting https://www.virtualbox.org/wiki/Downloads and
choosing the option that matches your operating system:
Download VirtualBox

After the download, install VirtualBox and add three virtual machines
(Chef server, Chef workstation, Chef client) as Oracle virtual machines.
After adding the three virtual machines, you should be able to view
something like the following screenshot, which shows the three virtual
machines:

Virtual machines
Chef server installation
We have just created a Chef server virtual machine record, but we have to
load our Linux image (.iso file) as well to install the Chef server on top
of it. Click on the Settings button, and you should be directed to the
following screen, where you need to give the path of the .iso file, as shown
in the following screenshot, and click on the OK button:

ISO file path

Now, let's begin the Chef server machine installation:

1. Click on the Start button to start the installation. During the
installation, you should be directed to the following screen. Here,
you can define the hostname, such as a Chef server name with a
local domain. For the purposes of this lab, I have used chefserver:

Local host configuration


2. It is important to note that every machine (Chef server,
workstation, and Chef node) should be able to communicate with
the others. For example, the Chef workstation should be able to
communicate with the Chef server and, likewise, the Chef server
should be able to communicate with the node and the Chef
workstation. That's why the network configuration is critical to avoid
any glitches. On the same screen, if you scroll down, you should be
able to view the Configure Network option, as shown in the
following screenshot; click on it:

Network configuration

3. After having clicked on the Configure Network button, a window
should pop up, as shown in the following screenshot; select
System eth0 and then select the Edit... button:

Network connection
4. After having clicked on the Edit... button, you should be able to view
the following screen; here you only need to checkmark the
Connect automatically field and click on the Apply button:

Enable connect automatically

5. Finally, click on the Next button to begin the installation. Once the
installation is done, you should be able to view the Reboot option;
click on it. Finally, our Chef server machine is ready for the
Chef server to be installed on it:

Chef server login

6. Well done! We are done with the basic configuration. Now it's
time to install the Chef server. So, open any standard web browser,
open https://www.chef.io/, and click on the Get started button.
Then, you should be directed to the options page, where you
should select the appropriate version as per your machine, as
shown in the following screenshot. Click on the option and you
will be asked to submit some basic details, after which the Chef
server package should be downloaded to your local machine:

Chef server installation

7. Before moving on with the installation, let's understand some
basics. If you are not using DNS, you can edit the hosts file as
well. For example, 192.163.1.100 may be your Chef server (chefserver
.servicenow.in) IP address; make sure the Chef server is able to
communicate with the Chef workstation and the Chef node. Flush
the IP tables to avoid any communication-related issues. To do this,
type the command as shown in the following screenshot:

Flush IP tables
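For reference, the entries and the flush command look something like the following sketch; the IP addresses and the workstation/node hostnames here are lab-specific assumptions, so substitute your own:

    # /etc/hosts entries (example lab addresses; substitute your own)
    192.163.1.100   chefserver.servicenow.in        chefserver
    192.163.1.101   chefworkstation.servicenow.in   chefworkstation
    192.163.1.102   chefnode.servicenow.in          chefnode

    # flush the IP tables to rule out firewall-related issues
    iptables -F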

8. Furthermore, the Chef server file was downloaded to the local
machine (desktop/laptop), while the Chef server runs on the virtual
machine, so you must make a folder and share it with adequate
permissions. Similarly, you need a guest account on the
virtual machine as well, so that the locally downloaded Chef server
file can be shared with the virtual machine. It is important to note
that you must have access to the shared drive; otherwise, the Chef
server file will not be visible to the Chef server virtual machine. In
the lab environment, I have the Chef package already, but if you
are not sure how to check this, then type the command shown in the
following screenshot:

Check status
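On an RPM-based distribution, such as the machines used in this lab, a typical check looks like this:

    # list any installed packages whose name contains "chef"
    rpm -qa | grep chef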

9. Furthermore, the Chef server installation file has been downloaded to
the local machine and the Chef server setup file has been shared
with the Chef server virtual machine, and it can be installed on the
Chef server virtual machine by simply typing the following
command:

Install Chef server
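Assuming the package was downloaded as an RPM, the installation command is along these lines (the exact filename depends on the version you downloaded):

    # install the Chef server package from the shared folder
    rpm -ivh chef-server-*.rpm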

10. After having entered the command, a warning message may be
displayed on the screen, but take a deep breath, because nothing is
wrong with the Chef server installation. It is important to note
that, once the package is installed, it gives the message Thank you for
installing Chef Server!, and gives the next instruction as well: sudo
chef-server-ctl reconfigure; however, as I am logged in as root, sudo
is not required.

11. Once the package installation is completed, type the following
command in the Terminal and press Enter. This usually takes a few
minutes to complete. This command is used to configure and start
the services, and after successful completion, you should see the
message Chef Server Reconfigured! in the Terminal:

Start service
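The command in question is the standard reconfigure command:

    # generate the Chef server configuration and start all of its services
    chef-server-ctl reconfigure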

12. Finally, if you want to check the status of the Chef server, then it
can be done by typing the following command. The status displays
the PIDs of the processes that are running, such as chef-server-webui,
chef-solr, and so on:

Chef status
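The status command is as follows:

    # show the state (and PIDs) of every Chef server service
    chef-server-ctl status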

13. Well done! Your Chef server has been configured successfully.
Now, the next move is to install the workstation software on the Chef
workstation virtual machine.
Chef workstation installation
In previous sections, we configured the Chef server machine, on which the
Chef server was installed; likewise, we must make the Chef workstation
machine ready as well. The process is similar, but it is important to
note that there should be a different hostname. For example, I have given
the Chef workstation the name shown in the following screenshot, which
will help me to differentiate between the machines:

Chef workstation

Now, let's move on to the installation part of the Chef workstation:

1. As we know, the Chef workstation must be able to communicate with
the Chef node and the Chef server, and that's why the network
configuration is quite critical. Again, you need to scroll down
and click on the Configure Network button, select System eth0, and
then click on the Edit button. Finally, check the Connect
automatically option, as shown in the following screenshot. Click
on the Apply button, then click on the Next button:
Network configuration

2. Finally, give the Root Password, which acts as the administrator
account password, as shown in the following screenshot. It is
important to note that this password will be used to access the
workstation machine:

Root password

3. Soon after, the installation will be completed and you will be asked to
reboot the machine. So, click on the Reboot button; after
restarting the machine, you should be able to view the following
screen, where you need to enter the login as root (the administrator
account) and the root password:

Chef workstation login


4. Well done! The Chef workstation machine is ready for the Chef
workstation software to be installed. But, before doing so, check
whether the machine can communicate with the server; this can
be validated with the command shown in the following
screenshot. If it is able to communicate, then the ping will succeed;
otherwise, it will throw an error:

Ping Chef server
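A simple ping against the Chef server hostname (ours is a lab-specific name) is enough:

    # verify name resolution and connectivity to the Chef server
    ping chefserver.servicenow.in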

5. Before moving on, to avoid any issues during the installation, let's
flush the IP tables first; this can be done with the command
shown in the following screenshot (the same iptables flush we used
on the Chef server):

Flush IP tables

6. I already have a copy of the Chef package on the workstation
machine, so we can simply move on with the installation of the
Chef package; but if you are not sure whether the package has been
installed or not, then you can simply check by typing the following
command in the Chef workstation terminal as well:

Check installation status


7. Now we are ready to install the Chef package on the Chef
workstation machine, so type the command as given in the
following screenshot. Once Chef is installed, it gives the message
Thank you for installing Chef:

Install chef package
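As with the server, this is a standard RPM installation; the filename depends on the version you downloaded:

    # install the Chef package on the workstation
    rpm -ivh chef-*.rpm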

8. You may be interested to know that communication between the
Chef server and the workstation is done via certificates, and for that
you need to copy the certificates from the Chef server to the Chef
workstation. Now, let's move to the Chef server; if you go into the
Chef server certificate directory using the command shown in the
following screenshot, then you should be able to view certificates
such as chef-validator.pem, chef-webui.pem, and admin.pem. These files
are important for the workstation machine and need to be copied:

Go into the certificate directory
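On the Chef server, the certificates live in the server's configuration directory:

    # on the Chef server: list the certificates to be copied
    cd /etc/chef-server
    ls -l    # admin.pem, chef-validator.pem, and chef-webui.pem should be listed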

9. Let's start with admin.pem; type the command to copy the
admin.pem certificate, as given in the following screenshot. You will
be asked for the password of the Chef workstation, so type it and
wait for a while:
Copy admin.pem certificate
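The copy is done over SSH with scp; the workstation hostname here is a lab-specific assumption. The next two steps repeat the same command for chef-validator.pem and chef-webui.pem:

    # on the Chef server: copy admin.pem to the workstation's root home
    scp /etc/chef-server/admin.pem root@chefworkstation.servicenow.in:/root/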

10. Now move on to chef-validator.pem and type the command as given in
the following screenshot to copy the chef-validator.pem certificate;
again, you will be asked for the password of the Chef workstation,
so type it and wait for a while:

Copy chef-validator.pem certificate

11. Finally, repeat this for the chef-webui.pem certificate. Type the command
as given in the following screenshot to copy it; again
you will be asked for the password of the Chef
workstation, so type it and wait for a while:

Copy chef-webui.pem certificate

12. We have just copied three files from the Chef server to the Chef
workstation, and you can view them on the workstation by simply
entering the ll command.
13. Time to switch machines again; type the command as given in
the following screenshot to create a new directory on the Chef
workstation, into which we will copy all the certificates:
Make directory
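The directory in question is the hidden .chef folder in root's home directory, which knife reads by default:

    # on the workstation: create the .chef directory for knife
    mkdir ~/.chef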

14. To copy the certificates into the .chef directory, type the command as
given in the following screenshot; likewise, you can also copy the
other two certificates into it:

Copying certificates into .chef
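Assuming the three certificates were copied into /root in the earlier steps, the copy looks like this:

    # move the copied certificates into ~/.chef
    cp /root/admin.pem /root/chef-validator.pem /root/chef-webui.pem ~/.chef/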

15. Now, if you move inside .chef by giving the following command,
you should be able to view all three certificates:

Move in .chef directory

16. To establish communication between the Chef server and the Chef
workstation, we are going to use the command as given in the
following screenshot:

Connect Chef server and workstation
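The command is knife's interactive initial configuration, which drives all the prompts described next:

    # interactively create /root/.chef/knife.rb
    knife configure -i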

Soon after, you will be asked to enter the location of the server;
you can give https://chefserver.servicenow.in:443 (defined in previous
sections). It is important to note that here you will be asked for a
name for the new user; if you already have a user (I have my
account, ashish, on the Chef server), then enter that and move on.
You will be asked for an admin account as well, which is obviously
admin. As a response to Please enter the existing admin name:, type
admin, and after that you should respond to Please enter the location
of the existing admin's private key:. In a previous section, we copied
the certificates into the .chef folder, so we need to give that path;
the prompt shows the default [/etc/chef-server/admin.pem], and you
need to enter /root/.chef/admin.pem. After that, you will be asked to
enter a validation client name, which can only be the default, chef-
validator. After the validation client name, you will be asked for its
location; the prompt shows the default [/etc/chef-server/chef-
validator.pem], and you can give the path /root/.chef/chef-validator.pem:

17. You need to fetch the certificate as well; type the command as
given in the following screenshot and wait for a while:

Fetch certificate
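Fetching is done with the knife ssl subcommand:

    # download the Chef server's SSL certificate into ~/.chef/trusted_certs
    knife ssl fetch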

18. To check the status, you need to type the command as given in the
following screenshot, which should return the message Successfully
verified certificates from chefserver.servicenow.in:

Check status
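The check command is:

    # verify the SSL setup against the Chef server
    knife ssl check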

19. As the next step, type the following command again and enter Y as a
response to overwrite /root/.chef/knife.rb. The Chef server
should be https://chefserver.servicenow.in:443; the name of the new
user should be ashish; the existing admin name should be admin; and
the existing admin's private key should be /root/.chef/admin.pem (the
prompt default is [/etc/chef-server/admin.pem]). In essence, we are just
repeating our previous answers here:

Knife configure

20. Well done! Your Chef workstation has been configured
successfully, and to validate it, you can simply type the command as
given in the following screenshot; it should return admin and
ashish:

Check user list
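The validation command simply lists the users known to the Chef server:

    # list users on the Chef server; this should return admin and ashish
    knife user list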

21. Moving on, let's install the required package; type the following
command in the terminal:

Required package installation

22. Soon after, you will see the following screen, which indicates the
status of the required package installation:
Download required in process

23. Now it's time to download the Chef development kit package onto the
Chef workstation machine, and that can be done by using the wget
command, as shown in the following screenshot. You can get the
URL from the Chef site:

Download Chef DK
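The wget command follows the pattern shown here; the URL is an assumption based on the package name used in this lab, so copy the current link from the Chef downloads page:

    # download the ChefDK package for RHEL/CentOS 6
    wget https://packages.chef.io/files/stable/chefdk/2.3.4/el/6/chefdk-2.3.4-1.el6.x86_64.rpm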

24. After entering the command, you should be able to view the
download progress, as given in the following screenshot:

Chef development kit installation

25. Once the Chef development kit package is downloaded, you will see
the package, as given in the following screenshot, which will be used to
install the Chef development kit on the machine:

Chef development kit downloaded

26. Finally, it's time to install the Chef development kit on the Chef
workstation machine. You need to use the command as given in
the following screenshot. It is important to note that we are using
the same package name, chefdk-2.3.4-1.el6.x86_64.rpm:

Install development kit
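The installation is again a plain RPM install:

    # install the Chef development kit
    rpm -ivh chefdk-2.3.4-1.el6.x86_64.rpm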

27. Well done! The Chef development kit has been installed on the
workstation machine successfully, but we must verify all the
components of the Chef development kit, and this can be done by
entering the command as given in the following screenshot. You
may be interested to know that all the components must be in a
successful state:

Chef verify
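The verification command ships with ChefDK itself:

    # verify that every ChefDK component is in a successful state
    chef verify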
Chef client installation
The Chef client, or rather the Chef node, is the last machine that we are
going to configure. As per the pull configuration, it's the Chef node's
responsibility to update itself from the Chef server, but, as we know, we
need a machine for it, and that will be called the Chef client for the purposes
of this lab. In previous sections, we have seen the installation of the Chef
server and Chef workstation, so let's install the Chef node now:

1. During the virtual machine setup, you should be directed to the
following screen, where you need to enter the hostname of the
machine; for the purposes of this lab, I have given it the name
shown in the following screenshot:

Chef node machine

2. On the same page, you should be able to see the Configure Network
button as well, as we saw in the previous sections. Here
again, we are making changes to the network configuration to
avoid any connection glitches. To make the changes, click on the
Configure Network button, as given in the following screenshot:
Configuring the Chef node network

3. After having clicked on the Configure Network button, you should
be able to see the following popup. Here, you need to enable the
Connect automatically checkbox, then click on the Apply
button, and finally click on the Next button to move on with the
installation process:

Connect automatically

4. Moving on, you will now be asked to enter the Root Password, as
given in the following screenshot, which will be used to log in to
the Chef node machine:

Chef node account

5. After the installation of the Chef node virtual machine, you will be
asked to reboot the machine, after which you will be asked to
enter the username and the password, as shown in the following
screenshot; root will be the login name, and you need to enter the
same password that was entered during the installation:

Chef node login

6. Check the IP tables and, if required, flush them by simply
typing the command in the terminal, as given in the following
screenshot:

Flush IP tables

7. Now check the hosts file by typing the command as given in the
following screenshot, and validate whether you are receiving a
ping or not from the Chef workstation and the Chef server as well:

Host file configuration

8. The Chef package is available on my Chef node machine, but it is
not installed, so we must install the Chef package. Before
installing it, you should check whether the package
has been installed or not on the machine by typing the following
command:

Check package installation status

9. Now it's time to install the Chef package on the Chef node
machine; the package can be installed on the machine by typing
the command as given in the following screenshot. Soon after
hitting the Enter button, the Chef package should be installed:

Installing the Chef package

10. Furthermore, you may be interested to know that communication
with the Chef server or the Chef workstation is based on SSL
certificates. For this, you should create a directory using the
command as given in the following screenshot, and you need to
copy the chef-validator.pem file from the Chef server:

Make directory
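Assuming the conventional Chef client location (adjust the path if you chose a different directory), the command is:

    # on the node: create the Chef client's configuration directory
    mkdir /etc/chef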

11. Time to switch machines. Now go to the Chef server machine.
On the Chef server terminal, type the command as given in the
following screenshot to copy the file:

Copy Chef validator
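As before, the copy is done with scp; the node hostname is a lab-specific assumption:

    # on the Chef server: copy the validator key to the Chef node
    scp /etc/chef-server/chef-validator.pem root@chefnode.servicenow.in:/root/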

12. If you use the ll command, then you should be able to view the
chef-validator file in the Chef node machine's home directory. Now
it's time to change machines again, so go to the Chef node machine
and type the command as given in the following screenshot:

Copy into the chef folder
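Assuming the key landed in /root, the command is simply:

    # on the node: move the validator key into the chef directory
    cp /root/chef-validator.pem /etc/chef/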

13. Now, if you go to the chef directory of the Chef node machine, you
can view the chef-validator file, which carries the Chef server's
signature. Moving on, you now need to fetch the certificate from
the Chef server, so use the knife command as given in the
following screenshot:

Fetch certificate

14. Now, if you go to your home directory and then the .chef folder, you
should be able to see trusted_certs, which holds the certificate copied
from the Chef server machine.
15. Furthermore, you can validate it by typing the command as given
in the following screenshot. If you receive a message saying that
the certificates were successfully verified from
chefserver.servicenow.in, then everything is correct:

Validation

16. Now, to establish the communication between the Chef server and
the Chef node, you need to create a file, so go back into the
chef directory by typing the command shown here:

Switch folder

17. Once you are in the chef directory, type the command as given in
the following screenshot to create a new file:

Creating a client file

18. Once you are in the file, add the attributes shown in the
following screenshot. Here, it is important to note that we have
mainly given the Chef server URL and the certificate definition
so that the node can communicate with the server; to save,
type :wq, as we have seen in previous sections as well:
Entries in the client file
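A minimal client file along these lines (it is Ruby syntax; the server URL and file paths reflect our lab assumptions) looks as follows:

    # client.rb -- minimal Chef client configuration
    log_level              :info
    log_location           STDOUT
    chef_server_url        "https://chefserver.servicenow.in:443"
    validation_key         "/etc/chef/chef-validator.pem"
    validation_client_name "chef-validator"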

19. Now, we need to join the Chef node with the Chef server, which
means utilizing the certificate. To join it, type the command in the
terminal as given in the following screenshot; after successful
completion, you should see the Chef client finished message:

Joining Chef Node with Chef server
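The join is performed by running the Chef client itself:

    # register the node with the Chef server and run the client
    chef-client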

20. But how can you verify it? To verify it, you need to utilize the
knife command again, as given in the following screenshot, but not
from the Chef node machine; it can be validated from the
workstation machine only. So, it is time to switch to the
workstation machine and type the command as given in the
following screenshot. This command should return the Chef node
name along with the service clients' names:

Validation
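From the workstation, knife can list everything registered with the server; for example:

    # on the workstation: list the clients registered with the Chef server;
    # the new node should appear alongside the service clients
    knife client list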

21. If you are able to view the Chef node along with the service clients,
this means that the Chef node has joined the Chef server successfully.
ServiceNow Chef basic
concepts
In previous sections, we have seen how the Chef plugin is activated;
moving on from that, we should understand some basics from the
ServiceNow point of view as well. Let's understand some important terms
here, such as Chef server, cookbook, recipe, and attributes, to help you
understand the Chef plugin of ServiceNow:

Chef server: This is the server that maintains all the connected
Chef nodes that are available in your infrastructure.

Node definition: This holds a copy of the configuration items that
are applied to the Chef nodes as per the pull configuration.

Recipe: A fundamental configuration item that is a collection of
resources.

Cookbook: A fundamental configuration item; it is populated
through discovery.

Attributes: Used by the Chef client to understand the status of the
Chef nodes.

Chef key store: To establish the connection with the Chef
resources, the ServiceNow instance requires an X.509 certificate.
In the following sections, we will view its configuration.

Furthermore, just imagine there are thousands of servers in your
organization; to manage the Chef nodes, or rather the Chef clients, from
the ServiceNow environment, you need a list of the nodes that are being
managed by the Chef server. The answer to this is the Managed Nodes
module, which is available under the Configuration Automation
application. This module acts as a single repository, and you may be
interested to know that a node may be a virtual machine or an actual
machine as well. Navigate to Configuration Automation | Managed Nodes,
as given in the following screenshot, to view all the servers that are being
managed by Chef:

Managed node
ServiceNow Chef Discovery
In the previous chapters, we read about Discovery and its distinct
phases. To populate the ServiceNow CMDB, you normally need the
ServiceNow Discovery product license, but that is not the case with Chef.
You might be interested to know that the discovery of Chef components is
done by the Discover Chef workflow, which can be executed after defining
the Chef server within the ServiceNow instance. Furthermore, more than
one Chef server can be defined, and ideally, each discovery should be done
separately.
ServiceNow Chef user setup
In previous sections, we read about the Chef user account
(in ServiceNow) that is used to interact with the Chef server. To create a user,
navigate to Chef | Setup | User Account | Users | Create New. You
should be directed to an empty user form; enter the desired name (you
can even give a name related to Chef), as given in the following
screenshot, and click on the Submit button. It is important to note that you
must define and validate the user by clicking on the Validate key store
alias link, which is available under Related Links. You will be interested to
know that Chef change management is controlled by two properties (in the
sys_properties table) that are false by default. Let's look at these properties:

glide.config_auto.chef.node_definition.use_change: By default, the value
of this property is false; it controls change management on node
definition updates.

glide.config_auto.ci_assignment_use_change: By default, the value of
this property is false; it controls change management over Chef node
definition to CI assignments.

Furthermore, to control changes on the Chef server via ServiceNow, you
can change the values of both properties by simply changing false to true,
which is standard practice for ServiceNow system properties:
Chef user account
ServiceNow Chef menus and
modules
In previous sections, we have already explored Chef plugin activation in
ServiceNow. Moving on, once the plugin is activated, as usual, the Chef
application is available in the application navigation bar, where you can
view the various modules of the Chef application, as given in the following
screenshot. Within the ServiceNow space, the Chef application is divided
into three sections: Chef Servers, Resources, and, finally, Setup:

ServiceNow Chef application

The Chef server table, cmdb_ci_chef_server, holds all the Chef server records
within the ServiceNow space. To create the Chef server in ServiceNow,
navigate to Chef | Chef Servers; after having clicked on it, you should be
directed to the blank Chef configuration form, which can be configured as
given in the following screenshot:
Chef server

Here, you will need assistance from your Chef counterpart or, if you have
the additional responsibility of the Chef server, then you can refer to the
Chef server configuration on the virtual machine or actual machine. After
filling in the information, simply click on the Submit button and you should
be directed to the list view of the Chef servers. Click on the newly created
Chef server and scroll down; you should be able to see the related links, as
given in the following screenshot. To discover the Chef server details,
click on the Discover Chef Details UI action, as given in the following
screenshot:

Related links

You may be interested to know that, on the Chef form, you can switch
from the form view to the dashboard. On the form, simply click on the
Dashboard UI action, which should direct you to the dashboard as shown
in the following screenshot:
Chef dashboard

After kicking off Chef discovery from ServiceNow, you should wait for
some time. The related lists for the Chef server (Chef Environments, Chef
Cookbooks, Chef Recipes, Chef Roles, and Chef Attributes) should be
auto-populated from the Chef server, as given in the following screenshot:

Chef related list

Although key validation will be covered in the next section, it is important
to note that, without key validation, you will not be able to move on,
which means the following error will be thrown by ServiceNow just after
clicking on Discover Chef Details:

Key validation
Setting up key stores
In previous sections, we have seen the significance of key stores for the
Chef application. To create key stores for Chef, navigate to Chef | Setup |
Key Stores as given in the following screenshot:

The Chef Key Stores option

After having clicked on Key Stores, you should be directed to the
X.509 Certificates list view, where you can view the various key stores, as
given in the following screenshot:

X.509 certificates

To upload a new certificate, click on the New button, which should direct
you to the following form, where you can enter the desired Name and
Short description. It is important to note that two formats are
available for key stores: DER and PEM. Particularly for Chef, the Format
should be PEM and the Type should be Key Store:

New X.509 Certificate

Finally, upload the chef_user.pkcs12 certificate by clicking on the
Attachment button, and click on Validate Stores/Certificates under Related
Links.

The user account table, chef_user_account, holds the information about the
user account that is used by the Chef resources for interacting with the Chef
server.
Use case for change requests
The node definition holds all the configurations that are applied to the
Chef nodes, or rather the Chef clients, in the ServiceNow space. If you
click on Node Definitions, then you should be directed to the list view of
the node definitions of Chef, which should all be in a published state. In
the previous section, we saw that, for the lab environment setup,
three virtual machines were used, where one is acting as the Chef
server, the second is acting as the workstation, and the third and last is acting
as the Chef node. Now, I want to apply a change to the Chef node that is
being managed by a node definition, as given in the following screenshot:

Node definition

Now, if you click on the node definition, then you should be able to view
all the related lists, such as Chef Node Components, Chef Node Attributes,
and, finally, Managed Node. Just imagine that you want to apply a change
on the Chef nodes; you can simply check out the draft. Just after clicking
on it, a popup will be displayed on the screen, as given in the following
screenshot:

Change request popup

As per the system properties of Chef, a control process, or rather change
management, is being applied here. Now, simply click on the OK
button and you should be directed to the Change Request form, as given in
the following screenshot. It is important to note that ServiceNow
automatically fills in most of the fields:

Change request

Once the change request is submitted, the approval request will be sent to
all the stakeholders of the change. After the approval, navigate to Chef
| Node Definitions to view its status. After the approval, you should be able
to carry out one more UI action, Proceed with Change, along with the old
one (Cancel Change). It's time to apply the changes on the Chef nodes,
and that can be done by simply clicking on the Proceed with Change
button. As per the pull configuration, we know that Chef nodes update
themselves periodically, so the changes will be applied in the next cycle,
when the nodes sync with the server.
Summary
In this chapter, we have learned that Chef is a configuration management
tool and follows the pull configuration. In the pull configuration, it is the
node's responsibility to update itself from the server. This chapter was
divided into two sections: Chef and ServiceNow. Firstly, in the Chef
section, we learned that three major components (the Chef server,
workstation, and Chef node), or rather machines, are required to set up a
lab environment. In this book, we have used Oracle VirtualBox to install
all three machines (server, workstation, and node), and the Chef components
have been installed on top of the Linux machines. Certificates play an
important role in authentication and in joining the workstation and the Chef
node with the Chef server. Furthermore, in the ServiceNow section, we
saw the different modules of the Chef application and how the Chef
components can be discovered and stored in the ServiceNow CMDB, along
with plugin activation on the ServiceNow customer management portal.
We have also learned about the Chef user roles that must be used to
interact with the Chef server, along with the importance of the X.509
certificate. In the Chef environment, better control can be achieved via the
change management application but, in the Chef space, change
management is controlled by two properties; to enable change
management, these properties must be set to true.
Other Books You May Enjoy
If you enjoyed this book, you may be interested in these other books by
Packt:

ServiceNow Application Development
Sagar Gupta
ISBN: 978-1-78712-871-2

Customize the ServiceNow dashboard to meet your business requirements
Use Administration and Security Controls to add roles and ensure proper access
Manage tables and columns using data dictionaries
Learn how application scopes are defined within ServiceNow
Configure different types of table to design your application
Start using the different types of scripting options available in ServiceNow
Design and create workflows for task tables
Use debugging techniques available in ServiceNow to easily resolve script-related issues
Run scripts at regular time intervals using the Scheduled Script Execution module

ServiceNow IT Operations Management
Ajaykumar Guggilla
ISBN: 978-1-78588-908-0

Step-by-step guide to setting up each feature within ServiceNow ITOM
Install and configure the required applications or plugins
Integrate with other provider services as deemed appropriate
Explore Orchestration capabilities and how to analyze the data
Learn about the ServiceNow graphical interface
Integrate with other applications within ServiceNow
Covers the fundamental concepts through to advanced concepts
Best practices and advanced features


Leave a review - let other
readers know what you think
Please share your thoughts on this book with others by leaving a review on
the site that you bought it from. If you purchased the book from Amazon,
please leave us an honest review on this book's Amazon page. This is vital
so that other potential readers can see and use your unbiased opinion to
make purchasing decisions, we can understand what our customers think
about our products, and our authors can see your feedback on the title that
they have worked with Packt to create. It will only take a few minutes of
your time, but is valuable to other potential customers, our authors, and
Packt. Thank you!
