CodeInText indicates code words in text, database table names, folder names,
filenames, file extensions, pathnames, dummy URLs, user input, and
Twitter handles. Here is an example: "status displays all of the processes
that are running, such as chef-server-ui and chef-solr."
Bold indicates a new term, an important word, or words that you see
onscreen. For example, words in menus or dialog boxes appear in the text
like this. Here is an example: "On the same page, you should be able to
see the Configure Network button as well, as we saw in the previous
sections."
Errata: Although we have taken every care to ensure the accuracy of our
content, mistakes do happen. If you have found a mistake in this book, we
would be grateful if you would report it to us. Please visit
www.packtpub.com/submit-errata, select your book, click on the Errata
Submission Form link, and enter the details.
Piracy: If you come across any illegal copies of our works in any form on
the Internet, we would be grateful if you would provide us with the
location address or website name. Please contact us at copyright@packtpub.com
with a link to the material.
Understanding IT challenges
Exchange
F5 network management
Infoblox DDI
Workday
There are other components that come under the orchestration umbrella,
such as password reset, client software distribution, and so on, but they are
beyond the scope of this book. If one of the preceding technologies is being
used in your organization, then you may recommend automation through
the ServiceNow application. In terms of costing and licensing,
orchestration is part of the IT Operations Management (ITOM) suite, and
its cost is based on the number of nodes in a customer's environment. It's
important to remember that a node can be a virtual or physical server, so
any node orchestration that is done directly by ServiceNow, or by a third
party, requires an orchestration license. You will also have to create an
ROI matrix to determine the benefits of automation.
Response and delivery time
Many non-IT organizations use a rate card system to determine the
performance of IT teams or services. I personally like that kind of
arrangement as, all in all, a happy customer is the key to any business.
Let's take an example of a traditional IT process: if John Snow wants to
access a shared drive to upload a file, he will probably have to follow
these steps:
Once the form is submitted, a REQ number, a RITM number, and finally a
Task number will be generated and assigned to a support team, where a
support group member will take the task and work on it. If automation is
implemented, then the new process will skip the manual activities and the
workflow will be extended to an external system. This will enhance
response and delivery time as well. If we consider the service level
agreements (SLAs) with the business units, then by implementing
automation, SLA time with the business units can be reduced to a great
extent, which will enhance the user experience:
Request, request item, task generation process
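The record chain shown above can be sketched as plain JavaScript objects. The REQ, RITM, and TASK prefixes match ServiceNow's defaults; the counter logic here is only illustrative, not the platform's actual numbering engine:

```javascript
// Illustrative model of the REQ -> RITM -> Task chain that ServiceNow
// generates when a catalog request is submitted. The prefixes match the
// platform defaults; the numbering scheme itself is simplified.
let counter = 10001;

function createRecord(prefix, parent) {
  return {
    number: prefix + String(counter++).padStart(7, "0"),
    parent: parent ? parent.number : null
  };
}

// One submission produces one request, one request item, and one task.
const req  = createRecord("REQ");        // sc_request
const ritm = createRecord("RITM", req);  // sc_req_item
const task = createRecord("TASK", ritm); // sc_task, assigned to a support group

console.log(req.number, ritm.number, task.number);
```

Each record points back to its parent, which is why closing a task can roll its state up to the RITM and, in turn, to the REQ.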
Consistency and best practice
If a task is assigned to you that requires manual effort, as an individual
you may have your own choice of execution order. Each technician has
their own way of performing the task at hand, and in the absence of
consistency the desired output may vary, which can lead to critical issues
in the production environment. I have often observed that most IT teams
define their own process guide, which is a good practice for maintaining
the quality of the system. In the automation world, you can define the
process flow considering best practices, and that process will be
executed every time, without manual intervention.
Process audit
Global organizations are very particular about their processes. Many
organizations follow a standard professional body, against which half-yearly
or yearly audits are performed. That's why many stick to their process and
continue to implement it. But what about the governance of the process?
For that, organizations have a process owner. With automation, however,
you can achieve better control over process governance.
Skills required for automation
ServiceNow administration experience is required before moving into
automation. If you want to learn about ServiceNow administration and
development then you can go through my book ServiceNow Cookbook,
which will guide you through the ServiceNow platform, as well as its
administration and development. It is important to mention that if you are
experienced in ServiceNow, automation may still be difficult to grasp at
first. So, let's unpack some of the technologies which will help you during
common automation activities:
If you want to call it a generic workflow then yes, you can. The
ServiceNow workflow is very powerful and capable of interacting
with external systems. Once the orchestration plugin is activated,
you should see additional tabs such as Packs, Custom, and Data in
the workflow, which are orchestration-activated tabs. You can
simply drag and drop these on your existing workflow canvas to
extend it to a desired supported system:
Orchestration workflow
SCCM stores all CIs in SQL Server, which can be integrated with
ServiceNow to import data through scheduled jobs. Out of the box,
ServiceNow provides two SCCM plugins: Microsoft SCCM 2007 and
Microsoft SCCM 2012 v2:
4. Now, it's time to save our precious CIs to the database, which will
play a crucial role in automation. In order to do that, click on the
Test data source connections button, which directs you to a UI page.
In the background, ServiceNow opens a JDBC connection via a
MID Server and runs a series of SQL queries or commands
defined in the data source. If the test is successful, then
ServiceNow is connected to the SCCM SQL database; otherwise,
you will see an error, as follows:
6. If you open a Data Source, then you will notice an SQL statement
that allows SQL queries to run on a target system. It is important
to note that you have the privilege to modify this query out of the
box:
Data source SQL queries
v_GS_COMPUTER_SYSTEM
v_GS_SYSTEM
v_GS_OPERATING_SYSTEM
v_GS_SYSTEM_ENCLOSURE
v_GS_WORKSTATION_STATUS
v_GS_PC_BIOS
v_GS_COMPUTER_SYSTEM_PRODUCT
v_GS_BASEBOARD
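To give a feel for what a data source query against these views might look like, here is a hypothetical example joining two of them. The column names (Name0, Manufacturer0, Caption0, ResourceID) follow SCCM's view-naming convention, but verify them against your own SCCM database before use:

```sql
-- Hypothetical data source query joining two of the views listed above.
-- SCCM suffixes inventory columns with 0 (Name0, Caption0, and so on).
SELECT cs.Name0         AS computer_name,
       cs.Manufacturer0 AS manufacturer,
       os.Caption0      AS operating_system
FROM   v_GS_COMPUTER_SYSTEM cs
JOIN   v_GS_OPERATING_SYSTEM os
       ON os.ResourceID = cs.ResourceID;
```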
Here, I would like to introduce two major pillars of Discovery: the first
is the probe and the second is the sensor. A probe is used to collect data
from network devices, and a sensor is used to process the collected data.
Out of the box, ServiceNow provides different types of probes, such as the
CIM Probe, Multiprobe, and SNMP Probe, and similarly sensors, such as
classifier, Java, JavaScript, and XML sensors.
It is time to introduce another term: the External Communication Channel
(ECC) queue, stored in the ecc_queue table. Discovery's probes and sensors
get their instructions from the ECC queue only. It is important to note that
Discovery's probes and sensors work in synergy: for each and every probe,
there must be a sensor to process the collected data. Furthermore, being an
agentless system, whenever Discovery finds a device, it explores its
configuration, which is then stored in the ServiceNow CMDB.
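The probe/sensor pairing can be sketched as a simple lookup. The probe and sensor names below are illustrative examples, not an exhaustive or authoritative list:

```javascript
// Every probe must have a matching sensor to process the data it
// collects. This sketch pairs illustrative probe names with sensors and
// flags any probe whose results would go unprocessed.
const sensors = {
  "SNMP - Classify": "SNMP Classifier sensor",
  "CIM Probe":       "CIM sensor"
};

function processProbeResult(probeName, payload) {
  const sensor = sensors[probeName];
  if (!sensor) {
    throw new Error("No sensor registered for probe: " + probeName);
  }
  // A real sensor parses the payload and updates the CMDB; here we
  // just report which sensor would handle it.
  return sensor + " processed " + payload.length + " bytes";
}

console.log(processProbeResult("CIM Probe", "<xml>...</xml>"));
```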
Manually-initiated Discovery
Windows workstation
Windows server
Now, navigate to the System Definition application and click on Help the
Help Desk. Configure the form shown as follows and click on the Save
button:
Refer to the Help the Help Desk screen, which now clearly states all of the
machine's information. This information is passed to the ServiceNow
instance through the MID server. Your account (soap.guest) saves the
record in the ECC queue ecc_queue, which you can view by navigating to
ECC | Queue. Filter it by Created by = soap.guest, as shown in the
following screenshot:
ECC queue
Once records are saved in the ECC queue, ServiceNow processes them
and updates the CMDB with the asset number. Moreover, to view Help the
Help Desk's status, navigate to System Definition | Help the Help Desk
Status and click on it; you should be directed to the page shown in the
following screenshot:
Manually-initiated Discovery
Windows workstations
Linux systems
Printers
You will have seen monitoring centers that monitor all of the services
at the organization level. Here, I would like to introduce a term:
vertical Discovery. Vertical Discovery serves the monitoring
purpose better, as it creates relationships with other CIs; service
mapping is the best example of this. Technically, service mapping
uses horizontal Discovery to search for devices in its first two phases
(scanning and classification), and then works from the top down to
build a business service map.
We have just learnt a new term, that is, horizontal Discovery. The
disadvantage of horizontal Discovery is that it doesn't pull
relationships with other CIs for business service maps. From a
network probe point of view, horizontal Discovery searches the
devices on the network and related attributes as well.
In the previous sections, we have seen the Discovery probes and sensors
that are used to collect and process information before storing it in the
CMDB. This is why there are multiple probes and sensors in the
ServiceNow platform. However, probes and sensors are replaceable with
patterns: instead of the identification and exploration processes, patterns
can be used. Out of the box, there are many patterns available that can be
utilized, and new patterns can even be created by navigating to Pattern
Designer | Discovery Pattern, as shown in the following screenshot:
Pattern designer
In an agentless system, a whole network can be discovered, even though
not all of the information is important to capture. Instead of scanning the
whole network, you can stick to specific IP ranges, which will help you to
balance network traffic, CMDB design, and maintenance. Let's look at a
couple of key modules in the Discovery application. Navigate to
Discovery as follows:
Discovery application
Quick ranges of IP
After defining quick ranges, click on Discover Now under Related Links,
which kicks off the Discovery process. After its completion, a Discovery
status record should be created.
Discovery status
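A quick range such as 10.0.0.5-10.0.0.8 simply expands into a list of target IPs for the scan. The following is a minimal sketch of that expansion, assuming a small IPv4 range within a single /24 (the function name is my own, not a platform API):

```javascript
// Expand a Discovery quick range such as "10.0.0.5-10.0.0.8" into
// individual target IPs. Assumes a small IPv4 range within one /24.
function expandQuickRange(range) {
  const [start, end] = range.split("-");
  const base = start.split(".").slice(0, 3).join(".");
  const from = Number(start.split(".")[3]);
  const to   = Number(end.split(".")[3]);
  const ips = [];
  for (let i = from; i <= to; i++) {
    ips.push(base + "." + i);
  }
  return ips;
}

console.log(expandQuickRange("10.0.0.5-10.0.0.8"));
// → ["10.0.0.5", "10.0.0.6", "10.0.0.7", "10.0.0.8"]
```

Keeping ranges this tight is exactly how you balance network traffic against CMDB coverage, as discussed above.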
What do these related lists store? The Devices related list stores devices, as
the name states, but in the CMDB CI column you can also refer to classes
such as switch and Windows server. The ECC queue stores all of the
records that were saved during the Discovery schedule's execution, such
as Shazzam, WMIRunner, and Multiprobe, along with their input or
output parameters. The Show Discovery timeline button should direct you
to a UI page that fetches data from the ECC queue to draw a timeline of
the Discovery execution.
Before we get deeper into the CMDB, we must understand that, for a
greater return on investment, a strong CMDB is essential for automation.
So, if an automation project is being planned, then effective CMDB
implementation should be part of it to gain the maximum output of
automation.
A few concepts
If you have not worked closely with CMDBs before reading this book,
this section covers some major pillars of CMDBs that will help you to
understand them. Explicitly, CMDB data modeling is crucial for
successful CMDB implementation; it defines the logical grouping of
configuration items, or CI types. Very often, CIs are grouped into
hardware, software, and network, and then into different types of classes,
but there may be other custom CMDB classes as well.
CMDB tables
As we know, a CMDB acts as a central repository of all of the
configuration items. From a technical point of view, it is important to note
that ServiceNow provides a base class cmdb_ci, which stores the basic
attributes of CIs, and a CI relationship class cmdb_rel_ci, which holds the
relationships between CIs. Out of the box, ServiceNow has many tables to
store such information, and you can even extend the CMDB by creating a
new table that extends the base table:
Identification code
Tag name
Description
Ownership
Configuration management tools
In this book, ServiceNow automation with Puppet and Chef will be
discussed in Chapter 4, Automation with Puppet, and Chapter 5, Automation
with Chef, respectively. There are many vendors in the industry that
provide automation tools, some of which are listed as follows, for
reference only:
IBM Rational ClearCase
CFEngine
Chef
Puppet
Vagrant
ServiceNow
Manual import
Populating the CMDB from an existing external CMDB
At the beginning of this chapter, we saw that many CMDB products are
available in the market, so if an organization prefers to stay on an external
CMDB, then you have the option to integrate with it. Microsoft SCCM is
one of the most popular CMDBs. Out of the box, ServiceNow provides an
SCCM plugin, with which you can simply integrate with an SCCM
database:
1,024 MB of memory
Configuring the MID Server
To configure the MID Server, you first need an admin role; then, you need
a host machine on which the MID Server will run as a Windows service or
a Linux daemon. The major pillars for configuring a MID Server are as
follows:
Start the MID Server Windows service from the host machine
Validation
You can download the MID Server files from the MID Server application
by simply logging into the ServiceNow instance. Let's see the process in
more detail:
2. Now, you will be directed to the MID Server download page, from
where you can download the MID Server files based on the
configuration of the host machine. By default, ServiceNow
provides the Windows and Linux operating systems to host the
MID Server:
4. Once the MID Server files are downloaded on the host machine,
copy the server files into the MID Server folder and give them
proper names based on your organization's standard. In the
following screenshot, you can see that I have placed the MID
Server files in the MIDServer_dev15570 folder, as the host machine is
just my laptop:
Config file
8. Hooray! An account has been created. But wait a second, are
we missing anything here? Probably, yes. A user account without
any role is an end user account, and end users can't perform any
administrative activities, so such an account does not make any
sense for the MID Server. Out of the box, a mid_server role is
available in the role table sys_user_role, which our newly created
account must have. Now, let's quickly add the role to the MID
Server account. It is important to note that the moment the mid_server
role is granted to a user account, 11 additional roles are also
assigned to the account, such as soap, soap_query, soap_update,
soap_delete, and so on, so this account can be used for other
purposes as well:
Assign MID Server role
9. Great news: the config file of our MID Server is ready to be
configured, as we now have the instance address, username, and
password. Let's quickly configure it. Replace https://YOUR_INSTANCE.service-
now.com with your instance address; here, I am using my personal
instance.
10. You may have noticed that, in the preceding screenshot, just after
the encrypt label, true is written, which means your password will
be encrypted and can't be decrypted again. ServiceNow
recommends keeping it this way. You have the option to change it
to false, but this has a disadvantage. Remember, the host machine
may not belong to you, and if any host machine administrator
opens the config file, the password should not be visible to them.
11. The MID Server account is configured in the config file, but the
configuration is not complete yet. The config file has a
YOUR_MIDSERVER_NAME_GOES_HERE parameter, so it is time to create a MID
Server name.
14. Now, it's time to complete the config file configuration. Simply
replace YOUR_MIDSERVER_NAME_GOES_HERE with MID Server, as this is our MID
Server name, as shown in the following screenshot, and save it:
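For reference, the relevant parameters in the MID Server's config.xml look roughly like this. The parameter names follow the shipped template; the username and password values shown here are placeholders of my own, not real credentials:

```xml
<!-- Excerpt from the MID Server's config.xml; only the parameters
     discussed in the preceding steps are shown. -->
<parameters>
  <parameter name="url" value="https://YOUR_INSTANCE.service-now.com"/>
  <parameter name="mid.instance.username" value="mid.server.account"/>
  <parameter name="mid.instance.password" value="YOUR_PASSWORD" encrypt="true"/>
  <parameter name="name" value="YOUR_MIDSERVER_NAME_GOES_HERE"/>
</parameters>
```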
17. We are done with the MID Server file configuration, and it's time
to start the MID Server services on our host machine. To do it, you
need a little help from the Command Prompt. Open the Command
Prompt as administrator, as follows:
Open command prompt
18. Wait a second, where do we go on this black screen? You have
to go into the MID Server folder that you created during the MID
Server installation, and then into our dear MID Server folder. If
you are not sure how to do this, simply use the cd command to
change the directory. Once you are in the MIDServer_dev15570 folder
(remember, this was renamed earlier), type dir to view the files
under this folder, as shown in the following screenshot:
MID Server structure on CMD
19. At first glance, this might look like a busy screen, but you don't
need to care about all of the files; just focus on the start.bat file,
which will start your MID Server service on the host machine. So,
simply type start.bat in the Command Prompt, as shown in the
following screenshot, and after a couple of seconds, you will see a
message saying MID Server started. Hooray, the MID Server has
been started:
20. To validate the MID Server service on the host machine, type
services.msc in the Run window (Windows + R), as follows:
Run window
21. Furthermore, this command will open all of the services on the host
machine, as shown in the following screenshot. Carefully search
for the MID Server and check that its status is Running:
22. As you can see in the preceding screenshot, the MID Server is in
the running state on the host machine, so let's view the MID
Server in the ServiceNow instance. You will notice that the read-only
fields are populated on the MID Server form, but the MID Server is
not validated yet, as shown in the following screenshot:
MID Server without validation
23. To validate the MID Server, you need to click on the Validate
button that is available under Related Links on the MID Server
form, as shown in the following screenshot:
24. After clicking on the Validate button, wait for some time for
the Validated field to turn green. The end status of the MID Server
configuration will be as follows. You probably know that
ServiceNow releases new versions on a regular basis, so a
question may be asked here: do we need to update the MID Server
on a regular basis along with new version upgrades? The answer
will be covered in later sections:
25. If you open the MID Server record and scroll down, you will be
able to see related lists on the MID Server form, such as MID
Server Issues, Configuration Parameters, Supported Applications,
IP Ranges, Capabilities, and so on. We are going to explore some
of the important related lists (IP Ranges, Properties, and so on):
IP range
27. Although installing multiple MID Servers will be discussed in the
following topic, from a load-balancing point of view, multiple MID
Servers are recommended for automation activities, and out of the
box, ServiceNow provides auto-assignment. You need to navigate
to the MID Server IP range auto-assignment configuration. This
will direct you to a configuration page where a slush bucket is
available; by simply moving the available MID Servers into the
selected box, you can complete this task. Now, when discovery of
this subnet is executed, the MID Server is assigned automatically:
Auto-assignment
28. The behavior of the MID Server is controlled by the MID Server
properties. An important point to note here is that the MID Server
properties can override the MID Server parameter as well. MID
Server properties are used to control the behavior of the probe and
payload of the MID Server. You can even create a new property of
the MID Server by navigating to MID Server | Properties | New.
But please note, for JDBC connection properties, you should have
knowledge of PL/SQL:
If you have just begun with ServiceNow automation, then you may want
to ask how ServiceNow gets to know the configuration of the devices in
the enterprise network. To answer this, I would like to reintroduce a
couple of terms that we saw in the last chapter, but this time we'll take a
deeper look at them: the probe, the sensor, the ECC (External
Communication Channel) queue, and the PPS (post-processor script).
Probe
The word is self-explanatory; a probe is a method of searching for devices
in the corporate network. However, in a corporate network, numerous
types of devices may be available, so how do we search for devices in such
a huge network? Many protocols are available for collecting information
on the network, but in this book, we are going to discuss a few important
ones:
Orchestration: Exchange
Orchestration: PowerShell
Orchestration: ROI
Orchestration: Runtime
Orchestration: SFTP
Orchestration: SSH
Orchestration: Workday
Orchestration activity
ServiceNow automation process
Just imagine you are trying to set up a virtual machine through a simple
request; you might be interested in knowing how that workflow will be
executed. Let's explore the hidden process behind the workflow. In the
previous topics, we learned about the major pillars of automation: the
probe, the MID Server, and the ECC queue. Let's take the example of a
virtual machine set up by a catalog item. When any end user logs a virtual
machine setup request, a REQ sc_request number, a RITM sc_req_item
number, and a task number are generated. Afterwards, the
workflow/orchestration activity starts and triggers a probe, which writes
to the ECC queue. From the MID Server architecture, we know that the
MID Server subscribes to messages from the AMB, which notifies the
MID Server of the pending task. Now, the MID Server executes the probe
on the target machine to perform the task and sends the result back to the
ECC queue, where it acts as the output of the probe. Finally, the ECC
queue passes the result to the workflow for further processing:
Automation process
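The stages described above can be lined up as a simple pipeline sketch. The stage names are descriptive labels of my own, not platform API names:

```javascript
// Ordered stages of the automation process described above, from the
// end user's request down to the workflow receiving the probe result.
const stages = [
  "User submits catalog request (REQ / RITM / task generated)",
  "Workflow orchestration activity triggers a probe",
  "Probe instruction written to the ECC queue (output)",
  "MID Server learns of the pending task via the AMB",
  "MID Server executes the probe on the target machine",
  "Result written back to the ECC queue (input)",
  "Workflow resumes with the probe result"
];

stages.forEach((stage, i) => console.log(`${i + 1}. ${stage}`));
```

Note the symmetry: the instance never talks to the target machine directly; everything passes through the ECC queue and the MID Server.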
Summary
In this chapter, we have learned about the basic setup of the automation
and how it works both within and outside of the ServiceNow boundaries.
We have seen MID Server architecture, MID Server installation, how to
start the MID Server, why the MID Server is important, the significance of
multiple MID Servers, and so on. Furthermore, we have also seen the
importance of discovery in the automation space and important pillars
such as probes, sensors, and orchestration packs (workflow activities). In
addition, we also saw how ServiceNow interacts with other systems such
as Windows, Linux, routers, and so on. Finally, we learned how the
ServiceNow automation process works and its various stages of execution.
In the next chapter, we will cover topics such as the introduction of a
virtual machine, virtual machine configuration, catalog item for
automation with the change request use case.
Manage Virtual Machine with
ServiceNow
Within the IT space, virtual machines are very common and are used or
recommended on many occasions. I have assumed that you know about
virtual machines, or rather, software that is installed on top of the
operating system layer and imitates dedicated hardware. In this chapter,
we will discuss virtual machines; virtualization should not be a new term
for you. With regard to virtualization, you should understand that it is an
abstraction layer that segregates the physical hardware from the operating
system and supports more than one type of operating system, such as
Windows or Linux. Particularly in the cloud space, ServiceNow provides
dedicated plugins for managing the end-to-end lifecycle of cloud
machines. You may be aware that there are many giants in the vendor
space, such as VMware, Amazon, and Microsoft.
Amazon billing
3. Cloud user: Users who are assigned the cloud user role
are virtual machine requesters. Just imagine a project
environment where a virtual machine is needed; a project
member must have the cloud user role to request a virtual
machine and even to manage it.
Furthermore, let's look at the graphical representation of the
previously mentioned roles in the operational environment:
Communication Process
Creating a catalog item
In the previous sections, we saw how a cloud resource catalog item works
within a ServiceNow environment, as well as outside of it. Initially, we
discussed the cloud resources catalog item, so a couple of questions may
be asked here: how do we create a cloud resource catalog item, and are
there any specific guidelines, or rather a process, for creating it? The
answer is a big no; you don't need to work hard at creating a cloud
resource item, as it can be created by simply clicking on the Create
Catalog Item button that is available under the related links of the template
(Configuration | VMware | Virtual Machine Template | Scroll down to
view related links), as follows. It is important to note that these templates
must be auto-populated by VMware Discovery. Furthermore, the detailed
steps are stated later in this chapter:
VMware-related CIs
As you might know, the base table is CMDB cmdb_ci, which can be further
extended to store new classes of devices, so virtual machine CIs are stored
in the same way as well. In Chapter 1, The IT Challenges and Power of
Automation, you learned about Discovery, which is used for populating the
ServiceNow CMDB through data collection and processing, or rather,
probes and sensors. Furthermore, you should understand that certain
information is needed to populate the VMware-related CIs. Although
some additional terms will be discussed in the following sections, for now,
I would like to introduce some terms here that may be familiar to you,
such as the discovery schedule, the target machine, and credentials. You
may be interested to know that once the plugin is activated, no data should
be available in the tables, as the plugin is not configured yet. Furthermore,
the VMware modules of the Configuration application, such as vCenter
cmdb_ci_vcenter, Datacenter cmdb_ci_vcenter_datacenter, Clusters
cmdb_ci_vcenter_cluster, ESX Servers cmdb_ci_esx_server, Templates
Now, click on the New button, and it should direct you to a UI page that
offers many options for storing credentials discovery_credentials, such as
AWS, Azure, Windows, JDBC, VMware, and CIM. It is important to
mention that these credentials are used during the probe phase of
Discovery to retrieve the device, or rather the CI, information. You may be
interested to know that credentials may be in the form of a username and
password, or certificates. With regard to credentials, there are a few
important terms that I would like to introduce: the security of stored
credentials and service credentials.
Security of stored credentials
Firstly, let's look at the security of stored credentials. Apparently, security
is a concern for cloud applications, but ServiceNow has a robust
password-securing mechanism: once a password is entered in the
credentials discovery_credentials table, it can't be viewed.
Service credentials
Now, let's look at service credentials. Try to recall Chapter 2, ServiceNow
Configuration, where you learned about the MID Server; we know that the
MID Server facilitates communication with external systems. In the MID
Server configuration, a local administrator or domain account was used as
the MID Server user account for connecting with the ServiceNow
instance. Service credentials, on the other hand, are used by the MID
Server to connect with the CIs of the network, and they must have domain
or local administration rights to avoid access-related issues, as shown in
the following diagram:
Service credentials
Moreover, we have just seen that the entered credentials can't be viewed,
so how will ServiceNow process them? So, let's see how this is done in the
ServiceNow space:
The credentials are decrypted on the MID server with the MID
server's private key
Test Credential
Discovery
IP range type | Used for
Discover IP Range
CMDB CI created
If you click on the CMDB CI item (hp, in this case) from the Devices
related list, it should direct you to the actual CI item (form), as shown in
the following screenshot, where you can view various fields, such as
Name, Manufacturer, and Model ID. These are the result of the various
probes triggered by discovery through the MID Server:
If you scroll down the CI page, you should be able to view the various
related lists that hold information related to the CI, such as Network
Adapters, Storage Devices, Software Installed, and Running Processes. As
this is a lab environment, you can view my machine's network adapter in
the following screenshot. Moreover, you can also view all of the process
execution results on my local machine:
Network Adapters
Software Installed
The ECC queue related list indicates a series of records that were created
during the distinct phases of the discovery execution. Try to remember
Chapter 1, The IT Challenges and Power of Automation, where we learned
about the four phases of discovery: scanning, classification,
identification, and exploration. These can be seen in the Topic column
(DNS, Powershell, WMIRunner, and so on) and in the Queue column
ecc_queue.queue. Results are stored as output and input, so what does this
mean for us? The answer: a message from the ServiceNow instance to
another system is classified as output, and a message from the external
system to the ServiceNow instance is classified as input, as you can view
in the following screenshot:
ECC queue
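The output/input classification can be expressed as a tiny helper. Note that the source values below are an illustrative simplification; the real ecc_queue table records the direction directly in its Queue field:

```javascript
// Classify an ECC queue message's direction: messages from the
// instance to an external system are "output"; responses coming back
// are "input".
function direction(record) {
  return record.source === "instance" ? "output" : "input";
}

// Illustrative records; the topic value matches one seen in the
// screenshot above.
const probe  = { topic: "WMIRunner", source: "instance" };
const result = { topic: "WMIRunner", source: "mid_server" };

console.log(direction(probe));   // "output"
console.log(direction(result));  // "input"
```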
Discover VMware vCenter and
process classifier
Coming back to our ECC queue, you will notice that a VMware
exploration probe, VMwareCenterProbe, has been triggered by a process
classifier during discovery. To view the process classifier record, navigate
to Discovery Definition | CI Classification | Processes, and there you
should be able to view many out-of-the-box process classifiers, including
vCenter, as shown in the following screenshot:
If you click on vCenter, it should direct you to the following page,
where you can view many details, such as the Name; the Table (VMware
vCenter Instance), in which data will be populated; the Relationship type
(Runs on::Runs), which is with the host; and the Condition (Command
contains vpxd), which indicates that the actual process that has been found
contains vpxd, as shown in the following screenshot:
The process classifier
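The classifier's condition can be sketched as a simple predicate. The table name cmdb_ci_vcenter comes from the modules listed earlier; the matching logic here is a simplification of the real classifier engine:

```javascript
// Simplified process classifier: if a discovered process command line
// contains "vpxd", classify the CI into the VMware vCenter table and
// record the Runs on::Runs relationship with the host.
function classifyProcess(commandLine) {
  if (commandLine.includes("vpxd")) {
    return { table: "cmdb_ci_vcenter", relationship: "Runs on::Runs" };
  }
  return null; // no classifier matched
}

console.log(classifyProcess("C:\\Program Files\\VMware\\vpxd.exe"));
```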
We have already discovered the Windows server that runs the vCenter
application, but the VMware information hasn't been populated in our
CMDB yet. So, to bring the VMware information into the CMDB, we need
the VMware credentials so that it can be searched during the discovery
scan. So, again, navigate to Discovery | Credentials | New | select VMware
Credentials, and you should be directed to the following page, where you
need to configure the VMware credentials, as shown in the following
screenshot:
vCenter credentials
Configuration-VMware instance
Now, open the CI item and click on Dependency view, and it should direct
you to the dependency UI page, where you can view all the relationships
with the host, such as VMware vCenter, datacenters, ESX Server,
VMware virtual machine templates, disks, and the network:
So far, you have learned about the various stages that are involved in
Windows and VMware data population in ServiceNow CMDB. Hooray!!
The VMware virtual machine information has been populated in the
ServiceNow CMDB, but what now? What do we need to do with the
populated details? The answer is that we will utilize the populated
VMware virtual machine and the related components for configuring the
cloud resource offering in the following sections of the chapter.
A use case for change
requests
You are probably aware that a CR, or change request, is a standard
practice in IT operations; likewise, with virtual machines, a change
request is mandatory for making changes in the virtual machine
environment. Here, it is important to note that whichever option is
chosen, such as terminate VM or stop VM, a change request should be
mandatory. Before exploring change requests in more detail, let's
understand how a virtual machine catalog item looks, as catalog items are
the standard way of logging requests in ServiceNow.
Virtual machine catalog item
A virtual machine catalog item can be created with a VMware template. In
previous sections, we have seen that templates can be populated in the
CMDB and stored in the VMware Virtual Machine Template
cmdb_ci_vmware_template module. Navigate to Configuration | VMware |
Virtual Machine Template, and it should direct you to the list view of
VMware machine templates. Open any template and scroll down; under
the related links, you should be able to view options such as the Create
Catalog Item, View Catalog Items, and Subscribe buttons, as shown in the
following screenshot:
Related link To VM
Under the related link of the virtual machine template, click on the Create
Catalog Item button; ServiceNow should direct you to the newly created
item as shown in the following screenshot. Moreover, ServiceNow auto-
populates fields such as Name, Catalogs, Workflow, and Category, but
some fields need your attention, such as VM size and Provisioning mode.
We will explore these in the following sections:
VMware catalog item
VM sizes are controlled by the Sizes module of VMware and can be found
by navigating to VMware cloud | Size, as shown in the following
screenshot. Each record defines the configuration of the virtual
machine; for example, small has 1 vCPU, 512 MB of memory, and a 10 GB
data disk. A new size can be created by clicking on the New button, as
shown in the following screenshot:
VM size definitions
When ServiceNow directs you to the newly created item, the catalog item
is in a deactivated state and you need to publish it by clicking on the
Publish button. Once the catalog item is published, it should be added to
the Cloud Resources catalog. For the cloud-related roles (cloud_operator,
cloud_admin, and cloud_user), a module and the Cloud Resources catalog should be
available under the self-service application to log a virtual machine
request. Navigate to Self-service | Cloud Resources catalog:
A request has been logged, and REQ and RITM numbers are generated.
Based on the selected option (manual/automated), the approval should be
completed; after approval, the new virtual machine should be set up at the
VMware vCenter end automatically. It is important to note that if you run
the discovery schedule again, you should see the newly created VM
within the ServiceNow CMDB. At the ServiceNow end, how can you
monitor the status of a virtual machine? The answer is very simple:
through the My Virtual Assets portal. We have seen the cloud portal in the
previous sections, and the virtual assets portal resides within it. Navigate to
Cloud management | Cloud User portal, click on View dashboard, and then
click on the My Virtual Assets dashboard; there you can view the overall
status of your virtual assets, virtual asset requests, metrics, resource
optimization, and so on, as shown in the following screenshot:
My Virtual Asset dashboard
After having clicked on the Terminate VM button, you should see a pop-
up window, as shown in the following screenshot. It is advisable to contact
your VMware administrator before terminating any virtual machine to
avoid any issues; after confirmation from the VMware administration
team, you can simply click on the OK button to start the termination
process:
Terminating VM
After the change approval, the change request should move to the
Scheduled state, and you simply need to click on the Implement button to
apply the change in the vCenter environment. After having clicked on the
Implement button, you should be directed to the record shown in the
following screenshot, where you should note the VM's Terminating state:
Introduction to Puppet
Push configuration
The Puppet node's request for data from the Puppet master is shown in the
following diagram. You will have noticed that we mentioned certificates in
this section, but how are the SSL certificates for mutual authentication
generated? The answer is very simple: both the Puppet master and
the Puppet node can generate SSL certificates within their
environment, and you don't need to go to any third-party SSL certificate
authority:
SSL connection between Puppet node and Puppet master
Others
A more detailed explanation will be given in the Puppet installation and
configuration section, but for now let's familiarize ourselves with some
more terms: manifest and classes. In the previous section, we discovered
manifests, but what do they do? A manifest, or rather a Puppet manifest, is
a file with the Puppet-specific .pp extension that stores configuration
information. Furthermore, we are already aware of Facter, a utility
that gathers the Puppet node's system information, which Puppet uses to
generate the node's configuration. You might be interested to know that in the manifest, all
resources are declared to be checked or to be changed. Resources might be
services, packages, and so on. If I were to say that anything that you wish
to change at the client node, or puppet node, can be considered a resource,
then I probably wouldn't be wrong. Moving on to classes, I personally
don't think a class needs any introduction; if you have any programming
background then the term class won't be new to you. But, if you don't have
a programming background, then you can think of a class as a container
that holds different resources.
Puppet installation and
configuration
Now, we are moving out of the ServiceNow space and coming into the
Puppet space. This might be very different from your current
understanding, but as this chapter is dedicated to Puppet automation
through ServiceNow, we must have some knowledge of Puppet. So, let's
start with some key terms and Puppet's architecture.
Architecture
In the Puppet space, there are two types of installation available: the first
is monolithic and the second is split. A monolithic installation is easier
than a split installation, as in the monolithic installation the master, console, and
PuppetDB are installed on a single node, which also makes maintenance
activities, such as installs or upgrades, easier. I
assume you may have attended a Puppet automation meeting and
faced situations where you have heard the following terms:
Software prerequisites:
Oracle VirtualBox
Internet connectivity
Hardware prerequisites:
3. Click on the Next button, configure the memory and hard disk
size, and click on the Create button. After having clicked on the
Create button, the Puppet Master - ServiceNow virtual machine
should be added on the left-hand side, as shown in the following
screenshot. Likewise, you need to create a Puppet client as well.
After creating the virtual machines, you should be able to see both
VMs on the left-hand side, as shown in the following screenshot,
but in a powered-off state:
Installation
4. It's time to start the virtual machines, but before that, you need to
supply the path of the Linux .iso file. To do so, right-click
on the virtual machine and select the Settings button; you
should now be able to view the Settings window, which contains many
sub-sections. Now, click on the Storage button, click on the icon
(next to the Optical Drive field) to provide the path of the ISO file,
shown as follows, and finally click on the OK button:
8. Finally, you will be asked to give the Root Password, and here you
can type any desired password with which to log in to the virtual
machine. It is important to note that root is the admin account, and
it will be used during the Puppet master installation as well:
10. The same steps can be followed to set up another virtual machine
(the Puppet node). Now, both our Linux machines are ready. But keep
in mind that the Puppet master and the Puppet client must be
able to reach each other; an entry must be made in /etc/hosts on both
nodes to resolve names, or you can configure DNS to resolve
the IPs. Disable the firewall on both the Puppet master and the Puppet
client, to avoid any issues, by typing the systemctl stop firewalld and
systemctl disable firewalld commands in the Linux Terminals.
Finally, make sure both the Puppet master and the Puppet client
have internet access to install packages from the Puppet Labs
repositories. You may be interested to know that Puppet has its
own repository (Puppet Forge) from where you can install
different packages, code, and so on.
11. Once Linux is installed, you should be directed to the command
line, or Linux Terminal screen (the Puppet master - Linux
Terminal), where you need to enter all the commands to install the
Puppet master:
Puppet master virtual image
13. You will have observed that there is no graphical user interface
(GUI), so how can we keep the Terminal clean in the Linux
space? You can use the clear command to clean the screen, as
shown in the following screenshot:
Clear screen
14. Moving on to the network settings, we are now going to give the
Puppet master a static IP for connectivity purposes. To do so,
type the following command in the Terminal and press Enter:
15. After pressing the Enter button, you should be directed to the
following screen, where you should be able to view various network
details such as ONBOOT, BOOTPROTO, UUID, and so on; we should
modify these, as shown in the following screenshot. But don't forget
to save the new settings: press the Esc key first on your keyboard,
then type the :wq command to save the file and regain control:
Static IP configuration
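The command in the screenshot is typically `vi /etc/sysconfig/network-scripts/ifcfg-eth1` (the interface name is an assumption). After editing, the file looks something like this sketch, with assumed example values:

```shell
# Sketch of /etc/sysconfig/network-scripts/ifcfg-eth1 after editing.
# Interface name and addresses are assumptions -- use your own values.
TYPE=Ethernet
ONBOOT=yes            # bring the interface up at boot
BOOTPROTO=static      # changed from dhcp so the IP survives reboots
IPADDR=192.168.56.10  # the Puppet master's fixed IP (example)
NETMASK=255.255.255.0
```

After saving with :wq, restart the network service and verify connectivity with ping, as the next steps describe.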
16. After getting the control back, you need to start the network
service again by typing the command shown in the following
screenshot:
17. Now, after hitting the Enter button, you should see a series of
messages, as shown in the following screenshot. So, wait until
command execution is completed:
18. Once the previous step is completed, you can ping any web
address from the Linux Terminal, and you should be able to view the
response of the ping command, as shown in the following
screenshot:
2. The next step is to get the Puppet repository. Open any standard
browser and type yum.puppetlabs.com, and there you can view the
repositories for our OS, such as puppetlabs-release-el-6.noarch.rpm, as
shown in the following screenshot:
Puppet repository
3. Copy the link location for future use. Now, open the
Terminal, type the command with the web address, as shown in the
following screenshot, and press Enter:
4. After pressing the Enter button, you will notice the retrieving
message, as shown in the following screenshot, and a warning
message (ignore the warning message):
5. Now, we are ready to install the Puppet master. To do so, type the
command, as shown in the following screenshot, and press Enter.
Execution of the command should show you a series of
installation messages:
Puppet master 3
Puppet master 4
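The commands in these screenshots follow the usual pattern for installing a Puppet Labs release RPM and then the master package; the exact URL and package name below are assumptions, so verify them against yum.puppetlabs.com for your OS:

```shell
# URL is an assumption taken from yum.puppetlabs.com for EL 6 -- verify it.
rpm -ivh https://yum.puppetlabs.com/puppetlabs-release-el-6.noarch.rpm
yum install -y puppet-server    # installs the Puppet master package
```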
Installing the Puppet agent
In previous sections, we saw that the hostname was configured as
puppetmaster; likewise, during the installation of the second machine, the hostname
should be set to puppetagent while executing the commands in the Linux Terminals. It
is worth mentioning that you should not forget about the network
configuration, where the Connect automatically checkbox should be
checked. Now, log in to the Puppet agent machine using root and the
root password that was set during the Linux installation. After a successful
installation, you should see the following screen:
Disable firewall
3. Moving on, we must enable the Puppet repository on the
puppetagent machine. To do so, execute the following command in the Terminal:
6. After pressing the Enter button, you will see a series of messages
as a part of the Puppet agent installation process:
Puppet agent installation
1. Open the Puppet master Terminal and type the command shown in
the following screenshot, which will return the IP address of the
machine. And, in the eth1 (Ethernet interface), you can view the IP
address of the Puppet master:
2. Now, we must edit the host machine name in the vi editor. We are
giving a hostname to the puppetmaster machine. Type the command
in the Terminal, as shown in the following screenshot:
5. The next step is to edit the Puppet configuration file, using the
command shown in the following screenshot:
7. In the main section, add the DNS and certificate name, shown in
the following screenshot, and save it:
10. On the next screen, you can view the IP address in the eth1 file, so
copy it somewhere so that it can be used while editing the
configuration file.
11. Now, open the host file of the Puppet agent by executing the
following command:
12. After executing the command, you should be able to view the
following screen. Now, you have to give names to the IP addresses
of the Puppet master and the Puppet agent, as shown in the
following screenshot:
Assigning a domain name to the Puppet agent
13. Now it's time to edit the Puppet agent configuration file. This can
be done by executing the following command:
14. After executing the command, you should be able to view the
following screen, where the main and agent sections are available.
Now, as it is the Puppet agent configuration, we'll make the
changes in the agent section:
15. In the agent section, you must add the server with which the
Puppet node is going to communicate:
Adding a server name to the Puppet agent
16. Congratulations! It's time to start the Puppet agent, and this can be
done by executing the following command:
2. Our Puppet master has been stopped successfully, and now it's
time to generate the certificate by executing the following
command:
3. Soon after executing the command, you should be able to view the
SSL certificate, as shown in the following screenshot. After seeing
Notice: Starting Puppet Master Version 3.8.7, you need to exit it, as
the certificate has been generated.
Let's move on to the Puppet agent and generate the certificate as part of
the configuration:
1. First, go to the Puppet agent virtual machine and stop the Puppet
agent by executing the command shown in the following
screenshot:
4. Now, the Puppet master must sign the certificate from the Puppet
agent. To sign the request, you should execute the following
command with the name of the Puppet agent:
6. In the previous section, we have seen that the Puppet master has
signed the Puppet agent's certificate, but when there are many
nodes it is important to verify that the correct certificate has been
signed; you can validate it by executing the command shown in
the following screenshot:
7. After executing the command, you will view the certificate, which
must be matched with the signed one.
8. Finally, the Puppet agent must update itself from the Puppet
master to get recent changes. To do so, use the command shown in
the following screenshot:
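For reference, the certificate exchange described in these steps typically boils down to the following Puppet 3-era commands; the agent's FQDN is an assumption from this lab:

```shell
# On the Puppet master: list and sign the agent's certificate request.
puppet cert list                            # pending (unsigned) requests
puppet cert sign puppetagent.servicenow.in  # agent FQDN is an assumption
puppet cert list --all                      # signed certs are shown with "+"

# On the Puppet agent: fetch and apply the latest catalog from the master.
puppet agent --test
```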
After having clicked on the Submit button, you should be directed to the
discovery schedule list view, where you can view your newly created
discovery schedule. Now, click on the discovery schedule record and
scroll down to where you can see two options under the related links. The first
is the Quick range, which takes IP ranges as an input; after clicking the
Discover now button, discovery begins its probe process in the entered IPs
(quick ranges) only. Keep in mind that you must have the IP
addresses of your Puppet masters in the discovery schedule. In my lab
environment, I have installed Puppet on my machine only; that's why I
have given my own system's IP address, and you can do the same. After kicking off the
discovery schedule, you will notice the Puppet master has been discovered
in the Devices related list of the discovery schedule.
Furthermore, you may ask: How is the Puppet master discovered by the
discovery schedule? From the previous sections, we know that Discovery
is an agent-less system and works based on probes and sensors, so we
should understand how Puppet master probes work. So, from the previous
chapters, we know that out of the box there are many probes available in
Discovery applications, and you can find them by navigating to Discovery
definition | probes and filtering out UNIX - Active Process, which runs
behind the discovery of the Puppet master, as shown in the following
screenshot:
But keep in mind that there are certain conditions that must be met, so let's
explore them. Either the name of the process is pe-httpd, or the parameter
contains puppet master and the name of the process is ruby. If one of these
conditions is met, then a record is inserted into the ServiceNow Puppet
master table, cmdb_ci_puppet_master. Furthermore, we have seen in previous
sections that a series of probes and sensors is triggered to collect
information about the discovered devices; likewise, once a new CI (Puppet
Master) record is created, an additional Puppet - Master Info probe is
triggered, as shown in the following screenshot, to collect more
information:
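The probe's matching rule can be sketched as a small function (this is an illustration, not ServiceNow's actual implementation):

```shell
# A process counts as a Puppet master if its name is "pe-httpd", OR its
# name is "ruby" and its parameters contain "puppet master".
is_puppet_master() {
  local name="$1" params="$2"
  if [ "$name" = "pe-httpd" ]; then
    return 0
  fi
  if [ "$name" = "ruby" ] && printf '%s' "$params" | grep -q 'puppet master'; then
    return 0
  fi
  return 1
}

# Example checks against typical process listings
is_puppet_master "pe-httpd" "/opt/puppet/sbin/pe-httpd" && echo "match"
is_puppet_master "ruby" "/usr/bin/ruby /usr/bin/puppet master" && echo "match"
```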
We have learned that, by default, the discovery schedule can identify a
Puppet master that is running on a Unix system, but it is important
to note here that the credentials, or rather the user account, must have the rights
to execute the following commands:
For the Puppet - Master probe, the user must have privileges to
execute the puppet, echo, and hostname commands
For the Puppet - Credentials Requests probe, the user must have
Puppet privileges to execute the puppet command
For the Puppet - Manifests probe, the user must have Puppet
privileges to execute the puppet, echo, sed, and find commands
For the Puppet Module probe, the user must have Puppet
privileges to execute the puppet command
If you have some experience with Unix, then you will have heard about
the sudo command that allows users to run programs with the security
privileges of another user, and by default that is the super user. You might
be interested to know that ServiceNow does support sudo as well, but you'll
have to make some effort to configure it. Navigate to Discovery | probe |
and filter puppet-related probes. For demonstration purposes, I have taken
the Puppet - Master Info probe, so open it and scroll down to view the
Probe Parameters related list, as shown in the following screenshot:
Furthermore, click on the New button and configure the page, as shown in
the following screenshot, and click on the Submit button. It is important to
note that you must add the must_sudo parameter with each probe that
requires it:
Name
Path
Inherits class
Selectable
Default value
ServiceNow Puppet menus and
modules
In the previous sections, we have learned about Puppet and its
architecture, along with various components, including discovery; now
let's move into the ServiceNow space to understand how Puppet works
with ServiceNow.
Puppet plugin
You might be interested to know that the Puppet Configuration
Management plugin is available as a separate subscription, but I think it is
worth noting that the Orchestration plugin must be activated as well. You
can contact your ServiceNow account manager to get the Puppet
Configuration Management plugin activated on production and non-
production instances, and it may take a few days. If you don't have an
account manager, then you can log in to the ServiceNow customer support
portal at https://hi.service-now.com and navigate to Service catalog | Activate
Plugin, as shown in the following screenshot, and fill in the requested
details:
However, if you are using your personal developer instance, then you can
log in at https://developers.service-now.com and click on Manage | Instance; on
the page click on Action and activate the Puppet Configuration
Management Core plugin, as shown in the following screenshot:
Activating the plugin on the personal developer instance
ServiceNow Puppet application
Once the plugin is successfully activated, on the left-hand side you will
notice the Puppet application, as shown in the following screenshot, where
you can view related modules such as Puppet master, resources and setup,
and so on. Modules will be discussed in upcoming sections:
Setup section:
ENC script
Moving on, you need to enter the Puppet credentials in the try block, as
shown in the following screenshot:
Furthermore, there are two main roles that are associated with the
ServiceNow Puppet application. The first role is the Puppet user,
puppet_user, and the second is the puppet administrator, puppet_admin. Now,
let's understand what these roles do. A puppet_user role holder can assign
node definitions to Puppet nodes, view all Puppet records, and request
changes in existing nodes. A puppet_admin role holder can create and modify
node definitions, modify Puppet properties, and perform Puppet user
actions.
ServiceNow and Puppet
interaction
In the next section, we will view the interaction in more detail. Let's
understand the main components that drive the entire interaction.
ServiceNow and the Puppet master are integrated by a
scripted web service, which can be viewed by navigating to Puppet | Setup |
ENC Web Service, and which acts as an endpoint. So, how does it work? The
process begins by defining the Puppet master and discovering the various
components, such as modules, classes, and so on; whenever a request
comes to the Puppet master from a Puppet node with a fully qualified
domain name (FQDN), the Puppet master invokes a web service call to
ServiceNow and passes the FQDN. ServiceNow looks in the CMDB for a
matching FQDN and responds to the Puppet master, and that response goes to the
Puppet node, as shown in the following diagram:
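To make the ENC idea concrete, here is a hypothetical, simplified lookup sketched as a shell script: given an FQDN, it returns the classes to apply as YAML, much as the ENC web service answers the Puppet master. The node-to-class mapping is invented purely for illustration:

```shell
# Hypothetical ENC-style lookup (NOT ServiceNow's actual ENC web service).
# Input: a node FQDN. Output: YAML listing the classes to apply.
enc_lookup() {
  local fqdn="$1"
  case "$fqdn" in
    web*.servicenow.in)
      printf 'classes:\n  apache: {}\n' ;;   # web nodes get apache
    db*.servicenow.in)
      printf 'classes:\n  postgresql: {}\n' ;; # db nodes get postgresql
    *)
      printf 'classes: {}\n' ;;              # unknown nodes get nothing
  esac
}

enc_lookup "web01.servicenow.in"
```

A real ENC returns YAML on stdout; in this integration, ServiceNow's scripted web service plays that role, answering with data looked up from the CMDB.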
Managed nodes
After having clicked on Managed Nodes, you should be able to view all
the records that were discovered by the discovery schedule in the
discovery phase, as shown in the following screenshot. It is important to
note that a Puppet agent is installed on each of the nodes, or machines,
and, as per the pull configuration approach, each agent checks in with its
Puppet master, which in turn checks with ServiceNow to know what
configuration should be applied to the Puppet agent:
1. In the managed node, you can view the node definitions that are
applied to the nodes. A node definition is a configuration template
that is applied to the nodes. You may be interested to know that
Puppet automatically configures these node definitions, and they
can be viewed by navigating to Puppet | Node Definitions:
Node definition
2. Now, if you change the node definition, then all the related servers
with that node definition will be changed. So, let's take a record
from the node definition, as shown in the following screenshot,
and click on the Checkout Draft button to make changes in the
related servers:
Node definition
3. If you scroll down, then you can view the classes that were
discovered on the Puppet master by the discovery schedule:
4. Just imagine that you want to add Ruby to related servers. Making
a manual change will be a very challenging task, but through
ServiceNow, you can easily add Ruby on all related servers that
are using the same node definition.
5. After checking it out, you should be able to view the node definition in
draft mode. To add a Ruby class, click on the New button in
the Class Declarations related list and add the Ruby class. To
apply the changes, you need to publish the node definition, so
click on the Publish button again; but this time, the addition will
pass through the standard change management process. After
having clicked on the Publish button, a pop-up should come up on
the screen, as shown in the following screenshot; to proceed with
the change, click on the OK button:
Change request
Introduction to Chef
The Chef product comes in an open source edition, known as Open Source
Chef, and an enterprise edition, known as Chef Enterprise; obviously,
the enterprise product comes with official support.
Chef architecture
The Chef architecture mainly revolves around three major components.
Here, I would like to introduce some key terms that will help you to
understand Chef. The terms are Chef server, workstation, and nodes. Let's
explore these terms and understand what role they play:
Soon after, a standard service catalog will be displayed, and you need to
complete it as shown in the following screenshot. ServiceNow requires at
least two working days to activate the Chef plugin on your instance:
Chef configuration management plugin activation
With regard to the plugin, you need to know that Chef Configuration
Management is available as a separate subscription, and along with that
you also need Orchestration Activities Chef and the Orchestration plugin,
which again are available as separate subscriptions. We have learned
about Chef plugin activation, so moving on, let's understand how
ServiceNow and Chef communicate with each other to achieve better
control over the infrastructure:
Chef process
Chef installation and
configuration
In previous sections, we have seen the architecture of Chef, which helped
us to understand some Chef basics. Now, let's move on to the installation
part of Chef. There are three configurations available for installing Chef:
Ohai
Knife
Chef
Chef supermarket
In previous sections, we have seen that three machines are required; in our
lab environment, three virtual machines are being used. The process begins
with installing VirtualBox. I have used Oracle VirtualBox, which you can
download by visiting https://www.virtualbox.org/wiki/Downloads and
choosing the option as per your operating system:
Download VirtualBox
After the download, install VirtualBox and add three virtual machines
(Chef server, Chef workstation, Chef client) as Oracle virtual machines.
After adding the three virtual machines, you should be able to view something
like the following screenshot, which shows the three virtual machines:
Virtual machines
Chef server installation
We have just created a Chef server virtual machine record, but we have to
load our Linux image (.iso file) as well, to install the Chef server on top
of it. Click on the Settings button, and you should be directed to the
following screen, where you need to give the path of the .iso file, as shown
in the following screenshot, and click on the OK button:
Network configuration
Network connection
4. After having clicked on the Edit... button, you should be able to view
the following screen; here you only need to check the
Connect automatically field and click on the Apply button:
5. Finally, click on the Next button to begin the install. Once the
installation is done, you should be able to view the Reboot option,
and click on it. Finally, our Chef server machine is ready for the
Chef server to be installed on it:
6. Well done! We are done with the basic configuration. Now it's
time to install the Chef server. So, open any standard web browser
and open https://www.chef.io/ and click on the Get started button.
Then, you should be directed to the Options page where you
should select the appropriate version as per your machine, as
shown in the following screenshot. Click on the options and you
will be asked to submit some basic details, after which the Chef
server should be downloaded on your local machine:
communicate with the Chef workstation and the Chef node. Flush
the IP tables to avoid any communication-related issues. To do so,
type the command as shown in the following screenshot:
Flush IP tables
Check status
Start service
12. Finally, if you want to check the status of the Chef server, then it
can be done by typing the following command. The status displays
all the PIDs, or rather the processes, that are running, such as chef-
server-webui, chef-solr, and so on:
Chef status
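The two commands shown in these screenshots are typically the following (a sketch; chef-server-ctl is the Chef server's control utility):

```shell
iptables -F             # flush the firewall rules to avoid connection issues
chef-server-ctl status  # lists the running services and their PIDs
```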
13. Well done! Your Chef server has been configured successfully.
Now, the next move is to install the workstation on the Chef
workstation virtual machine.
Chef workstation installation
In previous sections, we configured the Chef server machine on which the
Chef server was installed; likewise, we must make the Chef workstation
machine ready as well, and the process is similar, but it is important to
note that there should be a different hostname. For example, I have given
the Chef workstation the name shown in the following screenshot, which
will help me to differentiate between the machines:
Chef workstation
Root password
5. Before moving on, to avoid any issues during the installation let's
flush the IP tables first, and that can be done with the command
shown in the following screenshot:
Flush IP tables
Go to certificate directory
9. Let's start with admin.pem; type the command to copy the
admin.pem certificate, as given in the following screenshot. You will
be asked for the password of the Chef workstation, so type it and
wait for a while.
11. Finally, repeat this for the remaining certificates. Type the command
as given in the following screenshot to copy the chef.validator
certificate; again you will be asked for the password of the Chef
workstation, so type it and wait for a while:
12. We have just copied three files from the Chef server to the Chef
workstation and you can view them in the workstation by simply
entering the ll command.
13. Time to switch the machine again; type the command as given in
the following screenshot to create a new directory on the Chef
workstation and copy all certificates in it:
Make directory
14. To copy the certificates in the chef directory, type the command as
given in the following screenshot; likewise you can also copy the
other two certificates in it:
15. Now, if you move inside .chef by giving the following command,
you should be able to view all three certificates:
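These steps can be summarized in the following sketch; the certificate paths are the Chef server defaults of that era and the hostname chefserver is from this lab, so treat both as assumptions:

```shell
# Run on the Chef workstation, as root. Hostname and paths are assumptions.
scp root@chefserver:/etc/chef-server/admin.pem .
scp root@chefserver:/etc/chef-server/chef-validator.pem .
scp root@chefserver:/etc/chef-server/chef-webui.pem .
ll                       # the three .pem files should be listed

mkdir ~/.chef            # knife keeps its configuration and keys here
cp admin.pem chef-validator.pem chef-webui.pem ~/.chef
cd ~/.chef && ll         # verify all three certificates are present
```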
16. To establish communication between the Chef server and the Chef
workstation, we are going to use the command as given in the
following screenshot:
Soon after, you will be asked to enter the location of the server; you
can give https://chefserver.servicenow.in:443 (defined in previous
sections). It is important to note that here you will be asked for a
username; I already have my account, ashish, on the Chef server. You
will be asked for an admin account as well, which is obviously admin;
as a response to Please enter the existing admin name, type admin.
After that, you should enter the validation client name, which can only
be the default, chef-validator. After validating the client name, you will
be asked for its location; the default shown in the brackets is /etc/chef-
server/chef-validator.pem, and you can give the path
/root/.chef/chef.validator.pem:
17. You need to fetch the certificate as well; type the command as
given in the following screenshot and wait for a while:
Fetch certificate
18. To check the status, you need to type the command as given in the
following screenshot, which should return the message successfully
verified from chefserver.servicenow.in:
Check status
19. As the next step, type the following command again and enter Y as a
response to overwrite /root/.chef/knife.rb; the Chef server URL should
be https://chefserver.servicenow.in:443, the name of the new
user should be ashish, and the existing admin name should be admin.
For the existing admin's private key, the default shown in the brackets is
/etc/chef-server/admin.pem, and you can give the path
/root/.chef/admin.pem. In essence, we are just repeating the knife
configuration with the admin's private key:
Knife configure
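The interactive session in these steps roughly corresponds to the following knife commands; the answers shown as comments are this lab's examples, not defaults:

```shell
# On the Chef workstation; answers below are this lab's examples.
knife configure -i
#   Chef server URL:        https://chefserver.servicenow.in:443
#   name for the new user:  ashish
#   existing admin name:    admin
#   existing admin's key:   /root/.chef/admin.pem
#   validation client name: chef-validator
#   validation key:         /root/.chef/chef.validator.pem

knife ssl fetch   # trust the server's self-signed certificate
knife ssl check   # should report: Successfully verified certificates
```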
21. Moving on, let's install the required package; type the following
command in the terminal:
22. Soon after, you will see the following screen, which indicates the
status of the required package installation:
Download required in process
23. Now it's time to install the Chef development package on the Chef
workstation machine and that can be done by using the wget
command, as shown in the following screenshot. You can get the
URL from the Chef site:
Download Chef DK
24. After entering the command, you should be able to view the
installation process, as given in the following screenshot:
25. Once the Chef development kit is installed, you will see the Chef
package, as given in the following screenshot, that will be used to
install the Chef development kit on the machine:
26. Finally, it's time to install the Chef development kit on the Chef
workstation machine. You need to use the command as given in
the following screenshot. It is important to note that we are using
the same package name, chefdk-2.3.4-1.el6.x86_64.rpm:
27. Well done! The Chef development kit has been installed on the
workstation machine successfully, but we must verify all the
components of the Chef development kit, and this can be done by
entering the command as given in the following screenshot. You
may be interested to know that all the components must be in the
successful state:
Chef verify
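The component check shown in the screenshot is ChefDK's built-in verification command:

```shell
# Run the smoke test for every ChefDK component;
# each one should report a successful state
chef verify
```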
Chef client installation
The Chef client, or rather the Chef node, is the last machine that we are
going to configure. As per the pull configuration, it is the Chef node's
responsibility to update itself from the Chef server, but we still need a
machine for it, and that machine will be called the Chef client for the purposes of
this lab. In the previous sections, we saw the installation of the Chef
server and Chef workstation, so let's install the Chef node now:
2. On the same page, you should be able to see the Configure Network
button as well, as we have seen in previous sections. Here
again, we are making changes to the network configuration to
avoid any connection glitches. To make the changes, click on the
Configure Network button as given in the following screenshot:
Configuring the Chef node network
Connect automatically
4. Moving on, you will now be asked to enter the Root Password, as
given in the following screenshot, which will be used to log in to
the Chef node machine:
Flush IP tables
7. Now check the hosts file by typing the command as given in the
following screenshot, and validate whether you are receiving a
ping response from the Chef workstation and the Chef server:
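A sketch of the check, assuming the hostnames used elsewhere in this lab:

```shell
# Confirm the server and workstation entries exist in the hosts file
cat /etc/hosts

# Validate connectivity to the Chef server (hostname from this lab setup)
ping -c 3 chefserver.servicenow.in
```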
9. Now it's time to install the Chef package on the Chef node
machine; the package can be installed by typing the command as
given in the following screenshot. Soon after you hit the Enter
key, the Chef package should be installed:
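The screenshot is not reproduced here; the book installs from a downloaded package, but one documented alternative for installing the Chef client on a Linux node is Chef's omnitruck installer script (shown purely as an assumption):

```shell
# Download and run Chef's installer script, which detects the platform
# and installs the chef-client package for it
curl -L https://omnitruck.chef.io/install.sh | bash
```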
Make directory
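The directory being created in the screenshot is presumably the Chef client's configuration directory, which the validator key and client file will later be placed in:

```shell
# Create the configuration directory the Chef client reads from
mkdir -p /etc/chef
```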
11. Time to switch machines. Now go to the Chef server machine.
On the Chef server terminal, type the command as given in the
following screenshot to copy the file:
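As a sketch, the copy from the Chef server to the node would use scp; the node's hostname and the destination directory are assumptions here:

```shell
# Copy the validator key from the Chef server to the node's chef directory
# (node hostname is illustrative)
scp /etc/chef-server/chef-validator.pem root@chefnode.servicenow.in:/etc/chef/
```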
12. If you use the ll command, then you should be able to view the
Chef validator file on the Chef node machine. Now it's time to
change machines again, so go back to the Chef node machine
and type the command as given in the following screenshot:
13. Now, if you go to the chef directory of the Chef node machine,
you can view the Chef validator file, which is the Chef server's
signature. Moving on, you now need to fetch the certificate from
the Chef server, so use the knife command as given in the
following screenshot:
Fetch certificate
14. Now, if you go to your home directory and then the chef folder, you
should be able to see trusted_certs, which contains the certificate copied
from the Chef server machine:
15. Furthermore, you can validate it by typing the command as given
in the following screenshot. If you receive a message saying that
the certificates were successfully verified from chefserver.servicenow.in,
then everything is correct:
Validation
16. Now, to establish communication between the Chef server and
the Chef node, you need to create a file, so go back into the
chef directory by typing the command, as shown here:
Switch folder
17. Once you are in the chef directory, type the command as given in
the following screenshot to create a new file:
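As a sketch, assuming the configuration directory is /etc/chef and the new file is the standard client configuration file:

```shell
# Move into the Chef directory and create the client configuration file
cd /etc/chef
vi client.rb
```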
18. Once you are in the file, add the attributes as shown in
the following screenshot. Here, it is important to note that we have
mainly given the Chef server URL and the certificate definition here
so that the node can communicate with the server; to save,
type :wq as we have seen in previous sections:
Entries in the client file
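A minimal client file along these lines would do the job; the server URL is this lab's, while the validator path and log settings are typical values shown as assumptions:

```ruby
# /etc/chef/client.rb -- minimal Chef client configuration
log_level        :info
log_location     STDOUT
chef_server_url  "https://chefserver.servicenow.in:443"
validation_key   "/etc/chef/chef-validator.pem"
ssl_verify_mode  :verify_peer
```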
19. Now, we need to join the Chef node to the Chef server, which
means utilizing the certificate. To join it, type the command in the
terminal as given in the following screenshot; after successful
completion, you should see the Chef client finished message:
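The join itself is the first Chef client run, which registers the node with the server using the validator key:

```shell
# First run: registers the node with the Chef server and applies its
# run list; on success it ends with a "Chef Client finished" message
chef-client
```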
20. But how can you verify it? To verify it, you need to utilize the
knife command again, as given in the following screenshot, but not
from the Chef node machine; it will be validated from the
workstation machine only. So, it is time to switch to the
workstation machine and type the command as given in the
following screenshot. This command should return the Chef node
name along with the service's name:
Validation
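From the workstation, the verification command is likely knife's node listing:

```shell
# List all nodes registered with the Chef server;
# the newly joined node should appear in the output
knife node list
```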
21. If you are able to view the Chef node along with the services, this means
that the Chef node has joined the Chef server successfully.
ServiceNow Chef basic
concepts
In the previous sections, we have seen how the Chef Plugin is activated, so,
moving on from that, we should understand some basics from the
ServiceNow point of view as well. Let's understand some important terms
here, such as Chef server, cookbook, recipe, and attributes, to help you
understand the Chef Plugin of ServiceNow:
Chef server: this is the server that maintains all the connected
Chef nodes that are available in your infrastructure.
Managed node
ServiceNow Chef Discovery
In the previous chapters, we read about discovery and its distinct
phases. To populate the ServiceNow CMDB, you do need the
ServiceNow Discovery product license, but that is not the case with Chef.
You might be interested to know that discovery of the Chef components is
done by the Discover Chef workflow, which can be executed after defining
the Chef server within the ServiceNow instance. Furthermore, more than
one Chef server can be defined, and ideally each discovery should be done
separately.
ServiceNow Chef user setup
In the previous sections, we have read about the Chef user account
(ServiceNow) that is used to interact with the Chef server. To create a user,
navigate to Chef | Setup | UserAccount | Users | Create New. You
should be directed to an empty user form, so enter the desired name (you
can even give a name related to Chef), as given in the following
screenshot, and click on the Submit button. It is important to note that you
must define and validate the user by clicking on the Validate key store
alias link, which is available under Related Links. You will be interested to
know that Chef change management is controlled by two properties (in the
sys_properties table) that are false by default. Let's look at these properties:
The Chef server table, cmdb_ci_chef_server, holds all the Chef server records within
the ServiceNow space. To create the Chef server in ServiceNow, navigate
to Chef | Chef Server; after clicking on it, you should be
directed to the blank Chef configuration form, which can be configured as
given in the following screenshot:
Chef server
Here, you will need assistance from your Chef counterpart, or, if you have
additional responsibility for the Chef server, you can refer to the Chef
server configuration on the virtual or physical machine. After filling
in the information, simply click on the Submit button and you should be
directed to the list view of the Chef servers. Click on the newly created
Chef server and scroll down; you should be able to see the related lists, as
given in the following screenshot. To discover the Chef server details,
click on the Discover Chef Details UI action as given in the following
screenshot:
Related links
You may be interested to know that, on the Chef form, you can switch
from the form view to the dashboard. On the form, simply click on the
Dashboard UI action, which should direct you to the dashboard as shown
in the following screenshot:
Chef dashboard
After kicking off Chef discovery from ServiceNow, you should wait for
some time. The related lists for the Chef server (Chef Environments, Chef
Cookbooks, Chef Recipes, Chef Roles, and Chef Attributes) should be
auto-populated from the Chef server as given in the following screenshot:
Key validation
Setting up key stores
In previous sections, we have seen the significance of key stores for the
Chef application. To create key stores for Chef, navigate to Chef | Setup |
Key Stores as given in the following screenshot:
X.509 certificates
To upload a new certificate, click on the New button, which should direct
you to the following form, where you can enter the desired Name and a short
description. It is important to note that two formats are
available for key stores: DER and PEM. For Chef in particular, the format
should be PEM and the type should be key store:
The user account table, chef_user_account, holds the information about the user
account that is used by the Chef resource for interacting with the Chef server.
Use case for change requests
The node definition holds all the configurations that are applied to the
Chef nodes, or rather the Chef clients, in the ServiceNow space. If you
click on the node definition, you should be directed to the list view of
the node definitions of Chef, all of which should be in a published state. In
the previous section, we saw that, for the lab environment setup,
three virtual machines have been used, where one is acting as the Chef
server, the second is acting as the workstation, and the third and last is acting
as the Chef node. Now, I want to apply a change to the Chef node that is
being managed by a node definition, as given in the following screenshot:
Node definition
Now, if you click on the node definition, you should be able to view all
the related lists, such as Chef Node Components, Chef Node Attributes,
and finally Managed Node. Imagine you want to apply a change to
the Chef nodes; you can simply check out the draft. Just after clicking
on it, a popup will be displayed on the screen, as given in the following
screenshot:
As per the system properties of Chef, a control process, namely
change management, is being applied here. Now, simply click on the OK
button and you should be directed to the Change Request form as given in
the following screenshot. It is important to note that ServiceNow
automatically fills in most of the fields:
Change request
Once the change request is submitted, the approval request will be sent to
all the stakeholders of the change. Navigate to Chef
| Node Definition to view its status. After the approval, you should be able to
carry out one more UI action along with the old one (Cancel Change) and
proceed with the change. It's time to apply the changes to the Chef nodes,
and that can be done by simply clicking on the Proceed with Change
button. As per the pull configuration, we know that Chef nodes update
themselves periodically, so the changes will be applied in the same cycle,
when the nodes sync with the master.
Summary
In this chapter, we have learned that Chef is a configuration management
tool that follows the pull configuration. In the pull configuration, it is the
node's responsibility to update itself from the server. This chapter has been
divided into two sections, namely Chef and ServiceNow. Firstly, in the
Chef section, we learned that three major components (Chef server,
workstation, and Chef node), or rather machines, are required to set up a
lab environment. In this book, we have used Oracle VirtualBox to install
all three machines (server, workstation, and node), and the Chef components
have been installed on top of Linux machines. Certificates play an
important role in authentication and in joining the workstation and the Chef
node to the Chef server. Furthermore, in the ServiceNow section, we
saw the different modules of the Chef application and how the Chef
components can be discovered and stored in the ServiceNow CMDB, along
with plugin activation on the ServiceNow customer management portal.
We have also learned about the Chef user and roles that must be used to
interact with the Chef server, along with the importance of the X.509
certificate. In the Chef environment, better control can be achieved by the
change management application; in the Chef space, change
management is controlled by two properties, and to enable change
management these properties must be true.
Other Books You May Enjoy
If you enjoyed this book, you may be interested in these other books by
Packt: