

8.5 Deploying Mule for high availability


Being able to ensure business continuity is one of the main goals of any IT department. Your Mule-driven projects will not escape this rule. Depending on the criticality of the messages that'll flow through your Mule instances, you'll probably have to design your topology so it offers a high availability of service. High availability is generally attained with redundancy and indirection. Redundancy implies several Mule instances running at the same time. Indirection implies no direct calls between client applications and these Mule instances.
An interesting side effect of redundancy and indirection is that you can take Mule instances down at any time without negative impact on the overall availability of your ESB infrastructure. This allows you to perform maintenance operations, such as deploying a new configuration file, without any downtime. In this scenario, each of the Mule instances behind the indirection layer is taken down and brought back up successively.
Figure 8.12 A network load balancer provides high availability to Mule instances
BEST PRACTICE Consider redundancy and indirection whenever you need hot deployments.
Using a network load balancer in front of a pool of similar Mule instances is probably the easiest way to achieve high availability (see figure 8.12). Obviously, this is only an option if the protocol used to reach the Mule instances can be load-balanced (for example, HTTP). With a network load balancer in place, one Mule instance can be taken down, for example, for an upgrade, while the client applications will still be able to send messages to an active instance. As the name suggests, using a load balancer also allows you to handle increases in load gracefully; it'll always be possible to add a new Mule instance to the pool and have it handle part of the load.
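Nothing Mule-specific is required for this topology: every server in the pool runs the same configuration, and the load balancer targets the same port on each of them. As a minimal sketch, assuming Mule 3-style endpoints and a hypothetical orders service listening on port 8081, each instance would declare something like this:

<flow name="orderIntake">
    <http:inbound-endpoint host="0.0.0.0" port="8081"
        path="orders" exchange-pattern="request-response"/>
    <!-- processing steps go here -->
</flow>

Deploying this identical configuration to every server is what makes the instances interchangeable from the load balancer's point of view.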
Another type of indirection layer you can use is a JMS queue concurrently consumed by different Mule instances. No client application will ever talk directly to any Mule instance; all the communications will happen through the queue. Only one Mule instance will pick up a message that's been published in the queue. If one instance goes down, the other ones will take care of picking up messages, provided your JMS middleware supports the competing consumers pattern (see www.eaipatterns.com/CompetingConsumers.html). Moreover, if messages aren't processed fast enough, you can easily throw in an extra Mule instance to pick up part of the load. This implies that you're running a highly available JMS provider that will always be up and available for client applications. The canonical ESB topology, represented in figure 8.13, can therefore easily evolve into a highly available one.
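Implementing competing consumers requires no special configuration: pointing several Mule instances at the same queue is enough, because the JMS provider delivers each message to only one consumer. As a minimal sketch, assuming an ActiveMQ provider and a hypothetical orders queue, every instance would run the same flow:

<jms:activemq-connector name="amqConnector"
    brokerURL="failover:(tcp://broker1:61616,tcp://broker2:61616)"/>

<flow name="orderProcessor">
    <jms:inbound-endpoint queue="orders" connector-ref="amqConnector"/>
    <!-- processing steps go here -->
</flow>

The failover: broker URL is ActiveMQ-specific; it lets each instance reconnect to a surviving broker, complementing the highly available JMS provider this topology relies on.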
Figure 8.13 The canonical ESB deployment topology of Mule instances relies on a central JMS provider

If your Mule instance doesn't contain any kind of session state, then it doesn't matter where the load balancer dispatches a particular request, as all your Mule instances are equal as far as incoming requests are concerned. If, on the other hand, your Mule instance carries any sort of state (for example, idempotency, aggregators, resequencers, or components with their own state) that's necessary to process messages correctly, load balancing won't be enough in your topology, and you'll need a way to share session state between your Mule instances.5 This is usually achieved either with a shared database or with clustering software, depending on what needs to be shared and on performance constraints.


Note that as of this writing, there's no officially supported clustering mechanism for the Mule community edition; you can work around some of the clustering limitations of the community edition using the object stores, as you'll learn in the next section. The Enterprise Edition, however, has full-fledged support for clustering.

Using the Mule Enterprise Edition, all Mule features become cluster aware in a completely transparent fashion. A cluster of Mule Enterprise Edition servers will create a distributed shared memory, as you can see in figure 8.14, that'll contain all the necessary shared state and coordination systems to cluster a Mule application without a specific cluster design in the Mule application.

Figure 8.14 Clustered and load-balanced Mule EE instances

5 One could argue that with source IP stickiness, a load balancer will make a client stick to a particular Mule instance. This is true, but it wouldn't guarantee a graceful failover in case of a crash.

To learn more about Mule Enterprise Edition, the key differences between it and the community edition, and how it can help you with easier clusterization, you can visit the Mule Enterprise Edition site (www.mulesoft.com/mule-esb-enterprise).
At this point, you should have a good understanding of what's involved when designing a topology for highly available Mule instances. This will allow you to ensure continuity of service in case of unexpected events or planned maintenance.
But it's possible that, for your business, this is still not enough. If you deal with sensitive data, you have to design your topology for fault tolerance as well.

8.5.1 High availability via fault tolerance

If you have to ensure that, whatever happens during the processing of a request, no message gets lost at any time, you have to factor fault tolerance into your topology design. In traditional systems, fault tolerance is generally attained by using database transactions, either local or distributed (XA) ones. In the happy world of Mule, because the vast majority of transports don't support the notion of transactions, you'll have to carefully craft both your instance-level and network-level topologies to become truly fault tolerant.

Mule offers a simple way to gain a good level of fault tolerance via its persisted VM queues. When persistence is activated for the VM transport, messages are stored on the filesystem when they move between the different services of a Mule instance, as shown in figure 8.15. In case of a crash, these stored messages will be processed upon restart of the instance. Note that, because it supports XA, the VM transport can be used in combination with other XA-compatible transports in order to guarantee transactional consumption of messages.

Figure 8.15 Simple filesystem-based persisted VM queues are standard with Mule
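As a minimal sketch of what activating persistence for the VM transport can look like, following the Mule 3 VM transport configuration style (the connector, flow, and queue names are hypothetical):

<vm:connector name="persistentVmConnector">
    <vm:queue-profile maxOutstandingMessages="500">
        <default-persistent-queue-store/>
    </vm:queue-profile>
</vm:connector>

<flow name="processOrders">
    <vm:inbound-endpoint path="pendingOrders"
        connector-ref="persistentVmConnector" exchange-pattern="one-way"/>
    <!-- processing steps go here -->
</flow>

Any message sitting in the pendingOrders queue at crash time is read back from the filesystem when the instance restarts.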
The main drawback of VM-persisted queues6 is that you need to restart a dead instance in order to resume processing. This can conflict with high-availability requirements you may have. When this is the case, the best option is to rely on an external, highly available JMS provider and use dedicated queues for all intra-instance communications. This is illustrated in figure 8.16.

Figure 8.16 An HA JMS provider can host queues for all communications within a Mule application
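In practice, this amounts to replacing intra-instance VM endpoints with JMS ones, so that in-flight messages live in the external provider instead of on the local filesystem. Here's a minimal sketch under that assumption, with a hypothetical handOff queue dedicated to the hand-off between two flows of the same application:

<flow name="receiveOrders">
    <jms:inbound-endpoint queue="incomingOrders"/>
    <!-- validation steps go here -->
    <jms:outbound-endpoint queue="handOff"/>
</flow>

<flow name="fulfillOrders">
    <jms:inbound-endpoint queue="handOff"/>
    <!-- fulfillment steps go here -->
</flow>

If an instance dies between the two flows, the message stays safely in the handOff queue until another instance, or the restarted one, consumes it.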

6 Other than forcing you to move around serializable payloads only.

CLUSTERING INTERNAL STATE USING OBJECT STORES

Using the aforementioned mechanisms, you can reach a certain level of fault tolerance with regard to message delivery. But you must be aware that certain moving parts of Mule have an internal state that's necessary to process messages correctly. This state is persisted with an internal framework that's called the object store. The object stores of your application play a key role in the application design, as they contain data that probably should be shared in a clustered application.

This internal state can exist in different moving parts of Mule, such as in the idempotent receiver, which must store the messages it's already received. You may recall using an object store in section 5.2.5, when you were introduced to the idempotent filters. There you learned how to configure an idempotent filter to discard duplicated orders. You can see how it interacts with an object store in figure 8.17.

Figure 8.17 Behavior of an idempotent filter: when a message arrives, the filter calculates a key and a value using the ID and value expressions, tries to store the key/value pair into the object store, and lets the message pass only if the pair wasn't already present; otherwise it discards the message
The community version of Mule offers a set of object stores:

- in-memory-store: A nonclustered object store that stores the contents in local memory
- simple-text-file-store: Stores String objects by key in a text file; only suitable for storing string key/value pairs (see the sketch after this list)
- custom-object-store: Used to refer to custom implementations of the object store; see how to develop your own object stores in section 12.3.4
- managed-store: Gets an object store inside an object store by getting a partition of a ListableObjectStore retrieved from the Mule registry
- spring-object-store: Represents a reference to a Spring-managed object store
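To make this concrete, here's a minimal sketch of the simple-text-file-store plugged into an idempotent filter, in the Mule 3 configuration style (the directory and ID expression are hypothetical):

<idempotent-message-filter
    idExpression="#[message.inboundProperties['orderId']]">
  <simple-text-file-store directory="${mule.working.dir}/idempotent"/>
</idempotent-message-filter>

Because the store is backed by a local file, it survives a restart of that particular instance.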
None of those object stores is inherently clustered. You could use JVM-level clusterization in conjunction with the in-memory-store, or an NFS shared mount point with the simple-text-file-store, but that would be complex at a minimum. Therefore, we'll turn our attention to one of the contributed modules of Mule, the Redis connector (see www.mulesoft.org/extensions/redis-connector).
Redis (http://redis.io/) is an open source, networked, in-memory key/value data store with optional durability. Its design makes Redis especially suitable for use as a Mule object store, and with the Redis connector, using Redis as a Mule object store is straightforward.
Given that Redis will represent an external, highly available object store, you can include it in the previous design of your application (figure 8.16) to store the internal state of your Mule moving parts, as you can see in figure 8.18.

Figure 8.18 An HA Redis provider stores the internal state of the Mule moving parts, alongside the HA JMS provider that hosts the queues for all communications within the Mule application

Prancing Donkey's commitment to high availability is non-negotiable. They've decided to use a Redis server configured for high availability to store the internal state of some moving parts of Mule. Start the high-availability implementation by configuring the connectivity with Redis:
<redis:config name="localRedis" />

This will configure a local, non-password-protected Redis instance running on the standard port, so it should connect straight to a brand-new Redis installation. The server in production will be placed on a different host and will be strengthened with a password. You'll eventually use the host, port, and password attributes to connect to the server.
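For example, the production configuration might look like the following sketch, where the host name and password value are hypothetical placeholders:

<redis:config name="productionRedis"
    host="redis.prancingdonkey.com" port="6379"
    password="${redis.password}"/>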
Now you're ready to use the Redis connector to store the internal state of your processors. Configure the previously mentioned idempotent filter to use Redis as an object store, as in the next listing.
Listing 8.8 Configuring an idempotent filter to use Redis as an object store

<idempotent-message-filter
    idExpression="xpath('/order/id').text">
  <managed-store storeName="localRedis" />    B
</idempotent-message-filter>

B Configure idempotent filter to use an object store

Here you declare an idempotent filter almost identical to the one configured in section 5.2.5. The only exception is found at B, where you instruct the filter to use the object store with an ID equal to localRedis, which you declared before.
Redis isn't the only option that implements an object store; another available extension is, for instance, the MongoDB connector. The Mule Enterprise Edition supplies a myriad of other options, such as JDBC- or Spring cache-based object stores. But not every possible solution is covered by a Mule connector or by an Enterprise Edition feature. You'll learn how to implement your own object store in section 12.3.4.
You've seen that fault tolerance can be achieved in different ways with Mule, depending on the criticality of the data you handle and on the availability of transactions for the transports you use.
