
Table of Contents

Overview
What is Event Hubs?
FAQ
Get Started
Create an Event Hub
Send events
.NET Standard
.NET Framework
Java
C
Receive events
.NET Standard
.NET Framework
Java
Apache Storm
Programming guide
How to
Plan and design
Event Hubs Dedicated
Authentication and security model overview
Availability and consistency
Develop
Available APIs
Authentication and authorization
AMQP 1.0 protocol guide
Manage
Event Hubs management libraries
Archive
Stream Azure Diagnostics data using Event Hubs
Create and deploy an Event Hub using a Resource Manager template
Reference
.NET
Microsoft.Azure.EventHubs
Microsoft.Azure.EventHubs.Processor
Microsoft.ServiceBus.Messaging
Microsoft.Azure.ServiceBus.EventProcessorHost
Microsoft.Azure.Management.EventHub
Java
com.microsoft.azure.eventhubs
com.microsoft.azure.eventprocessorhost
REST
Exceptions
Quotas
Resources
Code samples
Pricing
Learning path
Service updates
Stack Overflow
Videos
What is Azure Event Hubs?

Event Hubs is a highly scalable data streaming platform capable of ingesting millions of events per second. Data
sent to an Event Hub can be transformed and stored using any real-time analytics provider or batching/storage
adapters. With the ability to provide publish-subscribe capabilities with low latency and at massive scale, Event
Hubs serves as the "on ramp" for Big Data.

Why use Event Hubs?


The event and telemetry handling capabilities of Event Hubs make it especially useful for:
Application instrumentation
User experience or workflow processing
Internet of Things (IoT) scenarios
Event Hubs also enables scenarios such as behavior tracking in mobile apps, traffic information from web
farms, in-game event capture in console games, and telemetry collection from industrial machines or connected vehicles.

Azure Event Hubs overview


The common role that Event Hubs plays in solution architectures is the "front door" for an event pipeline, often
called an event ingestor. An event ingestor is a component or service that sits between event publishers and
event consumers to decouple the production of an event stream from the consumption of those events.

Azure Event Hubs is an event processing service that provides cloud-scale event and telemetry ingestion, with
low latency and high reliability. Event Hubs provides a message stream handling capability and has
characteristics that are different from traditional enterprise messaging. Event Hubs capabilities are built around
high throughput and event processing scenarios. As such, Event Hubs does not implement some of the
messaging capabilities that are available for messaging entities, such as topics.
An Event Hub is created at the namespace level, and uses AMQP and HTTP as its primary API interfaces.
Event publishers
Any entity that sends data to an Event Hub is an event publisher. Event publishers can publish events using
HTTPS or AMQP 1.0. Event publishers use a Shared Access Signature (SAS) token to identify themselves to an
Event Hub, and can have a unique identity, or use a common SAS token.
Publishing an event
You can publish an event via AMQP 1.0 or HTTPS. Service Bus provides an EventHubClient class for publishing
events to an Event Hub from .NET clients. For other runtimes and platforms, you can use any AMQP 1.0 client,
such as Apache Qpid. You can publish events individually, or batched. A single publication (event data instance)
has a limit of 256 KB, regardless of whether it is a single event or a batch. Publishing events larger than this
results in an error. It is a best practice for publishers to be unaware of partitions within the Event Hub and to
only specify a partition key (introduced in the next section), or their identity via their SAS token.
The choice between AMQP and HTTPS is specific to the usage scenario. AMQP requires establishing a
persistent bidirectional socket, in addition to transport-level security (TLS/SSL), so it has a higher network
cost when initializing the session; HTTPS, by contrast, incurs additional TLS overhead on every request. For
publishers that send frequently, AMQP offers higher performance.

Event Hubs ensures that all events sharing a partition key value are delivered in order, and to the same partition.
If partition keys are used with publisher policies, then the identity of the publisher and the value of the partition
key must match. Otherwise, an error occurs.
Publisher policy
Event Hubs enables granular control over event publishers through publisher policies. Publisher policies are
run-time features designed to facilitate large numbers of independent event publishers. With publisher policies,
each publisher uses its own unique identifier when publishing events to an Event Hub, using the following
mechanism:

//[my namespace].servicebus.windows.net/[event hub name]/publishers/[my publisher name]

You don't have to create publisher names ahead of time, but they must match the SAS token used when
publishing an event, in order to ensure independent publisher identities. When using publisher policies, the
PartitionKey value is set to the publisher name. To work properly, these values must match.
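To make the mechanism concrete, here is a minimal, hedged C# sketch that posts a single event to a publisher-specific endpoint over HTTPS. It assumes the documented Event Hubs REST send API and a SAS token already scoped to the publisher URI; the namespace, hub, and publisher names below are hypothetical placeholders, not values from this article.

using System;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Text;
using System.Threading.Tasks;

class PublisherSendSketch
{
    static async Task SendViaPublisherEndpointAsync(string sasToken)
    {
        // Hypothetical placeholders; substitute your own namespace, hub, and publisher.
        var uri = "https://mynamespace.servicebus.windows.net/myhub/publishers/device-1234/messages";

        using (var client = new HttpClient())
        {
            var request = new HttpRequestMessage(HttpMethod.Post, uri);
            request.Headers.TryAddWithoutValidation("Authorization", sasToken);
            request.Content = new StringContent("{\"temperature\": 21.5}", Encoding.UTF8);
            request.Content.Headers.ContentType =
                MediaTypeHeaderValue.Parse("application/atom+xml;type=entry;charset=utf-8");

            // A successful send returns 201 Created.
            var response = await client.SendAsync(request);
            response.EnsureSuccessStatusCode();
        }
    }
}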

Partitions
Event Hubs provides message streaming through a partitioned consumer pattern in which each consumer only
reads a specific subset, or partition, of the message stream. This pattern enables horizontal scale for event
processing and provides other stream-focused features that are unavailable in queues and topics.
A partition is an ordered sequence of events that is held in an Event Hub. As newer events arrive, they are added
to the end of this sequence. A partition can be thought of as a "commit log."

Event Hubs retains data for a configured retention time that applies across all partitions in the Event Hub. Events
expire on a time basis; you cannot explicitly delete them. Because partitions are independent and contain their
own sequence of data, they often grow at different rates.

The number of partitions is specified at creation and must be between 2 and 32. The partition count is not
changeable, so you should consider long-term scale when setting partition count. Partitions are a data
organization mechanism that relates to the downstream parallelism required in consuming applications. The
number of partitions in an Event Hub directly relates to the number of concurrent readers you expect to have.
You can increase the number of partitions beyond 32 by contacting the Event Hubs team.
While partitions are identifiable and can be sent to directly, sending directly to a partition is not
recommended. Instead, use the higher-level constructs introduced in the Event publishers and Capacity sections.
Partitions are filled with a sequence of event data that contains the body of the event, a user-defined property
bag, and metadata such as its offset in the partition and its sequence number in the stream.
For more information about partitions and the trade-off between availability and reliability, see the Event Hubs
programming guide and the Availability and consistency in Event Hubs article.
Partition key
You can use a partition key to map incoming event data into specific partitions for the purpose of data
organization. The partition key is a sender-supplied value passed into an Event Hub. It is processed through a
static hashing function, which creates the partition assignment. If you don't specify a partition key when
publishing an event, a round-robin assignment is used.
The event publisher is only aware of its partition key, not the partition to which the events are published. This
decoupling of key and partition insulates the sender from needing to know too much about the downstream
processing. A per-device or per-user unique identity makes a good partition key, but other attributes such as
geography can also be used to group related events into a single partition.
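As a minimal sketch, assuming the Microsoft.Azure.EventHubs client used in the .NET Standard tutorial later in this guide, a sender supplies a partition key through an overload of SendAsync; the device identity and payload here are hypothetical examples.

using System.Text;
using System.Threading.Tasks;
using Microsoft.Azure.EventHubs;

class PartitionKeySendSketch
{
    static async Task SendWithPartitionKeyAsync(EventHubClient eventHubClient)
    {
        // "device-1234" is a hypothetical per-device identity used as the key.
        var eventData = new EventData(Encoding.UTF8.GetBytes("{\"temperature\": 21.5}"));

        // Events sharing a partition key are delivered to the same partition,
        // in order; the sender never names a partition directly.
        await eventHubClient.SendAsync(eventData, "device-1234");
    }
}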
SAS tokens
Event Hubs uses Shared Access Signatures, which are available at the namespace and Event Hub level. A SAS
token is generated from a SAS key and is an HMAC-SHA256 hash of a resource URI, encoded in a specific format.
Using the name of the key (policy) and the token, Event Hubs can regenerate the hash and thus authenticate the
sender. Normally, SAS tokens for event publishers are created with only send privileges on a specific Event Hub.
This SAS token URL mechanism is the basis for publisher identification introduced in the publisher policy. For
more information about working with SAS, see Shared Access Signature Authentication with Service Bus.
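This article does not show the token format itself; as a hedged illustration, the following C# helper produces a token in the commonly documented Service Bus SAS format (an HMAC-SHA256 signature over the URL-encoded resource URI and an expiry). Treat the exact format as an assumption and verify it against the linked SAS documentation.

using System;
using System.Globalization;
using System.Net;
using System.Security.Cryptography;
using System.Text;

class SasTokenSketch
{
    // Returns a token of the assumed form:
    // SharedAccessSignature sr={uri}&sig={signature}&se={expiry}&skn={policyName}
    static string CreateSasToken(string resourceUri, string keyName, string key)
    {
        // Expiry is expressed in seconds since the Unix epoch; one hour here.
        long expiry = DateTimeOffset.UtcNow.AddHours(1).ToUnixTimeSeconds();
        string encodedUri = WebUtility.UrlEncode(resourceUri);
        string stringToSign = encodedUri + "\n" + expiry.ToString(CultureInfo.InvariantCulture);

        using (var hmac = new HMACSHA256(Encoding.UTF8.GetBytes(key)))
        {
            string signature = Convert.ToBase64String(
                hmac.ComputeHash(Encoding.UTF8.GetBytes(stringToSign)));
            return $"SharedAccessSignature sr={encodedUri}&sig={WebUtility.UrlEncode(signature)}" +
                   $"&se={expiry}&skn={keyName}";
        }
    }
}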

Event consumers
Any entity that reads event data from an Event Hub is an event consumer. All Event Hubs consumers connect via
an AMQP 1.0 session, and events are delivered through the session as they become available. The client does
not need to poll for data availability.
Consumer groups
The publish/subscribe mechanism of Event Hubs is enabled through consumer groups. A consumer group is a
view (state, position, or offset) of an entire Event Hub. Consumer groups enable multiple consuming
applications to each have a separate view of the event stream, and to read the stream independently at their
own pace and with their own offsets.
In a stream processing architecture, each downstream application equates to a consumer group. If you want to
write event data to long-term storage, then that storage writer application is a consumer group. Complex event
processing can then be performed by another, separate consumer group. You can only access partitions through
a consumer group. Each partition can only have one active reader from a given consumer group at a time.
There is always a default consumer group in an Event Hub, and you can create up to 20 consumer groups for a
Standard tier Event Hub.
The following are examples of the consumer group URI convention:

//[my namespace].servicebus.windows.net/[event hub name]/[Consumer Group #1]


//[my namespace].servicebus.windows.net/[event hub name]/[Consumer Group #2]
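For example, a storage writer application can read through its own consumer group by passing the group name to the EventProcessorHost constructor used in the receive tutorial later in this guide; "storagewriter" below is a hypothetical group name that must already exist on the Event Hub.

using Microsoft.Azure.EventHubs.Processor;

// Each consuming application reads through its own consumer group.
var eventProcessorHost = new EventProcessorHost(
    "{Event Hub path/name}",
    "storagewriter",                     // hypothetical, pre-created consumer group
    "{Event Hubs connection string}",
    "{Storage connection string}",
    "{Storage container name}");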

Stream offsets
An offset is the position of an event within a partition. You can think of an offset as a client-side cursor. The
offset is a byte numbering of the event. This offset enables an event consumer (reader) to specify a point in the
event stream from which they want to begin reading events. You can specify the offset as a timestamp or as an
offset value. Consumers are responsible for storing their own offset values outside of the Event Hubs service.
Within a partition, each event includes an offset.
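As a sketch only, assuming the PartitionReceiver API in the Microsoft.Azure.EventHubs client used later in this guide, a consumer can resume a partition from an offset it stored itself; the partition ID and stored offset below are hypothetical values.

using Microsoft.Azure.EventHubs;

class OffsetReceiverSketch
{
    static PartitionReceiver CreateReceiverFromStoredOffset(EventHubClient eventHubClient)
    {
        // "1024" is a hypothetical offset read from the consumer's own store;
        // PartitionReceiver.StartOfStream reads a partition from the beginning.
        string storedOffset = "1024";
        return eventHubClient.CreateReceiver(
            PartitionReceiver.DefaultConsumerGroupName, "0", storedOffset);
    }
}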

Checkpointing
Checkpointing is a process by which readers mark or commit their position within a partition event sequence.
Checkpointing is the responsibility of the consumer and occurs on a per-partition basis within a consumer
group. This responsibility means that for each consumer group, each partition reader must keep track of its
current position in the event stream, and can inform the service when it considers the data stream complete.
If a reader disconnects from a partition, when it reconnects it begins reading at the checkpoint that was
previously submitted by the last reader of that partition in that consumer group. When the reader connects, it
passes this offset to the Event Hub to specify the location at which to start reading. In this way, you can use
checkpointing to both mark events as "complete" by downstream applications, and to provide resiliency if a
failover between readers running on different machines occurs. It is possible to return to older data by
specifying a lower offset from this checkpointing process. Through this mechanism, checkpointing enables both
failover resiliency and event stream replay.
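In code, checkpointing is a single call on the partition context. The sketch below mirrors the IEventProcessor implementations in the receive tutorials later in this guide; it is illustrative, not a complete processor.

using System.Collections.Generic;
using System.Threading.Tasks;
using Microsoft.Azure.EventHubs;
using Microsoft.Azure.EventHubs.Processor;

class CheckpointingProcessorSketch
{
    public Task ProcessEventsAsync(PartitionContext context, IEnumerable<EventData> messages)
    {
        foreach (var eventData in messages)
        {
            // ... process the event ...
        }

        // Records this reader's position in the partition, within its consumer
        // group, so a restart or failover resumes from here instead of replaying.
        return context.CheckpointAsync();
    }
}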
Common consumer tasks
All Event Hubs consumers connect via an AMQP 1.0 session and state-aware bidirectional communication
channel. Each partition has an AMQP 1.0 session that facilitates the transport of events segregated by partition.
Connect to a partition
When connecting to partitions, it is common practice to use a leasing mechanism to coordinate reader
connections to specific partitions. This way, it is possible for every partition in a consumer group to have only
one active reader. Checkpointing, leasing, and managing readers are simplified by using the EventProcessorHost
class for .NET clients. The Event Processor Host is an intelligent consumer agent.
Read events
After an AMQP 1.0 session and link are opened for a specific partition, events are delivered to the AMQP 1.0 client
by the Event Hubs service. This delivery mechanism enables higher throughput and lower latency than pull-
based mechanisms such as HTTP GET. As events are sent to the client, each event data instance contains
important metadata such as the offset and sequence number that are used to facilitate checkpointing on the
event sequence.
Event data:
Offset
Sequence number
Body
User properties
System properties
It is your responsibility to manage the offset.
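Continuing the PartitionReceiver sketch from the Stream offsets section, and again as an assumption-laden illustration rather than the article's own sample, a consumer can pull a batch and inspect the metadata used for checkpointing:

using System;
using System.Text;
using System.Threading.Tasks;
using Microsoft.Azure.EventHubs;

class ReadEventsSketch
{
    static async Task ReadBatchAsync(PartitionReceiver receiver)
    {
        // Pull up to 10 events from the partition.
        var events = await receiver.ReceiveAsync(10);
        if (events == null) return; // no events arrived within the receive timeout

        foreach (EventData eventData in events)
        {
            string body = Encoding.UTF8.GetString(
                eventData.Body.Array, eventData.Body.Offset, eventData.Body.Count);
            Console.WriteLine($"Offset: {eventData.SystemProperties.Offset}, " +
                $"Sequence: {eventData.SystemProperties.SequenceNumber}, Body: {body}");
        }
    }
}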

Capacity
Event Hubs has a highly scalable parallel architecture and there are several key factors to consider when sizing
and scaling.
Throughput units
The throughput capacity of Event Hubs is controlled by throughput units. Throughput units are pre-purchased
units of capacity. A single throughput unit includes the following capacity:
Ingress: Up to 1 MB per second or 1000 events per second (whichever comes first)
Egress: Up to 2 MB per second
Beyond the capacity of the purchased throughput units, ingress is throttled and a ServerBusyException is
returned. Egress does not produce throttling exceptions, but is still limited to the capacity of the purchased
throughput units. If you receive publishing rate exceptions or are expecting to see higher egress, be sure to
check how many throughput units you have purchased for the namespace. You can manage throughput units
on the Scale blade of the namespaces in the Azure portal. You can also manage throughput units
programmatically using the Azure APIs.
Throughput units are billed per hour and are pre-purchased. Once purchased, throughput units are billed for a
minimum of one hour. Up to 20 throughput units can be purchased for an Event Hubs namespace and are
shared across all Event Hubs in the namespace.
More throughput units can be purchased in blocks of 20, up to 100 throughput units, by contacting Azure
support. Beyond that, you can also purchase blocks of 100 throughput units.
It is recommended that you balance throughput units and partitions to achieve optimal scale. A single partition
has a maximum scale of one throughput unit. The number of throughput units should be less than or equal to
the number of partitions in an Event Hub.
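As a worked example: suppose you expect 4 MB per second of ingress and want consumers to keep up at 8 MB per second of egress. Four throughput units cover both (4 x 1 MB/s in, 4 x 2 MB/s out), and because a single partition can use at most one throughput unit, the Event Hub needs at least four partitions; choosing more (say, eight) leaves headroom to add throughput units later without re-creating the hub.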
For detailed pricing information, see Event Hubs Pricing.

Next steps
Get started with an Event Hubs tutorial
Event Hubs programming guide
Availability and consistency in Event Hubs
Event Hubs FAQ
Sample applications that use Event Hubs
Event Hubs frequently asked questions

General
What is the difference between Event Hubs Basic and Standard tiers?
The Standard tier of Azure Event Hubs provides features beyond what is available in the Basic tier. The following
features are included with Standard:
Longer event retention
Additional brokered connections, with an overage charge for more than the number included
More than a single Consumer Group
Archive
For more details regarding pricing tiers, including Dedicated Event Hubs, see the Event Hubs pricing details.
What are Event Hubs throughput units?
You explicitly select Event Hubs throughput units, either through the Azure portal or Event Hubs Resource
Manager templates. Throughput units apply to all Event Hubs in an Event Hubs namespace, and each throughput
unit entitles the namespace to the following capabilities:
Up to 1 MB per second of ingress events (events sent into an Event Hub), but no more than 1000 ingress
events, management operations, or control API calls per second.
Up to 2 MB per second of egress events (events consumed from an Event Hub).
Up to 84 GB of event storage (sufficient for the default 24-hour retention period).
Event Hubs throughput units are billed hourly, based on the maximum number of units selected during the given
hour.
How are Event Hubs throughput unit limits enforced?
If the total ingress throughput or the total ingress event rate across all Event Hubs in a namespace exceeds the
aggregate throughput unit allowances, senders will be throttled and will receive errors indicating that the ingress
quota has been exceeded.
If the total egress throughput or the total event egress rate across all Event Hubs in a namespace exceeds the
aggregate throughput unit allowances, receivers will be throttled and will receive errors indicating that the egress
quota has been exceeded. Ingress and egress quotas are enforced separately, so that no sender can cause event
consumption to slow down, nor can a receiver prevent events from being sent into an Event Hub.
Is there a limit on the number of throughput units that can be selected?
There is a default quota of 20 throughput units per namespace. You can request a larger quota of throughput
units by filing a support ticket. Beyond the 20 throughput unit limit, bundles are available in 20 and 100
throughput units. Note that using more than 20 throughput units removes the ability to change the number of
throughput units without filing a support ticket.
Can I use a single AMQP connection to send and receive from multiple Event Hubs?
Yes, as long as all of the Event Hubs are in the same namespace.
What is the maximum retention period for events?
Event Hubs Standard tier currently supports a maximum retention period of 7 days. Note that Event Hubs is not
intended as a permanent data store. Retention periods greater than 24 hours are intended for scenarios in which
it is convenient to replay an event stream into the same systems; for example, to train or verify a new machine
learning model on existing data. If you need message retention beyond 7 days, enabling Archive on your Event
Hub will pull the data from your Event Hub to the storage of your choosing. Enabling Archive will incur a charge
based on your purchased Throughput Unit.
Where is Azure Event Hubs available?
Azure Event Hubs is available in all supported Azure regions. For a list, visit the Azure regions page.

Best practices
How many partitions do I need?
Please keep in mind that the partition count on an Event Hub cannot be modified after setup. With that in mind, it
is important to think about how many partitions you need before getting started.
Event Hubs is designed to allow a single partition reader per consumer group. In most use cases, the default
setting of four partitions is sufficient. If you are looking to scale your event processing, you may want to consider
adding additional partitions. There is no specific throughput limit on a partition, however the aggregate
throughput in your namespace is limited by the number of throughput units. As you increase the number of
throughput units in your namespace, you may want additional partitions to allow concurrent readers to achieve
their own maximum throughput.
However, if you have a model in which your application has an affinity to a particular partition, increasing the
number of partitions may not be of any benefit. For more information, see Availability and consistency.

Pricing
Where can I find more pricing information?
For complete information about Event Hubs pricing, see the Event Hubs pricing details.
Is there a charge for retaining Event Hubs events for more than 24 hours?
The Event Hubs Standard tier does allow message retention periods longer than 24 hours, for a maximum of 7
days. If the size of the total number of stored events exceeds the storage allowance for the number of selected
throughput units (84 GB per throughput unit), the size that exceeds the allowance is charged at the published
Azure Blob storage rate. The storage allowance in each throughput unit covers all storage costs for retention
periods of 24 hours (the default) even if the throughput unit is used up to the maximum ingress allowance.
How is the Event Hubs storage size calculated and charged?
The total size of all stored events, including any internal overhead for event headers or on-disk storage structures
in all Event Hubs, is measured throughout the day. At the end of the day, the peak storage size is calculated. The
daily storage allowance is calculated based on the minimum number of throughput units that were selected
during the day (each throughput unit provides an allowance of 84 GB). If the total size exceeds the calculated
daily storage allowance, the excess storage is billed using Azure Blob storage rates (at the Locally Redundant
Storage rate).
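As a worked example: with two throughput units selected for the whole day, the daily storage allowance is 2 x 84 GB = 168 GB. If the peak size of all stored events that day is measured at 200 GB, the 32 GB excess is billed at the locally redundant Azure Blob storage rate.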
How are Event Hubs ingress events calculated?
Each event sent to an Event Hub counts as a billable message. An ingress event is defined as a unit of data that is
less than or equal to 64 KB. Any event that is less than or equal to 64 KB in size is considered to be one billable
event. If the event is greater than 64 KB, the number of billable events is calculated according to the event size, in
multiples of 64 KB. For example, an 8 KB event sent to the Event Hub is billed as one event, but a 96 KB message
sent to the Event Hub is billed as two events.
Events consumed from an Event Hub, as well as management operations and control calls such as checkpoints,
are not counted as billable ingress events, but do count against the throughput unit allowance.
Do brokered connection charges apply to Event Hubs?
Connection charges apply only when the AMQP protocol is used. There are no connection charges for sending
events using HTTP, regardless of the number of sending systems or devices. If you plan to use AMQP (for
example, to achieve more efficient event streaming or to enable bi-directional communication in IoT command
and control scenarios), please refer to the Event Hubs pricing information page for details regarding how many
connections are included in each service tier.
How is Event Hubs Archive billed?
Archive is enabled when any Event Hub in the namespace has the Archive feature enabled. Archive is billed
hourly per purchased Throughput Unit. As the Throughput Unit count is increased or decreased, Event Hubs
Archive billing reflects these changes in whole-hour increments. Please refer to the Event Hubs pricing
information page for details regarding Event Hubs Archive billing.
Will I be billed for the storage account I select for Event Hubs Archive?
Archive uses a storage account you provide when it is enabled on an Event Hub. Because this is your storage
account, any charges for it are billed to your Azure subscription.

Quotas
Are there any quotas associated with Event Hubs?
For a list of all Event Hubs quotas, see quotas.

Troubleshooting
What are some of the exceptions generated by Event Hubs and their suggested actions?
For a list of possible Event Hubs exceptions, see Exceptions overview.
Diagnostic logs
Event Hubs supports two types of diagnostic logs: Archive error logs and operational logs. Both are
represented in JSON and can be turned on through the Azure portal.
Support and SLA
Technical support for Event Hubs is available through the community forums. Billing and subscription
management support is provided at no cost.
To learn more about our SLA, see the Service Level Agreements page.

Next steps
You can learn more about Event Hubs by visiting the following links:
Event Hubs overview
Create an Event Hub
Create an Event Hubs namespace and an Event Hub using the Azure portal

Create an Event Hubs namespace


1. Log on to the Azure portal, and click New at the top left of the screen.
2. Click Internet of Things, then click Event Hubs.

3. In the Create namespace blade, enter a namespace name. The system immediately checks to see if the
name is available.
4. After making sure the namespace name is available, choose the pricing tier (Basic or Standard). Also, choose
an Azure subscription, resource group, and location in which to create the resource.
5. Click Create to create the namespace.
6. In the Event Hubs namespace list, click the newly-created namespace.

7. In the namespace blade, click Event Hubs.


Create an Event Hub
1. At the top of the blade, click Add Event Hub.
2. Type a name for your Event Hub, then click Create.
3. In the list of Event Hubs, click the newly created Event Hub name.

4. Back in the namespace blade (not the specific Event Hub blade), click Shared access policies, and then
click RootManageSharedAccessKey.
5. Click the copy button to copy the RootManageSharedAccessKey connection string to the clipboard.
Save this connection string to use later in the tutorial.

Your Event Hub is now created, and you have the connection strings you need to send and receive events.

Next steps
To learn more about Event Hubs, visit these links:
Event Hubs overview
Event Hubs API overview
Get started sending messages to Azure Event Hubs in .NET Standard

NOTE
This sample is available on GitHub.

This tutorial shows how to write a .NET Core console application that sends a set of messages to an Event Hub. You
can run the GitHub solution as-is, replacing the EhConnectionString and EhEntityPath strings with your Event Hub
values. Or you can follow the steps in this tutorial to create your own.

Prerequisites
Microsoft Visual Studio 2015 or 2017. The examples in this tutorial use Visual Studio 2015, but Visual Studio
2017 is also supported.
.NET Core Visual Studio 2015 or 2017 tools.
An Azure subscription.
An Event Hubs namespace.
To send messages to an Event Hub, we will use Visual Studio to write a C# console application.

Create an Event Hubs namespace and an Event Hub


The first step is to use the Azure portal to create a namespace of type Event Hubs, and obtain the management
credentials that your application needs to communicate with the Event Hub. To create a namespace and an Event
Hub, follow the procedure in this article, and then proceed with the following steps.

Create a console application


Start Visual Studio. From the File menu, click New, and then click Project. Create a .NET Core console application.
Add the Event Hubs NuGet package
Add the Microsoft.Azure.EventHubs NuGet package to your project.

Write some code to send messages to the Event Hub


1. Add the following using statements to the top of the Program.cs file.

using Microsoft.Azure.EventHubs;
using System.Text;

2. Add constants to the Program class for the Event Hubs connection string and entity path (individual Event
Hub name). Replace the placeholders in brackets with the proper values that were obtained when creating
the Event Hub.

private static EventHubClient eventHubClient;


private const string EhConnectionString = "{Event Hubs connection string}";
private const string EhEntityPath = "{Event Hub path/name}";

3. Add a new method named MainAsync to the Program class, as follows:


private static async Task MainAsync(string[] args)
{
// Creates an EventHubsConnectionStringBuilder object from the connection string,
// and sets the EntityPath. Typically, the connection string should have the
// entity path in it, but for the sake of this simple scenario we are using
// the connection string from the namespace.
var connectionStringBuilder = new EventHubsConnectionStringBuilder(EhConnectionString)
{
EntityPath = EhEntityPath
};

eventHubClient = EventHubClient.CreateFromConnectionString(connectionStringBuilder.ToString());

await SendMessagesToEventHub(100);

await eventHubClient.CloseAsync();

Console.WriteLine("Press ENTER to exit.");


Console.ReadLine();
}

4. Add a new method named SendMessagesToEventHub to the Program class like the following:

// Creates an Event Hub client and sends 100 messages to the event hub.
private static async Task SendMessagesToEventHub(int numMessagesToSend)
{
for (var i = 0; i < numMessagesToSend; i++)
{
try
{
var message = $"Message {i}";
Console.WriteLine($"Sending message: {message}");
await eventHubClient.SendAsync(new EventData(Encoding.UTF8.GetBytes(message)));
}
catch (Exception exception)
{
Console.WriteLine($"{DateTime.Now} > Exception: {exception.Message}");
}

await Task.Delay(10);
}

Console.WriteLine($"{numMessagesToSend} messages sent.");


}

5. Add the following code to the Main method in the Program class.

MainAsync(args).GetAwaiter().GetResult();

Here is what your Program.cs should look like.


namespace SampleSender
{
using System;
using System.Text;
using System.Threading.Tasks;
using Microsoft.Azure.EventHubs;

public class Program


{
private static EventHubClient eventHubClient;
private const string EhConnectionString = "{Event Hubs connection string}";
private const string EhEntityPath = "{Event Hub path/name}";

public static void Main(string[] args)


{
MainAsync(args).GetAwaiter().GetResult();
}

private static async Task MainAsync(string[] args)


{
// Creates an EventHubsConnectionStringBuilder object from the connection string,
// and sets the EntityPath. Typically, the connection string should have the
// entity path in it, but for the sake of this simple scenario we are using
// the connection string from the namespace.
var connectionStringBuilder = new EventHubsConnectionStringBuilder(EhConnectionString)
{
EntityPath = EhEntityPath
};

eventHubClient =
EventHubClient.CreateFromConnectionString(connectionStringBuilder.ToString());

await SendMessagesToEventHub(100);

await eventHubClient.CloseAsync();

Console.WriteLine("Press ENTER to exit.");


Console.ReadLine();
}

// Creates an Event Hub client and sends 100 messages to the event hub.
private static async Task SendMessagesToEventHub(int numMessagesToSend)
{
for (var i = 0; i < numMessagesToSend; i++)
{
try
{
var message = $"Message {i}";
Console.WriteLine($"Sending message: {message}");
await eventHubClient.SendAsync(new EventData(Encoding.UTF8.GetBytes(message)));
}
catch (Exception exception)
{
Console.WriteLine($"{DateTime.Now} > Exception: {exception.Message}");
}

await Task.Delay(10);
}

Console.WriteLine($"{numMessagesToSend} messages sent.");


}
}
}

6. Run the program, and ensure that there are no errors.


Congratulations! You have now sent messages to an Event Hub.

Next steps
You can learn more about Event Hubs by visiting the following links:
Receive events from Event Hubs
Event Hubs overview
Create an Event Hub
Event Hubs FAQ
Send events to Azure Event Hubs using the .NET Framework

Introduction
Event Hubs is a service that processes large amounts of event data (telemetry) from connected devices and
applications. After you collect data into Event Hubs, you can store the data using a storage cluster or transform it
using a real-time analytics provider. This large-scale event collection and processing capability is a key component
of modern application architectures including the Internet of Things (IoT).
This tutorial shows how to use the Azure portal to create an Event Hub. It also shows how to send events to an
Event Hub using a console application written in C# using the .NET Framework. To receive events using the .NET
Framework, see the Receive events using the .NET Framework article, or click the appropriate receiving language in
the left-hand table of contents.
To complete this tutorial, you'll need the following:
Microsoft Visual Studio 2015 or higher. The screenshots in this tutorial use Visual Studio 2017.
An active Azure account. If you don't have one, you can create a free account in just a couple of minutes. For
details, see Azure Free Trial.

Create an Event Hubs namespace and an Event Hub


The first step is to use the Azure portal to create a namespace of type Event Hubs, and obtain the management
credentials your application needs to communicate with the Event Hub. To create a namespace and Event Hub,
follow the procedure in this article, then proceed with the following steps.

Create a console application


In this section, you'll write a Windows console app that sends events to your Event Hub.
1. In Visual Studio, create a new Visual C# Desktop App project using the Console Application project
template. Name the project Sender.
2. In Solution Explorer, right-click the Sender project, and then click Manage NuGet Packages for Solution.
3. Click the Browse tab, then search for Microsoft Azure Service Bus. Click Install, and accept the terms of use.

Visual Studio downloads, installs, and adds a reference to the Azure Service Bus library NuGet package.
4. Add the following using statements at the top of the Program.cs file:

using System.Threading;
using Microsoft.ServiceBus.Messaging;

5. Add the following fields to the Program class, substituting the placeholder values with the name of the
Event Hub you created in the previous section, and the namespace-level connection string you saved
previously.
static string eventHubName = "{Event Hub name}";
static string connectionString = "{send connection string}";

6. Add the following method to the Program class:

static void SendingRandomMessages()


{
var eventHubClient = EventHubClient.CreateFromConnectionString(connectionString, eventHubName);
while (true)
{
try
{
var message = Guid.NewGuid().ToString();
Console.WriteLine("{0} > Sending message: {1}", DateTime.Now, message);
eventHubClient.Send(new EventData(Encoding.UTF8.GetBytes(message)));
}
catch (Exception exception)
{
Console.ForegroundColor = ConsoleColor.Red;
Console.WriteLine("{0} > Exception: {1}", DateTime.Now, exception.Message);
Console.ResetColor();
}

Thread.Sleep(200);
}
}

This method continuously sends events to your Event Hub with a 200-ms delay.
7. Finally, add the following lines to the Main method:

Console.WriteLine("Press Ctrl-C to stop the sender process");


Console.WriteLine("Press Enter to start now");
Console.ReadLine();
SendingRandomMessages();

8. Run the program, and ensure that there are no errors.


Congratulations! You have now sent messages to an Event Hub.

Next steps
Now that you've built a working application that creates an Event Hub and sends data, you can move on to the
following scenarios:
Receive events using the Event Processor Host
Event Processor Host reference
Event Hubs overview
Send events to Azure Event Hubs using Java

Introduction
Event Hubs is a highly scalable ingestion system that can ingest millions of events per second, enabling an
application to process and analyze the massive amounts of data produced by your connected devices and
applications. Once collected into Event Hubs, you can transform and store data using any real-time analytics
provider or storage cluster.
For more information, see the Event Hubs overview.
This tutorial shows how to send events to an Event Hub using a console application in Java. To receive events using
the Java Event Processor Host library, see this article, or click the appropriate receiving language in the left-hand
table of contents.
In order to complete this tutorial, you will need the following:
A Java development environment. For this tutorial, we will assume Eclipse.
An active Azure account.
If you don't have an account, you can create a free account in just a couple of minutes. For details, see Azure
Free Trial.

Send messages to Event Hubs


The Java client library for Event Hubs is available for use in Maven projects from the Maven Central Repository, and
can be referenced using the following dependency declaration inside your Maven project file:

<dependency>
<groupId>com.microsoft.azure</groupId>
<artifactId>azure-eventhubs</artifactId>
<version>{VERSION}</version>
</dependency>

For different types of build environments, you can explicitly obtain the latest released JAR files from the Maven
Central Repository or from the release distribution point on GitHub.
For a simple event publisher, import the com.microsoft.azure.eventhubs package for the Event Hubs client classes
and the com.microsoft.azure.servicebus package for utility classes such as common exceptions that are shared with
the Azure Service Bus messaging client.
For the following sample, first create a new Maven project for a console/shell application in your favorite Java
development environment. The class will be called Send.
import java.io.IOException;
import java.nio.charset.*;
import java.util.*;
import java.util.concurrent.ExecutionException;

import com.microsoft.azure.eventhubs.*;
import com.microsoft.azure.servicebus.*;

public class Send


{
public static void main(String[] args)
throws ServiceBusException, ExecutionException, InterruptedException, IOException
{

Replace the namespace and Event Hub names with the values used when you created the Event Hub.

final String namespaceName = "----ServiceBusNamespaceName-----";


final String eventHubName = "----EventHubName-----";
final String sasKeyName = "-----SharedAccessSignatureKeyName-----";
final String sasKey = "---SharedAccessSignatureKey----";
ConnectionStringBuilder connStr = new ConnectionStringBuilder(namespaceName, eventHubName, sasKeyName,
sasKey);

Then, create a single event by turning a string into its UTF-8 byte encoding. Next, create a new Event Hubs
client instance from the connection string and send the message.

byte[] payloadBytes = "Test AMQP message from JMS".getBytes("UTF-8");


EventData sendEvent = new EventData(payloadBytes);

EventHubClient ehClient = EventHubClient.createFromConnectionStringSync(connStr.toString());


ehClient.sendSync(sendEvent);
}
}

Next steps
You can learn more about Event Hubs by visiting the following links:
Receive events using the EventProcessorHost
Event Hubs overview
Create an Event Hub
Event Hubs FAQ
Send events to Azure Event Hubs using C

Introduction
Event Hubs is a highly scalable ingestion system that can ingest millions of events per second, enabling an
application to process and analyze the massive amounts of data produced by your connected devices and
applications. Once collected into Event Hubs, you can transform and store data using any real-time analytics
provider or storage cluster.
For more information, please see the Event Hubs overview.
In this tutorial, you will learn how to send events to an Event Hub using a console application in C. To receive
events, click the appropriate receiving language in the left-hand table of contents.
To complete this tutorial, you will need the following:
A C development environment. For this tutorial, we will assume the gcc stack on an Azure Linux VM with Ubuntu
14.04.
Microsoft Visual Studio, or Visual Studio Community Edition
An active Azure account. If you don't have an account, you can create a free trial account in just a couple of
minutes. For details, see Azure Free Trial.

Send messages to Event Hubs


In this section we will write a C app to send events to your Event Hub. We will use the Proton AMQP library from
the Apache Qpid project. This is analogous to using Service Bus queues and topics with AMQP from C as shown
here. For more information, see Qpid Proton documentation.
1. From the Qpid AMQP Messenger page, click the Installing Qpid Proton link and follow the instructions
depending on your environment.
2. To compile the Proton library, install the following packages:

sudo apt-get install build-essential cmake uuid-dev openssl libssl-dev

3. Download the Qpid Proton library, and extract it, e.g.:

wget http://archive.apache.org/dist/qpid/proton/0.7/qpid-proton-0.7.tar.gz
tar xvfz qpid-proton-0.7.tar.gz

4. Create a build directory, compile and install:

cd qpid-proton-0.7
mkdir build
cd build
cmake -DCMAKE_INSTALL_PREFIX=/usr ..
sudo make install

5. In your work directory, create a new file called sender.c with the following content. Remember to substitute
the values for your Event Hub name and namespace name (the latter is usually {event hub name}-ns). You
must also substitute a URL-encoded version of the key for the SendRule created earlier. You can URL-
encode it here.

#include "proton/message.h"
#include "proton/messenger.h"

#include <getopt.h>
#include <proton/util.h>
#include <sys/time.h>
#include <stddef.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <stdlib.h>

#define check(messenger)                                                       \
{                                                                              \
  if(pn_messenger_errno(messenger))                                            \
  {                                                                            \
    printf("check\n");                                                         \
    die(__FILE__, __LINE__, pn_error_text(pn_messenger_error(messenger)));     \
  }                                                                            \
}

pn_timestamp_t time_now(void)
{
struct timeval now;
if (gettimeofday(&now, NULL)) pn_fatal("gettimeofday failed\n");
return ((pn_timestamp_t)now.tv_sec) * 1000 + (now.tv_usec / 1000);
}

void die(const char *file, int line, const char *message)


{
printf("Dead\n");
fprintf(stderr, "%s:%i: %s\n", file, line, message);
exit(1);
}

int sendMessage(pn_messenger_t * messenger) {


char * address = (char *) "amqps://SendRule:{Send Rule key}@{namespace name}.servicebus.windows.net/{event hub name}";
char * msgtext = (char *) "Hello from C!";

pn_message_t * message;
pn_data_t * body;
message = pn_message();

pn_message_set_address(message, address);
pn_message_set_content_type(message, (char*) "application/octet-stream");
pn_message_set_inferred(message, true);

body = pn_message_body(message);
pn_data_put_binary(body, pn_bytes(strlen(msgtext), msgtext));

pn_messenger_put(messenger, message);
check(messenger);
pn_messenger_send(messenger, 1);
check(messenger);

pn_message_free(message);
return 0;
}

int main(int argc, char** argv) {


printf("Press Ctrl-C to stop the sender process\n");

pn_messenger_t *messenger = pn_messenger(NULL);


pn_messenger_set_outgoing_window(messenger, 1);
pn_messenger_start(messenger);
while(1) {
sendMessage(messenger);
printf("Sent message\n");
sleep(1);
}

// release messenger resources


pn_messenger_stop(messenger);
pn_messenger_free(messenger);

return 0;
}

6. Compile the file, assuming gcc:

gcc sender.c -o sender -lqpid-proton

NOTE
In this code, we use an outgoing window of 1 to force the messages out as soon as possible. In general, your
application should try to batch messages to increase throughput. See Qpid AMQP Messenger page for more
information about how to use the Qpid Proton library in this and other environments, and from platforms for which
bindings are provided (currently Perl, PHP, Python, and Ruby).

Next steps
You can learn more about Event Hubs by visiting the following links:
Event Hubs overview
Create an Event Hub
Event Hubs FAQ
Get started receiving messages with the Event Processor Host in .NET Standard

NOTE
This sample is available on GitHub.

This tutorial shows how to write a .NET Core console application that receives messages from an Event Hub by
using EventProcessorHost. You can run the GitHub solution as-is, replacing the strings with your Event Hub and
storage account values. Or you can follow the steps in this tutorial to create your own.

Prerequisites
Microsoft Visual Studio 2015 or 2017. The examples in this tutorial use Visual Studio 2015, but Visual Studio
2017 is also supported.
.NET Core Visual Studio 2015 or 2017 tools.
An Azure subscription.
An Azure Event Hubs namespace.
An Azure storage account.

Create an Event Hubs namespace and an Event Hub


The first step is to use the Azure portal to create a namespace of type Event Hubs, and obtain the management
credentials that your application needs to communicate with the Event Hub. To create a namespace and Event Hub,
follow the procedure in this article, and then proceed with the following steps.

Create an Azure storage account


1. Sign in to the Azure portal.
2. In the left navigation pane of the portal, click New, click Storage, and then click Storage Account.
3. Complete the fields in the storage account blade, and then click Create.
4. After you see the Deployments Succeeded message, click the name of the new storage account. In the
Essentials blade, click Blobs. When the Blob service blade opens, click + Container at the top. Give the
container a name, and then close the Blob service blade.
5. Click Access keys in the left blade and copy the name of the storage container, the storage account, and the
value of key1. Save these values to Notepad or some other temporary location.

Create a console application


1. Start Visual Studio. From the File menu, click New, and then click Project. Create a .NET Core console
application.
2. In Solution Explorer, double-click the project.json file to open it in the Visual Studio editor.
3. Add the string "portable-net45+win8" to the "imports" declaration, within the "frameworks" section. That
section should now appear as follows. This string is necessary due to the Azure Storage dependency on
OData:

"frameworks": {
"netcoreapp1.0": {
"imports": [
"dnxcore50",
"portable-net45+win8"
]
}
}

4. From the File menu, click Save All.


Note that this tutorial shows how to write a .NET Core application. If you want to target the full .NET Framework,
add the following line of code to the project.json file, in the "frameworks" section:

"net451": {
},

Add the Event Hubs NuGet package


Add the following NuGet packages to the project:
Microsoft.Azure.EventHubs
Microsoft.Azure.EventHubs.Processor

Implement the IEventProcessor interface


1. In Solution Explorer, right-click the project, click Add, and then click Class. Name the new class
SimpleEventProcessor.
2. Open the SimpleEventProcessor.cs file and add the following using statements to the top of the file.

using System.Text;
using Microsoft.Azure.EventHubs;
using Microsoft.Azure.EventHubs.Processor;

3. Implement the IEventProcessor interface. Replace the entire contents of the SimpleEventProcessor class
with the following code:

public class SimpleEventProcessor : IEventProcessor


{
public Task CloseAsync(PartitionContext context, CloseReason reason)
{
Console.WriteLine($"Processor Shutting Down. Partition '{context.PartitionId}', Reason:
'{reason}'.");
return Task.CompletedTask;
}

public Task OpenAsync(PartitionContext context)


{
Console.WriteLine($"SimpleEventProcessor initialized. Partition: '{context.PartitionId}'");
return Task.CompletedTask;
}

public Task ProcessErrorAsync(PartitionContext context, Exception error)


{
Console.WriteLine($"Error on Partition: {context.PartitionId}, Error: {error.Message}");
return Task.CompletedTask;
}

public Task ProcessEventsAsync(PartitionContext context, IEnumerable<EventData> messages)


{
foreach (var eventData in messages)
{
var data = Encoding.UTF8.GetString(eventData.Body.Array, eventData.Body.Offset, eventData.Body.Count);
Console.WriteLine($"Message received. Partition: '{context.PartitionId}', Data: '{data}'");
}

return context.CheckpointAsync();
}
}

Write a main console method that uses the SimpleEventProcessor class to receive messages
1. Add the following using statements to the top of the Program.cs file.

using Microsoft.Azure.EventHubs;
using Microsoft.Azure.EventHubs.Processor;

2. Add constants to the Program class for the Event Hubs connection string, Event Hub name, storage account
container name, storage account name, and storage account key. Add the following code, replacing the
placeholders with their corresponding values.
private const string EhConnectionString = "{Event Hubs connection string}";
private const string EhEntityPath = "{Event Hub path/name}";
private const string StorageContainerName = "{Storage account container name}";
private const string StorageAccountName = "{Storage account name}";
private const string StorageAccountKey = "{Storage account key}";

private static readonly string StorageConnectionString =


string.Format("DefaultEndpointsProtocol=https;AccountName={0};AccountKey={1}", StorageAccountName,
StorageAccountKey);

3. Add a new method named MainAsync to the Program class, as follows:

private static async Task MainAsync(string[] args)


{
Console.WriteLine("Registering EventProcessor...");

var eventProcessorHost = new EventProcessorHost(


EhEntityPath,
PartitionReceiver.DefaultConsumerGroupName,
EhConnectionString,
StorageConnectionString,
StorageContainerName);

// Registers the Event Processor Host and starts receiving messages


await eventProcessorHost.RegisterEventProcessorAsync<SimpleEventProcessor>();

Console.WriteLine("Receiving. Press ENTER to stop worker.");


Console.ReadLine();

// Disposes of the Event Processor Host


await eventProcessorHost.UnregisterEventProcessorAsync();
}

4. Add the following line of code to the Main method:

MainAsync(args).GetAwaiter().GetResult();

Here is what your Program.cs file should look like:


namespace SampleEphReceiver
{

public class Program


{
private const string EhConnectionString = "{Event Hubs connection string}";
private const string EhEntityPath = "{Event Hub path/name}";
private const string StorageContainerName = "{Storage account container name}";
private const string StorageAccountName = "{Storage account name}";
private const string StorageAccountKey = "{Storage account key}";

private static readonly string StorageConnectionString =


string.Format("DefaultEndpointsProtocol=https;AccountName={0};AccountKey={1}", StorageAccountName,
StorageAccountKey);

public static void Main(string[] args)


{
MainAsync(args).GetAwaiter().GetResult();
}

private static async Task MainAsync(string[] args)


{
Console.WriteLine("Registering EventProcessor...");

var eventProcessorHost = new EventProcessorHost(


EhEntityPath,
PartitionReceiver.DefaultConsumerGroupName,
EhConnectionString,
StorageConnectionString,
StorageContainerName);

// Registers the Event Processor Host and starts receiving messages


await eventProcessorHost.RegisterEventProcessorAsync<SimpleEventProcessor>();

Console.WriteLine("Receiving. Press ENTER to stop worker.");


Console.ReadLine();

// Disposes of the Event Processor Host


await eventProcessorHost.UnregisterEventProcessorAsync();
}
}
}

5. Run the program, and ensure that there are no errors.


Congratulations! You have now received messages from an Event Hub by using the Event Processor Host.

Next steps
You can learn more about Event Hubs by visiting the following links:
Event Hubs overview
Create an Event Hub
Event Hubs FAQ
Receive events from Azure Event Hubs using the .NET Framework

Introduction
Event Hubs is a service that processes large amounts of event data (telemetry) from connected devices and
applications. After you collect data into Event Hubs, you can store the data using a storage cluster or transform it
using a real-time analytics provider. This large-scale event collection and processing capability is a key component
of modern application architectures including the Internet of Things (IoT).
This tutorial shows how to write a .NET Framework console application that receives messages from an Event Hub
using the Event Processor Host. To send events using the .NET Framework, see the Send events to Azure Event
Hubs using the .NET Framework article, or click the appropriate sending language in the left-hand table of
contents.
The Event Processor Host is a .NET class that simplifies receiving events from Event Hubs by managing persistent
checkpoints and parallel receives from those Event Hubs. Using the Event Processor Host, you can split events
across multiple receivers, even when hosted in different nodes. This example shows how to use the Event
Processor Host for a single receiver. The Scale out event processing sample shows how to use the Event Processor
Host with multiple receivers.

Prerequisites
To complete this tutorial, you'll need the following:
Microsoft Visual Studio 2015 or higher. The screenshots in this tutorial use Visual Studio 2017.
An active Azure account. If you don't have one, you can create a free account in just a couple of minutes. For
details, see Azure Free Trial.

Create an Event Hubs namespace and an Event Hub


The first step is to use the Azure portal to create a namespace of type Event Hubs, and obtain the management
credentials your application needs to communicate with the Event Hub. To create a namespace and Event Hub,
follow the procedure in this article, then proceed with the following steps.

Create an Azure Storage account


To use the Event Processor Host, you must have an Azure Storage account:
1. Log on to the Azure portal, and click New at the top left of the screen.
2. Click Storage, then click Storage account.
3. In the Create storage account blade, type a name for the storage account. Choose an Azure subscription,
resource group, and location in which to create the resource. Then click Create.
4. In the list of storage accounts, click the newly-created storage account.
5. In the storage account blade, click Access keys. Copy the value of key1 to use later in this tutorial.
6. In Visual Studio, create a new Visual C# Desktop App project using the Console Application project
template. Name the project Receiver.

7. In Solution Explorer, right-click the Receiver project, and then click Manage NuGet Packages for Solution.
8. Click the Browse tab, then search for Microsoft Azure Service Bus Event Hub - EventProcessorHost. Click
Install, and accept the terms of use.
Visual Studio downloads, installs, and adds a reference to the Azure Service Bus Event Hub -
EventProcessorHost NuGet package, with all its dependencies.
9. Right-click the Receiver project, click Add, and then click Class. Name the new class
SimpleEventProcessor, and then click Add to create the class.

10. Add the following using statements at the top of the SimpleEventProcessor.cs file:

using Microsoft.ServiceBus.Messaging;
using System.Diagnostics;

Then, substitute the following code for the body of the class:
class SimpleEventProcessor : IEventProcessor
{
Stopwatch checkpointStopWatch;

async Task IEventProcessor.CloseAsync(PartitionContext context, CloseReason reason)


{
Console.WriteLine("Processor Shutting Down. Partition '{0}', Reason: '{1}'.",
context.Lease.PartitionId, reason);
if (reason == CloseReason.Shutdown)
{
await context.CheckpointAsync();
}
}

Task IEventProcessor.OpenAsync(PartitionContext context)


{
Console.WriteLine("SimpleEventProcessor initialized. Partition: '{0}', Offset: '{1}'",
context.Lease.PartitionId, context.Lease.Offset);
this.checkpointStopWatch = new Stopwatch();
this.checkpointStopWatch.Start();
return Task.FromResult<object>(null);
}

async Task IEventProcessor.ProcessEventsAsync(PartitionContext context, IEnumerable<EventData>


messages)
{
foreach (EventData eventData in messages)
{
string data = Encoding.UTF8.GetString(eventData.GetBytes());

Console.WriteLine(string.Format("Message received. Partition: '{0}', Data: '{1}'",
    context.Lease.PartitionId, data));
}

// Call checkpoint every 5 minutes, so that the worker can resume processing
// from 5 minutes back if it restarts.
if (this.checkpointStopWatch.Elapsed > TimeSpan.FromMinutes(5))
{
await context.CheckpointAsync();
this.checkpointStopWatch.Restart();
}
}
}

This class will be called by the EventProcessorHost to process events received from the Event Hub. Note
that the SimpleEventProcessor class uses a stopwatch to periodically call the checkpoint method on the
EventProcessorHost context. This ensures that, if the receiver is restarted, it will lose no more than five
minutes of processing work.
11. In the Program class, add the following using statement at the top of the file:

using Microsoft.ServiceBus.Messaging;

Then, replace the Main method in the Program class with the following code, substituting the Event Hub
name and the namespace-level connection string that you saved previously, and the storage account and
key that you copied in the previous sections.
static void Main(string[] args)
{
string eventHubConnectionString = "{Event Hubs namespace connection string}";
string eventHubName = "{Event Hub name}";
string storageAccountName = "{storage account name}";
string storageAccountKey = "{storage account key}";
string storageConnectionString = string.Format("DefaultEndpointsProtocol=https;AccountName={0};AccountKey={1}", storageAccountName, storageAccountKey);

string eventProcessorHostName = Guid.NewGuid().ToString();


EventProcessorHost eventProcessorHost = new EventProcessorHost(eventProcessorHostName, eventHubName,
EventHubConsumerGroup.DefaultGroupName, eventHubConnectionString, storageConnectionString);
Console.WriteLine("Registering EventProcessor...");
var options = new EventProcessorOptions();
options.ExceptionReceived += (sender, e) => { Console.WriteLine(e.Exception); };
eventProcessorHost.RegisterEventProcessorAsync<SimpleEventProcessor>(options).Wait();

Console.WriteLine("Receiving. Press enter key to stop worker.");


Console.ReadLine();
eventProcessorHost.UnregisterEventProcessorAsync().Wait();
}

12. Run the program, and ensure that there are no errors.
Congratulations! You have now received messages from an Event Hub using the Event Processor Host.

NOTE
This tutorial uses a single instance of EventProcessorHost. To increase throughput, it is recommended that you run multiple
instances of EventProcessorHost, as shown in the Scaled out event processing sample. In those cases, the various instances
automatically coordinate with each other to load balance the received events. If you want multiple receivers to each process
all the events, you must use the ConsumerGroup concept. When receiving events from different machines, it might be useful
to specify names for EventProcessorHost instances based on the machines (or roles) in which they are deployed. For more
information about these topics, see the Event Hubs overview and Event Hubs programming guide topics.

Next steps
Now that you've built a working application that creates an Event Hub and sends and receives data, you can learn
more by visiting the following links:
Event Processor Host
Event Hubs overview
Event Hubs FAQ
Receive events from Azure Event Hubs using Java
2/2/2017 4 min to read Edit on GitHub

Introduction
Event Hubs is a highly scalable ingestion system that can ingest millions of events per second, enabling an
application to process and analyze the massive amounts of data produced by your connected devices and
applications. Once collected into Event Hubs, you can transform and store the data using any real-time analytics
provider or storage cluster.
For more information, see the Event Hubs overview.
This tutorial shows how to receive events from an Event Hub using a console application written in Java.
In order to complete this tutorial, you will need the following:
A Java development environment. For this tutorial, we will assume Eclipse.
An active Azure account.
If you don't have an account, you can create a free account in just a couple of minutes. For details, see Azure
Free Trial.

Receive messages with EventProcessorHost in Java


EventProcessorHost is a Java class that simplifies receiving events from Event Hubs by managing persistent
checkpoints and parallel receives from those Event Hubs. Using EventProcessorHost you can split events across
multiple receivers, even when hosted in different nodes. This example shows how to use EventProcessorHost for a
single receiver.
Create a storage account
In order to use EventProcessorHost, you must have an Azure Storage account:
1. Log on to the Azure classic portal, and click NEW at the bottom of the screen.
2. Click Data Services, then Storage, then Quick Create, and then type a name for your storage account.
Select your desired region, and then click Create Storage Account.
3. Click the newly created storage account, and then click Manage Access Keys:
Copy the primary access key to use later in this tutorial.
Create a Java project using the EventProcessor Host
The Java client library for Event Hubs is available for use in Maven projects from the Maven Central Repository,
and can be referenced using the following dependency declaration inside your Maven project file:

<dependency>
<groupId>com.microsoft.azure</groupId>
<artifactId>azure-eventhubs</artifactId>
<version>{VERSION}</version>
</dependency>
<dependency>
<groupId>com.microsoft.azure</groupId>
<artifactId>azure-eventhubs-eph</artifactId>
<version>{VERSION}</version>
</dependency>

For different types of build environments, you can explicitly obtain the latest released JAR files from the Maven
Central Repository or from the release distribution point on GitHub.
1. For the following sample, first create a new Maven project for a console/shell application in your favorite
Java development environment, then create a class called ErrorNotificationHandler using the following code:
import java.util.function.Consumer;
import com.microsoft.azure.eventprocessorhost.ExceptionReceivedEventArgs;

public class ErrorNotificationHandler implements Consumer<ExceptionReceivedEventArgs>
{
    @Override
    public void accept(ExceptionReceivedEventArgs t)
    {
        System.out.println("SAMPLE: Host " + t.getHostname() + " received general error notification during " + t.getAction() + ": " + t.getException().toString());
    }
}

2. Use the following code to create a new class called EventProcessor.

import com.microsoft.azure.eventhubs.EventData;
import com.microsoft.azure.eventprocessorhost.CloseReason;
import com.microsoft.azure.eventprocessorhost.IEventProcessor;
import com.microsoft.azure.eventprocessorhost.PartitionContext;

public class EventProcessor implements IEventProcessor
{
    private int checkpointBatchingCount = 0;

    @Override
    public void onOpen(PartitionContext context) throws Exception
    {
        System.out.println("SAMPLE: Partition " + context.getPartitionId() + " is opening");
    }

    @Override
    public void onClose(PartitionContext context, CloseReason reason) throws Exception
    {
        System.out.println("SAMPLE: Partition " + context.getPartitionId() + " is closing for reason " + reason.toString());
    }

    @Override
    public void onError(PartitionContext context, Throwable error)
    {
        System.out.println("SAMPLE: Partition " + context.getPartitionId() + " onError: " + error.toString());
    }

    @Override
    public void onEvents(PartitionContext context, Iterable<EventData> messages) throws Exception
    {
        System.out.println("SAMPLE: Partition " + context.getPartitionId() + " got message batch");
        int messageCount = 0;
        for (EventData data : messages)
        {
            System.out.println("SAMPLE (" + context.getPartitionId() + "," +
                data.getSystemProperties().getOffset() + "," +
                data.getSystemProperties().getSequenceNumber() + "): " +
                new String(data.getBody(), "UTF8"));
            messageCount++;

            this.checkpointBatchingCount++;
            if ((checkpointBatchingCount % 5) == 0)
            {
                System.out.println("SAMPLE: Partition " + context.getPartitionId() + " checkpointing at " +
                    data.getSystemProperties().getOffset() + "," +
                    data.getSystemProperties().getSequenceNumber());
                context.checkpoint(data);
            }
        }
        System.out.println("SAMPLE: Partition " + context.getPartitionId() + " batch size was " + messageCount + " for host " + context.getOwner());
    }
}

3. Create one final class called EventProcessorSample, using the following code.

import java.util.concurrent.ExecutionException;
import com.microsoft.azure.eventprocessorhost.*;
import com.microsoft.azure.servicebus.ConnectionStringBuilder;

public class EventProcessorSample
{
    public static void main(String args[])
    {
        final String consumerGroupName = "$Default";
        final String namespaceName = "----ServiceBusNamespaceName-----";
        final String eventHubName = "----EventHubName-----";
        final String sasKeyName = "-----SharedAccessSignatureKeyName-----";
        final String sasKey = "---SharedAccessSignatureKey----";

        final String storageAccountName = "---StorageAccountName----";
        final String storageAccountKey = "---StorageAccountKey----";
        final String storageConnectionString = "DefaultEndpointsProtocol=https;AccountName=" + storageAccountName + ";AccountKey=" + storageAccountKey;

        ConnectionStringBuilder eventHubConnectionString = new ConnectionStringBuilder(namespaceName, eventHubName, sasKeyName, sasKey);

        EventProcessorHost host = new EventProcessorHost(eventHubName, consumerGroupName, eventHubConnectionString.toString(), storageConnectionString);

        System.out.println("Registering host named " + host.getHostName());
        EventProcessorOptions options = new EventProcessorOptions();
        options.setExceptionNotification(new ErrorNotificationHandler());
        try
        {
            host.registerEventProcessor(EventProcessor.class, options).get();
        }
        catch (Exception e)
        {
            System.out.print("Failure while registering: ");
            if (e instanceof ExecutionException)
            {
                Throwable inner = e.getCause();
                System.out.println(inner.toString());
            }
            else
            {
                System.out.println(e.toString());
            }
        }

        System.out.println("Press enter to stop");
        try
        {
            System.in.read();
            host.unregisterEventProcessor();

            System.out.println("Calling forceExecutorShutdown");
            EventProcessorHost.forceExecutorShutdown(120);
        }
        catch (Exception e)
        {
            System.out.println(e.toString());
            e.printStackTrace();
        }

        System.out.println("End of sample");
    }
}

4. Replace the following fields with the values used when you created the Event Hub and storage account.

final String namespaceName = "----ServiceBusNamespaceName-----";
final String eventHubName = "----EventHubName-----";
final String sasKeyName = "-----SharedAccessSignatureKeyName-----";
final String sasKey = "---SharedAccessSignatureKey----";

final String storageAccountName = "---StorageAccountName----";
final String storageAccountKey = "---StorageAccountKey----";

NOTE
This tutorial uses a single instance of EventProcessorHost. To increase throughput, it is recommended that you run multiple
instances of EventProcessorHost. In those cases, the various instances automatically coordinate with each other in order to
load balance the received events. If you want multiple receivers to each process all the events, you must use the
ConsumerGroup concept. When receiving events from different machines, it might be useful to specify names for
EventProcessorHost instances based on the machines (or roles) in which they are deployed.

Next steps
You can learn more about Event Hubs by visiting the following links:
Event Hubs overview
Create an Event Hub
Event Hubs FAQ
Receive events from Azure Event Hubs using Apache Storm
2/2/2017 4 min to read Edit on GitHub

Apache Storm is a distributed real-time computation system that simplifies reliable processing of unbounded
streams of data. This section shows how to use an Event Hubs Storm spout to receive events from Event Hubs.
Using Apache Storm, you can split events across multiple processes hosted in different nodes. The Event Hubs
integration with Storm simplifies event consumption by transparently checkpointing its progress using Storm's
Zookeeper installation, managing persistent checkpoints and parallel receives from Event Hubs.
For more information about Event Hubs receive patterns, see the Event Hubs overview.
This tutorial uses an HDInsight Storm installation, which comes with the Event Hubs spout already available.
1. Follow the HDInsight Storm - Get Started procedure to create a new HDInsight cluster, and connect to it via
Remote Desktop.
2. Copy the %STORM_HOME%\examples\eventhubspout\eventhubs-storm-spout-0.9-jar-with-dependencies.jar file to your
local development environment. This file contains the Event Hubs Storm spout.
3. Use the following command to install the package into the local Maven store. This enables you to add it as a
reference in the Storm project in a later step.

mvn install:install-file -Dfile=target\eventhubs-storm-spout-0.9-jar-with-dependencies.jar -DgroupId=com.microsoft.eventhubs -DartifactId=eventhubs-storm-spout -Dversion=0.9 -Dpackaging=jar
4. In Eclipse, create a new Maven project (click File, then New, then Project).
5. Select Use default Workspace location, then click Next.
6. Select the maven-archetype-quickstart archetype, then click Next.
7. Insert a GroupId and ArtifactId, then click Finish.
8. In pom.xml, add the following dependencies in the <dependencies> node.

<dependency>
    <groupId>org.apache.storm</groupId>
    <artifactId>storm-core</artifactId>
    <version>0.9.2-incubating</version>
    <scope>provided</scope>
</dependency>
<dependency>
    <groupId>com.microsoft.eventhubs</groupId>
    <artifactId>eventhubs-storm-spout</artifactId>
    <version>0.9</version>
</dependency>
<dependency>
    <groupId>com.netflix.curator</groupId>
    <artifactId>curator-framework</artifactId>
    <version>1.3.3</version>
    <exclusions>
        <exclusion>
            <groupId>log4j</groupId>
            <artifactId>log4j</artifactId>
        </exclusion>
        <exclusion>
            <groupId>org.slf4j</groupId>
            <artifactId>slf4j-log4j12</artifactId>
        </exclusion>
    </exclusions>
    <scope>provided</scope>
</dependency>

9. In the src folder, create a file called Config.properties and copy the following content, substituting the
{receive rule key} and {event hub name} values:

eventhubspout.username = ReceiveRule
eventhubspout.password = {receive rule key}
eventhubspout.namespace = ioteventhub-ns
eventhubspout.entitypath = {event hub name}
eventhubspout.partitions.count = 16

# if not provided, will use storm's zookeeper settings
# zookeeper.connectionstring=localhost:2181

eventhubspout.checkpoint.interval = 10
eventhub.receiver.credits = 10

The value for eventhub.receiver.credits determines how many events are batched before releasing them to the
Storm pipeline. For the sake of simplicity, this example sets this value to 10. In production, it should
usually be set to higher values; for example, 1024.

10. Create a new class called LoggerBolt with the following code:

import java.util.Map;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import backtype.storm.task.OutputCollector;
import backtype.storm.task.TopologyContext;
import backtype.storm.topology.OutputFieldsDeclarer;
import backtype.storm.topology.base.BaseRichBolt;
import backtype.storm.tuple.Tuple;

public class LoggerBolt extends BaseRichBolt {
    private OutputCollector collector;
    private static final Logger logger = LoggerFactory.getLogger(LoggerBolt.class);

    @Override
    public void execute(Tuple tuple) {
        String value = tuple.getString(0);
        logger.info("Tuple value: " + value);
        collector.ack(tuple);
    }

    @Override
    public void prepare(Map map, TopologyContext context, OutputCollector collector) {
        this.collector = collector;
    }

    @Override
    public void declareOutputFields(OutputFieldsDeclarer declarer) {
        // no output fields
    }
}

This Storm bolt logs the content of the received events. It can easily be extended to store tuples in a
storage service. The HDInsight sensor analysis tutorial uses this same approach to store data into HBase.
11. Create a class called LogTopology with the following code:

import java.io.FileReader;
import java.util.Properties;
import backtype.storm.Config;
import backtype.storm.LocalCluster;
import backtype.storm.StormSubmitter;
import backtype.storm.generated.StormTopology;
import backtype.storm.topology.TopologyBuilder;
import com.microsoft.eventhubs.samples.EventCount;
import com.microsoft.eventhubs.spout.EventHubSpout;
import com.microsoft.eventhubs.spout.EventHubSpoutConfig;

public class LogTopology {
    protected EventHubSpoutConfig spoutConfig;
    protected int numWorkers;

    protected void readEHConfig(String[] args) throws Exception {
        Properties properties = new Properties();
        if (args.length > 1) {
            properties.load(new FileReader(args[1]));
        } else {
            properties.load(EventCount.class.getClassLoader()
                .getResourceAsStream("Config.properties"));
        }

        String username = properties.getProperty("eventhubspout.username");
        String password = properties.getProperty("eventhubspout.password");
        String namespaceName = properties.getProperty("eventhubspout.namespace");
        String entityPath = properties.getProperty("eventhubspout.entitypath");
        String zkEndpointAddress = properties.getProperty("zookeeper.connectionstring"); // optional
        int partitionCount = Integer.parseInt(properties
            .getProperty("eventhubspout.partitions.count"));
        int checkpointIntervalInSeconds = Integer.parseInt(properties
            .getProperty("eventhubspout.checkpoint.interval"));
        int receiverCredits = Integer.parseInt(properties
            .getProperty("eventhub.receiver.credits")); // prefetch count (optional)

        System.out.println("Eventhub spout config: ");
        System.out.println("  partition count: " + partitionCount);
        System.out.println("  checkpoint interval: " + checkpointIntervalInSeconds);
        System.out.println("  receiver credits: " + receiverCredits);

        spoutConfig = new EventHubSpoutConfig(username, password,
            namespaceName, entityPath, partitionCount, zkEndpointAddress,
            checkpointIntervalInSeconds, receiverCredits);

        // Set the number of workers to be the same as the partition number.
        // The idea is to have a spout and a logger bolt co-exist in one
        // worker to avoid shuffling messages across workers in the Storm cluster.
        numWorkers = spoutConfig.getPartitionCount();

        if (args.length > 0) {
            // Set the topology name so that the sample Trident topology
            // can use it as the stream name.
            spoutConfig.setTopologyName(args[0]);
        }
    }

    protected StormTopology buildTopology() {
        TopologyBuilder topologyBuilder = new TopologyBuilder();

        EventHubSpout eventHubSpout = new EventHubSpout(spoutConfig);
        topologyBuilder.setSpout("EventHubsSpout", eventHubSpout,
                spoutConfig.getPartitionCount()).setNumTasks(
                spoutConfig.getPartitionCount());
        topologyBuilder
                .setBolt("LoggerBolt", new LoggerBolt(),
                        spoutConfig.getPartitionCount())
                .localOrShuffleGrouping("EventHubsSpout")
                .setNumTasks(spoutConfig.getPartitionCount());
        return topologyBuilder.createTopology();
    }

    protected void runScenario(String[] args) throws Exception {
        boolean runLocal = true;
        readEHConfig(args);
        StormTopology topology = buildTopology();
        Config config = new Config();
        config.setDebug(false);

        if (runLocal) {
            config.setMaxTaskParallelism(2);
            LocalCluster localCluster = new LocalCluster();
            localCluster.submitTopology("test", config, topology);
            Thread.sleep(5000000);
            localCluster.shutdown();
        } else {
            config.setNumWorkers(numWorkers);
            StormSubmitter.submitTopology(args[0], config, topology);
        }
    }

    public static void main(String[] args) throws Exception {
        LogTopology topology = new LogTopology();
        topology.runScenario(args);
    }
}

This class creates a new Event Hubs spout, using the properties in the configuration file to instantiate it. It is
important to note that this example creates as many spout tasks as the number of partitions in the Event
Hub, in order to use the maximum parallelism allowed by that Event Hub.

Next steps
You can learn more about Event Hubs by visiting the following links:
Event Hubs overview
Create an Event Hub
Event Hubs FAQ
Event Hubs programming guide
2/24/2017 10 min to read Edit on GitHub

This article discusses some common scenarios in writing code using Azure Event Hubs and the Azure .NET SDK. It
assumes a preliminary understanding of Event Hubs. For a conceptual overview of Event Hubs, see the Event Hubs
overview.

Event publishers
You send events to an Event Hub either using HTTP POST or via an AMQP 1.0 connection. The choice of which to
use depends on the specific scenario being addressed. AMQP 1.0 connections are metered as brokered
connections in Service Bus and are more appropriate in scenarios with higher message volumes and
lower latency requirements, as they provide a persistent messaging channel.
You create and manage Event Hubs using the NamespaceManager class. When using the .NET managed APIs, the
primary constructs for publishing data to Event Hubs are the EventHubClient and EventData classes.
EventHubClient provides the AMQP communication channel over which events are sent to the Event Hub. The
EventData class represents an event, and is used to publish messages to an Event Hub. This class includes the body,
some metadata, and header information about the event. Other properties are added to the EventData object as it
passes through an Event Hub.

Get started
The .NET classes that support Event Hubs are provided in the Microsoft.ServiceBus.dll assembly. The easiest way to
reference the Service Bus API and to configure your application with all of the Service Bus dependencies is to
download the Service Bus NuGet package. Alternatively, you can use the Package Manager Console in Visual
Studio. To do so, issue the following command in the Package Manager Console window:

Install-Package WindowsAzure.ServiceBus

Create an Event Hub
You can use the NamespaceManager class to create Event Hubs. For example:

var manager = new Microsoft.ServiceBus.NamespaceManager("mynamespace.servicebus.windows.net");
var description = manager.CreateEventHub("MyEventHub");

In most cases, it is recommended that you use the CreateEventHubIfNotExists methods to avoid generating
exceptions if the service restarts. For example:

var description = manager.CreateEventHubIfNotExists("MyEventHub");

All Event Hubs creation operations, including CreateEventHubIfNotExists, require Manage permissions on the
namespace in question. If you want to limit the permissions of your publisher or consumer applications, you can
avoid these create operation calls in production code when you use credentials with limited permissions.
The EventHubDescription class contains details about an Event Hub, including the authorization rules, the message
retention interval, partition IDs, status, and path. You can use this class to update the metadata on an Event Hub.
Create an Event Hubs client
The primary class for interacting with Event Hubs is Microsoft.ServiceBus.Messaging.EventHubClient. This class
provides both sender and receiver capabilities. You can instantiate this class using the Create method, as shown in
the following example.

var client = EventHubClient.Create(description.Path);

This method uses the Service Bus connection information in the App.config file, in the appSettings section. For an
example of the appSettings XML used to store the Service Bus connection information, see the documentation for
the Microsoft.ServiceBus.Messaging.EventHubClient.Create(System.String) method.
Another option is to create the client from a connection string. This option works well when using Azure worker
roles, because you can store the string in the configuration properties for the worker. For example:

EventHubClient.CreateFromConnectionString("your_connection_string");

The connection string will be in the same format as it appears in the App.config file for the previous methods:

Endpoint=sb://[namespace].servicebus.windows.net/;SharedAccessKeyName=RootManageSharedAccessKey;SharedAccessKey=[key]

Finally, it is also possible to create an EventHubClient object from a MessagingFactory instance, as shown in the
following example.

var factory = MessagingFactory.CreateFromConnectionString("your_connection_string");
var client = factory.CreateEventHubClient("MyEventHub");

It is important to note that additional EventHubClient objects created from a messaging factory instance will reuse
the same underlying TCP connection. Therefore, these objects have a client-side limit on throughput. The Create
method reuses a single messaging factory. If you need very high throughput from a single sender, you can
create multiple messaging factories and one EventHubClient object from each messaging factory.
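To illustrate, here is a minimal sketch of that pattern; the connection string, hub name, and factory count of four are placeholder assumptions:

// Each MessagingFactory owns its own TCP connection, so one EventHubClient
// per factory avoids the shared-connection throughput ceiling.
var connectionString = "your_connection_string";
var clients = new List<EventHubClient>();
for (int i = 0; i < 4; i++)
{
    var senderFactory = MessagingFactory.CreateFromConnectionString(connectionString);
    clients.Add(senderFactory.CreateEventHubClient("MyEventHub"));
}
// Round-robin sends across 'clients' so no single connection becomes the bottleneck.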

Send events to an Event Hub

You send events to an Event Hub by creating an EventData instance and sending it via the Send method. This
method takes a single EventData instance parameter and synchronously sends it to an Event Hub.
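As a minimal sketch, using the client created earlier (the message text is a placeholder):

// Encode the payload as a byte array; EventData also accepts streams and serialized objects.
var eventData = new EventData(Encoding.UTF8.GetBytes("My first event"));
// Send blocks until the Event Hub has accepted the event.
client.Send(eventData);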

Event serialization
The EventData class has four overloaded constructors that take a variety of parameters, such as an object and
serializer, a byte array, or a stream. It is also possible to instantiate the EventData class and set the body stream
afterwards. When using JSON with EventData, you can use Encoding.UTF8.GetBytes() to retrieve the byte array
for a JSON-encoded string.
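For example, a JSON payload might be published as follows; the hand-built JSON string and its field names are purely illustrative:

// Build a JSON document, encode it as UTF-8 bytes, and wrap it in EventData.
string json = "{\"DeviceId\":42,\"Temperature\":98.6}";
var jsonEvent = new EventData(Encoding.UTF8.GetBytes(json));
client.Send(jsonEvent);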

Partition key
The EventData class has a PartitionKey property that enables the sender to specify a value that is hashed to
produce a partition assignment. Using a partition key ensures that all the events with the same key are sent to the
same partition in the Event Hub. Common partition keys include user session IDs and unique sender IDs. The
PartitionKey property is optional and can be provided when using the
Microsoft.ServiceBus.Messaging.EventHubClient.Send(Microsoft.ServiceBus.Messaging.EventData) or
Microsoft.ServiceBus.Messaging.EventHubClient.SendAsync(Microsoft.ServiceBus.Messaging.EventData) methods.
If you do not provide a value for PartitionKey, sent events are distributed to partitions using a round-robin model.
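A sketch of keyed publishing, assuming your application exposes a session identifier (sessionId below is a placeholder):

// Events sharing the same PartitionKey hash to the same partition and are
// therefore read back in the order in which they were sent.
var keyedEvent = new EventData(Encoding.UTF8.GetBytes("session event"))
{
    PartitionKey = sessionId // e.g., a user session ID from your application
};
client.Send(keyedEvent);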
Availability considerations
Using a partition key is optional, and you should consider carefully whether or not to use one. In many cases, using
a partition key is a good choice if event ordering is important. When you use a partition key, these partitions
require availability on a single node, and outages can occur over time; for example, when compute nodes reboot
and patch. As such, if you set a partition ID and that partition becomes unavailable for some reason, an attempt to
access the data in that partition will fail. If high availability is most important, do not specify a partition key; in that
case events will be sent to partitions using the round-robin model described previously. In this scenario, you are
making an explicit choice between availability (no partition ID) and consistency (pinning events to a partition ID).
Another consideration is handling delays in processing events. In some cases, it might be better to drop data and
retry later than to try to keep up with processing, which can potentially cause further downstream processing delays.
For example, with a stock ticker it's better to wait for complete up-to-date data, but in a live chat or VoIP scenario
you'd rather have the data quickly, even if it isn't complete.
Given these availability considerations, in these scenarios you might choose one of the following error handling
strategies:
Stop (stop reading from Event Hubs until things are fixed)
Drop (messages aren't important, drop them)
Retry (retry the messages as you see fit)
Dead letter (use a queue or another Event Hub to dead letter only the messages you couldn't process)

Batch event send operations

Sending events in batches can dramatically increase throughput. The SendBatch method takes an IEnumerable
parameter of type EventData and sends the entire batch as an atomic operation to the Event Hub.

public void SendBatch(IEnumerable<EventData> eventDataList);

Note that a single batch must not exceed the 256 KB limit of an event. Additionally, each message in the batch uses
the same publisher identity. It is the responsibility of the sender to ensure that the batch does not exceed the
maximum event size. If it does, a client Send error is generated.
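A sketch of a client-side size guard, assuming payloads is an IEnumerable<byte[]> supplied by your application; the 220 KB threshold is an illustrative safety margin (event headers also count toward the 256 KB limit), not an API constant:

var batch = new List<EventData>();
long batchBytes = 0;
foreach (var payload in payloads)
{
    // Flush the batch before it approaches the 256 KB limit.
    if (batchBytes + payload.Length > 220 * 1024)
    {
        client.SendBatch(batch);
        batch.Clear();
        batchBytes = 0;
    }
    batch.Add(new EventData(payload));
    batchBytes += payload.Length;
}
if (batch.Count > 0)
{
    client.SendBatch(batch); // Send the remainder.
}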

Send asynchronously and send at scale

You can also send events to an Event Hub asynchronously. Sending asynchronously can increase the rate at which
a client is able to send events. Both the Send and SendBatch methods are available in asynchronous versions that
return a Task object. While this technique can increase throughput, it can also cause the client to continue to send
events even while it is being throttled by the Event Hubs service, and can result in the client experiencing failures or
lost messages if not properly implemented. In addition, you can use the RetryPolicy property on the client to
control client retry options.
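A minimal sketch of asynchronous sending with an exponential retry policy; the back-off values and retry count are illustrative assumptions, not recommended settings:

// Configure exponential back-off retries on the client before sending.
client.RetryPolicy = new RetryExponential(
    TimeSpan.FromSeconds(1),   // minimum back-off
    TimeSpan.FromSeconds(30),  // maximum back-off
    5);                        // maximum retry count

// Awaiting the send surfaces throttling as an exception you can handle,
// instead of letting unobserved sends pile up while the service pushes back.
await client.SendAsync(new EventData(Encoding.UTF8.GetBytes("async event")));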

Create a partition sender

Although it is most common to send events to an Event Hub without a partition key, in some cases you might want
to send events directly to a given partition. For example:

var partitionedSender = client.CreatePartitionedSender(description.PartitionIds[0]);

CreatePartitionedSender returns an EventHubSender object that you can use to publish events to a specific Event
Hub partition.

Event consumers
Event Hubs has two primary models for event consumption: direct receivers and higher-level abstractions, such as
EventProcessorHost. Direct receivers are responsible for their own coordination of access to partitions within a
consumer group.
Direct consumer
The most direct way to read from a partition within a consumer group is to use the EventHubReceiver class. To
create an instance of this class, you must use an instance of the EventHubConsumerGroup class. In the following
example, the partition ID must be specified when creating the receiver for the consumer group.

EventHubConsumerGroup group = client.GetDefaultConsumerGroup();
var receiver = group.CreateReceiver(client.GetRuntimeInformation().PartitionIds[0]);

The CreateReceiver method has several overloads that facilitate control over the reader being created. These
methods include specifying an offset as either a string or timestamp, and the ability to specify whether to include
this specified offset in the returned stream, or start after it. After you create the receiver, you can start receiving
events on the returned object. The Receive method has four overloads that control the receive operation
parameters, such as batch size and wait time. You can use the asynchronous versions of these methods to increase
the throughput of a consumer. For example:

bool receive = true;
string myOffset;
while (receive)
{
    var message = receiver.Receive();
    myOffset = message.Offset;
    string body = Encoding.UTF8.GetString(message.GetBytes());
    Console.WriteLine(String.Format("Received message offset: {0} \nbody: {1}", myOffset, body));
}

With respect to a specific partition, the messages are received in the order in which they were sent to the Event
Hub. The offset is a string token used to identify a message in a partition.
Note that a single partition within a consumer group cannot have more than 5 concurrent readers connected at
any time. As readers connect or become disconnected, their sessions might stay active for several minutes before
the service recognizes that they have disconnected. During this time, reconnecting to a partition may fail. For a
complete example of writing a direct receiver for Event Hubs, see the Event Hubs Direct Receivers sample.
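As a sketch, a consumer can persist the offset of the last processed message and resume from it after a restart; savedOffset and the helper that loads it are hypothetical, standing in for your own durable store (for example, a blob):

// Resume reading the partition just after the last checkpointed message.
string savedOffset = LoadOffsetFromDurableStore(); // hypothetical helper
var resumedReceiver = group.CreateReceiver(partitionId, savedOffset);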
Event processor host
The EventProcessorHost class processes data from Event Hubs. You should use this implementation when building
event readers on the .NET platform. EventProcessorHost provides a thread-safe, multi-process safe runtime
environment for event processor implementations, with built-in checkpointing and partition lease
management.
To use the EventProcessorHost class, you can implement IEventProcessor. This interface contains three methods:
OpenAsync
CloseAsync
ProcessEventsAsync
To start event processing, instantiate EventProcessorHost, providing the appropriate parameters for your Event
Hub. Then, call RegisterEventProcessorAsync to register your IEventProcessor implementation with the runtime. At
this point, the host will attempt to acquire a lease on every partition in the Event Hub using a "greedy" algorithm.
These leases will last for a given timeframe and must then be renewed. As new nodes, worker instances in this
case, come online, they place lease reservations and over time the load shifts between nodes as each attempts to
acquire more leases.

Over time, an equilibrium is established. This dynamic capability enables CPU-based autoscaling to be applied to
consumers for both scale-up and scale-down. Because Event Hubs do not have a direct concept of message counts,
average CPU utilization is often the best mechanism to measure back end or consumer scale. If publishers begin to
publish more events than consumers can process, the CPU increase on consumers can be used to cause an auto-
scale on worker instance count.
The EventProcessorHost class also implements an Azure storage-based checkpointing mechanism. This
mechanism stores the offset on a per partition basis, so that each consumer can determine what the last
checkpoint from the previous consumer was. As partitions transition between nodes via leases, this is the
synchronization mechanism that facilitates load shifting.

Publisher revocation
In addition to the advanced run-time features of EventProcessorHost, Event Hubs enables publisher revocation in
order to block specific publishers from sending event to an Event Hub. These features are particularly useful if a
publisher token has been compromised, or a software update is causing them to behave inappropriately. In these
situations, the publisher's identity, which is part of their SAS token, can be blocked from publishing events.
For more information about publisher revocation and how to send to Event Hubs as a publisher, see the Event
Hubs Large Scale Secure Publishing sample.
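As a sketch, revocation is exposed through the NamespaceManager management API; the method shown here is believed to be available in Microsoft.ServiceBus, but verify the exact signature against the current reference, and the Event Hub and publisher names are placeholders:

// Block the publisher identity "device-123" from sending to MyEventHub.
var nsManager = NamespaceManager.CreateFromConnectionString("{namespace connection string}");
await nsManager.RevokePublisherAsync("MyEventHub", "device-123");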

Next steps
To learn more about Event Hubs scenarios, visit these links:
Event Hubs API overview
What is Event Hubs
Event processor host API reference
Overview of Event Hubs Dedicated
2/21/2017 2 min to read Edit on GitHub

Event Hubs Dedicated capacity offers single-tenant deployments for customers with the most demanding
requirements. At full scale, Azure Event Hubs can ingress over 2 million events per second or up to 2 GB per second
of telemetry with fully durable storage and sub-second latency. This also enables integrated solutions by
processing real-time and batch on the same system. With Event Hubs Archive included in the offering, you can
reduce the complexity of your solution by having a single stream support both real-time and batch-based pipelines.
The following table compares the available service tiers of Event Hubs. The Event Hubs Dedicated offering is a fixed
monthly price, compared to usage pricing for most features of Standard and Basic. The Dedicated tier offers the
features of the Standard plan, but with enterprise scale capacity for customers with demanding workloads.

| FEATURE | BASIC | STANDARD | DEDICATED |
| --- | --- | --- | --- |
| Ingress events | Pay per million events | Pay per million events | Included |
| Throughput unit (1 MB/sec ingress, 2 MB/sec egress) | Pay per hour | Pay per hour | Included |
| Message size | 256 KB | 256 KB | 1 MB |
| Publisher policies | N/A | Yes | Yes |
| Consumer groups | 1 - default | 20 | 20 |
| Message replay | Yes | Yes | Yes |
| Maximum throughput units | 20 | 20 (flexible to 100) | 1 CU ≈ 200 |
| Brokered connections | 100 included | 1,000 included | 100 K included |
| Additional brokered connections | N/A | Yes | Yes |
| Message retention | 1 day included | 1 day included | Up to 7 days included |
| Archive (Preview) | N/A | Pay per hour | Included |

Benefits of Event Hubs Dedicated capacity

The following benefits are available when using Event Hubs Dedicated:
Single tenant hosting with no noise from other tenants.
Message size increases to 1 MB, compared to 256 KB for Standard and Basic.
Repeatable performance every time.
Guaranteed capacity to meet your burst needs.
Scalable between 1 and 8 capacity units (CU), providing up to 2 million ingress events per second. CUs manage the scale for Event Hubs Dedicated, where each CU can provide approximately the equivalent of 200 throughput units (TU).
Zero maintenance: we manage load balancing, OS updates, security patches, and partitioning.
Fixed monthly pricing.
Event Hubs Dedicated also removes some of the throughput limitations of the Standard offering. Throughput units
in Basic and Standard tiers entitle you to 1000 events per second or 1 MBps of ingress per TU and double that
amount of egress. The Dedicated scale offering has no restrictions on ingress and egress event counts. These limits
are governed only by the processing capacity of the purchased Event Hubs.
This service is targeted at the largest telemetry users and is available to customers with an enterprise agreement.

How to onboard
The Event Hubs Dedicated platform is offered to the public through an enterprise agreement with varying sizes of
CUs. Each CU provides approximately the equivalent of 200 throughput units. You can scale your capacity up or
down throughout the month to meet your needs by adding or removing CUs. The dedicated plan is unique in that
you will experience a more hands-on onboarding from the Event Hubs product team to get the flexible deployment
that is right for you.

Next steps
Contact your Microsoft sales representative or Microsoft Support to get additional details about Event Hubs
Dedicated capacity. You can also learn more by visiting the following links:
Event Hubs Dedicated pricing, for detailed pricing information.
The Event Hubs FAQ, which contains pricing information and answers some frequently asked questions about Event
Hubs.
Event Hubs authentication and security model overview
3/7/2017 4 min to read Edit on GitHub

The Azure Event Hubs security model meets the following requirements:
Only devices that present valid credentials can send data to an Event Hub.
A device cannot impersonate another device.
A rogue device can be blocked from sending data to an Event Hub.

Device authentication
The Event Hubs security model is based on a combination of Shared Access Signature (SAS) tokens and event
publishers. An event publisher defines a virtual endpoint for an Event Hub. The publisher can only be used to send
messages to an Event Hub. It is not possible to receive messages from a publisher.
Typically, an Event Hub employs one publisher per device. All messages that are sent to any of the publishers of an
Event Hub are enqueued within that Event Hub. Publishers enable fine-grained access control and throttling.
Each device is assigned a unique token, which is uploaded to the device. The tokens are produced such that each
unique token grants access to a different unique publisher. A device that possesses a token can only send to one
publisher, but no other publisher. If multiple devices share the same token, then each of these devices shares a
publisher.
Although not recommended, it is possible to equip devices with tokens that grant direct access to an Event Hub.
Any device that holds this token can send messages directly into that Event Hub. Such a device will not be subject to
throttling. Furthermore, the device cannot be blacklisted from sending to that Event Hub.
All tokens are signed with a SAS key. Typically, all tokens are signed with the same key. Devices are not aware of
the key; this prevents devices from manufacturing tokens.
Create the SAS key
When creating an Azure Event Hubs namespace, the service generates a 256-bit SAS key named
RootManageSharedAccessKey. This key grants send, listen, and manage rights to the namespace. You can create
additional keys. It is recommended that you produce a key that grants send permissions to the specific Event Hub.
For the remainder of this topic, it is assumed that you named this key EventHubSendKey.
The following example creates a send-only key when creating the Event Hub:
// Create namespace manager.
string serviceNamespace = "YOUR_NAMESPACE";
string namespaceManageKeyName = "RootManageSharedAccessKey";
string namespaceManageKey = "YOUR_ROOT_MANAGE_SHARED_ACCESS_KEY";
Uri uri = ServiceBusEnvironment.CreateServiceUri("sb", serviceNamespace, string.Empty);
TokenProvider td = TokenProvider.CreateSharedAccessSignatureTokenProvider(namespaceManageKeyName, namespaceManageKey);
NamespaceManager nm = new NamespaceManager(uri, td);

// Create an Event Hub with a SAS rule that enables sending to that Event Hub.
EventHubDescription ed = new EventHubDescription("MY_EVENT_HUB") { PartitionCount = 32 };
string eventHubSendKeyName = "EventHubSendKey";
string eventHubSendKey = SharedAccessAuthorizationRule.GenerateRandomKey();
SharedAccessAuthorizationRule eventHubSendRule = new SharedAccessAuthorizationRule(eventHubSendKeyName, eventHubSendKey, new[] { AccessRights.Send });
ed.Authorization.Add(eventHubSendRule);
nm.CreateEventHub(ed);

Generate tokens
You can generate tokens using the SAS key. You must produce only one token per device. Tokens can then be
produced using the following method. All tokens are generated using the EventHubSendKey key. Each token is
assigned a unique URI.

public static string SharedAccessSignatureTokenProvider.GetSharedAccessSignature(string keyName, string sharedAccessKey, string resource, TimeSpan tokenTimeToLive)

When calling this method, the URI should be specified as
//<NAMESPACE>.servicebus.windows.net/<EVENT_HUB_NAME>/publishers/<PUBLISHER_NAME> . For all tokens, the URI is
identical, with the exception of PUBLISHER_NAME , which should be different for each token. Ideally, PUBLISHER_NAME
represents the ID of the device that receives that token.
This method generates a token with the following structure:

SharedAccessSignature sr={URI}&sig={HMAC_SHA256_SIGNATURE}&se={EXPIRATION_TIME}&skn={KEY_NAME}

The token expiration time is specified in seconds from Jan 1, 1970. The following is an example of a token:

SharedAccessSignature sr=contoso&sig=nPzdNN%2Gli0ifrfJwaK4mkK0RqAB%2byJUlt%2bGFmBHG77A%3d&se=1403130337&skn=RootManageSharedAccessKey

Typically, the tokens have a lifespan that resembles or exceeds the lifespan of the device. If the device has the
capability to obtain a new token, tokens with a shorter lifespan can be used.
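A sketch of producing a device token with the EventHubSendKey created earlier; the namespace, Event Hub, and device names are placeholders, and the one-year lifespan is an illustrative assumption:

// The publisher name in the URI is the device ID, so each device receives a
// token scoped to its own publisher endpoint.
string deviceId = "device-123";
string tokenUri = "//YOUR_NAMESPACE.servicebus.windows.net/MY_EVENT_HUB/publishers/" + deviceId;
string deviceToken = SharedAccessSignatureTokenProvider.GetSharedAccessSignature(
    eventHubSendKeyName, eventHubSendKey, tokenUri, TimeSpan.FromDays(365));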
Devices sending data
Once the tokens have been created, each device is provisioned with its own unique token.
When the device sends data into an Event Hub, the device tags its token with the send request. To prevent an
attacker from eavesdropping and stealing the token, the communication between the device and the Event Hub
must occur over an encrypted channel.
Blacklisting devices
If a token is stolen by an attacker, the attacker can impersonate the device whose token has been stolen. Blacklisting
a device renders the device unusable until the device is given a new token that uses a different publisher.

Authentication of back-end applications


To authenticate back-end applications that consume the data generated by devices, Event Hubs employs a security
model that is similar to the model that is used for Service Bus topics. An Event Hubs consumer group is equivalent
to a subscription to a Service Bus topic. A client can create a consumer group if the request to create the consumer
group is accompanied by a token that grants manage privileges for the Event Hub, or for the namespace to which
the Event Hub belongs. A client is allowed to consume data from a consumer group if the receive request is
accompanied by a token that grants receive rights on that consumer group, the Event Hub, or the namespace to
which the Event Hub belongs.
The current version of Service Bus does not support SAS rules for individual subscriptions. The same holds true for
Event Hubs consumer groups. SAS support will be added for both features in the future.
In the absence of SAS authentication for individual consumer groups, you can use SAS keys to secure all consumer
groups with a common key. This approach enables an application to consume data from any of the consumer
groups of an Event Hub.

Next steps
To learn more about Event Hubs, visit the following topics:
Event Hubs overview
Overview of Shared Access Signatures
Sample applications that use Event Hubs
Availability and consistency in Event Hubs
3/1/2017 2 min to read Edit on GitHub

Overview
Azure Event Hubs uses a partitioning model to improve availability and parallelization within a single Event Hub.
For example, if an Event Hub has four partitions, and one of those partitions is moved from one server to another
in a load balancing operation, you can still send and receive from three other partitions. Additionally, more
partitions enables you to have more concurrent readers processing your data, improving your aggregate
throughput. Understanding the implications of partitioning and ordering in a distributed system is a critical aspect
of solution design.
To help explain the tradeoff between ordering and availability, see the CAP theorem, also known as Brewer's
theorem. This theorem states that one must choose between consistency, availability, and partition tolerance.
Brewer's theorem defines consistency and availability as follows:
Partition tolerance - the ability of a data processing system to continue processing data even if a partition
failure occurs.
Availability - a non-failing node returns a reasonable response within a reasonable amount of time (with no
errors or timeouts).
Consistency - a read is guaranteed to return the most recent write for a given client.

Partition tolerance
Event Hubs is built on top of a partitioned data model. You can configure the number of partitions in your Event
Hub during setup, but you cannot change this value later. Since you must use partitions with Event Hubs, you only
need to make a decision regarding availability and consistency for your application.

Availability
The simplest way to get started with Event Hubs is to use the default behavior. If you create a new EventHubClient
object and use the Send method, your events are automatically distributed between partitions in your Event Hub.
This behavior allows for the greatest amount of uptime, and is the preferred model for use cases that require
maximum uptime.

Consistency
In some scenarios, the ordering of events can be important. For example, you may want your back-end system to
process an update command before a delete command. In this instance, you can either set the partition key on an
event, or use a PartitionSender object to only send events to a certain partition. Doing so ensures that when these
events are read from the partition, they are read in order.
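A sketch using a PartitionSender from the .NET Standard client; the partition ID "0" and the command payloads are arbitrary placeholders:

// Pin both commands to the same partition so they are read back in send order.
var partitionSender = eventHubClient.CreatePartitionSender("0");
await partitionSender.SendAsync(new EventData(Encoding.UTF8.GetBytes("update command")));
await partitionSender.SendAsync(new EventData(Encoding.UTF8.GetBytes("delete command")));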
With this type of configuration, you must keep in mind that if the particular partition to which you are sending is
unavailable, you will receive an error response. As a point of comparison, if you do not have an affinity to a single
partition, the Event Hubs service sends your event to the next available partition.
One possible solution to ensure ordering, while also maximizing up time, would be to aggregate events as part of
your event processing application. The easiest way to accomplish this is to stamp your event with a custom
sequence number property. The following is an example:
// Get the latest sequence number from your application
var sequenceNumber = GetNextSequenceNumber();
// Create a new EventData object by encoding a string as a byte array
var data = new EventData(Encoding.UTF8.GetBytes("This is my message..."));
// Set a custom sequence number property
data.Properties.Add("SequenceNumber", sequenceNumber);
// Send single message async
await eventHubClient.SendAsync(data);

The preceding example sends your event to one of the available partitions in your Event Hub, and sets the
corresponding sequence number from your application. This solution requires state to be kept by your processing
application, but gives your senders an endpoint that is more likely to be available.

Next steps
You can learn more about Event Hubs by visiting the following links:
Event Hubs overview
Create an Event Hub
Available Event Hubs APIs
2/2/2017 1 min to read Edit on GitHub

Runtime APIs
The following is a listing of all currently available Event Hubs runtime clients. While some of these libraries also
include limited management functionality, there are also specific libraries dedicated to management operations.
The core focus of these libraries is to send and receive messages from an Event Hub.
See additional information for more details on the current status of each runtime library.

| LANGUAGE/PLATFORM | CLIENT PACKAGE | EVENTPROCESSORHOST PACKAGE | REPOSITORY |
| --- | --- | --- | --- |
| .NET Standard | NuGet | NuGet | GitHub |
| .NET Framework | NuGet | NuGet | N/A |
| Java | Maven | Maven | GitHub |
| Node | NPM | N/A | GitHub |
| C | N/A | N/A | GitHub |

Additional information
.NET
The .NET ecosystem has multiple runtimes, hence there are multiple .NET libraries for Event Hubs. The .NET
Standard library can be run using either .NET Core or the .NET Framework, while the .NET Framework library can
only be run in a .NET Framework environment. For more information on .NET Frameworks, see framework
versions.
Node
The Node.js library is currently in preview and is maintained as a side project by Microsoft employees and external
contributors. All contributions including source code are welcome and will be reviewed.

Management APIs
The following is a listing of all currently available management specific libraries. None of these libraries contain
runtime operations, and are for the sole purpose of managing Event Hubs entities.

| LANGUAGE/PLATFORM | MANAGEMENT PACKAGE | REPOSITORY |
| --- | --- | --- |
| .NET Standard | NuGet | GitHub |

Next steps
You can learn more about Event Hubs by visiting the following links:
Event Hubs overview
Create an Event Hub
Event Hubs FAQ
Event Hubs .NET Standard API overview
2/2/2017 3 min to read Edit on GitHub

This article summarizes some of the key Event Hubs .NET Standard client APIs. There are currently two .NET
Standard client libraries:
Microsoft.Azure.EventHubs
This library provides all basic runtime operations.
Microsoft.Azure.EventHubs.Processor
This library adds additional functionality that allows for keeping track of processed events, and is the
easiest way to read from an Event Hub.

Event Hub client

EventHubClient is the primary object you use to send events, create receivers, and get runtime information.
This client is linked to a particular Event Hub, and creates a new connection to the Event Hubs endpoint.
Create an Event Hub client
An EventHubClient object is created from a connection string. The simplest way to instantiate a new client is
shown in the following example:

var eventHubClient = EventHubClient.CreateFromConnectionString("{Event Hub connection string}");

To programmatically edit the connection string, you can use the EventHubsConnectionStringBuilder class, and
pass the connection string as a parameter to EventHubClient.CreateFromConnectionString.

var connectionStringBuilder = new EventHubsConnectionStringBuilder("{Event Hub connection string}")
{
    EntityPath = EhEntityPath
};

var eventHubClient = EventHubClient.CreateFromConnectionString(connectionStringBuilder.ToString());

Send events
To send events to an Event Hub, use the EventData class. The body must be a byte array, or a byte array
segment.

// Create a new EventData object by encoding a string as a byte array
var data = new EventData(Encoding.UTF8.GetBytes("This is my message..."));
// Set user properties if needed
data.Properties.Add("Type", "Informational");
// Send single message async
await eventHubClient.SendAsync(data);

Receive events
The recommended way to receive events from Event Hubs is using the EventProcessorHost, which provides
functionality to automatically keep track of offset, and partition information. However, there are certain situations in
which you may want to use the flexibility of the core Event Hubs library to receive events.
Create a receiver
Receivers are tied to specific partitions, so in order to receive all events in an Event Hub, you will need to create
multiple instances. Generally speaking, it is a good practice to get the partition information programmatically, rather
than hard-coding the partition IDs. In order to do so, you can use the GetRuntimeInformationAsync method.

// Create a list to keep track of the receivers
var receivers = new List<PartitionReceiver>();
// Use the eventHubClient created above to get the runtime information
var runTimeInformation = await eventHubClient.GetRuntimeInformationAsync();
// Loop over the resulting partition ids
foreach (var partitionId in runTimeInformation.PartitionIds)
{
    // Create the receiver
    var receiver = eventHubClient.CreateReceiver(PartitionReceiver.DefaultConsumerGroupName, partitionId, PartitionReceiver.EndOfStream);
    // Add the receiver to the list
    receivers.Add(receiver);
}

Since events are never removed from an Event Hub (and only expire), you will need to specify the proper starting
point. The following example shows possible combinations.

// partitionId is assumed to come from GetRuntimeInformationAsync()

// Using the constant 'PartitionReceiver.EndOfStream' will only receive all messages from this point forward.
var receiver = eventHubClient.CreateReceiver(PartitionReceiver.DefaultConsumerGroupName, partitionId, PartitionReceiver.EndOfStream);

// All messages available
var receiver = eventHubClient.CreateReceiver(PartitionReceiver.DefaultConsumerGroupName, partitionId, "-1");

// From one day ago
var receiver = eventHubClient.CreateReceiver(PartitionReceiver.DefaultConsumerGroupName, partitionId, DateTime.Now.AddDays(-1));

Consume an event

// Receive a maximum of 100 messages in this call to ReceiveAsync
var ehEvents = await receiver.ReceiveAsync(100);
// ReceiveAsync can return null if there are no messages
if (ehEvents != null)
{
    // Since ReceiveAsync can return more than a single event you will need a loop to process
    foreach (var ehEvent in ehEvents)
    {
        // Decode the byte array segment
        var message = UnicodeEncoding.UTF8.GetString(ehEvent.Body.Array);
        // Load the custom property that we set in the send example
        var customType = ehEvent.Properties["Type"];
        // Implement processing logic here
    }
}

Event Processor Host APIs

These APIs provide resiliency to worker processes that may become unavailable, by distributing partitions across
available workers.

// Checkpointing is done within the SimpleEventProcessor and on a per-consumerGroup per-partition basis,
// so workers resume from where they last left off.

// Read these connection strings from a secure location
var ehConnectionString = "{Event Hubs connection string}";
var ehEntityPath = "{Event Hub path/name}";
var storageConnectionString = "{Storage connection string}";
var storageContainerName = "{Storage account container name}";

var eventProcessorHost = new EventProcessorHost(
    ehEntityPath,
    PartitionReceiver.DefaultConsumerGroupName,
    ehConnectionString,
    storageConnectionString,
    storageContainerName);

// Start/register an EventProcessorHost
await eventProcessorHost.RegisterEventProcessorAsync<SimpleEventProcessor>();

// Disposes of the Event Processor Host
await eventProcessorHost.UnregisterEventProcessorAsync();

The following is a sample implementation of the IEventProcessor.

public class SimpleEventProcessor : IEventProcessor
{
    public Task CloseAsync(PartitionContext context, CloseReason reason)
    {
        Console.WriteLine($"Processor Shutting Down. Partition '{context.PartitionId}', Reason: '{reason}'.");
        return Task.CompletedTask;
    }

    public Task OpenAsync(PartitionContext context)
    {
        Console.WriteLine($"SimpleEventProcessor initialized. Partition: '{context.PartitionId}'");
        return Task.CompletedTask;
    }

    public Task ProcessErrorAsync(PartitionContext context, Exception error)
    {
        Console.WriteLine($"Error on Partition: {context.PartitionId}, Error: {error.Message}");
        return Task.CompletedTask;
    }

    public Task ProcessEventsAsync(PartitionContext context, IEnumerable<EventData> messages)
    {
        foreach (var eventData in messages)
        {
            var data = Encoding.UTF8.GetString(eventData.Body.Array, eventData.Body.Offset, eventData.Body.Count);
            Console.WriteLine($"Message received. Partition: '{context.PartitionId}', Data: '{data}'");
        }

        return context.CheckpointAsync();
    }
}

Next steps
To learn more about Event Hubs scenarios, visit these links:
What is Azure Event Hubs?
Available Event Hubs APIs
The .NET API references are here:
Microsoft.Azure.EventHubs
Microsoft.Azure.EventHubs.Processor
Event Hubs .NET Framework API overview
2/2/2017 2 min to read Edit on GitHub

This article summarizes some of the key Event Hubs .NET Framework client APIs. There are two categories:
management and run-time APIs. Run-time APIs consist of all operations needed to send and receive a message.
Management operations enable you to manage an Event Hubs entity state by creating, updating, and deleting
entities.
Monitoring scenarios span both management and run-time. For detailed reference documentation on the .NET APIs,
see the Service Bus .NET and EventProcessorHost API references.

Management APIs
To perform the following management operations, you must have Manage permissions on the Event Hubs
namespace:
Create

// Create the Event Hub
var ehd = new EventHubDescription(eventHubName);
ehd.PartitionCount = SampleManager.numPartitions;
await namespaceManager.CreateEventHubAsync(ehd);

Update

var ehd = await namespaceManager.GetEventHubAsync(eventHubName);

// Create a custom SAS rule with Manage permissions
ehd.UserMetadata = "Some updated info";
var ruleName = "myeventhubmanagerule";
var ruleKey = SharedAccessAuthorizationRule.GenerateRandomKey();
ehd.Authorization.Add(new SharedAccessAuthorizationRule(ruleName, ruleKey, new AccessRights[] { AccessRights.Manage, AccessRights.Listen, AccessRights.Send }));
await namespaceManager.UpdateEventHubAsync(ehd);

Delete

await namespaceManager.DeleteEventHubAsync("Event Hub name");

Run-time APIs
Create publisher

// EventHubClient model (uses implicit factory instance, so all links on same connection)
var eventHubClient = EventHubClient.Create("Event Hub name");

Publish message

// Create the device/temperature metric
var info = new MetricEvent() { DeviceId = random.Next(SampleManager.NumDevices), Temperature = random.Next(100) };

// Choose one of the following EventData constructors:
var data = new EventData(new byte[10]); // Byte array
var data = new EventData(Stream); // Stream
var data = new EventData(info, serializer) // Object and serializer
{
    PartitionKey = info.DeviceId.ToString()
};

// Set user properties if needed
data.Properties.Add("Type", "Telemetry_" + DateTime.Now.ToLongTimeString());

// Send single message async
await client.SendAsync(data);

Create consumer

// Create the Event Hubs client
var eventHubClient = EventHubClient.Create(EventHubName);

// Get the default consumer group
var defaultConsumerGroup = eventHubClient.GetDefaultConsumerGroup();

// All messages
var consumer = await defaultConsumerGroup.CreateReceiverAsync(partitionId: index);

// Or from one day ago:
// var consumer = await defaultConsumerGroup.CreateReceiverAsync(partitionId: index, startingDateTimeUtc: DateTime.Now.AddDays(-1));

// Or from a specific offset; "-1" means oldest:
// var consumer = await defaultConsumerGroup.CreateReceiverAsync(partitionId: index, startingOffset: "-1");

Consume message

var message = await consumer.ReceiveAsync();

// Provide a serializer
var info = message.GetBody<Type>(serializer);

// Or get the raw bytes
var bytes = message.GetBytes();
var msg = Encoding.UTF8.GetString(bytes);

Event Processor Host APIs

These APIs provide resiliency to worker processes that may become unavailable, by distributing partitions across
available workers.

// Checkpointing is done within the SimpleEventProcessor on a per-consumerGroup, per-partition basis;
// workers resume from where they last left off.
// Use the EventData.Offset value for checkpointing yourself; this value is unique per partition.

var eventHubConnectionString =
    System.Configuration.ConfigurationManager.AppSettings["Microsoft.ServiceBus.ConnectionString"];
var blobConnectionString =
    System.Configuration.ConfigurationManager.AppSettings["AzureStorageConnectionString"]; // Required for checkpoint/state

var eventHubDescription = new EventHubDescription(EventHubName);
var host = new EventProcessorHost(WorkerName, EventHubName, defaultConsumerGroup.GroupName,
    eventHubConnectionString, blobConnectionString);
await host.RegisterEventProcessorAsync<SimpleEventProcessor>();

// To close
await host.UnregisterEventProcessorAsync();

The following class shows a sample implementation of the IEventProcessor interface:

public class SimpleEventProcessor : IEventProcessor
{
    IDictionary<string, string> map;
    PartitionContext partitionContext;

    public SimpleEventProcessor()
    {
        this.map = new Dictionary<string, string>();
    }

    public Task OpenAsync(PartitionContext context)
    {
        this.partitionContext = context;
        return Task.FromResult<object>(null);
    }

    public async Task ProcessEventsAsync(PartitionContext context, IEnumerable<EventData> messages)
    {
        foreach (EventData message in messages)
        {
            // Process messages here
        }

        // Checkpoint when appropriate
        await context.CheckpointAsync();
    }

    public async Task CloseAsync(PartitionContext context, CloseReason reason)
    {
        if (reason == CloseReason.Shutdown)
        {
            await context.CheckpointAsync();
        }
    }
}

Next steps
To learn more about Event Hubs scenarios, visit these links:
What is Azure Event Hubs?
Event Hubs programming guide
The .NET API references are here:
Microsoft.ServiceBus.Messaging
Microsoft.Azure.ServiceBus.EventProcessorHost
Event Hubs diagnostic logs
3/1/2017 2 min to read

You can view two types of logs for Azure Event Hubs:
Activity logs. These logs have information about operations performed on a resource. The logs are always turned on.
Diagnostic logs. You can configure diagnostic logs for richer insight into everything that happens with a resource.
Diagnostic logs cover activities from the time the resource is created until it is deleted, including updates and
activities that occur while it is running.

Turn on diagnostic logs

Diagnostic logs are off by default. To turn on diagnostic logs:
1. In the Azure portal, go to the blade for your Event Hubs namespace.
2. Under Monitoring, go to the Diagnostics logs blade.
3. Select Turn on diagnostics.
4. For Status, select On.
5. Set the archival target that you want, for example, a storage account, an event hub, or Azure Log Analytics.
6. Select the categories of logs that you want to collect, for example, ArchivalLogs or OperationalLogs.
7. Save the new diagnostics settings.
New settings take effect in about 10 minutes. After that, logs appear in the configured archival target, on the
Diagnostics logs blade.
For more information about configuring diagnostics, see an overview of Azure diagnostic logs.

Diagnostic logs categories


Event Hubs captures diagnostic logs for two categories:
ArchivalLogs capture the logs related to event hub archives, specifically, logs related to archive errors.
OperationalLogs capture what is happening during event hub operation, specifically, the operation type,
including event hub creation, resources used, and the status of the operation.

Diagnostic logs schema


All logs are stored in JavaScript Object Notation (JSON) format. Each entry has string fields that use the format
described in the following examples.
Archive logs schema
Archive log JSON strings include elements listed in the following table:

NAME DESCRIPTION

TaskName Description of the task that failed

ActivityId Internal ID, used for tracking

trackingId Internal ID, used for tracking

resourceId Azure Resource Manager resource ID

eventHub Event hub full name (includes namespace name)

partitionId Event hub partition being written to

archiveStep ArchiveFlushWriter

startTime Failure start time

failures Number of times failure occurred

durationInSeconds Duration of failure

message Error message

category ArchiveLogs

Here's an example of an archive log JSON string:

{
    "TaskName": "EventHubArchiveUserError",
    "ActivityId": "21b89a0b-8095-471a-9db8-d151d74ecf26",
    "trackingId": "21b89a0b-8095-471a-9db8-d151d74ecf26_B7",
    "resourceId": "/SUBSCRIPTIONS/854D368F-1828-428F-8F3C-F2AFFA9B2F7D/RESOURCEGROUPS/DEFAULT-EVENTHUB-CENTRALUS/PROVIDERS/MICROSOFT.EVENTHUB/NAMESPACES/FBETTATI-OPERA-EVENTHUB",
    "eventHub": "fbettati-opera-eventhub:eventhub:eh123~32766",
    "partitionId": "1",
    "archiveStep": "ArchiveFlushWriter",
    "startTime": "9/22/2016 5:11:21 AM",
    "failures": 3,
    "durationInSeconds": 360,
    "message": "Microsoft.WindowsAzure.Storage.StorageException: The remote server returned an error: (404) Not Found. ---> System.Net.WebException: The remote server returned an error: (404) Not Found.\r\n at Microsoft.WindowsAzure.Storage.Shared.Protocol.HttpResponseParsers.ProcessExpectedStatusCodeNoException[T](HttpStatusCode expectedStatusCode, HttpStatusCode actualStatusCode, T retVal, StorageCommandBase`1 cmd, Exception ex)\r\n at Microsoft.WindowsAzure.Storage.Blob.CloudBlockBlob.<PutBlockImpl>b__3e(RESTCommand`1 cmd, HttpWebResponse resp, Exception ex, OperationContext ctx)\r\n at Microsoft.WindowsAzure.Storage.Core.Executor.Executor.EndGetResponse[T](IAsyncResult getResponseResult)\r\n --- End of inner exception stack trace ---\r\n at Microsoft.WindowsAzure.Storage.Core.Util.StorageAsyncResult`1.End()\r\n at Microsoft.WindowsAzure.Storage.Core.Util.AsyncExtensions.<>c__DisplayClass4.<CreateCallbackVoid>b__3(IAsyncResult ar)\r\n--- End of stack trace from previous location where exception was thrown ---\r\n at System.",
    "category": "ArchiveLogs"
}

Operation logs schema


Operation log JSON strings include elements listed in the following table:

NAME DESCRIPTION

ActivityId Internal ID, used to track purpose

EventName Operation name

resourceId Azure Resource Manager resource ID

SubscriptionId Subscription ID

EventTimeString Operation time

EventProperties Operation properties

Status Operation status

Caller Caller of operation (Azure portal or management client)

category OperationalLogs

Here's an example of an operation log JSON string:

{
    "ActivityId": "6aa994ac-b56e-4292-8448-0767a5657cc7",
    "EventName": "Create EventHub",
    "resourceId": "/SUBSCRIPTIONS/1A2109E3-9DA0-455B-B937-E35E36C1163C/RESOURCEGROUPS/DEFAULT-SERVICEBUS-CENTRALUS/PROVIDERS/MICROSOFT.EVENTHUB/NAMESPACES/SHOEBOXEHNS-CY4001",
    "SubscriptionId": "1a2109e3-9da0-455b-b937-e35e36c1163c",
    "EventTimeString": "9/28/2016 8:40:06 PM +00:00",
    "EventProperties": "{\"SubscriptionId\":\"1a2109e3-9da0-455b-b937-e35e36c1163c\",\"Namespace\":\"shoeboxehns-cy4001\",\"Via\":\"https://shoeboxehns-cy4001.servicebus.windows.net/f8096791adb448579ee83d30e006a13e/?api-version=2016-07\",\"TrackingId\":\"5ee74c9e-72b5-4e98-97c4-08a62e56e221_G1\"}",
    "Status": "Succeeded",
    "Caller": "ServiceBus Client",
    "category": "OperationalLogs"
}

Next steps
Introduction to Event Hubs
Event Hubs API overview
Get started with Event Hubs
Service Bus authentication with Shared Access Signatures
2/15/2017 15 min to read

Shared Access Signatures (SAS) are the primary security mechanism for Service Bus messaging. This article
discusses SAS, how they work, and how to use them in a platform-agnostic way.
SAS authentication enables applications to authenticate to Service Bus using an access key configured on the
namespace, or on the messaging entity (queue or topic) with which specific rights are associated. You can then use
this key to generate a SAS token that clients can in turn use to authenticate to Service Bus.
SAS authentication support is included in the Azure SDK version 2.0 and later.

Overview of SAS
Shared Access Signatures are an authentication mechanism based on SHA-256 secure hashes of URIs. SAS is an
extremely powerful mechanism that is used by all Service Bus services. In actual use, SAS has two components: a
shared access policy and a Shared Access Signature (often called a token).
SAS authentication in Service Bus involves the configuration of a cryptographic key with associated rights on a
Service Bus resource. Clients claim access to Service Bus resources by presenting a SAS token. This token consists
of the resource URI being accessed, and an expiry signed with the configured key.
You can configure Shared Access Signature authorization rules on Service Bus relays, queues, and topics.
SAS authentication uses the following elements:
Shared Access authorization rule: A 256-bit primary cryptographic key in Base64 representation, an optional
secondary key, and a key name and associated rights (a collection of Listen, Send, or Manage rights).
Shared Access Signature token: Generated using the HMAC-SHA256 of a resource string, consisting of the URI
of the resource that is accessed and an expiry, with the cryptographic key. The signature and other elements
described in the following sections are formatted into a string to form the SAS token.

Shared access policy


An important thing to understand about SAS is that it starts with a policy. For each policy, you decide on three
pieces of information: name, scope, and permissions. The name is just that; a unique name within that scope.
The scope is easy enough: it's the URI of the resource in question. For a Service Bus namespace, the scope is the
fully qualified domain name (FQDN), such as https://<yournamespace>.servicebus.windows.net/ .
The available permissions for a policy are largely self-explanatory:
Send
Listen
Manage
After you create the policy, it is assigned a Primary Key and a Secondary Key. These are cryptographically strong
keys. Don't lose them or leak them - they'll always be available in the Azure portal. You can use either of the
generated keys, and you can regenerate them at any time. However, if you regenerate or change the primary key
in the policy, any Shared Access Signatures created from it will be invalidated.
When you create a Service Bus namespace, a policy is automatically created for the entire namespace called
RootManageSharedAccessKey, and this policy has all permissions. You don't log on as root, so don't use this
policy unless there's a really good reason. You can create additional policies in the Configure tab for the
namespace in the portal. It's important to note that a single tree level in Service Bus (namespace, queue, etc.) can
only have up to 12 policies attached to it.

Configuration for Shared Access Signature authentication


You can configure the SharedAccessAuthorizationRule rule on Service Bus namespaces, queues, or topics.
Configuring a SharedAccessAuthorizationRule on a Service Bus subscription is currently not supported, but you
can use rules configured on a namespace or topic to secure access to subscriptions. For a working sample that
illustrates this procedure, see the Using Shared Access Signature (SAS) authentication with Service Bus
Subscriptions sample.
A maximum of 12 such rules can be configured on a Service Bus namespace, queue, or topic. Rules that are
configured on a Service Bus namespace apply to all entities in that namespace.

In this figure, the manageRuleNS, sendRuleNS, and listenRuleNS authorization rules apply to both queue Q1 and
topic T1, while listenRuleQ and sendRuleQ apply only to queue Q1 and sendRuleT applies only to topic T1.
The key parameters of a SharedAccessAuthorizationRule are as follows:

PARAMETER       DESCRIPTION

KeyName         A string that describes the authorization rule.

PrimaryKey      A base64-encoded 256-bit primary key for signing and validating the SAS token.

SecondaryKey    A base64-encoded 256-bit secondary key for signing and validating the SAS token.

AccessRights    A list of access rights granted by the authorization rule. These rights can be any
                collection of Listen, Send, and Manage rights.

When a Service Bus namespace is provisioned, a SharedAccessAuthorizationRule, with KeyName set to
RootManageSharedAccessKey, is created by default.

Generate a Shared Access Signature (token)


The policy itself is not the access token for Service Bus. It is the object from which the access token is generated -
using either the primary or secondary key. Any client that has access to the signing keys specified in the shared
access authorization rule can generate the SAS token. The token is generated by carefully crafting a string in the
following format:

SharedAccessSignature sig=<signature-string>&se=<expiry>&skn=<keyName>&sr=<URL-encoded-resourceURI>
Where signature-string is the HMAC-SHA256 hash computed over the scope of the token (scope as described in the
previous section) followed by a newline (\n) and the expiry time (in seconds since the epoch: 00:00:00 UTC on 1 January 1970).

NOTE
To avoid a short token expiry time, it is recommended that you encode the expiry time value as at least a 32-bit unsigned
integer, or preferably a long (64-bit) integer.

The hash looks similar to the following pseudo code and returns 32 bytes.

SHA-256('https://<yournamespace>.servicebus.windows.net/'+'\n'+ 1438205742)
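
For illustration, a minimal C# sketch of producing such a token (resource URI, key name, and key are placeholders; the construction follows the format described above):

using System;
using System.Net;
using System.Security.Cryptography;
using System.Text;

static string CreateSasToken(string resourceUri, string keyName, string key)
{
    // Expiry in seconds since the epoch (00:00:00 UTC on 1 January 1970), valid for one hour
    var sinceEpoch = DateTime.UtcNow - new DateTime(1970, 1, 1, 0, 0, 0, DateTimeKind.Utc);
    var expiry = Convert.ToString((long)sinceEpoch.TotalSeconds + 3600);

    // Sign the URL-encoded resource URI plus a newline plus the expiry
    string stringToSign = WebUtility.UrlEncode(resourceUri) + "\n" + expiry;
    using (var hmac = new HMACSHA256(Encoding.UTF8.GetBytes(key)))
    {
        var signature = Convert.ToBase64String(
            hmac.ComputeHash(Encoding.UTF8.GetBytes(stringToSign)));
        return string.Format(
            "SharedAccessSignature sr={0}&sig={1}&se={2}&skn={3}",
            WebUtility.UrlEncode(resourceUri), WebUtility.UrlEncode(signature), expiry, keyName);
    }
}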

The non-hashed values are in the SharedAccessSignature string so that the recipient can compute the hash with
the same parameters, to be sure that it returns the same result. The URI specifies the scope, and the key name
identifies the policy to be used to compute the hash. This is important from a security standpoint. If the signature
doesn't match that which the recipient (Service Bus) calculates, then access is denied. At this point you can be sure
that the sender had access to the key and should be granted the rights specified in the policy.
Note that you should use the encoded resource URI for this operation. The resource URI is the full URI of the
Service Bus resource to which access is claimed. For example,
http://<namespace>.servicebus.windows.net/<entityPath> or sb://<namespace>.servicebus.windows.net/<entityPath>
; that is, http://contoso.servicebus.windows.net/contosoTopics/T1/Subscriptions/S3 .
The shared access authorization rule used for signing must be configured on the entity specified by this URI, or by
one of its hierarchical parents. For example, http://contoso.servicebus.windows.net/contosoTopics/T1 or
http://contoso.servicebus.windows.net in the previous example.

A SAS token is valid for all resources under the <resourceURI> used in the signature-string .
The KeyName in the SAS token refers to the keyName of the shared access authorization rule used to generate
the token.
The URL-encoded-resourceURI must be the same as the URI used in the string-to-sign during the computation of
the signature. It should be percent-encoded.
It is recommended that you periodically regenerate the keys used in the SharedAccessAuthorizationRule object.
Applications should generally use the PrimaryKey to generate a SAS token. When regenerating the keys, you
should replace the SecondaryKey with the old primary key, and generate a new key as the new primary key. This
enables you to continue using tokens for authorization that were issued with the old primary key, and that have
not yet expired.
If a key is compromised and you have to revoke the keys, you can regenerate both the PrimaryKey and the
SecondaryKey of a SharedAccessAuthorizationRule, replacing them with new keys. This procedure invalidates all
tokens signed with the old keys.
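
A hedged sketch of the rotation strategy described above, using the .NET NamespaceManager API (connection string, queue, and rule names are placeholders):

using System.Linq;
using Microsoft.ServiceBus;
using Microsoft.ServiceBus.Messaging;

var nsm = NamespaceManager.CreateFromConnectionString("<ConnectionString>");
var qd = nsm.GetQueue("myqueue");
var rule = qd.Authorization
    .OfType<SharedAccessAuthorizationRule>()
    .FirstOrDefault(r => r.KeyName == "contosoQSendKey");
if (rule != null)
{
    rule.SecondaryKey = rule.PrimaryKey;  // old primary stays valid as the secondary key
    rule.PrimaryKey = SharedAccessAuthorizationRule.GenerateRandomKey();
    nsm.UpdateQueue(qd);
}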

How to use Shared Access Signature authentication with Service Bus


The following scenarios include configuration of authorization rules, generation of SAS tokens, and client
authorization.
For a full working sample of a Service Bus application that illustrates the configuration and uses SAS authorization,
see Shared Access Signature authentication with Service Bus. A related sample that illustrates the use of SAS
authorization rules configured on namespaces or topics to secure Service Bus subscriptions is available here:
Using Shared Access Signature (SAS) authentication with Service Bus Subscriptions.
Access Shared Access Authorization rules on a namespace
Operations on the Service Bus namespace root require certificate authentication. You must upload a management
certificate for your Azure subscription. To upload a management certificate, follow the steps here, using the Azure
portal. For more information about Azure management certificates, see the Azure certificates overview.
The endpoint for accessing shared access authorization rules on a Service Bus namespace is as follows:

https://management.core.windows.net/{subscriptionId}/services/ServiceBus/namespaces/{namespace}/AuthorizationRules/

To create a SharedAccessAuthorizationRule object on a Service Bus namespace, execute a POST operation on this
endpoint with the rule information serialized as JSON or XML. For example:

// Base address for accessing authorization rules on a namespace
string baseAddress = @"https://management.core.windows.net/<subscriptionId>/services/ServiceBus/namespaces/<namespace>/AuthorizationRules/";

// Configure authorization rule with base64-encoded 256-bit key and Send rights
var sendRule = new SharedAccessAuthorizationRule("contosoSendAll",
    SharedAccessAuthorizationRule.GenerateRandomKey(),
    new[] { AccessRights.Send });

// Operations on the Service Bus namespace root require certificate authentication.
WebRequestHandler handler = new WebRequestHandler
{
    ClientCertificateOptions = ClientCertificateOption.Manual
};
// Access the management certificate by subject name
handler.ClientCertificates.Add(GetCertificate(<certificateSN>));

HttpClient httpClient = new HttpClient(handler)
{
    BaseAddress = new Uri(baseAddress)
};
httpClient.DefaultRequestHeaders.Accept.Add(
    new MediaTypeWithQualityHeaderValue("application/json"));
httpClient.DefaultRequestHeaders.Add("x-ms-version", "2015-01-01");

// Execute a POST operation on the baseAddress above to create an auth rule
var postResult = httpClient.PostAsJsonAsync("", sendRule).Result;

Similarly, use a GET operation on the endpoint to read the authorization rules configured on the namespace.
To update or delete a specific authorization rule, use the following endpoint:

https://management.core.windows.net/{subscriptionId}/services/ServiceBus/namespaces/{namespace}/AuthorizationRules/{KeyName}

Access Shared Access Authorization rules on an entity


You can access a Microsoft.ServiceBus.Messaging.SharedAccessAuthorizationRule object configured on a Service
Bus queue or topic through the AuthorizationRules collection in the corresponding QueueDescription or
TopicDescription.
The following code shows how to add authorization rules for a queue.
// Create an instance of NamespaceManager for the operation
NamespaceManager nsm = NamespaceManager.CreateFromConnectionString(<connectionString>);
QueueDescription qd = new QueueDescription(<qPath>);

// Create a rule with send rights with keyName as "contosoQSendKey"
// and add it to the queue description.
qd.Authorization.Add(new SharedAccessAuthorizationRule("contosoQSendKey",
    SharedAccessAuthorizationRule.GenerateRandomKey(),
    new[] { AccessRights.Send }));

// Create a rule with listen rights with keyName as "contosoQListenKey"
// and add it to the queue description.
qd.Authorization.Add(new SharedAccessAuthorizationRule("contosoQListenKey",
    SharedAccessAuthorizationRule.GenerateRandomKey(),
    new[] { AccessRights.Listen }));

// Create a rule with manage rights with keyName as "contosoQManageKey"
// and add it to the queue description.
// A rule with manage rights must also have send and receive rights.
qd.Authorization.Add(new SharedAccessAuthorizationRule("contosoQManageKey",
    SharedAccessAuthorizationRule.GenerateRandomKey(),
    new[] { AccessRights.Manage, AccessRights.Listen, AccessRights.Send }));

// Create the queue.
nsm.CreateQueue(qd);

Use Shared Access Signature authorization


Applications using the Azure .NET SDK with the Service Bus .NET libraries can use SAS authorization through the
SharedAccessSignatureTokenProvider class. The following code illustrates the use of the token provider to send
messages to a Service Bus queue.

Uri runtimeUri = ServiceBusEnvironment.CreateServiceUri("sb", <yourServiceNamespace>, string.Empty);
MessagingFactory mf = MessagingFactory.Create(runtimeUri,
    TokenProvider.CreateSharedAccessSignatureTokenProvider(keyName, key));
QueueClient sendClient = mf.CreateQueueClient(qPath);

// Sending hello message to queue.
BrokeredMessage helloMessage = new BrokeredMessage("Hello, Service Bus!");
helloMessage.MessageId = "SAS-Sample-Message";
sendClient.Send(helloMessage);

Applications can also use SAS for authentication by using a SAS connection string in methods that accept
connection strings.
Note that to use SAS authorization with Service Bus relays, you can use SAS keys configured on the Service Bus
namespace. If you explicitly create a relay on the namespace (NamespaceManager with a RelayDescription) object,
you can set the SAS rules just for that relay. To use SAS authorization with Service Bus subscriptions, you can use
SAS keys configured on a Service Bus namespace or on a topic.

Use the Shared Access Signature (at HTTP level)


Now that you know how to create Shared Access Signatures for any entities in Service Bus, you are ready to
perform an HTTP POST:
POST https://<yournamespace>.servicebus.windows.net/<yourentity>/messages
Content-Type: application/json
Authorization: SharedAccessSignature sr=https%3A%2F%2F<yournamespace>.servicebus.windows.net%2F<yourentity>&sig=<yoursignature from code above>&se=1438205742&skn=KeyName

Remember, this works for everything. You can create SAS for a queue, topic, or subscription.
If you give a sender or client a SAS token, they don't have the key directly, and they cannot reverse the hash to
obtain it. As such, you have control over what they can access, and for how long. An important thing to remember
is that if you change the primary key in the policy, any Shared Access Signatures created from it will be invalidated.
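
As a sketch, the same POST can be issued from C# with HttpClient (the entity URL and token are placeholders; the token can be produced as shown earlier):

using System;
using System.Net.Http;
using System.Text;
using System.Threading.Tasks;

static async Task SendWithSasAsync(string sasToken)
{
    using (var client = new HttpClient())
    {
        // The SharedAccessSignature scheme is not a standard one, so bypass header validation.
        client.DefaultRequestHeaders.TryAddWithoutValidation("Authorization", sasToken);

        var content = new StringContent("{ \"temperature\": 21 }", Encoding.UTF8, "application/json");
        var response = await client.PostAsync(
            "https://<yournamespace>.servicebus.windows.net/<yourentity>/messages", content);
        Console.WriteLine(response.StatusCode);
    }
}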

Use the Shared Access Signature (at AMQP level)


In the previous section, you saw how to use the SAS token with an HTTP POST request for sending data to the
Service Bus. As you know, you can also access Service Bus using the Advanced Message Queuing Protocol (AMQP),
which is the preferred protocol for performance reasons in many scenarios. The SAS token usage with AMQP is
described in the document AMQP Claim-Based Security Version 1.0, which has been a working draft since 2013 but
is well supported by Azure today.
Before starting to send data to Service Bus, the publisher must send the SAS token inside an AMQP message to a
well-defined AMQP node named $cbs (you can see it as a "special" queue used by the service to acquire and
validate all the SAS tokens). The publisher must specify the ReplyTo field inside the AMQP message; this is the
node in which the service replies to the publisher with the result of the token validation (a simple request/reply
pattern between publisher and service). This reply node is created "on the fly," speaking about "dynamic creation
of remote node" as described by the AMQP 1.0 specification. After checking that the SAS token is valid, the
publisher can go forward and start to send data to the service.
The following steps show how to send the SAS token with the AMQP protocol using the AMQP.NET Lite library. This is
useful if you can't use the official Service Bus SDK (for example, on WinRT, .NET Compact Framework, .NET Micro
Framework, or Mono) when developing in C#. This library is also useful for understanding how claims-based
security works at the AMQP level, just as you saw how it works at the HTTP level (with an HTTP POST request and the
SAS token sent inside the "Authorization" header). If you don't need such deep knowledge about AMQP, you can
use the official Service Bus SDK with .NET Framework applications, which will do it for you.
C#
/// <summary>
/// Send a claims-based security (CBS) token
/// </summary>
/// <param name="connection">AMQP connection to the service</param>
/// <param name="sasToken">Shared access signature (token) to send</param>
private bool PutCbsToken(Connection connection, string sasToken)
{
    bool result = true;
    Session session = new Session(connection);

    string cbsClientAddress = "cbs-client-reply-to";
    var cbsSender = new SenderLink(session, "cbs-sender", "$cbs");
    var cbsReceiver = new ReceiverLink(session, cbsClientAddress, "$cbs");

    // construct the put-token message
    var request = new Message(sasToken);
    request.Properties = new Properties();
    request.Properties.MessageId = Guid.NewGuid().ToString();
    request.Properties.ReplyTo = cbsClientAddress;
    request.ApplicationProperties = new ApplicationProperties();
    request.ApplicationProperties["operation"] = "put-token";
    request.ApplicationProperties["type"] = "servicebus.windows.net:sastoken";
    request.ApplicationProperties["name"] = Fx.Format("amqp://{0}/{1}", sbNamespace, entity);
    cbsSender.Send(request);

    // receive the response
    var response = cbsReceiver.Receive();
    if (response == null || response.Properties == null || response.ApplicationProperties == null)
    {
        result = false;
    }
    else
    {
        int statusCode = (int)response.ApplicationProperties["status-code"];
        if (statusCode != (int)HttpStatusCode.Accepted && statusCode != (int)HttpStatusCode.OK)
        {
            result = false;
        }
    }

    // the sender/receiver may be kept open for refreshing tokens
    cbsSender.Close();
    cbsReceiver.Close();
    session.Close();

    return result;
}

The PutCbsToken() method receives the connection (AMQP connection class instance as provided by the AMQP
.NET Lite library) that represents the TCP connection to the service and the sasToken parameter that is the SAS
token to send.

NOTE
It's important that the connection is created with SASL authentication mechanism set to EXTERNAL (and not the default
PLAIN with username and password used when you don't need to send the SAS token).

Next, the publisher creates two AMQP links for sending the SAS token and receiving the reply (the token validation
result) from the service.
The AMQP message contains a set of properties, which carry more information than a simple message body. The SAS
token is the body of the message (set through the Message constructor). The "ReplyTo" property is set to the node name for receiving the
validation result on the receiver link (you can change its name if you want, and it will be created dynamically by
the service). The last three application/custom properties are used by the service to indicate what kind of operation
it has to execute. As described by the CBS draft specification, they must be the operation name ("put-token"), the
type of token (in this case, a "servicebus.windows.net:sastoken"), and the "name" of the audience to which the
token applies (the entire entity).
After sending the SAS token on the sender link, the publisher must read the reply on the receiver link. The reply is
a simple AMQP message with an application property named "status-code" that can contain the same values as
an HTTP status code.
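
For completeness, a hedged sketch of creating the connection that PutCbsToken() expects, using AMQP.NET Lite with SASL EXTERNAL (namespace name and sasToken are placeholders, the token generated as shown earlier):

using Amqp;
using Amqp.Sasl;

// amqps on port 5671; SASL EXTERNAL defers authorization to the CBS handshake.
var address = new Address("amqps://<yournamespace>.servicebus.windows.net:5671");
var connection = new Connection(address, SaslProfile.External, null, null);

// sasToken is assumed to have been generated beforehand
if (PutCbsToken(connection, sasToken))
{
    // authorized: create sender/receiver links and transfer data
}
connection.Close();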

Rights required for Service Bus operations


The following table shows the access rights required for various operations on Service Bus resources.

OPERATION                                                         CLAIM REQUIRED     CLAIM SCOPE

Namespace
Configure authorization rule on a namespace                       Manage             Any namespace address

Service Registry
Enumerate private policies                                        Manage             Any namespace address
Begin listening on a namespace                                    Listen             Any namespace address
Send messages to a listener at a namespace                        Send               Any namespace address

Queue
Create a queue                                                    Manage             Any namespace address
Delete a queue                                                    Manage             Any valid queue address
Enumerate queues                                                  Manage             /$Resources/Queues
Get the queue description                                         Manage             Any valid queue address
Configure authorization rule for a queue                          Manage             Any valid queue address
Send into the queue                                               Send               Any valid queue address
Receive messages from a queue                                     Listen             Any valid queue address
Abandon or complete messages after receiving in peek-lock mode    Listen             Any valid queue address
Defer a message for later retrieval                               Listen             Any valid queue address
Deadletter a message                                              Listen             Any valid queue address
Get the state associated with a message queue session             Listen             Any valid queue address
Set the state associated with a message queue session             Listen             Any valid queue address

Topic
Create a topic                                                    Manage             Any namespace address
Delete a topic                                                    Manage             Any valid topic address
Enumerate topics                                                  Manage             /$Resources/Topics
Get the topic description                                         Manage             Any valid topic address
Configure authorization rule for a topic                          Manage             Any valid topic address
Send to the topic                                                 Send               Any valid topic address

Subscription
Create a subscription                                             Manage             Any namespace address
Delete a subscription                                             Manage             ../myTopic/Subscriptions/mySubscription
Enumerate subscriptions                                           Manage             ../myTopic/Subscriptions
Get the subscription description                                  Manage             ../myTopic/Subscriptions/mySubscription
Abandon or complete messages after receiving in peek-lock mode    Listen             ../myTopic/Subscriptions/mySubscription
Defer a message for later retrieval                               Listen             ../myTopic/Subscriptions/mySubscription
Deadletter a message                                              Listen             ../myTopic/Subscriptions/mySubscription
Get the state associated with a topic session                     Listen             ../myTopic/Subscriptions/mySubscription
Set the state associated with a topic session                     Listen             ../myTopic/Subscriptions/mySubscription

Rules
Create a rule                                                     Manage             ../myTopic/Subscriptions/mySubscription
Delete a rule                                                     Manage             ../myTopic/Subscriptions/mySubscription
Enumerate rules                                                   Manage or Listen   ../myTopic/Subscriptions/mySubscription/Rules

Next steps
To learn more about Service Bus messaging, see the following topics.
Service Bus fundamentals
Service Bus queues, topics, and subscriptions
How to use Service Bus queues
How to use Service Bus topics and subscriptions
AMQP 1.0 in Azure Service Bus and Event Hubs protocol guide
1/17/2017 23 min to read

The Advanced Message Queueing Protocol 1.0 is a standardized framing and transfer protocol for asynchronously,
securely, and reliably transferring messages between two parties. It is the primary protocol of Azure Service Bus
Messaging and Azure Event Hubs. Both services also support HTTPS. The proprietary SBMP protocol that is also
supported is being phased out in favor of AMQP.
AMQP 1.0 is the result of broad industry collaboration that brought together middleware vendors, such as
Microsoft and Red Hat, with many messaging middleware users such as JP Morgan Chase representing the
financial services industry. The technical standardization forum for the AMQP protocol and extension specifications
is OASIS, and it has achieved formal approval as an international standard as ISO/IEC 19464.

Goals
This article briefly summarizes the core concepts of the AMQP 1.0 messaging specification along with a small set of
draft extension specifications that are currently being finalized in the OASIS AMQP technical committee and
explains how Azure Service Bus implements and builds on these specifications.
The goal is for any developer using any existing AMQP 1.0 client stack on any platform to be able to interact with
Azure Service Bus via AMQP 1.0.
Common general purpose AMQP 1.0 stacks, such as Apache Proton or AMQP.NET Lite, already implement all core
AMQP 1.0 gestures. Those foundational gestures are sometimes wrapped with a higher level API; Apache Proton
even offers two, the imperative Messenger API and the reactive Reactor API.
In the following discussion, we will assume that the management of AMQP connections, sessions, and links and the
handling of frame transfers and flow control are handled by the respective stack (such as Apache Proton-C) and do
not require much if any specific attention from application developers. We will abstractly assume the existence of a
few API primitives like the ability to connect, and to create some form of sender and receiver abstraction objects,
which then have some shape of send() and receive() operations, respectively.
When discussing advanced capabilities of Azure Service Bus, such as message browsing or management of
sessions, those will be explained in AMQP terms, but also as a layered pseudo-implementation on top of this
assumed API abstraction.

What is AMQP?
AMQP is a framing and transfer protocol. Framing means that it provides structure for binary data streams that
flow in either direction of a network connection. The structure provides delineation for distinct blocks of data
frames to be exchanged between the connected parties. The transfer capabilities make sure that both
communicating parties can establish a shared understanding about when frames shall be transferred, and when
transfers shall be considered complete.
Unlike earlier expired draft versions produced by the AMQP working group that are still in use by a few message
brokers, the working group's final, standardized AMQP 1.0 protocol does not prescribe the presence of a
message broker or any particular topology for entities inside a message broker.
The protocol can be used for symmetric peer-to-peer communication, for interaction with message brokers that
support queues and publish/subscribe entities, as Azure Service Bus does. It can also be used for interaction with
messaging infrastructure where the interaction patterns are different from regular queues, as is the case with Azure
Event Hubs. An Event Hub acts like a queue when events are sent to it, but acts more like a serial storage service
when events are read from it; it somewhat resembles a tape drive. The client picks an offset into the available data
stream and is then served all events from that offset to the latest available.
The AMQP 1.0 protocol is designed to be extensible, allowing further specifications to enhance its capabilities. The
three extension specifications we will discuss in this document illustrate this. For communication over existing
HTTPS/WebSockets infrastructure where configuring the native AMQP TCP ports may be difficult, a binding
specification defines how to layer AMQP over WebSockets. For interacting with the messaging infrastructure in a
request/response fashion for management purposes or to provide advanced functionality, the AMQP Management
specification defines the required basic interaction primitives. For federated authorization model integration, the
AMQP claims-based-security specification defines how to associate and renew authorization tokens associated with
links.

Basic AMQP scenarios


This section explains basic usage of AMQP 1.0 with Azure Service Bus, which includes creating connections,
sessions, and links, and transferring messages to and from Service Bus entities such as queues, topics, and
subscriptions.
The most authoritative source to learn about how AMQP works is the AMQP 1.0 specification, but the specification
was written to precisely guide implementation and not to teach the protocol. This section focuses on introducing as
much terminology as needed for describing how Service Bus uses AMQP 1.0. For a more comprehensive
introduction to AMQP, as well as a broader discussion of AMQP 1.0, you can review this video course.
Connections and sessions

AMQP calls the communicating programs containers; those contain nodes, which are the communicating entities
inside of those containers. A queue can be such a node. AMQP allows for multiplexing, so a single connection can
be used for many communication paths between nodes; for example, an application client can concurrently receive
from one queue and send to another queue over the same network connection.
The network connection is thus anchored on the container. It is initiated by the container in the client role making
an outbound TCP socket connection to a container in the receiver role which listens for and accepts inbound TCP
connections. The connection handshake includes negotiating the protocol version, declaring or negotiating the use
of Transport Level Security (TLS/SSL), and an authentication/authorization handshake at the connection scope that
is based on SASL.
Azure Service Bus requires the use of TLS at all times. It supports connections over TCP port 5671, whereby the TCP
connection is first overlaid with TLS before entering the AMQP protocol handshake, and also supports connections
over TCP port 5672 whereby the server immediately offers a mandatory upgrade of connection to TLS using the
AMQP-prescribed model. The AMQP WebSockets binding creates a tunnel over TCP port 443 that is then
equivalent to AMQP 5671 connections.
After setting up the connection and TLS, Service Bus offers two SASL mechanism options:
SASL PLAIN is commonly used for passing username and password credentials to a server. Service Bus does not
have accounts, but named Shared Access Security rules, which confer rights and are associated with a key. The
name of a rule is used as the user name and the key (as base64-encoded text) is used as the password, as shown in
the sketch after this list. The rights associated with the chosen rule govern the operations allowed on the connection.
SASL ANONYMOUS is used for bypassing SASL authorization when the client wants to use the claims-based-
security (CBS) model that will be described later. With this option, a client connection can be established
anonymously for a short time during which the client can only interact with the CBS endpoint and the CBS
handshake must complete.
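
As a sketch with AMQP.NET Lite (namespace, rule name, and key are placeholders), SASL PLAIN credentials can be embedded in the address:

using Amqp;

// SAS rule name as user name, rule key as password; the client then negotiates SASL PLAIN.
var address = new Address("<yournamespace>.servicebus.windows.net", 5671,
    "RootManageSharedAccessKey", "<base64-encoded-key>");
var connection = new Connection(address);
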
After the transport connection is established, the containers each declare the maximum frame size they are willing
to handle, and after which idle timeout they'll unilaterally disconnect if there is no activity on the connection.
They also declare how many concurrent channels are supported. A channel is a unidirectional, outbound, virtual
transfer path on top of the connection. A session takes a channel from each of the interconnected containers to
form a bi-directional communication path.
Sessions have a window-based flow control model; when a session is created, each party declares how many
frames it is willing to accept into its receive window. As the parties exchange frames, transferred frames fill that
window and transfers stop when the window is full and until the window gets reset or expanded using the flow
performative (performative is the AMQP term for protocol-level gestures exchanged between the two parties).
This window-based model is roughly analogous to the TCP concept of window-based flow control, but at the
session level inside the socket. The protocol's concept of allowing for multiple concurrent sessions exists so that
high priority traffic could be rushed past throttled normal traffic, like on a highway express lane.
Azure Service Bus currently uses exactly one session for each connection. The Service Bus maximum frame-size is
262,144 bytes (256K bytes) for Service Bus Standard and Event Hubs. It is 1,048,576 (1 MB) for Service Bus
Premium. Service Bus does not impose any particular session-level throttling windows, but resets the window
regularly as part of link-level flow control (see the next section).
Connections, channels, and sessions are ephemeral. If the underlying connection collapses, connections, TLS tunnel,
SASL authorization context, and sessions must be reestablished.
Links

AMQP transfers messages over links. A link is a communication path created over a session that enables
transferring messages in one direction; the transfer status negotiation is over the link and bi-directional between
the connected parties.
Links can be created by either container at any time and over an existing session, which makes AMQP different
from many other protocols, including HTTP and MQTT, where the initiation of transfers and transfer path is an
exclusive privilege of the party creating the socket connection.
The link-initiating container asks the opposite container to accept a link and it chooses a role of either sender or
receiver. Therefore, either container can initiate creating unidirectional or bi-directional communication paths, with
the latter modeled as pairs of links.
Links are named and associated with nodes. As stated in the beginning, nodes are the communicating entities
inside a container.
In Azure Service Bus, a node is directly equivalent to a queue, a topic, a subscription, or a deadletter sub-queue of a
queue or subscription. The node name used in AMQP is therefore the relative name of the entity inside of the
Service Bus namespace. If a queue is named myqueue, that's also its AMQP node name. A topic subscription
follows the HTTP API convention by being sorted into a subscriptions resource collection; thus, a subscription
sub on a topic mytopic has the AMQP node name mytopic/subscriptions/sub.
The connecting client is also required to use a local node name for creating links; Service Bus is not prescriptive
about those node names and will not interpret them. AMQP 1.0 client stacks generally use a scheme to assure that
these ephemeral node names are unique in the scope of the client.
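
To illustrate (an AMQP.NET Lite sketch reusing the connection from the earlier example; link and entity names are placeholders), attaching a receiving link to such a subscription node:

using Amqp;

// The node address is the entity's relative name within the namespace.
var session = new Session(connection);
var receiver = new ReceiverLink(session, "my-client-receiver", "mytopic/subscriptions/sub");
var message = receiver.Receive();   // blocks until a message arrives or the default timeout elapses
receiver.Accept(message);           // settle the transfer with the accepted outcome
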
Transfers

Once a link has been established, messages can be transferred over that link. In AMQP, a transfer is executed with
an explicit protocol gesture (the transfer performative) that moves a message from sender to receiver over a link. A
transfer is complete when it is settled, meaning that both parties have established a shared understanding of the
outcome of that transfer.
In the simplest case, the sender can choose to send messages "pre-settled," meaning that the client isn't interested
in the outcome and the receiver will not provide any feedback about the outcome of the operation. This mode is
supported by Azure Service Bus at the AMQP protocol level, but not exposed in any of the client APIs.
The regular case is that messages are being sent unsettled, and the receiver will then indicate acceptance or
rejection using the disposition performative. Rejection occurs when the receiver cannot accept the message for any
reason, and the rejection message contains information about the reason, which is an error structure defined by
AMQP. If messages are rejected due to internal errors inside of Azure Service Bus, the service returns extra
information inside that structure that can be used for providing diagnostics hints to support personnel if you are
filing support requests. You'll learn more details about errors later.
A special form of rejection is the released state, which indicates that the receiver has no technical objection to the
transfer, but also no interest in settling the transfer. That case exists, for instance, when a message is delivered to a
Service Bus client, and the client chooses to "abandon" the message because it cannot perform the work resulting
from processing the message while the message delivery itself is not at fault. A variation of that state is the
modified state, which allows changes to the message as it is released. That state is not used by Service Bus at
present.
The AMQP 1.0 specification defines a further disposition state received that specifically helps to handle link
recovery. Link recovery allows reconstituting the state of a link and any pending deliveries on top of a new
connection and session, when the prior connection and session were lost.
Azure Service Bus does not support link recovery; if the client loses the connection to Service Bus with an unsettled
message transfer pending, that message transfer is lost, and the client must reconnect, reestablish the link, and
retry the transfer.
As such, Azure Service Bus and Event Hubs do support "at least once" transfers where the sender can be assured for
the message having been stored and accepted, but it does not support "exactly once" transfers at the AMQP level,
where the system would attempt to recover the link and continue to negotiate the delivery state to avoid
duplication of the message transfer.
To compensate for possible duplicate sends, Azure Service Bus supports duplicate detection as an optional feature
on queues and topics. Duplicate detection records the message IDs of all incoming messages during a user-defined
time window, and silently drops any message sent with the same message ID during that same window.
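
For example, a sketch of enabling duplicate detection when creating a queue with the .NET client (connection string, path, and window are placeholders):

using System;
using Microsoft.ServiceBus;
using Microsoft.ServiceBus.Messaging;

var namespaceManager = NamespaceManager.CreateFromConnectionString("<ConnectionString>");
var qd = new QueueDescription("myqueue")
{
    RequiresDuplicateDetection = true,
    // Window during which incoming message IDs are recorded; duplicates are dropped silently.
    DuplicateDetectionHistoryTimeWindow = TimeSpan.FromMinutes(10)
};
namespaceManager.CreateQueue(qd);
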
Flow control

In addition to the session-level flow control model discussed previously, each link has its own flow control
model. Session-level flow control protects the container from having to handle too many frames at once, while
link-level flow control puts the application in charge of how many messages it wants to handle from a link, and when.
On a link, transfers can only happen when the sender has enough link credit. Link credit is a counter set by the
receiver using the flow performative, which is scoped to a link. When and while the sender is assigned link credit, it
will attempt to use up that credit by delivering messages. Each message delivery decrements the remaining link
credit by one. When the link credit is used up, deliveries stop.
When Service Bus is in the receiver role it will instantly provide the sender with ample link credit, so that messages
can be sent immediately. As link credit is being used, Service Bus will occasionally send a flow performative to the
sender to update the link credit balance.
In the sender role, Service Bus will eagerly send messages to use up any outstanding link credit.
A "receive" call at the API level translates into a flow performative being sent to Service Bus by the client, and
Service Bus will consume that credit by taking the first available, unlocked message from the queue, locking it and
transferring it. If there is no message readily available for delivery, any outstanding credit by any link established
with that particular entity will remain recorded in order of arrival and messages will be locked and transferred as
they become available to use any outstanding credit.
The lock on a message is released when the transfer is settled into one of the terminal states accepted, rejected, or
released. The message is removed from Service Bus when the terminal state is accepted. It remains in Service Bus
and will be delivered to the next receiver when the transfer reaches any of the other states. Service Bus will
automatically move the message into the entity's deadletter queue when it reaches the maximum delivery count
allowed for the entity due to repeated rejections or releases.
Even though the official Service Bus APIs do not directly expose such an option today, a lower-level AMQP protocol
client can use the link-credit model to turn the "pull-style" interaction of issuing one unit of credit for each receive
request into a "push-style" model by issuing a very large number of link credits and then receive messages as they
become available without any further interaction. Push is supported through the MessagingFactory.PrefetchCount
or MessageReceiver.PrefetchCount property settings. When they are non-zero, the AMQP client uses it as the link
credit.
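
A sketch of that push-style configuration (the connection string and queue name are placeholders):

using Microsoft.ServiceBus.Messaging;

var factory = MessagingFactory.CreateFromConnectionString("<ConnectionString>");
// A non-zero prefetch count is used by the AMQP client as the link credit it issues.
factory.PrefetchCount = 100;
var receiver = factory.CreateMessageReceiver("myqueue");
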
In this context it's important to understand that the clock for the expiration of the lock on the message inside the
entity starts when the message is taken from the entity and not when the message is being put on the wire.
Whenever the client indicates readiness to receive messages by issuing link credit, it is therefore expected to be
actively pulling messages across the network and be ready to handle them. Otherwise the message lock may have
expired before the message is even delivered. The use of link-credit flow control should directly reflect the
immediate readiness to deal with available messages dispatched to the receiver.
In summary, the following sections provide a schematic overview of the performative flow during different API
interactions. Each section describes a different logical operation. Some of those interactions may be "lazy," meaning
they may only be performed once required. Creating a message sender may not cause a network interaction until
the first message is sent or requested.
The arrows show the performative flow direction.
Create Message Receiver

CLIENT SERVICE BUS

Client attaches to entity as receiver:
--> attach(
    name={link name},
    handle={numeric handle},
    role=receiver,
    source={entity name},
    target={client link id}
)

Service Bus replies, attaching its end of the link:
<-- attach(
    name={link name},
    handle={numeric handle},
    role=sender,
    source={entity name},
    target={client link id}
)

Create Message Sender


CLIENT SERVICE BUS

--> attach( No action


name={link name},
handle={numeric handle},
role=sender,
source={client link id},
target={entity name}
)

No action <-- attach(


name={link name},
handle={numeric handle},
role=receiver,
source={client link id},
target={entity name}
)

Create Message Sender (Error)

CLIENT SERVICE BUS

--> attach( No action


name={link name},
handle={numeric handle},
role=sender,
source={client link id},
target={entity name}
)

No action <-- attach(


name={link name},
handle={numeric handle},
role=receiver,
source=null,
target=null
)

<-- detach(
handle={numeric handle},
closed=true,
error={error info}
)

Close Message Receiver/Sender

CLIENT SERVICE BUS

--> detach( No action


handle={numeric handle},
closed=true
)

No action <-- detach(


handle={numeric handle},
closed=true
)

Send (Success)
CLIENT SERVICE BUS

--> transfer( No action


delivery-id={numeric handle},
delivery-tag={binary handle},
settled=false, more=false,
state=null,
resume=false
)

No action <-- disposition(


role=receiver,
first={delivery id},
last={delivery id},
settled=true,
state=accepted
)

Send (Error)

CLIENT SERVICE BUS

--> transfer( No action


delivery-id={numeric handle},
delivery-tag={binary handle},
settled=false, more=false,
state=null,
resume=false
)

No action <-- disposition(


role=receiver,
first={delivery id},
last={delivery id},
settled=true,
state=rejected(
error={error info}
)
)

Receive

CLIENT SERVICE BUS

--> flow( No action


link-credit=1
)

No action <-- transfer(


delivery-id={numeric handle},
delivery-tag={binary handle},
settled=false,
more=false,
state=null,
resume=false
)

--> disposition( No action


role=receiver,
first={delivery id},
last={delivery id},
settled=true,
state=accepted
)

Multi-Message Receive

CLIENT SERVICE BUS

--> flow( No action


link-credit=3
)

No action <-- transfer(


delivery-id={numeric handle},
delivery-tag={binary handle},
settled=false,
more=false,
state=null,
resume=false
)

No action <-- transfer(


delivery-id={numeric handle+1},
delivery-tag={binary handle},
settled=false,
more=false,
state=null,
resume=false
)

No action <-- transfer(


delivery-id={numeric handle+2},
delivery-tag={binary handle},
settled=false,
more=false,
state=null,
resume=false
)

--> disposition( No action


role=receiver,
first={delivery id},
last={delivery id+2},
settled=true,
state=accepted
)

Messages
The following sections explain which properties from the standard AMQP message sections are used by Service Bus
and how they map to the official Service Bus APIs.
header

FIELD NAME          USAGE                                                             API NAME
durable             -                                                                 -
priority            -                                                                 -
ttl                 Time to live for this message.                                    TimeToLive
first-acquirer      -                                                                 -
delivery-count      -                                                                 DeliveryCount

properties

FIELD NAME             USAGE                                                          API NAME
message-id             Application-defined, free-form identifier for this message.    MessageId
                       Used for duplicate detection.
user-id                Application-defined user identifier, not interpreted by        Not accessible through the Service Bus API.
                       Service Bus.
to                     Application-defined destination identifier, not interpreted    To
                       by Service Bus.
subject                Application-defined message purpose identifier, not            Label
                       interpreted by Service Bus.
reply-to               Application-defined reply-path indicator, not interpreted      ReplyTo
                       by Service Bus.
correlation-id         Application-defined correlation identifier, not interpreted    CorrelationId
                       by Service Bus.
content-type           Application-defined content-type indicator for the body,       ContentType
                       not interpreted by Service Bus.
content-encoding       Application-defined content-encoding indicator for the         Not accessible through the Service Bus API.
                       body, not interpreted by Service Bus.
absolute-expiry-time   Declares at which absolute instant the message will expire.    ExpiresAtUtc
                       Ignored on input (header ttl is observed), authoritative
                       on output.
creation-time          Declares at which time the message was created. Not used       Not accessible through the Service Bus API.
                       by Service Bus.
group-id               Application-defined identifier for a related set of            SessionId
                       messages. Used for Service Bus sessions.
group-sequence         Counter identifying the relative sequence number of the        Not accessible through the Service Bus API.
                       message inside a session. Ignored by Service Bus.
reply-to-group-id      -                                                              ReplyToSessionId

Advanced Service Bus capabilities


This section covers advanced capabilities of Azure Service Bus that are based on draft extensions to AMQP currently
being developed in the OASIS Technical Committee for AMQP. Azure Service Bus implements the latest status of
these drafts and will adopt changes introduced as those drafts reach standard status.

NOTE
Service Bus Messaging advanced operations are supported through a request/response pattern. The details of these
operations are described in the document AMQP 1.0 in Service Bus: request-response-based operations.

AMQP management
The AMQP Management specification is the first of the draft extensions we'll discuss here. This specification defines
a set of protocol gestures layered on top of the AMQP protocol that allow management interactions with the
messaging infrastructure over AMQP. The specification defines generic operations such as create, read, update, and
delete for managing entities inside a messaging infrastructure and a set of query operations.
All those gestures require a request/response interaction between the client and the messaging infrastructure, and
therefore the specification defines how to model that interaction pattern on top of AMQP: The client connects to the
messaging infrastructure, initiates a session, and then creates a pair of links. On one link, the client acts as sender
and on the other it acts as receiver, thus creating a pair of links that can act as a bi-directional channel.

LOGICAL OPERATION CLIENT SERVICE BUS

Create Request Response Path --> attach( No action


name={link name},
handle={numeric handle},
role=sender,
source=null,
target=myentity/$management
)

Create Request Response Path No action <-- attach(


name={link name},
handle={numeric handle},
role=receiver,
source=null,
target=myentity
)
LOGICAL OPERATION CLIENT SERVICE BUS

Create Request Response Path --> attach(


name={link name},
handle={numeric handle},
role=receiver,
source=myentity/$management,
target=myclient$id
)

Create Request Response Path No action <-- attach(


name={link name},
handle={numeric handle},
role=sender,
source=myentity,
target=myclient$id
)

Having that pair of links in place, the request/response implementation is straightforward: A request is a message
sent to an entity inside the messaging infrastructure that understands this pattern. In that request-message, the
reply-to field in the properties section is set to the target identifier for the link onto which to deliver the response.
The handling entity will process the request, and then deliver the reply over the link whose target identifier matches
the indicated reply-to identifier.
The pattern obviously requires that the client container and the client-generated identifier for the reply destination
are unique across all clients and, for security reasons, also difficult to predict.
The message exchanges used for the management protocol and for all other protocols that use the same pattern
happen at the application level; they do not define new AMQP protocol-level gestures. That's intentional so that
applications can take immediate advantage of these extensions with compliant AMQP 1.0 stacks.
Azure Service Bus does not currently implement any of the core features of the management specification, but the
request/response pattern defined by the management specification is foundational for the claims-based-security
feature and for nearly all of the advanced capabilities we will discuss in the following sections.
Claims-based authorization
The AMQP Claims-Based-Authorization (CBS) specification draft builds on the management specification's
request/response pattern, and describes a generalized model for how to use federated security tokens with AMQP.
The default security model of AMQP discussed in the introduction is based on SASL and integrates with the AMQP
connection handshake. Using SASL has the advantage that it provides an extensible model for which a set of
mechanisms have been defined from which any protocol that formally leans on SASL can benefit. Amongst those
mechanisms are PLAIN for transfer of usernames and passwords, EXTERNAL to bind to TLS-level security,
ANONYMOUS to express the absence of explicit authentication/authorization, and a broad variety of additional
mechanisms that allow passing authentication and/or authorization credentials or tokens.
AMQP's SASL integration has two drawbacks:
All credentials and tokens are scoped to the connection. A messaging infrastructure may want to provide
differentiated access control on a per-entity basis; for example, allowing the bearer of a token to send to queue
A but not to queue B. With the authorization context anchored on the connection, it's not possible to use a single
connection and yet use different access tokens for queue A and queue B.
Access tokens are typically only valid for a limited time. This forces the user to periodically reacquire tokens, and
gives the token issuer an opportunity to refuse to issue a fresh token if the user's access permissions have
changed. AMQP connections may last for very long periods of time. The SASL model only provides a chance to
set a token at connection time, which means that the messaging infrastructure either has to disconnect the client
when the token expires, or it needs to accept the risk of allowing continued communication with a client whose
access rights may have been revoked in the interim.
The AMQP CBS specification, implemented by Azure Service Bus, allows an elegant workaround for both of those
issues: It allows a client to associate access tokens with each node, and to update those tokens before they expire,
without interrupting the message flow.
CBS defines a virtual management node, named $cbs, to be provided by the messaging infrastructure. The
management node accepts tokens on behalf of any other nodes in the messaging infrastructure.
The protocol gesture is a request/reply exchange as defined by the management specification. That means the
client establishes a pair of links with the $cbs node and then passes a request on the outbound link, and then waits
for the response on the inbound link.
The request message has the following application properties:

| KEY | OPTIONAL | VALUE TYPE | VALUE CONTENTS |
| --- | --- | --- | --- |
| operation | No | string | put-token |
| type | No | string | The type of the token being put. |
| name | No | string | The "audience" to which the token applies. |
| expiration | Yes | timestamp | The expiry time of the token. |

The name property identifies the entity with which the token shall be associated; in Service Bus, it's the path to the
queue, or topic/subscription. The type property identifies the token type:

| TOKEN TYPE | TOKEN DESCRIPTION | BODY TYPE | NOTES |
| --- | --- | --- | --- |
| amqp:jwt | JSON Web Token (JWT) | AMQP Value (string) | Not yet available. |
| amqp:swt | Simple Web Token (SWT) | AMQP Value (string) | Only supported for SWT tokens issued by AAD/ACS. |
| servicebus.windows.net:sastoken | Service Bus SAS Token | AMQP Value (string) | - |

Tokens confer rights. Service Bus knows three fundamental rights: Send allows sending, Listen allows receiving,
and Manage allows manipulating entities. SWT tokens issued by AAD/ACS explicitly include those rights as
claims. Service Bus SAS tokens refer to rules configured on the namespace or entity, and those rules are configured
with rights. Signing the token with the key associated with that rule thus makes the token express the respective
rights. The token associated with an entity using put-token will permit the connected client to interact with the
entity per the token rights. A link where the client takes on the sender role requires the Send right, taking on the
receiver role requires the Listen right.
The reply message has the following application property values:

| KEY | OPTIONAL | VALUE TYPE | VALUE CONTENTS |
| --- | --- | --- | --- |
| status-code | No | int | HTTP response code [RFC2616]. |
| status-description | Yes | string | Description of the status. |

The client can call put-token repeatedly and for any entity in the messaging infrastructure. The tokens are scoped to
the current client and anchored on the current connection, meaning the server will drop any retained tokens when
the connection drops.
The current Service Bus implementation only allows CBS in conjunction with the SASL method ANONYMOUS. An
SSL/TLS connection must always exist prior to the SASL handshake.
The ANONYMOUS mechanism must therefore be supported by the chosen AMQP 1.0 client. Anonymous access
means that the initial connection handshake, including creation of the initial session, happens without Service Bus
knowing who is creating the connection.
Once the connection and session are established, attaching the links to the $cbs node and sending the put-token
request are the only permitted operations. A valid token must be set successfully using a put-token request for
some entity node within 20 seconds after the connection has been established, otherwise the connection is
unilaterally dropped by Service Bus.
The client is subsequently responsible for keeping track of token expiration. When a token expires, Service Bus will
promptly drop all links on the connection to the respective entity. To prevent this, the client can replace the token
for the node with a new one at any time through the virtual $cbs management node with the same put-token
gesture, and without getting in the way of the payload traffic that flows on different links.
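To make the gesture concrete, the following is a minimal sketch of the put-token exchange using the open-source
AMQP.Net Lite client for .NET. The host name, entity path, SAS token, and audience string are placeholder
assumptions, not values prescribed by this article; consult the CBS draft and your client's documentation for details.

using System;
using Amqp;
using Amqp.Framing;

class CbsExample
{
    static void Main()
    {
        string host = "mynamespace.servicebus.windows.net";   // placeholder
        string audience = "myeventhub";                       // placeholder entity path
        string sasToken = "SharedAccessSignature sr=...";     // placeholder token

        // TLS first, then SASL ANONYMOUS (no credentials in the address).
        var connection = new Connection(new Address(host, 5671, null, null, "/", "amqps"));
        var session = new Session(connection);

        // The pair of links to the $cbs node: send the request on one link,
        // receive the response on the other.
        string replyTo = "cbs-client-reply-to";
        var cbsSender = new SenderLink(session, "cbs-sender", "$cbs");
        var cbsReceiver = new ReceiverLink(session, replyTo, "$cbs");

        // Build the put-token request as described in the table above.
        var request = new Message(sasToken);
        request.Properties = new Properties
        {
            MessageId = Guid.NewGuid().ToString(),
            ReplyTo = replyTo
        };
        request.ApplicationProperties = new ApplicationProperties();
        request.ApplicationProperties["operation"] = "put-token";
        request.ApplicationProperties["type"] = "servicebus.windows.net:sastoken";
        request.ApplicationProperties["name"] = audience;
        cbsSender.Send(request);

        // The reply carries the status-code / status-description properties.
        var response = cbsReceiver.Receive();
        cbsReceiver.Accept(response);
        Console.WriteLine("put-token status: " + response.ApplicationProperties["status-code"]);

        // The connection is now authorized for the named entity; attach sender
        // or receiver links to that entity on this connection as usual.
        connection.Close();
    }
}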

Next steps
To learn more about AMQP, see the following links:
Service Bus AMQP overview
AMQP 1.0 support for Service Bus partitioned queues and topics
AMQP in Service Bus for Windows Server
Event Hubs management libraries
1/18/2017 1 min to read Edit on GitHub

The Event Hubs management libraries can dynamically provision Event Hubs namespaces and entities. This allows
for complex deployments and messaging scenarios, enabling you to programmatically determine what entities to
provision. These libraries are currently available for .NET.

Supported functionality
Namespace creation, update, deletion
Event Hubs creation, update, deletion
Consumer Group creation, update, deletion

Prerequisites
To get started using the Event Hubs management libraries, you must authenticate with Azure Active Directory
(AAD). AAD requires that you authenticate as a service principal, which provides access to your Azure resources. For
information about creating a service principal, see one of these articles:
Use the Azure portal to create Active Directory application and service principal that can access resources
Use Azure PowerShell to create a service principal to access resources
Use Azure CLI to create a service principal to access resources
These tutorials will provide you with an AppId (Client ID), TenantId , and ClientSecret (Authentication Key), all of
which are used for authentication by the management libraries. You must have 'Owner' permissions for the
resource group on which you wish to run.

Programming pattern
The pattern to manipulate any Event Hubs resource follows a common protocol:
1. Obtain a token from Azure Active Directory using the Microsoft.IdentityModel.Clients.ActiveDirectory
library.

var context = new AuthenticationContext($"https://login.windows.net/{tenantId}");

var result = await context.AcquireTokenAsync(
    "https://management.core.windows.net/",
    new ClientCredential(clientId, clientSecret)
);

2. Create the EventHubManagementClient object.

var creds = new TokenCredentials(token);

var ehClient = new EventHubManagementClient(creds)
{
    SubscriptionId = SettingsCache["SubscriptionId"]
};

3. Set the CreateOrUpdate parameters to your specified values.


var ehParams = new EventHubCreateOrUpdateParameters()
{
Location = SettingsCache["DataCenterLocation"]
};

4. Execute the call.

await ehClient.EventHubs.CreateOrUpdateAsync(resourceGroupName, namespaceName, EventHubName, ehParams);
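
For reference, the four steps can be combined into a single method. The following is a hedged sketch built from the
snippets above; the method and parameter names are placeholders, and it assumes the
Microsoft.Azure.Management.EventHub, Microsoft.IdentityModel.Clients.ActiveDirectory, and Microsoft.Rest packages.

using System.Threading.Tasks;
using Microsoft.Azure.Management.EventHub;
using Microsoft.Azure.Management.EventHub.Models;
using Microsoft.IdentityModel.Clients.ActiveDirectory;
using Microsoft.Rest;

class ManagementExample
{
    static async Task CreateEventHubAsync(
        string tenantId, string clientId, string clientSecret,
        string subscriptionId, string resourceGroupName,
        string namespaceName, string eventHubName, string location)
    {
        // 1. Obtain a token from Azure Active Directory.
        var context = new AuthenticationContext($"https://login.windows.net/{tenantId}");
        var result = await context.AcquireTokenAsync(
            "https://management.core.windows.net/",
            new ClientCredential(clientId, clientSecret));

        // 2. Create the EventHubManagementClient object.
        var creds = new TokenCredentials(result.AccessToken);
        var ehClient = new EventHubManagementClient(creds)
        {
            SubscriptionId = subscriptionId
        };

        // 3. Set the CreateOrUpdate parameters.
        var ehParams = new EventHubCreateOrUpdateParameters()
        {
            Location = location
        };

        // 4. Execute the call.
        await ehClient.EventHubs.CreateOrUpdateAsync(
            resourceGroupName, namespaceName, eventHubName, ehParams);
    }
}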

Next steps
.NET Management sample
Microsoft.Azure.Management.EventHub Reference
Azure Event Hubs Archive
3/22/2017 5 min to read Edit on GitHub

Azure Event Hubs Archive enables you to automatically deliver the streaming data in your Event Hubs to a Blob
storage account of your choice, with the flexibility to specify a time or size interval. Setting up
Archive is fast, there are no administrative costs to run it, and it scales automatically with your Event Hubs
throughput units. Event Hubs Archive is the easiest way to load streaming data into Azure and enables you to focus
on data processing rather than on data capture.
Event Hubs Archive enables you to process real-time and batch-based pipelines on the same stream. This enables
you to build solutions that can grow with your needs over time. Whether you're building batch-based systems
today with an eye towards future real-time processing, or you want to add an efficient cold path to an existing real-
time solution, Event Hubs Archive makes working with streaming data easier.

How Event Hubs Archive works


Event Hubs is a time-retention durable buffer for telemetry ingress, similar to a distributed log. The key to scale in
Event Hubs is the partitioned consumer model. Each partition is an independent segment of data and is consumed
independently. Over time this data ages off, based on the configurable retention period. As a result, a given Event
Hub never gets "too full."
Event Hubs Archive enables you to specify your own Azure Blob storage account and container, which are used
to store the archived data. This account can be in the same region as your Event Hub or in another region, adding to
the flexibility of the Event Hubs Archive feature.
Archived data is written in Apache Avro format: a compact, fast, binary format that provides rich data structures
with inline schema. This format is widely used in the Hadoop ecosystem, as well as by Stream Analytics and Azure
Data Factory. More information about working with Avro is available later in this article.
Archive windowing
Event Hubs Archive enables you to set up a "window" to control archiving. This window is a minimum size and time
configuration with a "first wins policy," meaning that the first trigger encountered causes an archive operation. If
you have a fifteen-minute, 100 MB archive window and send 1 MB per second, the size window will trigger before
the time window. Each partition archives independently and writes a completed block blob at the time of archive,
named for the time at which the archive interval was encountered. The naming convention is as follows:

[Namespace]/[EventHub]/[Partition]/[YYYY]/[MM]/[DD]/[HH]/[mm]/[ss]
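
For example, with hypothetical names, a blob archived by partition 0 at 17:05:00 UTC on March 22, 2017 would be
named as follows:

mynamespace/myeventhub/0/2017/03/22/17/05/00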

Scaling to throughput units


Event Hubs traffic is controlled by throughput units. A single throughput unit allows 1 MB per second or 1000
events per second of ingress and twice that amount of egress. Standard Event Hubs can be configured with 1-20
throughput units, and you can purchase more via a quota increase support request. Usage beyond your purchased
throughput units is throttled. Event Hubs Archive copies data directly from the internal Event Hubs storage,
bypassing throughput unit egress quotas and saving your egress for other processing readers, such as Stream
Analytics or Spark.
Once configured, Event Hubs Archive runs automatically as soon as you send your first event. It continues running
at all times. To make it easier for your downstream processing to know that the process is working, Event Hubs
writes empty files when there is no data. This provides a predictable cadence and marker that can feed your batch
processors.
Setting up Event Hubs Archive
You can configure Archive at Event Hub creation time using the portal, or with Azure Resource Manager. You
enable Archive by clicking the On button, and configure a storage account and container by clicking the Container
section of the blade. Because Event Hubs Archive uses service-to-service authentication with storage, you do not
need to specify a storage connection string. The resource picker selects the resource URI for your storage account
automatically. If you use Azure Resource Manager, you must supply this URI explicitly as a string.
The default time window is 5 minutes; the minimum is 1 minute and the maximum is 15 minutes. The size window
has a range of 10-500 MB.

Adding Archive to an existing Event Hub


Archive can be configured on existing Event Hubs that are in an Event Hubs namespace. The feature is not available
on older Messaging or Mixed type namespaces. To enable Archive on an existing Event Hub, or to change your
Archive settings, click your namespace to load the Essentials blade, then click the Event Hub for which you want to
enable or change the Archive setting. Finally, click on the Properties section of the open blade as shown in the
following figure.
You can also configure Event Hubs Archive via Azure Resource Manager templates. For more information, see this
article.

Exploring the archive and working with Avro


Once configured, Event Hubs Archive creates files in the Azure Storage account and container provided on the
configured time window. You can view these files in any tool such as Azure Storage Explorer. You can download the
files locally to work on them.
The files produced by Event Hubs Archive have the Avro schema shown in the command output below.

An easy way to explore Avro files is by using the Avro Tools jar from Apache. After downloading this jar, you can
see the schema of a specific Avro file by running the following command:

java -jar avro-tools-1.8.1.jar getschema <name of archive file>

This command returns:

"type":"record",
"name":"EventData",
"namespace":"Microsoft.ServiceBus.Messaging",
"fields":[
{"name":"SequenceNumber","type":"long"},
{"name":"Offset","type":"string"},
{"name":"EnqueuedTimeUtc","type":"string"},
{"name":"SystemProperties","type":{"type":"map","values":["long","double","string","bytes"]}},
{"name":"Properties","type":{"type":"map","values":["long","double","string","bytes"]}},
{"name":"Body","type":["null","bytes"]}
]
}

You can also use Avro Tools to convert the file to JSON format and perform other processing.
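For example, the following Avro Tools command (the file name is a placeholder) dumps each archived event as a
line of JSON:

java -jar avro-tools-1.8.1.jar tojson <name of archive file>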
To perform more advanced processing, download and install Avro for your choice of platform. At the time of this
writing, there are implementations available for C, C++, C#, Java, NodeJS, Perl, PHP, Python, and Ruby.
Apache Avro has complete Getting Started guides for Java and Python. You can also read the Getting Started with
Event Hubs Archive article.

How Event Hubs Archive is charged


Event Hubs Archive is metered similarly to throughput units, as an hourly charge. The charge is directly
proportional to the number of throughput units purchased for the namespace. As throughput units are increased
and decreased, Event Hubs Archive increases and decreases to provide matching performance. The meters occur in
tandem. The charge for Event Hubs Archive is $0.10 per hour per throughput unit, offered at a 50% discount during
the preview period.
Event Hubs Archive is the easiest way to get data into Azure. Using Azure Data Lake, Azure Data Factory, and Azure
HDInsight, you can perform batch processing and other analytics of your choosing using familiar tools and
platforms at any scale you need.

Next steps
You can learn more about Event Hubs by visiting the following links:
A complete sample application that uses Event Hubs.
The Scale out Event Processing with Event Hubs sample.
Event Hubs overview
Create an Event Hubs namespace with Event Hub
and enable Archive using an Azure Resource
Manager template
3/7/2017 3 min to read Edit on GitHub

This article shows how to use an Azure Resource Manager template that creates a namespace of type Event Hubs,
with one Event Hub, and also enables the Archive feature on the Event Hub. The article describes how to define
which resources are deployed, and how to define parameters that are specified when the deployment is executed.
You can use this template for your own deployments, or customize it to meet your requirements.
For more information about creating templates, see Authoring Azure Resource Manager templates.
For more information about practices and patterns for naming Azure resources, see Azure Resources Naming
Conventions.
For the complete template, see the Event Hub and enable Archive template on GitHub.

NOTE
To check for the latest templates, visit the Azure Quickstart Templates gallery and search for Event Hubs.

What will you deploy?


With this template, you deploy an Event Hubs namespace with an Event Hub, and also enable Event Hubs Archive.
Event Hubs is an event processing service used to provide event and telemetry ingress to Azure at massive scale,
with low latency and high reliability. Event Hubs Archive enables you to automatically deliver the streaming data in
your Event Hubs to the Azure Blob storage of your choice, at a time or size interval that you specify.
To run the deployment automatically, click the following button:

Parameters
With Azure Resource Manager, you define parameters for values you want to specify when the template is
deployed. The template includes a section called Parameters that contains all the parameter values. You should
define a parameter for those values that vary based on the project you are deploying or based on the environment
you are deploying to. Do not define parameters for values that always stay the same. Each parameter value is used
in the template to define the resources that are deployed.
The template defines the following parameters.
eventHubNamespaceName
The name of the Event Hubs namespace to create.
"eventHubNamespaceName":{
"type":"string",
"metadata":{
"description":"Name of the EventHub namespace"
}
}

eventHubName
The name of the Event Hub created in the Event Hubs namespace.

"eventHubName":{
"type":"string",
"metadata":{
"description":"Name of the Event Hub"
}
}

messageRetentionInDays
The number of days to retain the messages in the Event Hub.

"messageRetentionInDays":{
"type":"int",
"defaultValue": 1,
"minValue":"1",
"maxValue":"7",
"metadata":{
"description":"How long to retain the data in Event Hub"
}
}

partitionCount
The number of partitions to create in the Event Hub.

"partitionCount":{
"type":"int",
"defaultValue":2,
"minValue":2,
"maxValue":32,
"metadata":{
"description":"Number of partitions chosen"
}
}

archiveEnabled
Enable Archive on the Event Hub.

"archiveEnabled":{
"type":"string",
"defaultValue":"true",
"allowedValues": [
"false",
"true"],
"metadata":{
"description":"Enable or disable the Archive for your Event Hub"
}
}

archiveEncodingFormat
The encoding format you specify to serialize the event data.

"archiveEncodingFormat":{
"type":"string",
"defaultValue":"Avro",
"allowedValues":[
"Avro"],
"metadata":{
"description":"The encoding format Archive serializes the EventData"
}
}

archiveTime
The time interval in which Event Hubs Archive starts archiving the data in Azure Blob storage.

"archiveTime":{
"type":"int",
"defaultValue":300,
"minValue":60,
"maxValue":900,
"metadata":{
"description":"the time window in seconds for the archival"
}
}

archiveSize
The size interval at which Archive starts archiving the data in Azure Blob storage.

"archiveSize":{
"type":"int",
"defaultValue":314572800,
"minValue":10485760,
"maxValue":524288000,
"metadata":{
"description":"the size window in bytes for archival"
}
}

destinationStorageAccountResourceId
Archive requires an Azure Storage account resource ID to enable archiving to your desired Storage account.

"destinationStorageAccountResourceId":{
"type":"string",
"metadata":{
"description":"Your existing storage Account resource id where you want the blobs be archived"
}
}

blobContainerName
The blob container in which to archive your event data.
"blobContainerName":{
"type":"string",
"metadata":{
"description":"Your existing storage container that you want the blobs archived in"
}
}

apiVersion
The API version of the template.

"apiVersion":{
"type":"string",
"defaultValue":"2015-08-01",
"metadata":{
"description":"ApiVersion used by the template"
}
}
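
When you deploy the template, you supply values for these parameters. As an illustration, a hypothetical
azuredeploy.parameters.json file might look like the following; the names and values here are placeholders, not
values defined by this article, and parameters with default values can be omitted.

{
    "$schema":"https://schema.management.azure.com/schemas/2015-01-01/deploymentParameters.json#",
    "contentVersion":"1.0.0.0",
    "parameters":{
        "eventHubNamespaceName":{ "value":"myehnamespace" },
        "eventHubName":{ "value":"myeventhub" },
        "destinationStorageAccountResourceId":{ "value":"/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.Storage/storageAccounts/<account-name>" },
        "blobContainerName":{ "value":"archive" }
    }
}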

Resources to deploy
Creates a namespace of type EventHubs, with one Event Hub, and also enables Archive.
"resources":[
{
"apiVersion":"[variables('ehVersion')]",
"name":"[parameters('eventHubNamespaceName')]",
"type":"Microsoft.EventHub/Namespaces",
"location":"[variables('location')]",
"sku":{
"name":"Standard",
"tier":"Standard"
},
"resources":[
{
"apiVersion":"[variables('ehVersion')]",
"name":"[parameters('eventHubName')]",
"type":"EventHubs",
"dependsOn":[
"[concat('Microsoft.EventHub/namespaces/', parameters('eventHubNamespaceName'))]"
],
"properties":{
"path":"[parameters('eventHubName')]",
"MessageRetentionInDays":"[parameters('messageRetentionInDays')]",
"PartitionCount":"[parameters('partitionCount')]",
"ArchiveDescription":{
"enabled":"[parameters('archiveEnabled')]",
"encoding":"[parameters('archiveEncodingFormat')]",
"intervalInSeconds":"[parameters('archiveTime')]",
"sizeLimitInBytes":"[parameters('archiveSize')]",
"destination":{
"name":"EventHubArchive.AzureBlockBlob",
"properties":{
"StorageAccountResourceId":"
[parameters('destinationStorageAccountResourceId')]",
"BlobContainer":"[parameters('blobContainerName')]"
}
}
}

}
]
}
]

Commands to run deployment


To deploy the resources to Azure, you must be logged in to your Azure account and you must use the Azure
Resource Manager module. To learn about using Azure Resource Manager with either Azure PowerShell or Azure
CLI, see:
Using Azure PowerShell with Azure Resource Manager
Using the Azure CLI for Mac, Linux, and Windows with Azure Resource Management.
The following examples assume you already have a resource group in your account with the specified name.

PowerShell

New-AzureRmResourceGroupDeployment -ResourceGroupName <resource-group-name> -TemplateFile https://raw.githubusercontent.com/azure/azure-quickstart-templates/master/201-eventhubs-create-namespace-and-enable-archive/azuredeploy.json

Azure CLI

azure config mode arm

azure group deployment create <my-resource-group> <my-deployment-name> --template-uri https://raw.githubusercontent.com/azure/azure-quickstart-templates/master/201-eventhubs-create-namespace-and-enable-archive/azuredeploy.json

Next steps
You can learn more about Event Hubs by visiting the following links:
Event Hubs overview
Create an Event Hub
Event Hubs FAQ
Event Hubs Archive walkthrough: Python
2/2/2017 4 min to read Edit on GitHub

Event Hubs Archive is a new feature of Event Hubs that enables you to automatically deliver the streaming data in
your Event Hub to an Azure Blob Storage account of your choice. This makes it easy to perform batch processing on
real-time streaming data. This article describes how to use Event Hubs Archive with Python. For more information
about Event Hubs Archive, see the overview article.
This sample uses the Azure Python SDK to demonstrate the Archive feature. The sender.py program sends
simulated environmental telemetry to Event Hubs in JSON format. The Event Hub is configured to use the Archive
feature to write this data to blob storage in batches. The archivereader.py app then reads these blobs and creates an
append file per device, then writes the data into .csv files.
What will be accomplished
1. Create an Azure Blob Storage account and a blob container within it, using the Azure portal.
2. Create an Event Hub namespace, using the Azure portal.
3. Create an Event Hub with the Archive feature enabled, using the Azure portal.
4. Send data to the Event Hub with a Python script.
5. Read the files from the archive and process them with another Python script.
Prerequisites
Python 2.7.x
An Azure subscription
An active Event Hubs namespace and Event Hub.

NOTE
To complete this tutorial, you need an Azure account. You can activate your MSDN subscriber benefits or sign up for a free
account.

Create an Azure Storage account


1. Log on to the Azure portal.
2. In the left navigation pane of the portal, click New, then click Storage, and then click Storage Account.
3. Complete the fields in the storage account blade and then click Create.
4. After you see the Deployments Succeeded message, click the name of the new storage account and in the
Essentials blade, click Blobs. When the Blob service blade opens, click + Container at the top. Name the
container archive, then close the Blob service blade.
5. Click Access keys in the left blade and copy the name of the storage account and the value of key1. Save these
values to Notepad or some other temporary location.

Create a Python script to send events to your Event Hub


1. Open your favorite Python editor, such as Visual Studio Code.
2. Create a script called sender.py. This script will send 200 events to your Event Hub. They are simple
environmental readings sent in JSON.
3. Paste the following code into sender.py:
import uuid
import datetime
import random
import json
from azure.servicebus import ServiceBusService

sbs = ServiceBusService(service_namespace='INSERT YOUR NAMESPACE NAME',
                        shared_access_key_name='RootManageSharedAccessKey',
                        shared_access_key_value='INSERT YOUR KEY')

# Generate ten simulated device IDs.
devices = []
for x in range(0, 10):
    devices.append(str(uuid.uuid4()))

# Send twenty readings per device (200 events total), as JSON strings.
for y in range(0, 20):
    for dev in devices:
        reading = {'id': dev, 'timestamp': str(datetime.datetime.utcnow()), 'uv': random.random(),
                   'temperature': random.randint(70, 100), 'humidity': random.randint(70, 100)}
        s = json.dumps(reading)
        sbs.send_event('INSERT YOUR EVENT HUB NAME', s)
    print y

4. Update the preceding code to use your namespace name, key value, and Event Hub name that you obtained
when you created the Event Hubs namespace.

Create a Python script to read your archive files


1. Open your favorite Python editor, such as Visual Studio Code.
2. Create a script called archivereader.py. This script will read the archive files and create a file per device to write
the data only for that device.
3. Paste the following code into archivereader.py:
import os
import string
import json
import avro.schema
from avro.datafile import DataFileReader, DataFileWriter
from avro.io import DatumReader, DatumWriter
from azure.storage.blob import BlockBlobService

def processBlob(filename):
    # Read the Avro file and group the JSON readings by device ID.
    reader = DataFileReader(open(filename, 'rb'), DatumReader())
    dict = {}
    for reading in reader:
        parsed_json = json.loads(reading["Body"])
        if not 'id' in parsed_json:
            return
        if not dict.has_key(parsed_json['id']):
            list = []
            dict[parsed_json['id']] = list
        else:
            list = dict[parsed_json['id']]
        list.append(parsed_json)
    reader.close()
    # Append each device's readings to a per-device .csv file.
    for device in dict.keys():
        deviceFile = open(device + '.csv', "a")
        for r in dict[device]:
            deviceFile.write(", ".join([str(r[x]) for x in r.keys()]) + '\n')
        deviceFile.close()

def startProcessing(accountName, key, container):
    print 'Processor started using path: ' + os.getcwd()
    # List the blobs in the archive container and process the non-empty ones.
    block_blob_service = BlockBlobService(account_name=accountName, account_key=key)
    generator = block_blob_service.list_blobs(container)
    for blob in generator:
        if blob.properties.content_length != 0:
            print('Downloaded a non empty blob: ' + blob.name)
            cleanName = string.replace(blob.name, '/', '_')
            block_blob_service.get_blob_to_path(container, blob.name, cleanName)
            processBlob(cleanName)
            # Delete the local copy and the processed blob.
            os.remove(cleanName)
            block_blob_service.delete_blob(container, blob.name)

startProcessing('YOUR STORAGE ACCOUNT NAME', 'YOUR KEY', 'archive')

4. Be sure to paste the appropriate values for your storage account name and key in the call to startProcessing .

Run the scripts


1. Open a command prompt that has Python in its path, and then run these commands to install Python
prerequisite packages:

pip install azure-storage
pip install azure-servicebus
pip install avro

If you have an earlier version of azure-storage or azure, you might need to use the --upgrade option.
You might also need to run the following command (not necessary on most systems):

pip install cryptography

2. Change your directory to wherever you saved sender.py and archivereader.py, and run this command:
start python sender.py

This starts a new Python process to run the sender.


3. Now wait a few minutes for the archive to run. Then type the following command into your original
command window:

python archivereader.py

This archive processor uses the local directory to download all the blobs from the storage account/container.
It processes any that are not empty, and writes the results as .csv files into the local directory.

Next steps
You can learn more about Event Hubs by visiting the following links:
Overview of Event Hubs Archive
A complete sample application that uses Event Hubs.
The Scale out Event Processing with Event Hubs sample.
Event Hubs overview
Create an Event Hubs namespace with Event Hub
and consumer group using an Azure Resource
Manager template
3/8/2017 2 min to read Edit on GitHub

This article shows how to use an Azure Resource Manager template that creates a namespace of type Event Hubs,
with one Event Hub and one consumer group. The article shows how to define which resources are deployed and
how to define parameters that are specified when the deployment is executed. You can use this template for your
own deployments, or customize it to meet your requirements.
For more information about creating templates, see Authoring Azure Resource Manager templates.
For the complete template, see the Event Hub and consumer group template on GitHub.

NOTE
To check for the latest templates, visit the Azure Quickstart Templates gallery and search for Event Hubs.

What will you deploy?


With this template, you will deploy an Event Hubs namespace with an Event Hub and a consumer group.
Event Hubs is an event processing service used to provide event and telemetry ingress to Azure at massive scale,
with low latency and high reliability.
To run the deployment automatically, click the following button:

Parameters
With Azure Resource Manager, you define parameters for values you want to specify when the template is
deployed. The template includes a section called Parameters that contains all of the parameter values. You should
define a parameter for those values that vary based on the project you are deploying or based on the
environment you are deploying to. Do not define parameters for values that always stay the same. Each
parameter value is used in the template to define the resources that are deployed.
The template defines the following parameters.
eventHubNamespaceName
The name of the Event Hubs namespace to create.

"eventHubNamespaceName": {
"type": "string"
}

eventHubName
The name of the Event Hub created in the Event Hubs namespace.
"eventHubName": {
"type": "string"
}

eventHubConsumerGroupName
The name of the consumer group created for the Event Hub.

"eventHubConsumerGroupName": {
"type": "string"
}

apiVersion
The API version of the template.

"apiVersion": {
"type": "string"
}

Resources to deploy
Creates a namespace of type EventHubs, with an Event Hub and a consumer group.
"resources":[
{
"apiVersion":"[variables('ehVersion')]",
"name":"[parameters('namespaceName')]",
"type":"Microsoft.EventHub/namespaces",
"location":"[variables('location')]",
"sku":{
"name":"Standard",
"tier":"Standard"
},
"resources":[
{
"apiVersion":"[variables('ehVersion')]",
"name":"[parameters('eventHubName')]",
"type":"EventHubs",
"dependsOn":[
"[concat('Microsoft.EventHub/namespaces/', parameters('namespaceName'))]"
],
"properties":{
"path":"[parameters('eventHubName')]"
},
"resources":[
{
"apiVersion":"[variables('ehVersion')]",
"name":"[parameters('consumerGroupName')]",
"type":"ConsumerGroups",
"dependsOn":[
"[parameters('eventHubName')]"
],
"properties":{

}
}
]
}
]
}
],

Commands to run deployment


To deploy the resources to Azure, you must be logged in to your Azure account and you must use the Azure
Resource Manager module. To learn about using Azure Resource Manager with either Azure PowerShell or Azure
CLI, see:
Using Azure PowerShell with Azure Resource Manager
Using the Azure CLI for Mac, Linux, and Windows with Azure Resource Management.
The following examples assume you already have a resource group in your account with the specified name.

PowerShell

New-AzureRmResourceGroupDeployment -ResourceGroupName <resource-group-name> -TemplateFile https://raw.githubusercontent.com/azure/azure-quickstart-templates/master/201-event-hubs-create-event-hub-and-consumer-group/azuredeploy.json

Azure CLI

azure config mode arm

azure group deployment create <my-resource-group> <my-deployment-name> --template-uri https://raw.githubusercontent.com/azure/azure-quickstart-templates/master/201-event-hubs-create-event-hub-and-consumer-group/azuredeploy.json

Next steps
You can learn more about Event Hubs by visiting the following links:
Event Hubs overview
Create an Event Hub
Event Hubs FAQ
Event Hubs messaging exceptions
2/7/2017 7 min to read Edit on GitHub

This article lists some exceptions generated by the Azure Service Bus messaging APIs, which include Event Hubs.
This reference is subject to change, so check back for updates.

Exception categories
The Event Hubs APIs generate exceptions that can fall into the following categories, along with the associated
action you can take to try to fix them.
1. User coding error: System.ArgumentException, System.InvalidOperationException,
System.OperationCanceledException, System.Runtime.Serialization.SerializationException. General action: try to
fix the code before proceeding.
2. Setup/configuration error: Microsoft.ServiceBus.Messaging.MessagingEntityNotFoundException,
Microsoft.Azure.EventHubs.MessagingEntityNotFoundException, System.UnauthorizedAccessException. General
action: review your configuration and change if necessary.
3. Transient exceptions: Microsoft.ServiceBus.Messaging.MessagingException,
Microsoft.ServiceBus.Messaging.ServerBusyException, Microsoft.Azure.EventHubs.ServerBusyException,
Microsoft.ServiceBus.Messaging.MessagingCommunicationException. General action: retry the operation or
notify users.
4. Other exceptions: System.Transactions.TransactionException, System.TimeoutException,
Microsoft.ServiceBus.Messaging.MessageLockLostException,
Microsoft.ServiceBus.Messaging.SessionLockLostException. General action: specific to the exception type; please
refer to the table in the following section.

Exception types
The following table lists messaging exception types, their causes, and suggested actions you can take.

| EXCEPTION TYPE | DESCRIPTION/CAUSES/EXAMPLES | SUGGESTED ACTION | NOTE ON AUTOMATIC/IMMEDIATE RETRY |
| --- | --- | --- | --- |
| TimeoutException | The server did not respond to the requested operation within the specified time, which is controlled by OperationTimeout. The server may have completed the requested operation. This can happen due to network or other infrastructure delays. | Check the system state for consistency and retry if necessary. See TimeoutException. | Retry might help in some cases; add retry logic to code. |
| InvalidOperationException | The requested user operation is not allowed within the server or service. See the exception message for details. For example, Complete generates this exception if the message was received in ReceiveAndDelete mode. | Check the code and the documentation. Make sure the requested operation is valid. | Retry will not help. |
| OperationCanceledException | An attempt is made to invoke an operation on an object that has already been closed, aborted, or disposed. In rare cases, the ambient transaction is already disposed. | Check the code and make sure it does not invoke operations on a disposed object. | Retry will not help. |
| UnauthorizedAccessException | The TokenProvider object could not acquire a token, the token is invalid, or the token does not contain the claims required to perform the operation. | Make sure the token provider is created with the correct values. Check the configuration of the Access Control service. | Retry might help in some cases; add retry logic to code. |
| ArgumentException, ArgumentNullException, ArgumentOutOfRangeException | One or more arguments supplied to the method are invalid. The URI supplied to NamespaceManager or Create contains path segment(s). The URI scheme supplied to NamespaceManager or Create is invalid. The property value is larger than 32 KB. | Check the calling code and make sure the arguments are correct. | Retry will not help. |
| Microsoft.ServiceBus.Messaging.MessagingEntityNotFoundException, Microsoft.Azure.EventHubs.MessagingEntityNotFoundException | Entity associated with the operation does not exist or it has been deleted. | Make sure the entity exists. | Retry will not help. |
| MessageNotFoundException | Attempt to receive a message with a particular sequence number. This message is not found. | Make sure the message has not been received already. Check the deadletter queue to see if the message has been deadlettered. | Retry will not help. |
| MessagingCommunicationException | Client is not able to establish a connection to Event Hub. | Make sure the supplied host name is correct and the host is reachable. | Retry might help if there are intermittent connectivity issues. |
| Microsoft.ServiceBus.Messaging.ServerBusyException, Microsoft.Azure.EventHubs.ServerBusyException | Service is not able to process the request at this time. | Client can wait for a period of time, then retry the operation. See ServerBusyException. | Client may retry after a certain interval. If a retry results in a different exception, check the retry behavior of that exception. |
| MessageLockLostException | Lock token associated with the message has expired, or the lock token is not found. | Dispose the message. | Retry will not help. |
| SessionLockLostException | Lock associated with this session is lost. | Abort the MessageSession object. | Retry will not help. |
| MessagingException | Generic messaging exception that may be thrown in the following cases: An attempt is made to create a QueueClient using a name or path that belongs to a different entity type (for example, a topic). An attempt is made to send a message larger than 256 KB. The server or service encountered an error during processing of the request; see the exception message for details. This is usually a transient exception. | Check the code and ensure that only serializable objects are used for the message body (or use a custom serializer). Check the documentation for the supported value types of the properties and only use supported types. Check the IsTransient property. If it is true, you can retry the operation. | Retry behavior is undefined and might not help. |
| MessagingEntityAlreadyExistsException | Attempt to create an entity with a name that is already used by another entity in that service namespace. | Delete the existing entity or choose a different name for the entity to be created. | Retry will not help. |
| QuotaExceededException | The messaging entity has reached its maximum allowable size. This can happen if the maximum number of receivers (which is 5) has already been opened on a per-consumer group level. | Create space in the entity by receiving messages from the entity or its sub-queues. See QuotaExceededException. | Retry might help if messages have been removed in the meantime. |
| SessionCannotBeLockedException | Attempt to accept a session with a specific session ID, but the session is currently locked by another client. | Make sure the session is unlocked by other clients. | Retry might help if the session has been released in the interim. |
| TransactionSizeExceededException | Too many operations are part of the transaction. | Reduce the number of operations that are part of this transaction. | Retry will not help. |
| MessagingEntityDisabledException | Request for a runtime operation on a disabled entity. | Activate the entity. | Retry might help if the entity has been activated in the interim. |
| Microsoft.ServiceBus.Messaging.MessageSizeExceededException, Microsoft.Azure.EventHubs.MessageSizeExceededException | A message payload exceeds the 256 KB limit. Note that the 256 KB limit is the total message size, which can include system properties and any .NET overhead. | Reduce the size of the message payload, then retry the operation. | Retry will not help. |
| TransactionException | The ambient transaction (Transaction.Current) is invalid. It may have been completed or aborted. Inner exception may provide additional information. | - | Retry will not help. |
| TransactionInDoubtException | An operation is attempted on a transaction that is in doubt, or an attempt is made to commit the transaction and the transaction becomes in doubt. | Your application must handle this exception (as a special case), as the transaction may have already been committed. | - |

QuotaExceededException
QuotaExceededException indicates that a quota for a specific entity has been exceeded.
This can happen if the maximum number of receivers (5) has already been opened on a per-consumer group level.
Event Hubs
Event Hubs has a limit of 20 consumer groups per Event Hub. When you attempt to create more, you receive a
QuotaExceededException.

TimeoutException
A TimeoutException indicates that a user-initiated operation is taking longer than the operation timeout.
For Event Hubs, the timeout is specified either as part of the connection string, or through
ServiceBusConnectionStringBuilder. The error message itself might vary, but it always contains the timeout value
specified for the current operation.
Common causes
There are two common causes for this error: incorrect configuration, or a transient service error.
1. Incorrect configuration: The operation timeout might be too small for the operational condition. The default
value for the operation timeout in the client SDK is 60 seconds. Check to see if your code has the value set to
something too small. Note that the condition of the network and CPU usage can affect the time it takes for a
particular operation to complete, so the operation timeout should not be set to a very small value.
2. Transient service error: Sometimes the Event Hubs service can experience delays in processing requests; for
example, during periods of high traffic. In such cases, you can retry your operation after a delay, until the
operation is successful. If the same operation still fails after multiple attempts, visit the Azure service
status site to see if there are any known service outages.
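
Where the guidance above suggests adding retry logic for transient failures, a simple backoff-and-retry loop is
often sufficient. The following is a minimal sketch against the .NET Framework client
(Microsoft.ServiceBus.Messaging); the retry count and delays are illustrative assumptions to tune for your workload.

using System;
using System.Threading.Tasks;
using Microsoft.ServiceBus.Messaging;

class RetryExample
{
    static async Task SendWithRetryAsync(EventHubClient client, byte[] payload)
    {
        const int maxAttempts = 5;
        TimeSpan delay = TimeSpan.FromSeconds(1);

        for (int attempt = 1; ; attempt++)
        {
            try
            {
                // Create a fresh EventData per attempt; a failed send can
                // leave the previous instance unusable.
                await client.SendAsync(new EventData(payload));
                return;
            }
            catch (ServerBusyException) when (attempt < maxAttempts)
            {
                // Transient: fall through to the backoff below.
            }
            catch (TimeoutException) when (attempt < maxAttempts)
            {
            }
            catch (MessagingException e) when (e.IsTransient && attempt < maxAttempts)
            {
            }

            // Exponential backoff before the next attempt.
            await Task.Delay(delay);
            delay = TimeSpan.FromSeconds(delay.TotalSeconds * 2);
        }
    }
}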

ServerBusyException
A Microsoft.ServiceBus.Messaging.ServerBusyException or Microsoft.Azure.EventHubs.ServerBusyException
indicates that a server is overloaded. There are two relevant error codes for this exception.
Error code 50002
This error can occur for one of two reasons:
1. The load is not evenly distributed across all partitions on the Event Hub, and one partition hits the local
throughput unit limitation.
Resolution: Revising the partition distribution strategy or trying
EventHubClient.Send(eventDataWithOutPartitionKey) might help.
2. The Event Hubs namespace does not have sufficient throughput units (you can check the Metrics blade on the
Event Hubs namespace blade in the Azure portal to confirm). Note that the portal shows aggregated (1
minute) information, but we measure the throughput in real time, so it is only an estimate.
Resolution: Increasing the throughput units on the namespace can help. You can do this on the portal, in the
Scale blade of the Event Hubs namespace blade.
Error code 50001
This error should rarely occur. It happens when the container running code for your namespace is low on CPU
for no more than a few seconds before the Event Hubs load balancer kicks in.

Next steps
You can learn more about Event Hubs by visiting the following links:
Event Hubs overview
Create an Event Hub
Event Hubs FAQ
Event Hubs quotas
3/7/2017 1 min to read Edit on GitHub

This section lists basic quotas and limits in Microsoft Azure Event Hubs.
The following table lists quotas and limits specific to Azure Event Hubs. For information about Event Hubs pricing,
see Event Hubs Pricing.

| LIMIT | SCOPE | TYPE | BEHAVIOR WHEN EXCEEDED | VALUE |
| --- | --- | --- | --- | --- |
| Number of Event Hubs per namespace | Namespace | Static | Subsequent requests for creation of a new Event Hub will be rejected. | 10 |
| Number of partitions per Event Hub | Entity | Static | - | 32 |
| Number of consumer groups per Event Hub | Entity | Static | - | 20 |
| Number of AMQP connections per namespace | Namespace | Static | Subsequent requests for additional connections will be rejected, and an exception will be received by the calling code. | 5,000 |
| Maximum size of an Event Hubs event | System-wide | Static | - | 256 KB |
| Maximum size of an Event Hub name | Entity | Static | - | 50 characters |
| Number of non-epoch receivers per consumer group | Entity | Static | - | 5 |
| Maximum retention period of event data | Entity | Static | - | 1-7 days |
| Maximum throughput units | Namespace | Static | Exceeding the throughput unit limit causes your data to be throttled and generates a ServerBusyException. You can request a larger number of throughput units for a Standard tier by filing a support ticket. Additional throughput units are available in blocks of twenty on a committed purchase basis. | 20 |

Next steps
You can learn more about Event Hubs by visiting the following links:
Event Hubs overview
Create an Event Hub
Event Hubs FAQ
Event Hubs samples
3/8/2017 1 min to read Edit on GitHub

The Event Hubs samples demonstrate key features in Azure Event Hubs. This article categorizes and describes the
samples available, with links to each.
At the time of this writing, Event Hubs samples are located in several different places:
MSDN developer code samples
GitHub
For more information about different versions of the .NET Framework, see Frameworks and Targets.
More samples will be added over time, so check back here frequently for updates.

.NET Standard
The following samples demonstrate how to send and receive events using the Event Hubs client for the .NET
Standard library.
Send events
The Get started sending sample shows how to write a .NET Core console application that sends events to an Event
Hub.
Receive events
The Get started receiving with the Event Processor Host sample is a .NET Core console application that receives
messages from an Event Hub using the Event Processor Host.

.NET Framework
These samples demonstrate various other features of Azure Event Hubs, targeting the .NET Framework library.
Notify users of events received
The AppToNotifyUsers sample notifies users of data received from sensors or other systems.
Get started with Event Hubs
The Event Hubs Getting Started sample demonstrates the basic capabilities of Event Hubs, such as how to create an
Event Hub, send events to an Event Hub, and consume events using the Event Processor Host.
Scale out event processing
The Scale out event processing sample demonstrates how to use the Event Processor Host to distribute the
workload of Event Hubs stream consumption. It shows how to implement the EventProcessor and
EventProcessorFactory objects to manage the event stream.
Pull data from SQL into an Event Hub
The Pulling SQL data sample shows how to pull data from a SQL table and push it to an Event Hub, to use as an
input in downstream analytical applications.
Pull web data into an Event Hub
The Import data from the web sample shows how to pull data from public feeds (such as the Department of
Transportation's traffic information feed) and push it to an Event Hub.
Next steps
Learn more about .NET Framework versions by visiting the following links:
Frameworks and Targets
.NET Framework 4.6 and 4.5
You can learn more about Event Hubs in the following articles:
Event Hubs overview
Create an Event Hub
Event Hubs FAQ
