
Inside Windows NT High Resolution Timers

Copyright 1997 Mark Russinovich


last updated March 1, 1997
Timers can Cause Jitter In Quantum Tracking Code
Note: The information presented here is the result of my own study. No
source code was used.
Introduction
High resolution timers are desirable in a wide variety of
applications. For example, the most common use of such
timers in Windows is by multimedia applications that
produce sound or music and require precise timing
control. MIDI is a perfect example, because MIDI
sequencers must maintain the pace of MIDI events with
1-millisecond accuracy. This article describes how high
resolution timers are implemented in NT and documents
NtSetTimerResolution, the NT kernel function that
manipulates the system clock, and NtQueryTimerResolution,
the NT kernel function that returns information about the
system's timer capabilities. Unfortunately, neither
NtSetTimerResolution nor NtQueryTimerResolution is
exported by the NT kernel, so they are not available to
kernel-mode device drivers. What makes
NtSetTimerResolution interesting, however, is the fact
that it oversimplifies thread quantum tracking. As a
result, when certain timer values are passed to it,
quantums end up varying in length, which causes context
switches to occur at irregular intervals and can
adversely affect performance.
The Timer API
Windows NT bases all of its timer support on a single
system clock interrupt, which by default runs at a 10
millisecond granularity. This is therefore the resolution
of standard Windows timers. When a multimedia application
uses the timeBeginPeriod multimedia API, which is
exported by the Windows NT dynamic link library
WINMM.DLL, the call is redirected into the Windows NT
kernel-mode function NtSetTimerResolution, which is
exported by the native Windows NT library NTDLL.DLL.
NtSetTimerResolution is defined as follows:
NTSTATUS NtSetTimerResolution(
    IN ULONG RequestedResolution,
    IN BOOLEAN Set,
    OUT PULONG ActualResolution
);
Parameters:
RequestedResolution
The desired timer resolution. Must be specified in
hundreds of nanoseconds and be within the legal range
of system timer values supported by NT. On standard x86
systems this is 1-10 milliseconds. Values within the
acceptable range are rounded up to the next millisecond
boundary by the standard x86 HAL. This parameter is
ignored if the Set parameter is FALSE.
Set
This is TRUE if a new timer resolution is being
requested, and FALSE if the application no longer needs
a resolution it previously requested.
ActualResolution
The timer resolution in effect after the call is
returned in this parameter, in hundreds of nanoseconds.
Comments
NtSetTimerResolution returns STATUS_SUCCESS if the
resolution requested is within the valid range of timer
values. If Set is FALSE, the caller must have made a
previous call to NtSetTimerResolution or
STATUS_TIMER_RESOLUTION_NOT_SET is returned.
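The rounding that the RequestedResolution description mentions can be modeled in a few lines of C. This is only a sketch of the behavior described above, not NT's actual code: the names round_resolution and UNITS_PER_MS are invented, and the real HAL may treat edge values differently.

```c
#include <assert.h>

/* Resolutions are expressed in hundreds of nanoseconds: 1 ms == 10000 units. */
#define UNITS_PER_MS 10000UL

/* Hypothetical model: a requested value inside the 1-10 ms legal range is
 * rounded up to the next whole millisecond boundary. */
static unsigned long round_resolution(unsigned long requested)
{
    return ((requested + UNITS_PER_MS - 1) / UNITS_PER_MS) * UNITS_PER_MS;
}
```

For example, a request for 15000 units (1.5 ms) would come back as a 2 ms clock under this model.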
NtQueryTimerResolution is defined as follows:
NTSTATUS NtQueryTimerResolution(
    OUT PULONG LowestResolution,
    OUT PULONG HighestResolution,
    OUT PULONG CurrentResolution
);
Parameters
LowestResolution
This is the lowest resolution in hundreds of
nanoseconds the system supports for its timer. It is
also the interval that the scheduler's quantum tracking
code is invoked at. On x86 systems with a standard HAL
this is 0x18730 (approximately 10ms).
HighestResolution
This is the highest resolution in hundreds of
nanoseconds the system supports for its timer.
Interestingly enough, on x86 systems 0x2710 (1ms) is
hard-wired into the kernel for this number.
CurrentResolution
This is the resolution that the system clock is
currently set to. Note that the system timer is
manipulated by NT's Win16 emulation subsystem to affect
interrupt delivery latencies.
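Since all three values come back in hundreds of nanoseconds, a quick conversion confirms the hard-wired figure: 0x2710 is decimal 10000 units, i.e. exactly 1 millisecond. A trivial helper (the name is invented, not part of the API) makes the unit explicit:

```c
#include <assert.h>

/* Convert a resolution in hundreds of nanoseconds to whole milliseconds.
 * 10000 * 100 ns == 1 ms. */
static unsigned long hundred_ns_to_ms(unsigned long units)
{
    return units / 10000UL;
}
```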

Implementation Details
NtQueryTimerResolution returns STATUS_SUCCESS unless an
invalid pointer is passed from a user-mode caller.
NtSetTimerResolution can be called to set timer
resolutions by more than one application. To support a
subsequent process setting a timer resolution without
violating the resolution assumptions of a previous
caller, NtSetTimerResolution never lowers the timer's
resolution, only raises it. For example, if a process
sets the resolution to 5 milliseconds, subsequent calls
to set the resolution to between 5 and 10 milliseconds
will return a status code indicating success, but the
timer will be left at 5 milliseconds.
NtSetTimerResolution also keeps track of whether a
process has set the timer resolution in its process
control block, so that when a call is made with Set equal
to FALSE it can verify that the caller has previously
requested a new resolution. Every time a new resolution
is set a global counter is incremented, and every time it
is reset the counter is decremented. When the counter
becomes 0 on a reset call the timer is changed back to
its default rate, otherwise no action is taken. Again,
this preserves the timer resolution assumptions of all
the applications that have requested high resolution
timers by guaranteeing that the resolution will be at
least as good as what they specified.
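The bookkeeping just described can be sketched in portable C. Everything below is a model with invented names (process_t, set_timer_resolution, resolution_users); NT keeps the per-process flag in the process control block and returns NTSTATUS codes rather than the integers used here.

```c
#include <assert.h>
#include <stdbool.h>

#define DEFAULT_RESOLUTION 100000UL   /* 10 ms, in hundreds of nanoseconds */

static unsigned long current_resolution = DEFAULT_RESOLUTION;
static int resolution_users = 0;      /* global set/reset counter */

typedef struct {
    bool has_set_resolution;          /* per-process "has set" flag */
} process_t;

/* Returns 0 for success, -1 standing in for
 * STATUS_TIMER_RESOLUTION_NOT_SET. */
static int set_timer_resolution(process_t *proc,
                                unsigned long requested, bool set)
{
    if (!set) {
        if (!proc->has_set_resolution)
            return -1;                /* caller never requested a resolution */
        proc->has_set_resolution = false;
        if (--resolution_users == 0)
            current_resolution = DEFAULT_RESOLUTION;  /* restore default */
        return 0;
    }
    if (!proc->has_set_resolution) {
        proc->has_set_resolution = true;
        resolution_users++;
    }
    /* Never lower the resolution: a smaller interval is a finer clock. */
    if (requested < current_resolution)
        current_resolution = requested;
    return 0;
}
```

Running the 5 ms example through this model: a second process asking for 8 ms succeeds but leaves the clock at 5 ms, and the default rate returns only after every requester has reset.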
Context Switching
Threads are allowed to execute for a length of time known
as a quantum before NT may decide to let another thread
take over the CPU. When the system timer is ticking at
its default rate of 10 milliseconds, the NT scheduler
wakes up at each tick, adjusts the currently executing
thread's quantum counter by decrementing it a fixed
amount, and then sees if it has reached zero, which means
that its slice has expired and that another thread may be
scheduled.
When the timer is ticking at a faster rate as the result
of a call to NtSetTimerResolution, a global quantum
interval counter is maintained. It is initialized to 10
milliseconds, and on each tick of the clock it is
decremented by the clock resolution. When the quantum
interval counter drops below zero, the quantum adjusting
code is called. A value of 10 milliseconds is then added
to the counter and the decrementing continues. For
instance, if the clock is ticking every millisecond, the
quantum interval counter will be decremented by 1
millisecond until it reaches 0, and then the scheduler
will check to see if the quantum of the currently
executing thread has run out.
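The interval-counter scheme above is easy to model. The following sketch (invented names; the real kernel works in hundreds of nanoseconds, not milliseconds) reports how many milliseconds elapse before the quantum-adjusting code first runs:

```c
#include <assert.h>

/* Model of the global quantum interval counter: tick the clock every
 * resolution_ms and return the elapsed time, in ms, at which the
 * scheduler's quantum-adjusting code is first invoked. */
static int first_scheduler_interval(int resolution_ms)
{
    int counter = 10;                 /* initialized to 10 ms */
    int elapsed = 0;
    do {
        elapsed += resolution_ms;     /* one clock tick */
        counter -= resolution_ms;
    } while (counter > 0);            /* quantum code runs at zero or below */
    return elapsed;
}
```

For resolutions that divide 10 evenly the interval is exactly 10 ms; for 4 ms it is 12 ms, which is the divergence discussed next.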
NT's ad-hoc method of determining when the quantum
adjusting code should be executed works well when the
system timer is set to a rate that divides evenly into
10 milliseconds, like 1, 2, or 5 milliseconds. When the
rate does not divide evenly, the interval at which the
quantum adjusting code is called will diverge from 10
milliseconds. Let's say that the
clock's resolution is set to 4 milliseconds. On the first
tick the quantum interval counter is decremented to 6, on
the second to 2, and it is only on the third tick, after
12 milliseconds, that the scheduler is invoked. In this
case the quantum is 20% longer than it should be. Then 10
milliseconds is added to the interval counter's value of
-2 milliseconds so it becomes 8 milliseconds. The next
tick of the clock the counter is decremented to 4 and on
the following tick to 0, where the scheduler is invoked
again. As this example demonstrates, the quantum ends up
jittering between 8 and 12 milliseconds.
This situation exists for timer values of 3, 4, 6, 7, 8,
and 9 milliseconds, where the jitter is between 9 and 12
milliseconds, 8 and 12 milliseconds, 6 and 12
milliseconds, 7 and 14 milliseconds, 8 and 16
milliseconds, and 9 and 18 milliseconds, respectively.
The rate at which the jitter occurs also varies between
timer resolutions, and while the scheduler invocation
interval averages out to 10 milliseconds over long
periods, the jitter can cause specific threads to be
unfairly scheduled.
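Extending the same counter model over many ticks reproduces the jitter ranges listed above. Again, this is a sketch with invented names, assuming the counter is reloaded with 10 milliseconds exactly as described:

```c
#include <assert.h>

/* Run `ticks` clock ticks at resolution_ms and record the shortest and
 * longest spacing, in ms, between scheduler invocations. */
static void jitter_range(int resolution_ms, int ticks,
                         int *min_ms, int *max_ms)
{
    int counter = 10;                 /* quantum interval counter, in ms */
    int elapsed = 0;
    *min_ms = 1000000;
    *max_ms = 0;
    for (int t = 0; t < ticks; t++) {
        elapsed += resolution_ms;
        counter -= resolution_ms;
        if (counter <= 0) {           /* quantum-adjusting code runs */
            if (elapsed < *min_ms) *min_ms = elapsed;
            if (elapsed > *max_ms) *max_ms = elapsed;
            counter += 10;            /* reload toward the 10 ms target */
            elapsed = 0;
        }
    }
}
```

Tracing it by hand for a 4 ms clock gives the alternating 12 ms and 8 ms intervals of the example above, while 1, 2, and 5 ms clocks yield a steady 10 ms.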
Comments
Since Microsoft claims that extensive performance
evaluation has led to the hard-wired quantum values
Windows NT implements, the ability of user-mode programs
to inadvertently alter them is an undesirable side-effect
of the timer API.

Why aren't NtSetTimerResolution, ZwSetTimerResolution,
NtQueryTimerResolution or ZwQueryTimerResolution exported
for use by kernel-mode device drivers? This is one of the
mysteries of the thinking at Redmond. One can only guess
that they have decided that kernel-mode drivers have no
need for high resolution timers.
