
ORTHOGONAL POLYNOMIALS OF SEVERAL VARIABLES

Serving both as an introduction to the subject and as a reference, this book
presents the theory in elegant form and with modern concepts and notation. It
covers the general theory and emphasizes the classical types of orthogonal
polynomials whose weight functions are supported on standard domains. The
approach is a blend of classical analysis and symmetry-group-theoretic methods.
Finite reflection groups are used to motivate and classify the symmetries of
weight functions and the associated polynomials.
This revised edition has been updated throughout to reflect recent
developments in the field. It contains 25 percent new material including two
brand new chapters, on orthogonal polynomials in two variables, which will be
especially useful for applications, and on orthogonal polynomials on the unit
sphere. The most modern and complete treatment of the subject available, it will
be useful to a wide audience of mathematicians and applied scientists, including
physicists, chemists and engineers.
Encyclopedia of Mathematics and Its Applications
This series is devoted to significant topics or themes that have wide application
in mathematics or mathematical science and for which a detailed development of
the abstract theory is less important than a thorough and concrete exploration of
the implications and applications.
Books in the Encyclopedia of Mathematics and Its Applications cover their
subjects comprehensively. Less important results may be summarized as
exercises at the ends of chapters. For technicalities, readers can be referred to the
bibliography, which is expected to be comprehensive. As a result, volumes are
encyclopedic references or manageable guides to major subjects.
ENCYCLOPEDIA OF MATHEMATICS AND ITS APPLICATIONS
All the titles listed below can be obtained from good booksellers or from Cambridge
University Press. For a complete series listing visit
www.cambridge.org/mathematics.
106 A. Markoe Analytic Tomography
107 P. A. Martin Multiple Scattering
108 R. A. Brualdi Combinatorial Matrix Classes
109 J. M. Borwein and J. D. Vanderwerff Convex Functions
110 M.-J. Lai and L. L. Schumaker Spline Functions on Triangulations
111 R. T. Curtis Symmetric Generation of Groups
112 H. Salzmann et al. The Classical Fields
113 S. Peszat and J. Zabczyk Stochastic Partial Differential Equations with Lévy Noise
114 J. Beck Combinatorial Games
115 L. Barreira and Y. Pesin Nonuniform Hyperbolicity
116 D. Z. Arov and H. Dym J-Contractive Matrix Valued Functions and Related Topics
117 R. Glowinski, J.-L. Lions and J. He Exact and Approximate Controllability for Distributed Parameter
Systems
118 A. A. Borovkov and K. A. Borovkov Asymptotic Analysis of Random Walks
119 M. Deza and M. Dutour Sikirić Geometry of Chemical Graphs
120 T. Nishiura Absolute Measurable Spaces
121 M. Prest Purity, Spectra and Localisation
122 S. Khrushchev Orthogonal Polynomials and Continued Fractions
123 H. Nagamochi and T. Ibaraki Algorithmic Aspects of Graph Connectivity
124 F. W. King Hilbert Transforms I
125 F. W. King Hilbert Transforms II
126 O. Calin and D.-C. Chang Sub-Riemannian Geometry
127 M. Grabisch et al. Aggregation Functions
128 L.W. Beineke and R. J. Wilson (eds.) with J. L. Gross and T. W. Tucker Topics in Topological Graph
Theory
129 J. Berstel, D. Perrin and C. Reutenauer Codes and Automata
130 T. G. Faticoni Modules over Endomorphism Rings
131 H. Morimoto Stochastic Control and Mathematical Modeling
132 G. Schmidt Relational Mathematics
133 P. Kornerup and D. W. Matula Finite Precision Number Systems and Arithmetic
134 Y. Crama and P. L. Hammer (eds.) Boolean Models and Methods in Mathematics, Computer Science,
and Engineering
135 V. Berthé and M. Rigo (eds.) Combinatorics, Automata and Number Theory
136 A. Kristály, V. D. Rădulescu and C. Varga Variational Principles in Mathematical Physics, Geometry,
and Economics
137 J. Berstel and C. Reutenauer Noncommutative Rational Series with Applications
138 B. Courcelle and J. Engelfriet Graph Structure and Monadic Second-Order Logic
139 M. Fiedler Matrices and Graphs in Geometry
140 N. Vakil Real Analysis through Modern Infinitesimals
141 R. B. Paris Hadamard Expansions and Hyperasymptotic Evaluation
142 Y. Crama and P. L. Hammer Boolean Functions
143 A. Arapostathis, V. S. Borkar and M. K. Ghosh Ergodic Control of Diffusion Processes
144 N. Caspard, B. Leclerc and B. Monjardet Finite Ordered Sets
145 D. Z. Arov and H. Dym Bitangential Direct and Inverse Problems for Systems of Integral and
Differential Equations
146 G. Dassios Ellipsoidal Harmonics
147 L. W. Beineke and R. J. Wilson (eds.) with O. R. Oellermann Topics in Structural Graph Theory
148 L. Berlyand, A. G. Kolpakov and A. Novikov Introduction to the Network Approximation Method for
Materials Modeling
149 M. Baake and U. Grimm Aperiodic Order I: A Mathematical Invitation
150 J. Borwein et al. Lattice Sums Then and Now
151 R. Schneider Convex Bodies: The Brunn–Minkowski Theory (Second Edition)
152 G. Da Prato and J. Zabczyk Stochastic Equations in Infinite Dimensions (Second Edition)
153 D. Hofmann, G. J. Seal and W. Tholen (eds.) Monoidal Topology
154 M. Cabrera García and A. Rodríguez Palacios Non-Associative Normed Algebras I: The
Vidav–Palmer and Gelfand–Naimark Theorems
155 C. F. Dunkl and Y. Xu Orthogonal Polynomials of Several Variables (Second Edition)
ENCYCLOPEDIA OF MATHEMATICS AND ITS APPLICATIONS
Orthogonal Polynomials
of Several Variables
Second Edition
CHARLES F. DUNKL
University of Virginia
YUAN XU
University of Oregon
University Printing House, Cambridge CB2 8BS, United Kingdom
Cambridge University Press is part of the University of Cambridge.
It furthers the University's mission by disseminating knowledge in the pursuit of
education, learning and research at the highest international levels of excellence.
www.cambridge.org
Information on this title: www.cambridge.org/9781107071896
First edition © Cambridge University Press 2001
Second edition © Charles F. Dunkl and Yuan Xu 2014
This publication is in copyright. Subject to statutory exception
and to the provisions of relevant collective licensing agreements,
no reproduction of any part may take place without the written
permission of Cambridge University Press.
First published 2001
Second edition 2014
Printed in the United Kingdom by CPI Group Ltd, Croydon CR0 4YY
A catalogue record for this publication is available from the British Library
Library of Congress Cataloguing in Publication data
Dunkl, Charles F., 1941–
Orthogonal polynomials of several variables / Charles F. Dunkl,
University of Virginia, Yuan Xu, University of Oregon. Second edition.
pages cm. (Encyclopedia of mathematics and its applications; 155)
Includes bibliographical references and indexes.
ISBN 978-1-107-07189-6
1. Orthogonal polynomials. 2. Functions of several real variables.
I. Xu, Yuan, 1957– II. Title.
QA404.5.D86 2014
515'.55–dc23
2014001846
ISBN 978-1-107-07189-6 Hardback
Cambridge University Press has no responsibility for the persistence or accuracy of
URLs for external or third-party internet websites referred to in this publication,
and does not guarantee that any content on such websites is, or will remain,
accurate or appropriate.
To our wives
Philomena and Litian
with deep appreciation
Contents
Preface to the Second Edition page xiii
Preface to the First Edition xv
1 Background 1
1.1 The Gamma and Beta Functions 1
1.2 Hypergeometric Series 3
1.2.1 Lauricella series 5
1.3 Orthogonal Polynomials of One Variable 6
1.3.1 General properties 6
1.3.2 Three-term recurrence 9
1.4 Classical Orthogonal Polynomials 13
1.4.1 Hermite polynomials 13
1.4.2 Laguerre polynomials 14
1.4.3 Gegenbauer polynomials 16
1.4.4 Jacobi polynomials 20
1.5 Modified Classical Polynomials 22
1.5.1 Generalized Hermite polynomials 24
1.5.2 Generalized Gegenbauer polynomials 25
1.5.3 A limiting relation 27
1.6 Notes 27
2 Orthogonal Polynomials in Two Variables 28
2.1 Introduction 28
2.2 Product Orthogonal Polynomials 29
2.3 Orthogonal Polynomials on the Unit Disk 30
2.4 Orthogonal Polynomials on the Triangle 35
2.5 Orthogonal Polynomials and Differential Equations 37
2.6 Generating Orthogonal Polynomials of Two Variables 38
2.6.1 A method for generating orthogonal polynomials 38
2.6.2 Orthogonal polynomials for a radial weight 40
2.6.3 Orthogonal polynomials in complex variables 41
2.7 First Family of Koornwinder Polynomials 45
2.8 A Related Family of Orthogonal Polynomials 48
2.9 Second Family of Koornwinder Polynomials 50
2.10 Notes 54
3 General Properties of Orthogonal Polynomials in Several Variables 57
3.1 Notation and Preliminaries 58
3.2 Moment Functionals and Orthogonal Polynomials
in Several Variables 60
3.2.1 Definition of orthogonal polynomials 60
3.2.2 Orthogonal polynomials and moment matrices 64
3.2.3 The moment problem 67
3.3 The Three-Term Relation 70
3.3.1 Definition and basic properties 70
3.3.2 Favard's theorem 73
3.3.3 Centrally symmetric integrals 76
3.3.4 Examples 79
3.4 Jacobi Matrices and Commuting Operators 82
3.5 Further Properties of the Three-Term Relation 87
3.5.1 Recurrence formula 87
3.5.2 General solutions of the three-term relation 94
3.6 Reproducing Kernels and Fourier Orthogonal Series 96
3.6.1 Reproducing kernels 97
3.6.2 Fourier orthogonal series 101
3.7 Common Zeros of Orthogonal Polynomials
in Several Variables 103
3.8 Gaussian Cubature Formulae 107
3.9 Notes 112
4 Orthogonal Polynomials on the Unit Sphere 114
4.1 Spherical Harmonics 114
4.2 Orthogonal Structures on S^d and on B^d 119
4.3 Orthogonal Structures on B^d and on S^{d+m-1} 125
4.4 Orthogonal Structures on the Simplex 129
4.5 Van der Corput–Schaake Inequality 133
4.6 Notes 136
5 Examples of Orthogonal Polynomials in Several Variables 137
5.1 Orthogonal Polynomials for Simple Weight Functions 137
5.1.1 Product weight functions 138
5.1.2 Rotation-invariant weight functions 138
5.1.3 Multiple Hermite polynomials on R^d 139
5.1.4 Multiple Laguerre polynomials on R^d_+ 141
5.2 Classical Orthogonal Polynomials on the Unit Ball 141
5.2.1 Orthonormal bases 142
5.2.2 Appell's monic orthogonal and biorthogonal polynomials 143
5.2.3 Reproducing kernel with respect to W_μ^B on B^d 148
5.3 Classical Orthogonal Polynomials on the Simplex 150
5.4 Orthogonal Polynomials via Symmetric Functions 154
5.4.1 Two general families of orthogonal polynomials 154
5.4.2 Common zeros and Gaussian cubature formulae 156
5.5 Chebyshev Polynomials of Type A_d 159
5.6 Sobolev Orthogonal Polynomials on the Unit Ball 165
5.6.1 Sobolev orthogonal polynomials defined via the gradient operator 165
5.6.2 Sobolev orthogonal polynomials defined via the Laplacian operator 168
5.7 Notes 171
6 Root Systems and Coxeter Groups 174
6.1 Introduction and Overview 174
6.2 Root Systems 176
6.2.1 Type A_{d-1} 179
6.2.2 Type B_d 179
6.2.3 Type I_2(m) 180
6.2.4 Type D_d 181
6.2.5 Type H_3 181
6.2.6 Type F_4 182
6.2.7 Other types 182
6.2.8 Miscellaneous results 182
6.3 Invariant Polynomials 183
6.3.1 Type A_{d-1} invariants 185
6.3.2 Type B_d invariants 186
6.3.3 Type D_d invariants 186
6.3.4 Type I_2(m) invariants 186
6.3.5 Type H_3 invariants 186
6.3.6 Type F_4 invariants 187
6.4 Differential–Difference Operators 187
6.5 The Intertwining Operator 192
6.6 The κ-Analogue of the Exponential 200
6.7 Invariant Differential Operators 202
6.8 Notes 207
7 Spherical Harmonics Associated with Reflection Groups 208
7.1 h-Harmonic Polynomials 208
7.2 Inner Products on Polynomials 217
7.3 Reproducing Kernels and the Poisson Kernel 221
7.4 Integration of the Intertwining Operator 224
7.5 Example: Abelian Group Z_2^d 228
7.5.1 Orthogonal basis for h-harmonics 228
7.5.2 Intertwining and projection operators 232
7.5.3 Monic orthogonal basis 235
7.6 Example: Dihedral Groups 240
7.6.1 An orthonormal basis of H_n(h_κ^2, dω) 241
7.6.2 Cauchy and Poisson kernels 248
7.7 The Dunkl Transform 250
7.8 Notes 256
8 Generalized Classical Orthogonal Polynomials 258
8.1 Generalized Classical Orthogonal Polynomials
on the Ball 258
8.1.1 Definition and differential–difference equations 258
8.1.2 Orthogonal basis and reproducing kernel 263
8.1.3 Orthogonal polynomials for Z_2^d-invariant weight functions 266
8.1.4 Reproducing kernel for Z_2^d-invariant weight functions 268
8.2 Generalized Classical Orthogonal Polynomials on the Simplex 271
8.2.1 Weight function and differential–difference equation 271
8.2.2 Orthogonal basis and reproducing kernel 273
8.2.3 Monic orthogonal polynomials 276
8.3 Generalized Hermite Polynomials 278
8.4 Generalized Laguerre Polynomials 283
8.5 Notes 287
9 Summability of Orthogonal Expansions 289
9.1 General Results on Orthogonal Expansions 289
9.1.1 Uniform convergence of partial sums 289
9.1.2 Cesàro means of the orthogonal expansion 293
9.2 Orthogonal Expansion on the Sphere 296
9.3 Orthogonal Expansion on the Ball 299
9.4 Orthogonal Expansion on the Simplex 304
9.5 Orthogonal Expansion of Laguerre and Hermite
Polynomials 306
9.6 Multiple Jacobi Expansion 311
9.7 Notes 315
10 Orthogonal Polynomials Associated with Symmetric Groups 318
10.1 Partitions, Compositions and Orderings 318
10.2 Commuting Self-Adjoint Operators 320
10.3 The Dual Polynomial Basis 322
10.4 S
d
-Invariant Subspaces 329
10.5 Degree-Changing Recurrences 334
10.6 Norm Formulae 337
10.6.1 Hook-length products and the pairing norm 337
10.6.2 The biorthogonal-type norm 341
10.6.3 The torus inner product 343
10.6.4 Monic polynomials 346
10.6.5 Normalizing constants 346
10.7 Symmetric Functions and Jack Polynomials 350
10.8 Miscellaneous Topics 357
10.9 Notes 362
11 Orthogonal Polynomials Associated with Octahedral Groups,
and Applications 364
11.1 Introduction 364
11.2 Operators of Type B 365
11.3 Polynomial Eigenfunctions of Type B 368
11.4 Generalized Binomial Coefficients 376
11.5 Hermite Polynomials of Type B 383
11.6 Calogero–Sutherland Systems 385
11.6.1 The simple harmonic oscillator 386
11.6.2 Root systems and the Laplacian 387
11.6.3 Type A models on the line 387
11.6.4 Type A models on the circle 389
11.6.5 Type B models on the line 392
11.7 Notes 394
References 396
Author Index 413
Symbol Index 416
Subject Index 418
Preface to the Second Edition
In this second edition, several major changes have been made to the structure of
the book. A new chapter on orthogonal polynomials in two variables has been
added to provide a more convenient source of information for readers concerned
with this topic. The chapter collects results previously scattered in the book, spe-
cializing results in several variables to two variables whenever necessary, and
incorporates further results not covered in the rst edition. We have also added
a new chapter on orthogonal polynomials on the unit sphere, which consolidates
relevant results in the first edition and adds further results on the topic. Since
the publication of the first edition in 2001, considerable progress has been made
in this research area. We have incorporated several new developments, updated
the references and, accordingly, edited the notes at the ends of relevant chapters.
In particular, Chapter 5, Examples of Orthogonal Polynomials in Several Vari-
ables, has been completely rewritten and substantially expanded. New materials
have also been added to several other chapters. An index of symbols is given at
the end of the book.
Another change worth mentioning is that orthogonal polynomials have been
renormalized. Some families of orthogonal polynomials in several variables have
expressions in terms of classical orthogonal polynomials in one variable. To pro-
vide neater expressions without constants in square roots they are now given in
the form of orthogonal rather than orthonormal polynomials as in the rst edition.
The L^2 norms have been recomputed accordingly.
The second author gratefully acknowledges support from the National Science
Foundation under grant DMS-1106113.
Charles F. Dunkl
Yuan Xu
Preface to the First Edition
The study of orthogonal polynomials of several variables goes back at least as
far as Hermite. There have been only a few books on the subject since: Appell
and de Fériet [1926] and Erdélyi et al. [1953]. Twenty-five years have gone by
since Koornwinder's survey article [1975]. A number of individuals who need
techniques from this topic have approached us and suggested (even asked) that
we write a book accessible to a general mathematical audience.
It is our goal to present the developments of very recent research to a readership
trained in classical analysis. We include applied mathematicians and physicists,
and even chemists and mathematical biologists, in this category.
While there is some material about the general theory, the emphasis is on clas-
sical types, by which we mean families of polynomials whose weight functions
are supported on standard domains such as the simplex and the ball, or Gaus-
sian types, which satisfy differential–difference equations and for which fairly
explicit formulae exist. The term difference refers to operators associated with
reflections in hyperplanes. The most desirable situation occurs when there is a
set of commuting self-adjoint operators whose simultaneous eigenfunctions form
an orthogonal basis of polynomials. As will be seen, this is still an open area of
research for some families.
With the intention of making this book useful to a wide audience, for both ref-
erence and instruction, we use familiar and standard notation for the analysis on
Euclidean space and assume a basic knowledge of Fourier and functional analy-
sis, matrix theory and elementary group theory. We have been influenced by the
important books of Bailey [1935], Szegő [1975] and Lebedev [1972] in style and
taste.
Here is an overview of the contents. Chapter 1 is a summary of the key one-
variable methods and definitions: gamma and beta functions, the classical and
related orthogonal polynomials and their structure constants, and hypergeometric
and Lauricella series. The multivariable analysis begins in Chapter 2 with some
examples of orthogonal polynomials and spherical harmonics and specic two-
variable examples such as Jacobi polynomials on various domains and disk
polynomials. There is a discussion of the moment problem, general properties
of orthogonal polynomials of several variables and matrix three-term recurrences
in Chapter 3. Coxeter groups are treated systematically in a self-contained way,
in a style suitable for the analyst, in Chapter 4 (a knowledge of representation
theory is not necessary). The chapter goes on to introduce differential–difference
operators, the intertwining operator and the analogue of the exponential function
and concludes with the construction of invariant differential operators. Chapter 5
is a presentation of h-harmonics, the analogue of harmonic homogeneous poly-
nomials associated with reflection groups; there are some examples of specific
reection groups as well as an application to proving the isometric properties
of the generalized Fourier transform. This transform uses an analogue of the
exponential function. It contains the classical Hankel transform as a special case.
Chapter 6 is a detailed treatment of orthogonal polynomials on the simplex, the
ball and of Hermite type. Then, summability theorems for expansions in terms of
these polynomials are presented in Chapter 7; the main method is Cesàro (C, δ)
summation, and there are precise results on which values of δ give positive or
bounded linear operators. Nonsymmetric Jack polynomials appear in Chapter
8; this chapter contains all necessary details for their derivation, formulae for
norms, hook-length products and computations of the structure constants. There
is a proof of the Macdonald–Mehta–Selberg integral formula. Finally, Chapter 9
shows how to use the nonsymmetric Jack polynomials to produce bases associ-
ated with the octahedral groups. This chapter has a short discussion of how these
polynomials and related operators are used to solve the Schrödinger equations
of Calogero–Sutherland systems; these are exactly solvable models of quantum
mechanics involving identical particles in a one-dimensional space. Both Chap-
ters 8 and 9 discuss orthogonal polynomials on the torus and of Hermite type.
The bibliography is intended to be reasonably comprehensive into the near
past; the reader is referred to Erdélyi et al. [1953] for older papers, and Inter-
net databases for the newest articles. There are occasions in the book where we
suggest some algorithms for possible symbolic algebra use; the reader is encour-
aged to implement them in his/her favorite computer algebra system but again the
reader is referred to the Internet for specic published software.
There are several areas of related current research that we have deliber-
ately avoided: the role of special functions in the representation theory of
Lie groups (see Dieudonné [1980], Hua [1963], Vilenkin [1968], Vilenkin
and Klimyk [1991a, b, c, 1995]), basic hypergeometric series and orthogo-
nal polynomials of q-type (see Gasper and Rahman [1990], Andrews, Askey
and Roy [1999]), quantum groups (Koornwinder [1992], Noumi [1996],
Koelink [1996] and Stokman [1997]), Macdonald symmetric polynomials (a gen-
eralization of the q-type) (see Macdonald [1995, 1998]). These topics touch on
algebra, combinatorics and analysis; and some classical results can be obtained as
limiting cases as q → 1. Nevertheless, the material in this book can stand alone
and q is not needed in the proofs.
We gratefully acknowledge support from the National Science Foundation over
the years for our original research, some of which is described in this book. Also
we are grateful to the mathematics departments of the University of Oregon for
granting sabbatical leave and the University of Virginia for inviting Y. X. to visit
for a year, which provided the opportunity for this collaboration.
Charles F. Dunkl
Yuan Xu
1
Background
The theory of orthogonal polynomials of several variables, especially those of
classical type, uses a significant amount of analysis in one variable. In this chapter
we give concise descriptions of the needed tools.
1.1 The Gamma and Beta Functions
It is our opinion that the most interesting and amenable objects of consideration
have expressions which are rational functions of the underlying parameters. This
leads us immediately to a discussion of the gamma function and its relatives.
Definition 1.1.1 The gamma function is defined for Re x > 0 by the integral
\[ \Gamma(x) = \int_0^\infty t^{x-1} e^{-t} \, dt. \]
It is directly related to the beta function:
Definition 1.1.2 The beta function is defined for Re x > 0 and Re y > 0 by
\[ B(x, y) = \int_0^1 t^{x-1} (1-t)^{y-1} \, dt. \]
By making the change of variables s = uv and t = (1-u)v in the integral
\[ \Gamma(x)\Gamma(y) = \int_0^\infty \int_0^\infty s^{x-1} t^{y-1} e^{-(s+t)} \, ds \, dt, \]
one obtains
\[ \Gamma(x)\Gamma(y) = \Gamma(x+y) \, B(x, y). \]
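As an illustrative numerical check (not part of the book), the relation \(\Gamma(x)\Gamma(y) = \Gamma(x+y)B(x,y)\) can be verified in Python; `beta_quad` is our own helper approximating the beta integral with a midpoint rule.

```python
import math

def beta_quad(x, y, n=20000):
    # Midpoint-rule approximation of B(x, y) = ∫_0^1 t^{x-1} (1-t)^{y-1} dt
    h = 1.0 / n
    return h * sum(((i + 0.5) * h) ** (x - 1) * (1 - (i + 0.5) * h) ** (y - 1)
                   for i in range(n))

# Γ(x)Γ(y) = Γ(x+y) B(x, y), checked for a few parameter values
for x, y in [(2.5, 3.0), (4.0, 1.5), (1.0, 2.0)]:
    rhs = math.gamma(x) * math.gamma(y) / math.gamma(x + y)
    assert abs(beta_quad(x, y) - rhs) < 1e-5

# Γ(1/2) = √π, a consequence derived below
assert abs(math.gamma(0.5) - math.sqrt(math.pi)) < 1e-12
```

The crude quadrature is enough here because the integrands are bounded for the chosen exponents.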
This leads to several useful definite integrals, valid for Re x > 0 and Re y > 0:

1. \[ \int_0^{\pi/2} \sin^{x-1}\theta \, \cos^{y-1}\theta \, d\theta = \frac{1}{2} B\Big(\frac{x}{2}, \frac{y}{2}\Big) = \frac{1}{2} \frac{\Gamma\big(\frac{x}{2}\big)\Gamma\big(\frac{y}{2}\big)}{\Gamma\big(\frac{x+y}{2}\big)}; \]
2. \( \Gamma\big(\tfrac{1}{2}\big) = \sqrt{\pi} \) (set x = y = 1 in the previous integral);
3. \[ \int_0^\infty t^{x-1} \exp(-a t^2) \, dt = \frac{1}{2} a^{-x/2} \, \Gamma\Big(\frac{x}{2}\Big), \quad \text{for } a > 0; \]
4. \[ \int_0^1 t^{x-1} (1-t^2)^{y-1} \, dt = \frac{1}{2} B\Big(\frac{x}{2}, y\Big) = \frac{1}{2} \frac{\Gamma\big(\frac{x}{2}\big)\Gamma(y)}{\Gamma\big(\frac{x}{2}+y\big)}; \]
5. \( \Gamma(x)\Gamma(1-x) = B(x, 1-x) = \dfrac{\pi}{\sin \pi x}. \)
The last equation can be proven by restricting x to 0 < x < 1, making the substitution
s = t/(1-t) in the beta integral \( \int_0^1 [t/(1-t)]^{x-1} (1-t)^{-1} \, dt \) and computing
the resulting integral by residues. Of course, one of the fundamental properties
of the gamma function is the recurrence formula (obtained from integration by
parts)
\[ \Gamma(x+1) = x\,\Gamma(x), \]
which leads to the fact that \(\Gamma\) can be analytically continued to a meromorphic
function on the complex plane; also, \(1/\Gamma\) is entire, with (simple) zeros exactly at
\(0, -1, -2, \ldots\). Note that \(\Gamma\) interpolates the factorial; indeed, \(\Gamma(n+1) = n!\) for
\(n = 0, 1, 2, \ldots\)
Definition 1.1.3 The Pochhammer symbol, also called the shifted factorial, is
defined for all x by
\[ (x)_0 = 1, \qquad (x)_n = \prod_{i=1}^{n} (x+i-1) \quad \text{for } n = 1, 2, 3, \ldots \]
Alternatively, one can recursively define \((x)_n\) by
\[ (x)_0 = 1 \quad \text{and} \quad (x)_{n+1} = (x)_n (x+n) \quad \text{for } n = 0, 1, 2, \ldots \]
Here are some important consequences of Definition 1.1.3:

1. \( (x)_{m+n} = (x)_m (x+m)_n \) for \( m, n \in \mathbb{N}_0 \);
2. \( (x)_n = (-1)^n (1-n-x)_n \) (writing the product in reverse order);
3. \( (x)_{n-i} = (x)_n (-1)^i / (1-n-x)_i \).
The Pochhammer symbol incorporates binomial-coefficient and factorial
notation:

1. \( (1)_n = n!, \quad 2^n \big(\tfrac{1}{2}\big)_n = 1 \cdot 3 \cdot 5 \cdots (2n-1) \);
2. \( (n+m)! = n! \, (n+1)_m \);
3. \( \binom{n}{i} = (-1)^i \dfrac{(-n)_i}{i!} \), where \( \binom{n}{i} \) is the binomial coefficient;
4. \( (x)_{2n} = 2^{2n} \big(\tfrac{x}{2}\big)_n \big(\tfrac{x+1}{2}\big)_n \), the duplication formula.
The last property corresponds to the duplication formula for the gamma function:
\[ \Gamma(2x) = \frac{2^{2x-1}}{\sqrt{\pi}} \, \Gamma\big(x+\tfrac{1}{2}\big) \Gamma(x). \]
For appropriate values of x and n, the formula \( \Gamma(x+n)/\Gamma(x) = (x)_n \) holds, and
this can be used to extend the definition of the Pochhammer symbol to values of
\( n \notin \mathbb{N}_0 \).
1.2 Hypergeometric Series
The two most common types of hypergeometric series are (they are convergent
for |x| < 1)
\[ {}_1F_0(a; x) = \sum_{n=0}^{\infty} \frac{(a)_n}{n!} x^n, \qquad {}_2F_1(a, b; c; x) = \sum_{n=0}^{\infty} \frac{(a)_n (b)_n}{(c)_n \, n!} x^n, \]
where a and b are the numerator parameters and c is the denominator parameter.
Later we will also use \({}_3F_2\) series (with a corresponding definition). The
\({}_2F_1\) series is the unique solution analytic at x = 0 and satisfying f(0) = 1 of
\[ x(1-x) \frac{d^2}{dx^2} f(x) + [c - (a+b+1)x] \frac{d}{dx} f(x) - ab \, f(x) = 0. \]
Generally, classical orthogonal polynomials can be expressed as hypergeometric
polynomials, which are terminating hypergeometric series for which a
numerator parameter has a value in \(-\mathbb{N}_0\). The two series can be represented
in closed form. Obviously \({}_1F_0(a; x) = (1-x)^{-a}\); this is the branch analytic in
\(\{x \in \mathbb{C} : |x| < 1\}\) which has the value 1 at x = 0. The Gauss integral formula for
\({}_2F_1\) is as follows.
Proposition 1.2.1 For Re(c-b) > 0, Re b > 0 and |x| < 1,
\[ {}_2F_1(a, b; c; x) = \frac{\Gamma(c)}{\Gamma(b)\Gamma(c-b)} \int_0^1 t^{b-1} (1-t)^{c-b-1} (1-xt)^{-a} \, dt. \]

Proof Use the \({}_1F_0\) series in the integral and integrate term by term to obtain a
multiple of
\[ \sum_{n=0}^{\infty} \frac{(a)_n}{n!} x^n \int_0^1 t^{b+n-1} (1-t)^{c-b-1} \, dt = \sum_{n=0}^{\infty} \frac{(a)_n \, \Gamma(b+n)\Gamma(c-b)}{n! \, \Gamma(c+n)} x^n, \]
from which the stated formula follows.
Corollary 1.2.2 For Re c > Re(a+b) and Re b > 0 the Gauss summation
formula is
\[ {}_2F_1(a, b; c; 1) = \frac{\Gamma(c)\Gamma(c-b-a)}{\Gamma(c-a)\Gamma(c-b)}. \]
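A quick numerical sanity check of the Gauss summation formula, not from the book: summing the \({}_2F_1\) series at x = 1 term by term and comparing with the gamma-function quotient. The helper name is our own choice, and the partial sum is only adequate when c − a − b is comfortably positive.

```python
import math

def hyp2f1_at_1(a, b, c, nterms=50000):
    # Partial sum of 2F1(a, b; c; 1); the series converges when Re(c-a-b) > 0.
    # Terms are built iteratively to avoid overflow of Pochhammer products.
    s, term = 1.0, 1.0
    for n in range(nterms):
        term *= (a + n) * (b + n) / ((c + n) * (n + 1))
        s += term
    return s

a, b, c = 0.3, 0.7, 2.5   # c - a - b = 1.5 > 0, so the series converges
gauss = math.gamma(c) * math.gamma(c - b - a) / (math.gamma(c - a) * math.gamma(c - b))
assert abs(hyp2f1_at_1(a, b, c) - gauss) < 1e-6
```

The terms decay only like \(n^{-1-(c-a-b)}\), so the tolerance reflects the truncation, not the identity.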
The terminating case of this formula, known as the Chu–Vandermonde sum, is
valid for a more general range of parameters.

Proposition 1.2.3 For \( n \in \mathbb{N}_0 \), any a, b, and \( c \notin \{0, -1, \ldots, -n+1\} \) the following
hold:
\[ \sum_{i=0}^{n} \frac{(a)_{n-i} (b)_i}{(n-i)! \, i!} = \frac{(a+b)_n}{n!} \quad\text{and}\quad {}_2F_1(-n, b; c; 1) = \frac{(c-b)_n}{(c)_n}. \]
Proof The first formula is deduced from the coefficient of \(x^n\) in the expansion
\( (1-x)^{-a} (1-x)^{-b} = (1-x)^{-(a+b)} \). The left-hand side can be written as
\[ \frac{(a)_n}{n!} \sum_{i=0}^{n} \frac{(-n)_i (b)_i}{(1-n-a)_i \, i!}. \]
Now let \( a = 1-n-c \); simple computations involving reversals such as
\( (1-n-c)_n = (-1)^n (c)_n \) finish the proof.
The following transformation often occurs:

Proposition 1.2.4 For |x| < 1,
\[ {}_2F_1(a, b; c; x) = (1-x)^{-a} \, {}_2F_1\Big(a, c-b; c; \frac{x}{x-1}\Big). \]

Proof Temporarily assume that Re c > Re b > 0; then from the Gauss integral
we have
\[ {}_2F_1(a, b; c; x) = \frac{\Gamma(c)}{\Gamma(b)\Gamma(c-b)} \int_0^1 t^{b-1} (1-t)^{c-b-1} (1-xt)^{-a} \, dt \]
\[ = \frac{\Gamma(c)}{\Gamma(b)\Gamma(c-b)} \int_0^1 s^{c-b-1} (1-s)^{b-1} (1-x)^{-a} \Big(1 - \frac{xs}{x-1}\Big)^{-a} \, ds, \]
where one makes the change of variable t = 1-s. The formula follows from
another application of the Gauss integral. Analytic continuation in the parameters
extends the validity to all values of a, b, c excluding \( c \in -\mathbb{N}_0 \). For this purpose we
tacitly consider the modified series
\[ \sum_{n=0}^{\infty} \frac{(a)_n (b)_n}{\Gamma(c+n) \, n!} x^n, \]
which is an entire function in a, b, c.
Corollary 1.2.5 For |x| < 1,
\[ {}_2F_1(a, b; c; x) = (1-x)^{c-a-b} \, {}_2F_1(c-a, c-b; c; x). \]

Proof Using Proposition 1.2.4 twice,
\[ {}_2F_1(a, b; c; x) = (1-x)^{-a} \, {}_2F_1\Big(a, c-b; c; \frac{x}{x-1}\Big)
= (1-x)^{-a} \Big(1 - \frac{x}{x-1}\Big)^{b-c} {}_2F_1(c-a, c-b; c; x) \]
and \( 1 - x/(x-1) = (1-x)^{-1} \).
Equating coefficients of \(x^n\) on the two sides of the formula in Corollary 1.2.5
proves the Saalschütz summation formula for a balanced terminating \({}_3F_2\) series.

Proposition 1.2.6 For \( n = 0, 1, 2, \ldots \) and \( c, d \notin \{0, -1, -2, \ldots, -n+1\} \) with
\( -n+a+b+1 = c+d \) (the balanced condition), we have
\[ {}_3F_2\Big(\begin{matrix} -n, a, b \\ c, d \end{matrix}; 1\Big) = \frac{(c-a)_n (c-b)_n}{(c)_n (c-a-b)_n} = \frac{(c-a)_n (d-a)_n}{(c)_n (d)_n}. \]

Proof Considering the coefficient of \(x^n\) in the equation
\[ (1-x)^{a+b-c} \, {}_2F_1(a, b; c; x) = {}_2F_1(c-a, c-b; c; x) \]
yields
\[ \sum_{j=0}^{n} \frac{(c-a-b)_{n-j} (a)_j (b)_j}{(n-j)! \, j! \, (c)_j} = \frac{(c-a)_n (c-b)_n}{n! \, (c)_n}, \]
but
\[ \frac{(c-a-b)_{n-j}}{(n-j)!} = \frac{(c-a-b)_n \, (-n)_j}{n! \, (1-n-c+a+b)_j}; \]
this proves the first formula with \( d = -n+a+b-c+1 \). Further,
\( (c-b)_n = (-1)^n (1-n-c+b)_n = (-1)^n (d-a)_n \) and \( (-1)^n (c-a-b)_n =
(1-n-c+a+b)_n = (d)_n \), which proves the second formula.
1.2.1 Lauricella series
There are many ways to define multivariable analogues of the hypergeometric
series. One straightforward and useful approach consists of the Lauricella generalizations
of the \({}_2F_1\) series; see Exton [1976]. Fix \( d = 1, 2, 3, \ldots \), vector
parameters \( a, b, c \in \mathbb{R}^d \), scalar parameters \( \alpha, \beta, \gamma \) and the variable \( x \in \mathbb{R}^d \). For
concise formulation we use the following: let \( m \in \mathbb{N}_0^d \), \( m! = \prod_{j=1}^{d} (m_j)! \),
\( |m| = \sum_{j=1}^{d} m_j \), \( (a)_m = \prod_{j=1}^{d} (a_j)_{m_j} \) and \( x^m = \prod_{j=1}^{d} x_j^{m_j} \).
The four types of Lauricella functions are (with summations over \( m \in \mathbb{N}_0^d \)):

1. \( F_A(\alpha, b; c; x) = \sum_m \dfrac{(\alpha)_{|m|} (b)_m}{(c)_m \, m!} x^m \), convergent for \( \sum_{j=1}^{d} |x_j| < 1 \);
2. \( F_B(a, b; \gamma; x) = \sum_m \dfrac{(a)_m (b)_m}{(\gamma)_{|m|} \, m!} x^m \), convergent for \( \max_j |x_j| < 1 \);
3. \( F_C(\alpha, \beta; c; x) = \sum_m \dfrac{(\alpha)_{|m|} (\beta)_{|m|}}{(c)_m \, m!} x^m \), convergent for \( \sum_{j=1}^{d} |x_j|^{1/2} < 1 \);
4. \( F_D(\alpha, b; \gamma; x) = \sum_m \dfrac{(\alpha)_{|m|} (b)_m}{(\gamma)_{|m|} \, m!} x^m \), convergent for \( \max_j |x_j| < 1 \).
There are integral representations of Euler type (the following are subject to
obvious convergence conditions; further, any argument of a gamma function must
have a positive real part):

1. \[ F_A(\alpha, b; c; x) = \prod_{j=1}^{d} \frac{\Gamma(c_j)}{\Gamma(b_j)\Gamma(c_j-b_j)}
\int_{[0,1]^d} \prod_{j=1}^{d} \big[ u_j^{b_j-1} (1-u_j)^{c_j-b_j-1} \big]
\Big(1 - \sum_{j=1}^{d} u_j x_j\Big)^{-\alpha} \, du; \]
2. \[ F_B(a, b; \gamma; x) = \prod_{j=1}^{d} \Gamma(a_j)^{-1} \, \frac{\Gamma(\gamma)}{\Gamma(\gamma-\sigma)}
\int_{T^d} \prod_{j=1}^{d} \big[ u_j^{a_j-1} (1-u_j x_j)^{-b_j} \big]
\Big(1 - \sum_{j=1}^{d} u_j\Big)^{\gamma-\sigma-1} \, du, \]
where \( \sigma = \sum_{j=1}^{d} a_j \) and \( T^d \) is the simplex \( \{u \in \mathbb{R}^d : u_j \geq 0 \text{ for all } j, \text{ and } \sum_{j=1}^{d} u_j \leq 1\} \);
3. \[ F_D(\alpha, b; \gamma; x) = \frac{\Gamma(\gamma)}{\Gamma(\alpha)\Gamma(\gamma-\alpha)}
\int_0^1 u^{\alpha-1} (1-u)^{\gamma-\alpha-1} \prod_{j=1}^{d} (1-u x_j)^{-b_j} \, du, \]
a single integral.
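The single-integral representation of \(F_D\) lends itself to a numerical cross-check against the defining double series for d = 2. This is an illustrative sketch only (helper names and parameter values are ours), using a truncated series and a midpoint quadrature.

```python
import math
from itertools import product

def poch(x, n):
    # Pochhammer symbol (x)_n
    r = 1.0
    for i in range(n):
        r *= x + i
    return r

def lauricella_fd_series(alpha, b, gamma, x, kmax=50):
    # Truncated double series over m in N_0^2 with |m| <= kmax
    total = 0.0
    for m1, m2 in product(range(kmax + 1), repeat=2):
        k = m1 + m2
        if k > kmax:
            continue
        total += (poch(alpha, k) * poch(b[0], m1) * poch(b[1], m2)
                  / (poch(gamma, k) * math.factorial(m1) * math.factorial(m2))
                  * x[0] ** m1 * x[1] ** m2)
    return total

def lauricella_fd_integral(alpha, b, gamma, x, n=100000):
    # Midpoint rule for the single Euler integral; needs gamma > alpha > 0
    const = math.gamma(gamma) / (math.gamma(alpha) * math.gamma(gamma - alpha))
    h = 1.0 / n
    s = 0.0
    for i in range(n):
        u = (i + 0.5) * h
        s += (u ** (alpha - 1) * (1 - u) ** (gamma - alpha - 1)
              * (1 - u * x[0]) ** (-b[0]) * (1 - u * x[1]) ** (-b[1]))
    return const * h * s

alpha, b, gamma, x = 1.2, (0.5, 0.8), 2.5, (0.3, -0.4)
assert abs(lauricella_fd_series(alpha, b, gamma, x)
           - lauricella_fd_integral(alpha, b, gamma, x)) < 1e-4
```

The tolerance covers both the series truncation and the quadrature error.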
1.3 Orthogonal Polynomials of One Variable
1.3.1 General properties
We start with a determinant approach to the Gram–Schmidt process, a method for
producing orthogonal bases of functions given a linearly (totally) ordered basis.
Suppose that X is a region in \(\mathbb{R}^d\) (for \( d \geq 1 \)), \(\mu\) is a probability measure on X
and \( \{f_i(x) : i = 1, 2, 3, \ldots\} \) is a set of functions linearly independent in \( L^2(X, \mu) \).
Denote the inner product \( \int_X fg \, d\mu \) as \( \langle f, g \rangle \) and the elements of the Gram matrix
\( \langle f_i, f_j \rangle \) as \( g_{ij} \), \( i, j \in \mathbb{N} \).
Definition 1.3.1 For \( n \in \mathbb{N} \) let \( d_n = \det (g_{ij})_{i,j=1}^{n} \) and
\[ D_n(x) = \det \begin{pmatrix}
g_{11} & g_{12} & \ldots & g_{1n} \\
\vdots & \vdots & \ldots & \vdots \\
g_{n-1,1} & g_{n-1,2} & \ldots & g_{n-1,n} \\
f_1(x) & f_2(x) & \ldots & f_n(x)
\end{pmatrix}. \]
Proposition 1.3.2 The functions \( \{D_n(x) : n \geq 1\} \) are orthogonal in \( L^2(X, \mu) \),
\( \operatorname{span}\{D_j(x) : 1 \leq j \leq n\} = \operatorname{span}\{f_j(x) : 1 \leq j \leq n\} \) and \( \langle D_n, D_n \rangle = d_{n-1} d_n \).

Proof By linear independence, \( d_n > 0 \) for all n; thus \( D_n(x) = d_{n-1} f_n(x) +
\sum_{j<n} c_j f_j(x) \) for some coefficients \( c_j \) (where \( d_0 = 1 \)) and \( \operatorname{span}\{D_j : j \leq n\} =
\operatorname{span}\{f_j : j \leq n\} \). The inner product \( \langle f_j, D_n \rangle \) is the determinant of the matrix in the
definition of \( D_n \) with the last row replaced by \( (g_{j1}, g_{j2}, \ldots, g_{jn}) \) and hence is zero
for \( j < n \). Thus \( \langle D_j, D_n \rangle = 0 \) for \( j < n \) and \( \langle D_n, D_n \rangle = d_{n-1} \langle f_n, D_n \rangle = d_{n-1} d_n \).
There are integral formulae for \( d_n \) and \( D_n(x) \) which are interesting foreshadowings
of multivariable weight functions involving the discriminant, as follows.

Definition 1.3.3 For \( n \in \mathbb{N} \) and \( x_1, x_2, \ldots, x_n \in X \) let
\[ P_n(x_1, x_2, \ldots, x_n) = \det\big( f_j(x_i) \big)_{i,j=1}^{n}. \]
Proposition 1.3.4 For \( n \in \mathbb{N} \) and \( x_1, x_2, \ldots, x_n \in X \),
\[ \int_{X^n} P_n(x_1, x_2, \ldots, x_n)^2 \, d\mu(x_1) \cdots d\mu(x_n) = n! \, d_n \]
and
\[ \int_{X^n} P_n(x_1, x_2, \ldots, x_n) \, P_{n+1}(x_1, x_2, \ldots, x_n, x) \, d\mu(x_1) \cdots d\mu(x_n) = n! \, D_{n+1}(x). \]
Proof In the first integral, make the expansion
\[ P_n(x_1, x_2, \ldots, x_n)^2 = \sum_{\sigma, \tau} \varepsilon_\sigma \varepsilon_\tau \prod_{i=1}^{n} f_{\sigma i}(x_i) f_{\tau i}(x_i), \]
where the summations are over the symmetric group \( S_n \) (on n objects); \( \varepsilon_\sigma \) and \( \sigma i \)
denote the sign of the permutation \( \sigma \) and the action of \( \sigma \) on i, respectively. Integrating
over \( X^n \) gives the sum \( \sum_{\sigma,\tau} \varepsilon_\sigma \varepsilon_\tau \prod_{i=1}^{n} g_{\sigma i, \tau i} = n! \sum_\sigma \varepsilon_\sigma \prod_{i=1}^{n} g_{i, \sigma i} = n! \, d_n \).
The summation over \( \sigma, \tau \) is done by first fixing \( \sigma \) and then replacing \( \tau i \) by \( \tau\sigma^{-1} i \) and
\( \tau \) by \( \tau\sigma^{-1} \). Similarly,
\[ P_n(x_1, x_2, \ldots, x_n) \, P_{n+1}(x_1, x_2, \ldots, x_n, x)
= \sum_{\sigma, \tau} \varepsilon_\sigma \varepsilon_\tau \prod_{i=1}^{n} f_{\sigma i}(x_i) f_{\tau i}(x_i) \, f_{\tau(n+1)}(x), \]
and the \( \tau \)-sum is over \( S_{n+1} \). As before, the integral has the value
\[ \sum_{\sigma, \tau} \varepsilon_\sigma \varepsilon_\tau \prod_{i=1}^{n} g_{\sigma i, \tau i} \, f_{\tau(n+1)}(x), \]
which reduces to the expression \( n! \sum_{\tau} \varepsilon_\tau \prod_{i=1}^{n} g_{i, \tau i} \, f_{\tau(n+1)}(x) = n! \, D_{n+1}(x) \).
We now specialize to orthogonal polynomials; let $\mu$ be a probability measure supported on a (possibly infinite) interval $[a, b]$ such that $\int_a^b |x|^n \,\mathrm{d}\mu < \infty$ for all $n$. We may as well assume that $\mu$ is not a finite discrete measure, so that $\{1, x, x^2, x^3, \ldots\}$ is linearly independent in $L^2(\mu)$; it is not difficult to modify the results to the situation where $L^2(\mu)$ is of finite dimension. We apply Proposition 1.3.2 to the basis $f_j(x) = x^{j-1}$; the Gram matrix has the form of a Hankel matrix, $g_{ij} = c_{i+j-2}$, where the $n$th moment of $\mu$ is
$$c_n = \int_a^b x^n \,\mathrm{d}\mu(x)$$
and the orthonormal polynomials $\{p_n(x) : n \ge 0\}$ are defined by
$$p_n(x) = (d_{n+1} d_n)^{-1/2} D_{n+1}(x);$$
they satisfy $\int_a^b p_m(x) p_n(x)\,\mathrm{d}\mu(x) = \delta_{mn}$, and the leading coefficient of $p_n$ is $(d_n/d_{n+1})^{1/2} > 0$. Of course this implies that $\int_a^b p_n(x) q(x)\,\mathrm{d}\mu(x) = 0$ for any polynomial $q(x)$ of degree $\le n-1$. The determinant $P_n$ in Definition 1.3.3 is exactly the Vandermonde determinant $\det(x_i^{j-1})_{i,j=1}^n = \prod_{1 \le i < j \le n}(x_j - x_i)$.
Proposition 1.3.5  For $n \ge 0$,
$$\int_{[a,b]^n} \prod_{1 \le i < j \le n}(x_j - x_i)^2 \,\mathrm{d}\mu(x_1) \cdots \mathrm{d}\mu(x_n) = n!\, d_n,$$
$$\int_{[a,b]^n} \prod_{i=1}^n (x - x_i) \prod_{1 \le i < j \le n}(x_j - x_i)^2 \,\mathrm{d}\mu(x_1) \cdots \mathrm{d}\mu(x_n) = n!\, (d_n d_{n+1})^{1/2}\, p_n(x).$$

It is a basic fact that $p_n(x)$ has $n$ distinct (and simple) zeros in $[a, b]$.
Proposition 1.3.6  For $n \ge 1$, the polynomial $p_n(x)$ has $n$ distinct zeros in the open interval $(a, b)$.

Proof  Suppose that $p_n(x)$ changes sign at $t_1, \ldots, t_m$ in $(a, b)$. Then it follows that $\varepsilon\, p_n(x) \prod_{i=1}^m (x - t_i) \ge 0$ on $[a, b]$ for $\varepsilon = 1$ or $-1$. If $m < n$ then $\int_a^b \varepsilon\, p_n(x) \prod_{i=1}^m (x - t_i)\,\mathrm{d}\mu(x) = 0$, which implies that the integrand is zero on the support of $\mu$, a contradiction.
In many applications one uses orthogonal, rather than orthonormal, polynomials (by reason of their neater notation, generating function and normalized values at an end point, for example). This means that we have a family of nonzero polynomials $\{P_n(x) : n \ge 0\}$ with $P_n(x)$ of exact degree $n$ and for which $\int_a^b P_n(x)\, x^j \,\mathrm{d}\mu(x) = 0$ for $j < n$. We say that the squared norm $\int_a^b P_n(x)^2 \,\mathrm{d}\mu(x) = h_n$ is a structural constant. Further, $p_n(x) = \pm h_n^{-1/2} P_n(x)$; the sign depends on the leading coefficient of $P_n(x)$.
1.3.2 Three-term recurrence
Besides the Gram matrix of moments there is another important matrix associated with a family of orthogonal polynomials, the Jacobi matrix. The principal minors of this tridiagonal matrix provide an interpretation of the three-term recurrence relations. For $n \ge 0$ the polynomial $xP_n(x)$ is of degree $n+1$ and can be expressed in terms of $\{P_j : j \le n+1\}$, but more is true.
Proposition 1.3.7  There exist sequences $\{A_n\}_{n \ge 0}$, $\{B_n\}_{n \ge 0}$, $\{C_n\}_{n \ge 1}$ such that
$$P_{n+1}(x) = (A_n x + B_n) P_n(x) - C_n P_{n-1}(x),$$
where
$$A_n = \frac{k_{n+1}}{k_n}, \qquad C_n = \frac{k_{n+1} k_{n-1} h_n}{k_n^2 h_{n-1}}, \qquad B_n = -\frac{k_{n+1}}{k_n h_n} \int_a^b x P_n(x)^2 \,\mathrm{d}\mu(x),$$
and $k_n$ is the leading coefficient of $P_n(x)$.

Proof  Expanding $xP_n(x)$ in terms of the polynomials $P_j$ gives $\sum_{j=0}^{n+1} a_j P_j(x)$ with $a_j = h_j^{-1} \int_a^b x P_n(x) P_j(x)\,\mathrm{d}\mu(x)$. By the orthogonality property, $a_j = 0$ unless $|n - j| \le 1$. The value of $a_{n+1} = A_n^{-1}$ is obtained by matching the coefficients of $x^{n+1}$. Shifting the label gives the value of $C_n$.
Corollary 1.3.8  For the special case of monic orthogonal polynomials, the three-term recurrence is
$$P_{n+1}(x) = (x + B_n) P_n(x) - C_n P_{n-1}(x),$$
where
$$C_n = \frac{d_{n+1} d_{n-1}}{d_n^2} \quad\text{and}\quad B_n = -\frac{d_n}{d_{n+1}} \int_a^b x P_n(x)^2 \,\mathrm{d}\mu(x).$$

Proof  In the notation from the end of the last subsection, the structure constant for the monic case is $h_n = d_{n+1}/d_n$.
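As an illustrative sketch (not from the text; the measure $\mathrm{d}x/2$ on $[-1,1]$ is an arbitrary example), the monic recurrence coefficients can be computed directly from the Hankel moment determinants and compared with the known monic Legendre values $C_n = n^2/(4n^2-1)$:

```python
import numpy as np

# Hankel moment determinants d_n for dmu = dx/2 on [-1, 1]:
# moments c_k = 1/(k+1) for even k, 0 for odd k.
def d(n):
    if n == 0:
        return 1.0
    c = [1.0 / (k + 1) if k % 2 == 0 else 0.0 for k in range(2 * n - 1)]
    H = np.array([[c[i + j] for j in range(n)] for i in range(n)])
    return np.linalg.det(H)

# Corollary 1.3.8: C_n = d_{n+1} d_{n-1} / d_n^2  (B_n = 0, the weight is even).
C = [d(n + 1) * d(n - 1) / d(n) ** 2 for n in range(1, 5)]
print(C)   # matches n^2/(4n^2-1): 1/3, 4/15, 9/35, 16/63
```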
It is convenient to restate the recurrence, and some other, relations for orthogonal polynomials with arbitrary leading coefficients in terms of the moment determinants $d_n$ (see Definition 1.3.1).
Proposition 1.3.9  Suppose that the leading coefficient of $P_n(x)$ is $k_n$, and let $b_n = \int_a^b x\, p_n(x)^2 \,\mathrm{d}\mu(x)$; then
$$h_n = k_n^2\, \frac{d_{n+1}}{d_n},$$
$$x P_n(x) = \frac{k_n}{k_{n+1}}\, P_{n+1}(x) + b_n P_n(x) + \frac{k_{n-1} h_n}{k_n h_{n-1}}\, P_{n-1}(x).$$

Corollary 1.3.10  For the case of orthonormal polynomials $p_n$,
$$x p_n(x) = a_n p_{n+1}(x) + b_n p_n(x) + a_{n-1} p_{n-1}(x),$$
where $a_n = k_n/k_{n+1} = (d_n d_{n+2}/d_{n+1}^2)^{1/2}$.
With these formulae one can easily find the reproducing kernel for polynomials of degree $n$, the Christoffel–Darboux formula:

Proposition 1.3.11  For $n \ge 1$, if $k_n$ is the leading coefficient of $p_n$ then we have
$$\sum_{j=0}^n p_j(x)\, p_j(y) = \frac{k_n}{k_{n+1}}\, \frac{p_{n+1}(x) p_n(y) - p_n(x) p_{n+1}(y)}{x - y},$$
$$\sum_{j=0}^n p_j(x)^2 = \frac{k_n}{k_{n+1}} \bigl[ p_{n+1}'(x) p_n(x) - p_n'(x) p_{n+1}(x) \bigr].$$
Proof  By the recurrence in the previous proposition, for $j \ge 0$,
$$(x - y)\, p_j(x) p_j(y) = \frac{k_j}{k_{j+1}} \bigl[ p_{j+1}(x) p_j(y) - p_j(x) p_{j+1}(y) \bigr] + \frac{k_{j-1}}{k_j} \bigl[ p_{j-1}(x) p_j(y) - p_j(x) p_{j-1}(y) \bigr].$$
The terms involving $b_j$ arising from setting $n = j$ in Corollary 1.3.10 cancel out. Now sum these equations over $0 \le j \le n$; the terms telescope (and note that the case $j = 0$ is special, with $p_{-1} = 0$). This proves the first formula in the proposition; the second follows from l'Hospital's rule.
The Jacobi tridiagonal matrix associated with the three-term recurrence is the semi-infinite matrix
$$J = \begin{pmatrix} b_0 & a_0 & & &\\ a_0 & b_1 & a_1 & &\\ & a_1 & b_2 & a_2 &\\ & & a_2 & b_3 & \ddots\\ & & & \ddots & \ddots \end{pmatrix},$$
where $a_{n-1} = \sqrt{C_n} = \sqrt{d_{n+1} d_{n-1}}/d_n$ ($a_{n-1}$ is the same as the coefficient $a_{n-1}$ in Corollary 1.3.10 for orthonormal polynomials). Let $J_n$ denote the left upper submatrix of $J$ of size $n \times n$, and let $I_n$ be the $n \times n$ identity matrix. Then $\det(xI_{n+1} - J_{n+1}) = (x - b_n)\det(xI_n - J_n) - a_{n-1}^2 \det(xI_{n-1} - J_{n-1})$, a simple matrix identity; hence $P_n(x) = \det(xI_n - J_n)$. Note that the fundamental result regarding the eigenvalues of symmetric tridiagonal matrices with all superdiagonal entries nonzero shows again that the zeros of $P_n$ are simple and real. The eigenvectors of $J_n$ have elegant expressions in terms of $p_j$ and provide an explanation for Gaussian quadrature.
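The identity $P_n(x) = \det(xI_n - J_n)$ can be checked numerically; a minimal sketch (not from the text), using the monic Legendre-type coefficients $b_n = 0$, $C_n = n^2/(4n^2-1)$ for the measure $\mathrm{d}x/2$ on $[-1,1]$:

```python
import numpy as np

# Jacobi matrix with a_{k-1} = sqrt(C_k), b_k = 0.
n = 4
a = np.array([np.sqrt(k * k / (4.0 * k * k - 1.0)) for k in range(1, n)])
J = np.diag(a, 1) + np.diag(a, -1)

x = 0.37
det_val = np.linalg.det(x * np.eye(n) - J)

# Monic recurrence P_{k+1} = x P_k - C_k P_{k-1}  (Corollary 1.3.8):
p0, p1 = 1.0, x
for k in range(1, n):
    p0, p1 = p1, x * p1 - (k * k / (4.0 * k * k - 1.0)) * p0
print(det_val, p1)   # the two values agree
```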
Let $\lambda_1, \lambda_2, \ldots, \lambda_n$ be the zeros of $p_n(x)$.
Theorem 1.3.12  For $1 \le j \le n$ let
$$v^{(j)} = \bigl(p_0(\lambda_j), p_1(\lambda_j), \ldots, p_{n-1}(\lambda_j)\bigr)^{\mathrm{T}};$$
then $J_n v^{(j)} = \lambda_j v^{(j)}$. Further,
$$\sum_{j=1}^n \gamma_j\, p_r(\lambda_j)\, p_s(\lambda_j) = \delta_{rs},$$
where $0 \le r, s \le n-1$ and $\gamma_j = \bigl[\sum_{i=0}^{n-1} p_i(\lambda_j)^2\bigr]^{-1}$.
Proof  We want to show that the $i$th entry of $(J_n - \lambda_j I_n) v^{(j)}$ is zero. The typical equation for this entry is
$$a_{i-2}\, p_{i-2}(\lambda_j) + (b_{i-1} - \lambda_j)\, p_{i-1}(\lambda_j) + a_{i-1}\, p_i(\lambda_j) = \Bigl(a_{i-2} - \frac{k_{i-2}}{k_{i-1}}\Bigr) p_{i-2}(\lambda_j) + \Bigl(a_{i-1} - \frac{k_{i-1}}{k_i}\Bigr) p_i(\lambda_j) = 0,$$
because $k_i = (d_i/d_{i+1})^{1/2}$ and $a_{i-1} = (d_{i+1} d_{i-1}/d_i^2)^{1/2} = k_{i-1}/k_i$ for $i \ge 1$, where $k_i$ is the leading coefficient of $p_i$. This uses the recurrence in Proposition 1.3.9; when $i = 1$ the term $p_{-1}(\lambda_j) = 0$ and for $i = n$ the fact that $p_n(\lambda_j) = 0$ is crucial. Since $J_n$ is symmetric, the eigenvectors are pairwise orthogonal, that is, $\sum_{i=1}^n p_{i-1}(\lambda_j)\, p_{i-1}(\lambda_r) = \delta_{jr}/\gamma_j$ for $1 \le j, r \le n$. Hence the matrix $\bigl(\gamma_j^{1/2} p_{i-1}(\lambda_j)\bigr)_{i,j=1}^n$ is orthogonal. The orthonormality of the rows of the matrix is exactly the second part of the theorem.
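The eigenvalue and eigenvector statements of Theorem 1.3.12 can be checked concretely; a sketch (not from the text) using the Chebyshev weight normalized to a probability measure, for which $b_n = 0$, $a_0 = 1/\sqrt{2}$, $a_n = 1/2$ for $n \ge 1$, and $p_0 = 1$, $p_k(x) = \sqrt{2}\cos(k \arccos x)$:

```python
import numpy as np

n = 6
a = np.array([1 / np.sqrt(2)] + [0.5] * (n - 2))
J = np.diag(a, 1) + np.diag(a, -1)

lam = np.sort(np.linalg.eigvalsh(J))
# zeros of p_n (i.e. of T_n): cos((2j-1) pi / (2n))
zeros = np.sort(np.cos((2 * np.arange(1, n + 1) - 1) * np.pi / (2 * n)))
print(np.max(np.abs(lam - zeros)))        # ~ 0

def p(k, x):                              # orthonormal Chebyshev polynomials
    return 1.0 if k == 0 else np.sqrt(2) * np.cos(k * np.arccos(x))

for l in lam:                             # v^{(j)} is an eigenvector of J_n
    v = np.array([p(k, l) for k in range(n)])
    print(np.max(np.abs(J @ v - l * v)))  # each ~ 0
```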
This theorem has some interesting implications. First, the set $\{p_i(x) : 0 \le i \le n-1\}$ comprises the orthonormal polynomials for the discrete measure $\sum_{j=1}^n \gamma_j \delta(\lambda_j)$, where $\delta(x)$ denotes a unit mass at $x$. This measure has total mass 1 because $p_0 = 1$. Second, the theorem implies that $\sum_{j=1}^n \gamma_j\, q(\lambda_j) = \int_a^b q \,\mathrm{d}\mu$ for any polynomial $q(x)$ of degree $\le n-1$, because this relation is valid for every basis element $p_i$ for $0 \le i \le n-1$. In fact, this is a description of Gaussian quadrature. The Christoffel–Darboux formula, Proposition 1.3.11, leads to another expression (perhaps more concise) for the weights $\gamma_j$.
Theorem 1.3.13  For $n \ge 1$ let $\lambda_1, \lambda_2, \ldots, \lambda_n$ be the zeros of $p_n(x)$ and let $\gamma_j = \bigl(\sum_{i=0}^{n-1} p_i(\lambda_j)^2\bigr)^{-1}$. Then
$$\gamma_j = \frac{k_n}{k_{n-1}\, p_n'(\lambda_j)\, p_{n-1}(\lambda_j)},$$
and for any polynomial $q(x)$ of degree $\le 2n-1$ the following holds:
$$\int_a^b q \,\mathrm{d}\mu = \sum_{j=1}^n \gamma_j\, q(\lambda_j).$$
Proof  In the Christoffel–Darboux formula for $y = x$, set $x = \lambda_j$ (thus $p_n(\lambda_j) = 0$). This proves the first part of the theorem (note that $0 < \gamma_j \le 1$ because $p_0 = 1$). Suppose that $q(x)$ is a polynomial of degree $\le 2n-1$; then, by synthetic division, $q(x) = p_n(x) g(x) + r(x)$ where $g(x), r(x)$ are polynomials of degree $\le n-1$. Thus $r(\lambda_j) = q(\lambda_j)$ for each $j$ and, by orthogonality,
$$\int_a^b q \,\mathrm{d}\mu = \int_a^b p_n g \,\mathrm{d}\mu + \int_a^b r \,\mathrm{d}\mu = \int_a^b r \,\mathrm{d}\mu = \sum_{j=1}^n \gamma_j\, r(\lambda_j).$$
This completes the proof.
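The quadrature statement is easy to test numerically; a sketch (not from the text) for the normalized measure $\mathrm{d}x/2$ on $[-1,1]$, whose orthonormal recurrence has $b_n = 0$ and $a_n = (n+1)/\sqrt{4(n+1)^2-1}$, so nodes and weights come from the eigendecomposition of $J_n$ (the weights are the squared first components of the unit eigenvectors, since $p_0 = 1$):

```python
import numpy as np

n = 5
k = np.arange(1, n)
a = k / np.sqrt(4.0 * k**2 - 1.0)
J = np.diag(a, 1) + np.diag(a, -1)
lam, V = np.linalg.eigh(J)
gamma = V[0, :] ** 2                       # gamma_j = (first component)^2

# Exactness for degree <= 2n - 1 = 9; exact moments are 1/(m+1) (even m), 0 (odd m)
for m in range(2 * n):
    exact = 1.0 / (m + 1) if m % 2 == 0 else 0.0
    print(m, abs(np.sum(gamma * lam**m) - exact))   # all ~ 0
```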
One notes that the leading coefficients $k_i$ (for orthonormal polynomials, see Corollary 1.3.10) can be recovered from the Jacobi matrix $J$; indeed $k_i = \prod_{j=1}^i a_{j-1}^{-1}$ and $k_0 = 1$, that is, $p_0 = 1$. This provides the normalization of the associated discrete measures $\sum_{j=1}^n \gamma_j \delta(\lambda_j)$. The moment determinants are calculated from $d_i = \prod_{j=1}^{i-1} k_j^{-2}$.

If the measure $\mu$ is finite with point masses at $t_1, t_2, \ldots, t_n$ then the Jacobi matrix has the entry $a_n = 0$ and $J_n$ produces the orthogonal polynomials $p_0, p_1, \ldots, p_{n-1}$ for $\mu$, and the eigenvalues of $J_n$ are $t_1, t_2, \ldots, t_n$.
1.4 Classical Orthogonal Polynomials
This section is concerned with the Hermite, Laguerre, Chebyshev, Legendre, Gegenbauer and Jacobi polynomials. For each family the orthogonality measure has the form $\mathrm{d}\mu(x) = c\, w(x)\,\mathrm{d}x$, with weight function $w(x) > 0$ on an interval and normalization constant $c = \bigl[\int_{\mathbb{R}} w(x)\,\mathrm{d}x\bigr]^{-1}$. Typically the polynomials are defined by a Rodrigues relation which easily displays the required orthogonality property. Then some computations are needed to determine the other structures: $h_n$ (which is always stated with respect to a normalized $\mu$), the three-term recurrences, expressions in terms of hypergeometric series, differential equations and so forth. When the weight function is even, so that $w(x) = w(-x)$, the coefficients $b_n$ in Proposition 1.3.9 are zero for all $n$ and the orthogonal polynomials satisfy $P_n(-x) = (-1)^n P_n(x)$.
1.4.1 Hermite polynomials
For the Hermite polynomials $H_n(x)$, the weight function $w(x)$ is $\mathrm{e}^{-x^2}$ on $x \in \mathbb{R}$ and the normalization constant $c$ is $\Gamma(\frac12)^{-1} = \pi^{-1/2}$.

Definition 1.4.1  For $n \ge 0$ let $H_n(x) = (-1)^n \mathrm{e}^{x^2} \bigl(\frac{\mathrm{d}}{\mathrm{d}x}\bigr)^n \mathrm{e}^{-x^2}$.

Proposition 1.4.2  For $0 \le m < n$,
$$\int_{\mathbb{R}} x^m H_n(x)\, \mathrm{e}^{-x^2} \,\mathrm{d}x = 0$$
and
$$h_n = \pi^{-1/2} \int_{\mathbb{R}} [H_n(x)]^2\, \mathrm{e}^{-x^2} \,\mathrm{d}x = 2^n\, n!.$$
Proof  For any polynomial $q(x)$ we have
$$\int_{\mathbb{R}} q(x) H_n(x)\, \mathrm{e}^{-x^2} \,\mathrm{d}x = \int_{\mathbb{R}} \Bigl[\Bigl(\frac{\mathrm{d}}{\mathrm{d}x}\Bigr)^n q(x)\Bigr] \mathrm{e}^{-x^2} \,\mathrm{d}x,$$
using $n$-times-repeated integration by parts. From the definition it is clear that the leading coefficient $k_n = 2^n$, and thus $h_n = \pi^{-1/2} \int_{\mathbb{R}} 2^n\, n!\, \mathrm{e}^{-x^2} \,\mathrm{d}x = 2^n\, n!$.
Proposition 1.4.3  For $x, r \in \mathbb{R}$, the generating function is
$$\exp(2rx - r^2) = \sum_{n=0}^{\infty} H_n(x)\, \frac{r^n}{n!}$$
and
$$H_n(x) = \sum_{j \le n/2} \frac{n!}{(n-2j)!\, j!}\, (-1)^j (2x)^{n-2j}, \qquad \frac{\mathrm{d}}{\mathrm{d}x} H_n(x) = 2n\, H_{n-1}(x).$$

Proof  Expand $f(r) = \exp[-(x-r)^2]$ in a Taylor series at $r = 0$ for fixed $x$. Write the generating function as
$$\sum_{m=0}^{\infty} \frac{1}{m!} \sum_{j=0}^m \binom{m}{j} (2x)^{m-j} r^{m+j} (-1)^j,$$
set $m = n - j$ and collect the coefficient of $r^n/n!$. Now differentiate the generating function and compare the coefficients of $r^n$.

The three-term recurrence (see Proposition 1.3.7) is easily computed using $k_n = 2^n$, $h_n = 2^n n!$ and $b_n = 0$:
$$H_{n+1}(x) = 2x\, H_n(x) - 2n\, H_{n-1}(x)$$
with $a_n = \sqrt{\frac12 (n+1)}$. This together with the fact that $H_n'(x) = 2n H_{n-1}(x)$, used twice, establishes the differential equation
$$H_n''(x) - 2x\, H_n'(x) + 2n\, H_n(x) = 0.$$
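A short numerical check (not from the text; sample points and degrees are arbitrary) that the recurrence, the explicit series, and the differential equation are mutually consistent:

```python
import numpy as np
from math import factorial

# Hermite coefficient arrays (increasing degree) from H_{n+1} = 2x H_n - 2n H_{n-1}
def hermite(n):
    H = [np.array([1.0]), np.array([0.0, 2.0])]
    for k in range(1, n):
        Hk1 = (2 * np.concatenate(([0.0], H[k]))
               - 2 * k * np.concatenate((H[k - 1], [0.0, 0.0])))
        H.append(Hk1)
    return H

H = hermite(6)
x = 0.7

# explicit series: H_n(x) = sum_j n!/((n-2j)! j!) (-1)^j (2x)^{n-2j}
def series(n, x):
    return sum(factorial(n) / (factorial(n - 2 * j) * factorial(j))
               * (-1) ** j * (2 * x) ** (n - 2 * j) for j in range(n // 2 + 1))

print(np.polyval(H[5][::-1], x), series(5, x))      # equal

# H_n'' - 2x H_n' + 2n H_n = 0 at the sample point
n = 5
d1 = np.polyval(np.polyder(H[n][::-1]), x)
d2 = np.polyval(np.polyder(H[n][::-1], 2), x)
print(d2 - 2 * x * d1 + 2 * n * np.polyval(H[n][::-1], x))   # ~ 0
```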
1.4.2 Laguerre polynomials
For a parameter $\alpha > -1$, the weight function for the Laguerre polynomials is $x^{\alpha} \mathrm{e}^{-x}$ on $\mathbb{R}_+ = \{x : x \ge 0\}$ with constant $\Gamma(\alpha+1)^{-1}$.

Definition 1.4.4  For $n \ge 0$ let $L_n^{\alpha}(x) = \dfrac{1}{n!}\, x^{-\alpha} \mathrm{e}^{x} \Bigl(\dfrac{\mathrm{d}}{\mathrm{d}x}\Bigr)^n \bigl(x^{n+\alpha} \mathrm{e}^{-x}\bigr)$.

Proposition 1.4.5  For $n \ge 0$,
$$L_n^{\alpha}(x) = \frac{(\alpha+1)_n}{n!} \sum_{j=0}^n \frac{(-n)_j}{(\alpha+1)_j}\, \frac{x^j}{j!}.$$
Proof  Expand the definition to
$$L_n^{\alpha}(x) = \frac{1}{n!} \sum_{j=0}^n \binom{n}{j} (-n-\alpha)_{n-j} (-1)^n x^j$$
(with the $n$-fold product rule) and write $(-n-\alpha)_{n-j} = (-1)^{n-j} (\alpha+1)_n / (\alpha+1)_j$.

Arguing as for the Hermite polynomials, the following hold for the Laguerre polynomials:

1. $\displaystyle\int_0^{\infty} q(x) L_n^{\alpha}(x)\, x^{\alpha} \mathrm{e}^{-x} \,\mathrm{d}x = \frac{(-1)^n}{n!} \int_0^{\infty} \Bigl(\frac{\mathrm{d}}{\mathrm{d}x}\Bigr)^n q(x)\; x^{\alpha+n} \mathrm{e}^{-x} \,\mathrm{d}x$ for any polynomial $q(x)$;

2. $\displaystyle\int_0^{\infty} x^m L_n^{\alpha}(x)\, x^{\alpha} \mathrm{e}^{-x} \,\mathrm{d}x = 0$ for $0 \le m < n$;

3. the leading coefficient is $k_n = \dfrac{(-1)^n}{n!}$;

4. $h_n = \dfrac{1}{\Gamma(\alpha+1)} \displaystyle\int_0^{\infty} [L_n^{\alpha}(x)]^2\, x^{\alpha} \mathrm{e}^{-x} \,\mathrm{d}x = \dfrac{1}{\Gamma(\alpha+1)\, n!} \displaystyle\int_0^{\infty} x^{\alpha+n} \mathrm{e}^{-x} \,\mathrm{d}x = \dfrac{(\alpha+1)_n}{n!}$.
Proposition 1.4.6  The three-term recurrence for the Laguerre polynomials is
$$L_{n+1}^{\alpha}(x) = \frac{1}{n+1}\, (-x + 2n + 1 + \alpha)\, L_n^{\alpha}(x) - \frac{n+\alpha}{n+1}\, L_{n-1}^{\alpha}(x).$$

Proof  The coefficients $A_n$ and $C_n$ are computed from the values of $h_n$ and $k_n$ (see Proposition 1.3.7). To compute $B_n$ we have that
$$L_n^{\alpha}(x) = \frac{(-1)^n}{n!} \bigl[ x^n - n(\alpha+n) x^{n-1} \bigr] + \text{terms of lower degree};$$
hence
$$\frac{1}{\Gamma(\alpha+1)} \int_0^{\infty} x\, L_n^{\alpha}(x)^2\, x^{\alpha} \mathrm{e}^{-x} \,\mathrm{d}x = \frac{1}{n!\, \Gamma(\alpha+1)} \int_0^{\infty} \bigl[ (n+1)x - n(\alpha+n) \bigr] x^{\alpha+n} \mathrm{e}^{-x} \,\mathrm{d}x$$
$$= \frac{1}{n!} \bigl[ (n+1)(\alpha+1)_{n+1} - n(\alpha+n)(\alpha+1)_n \bigr] = \frac{1}{n!}\, (\alpha+1)_n\, (2n+1+\alpha).$$
The value of $B_n$ is obtained by multiplying this equation by $-k_{n+1}/(k_n h_n) = n!/[(n+1)(\alpha+1)_n]$.
The coefficient $a_n = -\sqrt{(n+1)(n+1+\alpha)}$. The generating function is
$$(1-r)^{-\alpha-1} \exp\Bigl(\frac{-xr}{1-r}\Bigr) = \sum_{n=0}^{\infty} L_n^{\alpha}(x)\, r^n, \qquad |r| < 1.$$
This follows from expanding the left-hand side as
$$\sum_{j=0}^{\infty} \frac{(-xr)^j}{j!}\, (1-r)^{-j-\alpha-1} = \sum_{j=0}^{\infty} \frac{(-xr)^j}{j!} \sum_{m=0}^{\infty} \frac{(\alpha+1+j)_m}{m!}\, r^m,$$
and then changing the index of summation to $m = n - j$ and extracting the coefficient of $r^n$; note that $(\alpha+1+j)_{n-j} = (\alpha+1)_n / (\alpha+1)_j$. From the ${}_1F_1$ series for $L_n^{\alpha}(x)$ the following hold:

1. $L_n^{\alpha}(0) = \dfrac{(\alpha+1)_n}{n!}$;

2. $\dfrac{\mathrm{d}}{\mathrm{d}x} L_n^{\alpha}(x) = -L_{n-1}^{\alpha+1}(x)$;

3. $x\, \dfrac{\mathrm{d}^2}{\mathrm{d}x^2} L_n^{\alpha}(x) + (\alpha + 1 - x)\, \dfrac{\mathrm{d}}{\mathrm{d}x} L_n^{\alpha}(x) + n\, L_n^{\alpha}(x) = 0$.
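A quick consistency check (not from the text; parameter and sample values are arbitrary) that the ${}_1F_1$ series and the three-term recurrence produce the same polynomials, and that $L_n^{\alpha}(0) = (\alpha+1)_n/n!$:

```python
from math import factorial

def poch(a, n):                      # Pochhammer symbol (a)_n
    r = 1.0
    for i in range(n):
        r *= a + i
    return r

def L_series(n, a, x):               # Proposition 1.4.5
    return poch(a + 1, n) / factorial(n) * sum(
        poch(-n, j) / (poch(a + 1, j) * factorial(j)) * x**j
        for j in range(n + 1))

def L_rec(n, a, x):                  # Proposition 1.4.6
    L0, L1 = 1.0, -x + a + 1.0
    if n == 0:
        return L0
    for k in range(1, n):
        L0, L1 = L1, ((-x + 2 * k + 1 + a) * L1 - (k + a) * L0) / (k + 1)
    return L1

a, x = 1.5, 0.8
print(L_series(4, a, x), L_rec(4, a, x))                   # equal
print(L_series(4, a, 0.0), poch(a + 1, 4) / factorial(4))  # equal
```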
The Laguerre polynomials will be used in the several-variable theory, as functions of $|x|^2$. The one-variable version of this appears for the weight function $|x|^{2\alpha} \mathrm{e}^{-x^2}$ for $x \in \mathbb{R}$, with constant $\Gamma\bigl(\alpha + \frac12\bigr)^{-1}$. Orthogonal polynomials for this weight are given by $P_{2n}(x) = L_n^{\alpha-1/2}(x^2)$ and $P_{2n+1}(x) = x L_n^{\alpha+1/2}(x^2)$. This family is discussed in more detail in what follows. In particular, when $\alpha = 0$ there is a relation to the Hermite polynomials (to see this, match up the leading coefficients): $H_{2n}(x) = (-1)^n 2^{2n} n!\, L_n^{-1/2}(x^2)$ and $H_{2n+1}(x) = (-1)^n 2^{2n+1} n!\, x L_n^{1/2}(x^2)$.
1.4.3 Gegenbauer polynomials
The Gegenbauer polynomials are also called ultraspherical polynomials. For a parameter $\lambda > -\frac12$ their weight function is $(1-x^2)^{\lambda-1/2}$ on $-1 < x < 1$; the constant is $B\bigl(\frac12, \lambda+\frac12\bigr)^{-1}$. The special cases $\lambda = 0, 1, \frac12$ correspond to the Chebyshev polynomials of the first and second kind and the Legendre polynomials, respectively. The usual generating function does not work for $\lambda = 0$, neither does it obviously imply orthogonality; thus we begin with a different choice of normalization.

Definition 1.4.7  For $n \ge 0$, let
$$P_n^{\lambda}(x) = \frac{(-1)^n}{2^n (\lambda+\frac12)_n}\, (1-x^2)^{-\lambda+1/2}\, \frac{\mathrm{d}^n}{\mathrm{d}x^n} (1-x^2)^{n+\lambda-1/2}.$$
Proposition 1.4.8  For $n \ge 0$,
$$P_n^{\lambda}(x) = \Bigl(\frac{1+x}{2}\Bigr)^n\, {}_2F_1\Bigl(-n,\, -n-\lambda+\tfrac12;\; \lambda+\tfrac12;\; \frac{x-1}{x+1}\Bigr) = {}_2F_1\Bigl(-n,\, n+2\lambda;\; \lambda+\tfrac12;\; \frac{1-x}{2}\Bigr).$$

Proof  Expand the formula in the definition with the product rule to obtain
$$P_n^{\lambda}(x) = \frac{(-1)^n}{2^n (\lambda+\frac12)_n} \sum_{j=0}^n \Bigl[ \binom{n}{j} (-n-\lambda+\tfrac12)_{n-j}\, (-n-\lambda+\tfrac12)_j\, (-1)^j (1-x)^j (1+x)^{n-j} \Bigr].$$
This produces the first ${}_2F_1$ formula; note that $(-n-\lambda+\frac12)_{n-j} = (-1)^{n-j} (\lambda+\frac12)_n / (\lambda+\frac12)_j$. The transformation in Proposition 1.2.4 gives the second expression.
As before, this leads to the orthogonality relations and structure constants.

1. The leading coefficient is $k_n = \dfrac{(n+2\lambda)_n}{2^n (\lambda+\frac12)_n} = \dfrac{2^{n-1} (\lambda+1)_{n-1}}{(2\lambda+1)_{n-1}}$, where the second equality holds for $n \ge 1$;

2. $\displaystyle\int_{-1}^1 q(x) P_n^{\lambda}(x) (1-x^2)^{\lambda-1/2} \,\mathrm{d}x = \frac{1}{2^n (\lambda+\frac12)_n} \int_{-1}^1 \frac{\mathrm{d}^n}{\mathrm{d}x^n} q(x)\; (1-x^2)^{n+\lambda-1/2} \,\mathrm{d}x$, for any polynomial $q(x)$;

3. $\displaystyle\int_{-1}^1 x^m P_n^{\lambda}(x) (1-x^2)^{\lambda-1/2} \,\mathrm{d}x = 0$ for $0 \le m < n$;

4. $h_n = \dfrac{1}{B(\frac12, \lambda+\frac12)} \displaystyle\int_{-1}^1 [P_n^{\lambda}(x)]^2 (1-x^2)^{\lambda-1/2} \,\mathrm{d}x = \dfrac{n!\, (n+2\lambda)}{2 (2\lambda+1)_n (n+\lambda)}$;

5. $P_{n+1}^{\lambda}(x) = \dfrac{2(n+\lambda)}{n+2\lambda}\, x P_n^{\lambda}(x) - \dfrac{n}{n+2\lambda}\, P_{n-1}^{\lambda}(x)$, where the coefficients are calculated from $k_n$ and $h_n$;

6. $a_n = \Bigl[\dfrac{(n+1)(n+2\lambda)}{4(n+\lambda)(n+1+\lambda)}\Bigr]^{1/2}$.
Proposition 1.4.9  For $n \ge 1$,
$$\frac{\mathrm{d}}{\mathrm{d}x} P_n^{\lambda}(x) = \frac{n(n+2\lambda)}{1+2\lambda}\, P_{n-1}^{\lambda+1}(x)$$
and $P_n^{\lambda}(x)$ is the unique polynomial solution of
$$(1-x^2) f''(x) - (2\lambda+1) x f'(x) + n(n+2\lambda) f(x) = 0, \qquad f(1) = 1.$$

Proof  The ${}_2F_1$ series satisfies the differential equation
$$\frac{\mathrm{d}}{\mathrm{d}x}\, {}_2F_1\Bigl({a, b \atop c}; x\Bigr) = \frac{ab}{c}\, {}_2F_1\Bigl({a+1, b+1 \atop c+1}; x\Bigr);$$
applying this to $P_n^{\lambda}(x)$ proves the first identity. Similarly, the differential equation satisfied by ${}_2F_1\Bigl({-n,\, n+2\lambda \atop \lambda+\frac12}; t\Bigr)$ leads to the stated equation, using $t = \frac12(1-x)$ and $\mathrm{d}/\mathrm{d}t = -2\,\mathrm{d}/\mathrm{d}x$.
There is a generating function which works for $\lambda > 0$ and, because of its elegant form, it is used to define Gegenbauer polynomials with a different normalization. This also explains the choice of parameter.

Definition 1.4.10  For $n \ge 0$, the polynomial $C_n^{\lambda}(x)$ is defined by
$$(1 - 2rx + r^2)^{-\lambda} = \sum_{n=0}^{\infty} C_n^{\lambda}(x)\, r^n, \qquad |r| < 1,\ |x| \le 1.$$

Proposition 1.4.11  For $n \ge 0$ and $\lambda > 0$, $C_n^{\lambda}(x) = \dfrac{(2\lambda)_n}{n!}\, P_n^{\lambda}(x)$ and
$$C_n^{\lambda}(x) = \frac{(\lambda)_n\, 2^n}{n!}\, x^n\, {}_2F_1\Bigl(-\frac{n}{2},\, \frac{1-n}{2};\; 1-\lambda-n;\; \frac{1}{x^2}\Bigr).$$
Proof  First, expand the generating function as follows:
$$\bigl[1 + 2r(1-x) - 2r + r^2\bigr]^{-\lambda} = (1-r)^{-2\lambda} \Bigl[1 - \frac{2r(x-1)}{(1-r)^2}\Bigr]^{-\lambda}$$
$$= \sum_{j=0}^{\infty} \frac{(\lambda)_j}{j!}\, 2^j r^j (x-1)^j (1-r)^{-2\lambda-2j} = \sum_{i,j=0}^{\infty} \frac{(\lambda)_j (2\lambda+2j)_i}{j!\, i!}\, 2^j r^{j+i} (x-1)^j$$
$$= \sum_{n=0}^{\infty} r^n\, \frac{(2\lambda)_n}{n!} \sum_{j=0}^n \frac{(-n)_j (2\lambda+n)_j}{(\lambda+\frac12)_j\, j!} \Bigl(\frac{1-x}{2}\Bigr)^j,$$
where in the last line $i$ has been replaced by $n - j$, and $(2\lambda+2j)_{n-j} = (2\lambda)_{2j} (2\lambda+2j)_{n-j}/(2\lambda)_{2j} = (2\lambda)_n (2\lambda+n)_j / \bigl[2^{2j} (\lambda)_j (\lambda+\frac12)_j\bigr]$.

Second, expand the generating function as follows:
$$\bigl[1 - (2rx - r^2)\bigr]^{-\lambda} = \sum_{j=0}^{\infty} \frac{(\lambda)_j}{j!}\, (2rx - r^2)^j = \sum_{j=0}^{\infty} \sum_{i=0}^j \frac{(\lambda)_j}{(j-i)!\, i!}\, (-1)^i (2x)^{j-i} r^{i+j}$$
$$= \sum_{n=0}^{\infty} r^n \sum_{i \le n/2} \frac{(\lambda)_{n-i}}{i!\, (n-2i)!}\, (-1)^i (2x)^{n-2i},$$
where in the last line $j$ has been replaced by $n - i$. The latter sum equals the stated formula.
We now list the various structure constants for $C_n^{\lambda}(x)$ and use primes to distinguish them from the values for $P_n^{\lambda}(x)$:

1. $k_n' = \dfrac{(\lambda)_n\, 2^n}{n!}$;

2. $h_n' = \dfrac{\lambda\, (2\lambda)_n}{(n+\lambda)\, n!} = \dfrac{\lambda}{n+\lambda}\, C_n^{\lambda}(1)$;

3. $C_{n+1}^{\lambda}(x) = \dfrac{2(n+\lambda)}{n+1}\, x\, C_n^{\lambda}(x) - \dfrac{n+2\lambda-1}{n+1}\, C_{n-1}^{\lambda}(x)$;

4. $\dfrac{\mathrm{d}}{\mathrm{d}x} C_n^{\lambda}(x) = 2\lambda\, C_{n-1}^{\lambda+1}(x)$;

5. $C_n^{\lambda}(1) = \dfrac{(2\lambda)_n}{n!}$.
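The defining generating function can be verified against the recurrence in item 3; a small numerical sketch (not from the text; the parameter and evaluation points are arbitrary choices):

```python
# C_n^lam from the recurrence C_{n+1} = (2(n+lam) x C_n - (n+2lam-1) C_{n-1})/(n+1)
def gegenbauer_coeffs(n, lam, x):
    c = [1.0, 2 * lam * x]
    for k in range(1, n):
        c.append((2 * (k + lam) * x * c[k] - (k + 2 * lam - 1) * c[k - 1]) / (k + 1))
    return c[: n + 1]

lam, x, r = 0.7, 0.4, 0.3
lhs = (1 - 2 * r * x + r * r) ** (-lam)
rhs = sum(cn * r**k for k, cn in enumerate(gegenbauer_coeffs(40, lam, x)))
print(lhs, rhs)   # agree to within the truncation error ~ r^41
```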
There is another expansion resembling the generating function which is used in the Poisson kernel for spherical harmonics.

Proposition 1.4.12  For $|x| \le 1$ and $|r| < 1$,
$$\frac{1-r^2}{(1-2xr+r^2)^{\lambda+1}} = \sum_{n=0}^{\infty} \frac{n+\lambda}{\lambda}\, C_n^{\lambda}(x)\, r^n = \sum_{n=0}^{\infty} \frac{1}{h_n}\, P_n^{\lambda}(x)\, r^n.$$

Proof  Expand the left-hand side as $\sum_{n=0}^{\infty} \bigl[C_n^{\lambda+1}(x) - C_{n-2}^{\lambda+1}(x)\bigr] r^n$. Then
$$C_n^{\lambda+1}(x) - C_{n-2}^{\lambda+1}(x) = \sum_{i \le n/2} \frac{(\lambda+1)_{n-i}}{i!\, (n-2i)!}\, (-1)^i (2x)^{n-2i} - \sum_{j \le n/2 - 1} \frac{(\lambda+1)_{n-2-j}}{j!\, (n-2-2j)!}\, (-1)^j (2x)^{n-2-2j}$$
$$= \sum_{i \le n/2} \frac{(\lambda+1)_{n-1-i}}{i!\, (n-2i)!}\, (-1)^i (2x)^{n-2i}\, (\lambda+n).$$
In the sum for $C_{n-2}^{\lambda+1}(x)$ replace $j$ by $i - 1$, and combine the two sums. The end result is exactly $[(n+\lambda)/\lambda]\, C_n^{\lambda}(x)$. Further,
$$\frac{(n+\lambda)(2\lambda)_n}{\lambda\, n!} = h_n^{-1}$$
(the form for $h_n$ given in the list before Proposition 1.4.9 avoids the singularity at $\lambda = 0$).
Legendre and Chebyshev polynomials

The special case $\lambda = \frac12$ corresponds to the Legendre polynomials, often denoted by $P_n(x) = P_n^{1/2}(x) = C_n^{1/2}(x)$ (in later discussions we will avoid this notation in order to reserve $P_n$ for general polynomials). Observe that these polynomials are orthogonal for $\mathrm{d}x$ on $-1 \le x \le 1$. Related formulae for the Legendre polynomials are $h_n = 1/(2n+1)$ and
$$(n+1) P_{n+1}(x) = (2n+1)\, x P_n(x) - n P_{n-1}(x).$$

The case $\lambda = 0$ corresponds to the Chebyshev polynomials of the first kind, denoted by $T_n(x) = P_n^0(x)$. Here $h_0 = 1$ and $h_n = \frac12$ for $n > 0$. The three-term recurrence for $n \ge 1$ is
$$T_{n+1}(x) = 2x\, T_n(x) - T_{n-1}(x)$$
and $T_1(x) = x T_0$. Of course, together with the identity $\cos(n+1)\theta = 2\cos\theta\, \cos n\theta - \cos(n-1)\theta$ for $n \ge 1$, this shows that $T_n(\cos\theta) = \cos n\theta$. Accordingly the zeros of $T_n$ are at
$$\cos\frac{(2j-1)\pi}{2n} \quad\text{for } j = 1, 2, \ldots, n.$$
Also, the associated Gaussian quadrature has equal weights at the nodes (a simple but amusing calculation).

The case $\lambda = 1$ corresponds to the Chebyshev polynomials of the second kind, denoted by $U_n(x) = (n+1) P_n^1(x) = C_n^1(x)$. For this family $h_n = 1$ and, for $n \ge 0$,
$$U_{n+1}(x) = 2x\, U_n(x) - U_{n-1}(x).$$
This relation, together with
$$\frac{\sin(n+2)\theta}{\sin\theta} = 2\cos\theta\, \frac{\sin(n+1)\theta}{\sin\theta} - \frac{\sin n\theta}{\sin\theta}$$
for $n \ge 0$, implies that
$$U_n(\cos\theta) = \frac{\sin(n+1)\theta}{\sin\theta}.$$
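The trigonometric identity $T_n(\cos\theta) = \cos n\theta$ and the equal-weight quadrature remark can both be tested directly; a short sketch (not from the text; sample angles and moments are arbitrary):

```python
import numpy as np

# T_n via the recurrence T_{n+1} = 2x T_n - T_{n-1}
def T(n, x):
    t0, t1 = np.ones_like(x), x
    for _ in range(n - 1):
        t0, t1 = t1, 2 * x * t1 - t0
    return t1 if n >= 1 else t0

t = np.linspace(0.1, 3.0, 7)
print(np.max(np.abs(T(5, np.cos(t)) - np.cos(5 * t))))   # ~ 0

# Gauss-Chebyshev quadrature: equal weights 1/n at cos((2j-1) pi / (2n)),
# for the normalized weight 1/(pi sqrt(1-x^2)); moments are (2k choose k)/4^k.
n = 6
nodes = np.cos((2 * np.arange(1, n + 1) - 1) * np.pi / (2 * n))
for m, exact in [(0, 1.0), (2, 0.5), (4, 0.375)]:
    print(m, abs(np.sum(nodes**m) / n - exact))          # all ~ 0
```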
1.4.4 Jacobi polynomials
For parameters $\alpha, \beta > -1$ the weight function for the Jacobi polynomials is $(1-x)^{\alpha}(1+x)^{\beta}$ on $-1 < x < 1$; the constant $c$ is $2^{-\alpha-\beta-1} B(\alpha+1, \beta+1)^{-1}$.

Definition 1.4.13  For $n \ge 0$, let
$$P_n^{(\alpha,\beta)}(x) = \frac{(-1)^n}{2^n\, n!}\, (1-x)^{-\alpha}(1+x)^{-\beta}\, \frac{\mathrm{d}^n}{\mathrm{d}x^n} \bigl[(1-x)^{\alpha+n}(1+x)^{\beta+n}\bigr].$$

Proposition 1.4.14  For $n \ge 0$,
$$P_n^{(\alpha,\beta)}(x) = \frac{(\alpha+1)_n}{n!} \Bigl(\frac{1+x}{2}\Bigr)^n\, {}_2F_1\Bigl(-n,\, -n-\beta;\; \alpha+1;\; \frac{x-1}{x+1}\Bigr) = \frac{(\alpha+1)_n}{n!}\, {}_2F_1\Bigl(-n,\, n+\alpha+\beta+1;\; \alpha+1;\; \frac{1-x}{2}\Bigr).$$
Proof  Expand the formula in Definition 1.4.13 with the product rule to obtain
$$P_n^{(\alpha,\beta)}(x) = \frac{(-1)^n}{2^n\, n!} \sum_{j=0}^n \Bigl[ \binom{n}{j} (-n-\alpha)_{n-j}\, (-n-\beta)_j\, (-1)^j (1-x)^j (1+x)^{n-j} \Bigr].$$
This produces the first ${}_2F_1$ formula (note that $(-n-\alpha)_{n-j} = (-1)^{n-j} (\alpha+1)_n / (\alpha+1)_j$). The transformation in Proposition 1.2.4 gives the second expression.
The orthogonality relations and structure constants for the Jacobi polynomials are found in the same way as for the Gegenbauer polynomials.

1. The leading coefficient is $k_n = \dfrac{(n+\alpha+\beta+1)_n}{2^n\, n!}$;

2. $\displaystyle\int_{-1}^1 q(x) P_n^{(\alpha,\beta)}(x) (1-x)^{\alpha}(1+x)^{\beta} \,\mathrm{d}x = \frac{1}{2^n\, n!} \int_{-1}^1 \Bigl(\frac{\mathrm{d}}{\mathrm{d}x}\Bigr)^n q(x)\; (1-x)^{\alpha+n}(1+x)^{\beta+n} \,\mathrm{d}x$ for any polynomial $q(x)$;

3. $\displaystyle\int_{-1}^1 x^m P_n^{(\alpha,\beta)}(x) (1-x)^{\alpha}(1+x)^{\beta} \,\mathrm{d}x = 0$ for $0 \le m < n$;

4. $h_n = \dfrac{1}{2^{\alpha+\beta+1} B(\alpha+1, \beta+1)} \displaystyle\int_{-1}^1 P_n^{(\alpha,\beta)}(x)^2 (1-x)^{\alpha}(1+x)^{\beta} \,\mathrm{d}x = \dfrac{(\alpha+1)_n (\beta+1)_n (\alpha+\beta+n+1)}{n!\, (\alpha+\beta+2)_n (\alpha+\beta+2n+1)}$;

5. $P_n^{(\alpha,\beta)}(-x) = (-1)^n P_n^{(\beta,\alpha)}(x)$ and $P_n^{(\alpha,\beta)}(-1) = (-1)^n (\beta+1)_n/n!$.
The differential relations follow easily from the hypergeometric series.

Proposition 1.4.15  For $n \ge 1$,
$$\frac{\mathrm{d}}{\mathrm{d}x} P_n^{(\alpha,\beta)}(x) = \frac{n+\alpha+\beta+1}{2}\, P_{n-1}^{(\alpha+1,\beta+1)}(x),$$
where $P_n^{(\alpha,\beta)}(x)$ is the unique polynomial solution of
$$(1-x^2) f''(x) - \bigl(\alpha - \beta + (\alpha+\beta+2)x\bigr) f'(x) + n(n+\alpha+\beta+1) f(x) = 0$$
with $f(1) = (\alpha+1)_n/n!$.
The three-term recurrence calculation requires extra work because the weight lacks symmetry.

Proposition 1.4.16  For $n \ge 0$,
$$P_{n+1}^{(\alpha,\beta)}(x) = \frac{(2n+\alpha+\beta+1)(2n+\alpha+\beta+2)}{2(n+1)(n+\alpha+\beta+1)}\, x\, P_n^{(\alpha,\beta)}(x) + \frac{(2n+\alpha+\beta+1)(\alpha^2-\beta^2)}{2(n+1)(n+\alpha+\beta+1)(2n+\alpha+\beta)}\, P_n^{(\alpha,\beta)}(x)$$
$$- \frac{(\alpha+n)(\beta+n)(2n+\alpha+\beta+2)}{(n+1)(n+\alpha+\beta+1)(2n+\alpha+\beta)}\, P_{n-1}^{(\alpha,\beta)}(x).$$
Proof  The values of $A_n = k_{n+1}/k_n$ and $C_n = k_{n+1} k_{n-1} h_n / (k_n^2 h_{n-1})$ (see Proposition 1.3.7) can be computed directly. To compute the $B_n$ term, suppose that
$$q(x) = v_0 \Bigl(\frac{1-x}{2}\Bigr)^n + v_1 \Bigl(\frac{1-x}{2}\Bigr)^{n-1};$$
then
$$2^{-\alpha-\beta-1} B(\alpha+1, \beta+1)^{-1} \int_{-1}^1 x\, q(x)\, P_n^{(\alpha,\beta)}(x) (1-x)^{\alpha}(1+x)^{\beta} \,\mathrm{d}x$$
$$= (-1)^n\, 2^{-2n-\alpha-\beta-1}\, B(\alpha+1, \beta+1)^{-1} \int_{-1}^1 \bigl[-(n+1) v_0 (1-x) + (v_0 - 2v_1)\bigr] (1-x)^{n+\alpha}(1+x)^{n+\beta} \,\mathrm{d}x$$
$$= (-1)^n\, \frac{(\alpha+1)_n (\beta+1)_n}{(\alpha+\beta+2)_{2n}}\, v_0 \Bigl[ -\frac{2(n+1)(\alpha+n+1)}{\alpha+\beta+2n+2} + 1 - \frac{2 v_1}{v_0} \Bigr].$$
From the series for $P_n^{(\alpha,\beta)}(x)$ we find that
$$v_0 = (-2)^n k_n, \qquad \frac{v_1}{v_0} = -\frac{n(\alpha+n)}{\alpha+\beta+2n},$$
and the factor within large brackets evaluates to
$$\frac{\beta^2 - \alpha^2}{(\alpha+\beta+2n)(\alpha+\beta+2n+2)}.$$
Multiply the integral by $-k_{n+1}/(k_n h_n)$ to get the stated result.
Corollary 1.4.17  The coefficient for the corresponding orthonormal polynomials (and $n \ge 0$) is
$$a_n = \frac{2}{\alpha+\beta+2n+2} \Bigl[ \frac{(\alpha+n+1)(\beta+n+1)(n+1)(\alpha+\beta+n+1)}{(\alpha+\beta+2n+1)(\alpha+\beta+2n+3)} \Bigr]^{1/2}.$$
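Since the recurrence of Proposition 1.4.16 is intricate, a numerical cross-check against the ${}_2F_1$ series is worthwhile; a sketch (not from the text; parameter and sample values are arbitrary):

```python
from math import factorial

def poch(c, n):
    r = 1.0
    for i in range(n):
        r *= c + i
    return r

def jacobi_series(n, a, b, x):       # Proposition 1.4.14
    s = sum(poch(-n, j) * poch(n + a + b + 1, j)
            / (poch(a + 1, j) * factorial(j)) * ((1 - x) / 2) ** j
            for j in range(n + 1))
    return poch(a + 1, n) / factorial(n) * s

def jacobi_rec(n, a, b, x):          # Proposition 1.4.16
    p0, p1 = 1.0, (a + b + 2) / 2 * x + (a - b) / 2
    if n == 0:
        return p0
    for k in range(1, n):
        s = 2 * k + a + b
        A = (s + 1) * (s + 2) / (2 * (k + 1) * (k + a + b + 1))
        B = (s + 1) * (a * a - b * b) / (2 * (k + 1) * (k + a + b + 1) * s)
        C = (a + k) * (b + k) * (s + 2) / ((k + 1) * (k + a + b + 1) * s)
        p0, p1 = p1, (A * x + B) * p1 - C * p0
    return p1

a, b, x = 0.5, -0.25, 0.3
print(jacobi_series(4, a, b, x), jacobi_rec(4, a, b, x))   # equal
print(jacobi_rec(3, a, b, 1.0), poch(a + 1, 3) / factorial(3))  # P_n(1) = (a+1)_n/n!
```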
Of course, the Jacobi weight includes the Gegenbauer weight as a special case. In terms of the usual notation the relation is
$$C_n^{\lambda}(x) = \frac{(2\lambda)_n}{(\lambda+\frac12)_n}\, P_n^{(\lambda-1/2,\, \lambda-1/2)}(x).$$
In the next section we will use the Jacobi polynomials to find the orthogonal polynomials for the weight $|x|^{2\mu}(1-x^2)^{\lambda-1/2}$ on $-1 < x < 1$.
1.5 Modified Classical Polynomials
The weights for the Hermite and Gegenbauer polynomials can be modified by multiplying them by $|x|^{2\mu}$. The resulting orthogonal polynomials can be expressed in terms of the Laguerre and Jacobi polynomials. Further, there is an integral transform which maps the original polynomials to the modified ones. We also present in this section a limiting relation between the Gegenbauer and Hermite polynomials.

Note that the even- and odd-degree polynomials in the modified families have separate definitions. For reasons to be explained later, the integral transform is denoted by $V$ (it is an implicit function of $\mu$).
Definition 1.5.1  For $\mu \ge 0$, let $c_{\mu} = \bigl[2^{2\mu} B(\mu, \mu+1)\bigr]^{-1}$ (see Definition 1.1.2) and, for any polynomial $q(x)$, let the integral transform $V$ be defined by
$$Vq(x) = c_{\mu} \int_{-1}^1 q(tx)\, (1-t)^{\mu-1} (1+t)^{\mu} \,\mathrm{d}t.$$

Note that
$$\int_{-1}^1 (1-t)^{\mu-1} (1+t)^{\mu} \,\mathrm{d}t = 2 \int_0^1 (1-t^2)^{\mu-1} \,\mathrm{d}t = B\bigl(\tfrac12, \mu\bigr);$$
thus $c_{\mu}$ can also be written as $B\bigl(\frac12, \mu\bigr)^{-1}$ (and this proves the duplication formula for $\Gamma$, given in item 5 of the list after Definition 1.1.2). The limit
$$\lim_{\mu \to 0} c_{\mu} \int_{-1}^1 f(x)\, (1-x^2)^{\mu-1} \,\mathrm{d}x = \tfrac12 \bigl[f(1) + f(-1)\bigr] \tag{1.5.1}$$
shows that $V$ is the identity operator when $\mu = 0$.
Lemma 1.5.2  For $n \ge 0$,
$$V(x^{2n}) = \frac{(\frac12)_n}{(\mu+\frac12)_n}\, x^{2n} \quad\text{and}\quad V(x^{2n+1}) = \frac{(\frac12)_{n+1}}{(\mu+\frac12)_{n+1}}\, x^{2n+1}.$$
Proof  For the even-index case
$$c_{\mu} \int_{-1}^1 t^{2n} (1-t)^{\mu-1} (1+t)^{\mu} \,\mathrm{d}t = 2 c_{\mu} \int_0^1 t^{2n} (1-t^2)^{\mu-1} \,\mathrm{d}t = \frac{B\bigl(n+\frac12, \mu\bigr)}{B\bigl(\frac12, \mu\bigr)} = \frac{(\frac12)_n}{(\mu+\frac12)_n},$$
and for the odd-index case
$$c_{\mu} \int_{-1}^1 t^{2n+1} (1-t)^{\mu-1} (1+t)^{\mu} \,\mathrm{d}t = 2 c_{\mu} \int_0^1 t^{2n+2} (1-t^2)^{\mu-1} \,\mathrm{d}t = \frac{B\bigl(n+\frac32, \mu\bigr)}{B\bigl(\frac12, \mu\bigr)} = \frac{(\frac12)_{n+1}}{(\mu+\frac12)_{n+1}}.$$
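Lemma 1.5.2 can be tested by direct quadrature of the defining integral; a sketch (not from the text; the value of $\mu$, the grid size, and the crude midpoint rule — which only resolves the integrable endpoint singularity to a few digits — are arbitrary choices):

```python
import numpy as np
from math import gamma

mu = 0.75
# c_mu = [2^{2 mu} B(mu, mu+1)]^{-1}
c_mu = 1.0 / (2 ** (2 * mu) * gamma(mu) * gamma(mu + 1) / gamma(2 * mu + 1))

N = 400000
h = 2.0 / N
t = -1.0 + h * (np.arange(N) + 0.5)
w = (1 - t) ** (mu - 1) * (1 + t) ** mu     # integrable singularity at t = 1

def poch(a, n):
    r = 1.0
    for i in range(n):
        r *= a + i
    return r

def V_monomial(n):          # V(x^n) = (coefficient) * x^n; return the coefficient
    return c_mu * h * np.sum(t**n * w)

print(V_monomial(0), 1.0)                                  # V(1) = 1
print(V_monomial(4), poch(0.5, 2) / poch(mu + 0.5, 2))     # (1/2)_2 / (mu+1/2)_2
print(V_monomial(5), poch(0.5, 3) / poch(mu + 0.5, 3))     # (1/2)_3 / (mu+1/2)_3
```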
1.5.1 Generalized Hermite polynomials
For the generalized Hermite polynomials $H_n^{\mu}$, the weight function is $|x|^{2\mu} \mathrm{e}^{-x^2}$ on $x \in \mathbb{R}$ and the constant is $\Gamma\bigl(\mu+\frac12\bigr)^{-1}$. From the formula
$$\int_{\mathbb{R}} f(x^2)\, |x|^{2\mu} \mathrm{e}^{-x^2} \,\mathrm{d}x = \int_0^{\infty} f(t)\, t^{\mu-1/2} \mathrm{e}^{-t} \,\mathrm{d}t,$$
we see that the polynomials $\{P_n(x) : n \ge 0\}$ given by $P_{2n}(x) = L_n^{\mu-1/2}(x^2)$ and $P_{2n+1}(x) = x L_n^{\mu+1/2}(x^2)$ form an orthogonal family. The normalization is chosen so that the leading coefficient becomes $2^n$.
Definition 1.5.3  For $\mu \ge 0$ the generalized Hermite polynomials $H_n^{\mu}(x)$ are defined by
$$H_{2n}^{\mu}(x) = (-1)^n 2^{2n} n!\, L_n^{\mu-1/2}(x^2),$$
$$H_{2n+1}^{\mu}(x) = (-1)^n 2^{2n+1} n!\, x L_n^{\mu+1/2}(x^2).$$
The usual computations show the following (for $n \ge 0$):

1. $k_n = 2^n$;

2. $h_{2n} = 2^{4n} n!\, \bigl(\mu+\frac12\bigr)_n$ and $h_{2n+1} = 2^{4n+2} n!\, \bigl(\mu+\frac12\bigr)_{n+1}$;

3. $H_{2n+1}^{\mu}(x) = 2x\, H_{2n}^{\mu}(x) - 4n\, H_{2n-1}^{\mu}(x)$,
   $H_{2n+2}^{\mu}(x) = 2x\, H_{2n+1}^{\mu}(x) - 2(2n+1+2\mu)\, H_{2n}^{\mu}(x)$.
The integral transform $V$ maps the Hermite polynomials $H_{2n}$ and $H_{2n+1}$ onto this family:

Theorem 1.5.4  For $n \ge 0$ and $\mu > 0$,
$$VH_{2n} = \frac{(\frac12)_n}{(\mu+\frac12)_n}\, H_{2n}^{\mu} \quad\text{and}\quad VH_{2n+1} = \frac{(\frac12)_{n+1}}{(\mu+\frac12)_{n+1}}\, H_{2n+1}^{\mu}.$$
Proof  The series for the Hermite polynomials can now be used. For the even-index case,
$$VH_{2n}(x) = \sum_{j=0}^n \frac{(2n)!}{(2n-2j)!\, j!}\, (-1)^j (2x)^{2n-2j}\, \frac{(\frac12)_{n-j}}{(\mu+\frac12)_{n-j}}$$
$$= 2^{2n} (-1)^n \bigl(\tfrac12\bigr)_n \sum_{i=0}^{n} \frac{(-n)_i}{i!\, (\mu+\frac12)_i}\, x^{2i} = 2^{2n} (-1)^n \bigl(\tfrac12\bigr)_n\, \frac{n!}{(\mu+\frac12)_n}\, L_n^{\mu-1/2}(x^2).$$
For the odd-index case,
$$VH_{2n+1}(x) = \sum_{j=0}^n \frac{(2n+1)!}{(2n+1-2j)!\, j!}\, (-1)^j (2x)^{2n+1-2j}\, \frac{(\frac12)_{n-j+1}}{(\mu+\frac12)_{n-j+1}}$$
$$= (-1)^n \frac{(2n+1)!}{n!}\, x \sum_{i=0}^{n} \frac{(-n)_i}{i!\, (\mu+\frac12)_{i+1}}\, x^{2i} = (-1)^n \frac{(2n+1)!}{(\mu+\frac12)_{n+1}}\, x L_n^{\mu+1/2}(x^2),$$
where we have changed the index of summation by setting $i = n - j$ and used the relations $(2n)! = 2^{2n} n!\, \bigl(\frac12\bigr)_n$ and $(2n+1)! = 2^{2n+1} n!\, \bigl(\frac12\bigr)_{n+1}$.
1.5.2 Generalized Gegenbauer polynomials
For parameters $\lambda, \mu$ with $\lambda > -\frac12$ and $\mu \ge 0$ the weight function is $|x|^{2\mu}(1-x^2)^{\lambda-1/2}$ on $-1 < x < 1$ and the constant $c$ is $B\bigl(\mu+\frac12, \lambda+\frac12\bigr)^{-1}$. Suppose that $f$ and $g$ are polynomials; then
$$\int_{-1}^1 f(x^2) g(x^2)\, |x|^{2\mu} (1-x^2)^{\lambda-1/2} \,\mathrm{d}x = 2^{-\lambda-\mu} \int_{-1}^1 f\Bigl(\frac{1+t}{2}\Bigr) g\Bigl(\frac{1+t}{2}\Bigr) (1+t)^{\mu-1/2} (1-t)^{\lambda-1/2} \,\mathrm{d}t$$
and
$$\int_{-1}^1 \bigl(x f(x^2)\bigr)\bigl(x g(x^2)\bigr)\, |x|^{2\mu} (1-x^2)^{\lambda-1/2} \,\mathrm{d}x = 2^{-\lambda-\mu-1} \int_{-1}^1 f\Bigl(\frac{1+t}{2}\Bigr) g\Bigl(\frac{1+t}{2}\Bigr) (1+t)^{\mu+1/2} (1-t)^{\lambda-1/2} \,\mathrm{d}t.$$
Accordingly, the orthogonal polynomials $\{P_n : n \ge 0\}$ for the above weight function are given by the formulae
$$P_{2n}(x) = P_n^{(\lambda-1/2,\, \mu-1/2)}(2x^2-1) \quad\text{and}\quad P_{2n+1}(x) = x\, P_n^{(\lambda-1/2,\, \mu+1/2)}(2x^2-1).$$
As in the Hermite polynomial case, we choose a normalization which corresponds in a natural way with $V$.
Definition 1.5.5  For $\lambda > -\frac12$, $\mu \ge 0$, $n \ge 0$, the generalized Gegenbauer polynomials $C_n^{(\lambda,\mu)}(x)$ are defined by
$$C_{2n}^{(\lambda,\mu)}(x) = \frac{(\lambda+\mu)_n}{(\mu+\frac12)_n}\, P_n^{(\lambda-1/2,\, \mu-1/2)}(2x^2-1),$$
$$C_{2n+1}^{(\lambda,\mu)}(x) = \frac{(\lambda+\mu)_{n+1}}{(\mu+\frac12)_{n+1}}\, x\, P_n^{(\lambda-1/2,\, \mu+1/2)}(2x^2-1).$$
The structural constants are as follows (for $n \ge 0$):

1. $k_{2n}^{(\lambda,\mu)} = \dfrac{(\lambda+\mu)_{2n}}{(\mu+\frac12)_n\, n!}$ and $k_{2n+1}^{(\lambda,\mu)} = \dfrac{(\lambda+\mu)_{2n+1}}{(\mu+\frac12)_{n+1}\, n!}$;

2. $h_{2n}^{(\lambda,\mu)} = \dfrac{(\lambda+\frac12)_n\, (\lambda+\mu)_n\, (\lambda+\mu)}{n!\, (\mu+\frac12)_n\, (\lambda+\mu+2n)}$,
   $h_{2n+1}^{(\lambda,\mu)} = \dfrac{(\lambda+\frac12)_n\, (\lambda+\mu)_{n+1}\, (\lambda+\mu)}{n!\, (\mu+\frac12)_{n+1}\, (\lambda+\mu+2n+1)}$;

3. $C_{2n+1}^{(\lambda,\mu)}(x) = \dfrac{2(\lambda+\mu+2n)}{2\mu+2n+1}\, x\, C_{2n}^{(\lambda,\mu)}(x) - \dfrac{2\lambda+2n-1}{2\mu+2n+1}\, C_{2n-1}^{(\lambda,\mu)}(x)$,
   $C_{2n+2}^{(\lambda,\mu)}(x) = \dfrac{\lambda+\mu+2n+1}{n+1}\, x\, C_{2n+1}^{(\lambda,\mu)}(x) - \dfrac{\lambda+\mu+n}{n+1}\, C_{2n}^{(\lambda,\mu)}(x)$;

4. $C_n^{(\lambda,\mu)}(1) = \dfrac{n+\lambda+\mu}{\lambda+\mu}\, h_n^{(\lambda,\mu)}$.
The modified polynomials $C_n^{(\lambda,\mu)}$ are transforms of the ordinary Gegenbauer polynomials $C_n^{\lambda+\mu}$.

Theorem 1.5.6  For $\lambda > -\frac12$, $\mu \ge 0$, $n \ge 0$,
$$VC_n^{\lambda+\mu}(x) = C_n^{(\lambda,\mu)}(x).$$
Proof  We use the series found in Subsection 1.4.3. In the even-index case,
$$VC_{2n}^{\lambda+\mu}(x) = \sum_{j=0}^n \frac{(\lambda+\mu)_{2n-j}}{j!\, (2n-2j)!}\, (-1)^j (2x)^{2n-2j}\, \frac{(\frac12)_{n-j}}{(\mu+\frac12)_{n-j}}$$
$$= (-1)^n \frac{(\lambda+\mu)_n}{n!}\, {}_2F_1\Bigl(-n,\, n+\lambda+\mu;\; \mu+\tfrac12;\; x^2\Bigr)$$
$$= \frac{(-1)^n (\lambda+\mu)_n}{(\mu+\frac12)_n}\, P_n^{(\mu-1/2,\, \lambda-1/2)}(1-2x^2) = \frac{(\lambda+\mu)_n}{(\mu+\frac12)_n}\, P_n^{(\lambda-1/2,\, \mu-1/2)}(2x^2-1),$$
and in the odd-index case
$$VC_{2n+1}^{\lambda+\mu}(x) = \sum_{j=0}^n \frac{(\lambda+\mu)_{2n+1-j}}{j!\, (2n+1-2j)!}\, (-1)^j (2x)^{2n+1-2j}\, \frac{(\frac12)_{n+1-j}}{(\mu+\frac12)_{n+1-j}}$$
$$= (-1)^n x\, \frac{(\lambda+\mu)_{n+1}}{n!\, (\mu+\frac12)}\, {}_2F_1\Bigl(-n,\, n+\lambda+\mu+1;\; \mu+\tfrac32;\; x^2\Bigr)$$
$$= \frac{x (-1)^n (\lambda+\mu)_{n+1}}{(\mu+\frac12)_{n+1}}\, P_n^{(\mu+1/2,\, \lambda-1/2)}(1-2x^2) = \frac{x (\lambda+\mu)_{n+1}}{(\mu+\frac12)_{n+1}}\, P_n^{(\lambda-1/2,\, \mu+1/2)}(2x^2-1).$$
Note that the index of summation was changed to $i = n - j$ in both cases.
1.5.3 A limiting relation
With suitable renormalization the weight $\bigl(1 - x^2/\lambda\bigr)^{\lambda-1/2}$ tends to $\mathrm{e}^{-x^2}$. The effect on the corresponding orthogonal polynomials is as follows.
Proposition 1.5.7  For $\mu, n \ge 0$, $\lim_{\lambda \to \infty} \lambda^{-n/2}\, C_n^{(\lambda,\mu)}(x/\sqrt{\lambda}) = s_n H_n^{\mu}(x)$, where
$$s_{2n} = \bigl[2^{2n} n!\, \bigl(\mu+\tfrac12\bigr)_n\bigr]^{-1}, \qquad s_{2n+1} = \bigl[2^{2n+1} n!\, \bigl(\mu+\tfrac12\bigr)_{n+1}\bigr]^{-1}.$$
Proof  Replace $x$ by $x/\sqrt{\lambda}$ in the ${}_2F_1$ series expressions (with argument $x^2$) found above. Observe that $\lim_{\lambda \to \infty} \lambda^{-j}(\lambda+a)_j = 1$ for any fixed $a$ and any $j = 1, 2, \ldots$
By specializing to $\mu = 0$ the following is obtained.

Corollary 1.5.8  For $n \ge 0$, $\displaystyle\lim_{\lambda \to \infty} \lambda^{-n/2}\, C_n^{\lambda}\Bigl(\frac{x}{\sqrt{\lambda}}\Bigr) = \frac{1}{n!}\, H_n(x)$.
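The limiting relation can be observed numerically; a sketch (not from the text; degree, sample point, and the sequence of $\lambda$ values are arbitrary):

```python
from math import factorial, sqrt

def gegenbauer(n, lam, x):    # recurrence from Subsection 1.4.3, item 3
    c0, c1 = 1.0, 2 * lam * x
    if n == 0:
        return c0
    for k in range(1, n):
        c0, c1 = c1, (2 * (k + lam) * x * c1 - (k + 2 * lam - 1) * c0) / (k + 1)
    return c1

def hermite(n, x):            # H_{n+1} = 2x H_n - 2n H_{n-1}
    h0, h1 = 1.0, 2 * x
    if n == 0:
        return h0
    for k in range(1, n):
        h0, h1 = h1, 2 * x * h1 - 2 * k * h0
    return h1

n, x = 4, 0.9
for lam in [1e2, 1e4, 1e6]:
    print(lam ** (-n / 2) * gegenbauer(n, lam, x / sqrt(lam)),
          hermite(n, x) / factorial(n))   # left column converges to the right
```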
1.6 Notes
The standard reference on hypergeometric functions is the book of Bailey [1935]. It also contains the Lauricella series of two variables. See also the books by Andrews, Askey and Roy [1999], Appell and de Fériet [1926], and Erdélyi, Magnus, Oberhettinger and Tricomi [1953]. For multivariable Lauricella series, see Exton [1976].

The standard reference for orthogonal polynomials in one variable is the book of Szegő [1975]. See also Chihara [1978], Freud [1966] and Ismail [2005].
2
Orthogonal Polynomials in Two Variables
We start with several examples of families of orthogonal polynomials of two variables. Apart from a brief introduction in the first section, the general properties and theory will be deferred until the next chapter, where they will be given in $d$ variables for all $d \ge 2$. Here we focus on explicit constructions and concrete examples, most of which will be given explicitly in terms of classical orthogonal polynomials of one variable.
2.1 Introduction
A polynomial of two variables in the variables $x, y$ is a finite linear combination of the monomials $x^j y^k$. The total degree of $x^j y^k$ is defined as $j + k$ and the total degree of a polynomial is defined as the highest degree of the monomials that it contains. Let $\Pi^2 = \mathbb{R}[x, y]$ be the space of polynomials in two variables. For $n \in \mathbb{N}_0$, the space of homogeneous polynomials of degree $n$ is denoted by
$$\mathcal{P}_n^2 := \operatorname{span}\{x^j y^k : j + k = n,\ j, k \in \mathbb{N}_0\}$$
and the space of polynomials of total degree at most $n$ is denoted by
$$\Pi_n^2 := \operatorname{span}\{x^j y^k : j + k \le n,\ j, k \in \mathbb{N}_0\}.$$
Evidently, $\Pi_n^2$ is a direct sum of $\mathcal{P}_m^2$ for $m = 0, 1, \ldots, n$. Furthermore,
$$\dim \mathcal{P}_n^2 = n + 1 \quad\text{and}\quad \dim \Pi_n^2 = \binom{n+2}{2}.$$
Let $\langle \cdot, \cdot \rangle$ be an inner product defined on the space of polynomials of two variables. An example of such an inner product is given by
$$\langle f, g\rangle_{\mu} = \int_{\mathbb{R}^2} f(x) g(x) \,\mathrm{d}\mu(x),$$
where $\mathrm{d}\mu$ is a positive Borel measure on $\mathbb{R}^2$ such that the integral is well defined on all polynomials. Mostly we will work with $\mathrm{d}\mu = W(x, y)\,\mathrm{d}x\,\mathrm{d}y$, where $W$ is a nonnegative function called the weight function.
Definition 2.1.1  A polynomial $P$ is an orthogonal polynomial of degree $n$ with respect to an inner product $\langle \cdot, \cdot \rangle$ if $P \in \Pi_n^2$ and
$$\langle P, Q\rangle = 0 \quad \forall Q \in \Pi_{n-1}^2. \tag{2.1.1}$$
When the inner product is defined via a weight function $W$, we say that $P$ is orthogonal with respect to $W$.

According to the definition, $P$ is an orthogonal polynomial if it is orthogonal to all polynomials of lower degree; orthogonality to other polynomials of the same degree is not required. Given an inner product, one can apply the Gram–Schmidt process to generate an orthogonal basis, which exists under certain conditions to be discussed in the next chapter. In the present chapter we deal with only specific weight functions for which explicit bases can be constructed and verified directly.
We denote by $\mathcal{V}^2_n$ the space of orthogonal polynomials of degree $n$:
\[
 \mathcal{V}^2_n := \operatorname{span}\{P \in \Pi^2_n : \langle P, Q\rangle = 0 \ \forall Q \in \Pi^2_{n-1}\}.
\]
When the inner product is defined via a weight function $W$, we sometimes write $\mathcal{V}^2_n(W)$. The dimension of $\mathcal{V}^2_n$ is the same as that of $\mathcal{P}^2_n$:
\[
 \dim \mathcal{V}^2_n = \dim \mathcal{P}^2_n = n+1.
\]
A basis of $\mathcal{V}^2_n$ is often denoted by $\{P^n_k : 0 \le k \le n\}$. If, additionally, $\langle P^n_k, P^n_j\rangle = 0$ for $j \ne k$, then the basis is said to be mutually orthogonal and if, further, $\langle P^n_k, P^n_k\rangle = 1$ for $0 \le k \le n$, then the basis is said to be orthonormal.
In contrast with the case of one variable, there can be many distinct bases for the space $\mathcal{V}^2_n$. In fact, let $\mathbb{P}_n := \{P^n_k : 0 \le k \le n\}$ denote a basis of $\mathcal{V}^2_n$, where we regard $\mathbb{P}_n$ both as a set and as a column vector. Then, for any nonsingular matrix $M \in \mathbb{R}^{(n+1)\times(n+1)}$, $M\mathbb{P}_n$ is also a basis of $\mathcal{V}^2_n$.
2.2 Product Orthogonal Polynomials

Let $W$ be the product weight function defined by $W(x,y) = w_1(x)\,w_2(y)$, where $w_1$ and $w_2$ are two weight functions of one variable.

Proposition 2.2.1 Let $\{p_k\}_{k=0}^\infty$ and $\{q_k\}_{k=0}^\infty$ be sequences of orthogonal polynomials with respect to $w_1$ and $w_2$, respectively. Then a mutually orthogonal basis of $\mathcal{V}^2_n$ with respect to $W$ is given by
\[
 P^n_k(x,y) = p_k(x)\, q_{n-k}(y), \qquad 0 \le k \le n.
\]
Furthermore, if the $p_k$ and $q_k$ are orthonormal then so are the $P^n_k$.
Proof Verification follows from writing
\[
 \int_{\mathbb{R}^2} P^n_k(x,y)\, P^m_j(x,y)\, W(x,y)\,\mathrm{d}x\,\mathrm{d}y
 = \int_{\mathbb{R}} p_k(x)\, p_j(x)\, w_1(x)\,\mathrm{d}x \int_{\mathbb{R}} q_{n-k}(y)\, q_{m-j}(y)\, w_2(y)\,\mathrm{d}y
\]
and using the orthogonality of $p_n$ and $q_n$.
Below are several examples of weight functions and the corresponding classical orthogonal polynomials.

Product Hermite polynomials
\[
 W(x,y) = \mathrm{e}^{-x^2-y^2} \quad\text{and}\quad P^n_k(x,y) = H_k(x)\, H_{n-k}(y), \qquad 0 \le k \le n. \qquad (2.2.1)
\]

Product Laguerre polynomials
\[
 W(x,y) = x^\alpha y^\beta \mathrm{e}^{-x-y} \quad\text{and}\quad P^n_k(x,y) = L^\alpha_k(x)\, L^\beta_{n-k}(y), \qquad 0 \le k \le n. \qquad (2.2.2)
\]

Product Jacobi polynomials
\[
 W(x,y) = (1-x)^\alpha (1+x)^\beta (1-y)^\gamma (1+y)^\delta \quad\text{and}\quad P^n_k(x,y) = P^{(\alpha,\beta)}_k(x)\, P^{(\gamma,\delta)}_{n-k}(y), \qquad 0 \le k \le n. \qquad (2.2.3)
\]

There are also mixed-product orthogonal polynomials, such as the product of the Hermite polynomials and the Laguerre polynomials.
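Proposition 2.2.1 is easy to test numerically. For the product Hermite case (2.2.1) the two-dimensional inner product factors into two one-dimensional integrals, each of which is computed exactly by Gauss-Hermite quadrature; a minimal sketch (degrees and node counts are illustrative, not from the text):

```python
import numpy as np
from numpy.polynomial.hermite import hermgauss
from scipy.special import eval_hermite

x, wx = hermgauss(20)   # exact for ∫ p(x) e^{-x²} dx, deg p ≤ 39

def inner(n, k, m, j):
    # ⟨P^n_k, P^m_j⟩ for the basis (2.2.1); the double integral factors
    ix = np.sum(wx * eval_hermite(k, x) * eval_hermite(j, x))
    iy = np.sum(wx * eval_hermite(n - k, x) * eval_hermite(m - j, x))
    return ix * iy

for n in range(5):
    for k in range(n + 1):
        for m in range(5):
            for j in range(m + 1):
                val = inner(n, k, m, j)
                if (n, k) == (m, j):
                    assert val > 0
                else:
                    assert abs(val) < 1e-6
```

Distinct basis elements, including elements of different total degrees, are orthogonal, while no orthogonality is required (or found) between $P^n_k$ and $P^n_j$ beyond $j \ne k$, in line with Definition 2.1.1.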
2.3 Orthogonal Polynomials on the Unit Disk

The unit disk of $\mathbb{R}^2$ is defined as $B^2 := \{(x,y) : x^2+y^2 \le 1\}$, on which we consider a weight function defined as follows:
\[
 W_\mu(x,y) := \frac{\mu+\frac12}{\pi}\,(1-x^2-y^2)^{\mu-1/2}, \qquad \mu > -\tfrac12,
\]
which is normalized in such a way that its integral over $B^2$ is 1. Let
\[
 \langle f, g\rangle_\mu := \int_{B^2} f(x,y)\, g(x,y)\, W_\mu(x,y)\,\mathrm{d}x\,\mathrm{d}y
\]
in this section. There are several distinct explicit orthogonal bases.
First orthonormal basis This basis is given in terms of the Gegenbauer polynomials.

Proposition 2.3.1 Let $\mu > 0$. For $0 \le k \le n$ define the polynomials
\[
 P^n_k(x,y) = C^{k+\mu+1/2}_{n-k}(x)\,(1-x^2)^{k/2}\, C^\mu_k\Bigl(\frac{y}{\sqrt{1-x^2}}\Bigr). \qquad (2.3.1)
\]
Then $\{P^n_k : 0 \le k \le n\}$ is a mutually orthogonal basis for $\mathcal{V}^2_n(W_\mu)$. Moreover,
\[
 \langle P^n_k, P^n_k\rangle_\mu = \frac{(2k+2\mu+1)_{n-k}\,(2\mu)_k\,(\mu)_k\,(\mu+\frac12)}{(n-k)!\,k!\,(\mu+\frac12)_k\,(n+\mu+\frac12)}, \qquad (2.3.2)
\]
where, when $\mu = 0$, we multiply (2.3.2) by $\mu^{-2}$ and let $\mu \to 0$ if $k \ge 1$, and multiply by an additional 2 if $k = 0$.

Proof Since the Gegenbauer polynomial $C^\mu_k$ is even if $k$ is even and odd if $k$ is odd, it follows readily that $P^n_k$ is a polynomial of degree at most $n$. Using the integral relation
\[
 \int_{B^2} f(x,y)\,\mathrm{d}x\,\mathrm{d}y = \int_{-1}^1 \int_{-\sqrt{1-x^2}}^{\sqrt{1-x^2}} f(x,y)\,\mathrm{d}y\,\mathrm{d}x
 = \int_{-1}^1 \int_{-1}^1 f\bigl(x,\, t\sqrt{1-x^2}\bigr)\,\mathrm{d}t\, \sqrt{1-x^2}\,\mathrm{d}x
\]
and the orthogonality of the Gegenbauer polynomials, we obtain
\[
 \langle P^m_j, P^n_k\rangle_\mu = \frac{\mu+\frac12}{\pi}\, h^{k+\mu+1/2}_{n-k}\, h^\mu_k\, \delta_{n,m}\,\delta_{j,k}, \qquad 0 \le j \le m,\ 0 \le k \le n,
\]
where $h^\lambda_k = \int_{-1}^1 \bigl[C^\lambda_k(x)\bigr]^2 (1-x^2)^{\lambda-1/2}\,\mathrm{d}x$, from which the constant in (2.3.2) can be easily verified.
In the case $\mu = 0$, the above proposition holds under the limit relation
\[
 \lim_{\lambda\to 0} \lambda^{-1} C^\lambda_k(x) = (2/k)\, T_k(x),
\]
where $T_k$ is the Chebyshev polynomial of the first kind. We state this case as a corollary.
Corollary 2.3.2 For $0 \le k \le n$, define the polynomials
\[
 P^n_k(x,y) = C^{k+1/2}_{n-k}(x)\,(1-x^2)^{k/2}\, T_k\Bigl(\frac{y}{\sqrt{1-x^2}}\Bigr). \qquad (2.3.3)
\]
Then $\{P^n_k : 0 \le k \le n\}$ is a mutually orthogonal basis for $\mathcal{V}^2_n(W_0)$. Moreover, $\langle P^n_0, P^n_0\rangle_0 = 1/(2n+1)$ and, for $0 < k \le n$,
\[
 \langle P^n_k, P^n_k\rangle_0 = \frac{(2k+1)_{n-k}\, k!}{2\,(n-k)!\,(\frac12)_k\,(2n+1)}. \qquad (2.3.4)
\]
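The norms (2.3.4) can be confirmed numerically: after the substitution $y = t\sqrt{1-x^2}$ used in the proof of Proposition 2.3.1, the inner product factors as $\frac{1}{2\pi}\int_{-1}^1 [C^{k+1/2}_{n-k}(x)]^2(1-x^2)^k\,\mathrm{d}x \cdot \int_{-1}^1 [T_k(t)]^2(1-t^2)^{-1/2}\,\mathrm{d}t$, and the $t$-integral is the classical Chebyshev norm. A sketch:

```python
import math
import numpy as np
from scipy.special import eval_gegenbauer, poch, roots_legendre

def norm0(n, k):
    # ⟨P^n_k, P^n_k⟩_0 computed through the factorized integrals above
    x, wx = roots_legendre(40)   # integrand in x is a polynomial
    ix = np.sum(wx * eval_gegenbauer(n - k, k + 0.5, x) ** 2 * (1 - x ** 2) ** k)
    it = np.pi if k == 0 else np.pi / 2     # Chebyshev T_k norm on [-1,1]
    return ix * it / (2 * np.pi)

for n in range(5):
    for k in range(n + 1):
        if k == 0:
            expected = 1 / (2 * n + 1)
        else:
            expected = (poch(2 * k + 1, n - k) * math.factorial(k)
                        / (2 * math.factorial(n - k) * poch(0.5, k) * (2 * n + 1)))
        assert np.isclose(norm0(n, k), expected)
```

The quadrature is exact here because, after the weight $(1-x^2)^{-1/2}$ cancels against the Jacobian and the factor $(1-x^2)^k$, the remaining $x$-integrand is a polynomial.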
Second orthonormal basis This basis is given in terms of the Jacobi polynomials in the polar coordinates $(x,y) = (r\cos\theta, r\sin\theta)$, $0 \le r \le 1$ and $0 \le \theta \le 2\pi$.

Proposition 2.3.3 For $0 \le j \le \frac n2$, define
\[
 P_{j,1}(x,y) = [h_{j,n}]^{-1}\, P^{(\mu-1/2,\,n-2j)}_j(2r^2-1)\, r^{n-2j}\cos(n-2j)\theta,
\]
\[
 P_{j,2}(x,y) = [h_{j,n}]^{-1}\, P^{(\mu-1/2,\,n-2j)}_j(2r^2-1)\, r^{n-2j}\sin(n-2j)\theta, \qquad (2.3.5)
\]
where the constant is given by
\[
 [h_{j,n}]^2 = \frac{(\mu+\frac12)_j\,(n-j)!\,(n-j+\mu+\frac12)}{j!\,(\mu+\frac32)_{n-j}\,(n+\mu+\frac12)} \times
 \begin{cases} 2, & n \ne 2j,\\ 1, & n = 2j. \end{cases} \qquad (2.3.6)
\]
Then $\{P_{j,1} : 0 \le j \le \frac n2\} \cup \{P_{j,2} : 0 \le j < \frac n2\}$ is an orthonormal basis of the space $\mathcal{V}^2_n(W_\mu)$.

Proof From Euler's formula $(x+\mathrm{i}y)^m = r^m(\cos m\theta + \mathrm{i}\sin m\theta)$, it follows readily that $r^m\cos m\theta$ and $r^m\sin m\theta$ are both polynomials of degree $m$ in $(x,y)$. Consequently, $P_{j,1}$ and $P_{j,2}$ are both polynomials of degree at most $n$ in $(x,y)$. Using the formula
\[
 \int_{B^2} f(x,y)\,\mathrm{d}x\,\mathrm{d}y = \int_0^1 r \int_0^{2\pi} f(r\cos\theta, r\sin\theta)\,\mathrm{d}\theta\,\mathrm{d}r,
\]
that $P_{j,1}$ and $P_{k,2}$ are orthogonal to each other follows immediately from the fact that $\sin(n-2k)\theta$ and $\cos(n-2k)\theta$ are orthogonal in $L^2([0,2\pi])$. Further, we have
\[
 \langle P_{j,1}, P_{k,1}\rangle_\mu = A_j\, \frac{\mu+\frac12}{\pi}\, [h_{j,n}]^{-2} \int_0^1 r^{2(n-2j)+1}\, P^{(\mu-1/2,\,n-2j)}_j(2r^2-1)\, P^{(\mu-1/2,\,n-2j)}_k(2r^2-1)\,(1-r^2)^{\mu-1/2}\,\mathrm{d}r,
\]
where $A_j = \pi$ if $2j \ne n$ and $A_j = 2\pi$ if $2j = n$. Making the change of variables $t = 2r^2-1$, the orthogonality of $P_{j,1}$ and $P_{k,1}$ follows from that of the Jacobi polynomials. The orthogonality of $P_{j,2}$ and $P_{k,2}$ follows similarly.
Orthogonal polynomials via the Rodrigues formula Classical orthogonal polynomials of one variable all satisfy a Rodrigues formula. For polynomials in two variables, an orthogonal basis can be defined by an analogue of the Rodrigues formula.

Proposition 2.3.4 For $0 \le k \le n$, define
\[
 U^\mu_{k,n}(x,y) = (1-x^2-y^2)^{-\mu+1/2}\, \frac{\partial^n}{\partial x^k\, \partial y^{n-k}} \bigl[(1-x^2-y^2)^{n+\mu-1/2}\bigr].
\]
Then $\{U^\mu_{k,n} : 0 \le k \le n\}$ is a basis for $\mathcal{V}^2_n(W_\mu)$.
Proof Let $Q \in \Pi^2_{n-1}$. Directly from the definition,
\[
 \langle U^\mu_{k,n}, Q\rangle_\mu = \frac{\mu+\frac12}{\pi} \int_{B^2} \frac{\partial^n}{\partial x^k\, \partial y^{n-k}} \bigl[(1-x^2-y^2)^{n+\mu-1/2}\bigr]\, Q(x,y)\,\mathrm{d}x\,\mathrm{d}y,
\]
which is zero after integration by parts $n$ times and using the fact that the $n$th-order derivatives of $Q$ are zero.
This basis is not mutually orthogonal, that is, $\langle U^\mu_{k,n}, U^\mu_{j,n}\rangle_\mu \ne 0$ for $j \ne k$. Let $V^\mu_{m,n}$ be defined by
\[
 V^\mu_{m,n}(x,y) = \sum_{i=0}^{\lfloor m/2\rfloor} \sum_{j=0}^{\lfloor n/2\rfloor} \frac{(-m)_{2i}\,(-n)_{2j}\,(\mu+\frac12)_{m+n-i-j}}{2^{2i+2j}\, i!\, j!\, (\mu+\frac12)_{m+n}}\, x^{m-2i}\, y^{n-2j}.
\]
Then $V^\mu_{m,n}$ is the orthogonal projection of $x^m y^n$ in $L^2(B^2, W_\mu)$ and the two families of polynomials are biorthogonal.
Proposition 2.3.5 The set $\{V^\mu_{k,n-k} : 0 \le k \le n\}$ is a basis of $\mathcal{V}^2_n(W_\mu)$ and, for $0 \le j,k \le n$,
\[
 \langle U^\mu_{k,n}, V^\mu_{j,n-j}\rangle_\mu = \frac{(\mu+\frac12)\, k!\,(n-k)!}{n+\mu+\frac12}\, \delta_{j,k}.
\]
The proof of this proposition will be given in Chapter 4 as a special case of orthogonal polynomials on a $d$-dimensional ball.
An orthogonal basis for constant weight In the case of a constant weight function, say $W_{1/2}(x) = 1/\pi$, an orthonormal basis can be given in terms of the Chebyshev polynomials $U_n$ of the second kind; this has the distinction of being related to the Radon transform. Let $\ell(\theta,t) = \{(x,y) : x\cos\theta + y\sin\theta = t\}$ for $-1 \le t \le 1$ denote the line which is perpendicular to the direction $(\cos\theta, \sin\theta)$, $|t|$ being the distance between the line and the origin. Let $I(\theta,t) = \ell(\theta,t) \cap B^2$, the line segment of $\ell$ inside $B^2$. The Radon projection $\mathcal{R}_\theta(f;t)$ of a function $f$ in the direction $\theta$ with parameter $t \in [-1,1]$ is defined by
\[
 \mathcal{R}_\theta(f;t) := \int_{I(\theta,t)} f(x,y)\,\mathrm{d}\ell
 = \int_{-\sqrt{1-t^2}}^{\sqrt{1-t^2}} f(t\cos\theta - s\sin\theta,\ t\sin\theta + s\cos\theta)\,\mathrm{d}s, \qquad (2.3.7)
\]
where $\mathrm{d}\ell$ denotes the Lebesgue measure on the line $I(\theta,t)$.

Proposition 2.3.6 For $0 \le k \le n$, define
\[
 P^n_k(x,y) = U_n\Bigl(x\cos\frac{k\pi}{n+1} + y\sin\frac{k\pi}{n+1}\Bigr). \qquad (2.3.8)
\]
Then $\{P^n_k : 0 \le k \le n\}$ is an orthonormal basis of $\mathcal{V}^2_n(W_{1/2})$.
Proof For $\theta \in [0,\pi]$ and $g : \mathbb{R} \to \mathbb{R}$, define $g_\theta(x,y) := g(x\cos\theta + y\sin\theta)$. The change of variables $t = x\cos\theta + y\sin\theta$ and $s = -x\sin\theta + y\cos\theta$ amounts to a rotation and leads to
\[
 \langle f, g_\theta\rangle := \frac1\pi \int_{B^2} f(x,y)\, g_\theta(x,y)\,\mathrm{d}x\,\mathrm{d}y = \frac1\pi \int_{-1}^1 \mathcal{R}_\theta(f;t)\, g(t)\,\mathrm{d}t. \qquad (2.3.9)
\]
Now, a change of variable in (2.3.7) gives
\[
 \mathcal{R}_\theta(f;t) = \sqrt{1-t^2} \int_{-1}^1 f\bigl(t\cos\theta - s\sqrt{1-t^2}\sin\theta,\ t\sin\theta + s\sqrt{1-t^2}\cos\theta\bigr)\,\mathrm{d}s.
\]
If $f$ is a polynomial of degree $m$ then the last integral is a polynomial in $t$ of the same degree, since an odd power of $\sqrt{1-t^2}$ in the integrand is always accompanied by an odd power of $s$, which has integral zero. Therefore $Q(t) := \mathcal{R}_\theta(f;t)/\sqrt{1-t^2}$ is a polynomial of degree $m$ in $t$ for every $\theta$. Further, the integral also shows that $Q(1) = 2f(\cos\theta, \sin\theta)$. Consequently, by the orthogonality of $U_k$ on $[-1,1]$, (2.3.9) shows that $\langle f, P^n_k\rangle = 0$ for all $f \in \Pi^2_m$ whenever $m < n$. Hence $P^n_k \in \mathcal{V}^2_n(W_{1/2})$.

In order to prove orthonormality, we consider $\mathcal{R}_\theta(P^n_j;t)$. By (2.3.9) and the fact that $P^n_j \in \mathcal{V}^2_n(W_{1/2})$,
\[
 \int_{-1}^1 \frac{\mathcal{R}_\theta(P^n_j;t)}{\sqrt{1-t^2}}\, U_m(t)\, \sqrt{1-t^2}\,\mathrm{d}t
 = \int_{B^2} P^n_j(x,y)\,(U_m)_\theta(x,y)\,\mathrm{d}x\,\mathrm{d}y = 0
\]
for $m = 0,1,\dots,n-1$. Thus the polynomial $Q(t) = \mathcal{R}_\theta(P^n_j;t)/\sqrt{1-t^2}$ is orthogonal to $U_m$ for $0 \le m \le n-1$. Since $Q$ is a polynomial of degree $n$, it must be a multiple of $U_n$; thus $Q(t) = cU_n(t)$ for some constant independent of $t$. Setting $t = 1$ and using the fact that $U_n(1) = n+1$, we have
\[
 c = \frac{2P^n_j(\cos\theta, \sin\theta)}{n+1} = \frac{2U_n\bigl(\cos(\theta - \frac{j\pi}{n+1})\bigr)}{n+1}.
\]
Consequently, setting $\theta = k\pi/(n+1)$, we conclude that
\[
 \langle P^n_k, P^n_j\rangle = \frac2\pi\, \frac{U_n\bigl(\cos\frac{(k-j)\pi}{n+1}\bigr)}{n+1} \int_{-1}^1 [U_n(t)]^2 \sqrt{1-t^2}\,\mathrm{d}t = \delta_{k,j},
\]
using the fact that $U_n\bigl(\cos\frac{(k-j)\pi}{n+1}\bigr) = \sin(k-j)\pi \big/ \sin\frac{(k-j)\pi}{n+1} = 0$ when $k \ne j$.

Since $\{P^n_j\}$ is a basis of $\mathcal{V}^2_n(W_{1/2})$, the proof of the proposition immediately implies the following corollary.
Corollary 2.3.7 If $P \in \mathcal{V}^2_n(W_{1/2})$ then, for each $t \in (-1,1)$ and $0 \le \theta \le 2\pi$,
\[
 \mathcal{R}_\theta(P;t) = \frac{2}{n+1}\, \sqrt{1-t^2}\, U_n(t)\, P(\cos\theta, \sin\theta).
\]
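Orthonormality of the ridge basis (2.3.8) is easy to confirm numerically with a polar quadrature rule that is exact for the integrands involved (Gauss-Legendre in the radius, a uniform grid in the angle); a sketch, with the degree chosen for illustration:

```python
import numpy as np
from scipy.special import eval_chebyu

n = 4
ntheta = 64                       # exact for trigonometric degree < 64
theta = 2 * np.pi * np.arange(ntheta) / ntheta
r, wr = np.polynomial.legendre.leggauss(20)
r, wr = (r + 1) / 2, wr / 2       # radial rule on [0,1]

X = np.outer(r, np.cos(theta))
Y = np.outer(r, np.sin(theta))

def P(k):
    # ridge polynomial (2.3.8)
    tk = k * np.pi / (n + 1)
    return eval_chebyu(n, X * np.cos(tk) + Y * np.sin(tk))

def inner(f, g):
    # (1/π) ∫_{B²} f g dx dy in polar coordinates
    return (2 * np.pi / ntheta) / np.pi * np.einsum('i,ij->', wr * r, f * g)

for k in range(n + 1):
    for j in range(n + 1):
        assert np.isclose(inner(P(k), P(j)), 1.0 if k == j else 0.0, atol=1e-10)
```

In the radial direction the integrand is a polynomial of degree at most $2n+1$, and in the angle a trigonometric polynomial of degree at most $2n$, so both rules integrate it exactly.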
2.4 Orthogonal Polynomials on the Triangle

We now consider the triangle $T^2 := \{(x,y) : 0 \le x, y,\ x+y \le 1\}$. For $\alpha,\beta,\gamma > -1$, define the Jacobi weight function on the triangle,
\[
 W_{\alpha,\beta,\gamma}(x,y) = \frac{\Gamma(\alpha+\beta+\gamma+3)}{\Gamma(\alpha+1)\,\Gamma(\beta+1)\,\Gamma(\gamma+1)}\, x^\alpha y^\beta (1-x-y)^\gamma, \qquad (2.4.1)
\]
normalized so that its integral over $T^2$ is 1. Define, in this section,
\[
 \langle f, g\rangle_{\alpha,\beta,\gamma} = \int_{T^2} f(x,y)\, g(x,y)\, W_{\alpha,\beta,\gamma}(x,y)\,\mathrm{d}x\,\mathrm{d}y.
\]
An orthonormal basis An orthonormal basis can be given in terms of the Jacobi polynomials.

Proposition 2.4.1 For $0 \le k \le n$, define
\[
 P^n_k(x,y) = P^{(2k+\beta+\gamma+1,\,\alpha)}_{n-k}(2x-1)\,(1-x)^k\, P^{(\gamma,\beta)}_k\Bigl(\frac{2y}{1-x}-1\Bigr). \qquad (2.4.2)
\]
Then $\{P^n_k : 0 \le k \le n\}$ is a mutually orthogonal basis of $\mathcal{V}^2_n(W_{\alpha,\beta,\gamma})$ and, further,
\[
 \langle P^n_k, P^n_k\rangle_{\alpha,\beta,\gamma} = \frac{(\alpha+1)_{n-k}\,(\beta+1)_k\,(\gamma+1)_k\,(\beta+\gamma+2)_{n+k}}{(n-k)!\, k!\, (\beta+\gamma+2)_k\, (\alpha+\beta+\gamma+3)_{n+k}}
 \times \frac{(n+k+\alpha+\beta+\gamma+2)\,(k+\beta+\gamma+1)}{(2n+\alpha+\beta+\gamma+2)\,(2k+\beta+\gamma+1)}. \qquad (2.4.3)
\]
Proof It is evident that $P^n_k \in \Pi^2_n$. We need the integral relation
\[
 \int_{T^2} f(x,y)\,\mathrm{d}x\,\mathrm{d}y = \int_0^1 \int_0^{1-x} f(x,y)\,\mathrm{d}y\,\mathrm{d}x
 = \int_0^1 \int_0^1 f\bigl(x,\,(1-x)t\bigr)\,(1-x)\,\mathrm{d}t\,\mathrm{d}x. \qquad (2.4.4)
\]
Since $P^n_k(x,(1-x)y) = P^{(2k+\beta+\gamma+1,\,\alpha)}_{n-k}(2x-1)\,(1-x)^k\, P^{(\gamma,\beta)}_k(2y-1)$ and, moreover, $W_{\alpha,\beta,\gamma}(x,(1-x)y) = c_{\alpha,\beta,\gamma}\, x^\alpha (1-x)^{\beta+\gamma}\, y^\beta (1-y)^\gamma$, where $c_{\alpha,\beta,\gamma}$ denotes the normalization constant of $W_{\alpha,\beta,\gamma}$, we obtain from the orthogonality of the Jacobi polynomials $P^{(a,b)}_n(2u-1)$ on the interval $[0,1]$ that
\[
 \langle P^m_j, P^n_k\rangle_{\alpha,\beta,\gamma} = c_{\alpha,\beta,\gamma}\, h^{(2k+\beta+\gamma+1,\,\alpha)}_{n-k}\, h^{(\gamma,\beta)}_k\, \delta_{j,k}\,\delta_{m,n},
\]
where $h^{(a,b)}_k = \int_0^1 \bigl[P^{(a,b)}_k(2t-1)\bigr]^2 (1-t)^a t^b\,\mathrm{d}t$, and the constant (2.4.3) can be easily verified.
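The mutual orthogonality of the basis (2.4.2) can be verified numerically by the substitution (2.4.4), which turns the double integral into a product of Jacobi-type integrals on $[0,1]$, each handled exactly by Gauss-Jacobi quadrature. A sketch (the parameter values are illustrative):

```python
import numpy as np
from scipy.special import roots_jacobi, eval_jacobi

al, be, ga = 0.5, 1.0, 2.0    # illustrative α, β, γ > -1

# via (x, y) = (x, (1-x)t): x-weight x^α (1-x)^{β+γ+1}, t-weight t^β (1-t)^γ
ux, wx = roots_jacobi(20, be + ga + 1, al)   # weight (1-s)^{β+γ+1}(1+s)^α on [-1,1]
ut, wt = roots_jacobi(20, ga, be)
x, t = (ux + 1) / 2, (ut + 1) / 2            # mapped to [0,1]; constants cancel

def P(n, k, x, y):
    # basis (2.4.2)
    return (eval_jacobi(n - k, 2 * k + be + ga + 1, al, 2 * x - 1)
            * (1 - x) ** k * eval_jacobi(k, ga, be, 2 * y / (1 - x) - 1))

X, T = np.meshgrid(x, t, indexing='ij')
Y = (1 - X) * T
WQ = np.outer(wx, wt)

def inner(n, k, m, j):
    return np.sum(WQ * P(n, k, X, Y) * P(m, j, X, Y))

for n in range(4):
    for k in range(n + 1):
        assert inner(n, k, n, k) > 0
        for m in range(4):
            for j in range(m + 1):
                if (n, k) != (m, j):
                    assert abs(inner(n, k, m, j)) < 1e-9
```

The normalization constant of $W_{\alpha,\beta,\gamma}$ is dropped throughout, which affects the norms but not the orthogonality being tested.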
The double integral over the triangle $T^2$ can be expressed as an iterated integral in different ways. This leads to two other orthogonal bases of $\mathcal{V}^2_n(W_{\alpha,\beta,\gamma})$.
Proposition 2.4.2 Denote by $P^{\alpha,\beta,\gamma}_{k,n}$ the polynomials defined in (2.4.2) and by $h^{\alpha,\beta,\gamma}_{k,n}$ the constant (2.4.3). Define the polynomials
\[
 Q^n_k(x,y) := P^{\beta,\alpha,\gamma}_{k,n}(y,x) \quad\text{and}\quad R^n_k(x,y) := P^{\gamma,\beta,\alpha}_{k,n}(1-x-y,\ y).
\]
Then $\{Q^n_k : 0 \le k \le n\}$ and $\{R^n_k : 0 \le k \le n\}$ are also mutually orthogonal bases of $\mathcal{V}^2_n(W_{\alpha,\beta,\gamma})$ and, furthermore,
\[
 \langle Q^n_k, Q^n_k\rangle_{\alpha,\beta,\gamma} = h^{\beta,\alpha,\gamma}_{k,n} \quad\text{and}\quad \langle R^n_k, R^n_k\rangle_{\alpha,\beta,\gamma} = h^{\gamma,\beta,\alpha}_{k,n}.
\]
Proof For $Q^n_k$, we exchange $x$ and $y$ in the integral (2.4.4) and the same proof then applies. For $R^n_k$, we make the change of variables $(x,y) \mapsto (s,t)$ with $x = (1-s)(1-t)$ and $y = (1-s)t$ in the integral over the triangle $T^2$ to obtain
\[
 \int_{T^2} f(x,y)\,\mathrm{d}x\,\mathrm{d}y = \int_0^1 \int_0^1 f\bigl((1-s)(1-t),\,(1-s)t\bigr)\,(1-s)\,\mathrm{d}s\,\mathrm{d}t.
\]
Since $P^{\gamma,\beta,\alpha}_{k,n}(1-x-y,\ y) = P^{\gamma,\beta,\alpha}_{k,n}\bigl(s,\,(1-s)t\bigr)$ and $W_{\alpha,\beta,\gamma}(x,y) = c_{\alpha,\beta,\gamma}\,(1-s)^{\alpha+\beta}\, s^\gamma\, t^\beta (1-t)^\alpha$, the proof follows, again from the orthogonality of the Jacobi polynomials.

These bases illustrate the symmetry of the triangle $T^2$, which is invariant under the permutations of $(x,\, y,\, 1-x-y)$.
Orthogonal polynomials via the Rodrigues formula The Jacobi polynomials can be expressed through a Rodrigues formula. There is an analogue for the triangle.

Proposition 2.4.3 For $0 \le k \le n$, define
\[
 U^{\alpha,\beta,\gamma}_{k,n}(x,y) := \bigl[W_{\alpha,\beta,\gamma}(x,y)\bigr]^{-1}\, \frac{\partial^n}{\partial x^k\, \partial y^{n-k}} \bigl[x^{k+\alpha}\, y^{n-k+\beta}\, (1-x-y)^{n+\gamma}\bigr].
\]
Then the set $\{U^{\alpha,\beta,\gamma}_{k,n} : 0 \le k \le n\}$ is a basis of $\mathcal{V}^2_n(W_{\alpha,\beta,\gamma})$.
Proof For $Q \in \Pi^2_{n-1}$, the fact that $\langle U^{\alpha,\beta,\gamma}_{k,n}, Q\rangle_{\alpha,\beta,\gamma} = 0$ follows evidently from integration by parts and the fact that the $n$th derivatives of $Q$ are zero.
Just as for the disk, this basis is not mutually orthogonal. We can again define a biorthogonal basis by considering the orthogonal projection of $x^m y^n$ onto $\mathcal{V}^2_n(W_{\alpha,\beta,\gamma})$. Let $V^{\alpha,\beta,\gamma}_{m,n}$ be defined by
\[
 V^{(\alpha,\beta,\gamma)}_{m,n}(x,y) = \sum_{i=0}^{m} \sum_{j=0}^{n} (-1)^{n+m+i+j} \binom{m}{i}\binom{n}{j}\,
 \frac{(\alpha+\frac12)_m\,(\beta+\frac12)_n\,(\alpha+\beta+\gamma+\frac12)_{n+m+i+j}}{(\alpha+\frac12)_i\,(\beta+\frac12)_j\,(\alpha+\beta+\gamma+\frac12)_{2n+2m}}\, x^i y^j.
\]
Proposition 2.4.4 The set $\{V^{\alpha,\beta,\gamma}_{k,n-k} : 0 \le k \le n\}$ is a basis of $\mathcal{V}^2_n(W_{\alpha,\beta,\gamma})$ and, for $0 \le j,k \le n$,
\[
 \langle U^{\alpha,\beta,\gamma}_{k,n}, V^{\alpha,\beta,\gamma}_{j,n-j}\rangle_{\alpha,\beta,\gamma}
 = \frac{(\alpha+\frac12)_k\,(\beta+\frac12)_{n-k}\,(\gamma+\frac12)_n\, k!\,(n-k)!}{(\alpha+\beta+\gamma+\frac32)_{2n}}\, \delta_{k,j}.
\]
The proof of this proposition will be given in Chapter 4, as a special case of orthogonal polynomials on a $d$-dimensional simplex.
2.5 Orthogonal Polynomials and Differential Equations

The classical orthogonal polynomials in one variable, the Hermite, Laguerre and Jacobi polynomials, are eigenfunctions of second-order linear differential operators. A similar result also holds in the present situation.

Definition 2.5.1 A linear second-order partial differential operator $L$ defined by
\[
 Lv := A(x,y)\, v_{xx} + 2B(x,y)\, v_{xy} + C(x,y)\, v_{yy} + D(x,y)\, v_x + E(x,y)\, v_y,
\]
where $v_x := \partial v/\partial x$ and $v_{xx} := \partial^2 v/\partial x^2$ etc., is called admissible if for each nonnegative integer $n$ there exists a number $\lambda_n$ such that the equation
\[
 Lv = \lambda_n v
\]
has $n+1$ linearly independent solutions that are polynomials of degree $n$ and no nonzero solutions that are polynomials of degree less than $n$.

If a system of orthogonal polynomials satisfies an admissible equation then all orthogonal polynomials of degree $n$ are eigenfunctions of the admissible differential operator for the same eigenvalue $\lambda_n$. In other words, the eigenfunction space for each eigenvalue is $\mathcal{V}^2_n$. This requirement excludes, for example, the product Jacobi polynomial $P^{(\alpha,\beta)}_k(x)\, P^{(\gamma,\delta)}_{n-k}(y)$, which satisfies a second-order equation of the form $Lu = \lambda_{k,n} u$, where $\lambda_{k,n}$ depends on both $k$ and $n$.
Upon considering lower-degree monomials, it is easy to see that for $L$ to be admissible it is necessary that
\[
 A(x,y) = Ax^2 + a_1 x + b_1 y + c_1, \qquad D(x,y) = Bx + d_1,
\]
\[
 B(x,y) = Axy + a_2 x + b_2 y + c_2, \qquad E(x,y) = By + d_2,
\]
\[
 C(x,y) = Ay^2 + a_3 x + b_3 y + c_3,
\]
and, furthermore, for each $n = 0, 1, 2, \dots$,
\[
 nA + B \ne 0 \quad\text{and}\quad \lambda_n = n\,[A(n-1) + B].
\]
A classification of admissible equations is to be found in Krall and Sheffer [1967] and is given below without proof. Up to affine transformations, there are only nine equations. The first five are satisfied by orthogonal polynomials that we have already encountered. They are as follows:

1. $v_{xx} + v_{yy} - (xv_x + yv_y) = -nv$;
2. $xv_{xx} + yv_{yy} + (1+\alpha-x)v_x + (1+\beta-y)v_y = -nv$;
3. $v_{xx} + yv_{yy} - xv_x + (1+\alpha-y)v_y = -nv$;
4. $(1-x^2)v_{xx} - 2xy\,v_{xy} + (1-y^2)v_{yy} - (2\mu+2)(xv_x + yv_y) = -n(n+2\mu+1)v$;
5. $x(1-x)v_{xx} - 2xy\,v_{xy} + y(1-y)v_{yy} - \bigl[(\alpha+\beta+\gamma+\frac32)x - (\alpha+\frac12)\bigr]v_x - \bigl[(\alpha+\beta+\gamma+\frac32)y - (\beta+\frac12)\bigr]v_y = -n\bigl(n+\alpha+\beta+\gamma+\frac12\bigr)v$.

These are the admissible equations for, respectively, product Hermite polynomials, product Laguerre polynomials, product Hermite-Laguerre polynomials, orthogonal polynomials on the disk, $\mathcal{V}^2_n(W_\mu)$, and orthogonal polynomials on the triangle, $\mathcal{V}^2_n(W_{\alpha-1/2,\beta-1/2,\gamma-1/2})$. The first three can be easily verified directly from the differential equations of one variable satisfied by the Hermite and Laguerre polynomials. The fourth and the fifth equations will be proved in Chapter 7 as special cases of the $d$-dimensional ball and simplex.
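The first of these verifications can be sketched symbolically. Equation 1 as listed corresponds to the weight $\mathrm{e}^{-(x^2+y^2)/2}$; under the affine scaling $x \mapsto \sqrt2\,x$, $y \mapsto \sqrt2\,y$ it becomes $v_{xx}+v_{yy}-2(xv_x+yv_y) = -2nv$ for the physicists' Hermite polynomials used in (2.2.1), and that rescaled form is what the check below verifies (a sympy sketch, with the degree bound chosen for illustration):

```python
import sympy as sp

x, y = sp.symbols('x y')

# check v_xx + v_yy - 2(x v_x + y v_y) = -2n v for v = H_k(x) H_{n-k}(y)
for n in range(5):
    for k in range(n + 1):
        v = sp.hermite(k, x) * sp.hermite(n - k, y)
        lhs = (sp.diff(v, x, 2) + sp.diff(v, y, 2)
               - 2 * x * sp.diff(v, x) - 2 * y * sp.diff(v, y))
        assert sp.expand(lhs + 2 * n * v) == 0
```

Every element of degree $n$ satisfies the equation with the same eigenvalue, which is exactly the admissibility requirement of Definition 2.5.1.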
The other four admissible equations are listed below:

6. $3y\,v_{xx} + 2v_{xy} - xv_x - yv_y = \lambda v$;
7. $(x^2+y+1)v_{xx} + (2xy+2x)v_{xy} + (y^2+2y+1)v_{yy} + g(xv_x + yv_y) = \lambda v$;
8. $x^2 v_{xx} + 2xy\,v_{xy} + (y^2-y)v_{yy} + g[(x-1)v_x + (y-\gamma)v_y] = \lambda v$;
9. $(x+\alpha)v_{xx} + 2(y+1)v_{yy} + xv_x + yv_y = \lambda v$.

The solutions for these last four equations are weak orthogonal polynomials, in the sense that the polynomials are orthogonal with respect to a linear functional that is not necessarily positive definite.
2.6 Generating Orthogonal Polynomials of Two Variables

The examples of orthogonal polynomials that we have described so far are given in terms of orthogonal polynomials of one variable. In this section, we consider two methods that can be used to generate orthogonal polynomials of two variables from those of one variable.

2.6.1 A method for generating orthogonal polynomials

Let $w_1$ and $w_2$ be weight functions defined on the intervals $(a,b)$ and $(d,c)$, respectively. Let $\rho$ be a positive function defined on $(a,b)$ such that

Case I: $\rho$ is a polynomial of degree 1;
Case II: $\rho$ is the square root of a nonnegative polynomial of degree at most 2; we assume that $c = -d > 0$ and that $w_2$ is an even function on $(-c,c)$.
For each $k \in \mathbb{N}_0$ let $\{p_{n,k}\}_{n=0}^\infty$ denote the system of orthonormal polynomials with respect to the weight function $\rho^{2k+1}(x)\, w_1(x)$, and let $\{q_n\}$ be the system of orthonormal polynomials with respect to the weight function $w_2(x)$.
Proposition 2.6.1 Define polynomials $P^n_k$ of two variables by
\[
 P^n_k(x,y) = p_{n-k,k}(x)\,[\rho(x)]^k\, q_k\Bigl(\frac{y}{\rho(x)}\Bigr), \qquad 0 \le k \le n. \qquad (2.6.1)
\]
Then $\{P^n_k : 0 \le k \le n\}$ is a mutually orthogonal basis of $\mathcal{V}^2_n(W)$ for the weight function
\[
 W(x,y) = w_1(x)\, w_2\bigl(\rho^{-1}(x)\,y\bigr), \qquad (x,y) \in R, \qquad (2.6.2)
\]
where the domain $R$ is defined by
\[
 R = \{(x,y) : a < x < b,\ d\rho(x) < y < c\rho(x)\}. \qquad (2.6.3)
\]

Proof That the $P^n_k$ are polynomials of degree $n$ is evident in Case I; in Case II it follows from the fact that $q_k$ has the same parity as $k$, since $w_2$ is assumed to be even. The orthogonality of the $P^n_k$ can be verified by a change of variables in the integral
\[
 \iint_R P^n_k(x,y)\, P^m_j(x,y)\, W(x,y)\,\mathrm{d}x\,\mathrm{d}y
 = \int_a^b p_{n-k,k}(x)\, p_{m-j,j}(x)\, \rho^{k+j+1}(x)\, w_1(x)\,\mathrm{d}x \int_d^c q_k(y)\, q_j(y)\, w_2(y)\,\mathrm{d}y
 = h_{k,n}\, \delta_{n,m}\,\delta_{k,j}
\]
for $0 \le j \le m$ and $0 \le k \le n$.
Let us give several special cases that can be regarded as extensions of Jacobi polynomials in two variables.

Jacobi polynomials on the square Let $w_1(x) = (1-x)^\alpha(1+x)^\beta$ and $w_2(x) = (1-x)^\gamma(1+x)^\delta$ on $[-1,1]$ and let $\rho(x) = 1$. Then the weight function (2.6.2) becomes
\[
 W(x,y) = (1-x)^\alpha(1+x)^\beta(1-y)^\gamma(1+y)^\delta, \qquad \alpha,\beta,\gamma,\delta > -1,
\]
on the domain $[-1,1]^2$, and the orthogonal polynomials $P^n_k$ in (2.6.1) are exactly those in (2.2.3).
Jacobi polynomials on the disk Let $w_1(x) = w_2(x) = (1-x^2)^{\mu-1/2}$ on $[-1,1]$ and let $\rho(x) = (1-x^2)^{1/2}$. Then the weight function (2.6.2) becomes
\[
 W_\mu(x,y) = (1-x^2-y^2)^{\mu-1/2}, \qquad \mu > -\tfrac12,
\]
on the domain $B^2 = \{(x,y) : x^2+y^2 \le 1\}$, and the orthogonal polynomials $P^n_k$ in (2.6.1) are exactly those in (2.3.1).
Jacobi polynomials on the triangle Let $w_1(x) = x^\alpha(1-x)^{\beta+\gamma}$ and $w_2(x) = x^\beta(1-x)^\gamma$, both defined on the interval $(0,1)$, and $\rho(x) = 1-x$. Then the weight function (2.6.2) becomes
\[
 W_{\alpha,\beta,\gamma}(x,y) = x^\alpha y^\beta (1-x-y)^\gamma, \qquad \alpha,\beta,\gamma > -1,
\]
on the triangle $T^2 = \{(x,y) : x \ge 0,\ y \ge 0,\ 1-x-y \ge 0\}$, and the orthogonal polynomials $P^n_k$ in (2.6.1) are exactly those in (2.4.2).
Orthogonal polynomials on a parabolic domain Let $w_1(x) = x^a(1-x)^b$ on $[0,1]$, $w_2(x) = (1-x^2)^a$ on $[-1,1]$, and $\rho(x) = \sqrt{x}$. Then the weight function (2.6.2) becomes
\[
 W_{a,b}(x,y) = (1-x)^b\,(x-y^2)^a, \qquad y^2 < x < 1, \quad a, b > -1.
\]
The domain $R = \{(x,y) : y^2 < x < 1\}$ is bounded by a straight line and a parabola. The mutually orthogonal polynomials $P^n_k$ in (2.6.1) are given by
\[
 P^n_k(x,y) = p^{(b,\,a+k+1/2)}_{n-k}(2x-1)\, x^{k/2}\, p^{(a,a)}_k\bigl(y/\sqrt{x}\bigr), \qquad 0 \le k \le n, \qquad (2.6.4)
\]
where $p^{(\alpha',\beta')}_m$ denotes a Jacobi-type orthogonal polynomial for the weight $(1-t)^{\alpha'} t^{\beta'}$ on $[0,1]$, respectively $(1-t^2)^{a}$ on $[-1,1]$; the parameters are determined by $\rho^{2k+1}w_1 = x^{a+k+1/2}(1-x)^b$ and by $w_2$. The set $\{P^n_k : 0 \le k \le n\}$ is a mutually orthogonal basis of $\mathcal{V}^2_n(W_{a,b})$.
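Orthogonality of (2.6.4) can be verified numerically through the substitution $y = t\sqrt{x}$, which factors the inner product into two one-dimensional Jacobi-type integrals. The sketch below uses Gauss-Jacobi quadrature and the parameter assignment reconstructed above (an assumption to be checked against the display (2.6.4)):

```python
import numpy as np
from scipy.special import roots_jacobi, eval_jacobi

a, b = 1.5, 0.5    # illustrative parameters a, b > -1
nmax = 4

# after y = t√x: ∫_0^1 ∫_{-1}^1 f(x, t√x) x^{a+1/2}(1-x)^b (1-t²)^a dt dx  (constants dropped)
ux, wx = roots_jacobi(12, b, a + 0.5)   # weight (1-u)^b (1+u)^{a+1/2} on [-1,1]
x = (ux + 1) / 2                        # mapped to [0,1]
ut, wt = roots_jacobi(12, a, a)

def P(n, k, x, t):
    # basis (2.6.4), evaluated at y = t*sqrt(x)
    return (eval_jacobi(n - k, b, a + k + 0.5, 2 * x - 1)
            * x ** (k / 2) * eval_jacobi(k, a, a, t))

X, T = np.meshgrid(x, ut, indexing='ij')
WQ = np.outer(wx, wt)

def inner(n, k, m, j):
    return np.sum(WQ * P(n, k, X, T) * P(m, j, X, T))

for n in range(nmax + 1):
    for k in range(n + 1):
        assert inner(n, k, n, k) > 0
        for m in range(nmax + 1):
            for j in range(m + 1):
                if (n, k) != (m, j):
                    assert abs(inner(n, k, m, j)) < 1e-10
```

When $k+j$ is odd the $t$-integrand is odd and the symmetric quadrature gives exactly zero, so the half-integer powers of $x$ that then appear do no harm.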
2.6.2 Orthogonal polynomials for a radial weight

A weight function $W$ is called radial if it is of the form $W(x,y) = w(r)$, where $r = \sqrt{x^2+y^2}$. For such a weight function, an orthogonal basis can be given in polar coordinates $(x,y) = (r\cos\theta, r\sin\theta)$.

Proposition 2.6.2 Let $p^{(k)}_m$ denote the orthogonal polynomial of degree $m$ with respect to the even weight function $|r|^{k+1} w(|r|)$ on $\mathbb{R}$; in particular, $p^{(k)}_{2j}$ is an even polynomial. Define
\[
 P_{j,1}(x,y) = p^{(2n-4j)}_{2j}(r)\, r^{n-2j}\cos(n-2j)\theta, \qquad 0 \le j \le \tfrac n2,
\]
\[
 P_{j,2}(x,y) = p^{(2n-4j)}_{2j}(r)\, r^{n-2j}\sin(n-2j)\theta, \qquad 0 \le j < \tfrac n2. \qquad (2.6.5)
\]
Then $\{P_{j,1} : 0 \le j \le n/2\} \cup \{P_{j,2} : 0 \le j < n/2\}$ is a mutually orthogonal basis of $\mathcal{V}^2_n(W)$ with $W(x,y) = w\bigl(\sqrt{x^2+y^2}\bigr)$.
The proof of this proposition follows the same lines as that of Proposition 2.3.3, with an obvious modification. In the case $w(r) = (1-r^2)^{\mu-1/2}$, the basis (2.6.5) is, up to a normalization constant, that given in (2.3.5). Another example gives a second orthogonal basis for the product Hermite weight function $W(x,y) = \mathrm{e}^{-x^2-y^2}$, for which an orthogonal basis was already given in (2.2.1).

Product Hermite weight $w(r) = \mathrm{e}^{-r^2}$ The basis (2.6.5) is given by
\[
 P_{j,1}(x,y) = L^{n-2j}_j(r^2)\, r^{n-2j}\cos(n-2j)\theta, \qquad 0 \le j \le \tfrac n2,
\]
\[
 P_{j,2}(x,y) = L^{n-2j}_j(r^2)\, r^{n-2j}\sin(n-2j)\theta, \qquad 0 \le j < \tfrac n2, \qquad (2.6.6)
\]
in terms of the Laguerre polynomials of one variable.
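The basis (2.6.6) can be checked for mutual orthogonality over $\mathbb{R}^2$ by combining Gauss-Laguerre quadrature in $s = r^2$ with a uniform grid in $\theta$, both of which are exact for the integrands that arise; a numerical sketch:

```python
import numpy as np
from scipy.special import roots_laguerre, eval_genlaguerre

nmax = 4
s, ws = roots_laguerre(30)          # ∫_0^∞ g(s) e^{-s} ds
r = np.sqrt(s)
ntheta = 64
theta = 2 * np.pi * np.arange(ntheta) / ntheta

def basis(n, j, trig):
    # (2.6.6) on the polar grid; trig is np.cos or np.sin
    rad = eval_genlaguerre(j, n - 2 * j, r ** 2) * r ** (n - 2 * j)
    return np.outer(rad, trig((n - 2 * j) * theta))

def inner(f, g):
    # ∫ f g e^{-x²-y²} dx dy = ½ ∫_0^∞ ∫_0^{2π} f g e^{-s} dθ ds,  s = r²
    return 0.5 * (2 * np.pi / ntheta) * np.einsum('i,ij->', ws, f * g)

elems = []
for n in range(nmax + 1):
    for j in range(n // 2 + 1):
        elems.append(((n, j, 'c'), basis(n, j, np.cos)))
        if 2 * j < n:
            elems.append(((n, j, 's'), basis(n, j, np.sin)))

for ida, fa in elems:
    for idb, fb in elems:
        val = inner(fa, fb)
        if ida == idb:
            assert val > 1e-8
        else:
            assert abs(val) < 1e-8
```

Pairs with different angular frequencies vanish exactly through the $\theta$ sum, and pairs with the same frequency reduce to the one-dimensional Laguerre orthogonality in $s$.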
Likewise, we call a weight function $\ell^1$-radial if it is of the form $W(x,y) = w(s)$ with $s = x+y$. For such a weight function, a basis can be given in terms of orthogonal polynomials of one variable.

Proposition 2.6.3 Let $p^{(k)}_m$ denote the orthogonal polynomial of degree $m$ with respect to the weight function $s^{k+1} w(s)$ on $[0,\infty)$. Define
\[
 P^n_k(x,y) = p^{(2k)}_{n-k}(x+y)\,(x+y)^k\, P_k\Bigl(\frac{2x}{x+y}-1\Bigr), \qquad 0 \le k \le n, \qquad (2.6.7)
\]
where $P_k$ is the Legendre polynomial of degree $k$. Then $\{P^n_k : 0 \le k \le n\}$ is a mutually orthogonal basis of $\mathcal{V}^2_n(W)$ with $W(x,y) = w(x+y)$ on $\mathbb{R}^2_+$.
Proof It is evident that $P^n_k$ is a polynomial of degree $n$ in $(x,y)$. To verify the orthogonality we use the integral formula
\[
 \int_{\mathbb{R}^2_+} f(x,y)\,\mathrm{d}x\,\mathrm{d}y = \int_0^\infty \Bigl[\int_0^1 f\bigl(st,\ s(1-t)\bigr)\,\mathrm{d}t\Bigr]\, s\,\mathrm{d}s
\]
and the orthogonality of $p^{(2k)}_m$ and $P_k$.

One example of Proposition 2.6.3 is given by the Jacobi weight (2.4.1) on the triangle for $\alpha = \beta = 0$, for which $w(r) = (1-r)^\gamma$. Another example is given by the product Laguerre weight function $W(x,y) = \mathrm{e}^{-x-y}$, for which a basis was already given in (2.2.2).
Product Laguerre weight $w(r) = \mathrm{e}^{-r}$ The basis (2.6.7) is given by
\[
 P^n_k(x,y) = L^{2k+1}_{n-k}(x+y)\,(x+y)^k\, P_k\Bigl(\frac{2x}{x+y}-1\Bigr), \qquad 0 \le k \le n, \qquad (2.6.8)
\]
in terms of the Laguerre polynomials and Legendre polynomials of one variable.
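The basis (2.6.8) can be tested numerically through the integral formula in the proof of Proposition 2.6.3: with $(x,y) = (st, s(1-t))$ the inner product separates into a Gauss-Laguerre sum in $s$ and a Gauss-Legendre sum in $t$, both exact here. A sketch:

```python
import numpy as np
from scipy.special import (roots_laguerre, roots_legendre,
                           eval_genlaguerre, eval_legendre)

nmax = 4
s, ws = roots_laguerre(30)    # ∫_0^∞ g(s) e^{-s} ds
u, wu = roots_legendre(12)    # ∫_{-1}^1 g(u) du, with u = 2t - 1

def inner(n, k, m, j):
    # ⟨P^n_k, P^m_j⟩ via x = st, y = s(1-t); note 2x/(x+y) - 1 = 2t - 1 = u
    S, U = np.meshgrid(s, u, indexing='ij')
    f = (eval_genlaguerre(n - k, 2 * k + 1, S) * S ** k * eval_legendre(k, U)
         * eval_genlaguerre(m - j, 2 * j + 1, S) * S ** j * eval_legendre(j, U)
         * S)                                   # extra S from s ds
    return 0.5 * np.einsum('i,j,ij->', ws, wu, f)   # dt = du/2

for n in range(nmax + 1):
    for k in range(n + 1):
        for m in range(nmax + 1):
            for j in range(m + 1):
                val = inner(n, k, m, j)
                if (n, k) == (m, j):
                    assert val > 1e-6
                else:
                    assert abs(val) < 1e-6
```

The $t$-integral gives the Legendre orthogonality $\delta_{k,j}$, after which the $s$-integral is the Laguerre orthogonality with respect to $s^{2k+1}\mathrm{e}^{-s}$.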
2.6.3 Orthogonal polynomials in complex variables

For a real-valued weight function $W$ defined on $\Omega \subset \mathbb{R}^2$, orthogonal polynomials of two variables with respect to $W$ can be given in complex variables $z$ and $\bar z$. For this purpose, we identify $\mathbb{R}^2$ with the complex plane $\mathbb{C}$ by setting $z = x+\mathrm{i}y$ and regard $\Omega$ as a subset of $\mathbb{C}$. We then consider polynomials in $z$ and $\bar z$ that are orthogonal with respect to the inner product
\[
 \langle f, g\rangle^{\mathbb{C}}_W := \int_\Omega f(z,\bar z)\, \overline{g(z,\bar z)}\, w(z)\,\mathrm{d}x\,\mathrm{d}y, \qquad (2.6.9)
\]
where $w(z) = W(x,y)$. Let $\mathcal{V}^2_n(W,\mathbb{C})$ denote the space of orthogonal polynomials in $z$ and $\bar z$ with respect to the inner product (2.6.9). In this subsection we denote by $P_{k,n}(x,y)$ real orthogonal polynomials with respect to $W$ and denote by $Q_{k,n}(z,\bar z)$ orthogonal polynomials in $\mathcal{V}^2_n(W,\mathbb{C})$.

Proposition 2.6.4 The space $\mathcal{V}^2_n(W,\mathbb{C})$ has a basis $\{Q_{k,n}\}$ that satisfies
\[
 \overline{Q_{k,n}(z,\bar z)} = Q_{n-k,n}(z,\bar z), \qquad 0 \le k \le n. \qquad (2.6.10)
\]
Proof Let $\{P_{k,n}(x,y) : 0 \le k \le n\}$ be a basis of $\mathcal{V}^2_n(W)$. We make the following definitions:
\[
 Q_{k,n}(z,\bar z) := \tfrac{1}{\sqrt2}\bigl[P_{k,n}(x,y) - \mathrm{i}\, P_{n-k,n}(x,y)\bigr], \qquad 0 \le k < \tfrac n2,
\]
\[
 Q_{k,n}(z,\bar z) := \tfrac{1}{\sqrt2}\bigl[P_{n-k,n}(x,y) + \mathrm{i}\, P_{k,n}(x,y)\bigr], \qquad \tfrac n2 < k \le n,
\]
\[
 Q_{n/2,n}(z,\bar z) := P_{n/2,n}(x,y) \quad\text{if $n$ is even}. \qquad (2.6.11)
\]
If $f$ and $g$ are real-valued polynomials then $\langle f, g\rangle^{\mathbb{C}}_W = \langle f, g\rangle_W$. Hence, it is easy to see that $\{Q_{k,n} : 0 \le k \le n\}$ is a basis of $\mathcal{V}^2_n(W,\mathbb{C})$ and that this basis satisfies (2.6.10).

Conversely, given $\{Q_{k,n} : 0 \le k \le n\} \subset \mathcal{V}^2_n(W,\mathbb{C})$ that satisfies (2.6.10), we can define
\[
 P_{k,n}(x,y) := \tfrac{1}{\sqrt2}\bigl[Q_{k,n}(z,\bar z) + Q_{n-k,n}(z,\bar z)\bigr], \qquad 0 \le k < \tfrac n2,
\]
\[
 P_{k,n}(x,y) := \tfrac{1}{\sqrt2\,\mathrm{i}}\bigl[Q_{k,n}(z,\bar z) - Q_{n-k,n}(z,\bar z)\bigr], \qquad \tfrac n2 < k \le n,
\]
together with $P_{n/2,n} := Q_{n/2,n}$ if $n$ is even. \qquad (2.6.12)

It is easy to see that this is exactly the converse of (2.6.11). The relation (2.6.10) implies that the $P_{k,n}$ are real polynomials. In fact, $P_{k,n}(x,y) = \sqrt2\,\operatorname{Re} Q_{k,n}(z,\bar z)$ for $0 \le k < n/2$ and $P_{k,n}(x,y) = \sqrt2\,\operatorname{Im} Q_{k,n}(z,\bar z)$ for $n/2 < k \le n$. We summarize the relation below.
Theorem 2.6.5 Let $P_{k,n}$ and $Q_{k,n}$ be related by (2.6.11) or, equivalently, by (2.6.12). Then:

(i) $\{Q_{k,n} : 0 \le k \le n\}$ is a basis of $\mathcal{V}^2_n(W,\mathbb{C})$ that satisfies (2.6.10) if and only if $\{P_{k,n} : 0 \le k \le n\}$ is a basis of $\mathcal{V}^2_n(W)$;
(ii) $\{Q_{k,n} : 0 \le k \le n\}$ is an orthonormal basis of $\mathcal{V}^2_n(W,\mathbb{C})$ if and only if $\{P_{k,n} : 0 \le k \le n\}$ is an orthonormal basis of $\mathcal{V}^2_n(W)$.
Expressing orthogonal polynomials in complex variables can often lead to formulas that are more symmetric and easier to work with, as can be seen from the following two examples.

Complex Hermite polynomials
These are orthogonal with respect to the product Hermite weight
\[
 w_H(z) = \mathrm{e}^{-x^2-y^2} = \mathrm{e}^{-|z|^2}, \qquad z = x+\mathrm{i}y \in \mathbb{C}.
\]
We define these polynomials by
\[
 H_{k,j}(z,\bar z) := (-1)^{k+j}\, \mathrm{e}^{z\bar z}\, \Bigl(\frac{\partial}{\partial z}\Bigr)^k \Bigl(\frac{\partial}{\partial\bar z}\Bigr)^j \mathrm{e}^{-z\bar z}, \qquad k, j = 0, 1, \dots,
\]
where, with $z = x+\mathrm{i}y$,
\[
 \frac{\partial}{\partial z} = \frac12\Bigl(\frac{\partial}{\partial x} - \mathrm{i}\frac{\partial}{\partial y}\Bigr) \quad\text{and}\quad
 \frac{\partial}{\partial\bar z} = \frac12\Bigl(\frac{\partial}{\partial x} + \mathrm{i}\frac{\partial}{\partial y}\Bigr).
\]
By induction, it is not difficult to see that the $H_{k,j}$ satisfy an explicit formula:
\[
 H_{k,j}(z,\bar z) = k!\,j! \sum_{\nu=0}^{\min\{k,j\}} \frac{(-1)^\nu}{\nu!}\, \frac{\bar z^{\,k-\nu}\, z^{\,j-\nu}}{(k-\nu)!\,(j-\nu)!}
 = \bar z^{\,k} z^{\,j}\, {}_2F_0\Bigl(-k, -j;\ \frac{-1}{z\bar z}\Bigr),
\]
from which it follows immediately that (2.6.10) holds, that is,
\[
 \overline{H_{k,j}(z,\bar z)} = H_{j,k}(z,\bar z). \qquad (2.6.13)
\]
Working with the above explicit formula by rewriting the summation in the ${}_2F_0$ in reverse order, it is easy to deduce from $(x)_{j-i} = (-1)^i (x)_j/(1-j-x)_i$ that $H_{k,j}$ can be written as a summation in ${}_1F_1$, which leads to
\[
 H_{k,j}(z,\bar z) = (-1)^j\, j!\, \bar z^{\,k-j}\, L^{k-j}_j(|z|^2), \qquad k \ge j, \qquad (2.6.14)
\]
where $L^\alpha_j$ is a Laguerre polynomial.

Proposition 2.6.6 The complex Hermite polynomials satisfy the following properties:

(i) $\dfrac{\partial}{\partial\bar z} H_{k,j} = zH_{k,j} - H_{k,j+1}$, $\quad \dfrac{\partial}{\partial z} H_{k,j} = \bar z H_{k,j} - H_{k+1,j}$;
(ii) $\bar z H_{k,j} = H_{k+1,j} + jH_{k,j-1}$, $\quad zH_{k,j} = H_{k,j+1} + kH_{k-1,j}$;
(iii) $\displaystyle \int_{\mathbb{C}} H_{k,j}(z,\bar z)\, \overline{H_{m,l}(z,\bar z)}\, w_H(z)\,\mathrm{d}x\,\mathrm{d}y = \pi\, j!\,k!\, \delta_{k,m}\,\delta_{j,l}$.
Proof Part (i) follows directly from the above definition of the polynomials. Part (ii) follows from (2.6.14) and the identity $L^\alpha_n(x) = L^{\alpha+1}_n(x) - L^{\alpha+1}_{n-1}(x)$. The orthogonality relation (iii) follows from writing the integral in polar coordinates and applying the orthogonality of the Laguerre polynomials.
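The explicit formula, the symmetry (2.6.13), the Laguerre expression (2.6.14) and the recurrences in part (ii) are easy to test numerically at random complex points, using the conventions fixed above (a sketch; the bar placement follows the reconstruction of the explicit formula):

```python
import math
import numpy as np
from scipy.special import eval_genlaguerre

def H(k, j, z):
    # explicit formula: H_{k,j} = k! j! Σ_ν (-1)^ν/ν! zbar^{k-ν} z^{j-ν} / ((k-ν)!(j-ν)!)
    zb = np.conjugate(z)
    s = 0
    for v in range(min(k, j) + 1):
        s = s + ((-1) ** v / math.factorial(v)
                 * zb ** (k - v) * z ** (j - v)
                 / (math.factorial(k - v) * math.factorial(j - v)))
    return math.factorial(k) * math.factorial(j) * s

rng = np.random.default_rng(0)
zs = rng.normal(size=8) + 1j * rng.normal(size=8)

# (2.6.13) and (2.6.14)
for k in range(4):
    for j in range(4):
        assert np.allclose(np.conjugate(H(k, j, zs)), H(j, k, zs))
        if k >= j:
            rhs = ((-1) ** j * math.factorial(j) * np.conjugate(zs) ** (k - j)
                   * eval_genlaguerre(j, k - j, np.abs(zs) ** 2))
            assert np.allclose(H(k, j, zs), rhs)

# recurrences (ii)
for k in range(1, 4):
    for j in range(1, 4):
        assert np.allclose(np.conjugate(zs) * H(k, j, zs),
                           H(k + 1, j, zs) + j * H(k, j - 1, zs))
        assert np.allclose(zs * H(k, j, zs),
                           H(k, j + 1, zs) + k * H(k - 1, j, zs))
```

A small instance such as $\bar z H_{1,1} = H_{2,1} + H_{1,0}$, i.e. $\bar z(z\bar z - 1) = (\bar z^2 z - 2\bar z) + \bar z$, can also be checked by hand.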
Disk polynomials
Let $\mathrm{d}\sigma_\lambda(z) := W_{\lambda+1/2}(x,y)\,\mathrm{d}x\,\mathrm{d}y$ and define
\[
 P^\lambda_{k,j}(z,\bar z) = \frac{j!}{(\lambda+1)_j}\, P^{(\lambda,\,k-j)}_j(2|z|^2-1)\, z^{k-j}, \qquad k \ge j, \qquad (2.6.15)
\]
which is normalized by $P^\lambda_{k,j}(1,1) = 1$. For $k \le j$, we use
\[
 P^\lambda_{k,j}(z,\bar z) = \overline{P^\lambda_{j,k}(z,\bar z)}. \qquad (2.6.16)
\]
Written in terms of hypergeometric functions, these polynomials take a more symmetric form:
\[
 P^\lambda_{k,j}(z,\bar z) = \frac{(\lambda+1)_{k+j}}{(\lambda+1)_k\,(\lambda+1)_j}\, z^k \bar z^{\,j}\,
 {}_2F_1\Bigl({-k,\ -j \atop -\lambda-k-j};\ \frac{1}{z\bar z}\Bigr), \qquad k, j \ge 0.
\]
To see this, write the summation in the ${}_2F_1$ sum in reverse order and use $(x)_{j-i} = (-1)^i (x)_j/(1-j-x)_i$ to get
\[
 P^\lambda_{k,j}(z,\bar z) = \frac{(k-j+1)_j}{(\lambda+1)_j}\,(-1)^j\,
 {}_2F_1\Bigl({-j,\ k+\lambda+1 \atop k-j+1};\ |z|^2\Bigr)\, z^{k-j},
\]
which is (2.6.15) by Proposition 1.4.14 and $P^{(\alpha,\beta)}_n(-t) = (-1)^n P^{(\beta,\alpha)}_n(t)$.

Proposition 2.6.7 The disk polynomials satisfy the following properties:

(i) $|P^\lambda_{k,j}(z,\bar z)| \le 1$ for $|z| \le 1$ and $\lambda \ge 0$;
(ii) $\displaystyle zP^\lambda_{k,j}(z,\bar z) = \frac{\lambda+k+1}{\lambda+k+j+1}\, P^\lambda_{k+1,j}(z,\bar z) + \frac{j}{\lambda+k+j+1}\, P^\lambda_{k,j-1}(z,\bar z)$,
with a similar relation for $\bar z P^\lambda_{k,j}(z,\bar z)$ upon using (2.6.16);
(iii) $\displaystyle \int_{B^2} P^\lambda_{k,j}\, \overline{P^\lambda_{m,l}}\,\mathrm{d}\sigma_\lambda(z) = \frac{\lambda+1}{\lambda+k+j+1}\, \frac{k!\,j!}{(\lambda+1)_k\,(\lambda+1)_j}\, \delta_{k,m}\,\delta_{j,l}$.

Proof Part (i) follows on taking absolute values inside the ${}_2F_1$ sum and using the fact that $|P^{(\alpha,\beta)}_n(1)| = (\alpha+1)_n/n!$; the condition $\lambda \ge 0$ is used to ensure that the coefficients in the ${}_2F_1$ sum are positive. The recursive relation (ii) is proved using the following formula for Jacobi polynomials:
\[
 (2n+\alpha+\beta+1)\, P^{(\alpha,\beta)}_n(t) = (n+\alpha+\beta+1)\, P^{(\alpha,\beta+1)}_n(t) + (n+\alpha)\, P^{(\alpha,\beta+1)}_{n-1}(t),
\]
which can be verified using the ${}_2F_1$ expansion of Jacobi polynomials in Proposition 1.4.14. Finally, the orthogonality relation (iii) is established using the polar coordinates $z = r\mathrm{e}^{\mathrm{i}\theta}$ and the structure constants of the Jacobi polynomials.
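The normalization, the bound in part (i) and the recurrence in part (ii) can be checked numerically from (2.6.15)-(2.6.16) with scipy's Jacobi evaluations (an illustrative sketch; the value of $\lambda$ is arbitrary):

```python
import math
import numpy as np
from scipy.special import eval_jacobi, poch

def disk_poly(lam, k, j, z):
    # disk polynomial P^λ_{k,j}, normalized so that P^λ_{k,j}(1,1) = 1
    if k < j:
        return np.conjugate(disk_poly(lam, j, k, z))
    return (math.factorial(j) / poch(lam + 1, j)
            * eval_jacobi(j, lam, k - j, 2 * np.abs(z) ** 2 - 1) * z ** (k - j))

lam = 1.3    # any λ ≥ 0
rng = np.random.default_rng(1)
w = rng.uniform(0, 1, 20) * np.exp(2j * np.pi * rng.uniform(size=20))  # points in the disk

# normalization at z = 1 and boundedness, Proposition 2.6.7(i)
for k in range(4):
    for j in range(4):
        assert np.isclose(disk_poly(lam, k, j, np.array(1.0 + 0j)), 1.0)
        assert np.all(np.abs(disk_poly(lam, k, j, w)) <= 1 + 1e-12)

# recurrence, Proposition 2.6.7(ii)
for k in range(4):
    for j in range(1, 4):
        lhs = w * disk_poly(lam, k, j, w)
        rhs = ((lam + k + 1) * disk_poly(lam, k + 1, j, w)
               + j * disk_poly(lam, k, j - 1, w)) / (lam + k + j + 1)
        assert np.allclose(lhs, rhs)
```

For instance, with $k = 0$, $j = 1$ the recurrence reduces to $z\bar z = \frac{\lambda+1}{\lambda+2}P^\lambda_{1,1} + \frac{1}{\lambda+2}$, which is immediate from $P^\lambda_{1,1}(z,\bar z) = [(\lambda+1)+(\lambda+2)(|z|^2-1)]/(\lambda+1)$.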
2.7 First Family of Koornwinder Polynomials

The Koornwinder polynomials are based on symmetric polynomials. Throughout this section let $w$ be a weight function defined on $[-1,1]$. For $\gamma > -1$, let us define a weight function of two variables,
\[
 B_\gamma(x,y) := w(x)\, w(y)\, |x-y|^{2\gamma+1}, \qquad (x,y) \in [-1,1]^2. \qquad (2.7.1)
\]
Since $B_\gamma$ is evidently symmetric in $x, y$, we need only consider its restriction to the triangular domain defined by
\[
 \Delta := \{(x,y) : -1 < x < y < 1\}.
\]
Let $\Omega$ be the image of $\Delta$ under the mapping $(x,y) \mapsto (u,v)$ defined by
\[
 u = x+y, \qquad v = xy. \qquad (2.7.2)
\]
This mapping is a bijection between $\Delta$ and $\Omega$, and the Jacobian of the change of variables is $\mathrm{d}u\,\mathrm{d}v = |x-y|\,\mathrm{d}x\,\mathrm{d}y$. The domain $\Omega$ is given by
\[
 \Omega := \{(u,v) : 1+u+v > 0,\ 1-u+v > 0,\ u^2 > 4v\} \qquad (2.7.3)
\]
and is depicted in Figure 2.1. Under the mapping (2.7.2), the weight function $B_\gamma$ becomes a weight function defined on the domain $\Omega$ by
\[
 W_\gamma(u,v) := w(x)\, w(y)\, (u^2-4v)^\gamma, \qquad (u,v) \in \Omega, \qquad (2.7.4)
\]
where the variables $(x,y)$ and $(u,v)$ are related by (2.7.2).

Figure 2.1 Domains for Koornwinder orthogonal polynomials.
Proposition 2.7.1 Let $\mathcal{N} = \{(k,n) : 0 \le k \le n\}$. In $\mathcal{N}$ define an order $\prec$ as follows: $(j,m) \prec (k,n)$ if $m < n$, or if $m = n$ and $j < k$. Define monic polynomials $P^{(\gamma)}_{k,n}$ under the order $\prec$,
\[
 P^{(\gamma)}_{k,n}(u,v) = u^{n-k} v^k + \sum_{(j,m)\prec(k,n)} a_{j,m}\, u^{m-j} v^j, \qquad (2.7.5)
\]
that satisfy the orthogonality condition
\[
 \int_\Omega P^{(\gamma)}_{k,n}(u,v)\, u^{m-j} v^j\, W_\gamma(u,v)\,\mathrm{d}u\,\mathrm{d}v = 0, \qquad (j,m) \prec (k,n). \qquad (2.7.6)
\]
Then these polynomials are uniquely determined and are mutually orthogonal with respect to $W_\gamma$.

Proof It is easy to see that $\prec$ is a total order, so that the monomials can be ordered linearly with this order. Applying the Gram-Schmidt orthogonalization process to the monomials so ordered, the uniqueness follows from the fact that $P^{(\gamma)}_{k,n}$ has leading coefficient 1.

In the cases $\gamma = \pm\frac12$, the orthogonal polynomials $P^{(\gamma)}_{k,n}$ can be given explicitly in terms of orthogonal polynomials of one variable. Let $\{p_n\}_{n=0}^\infty$ be the sequence of monic orthogonal polynomials, $p_n(x) = x^n + \text{lower-degree terms}$, with respect to $w$.
Proposition 2.7.2 The polynomials $P^{(\gamma)}_{k,n}$ for $\gamma = \pm\frac12$ are given by
\[
 P^{(-1/2)}_{k,n}(u,v) = \begin{cases} p_n(x)\,p_k(y) + p_n(y)\,p_k(x), & k < n,\\[2pt] p_n(x)\,p_n(y), & k = n, \end{cases} \qquad (2.7.7)
\]
and
\[
 P^{(1/2)}_{k,n}(u,v) = \frac{p_{n+1}(x)\,p_k(y) - p_{n+1}(y)\,p_k(x)}{x-y}, \qquad (2.7.8)
\]
where $(u,v)$ and $(x,y)$ are related by (2.7.2).

Proof By the fundamental theorem of symmetric polynomials, each symmetric polynomial in $x, y$ can be written as a polynomial in the elementary symmetric polynomials $x+y$ and $xy$. A quick computation shows that
\[
 x^n y^k + y^n x^k = u^{n-k} v^k + \text{lower-degree terms in } (u,v),
\]
so that the right-hand sides of both (2.7.7) and (2.7.8) are of the form (2.7.5). Furthermore, the change of variables $(x,y) \mapsto (u,v)$ shows that
\[
 \int_\Omega f(u,v)\, W_\gamma(u,v)\,\mathrm{d}u\,\mathrm{d}v = \int_\Delta f(x+y, xy)\, B_\gamma(x,y)\,\mathrm{d}x\,\mathrm{d}y
 = \frac12 \int_{[-1,1]^2} f(x+y, xy)\, B_\gamma(x,y)\,\mathrm{d}x\,\mathrm{d}y, \qquad (2.7.9)
\]
where the second equality follows since the integrand is a symmetric function of $x$ and $y$ and $[-1,1]^2$ is the union of $\Delta$ and its image under $(x,y) \mapsto (y,x)$. Using (2.7.9), the orthogonality of $P^{(-1/2)}_{k,n}$ and $P^{(1/2)}_{k,n}$ in the sense of (2.7.6) can be verified directly from the orthogonality of the $p_n$. The proof is then complete upon using Proposition 2.7.1.
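For a concrete instance of Proposition 2.7.2, take $w$ to be the Chebyshev weight $(1-x^2)^{-1/2}$, so that the monic $p_n$ are rescaled Chebyshev polynomials and $B_{-1/2} = w(x)w(y)$. The $\gamma = -\frac12$ polynomials (2.7.7) can then be tested for mutual orthogonality through (2.7.9) with Gauss-Chebyshev quadrature (a numerical sketch, not from the text):

```python
import numpy as np

# Gauss-Chebyshev: ∫_{-1}^1 g(x)(1-x²)^{-1/2} dx ≈ (π/N) Σ g(x_i)
N = 24
x = np.cos((2 * np.arange(N) + 1) * np.pi / (2 * N))

def p(n, t):
    # monic Chebyshev polynomials, orthogonal w.r.t. (1-t²)^{-1/2}
    return np.cos(n * np.arccos(t)) / 2.0 ** max(n - 1, 0)

def P(k, n, X, Y):
    # Koornwinder polynomials (2.7.7) for γ = -1/2, through the variables (x, y)
    if k == n:
        return p(n, X) * p(n, Y)
    return p(n, X) * p(k, Y) + p(n, Y) * p(k, X)

X, Y = np.meshgrid(x, x)

def inner(f, g):
    # (2.7.9): ½ ∫_{[-1,1]²} f g w(x) w(y) dx dy
    return 0.5 * (np.pi / N) ** 2 * np.sum(f * g)

pairs = [(k, n) for n in range(5) for k in range(n + 1)]
for k, n in pairs:
    for j, m in pairs:
        val = inner(P(k, n, X, Y), P(j, m, X, Y))
        if (k, n) == (j, m):
            assert val > 1e-8
        else:
            assert abs(val) < 1e-10
```

Expanding the product shows why this works: the square integral produces terms $\delta_{n,m}\delta_{k,j}$ and $\delta_{n,j}\delta_{k,m}$, and with $k \le n$, $j \le m$ the second can only occur when both pairs coincide.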
Of particular interest is the case when $w$ is the Jacobi weight function $w(x) = (1-x)^\alpha(1+x)^\beta$, for which the weight function $W_\gamma$ becomes
\[
 W_{\alpha,\beta,\gamma}(u,v) = (1-u+v)^\alpha\,(1+u+v)^\beta\,(u^2-4v)^\gamma, \qquad (2.7.10)
\]
where $\alpha,\beta,\gamma > -1$, $\alpha+\gamma+\frac32 > 0$ and $\beta+\gamma+\frac32 > 0$; these conditions guarantee that $W_{\alpha,\beta,\gamma}$ is integrable. The polynomials $p_n$ in (2.7.7) and (2.7.8) are replaced by monic Jacobi polynomials $P^{(\alpha,\beta)}_n$, and we denote the orthogonal polynomials $P^{(\gamma)}_{k,n}$ in Proposition 2.7.1 by $P^{\alpha,\beta,\gamma}_{k,n}$. This is the case originally studied by Koornwinder. These polynomials are eigenfunctions of a second-order differential operator.
Proposition 2.7.3 Consider the differential operator
L
,,
g =(u
2
+2v +2)g
uu
+2u(1v)g
uv
+(u
2
2v
2
2v)g
vv
+[( + +2 +3)u+2( )] g
u
+[( )u(2 +2 +2 +5)v (2 +1)] g
v
.
Then, for (k, n) N ,
L
,,
P
,,
k,n
=
,,
k,n
P
,,
k,n
, (2.7.11)
where
,,
k,n
:= n(n+ + +2 +2) k(k + + +1).
Proof  A straightforward computation shows that, for 0 ≤ k ≤ n,

L_{α,β,γ} u^{n−k} v^k = λ^{α,β,γ}_{k,n} u^{n−k} v^k + Σ_{(j,m)≺(k,n)} b_{j,m} u^{m−j} v^j   (2.7.12)

for some b_{j,m}, which implies that L_{α,β,γ} P^{α,β,γ}_{k,n} is a polynomial of the same form and degree. From the definition of L_{α,β,γ} it is easy to verify directly that L_{α,β,γ} is self-adjoint in L²(Ω, W_{α,β,γ}) so that, for (j, m) ∈ N with (j, m) ≠ (k, n),
∫_Ω (L_{α,β,γ} P^{α,β,γ}_{k,n}) P^{α,β,γ}_{j,m} W_{α,β,γ} du dv
    = ∫_Ω P^{α,β,γ}_{k,n} (L_{α,β,γ} P^{α,β,γ}_{j,m}) W_{α,β,γ} du dv = 0,

by (2.7.6). By the uniqueness referred to in Proposition 2.7.1, we conclude that L_{α,β,γ} P^{α,β,γ}_{k,n} is a constant multiple of P^{α,β,γ}_{k,n} and that the constant is exactly λ^{α,β,γ}_{k,n}, as can be seen from (2.7.12).
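The coefficients of L_{α,β,γ} can be checked against (2.7.12) symbolically. The sketch below (our own verification code; the helper names are hypothetical) applies the operator to a monomial u^{n−k}v^k, with polynomials stored as dicts {(i, j): coefficient} for u^i v^j, and compares the coefficient of u^{n−k}v^k with the stated eigenvalue.

```python
from fractions import Fraction as F

def pmul(p, q):  # product of two polynomials in (u, v)
    out = {}
    for (i, j), a in p.items():
        for (k, l), b in q.items():
            out[(i + k, j + l)] = out.get((i + k, j + l), F(0)) + a * b
    return out

def pdiff(p, du, dv):  # mixed partial derivative of order (du, dv)
    out = {}
    for (i, j), c in p.items():
        if i >= du and j >= dv:
            for t in range(du):
                c *= i - t
            for t in range(dv):
                c *= j - t
            out[(i - du, j - dv)] = c
    return out

def L_abc(g, al, be, ga):  # the operator L_{alpha,beta,gamma} of Proposition 2.7.3
    terms = [
        ({(2, 0): F(-1), (0, 1): F(2), (0, 0): F(2)}, (2, 0)),    # coeff of g_uu
        ({(1, 0): F(2), (1, 1): F(-2)}, (1, 1)),                  # coeff of g_uv
        ({(2, 0): F(1), (0, 2): F(-2), (0, 1): F(-2)}, (0, 2)),   # coeff of g_vv
        ({(1, 0): -(al + be + 2 * ga + 3), (0, 0): 2 * (be - al)}, (1, 0)),
        ({(1, 0): be - al, (0, 1): -(2 * al + 2 * be + 2 * ga + 5),
          (0, 0): -(2 * ga + 1)}, (0, 1)),
    ]
    out = {}
    for coef, order in terms:
        for m, v in pmul(coef, pdiff(g, *order)).items():
            out[m] = out.get(m, F(0)) + v
    return {m: v for m, v in out.items() if v != 0}

def eig(n, k, al, be, ga):  # lambda^{alpha,beta,gamma}_{k,n}
    return -n * (n + al + be + 2 * ga + 2) - k * (k + al + be + 1)
```

For α = β = γ = 0 and g = uv one finds L g = u − 10uv, whose uv-coefficient matches λ_{1,2} = −10.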
It should be noted that although L_{α,β,γ} is of the same type as the differential operator in Definition 2.5.1, it is not admissible, since the eigenvalues λ^{α,β,γ}_{k,n} depend on both k and n.
2.8 A Related Family of Orthogonal Polynomials
We consider the family of weight functions defined by

𝒲_{α,β,γ}(x, y) := |x − y|^{2α+1} |x + y|^{2β+1} (1 − x²)^γ (1 − y²)^γ   (2.8.1)

for (x, y) ∈ [−1, 1]², where α, β, γ > −1, α + γ + 3/2 > 0 and β + γ + 3/2 > 0. These weight functions are related to the W_{α,β,γ} defined in (2.7.10). Indeed, let Ω in (2.7.3) be the domain of W_{α,β,γ} and define △ by

△ := {(x, y) : −1 < y < x < −y < 1},

which is a quadrant of [−1, 1]². We have the following relation:
Proposition 2.8.1  The mapping (x, y) → (2xy, x² + y² − 1) is a bijection from △ onto Ω, and

𝒲_{α,β,γ}(x, y) = 4^{−γ} |x² − y²| W_{α,β,γ}(2xy, x² + y² − 1)   (2.8.2)

for (x, y) ∈ [−1, 1]². Further,

4^{−γ} ∫_Ω f(u, v) W_{α,β,γ}(u, v) du dv
    = ∫_{[−1,1]²} f(2xy, x² + y² − 1) 𝒲_{α,β,γ}(x, y) dx dy.   (2.8.3)
Proof  For (x, y) ∈ [−1, 1]², let us write x = cos θ and y = cos φ, 0 ≤ θ, φ ≤ π. Then it is easy to see that

2xy = cos(θ − φ) + cos(θ + φ),   x² + y² − 1 = cos(θ − φ) cos(θ + φ),

from which it follows readily that (2xy, x² + y² − 1) ∈ Ω. The identity (2.8.2) follows from a straightforward verification. For the change of variables u = 2xy and v = x² + y² − 1, we have du dv = 4|x² − y²| dx dy, from which the integration relation (2.8.3) follows.
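Since both sides of (2.8.2) are elementary expressions, the identity, and in particular the sign −γ in the power of 4, can be spot-checked numerically at random points of [−1, 1]². The script below is our own quick sketch of such a check; the parameter values are arbitrary.

```python
import random

def W(u, v, al, be, ga):  # the weight function (2.7.10)
    return (1 - u + v) ** al * (1 + u + v) ** be * (u * u - 4 * v) ** ga

def scriptW(x, y, al, be, ga):  # the weight function (2.8.1)
    return (abs(x - y) ** (2 * al + 1) * abs(x + y) ** (2 * be + 1)
            * (1 - x * x) ** ga * (1 - y * y) ** ga)

def check(al, be, ga, trials=500, seed=1):
    rng = random.Random(seed)
    worst = 0.0
    for _ in range(trials):
        x, y = rng.uniform(-1, 1), rng.uniform(-1, 1)
        # under u = 2xy, v = x^2 + y^2 - 1 one has 1 - u + v = (x - y)^2,
        # 1 + u + v = (x + y)^2 and u^2 - 4v = 4(1 - x^2)(1 - y^2) >= 0
        lhs = scriptW(x, y, al, be, ga)
        rhs = (4.0 ** (-ga) * abs(x * x - y * y)
               * W(2 * x * y, x * x + y * y - 1, al, be, ga))
        worst = max(worst, abs(lhs - rhs))
    return worst
```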
An orthogonal basis for 𝒲_{α,β,γ} can be given accordingly. Let P^{α,β,γ}_{k,n}, 0 ≤ k ≤ n, denote a basis of mutually orthogonal polynomials of degree n for W_{α,β,γ}.
Theorem 2.8.2  For n ∈ ℕ₀, a mutually orthogonal basis of V_{2n}(𝒲_{α,β,γ}) is given by

₁Q^{α,β,γ}_{k,2n}(x, y) := P^{α,β,γ}_{k,n}(2xy, x² + y² − 1),  0 ≤ k ≤ n,
₂Q^{α,β,γ}_{k,2n}(x, y) := (x² − y²) P^{α+1,β+1,γ}_{k,n−1}(2xy, x² + y² − 1),  0 ≤ k ≤ n − 1,

and a mutually orthogonal basis of V_{2n+1}(𝒲_{α,β,γ}) is given by

₁Q^{α,β,γ}_{k,2n+1}(x, y) := (x + y) P^{α,β+1,γ}_{k,n}(2xy, x² + y² − 1),  0 ≤ k ≤ n,
₂Q^{α,β,γ}_{k,2n+1}(x, y) := (x − y) P^{α+1,β,γ}_{k,n}(2xy, x² + y² − 1),  0 ≤ k ≤ n.

In particular, when γ = ±1/2 the basis can be given in terms of the Jacobi polynomials of one variable, upon using (2.7.7) and (2.7.8).
Proof  These polynomials evidently form a basis if they are orthogonal. Let us denote by ⟨·, ·⟩_{W_{α,β,γ}} and ⟨·, ·⟩_{𝒲_{α,β,γ}} the inner products in L²(Ω, W_{α,β,γ}) and in L²([−1, 1]², 𝒲_{α,β,γ}), respectively. By (2.8.3), for 0 ≤ j ≤ m and 0 ≤ k ≤ n,

⟨₁Q^{α,β,γ}_{k,2n}, ₁Q^{α,β,γ}_{j,2m}⟩_{𝒲_{α,β,γ}} = ⟨P^{α,β,γ}_{k,n}, P^{α,β,γ}_{j,m}⟩_{W_{α,β,γ}} = 0

for (j, 2m) ≠ (k, 2n) and, furthermore,

⟨₁Q^{α,β,γ}_{k,2n}, ₂Q^{α,β,γ}_{j,2m}⟩_{𝒲_{α,β,γ}}
    = ∫_{[−1,1]²} (x² − y²) P^{α,β,γ}_{k,n}(2xy, x² + y² − 1) P^{α+1,β+1,γ}_{j,m−1}(2xy, x² + y² − 1) 𝒲_{α,β,γ}(x, y) dx dy.

The right-hand side of the above equation changes sign under the change of variables (x, y) → (y, x), which shows that ⟨₁Q^{α,β,γ}_{k,2n}, ₂Q^{α,β,γ}_{j,2m}⟩_{𝒲_{α,β,γ}} = 0. Moreover, since (x² − y²)² 𝒲_{α,β,γ}(x, y) dx dy is equal to a constant multiple of W_{α+1,β+1,γ}(u, v) du dv, we see that

⟨₂Q^{α,β,γ}_{k,2n}, ₂Q^{α,β,γ}_{j,2m}⟩_{𝒲_{α,β,γ}} = ⟨P^{α+1,β+1,γ}_{k,n−1}, P^{α+1,β+1,γ}_{j,m−1}⟩_{W_{α+1,β+1,γ}} = 0

for (k, 2n) ≠ (j, 2m). Furthermore,

⟨₁Q^{α,β,γ}_{k,2n}, ₁Q^{α,β,γ}_{j,2m+1}⟩_{𝒲_{α,β,γ}}
    = ∫_{[−1,1]²} (x + y) P^{α,β,γ}_{k,n}(2xy, x² + y² − 1) P^{α,β+1,γ}_{j,m}(2xy, x² + y² − 1) 𝒲_{α,β,γ}(x, y) dx dy,

which is equal to zero since the right-hand side changes sign under (x, y) → (−x, −y). The same proof shows also that ⟨₁Q^{α,β,γ}_{k,2n}, ₂Q^{α,β,γ}_{j,2m+1}⟩_{𝒲_{α,β,γ}} = 0. Together, these facts prove the orthogonality of ₁Q^{α,β,γ}_{k,2n} and ₂Q^{α,β,γ}_{k,2n}.

Since (x − y)(x + y) = x² − y² changes sign under (x, y) → (y, x), the same consideration shows that ⟨₁Q^{α,β,γ}_{k,2n+1}, ₂Q^{α,β,γ}_{j,2m+1}⟩_{𝒲_{α,β,γ}} = 0. Finally,

⟨₁Q^{α,β,γ}_{k,2n+1}, ₁Q^{α,β,γ}_{j,2m+1}⟩_{𝒲_{α,β,γ}} = ⟨P^{α,β+1,γ}_{k,n}, P^{α,β+1,γ}_{j,m}⟩_{W_{α,β+1,γ}} = 0,
⟨₂Q^{α,β,γ}_{k,2n+1}, ₂Q^{α,β,γ}_{j,2m+1}⟩_{𝒲_{α,β,γ}} = ⟨P^{α+1,β,γ}_{k,n}, P^{α+1,β,γ}_{j,m}⟩_{W_{α+1,β,γ}} = 0

for (k, n) ≠ (j, m), proving the orthogonality of ᵢQ^{α,β,γ}_{k,2n+1}, i = 1, 2.
The structure of the basis in Theorem 2.8.2 can also be extended to the more general weight function 𝒲_γ(x, y) = |x² − y²| W_γ(2xy, x² + y² − 1), with W_γ as in (2.7.4).
2.9 Second Family of Koornwinder Polynomials
The second family of Koornwinder polynomials is based on orthogonal exponentials over a regular hexagonal domain. It is convenient to use homogeneous coordinates in

ℝ³_H := {t = (t₁, t₂, t₃) ∈ ℝ³ : t₁ + t₂ + t₃ = 0},

in which the hexagonal domain becomes

Ω := {t ∈ ℝ³_H : −1 ≤ t₁, t₂, t₃ ≤ 1},

as seen in Figure 2.2. In this section we adopt the convention of using boldface letters, such as t and k, to denote homogeneous coordinates. For t ∈ ℝ³_H and k ∈ ℝ³_H ∩ ℤ³, define the exponential function

φ_k(t) := e^{2πi k·t/3} = e^{2πi(k₁t₁ + k₂t₂ + k₃t₃)/3}.
[Figure 2.2  A hexagonal domain in homogeneous coordinates; its vertices are ±(1, −1, 0), ±(1, 0, −1) and ±(0, 1, −1).]
Proposition 2.9.1  For k, j ∈ ℝ³_H ∩ ℤ³,

(1/3) ∫_Ω φ_k(t) \overline{φ_j(t)} dt = δ_{k,j}.
Proof  Using homogeneous coordinates, it is easy to see that

∫_Ω f(t) dt = ∫₀¹ dt₁ ∫_{−1}⁰ f(t) dt₂ + ∫₀¹ dt₂ ∫_{−1}⁰ f(t) dt₃ + ∫₀¹ dt₃ ∫_{−1}⁰ f(t) dt₁,

from which the orthogonality can be easily verified using the homogeneity of t, k and j.
Evidently the hexagon (see Figure 2.2) is invariant under the reflection group A₂ generated by the reflections in planes through the edges of the shaded equilateral triangle and perpendicular to its plane. In homogeneous coordinates the three reflections σ₁, σ₂, σ₃ are defined by

tσ₁ := −(t₁, t₃, t₂),  tσ₂ := −(t₂, t₁, t₃),  tσ₃ := −(t₃, t₂, t₁).

Because of the relations σ₃ = σ₁σ₂σ₁ = σ₂σ₁σ₂, the group G is given by

G = {1, σ₁, σ₂, σ₃, σ₁σ₂, σ₂σ₁}.
For a function f in homogeneous coordinates, the action of the group G on f is defined by (σf)(t) = f(tσ), σ ∈ G. We define functions

TC_k(t) := (1/6) [φ_{k₁,k₂,k₃}(t) + φ_{k₂,k₃,k₁}(t) + φ_{k₃,k₁,k₂}(t)
    + φ_{k₁,k₃,k₂}(−t) + φ_{k₂,k₁,k₃}(−t) + φ_{k₃,k₂,k₁}(−t)],

TS_k(t) := (1/6i) [φ_{k₁,k₂,k₃}(t) + φ_{k₂,k₃,k₁}(t) + φ_{k₃,k₁,k₂}(t)
    − φ_{k₁,k₃,k₂}(−t) − φ_{k₂,k₁,k₃}(−t) − φ_{k₃,k₂,k₁}(−t)].
Then TC_k is invariant under A₂ and TS_k is anti-invariant under A₂; these functions resemble the cosine and sine functions. Because of their invariance properties, we can restrict them to one of the six congruent equilateral triangles inside the hexagon. We will choose the shaded triangle in Figure 2.2:

△ := {(t₁, t₂, t₃) : t₁ + t₂ + t₃ = 0, 0 ≤ t₁, t₂, −t₃ ≤ 1}.   (2.9.1)

Since TC_k(tσ) = TC_{kσ}(t) = TC_k(t) for σ ∈ A₂ and t ∈ △, we can restrict the index of TC_k to the index set

Λ := {k ∈ H : k₁ ≥ 0, k₂ ≥ 0, k₃ ≤ 0},   (2.9.2)

where H := ℝ³_H ∩ ℤ³. A direct verification shows that TS_k(t) = 0 whenever k has a zero component; hence, TS_k(t) is defined only for k ∈ Λ°, where

Λ° := {k ∈ H : k₁ > 0, k₂ > 0, k₃ < 0},

the set of the interior points of Λ.
Proposition 2.9.2  For k, j ∈ Λ,

2 ∫_△ TC_k(t) \overline{TC_j(t)} dt = δ_{k,j} ×
    { 1,    k = 0,
      1/3,  k ∈ Λ∖Λ°, k ≠ 0,   (2.9.3)
      1/6,  k ∈ Λ° }

and, for k, j ∈ Λ°,

2 ∫_△ TS_k(t) \overline{TS_j(t)} dt = (1/6) δ_{k,j}.   (2.9.4)
Proof  If f ḡ is invariant under A₂ then

(1/3) ∫_Ω f(t) \overline{g(t)} dt = 2 ∫_△ f(t) \overline{g(t)} dt,

from which (2.9.3) and (2.9.4) follow from Proposition 2.9.1.
We can use these generalized sine and cosine functions to define analogues of the Chebyshev polynomials of the first and the second kind. To see the polynomial structure among the generalized trigonometric functions, we make a change of variables. Denote

z := TC_{0,1,−1}(t) = (1/3) [φ_{0,1,−1}(t) + φ_{1,−1,0}(t) + φ_{−1,0,1}(t)].   (2.9.5)

Let z = x + iy. The change of variables (t₁, t₂) → (x, y) has Jacobian

∂(x, y)/∂(t₁, t₂) = (16/27) π² sin πt₁ sin πt₂ sin π(t₁ + t₂)   (2.9.6)
    = (2π²/(3√3)) [−3(x² + y² + 1)² + 8(x³ − 3xy²) + 4]^{1/2},

and the region △ is mapped onto the region △∗ bounded by −3(x² + y² + 1)² + 8(x³ − 3xy²) + 4 = 0, which is called Steiner's hypocycloid. This three-cusped region is depicted in Figure 2.3.
Definition 2.9.3  Under the change of variables (2.9.5), define the generalized Chebyshev polynomials

T^m_k(z, z̄) := TC_{k,m−k,−m}(t),  0 ≤ k ≤ m,
U^m_k(z, z̄) := TS_{k+1,m−k+1,−m−2}(t) / TS_{1,1,−2}(t),  0 ≤ k ≤ m.
Proposition 2.9.4  Let P^m_k denote either T^m_k or U^m_k. Then P^m_k is a polynomial of total degree m in z and z̄. Moreover,

P^m_{m−k}(z, z̄) = \overline{P^m_k(z, z̄)},  0 ≤ k ≤ m,   (2.9.7)

and the P^m_k satisfy the recursion relation

P^{m+1}_k(z, z̄) = 3z P^m_k(z, z̄) − P^m_{k+1}(z, z̄) − P^{m−1}_{k−1}(z, z̄),   (2.9.8)
[Figure 2.3  The domain for the Koornwinder orthogonal polynomials of the second type.]
for 0 ≤ k ≤ m and m ≥ 1, where we use

T^m_{−1}(z, z̄) = T^{m+1}_1(z, z̄),  T^m_{m+1}(z, z̄) = T^{m+1}_m(z, z̄),
U^m_{−1}(z, z̄) = 0,  U^{m−1}_m(z, z̄) = 0.
In particular, we have

T⁰₀(z, z̄) = 1,  T¹₀(z, z̄) = z,  T¹₁(z, z̄) = z̄,
U⁰₀(z, z̄) = 1,  U¹₀(z, z̄) = 3z,  U¹₁(z, z̄) = 3z̄.
Proof  Both (2.9.7) and (2.9.8) follow from a straightforward computation. Together they determine all P^m_k recursively, which shows that both T^m_k and U^m_k are polynomials of degree m in z and z̄.
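The recursion (2.9.8), the boundary conventions and the symmetry (2.9.7) indeed determine every T^m_k, and carrying them out mechanically is a useful check. The sketch below (our own code) does so in exact arithmetic, storing polynomials in z, z̄ as dicts {(i, j): c} for c z^i z̄^j. Note that at k = m the convention T^m_{m+1} = T^{m+1}_m makes (2.9.8) implicit, giving 2 T^{m+1}_m = 3z T^m_m − T^{m−1}_{m−1}.

```python
from fractions import Fraction as F

def add(p, q, s=1):  # p + s*q, dropping zero coefficients
    r = dict(p)
    for m, c in q.items():
        r[m] = r.get(m, F(0)) + s * c
        if r[m] == 0:
            del r[m]
    return r

def z3(p):  # multiply by 3z
    return {(i + 1, j): 3 * c for (i, j), c in p.items()}

def conj(p):  # conjugation swaps z and z-bar (the coefficients are real)
    return {(j, i): c for (i, j), c in p.items()}

def chebyshev_T(M):
    T = {(0, 0): {(0, 0): F(1)}, (1, 0): {(1, 0): F(1)}, (1, 1): {(0, 1): F(1)}}
    for m in range(1, M):
        for k in range(m):
            # T^{m+1}_k = 3z T^m_k - T^m_{k+1} - T^{m-1}_{k-1}, with T^{m-1}_{-1} = T^m_1
            low = T[(m, 1)] if k == 0 else T[(m - 1, k - 1)]
            T[(m + 1, k)] = add(add(z3(T[(m, k)]), T[(m, k + 1)], -1), low, -1)
        # boundary case k = m: 2 T^{m+1}_m = 3z T^m_m - T^{m-1}_{m-1}
        half = add(z3(T[(m, m)]), T[(m - 1, m - 1)], -1)
        T[(m + 1, m)] = {mon: c / 2 for mon, c in half.items()}
        T[(m + 1, m + 1)] = conj(T[(m + 1, 0)])   # the symmetry (2.9.7)
    return T
```

For example, T²₀ = 3z² − 2z̄, T²₁ = (3zz̄ − 1)/2 and T³₀ = 9z³ − 9zz̄ + 1, which can be confirmed by expanding powers of z = TC_{0,1,−1} directly.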
The polynomials T^m_k and U^m_k are analogues of the Chebyshev polynomials of the first and the second kind. Furthermore, each family inherits an orthogonality relation from the generalized trigonometric functions, so that they are orthogonal polynomials of two variables.
Proposition 2.9.5  Let w_γ(x, y) = |∂(x, y)/∂(t₁, t₂)|^{2γ} as given in (2.9.6). Define

⟨f, g⟩_{w_γ} = c_γ ∫_{△∗} f(x, y) \overline{g(x, y)} w_γ(x, y) dx dy,
where c_γ is a normalization constant, c_γ := 1/∫_{△∗} w_γ(x, y) dx dy. Then

⟨T^m_k, T^m_j⟩_{w_{−1/2}} = δ_{k,j} ×
    { 1,    m = k = 0,
      1/3,  k = 0 or k = m, m > 0,
      1/6,  1 ≤ k ≤ m − 1, m > 0

and

⟨U^m_k, U^m_j⟩_{w_{1/2}} = (1/6) δ_{k,j},  0 ≤ j, k ≤ m.

In particular, {T^m_k : 0 ≤ k ≤ m} and {U^m_k : 0 ≤ k ≤ m} are mutually orthogonal bases of V²_m(w_{−1/2}) and V²_m(w_{1/2}), respectively.
Proof  The change of variables (2.9.5) implies immediately that

2 ∫_△ f(t) dt = c_{−1/2} ∫_{△∗} f(x, y) w_{−1/2}(x, y) dx dy.

As a result, the orthogonality of the T^m_k follows from that of the TC_j in Proposition 2.9.2. Further, using

TS_{1,1,−2}(t) = (4/3) sin πt₁ sin πt₂ sin πt₃,

it is easily seen that the orthogonality of the U^m_k follows from that of the TS_j.
By taking the real and imaginary parts of T^m_k or U^m_k and using the relation (2.9.7), we can also obtain a real orthogonal basis for V²_m.
2.10 Notes
The book by Appell and de Fériet [1926] contains several examples of orthogonal polynomials in two variables. Many examples up to 1950 can be found in the collection Erdélyi, Magnus, Oberhettinger and Tricomi [1953]. An influential survey is that due to Koornwinder [1975]. For the general properties of orthogonal polynomials in two variables, see the notes in Chapter 4.
Section 2.3  The disk is a special case (d = 2) of the unit ball in ℝ^d. Consequently, some further properties of orthogonal polynomials on the disk can be found in Chapter 4, including a compact formula for the reproducing kernel. The first orthonormal basis goes back as far as Hermite and was studied in Appell and de Fériet [1926]. Biorthogonal polynomials were considered in detail in Appell and de Fériet [1926]; see also Erdélyi et al. [1953]. The basis (2.3.8) was first discovered in Logan and Shepp [1975], and it plays an important role in computer tomography; see Marr [1974] and Xu [2006a]. For further studies on orthogonal polynomials on the disk, see Waldron [2008] and Wünsche [2005] as well as the references on orthogonal polynomials on the unit ball B^d for d ≥ 2 given in Chapters 5 and 8.
Section 2.4  The triangle is a special case of the simplex in ℝ^d. Some further properties of orthogonal polynomials on the triangle can be found in Chapter 5, including a compact formula for the reproducing kernel.
The orthogonal polynomials (2.4.2) were first introduced in Proriol [1957]; the case α = β = γ = 0 became known as Dubiner's polynomials in the finite elements community after Dubiner [1991], which was apparently unaware that they had appeared in the literature much earlier. The basis U^n_k was studied in detail in Appell and de Fériet [1926], and the biorthogonal basis appeared in Fackerell and Littler [1974]. The change of basis matrix connecting the three bases in Proposition 2.4.2 has Racah–Wilson ₄F₃ polynomials as entries; see Dunkl [1984b].
Section 2.5  The classification of all admissible equations that have orthogonal polynomials as eigenfunctions was studied first by Krall and Sheffer [1967], as summarized in Section 2.5. The classification in Suetin [1999], based on Engelis [1974], listed 15 cases; some of these are equivalent under the affine transforms in Krall and Sheffer [1967] but are treated separately because of other considerations. The orthogonality of cases (6) and (7) listed in this section is determined in Krall and Sheffer [1967]; cases (8) and (9) are determined in Berens, Schmid and Xu [1995a]. For further results, including solutions of the cases (6)–(9) and further discussion on the impact of affine transformations, see Littlejohn [1988], Lyskova [1991], Kim, Kwon and Lee [1998] and references therein. Classical orthogonal polynomials in two variables were studied in the context of hypergroups in Connett and Schwartz [1995].
Fernández, Pérez and Piñar [2011] considered examples of orthogonal polynomials of two variables that satisfy fourth-order differential equations. The product Jacobi polynomials, and other classical orthogonal polynomials of two variables, satisfy a second-order matrix differential equation; see Fernández, Pérez and Piñar [2005] and the references therein. They also satisfy a matrix form of Rodrigues type formula, as seen in de Álvarez, Fernández, Pérez and Piñar [2009].
Section 2.6  The method in the first subsection first appeared in Larcher [1959] and was used in Agahanov [1965] for certain special cases. It was presented systematically in Koornwinder [1975], where the two cases of ρ were stated. For further examples of explicit bases constructed in various domains, such as {(x, y) : x² + y² ≤ 1, −a ≤ y ≤ b}, 0 < a, b < 1, see Suetin [1999]. The product formula for the polynomials in (2.6.4) was given in Koornwinder and Schwartz [1997]; it generates a convolution structure for L²(W_{α,β}) and was used to study the convergence of orthogonal expansions in zu Castell, Filbir and Xu [2009]. The complex Hermite polynomials were introduced by Itô [1952]. They have been widely studied and used by many authors; see Ghanmi [2008, 2013], Intissar and Intissar [2006], Ismail [2013] and Ismail and Simeonov [2013] for some recent studies and their references. Disk polynomials were introduced in Zernike and Brinkman [1935] to evaluate the point image of an aberrated optical system taking into account the effects of diffraction (see the Wikipedia article on optical aberration); our normalization follows Dunkl [1982]. They were used in Folland [1975] to expand the Poisson–Szegő kernel for the ball in ℂ^d. A Banach algebra related to disk polynomials was studied in Kanjin [1985]. For further
properties of disk polynomials, including the fact that for λ = d − 2, d = 2, 3, . . ., they are spherical functions for U(d)/U(d − 1), see Ikeda [1967], Koornwinder [1975], Vilenkin and Klimyk [1991a, b, c] and Wünsche [2005]. The structure of complex orthogonal polynomials of two variables and its connection to and contrast with its real counterpart are studied in Xu [2013].
Section 2.7  The polynomials P^{α,β,γ}_{k,n} were studied in Koornwinder [1974a], which contains another differential operator, of fourth order, that has the orthogonal polynomials as eigenfunctions. See Koornwinder and Sprinkhuizen-Kuyper [1978] and Sprinkhuizen-Kuyper [1976] for further results on these polynomials, including an explicit formula for P^{α,β,γ}_{k,n} in terms of power series, and Rodrigues type formulas; in Xu [2012] an explicit formula for the reproducing kernel in the case of W_γ in (2.7.4) with γ = ±1/2 was given in terms of the reproducing kernels of the orthogonal polynomials of one variable. The case W_{−1/2} for general w was considered first in Schmid and Xu [1994] in connection with Gaussian cubature rules.
Section 2.8  The result in this section was developed recently in Xu [2012], where further results, including a compact formula for the reproducing kernel and convergence of orthogonal expansions, can be found.
Section 2.9  The generalized Chebyshev polynomials in this section were first studied in Koornwinder [1974b]. We follow the approach in Li, Sun and Xu [2008], which makes a connection with lattice tiling. In Koornwinder [1974b] the orthogonal polynomials P^γ_{k,n} for the weight function w_γ, γ > −5/6, were shown to be eigenfunctions of a differential operator of third order. The special case P^γ_{0,n} was also studied in Lidl [1975], which includes an interesting generating function. For further results, see Shishkin [1997], Suetin [1999], Li, Sun and Xu [2008] and Xu [2010].
Other orthogonal polynomials of two variables  The Bernstein–Szegő two-variable weight function is of the form

W(x, y) = (4/π²) √(1 − x²) √(1 − y²) / |h(z, y)|²,   x = (1/2)(z + 1/z).

Here, for a given integer m and −1 ≤ y ≤ 1,

h(z, y) = Σ_{i=0}^m h_i(y) z^i,  z ∈ ℂ,

is nonzero for any |z| ≤ 1, in which h₀(y) = 1 and, for 1 ≤ i ≤ m, the h_i(y) are polynomials in y with real coefficients of degree at most m/2 − |m/2 − i|. For m ≤ 2, complete orthogonal bases for V²_n(W) are constructed for all n in Delgado, Geronimo, Iliev and Xu [2009].

Orthogonal polynomials with respect to the area measure on the regular hexagon were studied in Dunkl [1987], where an algorithm for generating an orthogonal basis was given.
3  General Properties of Orthogonal Polynomials in Several Variables
In this chapter we present the general properties of orthogonal polynomials in several variables, that is, those properties that hold for orthogonal polynomials associated with weight functions that satisfy some mild conditions but are not any more specific than that.

This direction of study started with the classical work of Jackson [1936] on orthogonal polynomials in two variables. It was realized even then that the proper definition of orthogonality is in terms of polynomials of lower degree and that orthogonal bases are not unique. Most subsequent early work was focused on understanding the structure and theory in two variables. In Erdélyi et al. [1953], which documents the work up to 1950, one finds little reference to the general properties of orthogonal polynomials in more than two variables, other than (Vol. II, p. 265): "There does not seem to be an extensive general theory of orthogonal polynomials in several variables." It was remarked there that the difficulty lies in the fact that there is no unique orthogonal system, owing to the many possible orderings of multiple sequences. And it was also pointed out that, since a particular ordering usually destroys the symmetry, it is often preferable to construct biorthogonal systems. Krall and Sheffer [1967] studied and classified two-dimensional analogues of classical orthogonal polynomials as solutions of partial differential equations of the second order. Their classification is based on the following observation: while the orthogonal bases of V^d_n, the set of orthogonal polynomials of degree n defined in Section 3.1, are not unique, if the results can be stated in terms of V^d_0, V^d_1, . . . , V^d_n, . . . rather than in terms of a particular basis in each V^d_n, a degree of uniqueness is restored. This is the point of view that we shall adopt in much of this chapter.
The advantage of this viewpoint is that the results obtained will be basis independent, which allows us to derive a proper analogue of the three-term relation for orthogonal polynomials in several variables, to define block Jacobi matrices and study them as self-adjoint operators, and to investigate common zeros of
orthogonal polynomials in several variables, among other things. This approach is
also natural for studying the Fourier series of orthogonal polynomial expansions,
since it restores a certain uniqueness in the expansion in several variables.
3.1 Notation and Preliminaries
Throughout this book we will use the standard multi-index notation, as follows. We denote by ℕ₀ the set of nonnegative integers. A multi-index is usually denoted by α, α = (α₁, . . . , α_d) ∈ ℕ₀^d. Whenever α appears with a subscript, it denotes the component of a multi-index. In this spirit we define, for example, α! = α₁! · · · α_d! and |α| = α₁ + · · · + α_d, and if α, β ∈ ℕ₀^d then we define

δ_{α,β} = δ_{α₁,β₁} · · · δ_{α_d,β_d}.
For α ∈ ℕ₀^d and x = (x₁, . . . , x_d), a monomial in the variables x₁, . . . , x_d is a product

x^α = x₁^{α₁} · · · x_d^{α_d}.

The number |α| is called the total degree of x^α. A polynomial P in d variables is a linear combination of monomials,

P(x) = Σ_α c_α x^α,

where the coefficients c_α lie in a field k, usually the rational numbers ℚ, the real numbers ℝ or the complex numbers ℂ. The degree of a polynomial is defined as the highest total degree of its monomials. The collection of all polynomials in x with coefficients in a field k is denoted by k[x₁, . . . , x_d], which has the structure of a commutative ring. We will use the abbreviation Π^d to denote k[x₁, . . . , x_d]. We also denote the space of polynomials of degree at most n by Π^d_n. When d = 1, we will drop the superscript and write Π and Π_n instead.
A polynomial is called homogeneous if all the monomials appearing in it have the same total degree. Denote the space of homogeneous polynomials of degree n in d variables by 𝒫^d_n, that is,

𝒫^d_n = {P : P(x) = Σ_{|α|=n} c_α x^α}.

Every polynomial in Π^d can be written as a linear combination of homogeneous polynomials; for P ∈ Π^d_n,

P(x) = Σ_{k=0}^n Σ_{|α|=k} c_α x^α.

Denote by r^d_n the dimension of 𝒫^d_n. Evidently {x^α : |α| = n} is a basis of 𝒫^d_n; hence, r^d_n = #{α ∈ ℕ₀^d : |α| = n}. It is easy to see that
1/(1 − t)^d = ∏_{i=1}^d Σ_{α_i=0}^∞ t^{α_i} = Σ_{n=0}^∞ (Σ_{|α|=n} 1) t^n = Σ_{n=0}^∞ r^d_n t^n.

Therefore, recalling an elementary infinite series

1/(1 − t)^d = Σ_{n=0}^∞ ((d)_n / n!) t^n = Σ_{n=0}^∞ \binom{n+d−1}{n} t^n,

and comparing the coefficients of t^n in the two series, we conclude that

r^d_n = dim 𝒫^d_n = \binom{n+d−1}{n}.

Since the cardinalities of the sets {α ∈ ℕ₀^d : |α| ≤ n} and {α ∈ ℕ₀^{d+1} : |α| = n} are the same, as each α in the first set becomes an element of the second set upon adding α_{d+1} = n − |α| and as this is a one-to-one correspondence, it follows that

dim Π^d_n = \binom{n+d}{n}.
One essential difference between polynomials in one variable and polynomials in several variables is the lack of an obvious natural order in the latter. The natural order for monomials of one variable is the degree order, that is, we order monomials in Π according to their degree, as 1, x, x², . . . For polynomials in several variables there are many choices of well-defined total order. Two are described below.
Lexicographic order  We say that α ≻_L β if the first nonzero entry in the difference α − β = (α₁ − β₁, . . . , α_d − β_d) is positive.

Lexicographic order does not respect the total degree of the polynomials. For example, x^α with α = (3, 0, 0) of degree 3 is ordered so that it lies in front of x^β with β = (2, 2, 2) of degree 6, while x^γ with γ = (0, 0, 3) of degree 3 comes after x^β. The following order does respect the polynomial degree.
Graded lexicographic order  We say that α ≻_{glex} β if |α| > |β|, or if |α| = |β| and the first nonzero entry in the difference α − β is positive.

In the case d = 2 we can write α = (n − k, k), and the lexicographic order among {α : |α| = n} is the same as the order k = 0, 1, . . . , n. There are various other orders for polynomials of several variables; some will be discussed in later chapters.
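The two orders are easy to state as comparison functions; the sketch below (our own code) reproduces the examples above and the d = 2 observation.

```python
def lex_greater(a, b):
    # alpha >_L beta iff the first nonzero entry of alpha - beta is positive
    for x, y in zip(a, b):
        if x != y:
            return x > y
    return False

def glex_greater(a, b):
    # graded lexicographic order compares total degrees first
    if sum(a) != sum(b):
        return sum(a) > sum(b)
    return lex_greater(a, b)
```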
Let ⟨·, ·⟩ be a bilinear form defined on Π^d. Two polynomials P and Q are said to be orthogonal to each other with respect to the bilinear form if ⟨P, Q⟩ = 0. A polynomial P is called an orthogonal polynomial if it is orthogonal to all polynomials of lower degree, that is, if

⟨P, Q⟩ = 0  for all Q ∈ Π^d with deg Q < deg P.
If the bilinear form is given in terms of a weight function W,

⟨P, Q⟩ = ∫_Ω P(x) Q(x) W(x) dx,   (3.1.1)

where Ω is a domain in ℝ^d (this implies that Ω has a nonempty interior), we say that the orthogonal polynomials are orthogonal with respect to the weight function W.

For arbitrary bilinear forms there may or may not exist corresponding orthogonal polynomials. In the following we hypothesize their existence, and later we will derive necessary and sufficient conditions on the bilinear forms or linear functionals for the existence of such orthogonal polynomial systems.
Definition 3.1.1  Assume that orthogonal polynomials exist for a particular bilinear form. We denote by V^d_n the space of orthogonal polynomials of degree exactly n, that is,

V^d_n = {P ∈ Π^d_n : ⟨P, Q⟩ = 0  for all Q ∈ Π^d_{n−1}}.   (3.1.2)

When the dimension of V^d_n is the same as that of 𝒫^d_n, it is natural to use a multi-index α to index the elements of an orthogonal basis of V^d_n. Thus, we shall denote the elements of such a basis by P_α, |α| = n. We will sometimes use the notation P^n_α, in which the superscript indicates the degree of the polynomial. For orthogonal polynomials of two variables, instead of P_α with α = (k, n − k) the more convenient notation P_k or P^n_k for 0 ≤ k ≤ n is often used, as in Chapter 2.
3.2 Moment Functionals and Orthogonal Polynomials
in Several Variables
We will use the standard multi-index notation as in the above section. Throughout this section we write r_n instead of r^d_n whenever r^d_n would appear as a subscript or superscript.
3.2.1 Definition of orthogonal polynomials
A multi-sequence s : ℕ₀^d → ℝ is written in the form s = {s_α}_{α∈ℕ₀^d}. For each multi-sequence s = {s_α}_{α∈ℕ₀^d}, let L_s be the linear functional defined on Π^d by

L_s(x^α) = s_α,  α ∈ ℕ₀^d;   (3.2.1)

call L_s the moment functional defined by the sequence s.
For convenience, we introduce a vector notation. Let the elements of the set {α ∈ ℕ₀^d : |α| = n} be arranged as α^{(1)}, α^{(2)}, . . . , α^{(r_n)} according to lexicographical order. For each n ∈ ℕ₀ let x^n denote the column vector

x^n = (x^α)_{|α|=n} = (x^{α^{(j)}})_{j=1}^{r_n}.
That is, x^n is a vector whose elements are the monomials x^α for |α| = n, arranged in lexicographical order. Also define, for k, j ∈ ℕ₀, vectors of moments s_k and matrices of moments s_{k+j} by

s_k = L_s(x^k)  and  s_{k+j} = L_s(x^k (x^j)^T).   (3.2.2)

By definition s_{k+j} is a matrix of size r^d_k × r^d_j and its elements are L_s(x^{α+β}) for |α| = k and |β| = j. Finally, for each n ∈ ℕ₀, the s_{k+j} are used as building blocks to define a matrix

M_{n,d} = (s_{k+j})_{k,j=0}^n  with  Δ_{n,d} = det M_{n,d}.   (3.2.3)

We call M_{n,d} a moment matrix; its elements are L_s(x^{α+β}) for |α| ≤ n and |β| ≤ n.

In the following we will write L for L_s whenever s is not explicit. If L is a moment functional then L(PQ) is a bilinear form, so that we can define orthogonal polynomials with respect to L. In particular, P is an orthogonal polynomial of degree n if L(PQ) = 0 for every polynomial Q of degree n − 1 or less. More generally, we can put the monomials in the basis {x^α : α ∈ ℕ₀^d} of Π^d in a linear order and use Gram–Schmidt orthogonalization to generate a new basis whose elements are mutually orthogonal with respect to L. However, since there is no natural order for the set {α ∈ ℕ₀^d : |α| = n}, we may just as well consider orthogonality only with respect to polynomials of different degree, that is, we take polynomials of the same degree to be orthogonal to polynomials of lower degree but not necessarily orthogonal among themselves. To make this precise, let us introduce the following notation. If {P^n_α}_{|α|=n} is a sequence of polynomials in Π^d_n, denote by P_n the (column) polynomial vector

P_n = (P^n_α)_{|α|=n} = (P^n_{α^{(1)}}, . . . , P^n_{α^{(r_n)}})^T,   (3.2.4)

where α^{(1)}, . . . , α^{(r_n)} is the arrangement of elements in {α ∈ ℕ₀^d : |α| = n} according to lexicographical order. Sometimes we use the notation P_n to indicate the set of polynomials {P^n_α}.
Definition 3.2.1  Let L be a moment functional. A sequence of polynomials {P^n_α : |α| = n, n ∈ ℕ₀}, P^n_α ∈ Π^d_n, is said to be orthogonal with respect to L if

L(x^m P_n^T) = 0,  n > m,  and  L(x^n P_n^T) = S_n,   (3.2.5)

where S_n is an invertible matrix of size r^d_n × r^d_n (assuming for now that L permits the existence of such P^n_α).
We may also call P_n an orthogonal polynomial. We note that this definition agrees with our usual notion of orthogonal polynomials since, by definition,

L(x^m P_n^T) = 0  ⟺  L(x^β P^n_α) = 0,  |α| = n, |β| = m;
thus (3.2.5) implies that the P^n_α are orthogonal to polynomials of lower degree. Some immediate consequences of this definition are as follows.
Proposition 3.2.2  Let L be a moment functional and let P_n be the orthogonal polynomial defined above. Then P_0, . . . , P_n forms a basis for Π^d_n.
Proof  Suppose that a_0^T P_0 + · · · + a_n^T P_n = 0, where a_i ∈ ℝ^{r_i}. Multiplying the sum from the right by the row vector P_k^T and applying L, it follows from the orthogonality that a_k^T L(P_k P_k^T) = 0 for 0 ≤ k ≤ n, and this shows, by the orthogonality of P_k to lower-degree polynomials, that a_k^T S_k^T = 0. Thus, a_k = 0 since S_k is invertible. Therefore {P^n_α}_{|α|≤n} is linearly independent and forms a basis for Π^d_n.
Using the vector notation, the orthogonal polynomial P_n can be written as

P_n = G_{n,n} x^n + G_{n,n−1} x^{n−1} + · · · + G_{n,0} x^0,   (3.2.6)

where G_n = G_{n,n} is called the leading coefficient of P_n and is a matrix of size r^d_n × r^d_n.
Proposition 3.2.3  Let P_n be as in the previous proposition. Then the leading-coefficient matrix G_n is invertible.

Proof  The previous proposition implies that there exists a matrix G′_n such that

x^n = G′_n P_n + Q_{n−1},

where Q_{n−1} is a vector whose components belong to Π^d_{n−1}. Comparing the coefficients of x^n gives G′_n G_n = I, which implies that G_n is invertible.
Proposition 3.2.4  Let P_n be as in the previous proposition. Then the matrix H_n = L(P_n P_n^T) is invertible.

Proof  Since H_n = L(P_n P_n^T) = G_n L(x^n P_n^T) = G_n S_n, it is invertible by Proposition 3.2.3.
Lemma 3.2.5  Let L be a moment functional and let P_n be an orthogonal polynomial with respect to L. Then P_n is uniquely determined by the matrix S_n.

Proof  Suppose, otherwise, that there exist P_n and P̃_n both satisfying the orthogonality conditions of (3.2.5) with the same matrix S_n. Let G_n and G̃_n denote the leading-coefficient matrices of P_n and P̃_n, respectively. By Proposition 3.2.2, P_0, . . . , P_n forms a basis of Π^d_n; write the elements of P̃_n in terms of this basis. That is, there exist matrices C_k of size r^d_n × r^d_k such that

P̃_n = C_n P_n + C_{n−1} P_{n−1} + · · · + C_0 P_0.
Multiplying the above equation by P_k^T and applying the moment functional L, we conclude that C_k L(P_k P_k^T) = 0, 0 ≤ k ≤ n − 1, by orthogonality. It follows from Proposition 3.2.4 that C_k = 0 for 0 ≤ k ≤ n − 1; hence P̃_n = C_n P_n. Comparing the coefficients of x^n leads to G̃_n = C_n G_n. That is, C_n = G̃_n G_n^{−1} and P_n = G_n G̃_n^{−1} P̃_n, which implies by (3.2.5) that S_n = L(x^n P_n^T) = L(x^n P̃_n^T)(G_n G̃_n^{−1})^T = S_n (G_n G̃_n^{−1})^T. Therefore G_n G̃_n^{−1} = I, and so G_n = G̃_n and P_n = P̃_n.
Theorem 3.2.6  Let L be a moment functional. A system of orthogonal polynomials in several variables exists if and only if Δ_{n,d} ≠ 0, n ∈ ℕ₀.
Proof  Using the monomial expression of P_n in (3.2.6) and the notation s_k in (3.2.2),

L(x^k P_n^T) = L(x^k (G_n x^n + G_{n,n−1} x^{n−1} + · · · + G_{n,0} x^0)^T)
    = s_{k+n} G_n^T + s_{k+n−1} G_{n,n−1}^T + · · · + s_{k+0} G_{n,0}^T.

From the definition of M_{n,d} in (3.2.3), it follows that the orthogonality condition (3.2.5) is equivalent to the following linear system of equations:

M_{n,d} [G_{n,0}^T, . . . , G_{n,n}^T]^T = [0, . . . , 0, S_n^T]^T.   (3.2.7)

If an orthogonal polynomial system exists then for each S_n there exists exactly one P_n. Therefore, the system of equations (3.2.7) has a unique solution, which implies that M_{n,d} is invertible; thus Δ_{n,d} ≠ 0.

Conversely, if Δ_{n,d} ≠ 0 then, for each invertible matrix S_n, (3.2.7) has a unique solution (G_{n,0}, . . . , G_{n,n})^T. Let P_n = Σ_{k=0}^n G_{n,k} x^k; then (3.2.7) is equivalent to

L(x^k P_n^T) = 0,  k < n,  and  L(x^n P_n^T) = S_n.
Definition 3.2.7 A moment linear functional $\mathcal{L}_s$ is said to be positive definite if
$$\mathcal{L}_s(p^2) > 0 \quad \text{for all } p \in \Pi^d,\ p \ne 0.$$
We also say that $\{s_\alpha\}$ is positive definite when $\mathcal{L}_s$ is positive definite.
If $p = \sum_\alpha a_\alpha x^\alpha$ is a polynomial in $\Pi^d$ then $\mathcal{L}_s(p) = \sum_\alpha a_\alpha s_\alpha$. In terms of the sequence $s$, the positive definiteness of $\mathcal{L}_s$ amounts to the requirement that
$$\sum_{\alpha,\beta} a_\alpha a_\beta s_{\alpha+\beta} > 0,$$
64 General Properties of Orthogonal Polynomials in Several Variables
where $\alpha + \beta = (\alpha_1+\beta_1, \ldots, \alpha_d+\beta_d)$, for every nonzero sequence $a = \{a_\alpha\}_{\alpha \in \mathbb{N}_0^d}$ in which $a_\alpha = 0$ for all but finitely many $\alpha$. Hence it is evident that $\mathcal{L}_s$ is positive definite if and only if, for every tuple $(\alpha^{(1)}, \ldots, \alpha^{(r)})$ of distinct multi-indices $\alpha^{(j)} \in \mathbb{N}_0^d$, $1 \le j \le r$, the matrix $(s_{\alpha^{(i)}+\alpha^{(j)}})_{i,j=1,\ldots,r}$ has a positive determinant.
Lemma 3.2.8 If $\mathcal{L}_s$ is positive definite then the determinant $\Delta_{n,d} > 0$.
Proof Assume that $\mathcal{L}_s$ is positive definite. Let $a$ be an eigenvector of the matrix $M_{n,d}$ corresponding to the eigenvalue $\lambda$. Then, on the one hand, $a^{\mathsf T} M_{n,d} a = \lambda \|a\|^2$. On the other hand, $a^{\mathsf T} M_{n,d} a = \mathcal{L}_s(p^2) > 0$, where $p(x) = \sum_{j=0}^n a_j^{\mathsf T} \mathbb{x}^j$. It follows that $\lambda > 0$. Since all the eigenvalues are positive, $\Delta_{n,d} = \det M_{n,d} > 0$.
Corollary 3.2.9 If $\mathcal{L}$ is a positive definite moment functional then there exists a system of orthogonal polynomials with respect to $\mathcal{L}$.
Definition 3.2.10 Let $\mathcal{L}$ be a moment functional. A sequence of polynomials $\{P_\alpha^n : |\alpha| = n,\ n \in \mathbb{N}_0\}$, $P_\alpha^n \in \Pi_n^d$, is said to be orthonormal with respect to $\mathcal{L}$ if
$$\mathcal{L}(P_\alpha^n P_\beta^m) = \delta_{\alpha,\beta} \delta_{n,m};$$
in vector notation, the above equations become
$$\mathcal{L}(\mathbb{P}_m \mathbb{P}_n^{\mathsf T}) = 0, \quad n \ne m, \qquad \text{and} \qquad \mathcal{L}(\mathbb{P}_n \mathbb{P}_n^{\mathsf T}) = I_{r_n^d}, \qquad (3.2.8)$$
where $I_k$ denotes the identity matrix of size $k \times k$.
Theorem 3.2.11 If $\mathcal{L}$ is a positive definite moment functional then there exists an orthonormal basis with respect to $\mathcal{L}$.
Proof By the previous corollary, there is a basis of orthogonal polynomials $\mathbb{P}_n$ with respect to $\mathcal{L}$. Let $H_n = \mathcal{L}(\mathbb{P}_n \mathbb{P}_n^{\mathsf T})$. For any nonzero vector $a$, $P = a^{\mathsf T} \mathbb{P}_n$ is a nonzero polynomial by Proposition 3.2.2. Hence $a^{\mathsf T} H_n a = \mathcal{L}(P^2) > 0$, which shows that $H_n$ is a positive definite matrix. Let $H_n^{1/2}$ be the positive square root of $H_n$ (the unique positive definite matrix with the same eigenvectors as $H_n$ and $H_n = H_n^{1/2} H_n^{1/2}$). Define $\mathbb{Q}_n = (H_n^{1/2})^{-1} \mathbb{P}_n$. Then
$$\mathcal{L}(\mathbb{Q}_n \mathbb{Q}_n^{\mathsf T}) = (H_n^{1/2})^{-1} \mathcal{L}(\mathbb{P}_n \mathbb{P}_n^{\mathsf T}) (H_n^{1/2})^{-1} = (H_n^{1/2})^{-1} H_n (H_n^{1/2})^{-1} = I,$$
which shows that the elements of $\mathbb{Q}_n$ form an orthonormal basis with respect to $\mathcal{L}$.
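The orthonormalization step in this proof is easy to check numerically. The following sketch is an illustration only, not part of the text: it assumes the constant weight $W = 1$ on the square $[-1,1]^2$ (a positive definite moment functional), for which the monic vector $\mathbb{P}_1 = (x, y)^{\mathsf T}$ is orthogonal to the constants, and verifies that $\mathbb{Q}_1 = (H_1^{1/2})^{-1} \mathbb{P}_1$ has identity Gram matrix.

```python
import numpy as np

# Moment functional of the constant weight W = 1 on the square [-1,1]^2:
# L(x^i y^j) = (2/(i+1) if i even else 0) * (2/(j+1) if j even else 0).
def mom(i, j):
    mi = 2.0 / (i + 1) if i % 2 == 0 else 0.0
    mj = 2.0 / (j + 1) if j % 2 == 0 else 0.0
    return mi * mj

# Monic orthogonal vector P_1 = (x, y)^T for this centrally symmetric weight;
# H_1 = L(P_1 P_1^T) is the 2x2 matrix of second moments.
H1 = np.array([[mom(2, 0), mom(1, 1)],
               [mom(1, 1), mom(0, 2)]])

# Positive square root of H_1 via the eigendecomposition, as in the proof.
w, V = np.linalg.eigh(H1)
H1_half_inv = V @ np.diag(1.0 / np.sqrt(w)) @ V.T

# Q_1 = H_1^{-1/2} P_1; its Gram matrix L(Q_1 Q_1^T) should be the identity.
G = H1_half_inv @ H1 @ H1_half_inv.T
print(np.round(G, 12))
```

Here the positive square root is formed from the eigendecomposition exactly as in the proof; for larger $n$ one would assemble $H_n$ from the moments in the same way.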
3.2.2 Orthogonal polynomials and moment matrices
Let $\mathcal{L}$ be a positive definite moment functional. For each $n \in \mathbb{N}_0$, let $M_{n,d}$ be the moment matrix of $\mathcal{L}$ as defined in (3.2.3). Then $M_{n,d}$ has a positive determinant by Lemma 3.2.8. For $\alpha \in \mathbb{N}_0^d$ we denote by $s_{\alpha,k}$ the column vector $s_{\alpha,k} := \mathcal{L}(x^\alpha \mathbb{x}^k)$; in particular, $s_{\alpha,0} = s_\alpha$. We will define monic orthogonal polynomials in terms of the moment matrix. For $\alpha : |\alpha| = n$, define the polynomial $P_\alpha^n$ by
$$P_\alpha^n(x) := \frac{1}{\Delta_{n-1,d}} \det \begin{pmatrix} M_{n-1,d} & \begin{matrix} s_{\alpha,0} \\ s_{\alpha,1} \\ \vdots \\ s_{\alpha,n-1} \end{matrix} \\ \begin{matrix} 1 & \mathbb{x}^{\mathsf T} & \cdots & (\mathbb{x}^{n-1})^{\mathsf T} \end{matrix} & x^\alpha \end{pmatrix}. \qquad (3.2.9)$$
It is evident that $P_\alpha^n$ is of degree $n$. Furthermore, these polynomials are orthogonal.

Theorem 3.2.12 The polynomials $P_\alpha^n$ defined above are monic orthogonal polynomials.
Proof Expanding the determinant in (3.2.9) along its last row, we see that $P_\alpha^n$ is a monic polynomial, $P_\alpha^n(x) = x^\alpha + \cdots$. Moreover, multiplying $P_\alpha^n(x)$ by $x^\beta$, $|\beta| \le n-1$, and applying the linear functional $\mathcal{L}$, the last row of the resulting determinant coincides with one of the rows above it; consequently, $\mathcal{L}(x^\beta P_\alpha^n) = 0$.
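Formula (3.2.9) can be evaluated symbolically. The sketch below is illustrative and assumes the constant weight on $[-1,1]^2$; it builds the bordered moment determinant for $n = 2$, $\alpha = (2,0)$ and confirms that the result is monic and orthogonal to all monomials of lower degree.

```python
import sympy as sp

x, y = sp.symbols('x y')
L = lambda p: sp.integrate(sp.integrate(p, (x, -1, 1)), (y, -1, 1))

# Monomials of degree <= 1; the rows of M_{1,2} are indexed by them.
basis = [sp.Integer(1), x, y]
M = sp.Matrix(3, 3, lambda i, j: L(basis[i] * basis[j]))   # moment matrix M_{1,2}
Delta = M.det()                                            # Delta_{1,2}

# Bordered determinant (3.2.9) for alpha = (2,0), n = 2:
alpha = x**2
col = sp.Matrix([L(b * alpha) for b in basis])             # s_{alpha,0}, s_{alpha,1}
bottom = sp.Matrix([[1, x, y, alpha]])
Mtilde = M.row_join(col).col_join(bottom)
P = sp.expand(Mtilde.det() / Delta)
print(P)                           # monic: x**2 - 1/3
print([L(b * P) for b in basis])   # orthogonality: [0, 0, 0]
```

For the constant weight this reproduces the product Legendre polynomial $x^2 - \tfrac13$, as expected.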
For $d = 2$ this definition appeared in Jackson [1936]; see also Suetin [1999]. It is also possible to define a sequence of orthonormal bases in terms of moments. For this purpose let $\mathcal{L}$ be a positive definite moment functional and define a matrix $\widetilde{M}_\alpha(x)$ as follows.
The rows of the matrix $M_{n,d}$ are indexed by the multi-indices $\alpha$ with $|\alpha| \le n$, the row indexed by $\alpha$ being
$$(s_{\alpha,0}^{\mathsf T} \ s_{\alpha,1}^{\mathsf T} \ \cdots \ s_{\alpha,n}^{\mathsf T}) = \mathcal{L}\bigl(x^\alpha \ \ x^\alpha \mathbb{x}^{\mathsf T} \ \cdots \ x^\alpha (\mathbb{x}^n)^{\mathsf T}\bigr).$$
Let $\widetilde{M}_\alpha(x)$ be the matrix obtained from $M_{n,d}$ by replacing the above row, with index $\alpha$, by $\bigl(1 \ \ \mathbb{x}^{\mathsf T} \ \cdots \ (\mathbb{x}^n)^{\mathsf T}\bigr)$. Define
$$\widetilde{P}_\alpha^n(x) := \frac{1}{\Delta_{n,d}} \det \widetilde{M}_\alpha(x), \qquad |\alpha| = n, \ \alpha \in \mathbb{N}_0^d. \qquad (3.2.10)$$
Multiplying the polynomial $\widetilde{P}_\alpha^n$ by $x^\beta$ and applying the linear functional $\mathcal{L}$, we obtain immediately
$$\mathcal{L}(x^\beta \widetilde{P}_\alpha^n) = \begin{cases} 0 & \text{if } |\beta| \le n-1, \\ \delta_{\alpha,\beta} & \text{if } |\beta| = n. \end{cases} \qquad (3.2.11)$$
Thus $\{\widetilde{P}_\alpha^n : |\alpha| = n,\ n \in \mathbb{N}_0\}$ is a system of orthogonal polynomials with respect to $\mathcal{L}$.
Let $\operatorname{adj} M_{n,d}$ denote the adjoint matrix of $M_{n,d}$; that is, its elements are the cofactors of $M_{n,d}$ (see, for example, p. 20 of Horn and Johnson [1985]). A theorem of linear algebra states that
$$M_{n,d}^{-1} = \frac{1}{\Delta_{n,d}} \operatorname{adj} M_{n,d}.$$
Let $M_{n,d}^{-1}(|\alpha| = n)$ denote the submatrix of $M_{n,d}^{-1} = (m_{\alpha,\beta})_{|\alpha|,|\beta| \le n}$ which consists of those $m_{\alpha,\beta}$ whose indices satisfy the condition $|\alpha| = |\beta| = n$; in other words, $M_{n,d}^{-1}(|\alpha| = n)$ is the principal submatrix of $M_{n,d}^{-1}$ of size $r_n^d \times r_n^d$ at the lower right corner. The elements of $M_{n,d}^{-1}(|\alpha| = n)$ are the cofactors of the elements of $\mathcal{L}(\mathbb{x}^n (\mathbb{x}^n)^{\mathsf T})$ in $M_{n,d}$, divided by $\Delta_{n,d}$. Since $M_{n,d}$ is positive definite, so is $M_{n,d}^{-1}$; it follows that $M_{n,d}^{-1}(|\alpha| = n)$ is also positive definite.
Theorem 3.2.13 Let $\mathcal{L}$ be a positive definite moment functional. Then the polynomials
$$\mathbb{P}_n = \bigl(M_{n,d}^{-1}(|\alpha| = n)\bigr)^{-1/2} \widetilde{\mathbb{P}}_n = G_n \mathbb{x}^n + \cdots \qquad (3.2.12)$$
are orthonormal polynomials with respect to $\mathcal{L}$, and $G_n$ is positive definite.
Proof By the definition of $\widetilde{M}_\alpha$, expand the determinant along the row $(1 \ \ \mathbb{x}^{\mathsf T} \ \cdots \ (\mathbb{x}^n)^{\mathsf T})$; then
$$\det \widetilde{M}_\alpha(x) = \sum_{|\beta| = n} C_{\alpha,\beta} x^\beta + Q_\alpha^{n-1}, \qquad Q_\alpha^{n-1} \in \Pi_{n-1}^d,$$
where $C_{\alpha,\beta}$ is the cofactor of the element $\mathcal{L}(x^{\alpha+\beta})$ in $M_{n,d}$; that is, $\operatorname{adj} M_{n,d} = (C_{\alpha,\beta})_{|\alpha|,|\beta| \le n}$. Therefore, since $\operatorname{adj} M_{n,d} = \Delta_{n,d} M_{n,d}^{-1}$, we can write
$$\widetilde{\mathbb{P}}_n = \frac{1}{\Delta_{n,d}} \Delta_{n,d} M_{n,d}^{-1}(|\alpha| = n) \mathbb{x}^n + \mathbb{Q}^{n-1} = M_{n,d}^{-1}(|\alpha| = n) \mathbb{x}^n + \mathbb{Q}^{n-1},$$
where $\mathbb{Q}^{n-1}$ is a vector of polynomials of degree at most $n-1$. From (3.2.11) we have that $\mathcal{L}(\mathbb{x}^n \widetilde{\mathbb{P}}_n^{\mathsf T}) = I_{r_n^d}$ and $\mathcal{L}(\mathbb{Q}^{n-1} \widetilde{\mathbb{P}}_n^{\mathsf T}) = 0$; hence
$$\mathcal{L}(\widetilde{\mathbb{P}}_n \widetilde{\mathbb{P}}_n^{\mathsf T}) = M_{n,d}^{-1}(|\alpha| = n).$$
It then follows immediately that $\mathcal{L}(\mathbb{P}_n \mathbb{P}_n^{\mathsf T}) = I$; thus $\mathbb{P}_n$ is orthonormal. Moreover,
$$G_n = \bigl[M_{n,d}^{-1}(|\alpha| = n)\bigr]^{1/2}. \qquad (3.2.13)$$
Clearly, $G_n$ is positive definite.
As is evident from the examples in the previous chapter, systems of orthonormal polynomials with respect to $\mathcal{L}$ are not unique. In fact, if $\mathbb{P}_n$ is orthonormal then, for any orthogonal matrix $O_n$ of size $r_n^d \times r_n^d$, the polynomial components of $\mathbb{P}'_n = O_n \mathbb{P}_n$ are also orthonormal. Moreover, it is easily seen that if the components of $\mathbb{P}_n$ and $\mathbb{P}'_n$ are each a collection of orthonormal polynomials then multiplication by an orthogonal matrix connects $\mathbb{P}_n$ and $\mathbb{P}'_n$.
Theorem 3.2.14 Let $\mathcal{L}$ be positive definite and let $\{Q_\alpha^n\}$ be a sequence of orthonormal polynomials with respect to $\mathcal{L}$. Then there is an orthogonal matrix $O_n$ such that $\mathbb{Q}_n = O_n \mathbb{P}_n$, where the $\mathbb{P}_n$ are the orthonormal polynomials defined in (3.2.12). Moreover, the leading-coefficient matrix $G'_n$ of $\mathbb{Q}_n$ satisfies $G'_n = O_n G_n$, where $G_n$ is the positive definite matrix in (3.2.13). Furthermore,
$$(\det G_n)^2 = \frac{\Delta_{n-1,d}}{\Delta_{n,d}}. \qquad (3.2.14)$$
Proof By Theorem 3.2.13, $\mathbb{Q}_n = O_n \mathbb{P}_n + O_{n,n-1} \mathbb{P}_{n-1} + \cdots + O_{n,0} \mathbb{P}_0$. Multiplying this equation by $\mathbb{P}_k^{\mathsf T}$, $0 \le k \le n-1$, and applying $\mathcal{L}$, it follows that $O_{n,k} = 0$ for $k = 0, 1, \ldots, n-1$. Hence $\mathbb{Q}_n = O_n \mathbb{P}_n$. Since both $\mathbb{P}_n$ and $\mathbb{Q}_n$ are orthonormal,
$$I = \mathcal{L}(\mathbb{Q}_n \mathbb{Q}_n^{\mathsf T}) = O_n \mathcal{L}(\mathbb{P}_n \mathbb{P}_n^{\mathsf T}) O_n^{\mathsf T} = O_n O_n^{\mathsf T},$$
which shows that $O_n$ is orthogonal. Comparing the leading coefficients in $\mathbb{Q}_n = O_n \mathbb{P}_n$ leads to $G'_n = O_n G_n$. To verify equation (3.2.14), use (3.2.13) and the formula for the determinants of minors (see, for example, p. 21 of Horn and Johnson [1985]):
$$\det M_{n,d}^{-1}(|\alpha| = n) = \frac{\det M_{n,d}(|\alpha| \le n-1)}{\Delta_{n,d}} = \frac{\Delta_{n-1,d}}{\Delta_{n,d}},$$
from which the desired result follows immediately.
Equation (3.2.14) can be viewed as an analogue of the relation $k_n^2 = d_n/d_{n+1}$ for orthogonal polynomials of one variable, given in Section 1.4.
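Relation (3.2.14) can be verified numerically from the moment matrices alone. The sketch below is an illustration, assuming the constant weight on $[-1,1]^2$ ($d = 2$): it extracts $M_{n,d}^{-1}(|\alpha| = n)$, whose determinant equals $(\det G_n)^2$ by (3.2.13), and compares it with $\Delta_{n-1,d}/\Delta_{n,d}$.

```python
import numpy as np

# Moments of W = 1 on [-1,1]^2: L(x^i y^j)
def mom(i, j):
    mi = 2.0 / (i + 1) if i % 2 == 0 else 0.0
    mj = 2.0 / (j + 1) if j % 2 == 0 else 0.0
    return mi * mj

# Monomials of degree <= n in graded order, so that the degree-n block sits last
def monos(n):
    return [(k - i, i) for k in range(n + 1) for i in range(k + 1)]

def M(n):  # moment matrix M_{n,2}
    b = monos(n)
    return np.array([[mom(a[0] + c[0], a[1] + c[1]) for c in b] for a in b])

n = 2
Mn, Mn1 = M(n), M(n - 1)
rn = n + 1                                   # r_n^2 = n + 1
K = np.linalg.inv(Mn)[-rn:, -rn:]            # M_{n,2}^{-1}(|alpha| = n)
# G_n = K^{1/2}, so (det G_n)^2 = det K; check against Delta_{n-1,2}/Delta_{n,2}
lhs = np.linalg.det(K)
rhs = np.linalg.det(Mn1) / np.linalg.det(Mn)
print(np.isclose(lhs, rhs))                  # True
```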
3.2.3 The moment problem
Let $\mathcal{M} = \mathcal{M}(\mathbb{R}^d)$ denote the set of nonnegative Borel measures $\mu$ on $\mathbb{R}^d$ having moments of all orders; that is, if $\mu \in \mathcal{M}$ then
$$\int_{\mathbb{R}^d} |x^\alpha| \,\mathrm{d}\mu(x) < \infty \qquad \text{for all } \alpha \in \mathbb{N}_0^d.$$
For $\mu \in \mathcal{M}$, as in the previous section its moments are defined by
$$s_\alpha = \int_{\mathbb{R}^d} x^\alpha \,\mathrm{d}\mu, \qquad \alpha \in \mathbb{N}_0^d. \qquad (3.2.15)$$
Conversely, if $\{s_\alpha\}$ is a multi-sequence and there is a measure $\mu \in \mathcal{M}$ such that (3.2.15) holds then $\{s_\alpha\}$ is called a moment sequence. Two measures in $\mathcal{M}$ are called equivalent if they have the same moments. The measure $\mu$ is called determinate if $\mu$ is unique in its equivalence class of measures. If a sequence is the moment sequence of a determinate measure, we say that the sequence is determinate.
Evidently, if $\mu \in \mathcal{M}$ then the moment functional $\mathcal{L}$ defined by its moments is exactly the linear functional
$$\mathcal{L}(P) = \int_{\mathbb{R}^d} P(x) \,\mathrm{d}\mu(x), \qquad P \in \Pi^d.$$
Examples include any linear functional expressible as an integral with respect to a nonnegative weight function $W$, that is, for which $\mathrm{d}\mu(x) = W(x)\,\mathrm{d}x$ and $\int |x^\alpha| W(x)\,\mathrm{d}x < \infty$ for all $\alpha$. Whenever $W$ is supported on a domain with nonempty interior, it is positive definite in the sense of Definition 3.2.7; that is, $\mathcal{L}(P^2) > 0$ whenever $P \ne 0$.
The moment problem asks when a given sequence is a moment sequence and, if so, whether the measure is determinate. Like many other problems involving polynomials in higher dimensions, the moment problem in several variables is much more difficult than its one-variable counterpart. The problem is still not completely solved. A theorem due to Haviland gives the following characterization (see Haviland [1935]).

Theorem 3.2.15 A moment sequence $s$ can be given in the form (3.2.15) if and only if the moment functional $\mathcal{L}_s$ is nonnegative on the set of nonnegative polynomials
$$\Pi_+^d = \{p \in \Pi^d : p(x) \ge 0,\ x \in \mathbb{R}^d\}.$$
The linear functional $\mathcal{L}$ is called positive if $\mathcal{L}(p) \ge 0$ for all $p \in \Pi_+^d$. In one variable, $\mathcal{L}$ being positive is equivalent to its being nonnegative definite, that is, to $\mathcal{L}(p^2) \ge 0$, since every nonnegative polynomial on $\mathbb{R}$ can be written as a sum of the squares of two polynomials. In several variables, however, this is no longer the case: there exist nonnegative polynomials that cannot be written as sums of squares of polynomials. Thus Hamburger's famous theorem, that a sequence is a moment sequence if and only if it is positive definite, does not hold in several variables (see Fuglede [1983], Berg, Christensen and Ressel [1984], Berg [1987] and Schmüdgen [1990]). There are several sufficient conditions for a sequence to be determinate. For example, the following result of Nussbaum [1966] extends a classical result of Carleman in one variable (see, for example, Shohat and Tamarkin [1943]).
Theorem 3.2.16 If $\{s_\alpha\}$ is a positive definite sequence and if Carleman's condition for a sequence $\{a_n\}$,
$$\sum_{n=0}^\infty (a_{2n})^{-1/2n} = +\infty, \qquad (3.2.16)$$
is satisfied by each of the marginal sequences $\{s_{n,0,\ldots,0}\}$, $\{s_{0,n,\ldots,0}\}$, $\ldots$, $\{s_{0,0,\ldots,n}\}$, then $\{s_\alpha\}$ is a determinate moment sequence.
This theorem allows us to extend some results on the moment problem from one variable to several variables. For example, the proof of Theorem 5.2 in Freud [1966] in one variable can be extended easily to give the following theorem.
Theorem 3.2.17 If $\mu \in \mathcal{M}$ satisfies
$$\int_{\mathbb{R}^d} \mathrm{e}^{c\|x\|} \,\mathrm{d}\mu(x) < \infty \qquad (3.2.17)$$
for some constant $c > 0$ then $\mu$ is a determinate measure.
Evidently, we can replace $\|x\|$ by $|x|_1$. In particular, if $\mu \in \mathcal{M}$ has compact support then it is determinate. If $K$ is a domain in $\mathbb{R}^d$ then one can consider the $K$-moment problem, which asks when a given sequence is the moment sequence of a measure supported on $K$. There are various sufficient conditions and results for special domains; see the references cited above.
A related question is the density of the polynomials in $L^2(\mathrm{d}\mu)$. In one variable it is known that if $\mu$ is determinate then the polynomials are dense in $L^2(\mathrm{d}\mu)$. This, however, is not true in several variables; see Berg and Thill [1991], where it is proved that there exist rotation-invariant measures on $\mathbb{R}^d$ which are determinate but for which the polynomials are not dense in $L^2(\mathrm{d}\mu)$. For the study of the convergence of orthogonal polynomial expansions in several variables in later chapters, the following theorem is useful.

Theorem 3.2.18 If $\mu \in \mathcal{M}$ satisfies the condition (3.2.17) for some constant $c > 0$ then the space of polynomials $\Pi^d$ is dense in the space $L^2(\mathrm{d}\mu)$.
Proof The assumption implies that polynomials are elements of $L^2(\mathrm{d}\mu)$. Indeed, for each $\alpha \in \mathbb{N}_0^d$ and for every $c > 0$ there exists a constant $A$ such that $|x^\alpha|^2 < A \mathrm{e}^{c\|x\|}$ for all sufficiently large $\|x\|$.

If the polynomials were not dense in $L^2(\mathrm{d}\mu)$, there would be a function $f$, not almost everywhere zero, in $L^2(\mathrm{d}\mu)$ that is orthogonal to all polynomials:
$$\int_{\mathbb{R}^d} f(x) x^\alpha \,\mathrm{d}\mu(x) = 0, \qquad \alpha \in \mathbb{N}_0^d.$$
Let $\mathrm{d}\nu = f(x)\,\mathrm{d}\mu$. Then, since $\bigl(\int |f| \,\mathrm{d}\mu\bigr)^2 \le \int |f|^2 \,\mathrm{d}\mu \int \mathrm{d}\mu$, we can take the Fourier–Stieltjes transform of $\mathrm{d}\nu$, which is, by definition,
$$\widehat{\nu}(z) = \int_{\mathbb{R}^d} \mathrm{e}^{\mathrm{i}\langle z, x\rangle} \,\mathrm{d}\nu(x), \qquad z \in \mathbb{C}^d.$$
Let $y = \operatorname{Im} z$. By the Cauchy–Schwarz inequality, for $\|y\| \le c/2$ we have
$$|\widehat{\nu}(z)|^2 = \Bigl| \int_{\mathbb{R}^d} \mathrm{e}^{\mathrm{i}\langle z, x\rangle} \,\mathrm{d}\nu \Bigr|^2 \le \int_{\mathbb{R}^d} \mathrm{e}^{2\|y\|\|x\|} \,\mathrm{d}\mu \int_{\mathbb{R}^d} |f(x)|^2 \,\mathrm{d}\mu < \infty,$$
since $|\langle y, x\rangle| \le \|y\|\|x\|$. Similarly,
$$\Bigl| \frac{\partial \widehat{\nu}(z)}{\partial z_i} \Bigr|^2 \le \int_{\mathbb{R}^d} |x_i|^2 \mathrm{e}^{2\|y\|\|x\|} \,\mathrm{d}\mu \int_{\mathbb{R}^d} |f(x)|^2 \,\mathrm{d}\mu < \infty$$
for $\|y\| \le c/4$. Consequently $\widehat{\nu}(z)$ is analytic in the set $\{z \in \mathbb{C}^d : \|\operatorname{Im} z\| \le c/4\}$. For small $\|z\|$, expanding $\widehat{\nu}(z)$ in a power series and integrating term by term gives
$$\widehat{\nu}(z) = \sum_{n=0}^\infty \frac{\mathrm{i}^n}{n!} \int_{\mathbb{R}^d} f(x) \langle x, z\rangle^n \,\mathrm{d}\mu = 0.$$
Therefore, by the uniqueness principle for analytic functions, $\widehat{\nu}(z) = 0$ for $\|y\| \le c/4$. In particular, $\widehat{\nu}(z) = 0$ for $z \in \mathbb{R}^d$. By the uniqueness of the Fourier–Stieltjes transform, we conclude that $f(x) = 0$ almost everywhere, which is a contradiction.

This proof is a straightforward extension of the one-variable proof on p. 31 of Higgins [1977].
3.3 The Three-Term Relation
3.3.1 Definition and basic properties
The three-term relation plays an essential role in understanding the structure of orthogonal polynomials in one variable, as indicated in Subsection 1.3.2. For orthogonal polynomials in several variables, the three-term relation takes a vector–matrix form. Let $\mathcal{L}$ be a moment functional. Throughout this subsection we use the notation $\widetilde{\mathbb{P}}_n$ for orthogonal polynomials with respect to $\mathcal{L}$ and the notation $H_n$ for the matrix
$$H_n = \mathcal{L}(\widetilde{\mathbb{P}}_n \widetilde{\mathbb{P}}_n^{\mathsf T}).$$
When $H_n = I$, the identity matrix, $\widetilde{\mathbb{P}}_n$ becomes orthonormal and the notation $\mathbb{P}_n$ is used to denote orthonormal polynomials. Note that $H_n$ is invertible by Proposition 3.2.4.
Theorem 3.3.1 For $n \ge 0$ there exist unique matrices $A_{n,i} : r_n^d \times r_{n+1}^d$, $B_{n,i} : r_n^d \times r_n^d$ and $C_{n,i} : r_n^d \times r_{n-1}^d$ such that
$$x_i \widetilde{\mathbb{P}}_n = A_{n,i} \widetilde{\mathbb{P}}_{n+1} + B_{n,i} \widetilde{\mathbb{P}}_n + C_{n,i} \widetilde{\mathbb{P}}_{n-1}, \qquad 1 \le i \le d, \qquad (3.3.1)$$
where we define $\widetilde{\mathbb{P}}_{-1} = 0$ and $C_{-1,i} = 0$; moreover,
$$A_{n,i} H_{n+1} = \mathcal{L}(x_i \widetilde{\mathbb{P}}_n \widetilde{\mathbb{P}}_{n+1}^{\mathsf T}), \qquad B_{n,i} H_n = \mathcal{L}(x_i \widetilde{\mathbb{P}}_n \widetilde{\mathbb{P}}_n^{\mathsf T}), \qquad A_{n,i} H_{n+1} = H_n C_{n+1,i}^{\mathsf T}. \qquad (3.3.2)$$
Proof Since the components of $x_i \widetilde{\mathbb{P}}_n$ are polynomials of degree $n+1$, they can be written as linear combinations of orthogonal polynomials of degree $n+1$ and less by Proposition 3.2.2. Hence, in vector notation, there exist matrices $M_{k,i}$ such that
$$x_i \widetilde{\mathbb{P}}_n = M_{n,i} \widetilde{\mathbb{P}}_{n+1} + M_{n-1,i} \widetilde{\mathbb{P}}_n + M_{n-2,i} \widetilde{\mathbb{P}}_{n-1} + \cdots.$$
If we multiply the above equation by $\widetilde{\mathbb{P}}_k^{\mathsf T}$ from the right and apply the linear functional $\mathcal{L}$ then $M_{k-1,i} H_k = \mathcal{L}(x_i \widetilde{\mathbb{P}}_n \widetilde{\mathbb{P}}_k^{\mathsf T})$. By the orthogonality of $\widetilde{\mathbb{P}}_k$ and the fact that $H_k$ is invertible, $M_{k-1,i} = 0$ for $k \le n-2$. Hence the three-term relation holds, and (3.3.2) follows.
For orthonormal polynomials $\mathbb{P}_n$ we have $H_n = I$, and the three-term relation takes a simpler form.

Theorem 3.3.2 For $n \ge 0$ there exist matrices $A_{n,i} : r_n^d \times r_{n+1}^d$ and $B_{n,i} : r_n^d \times r_n^d$ such that
$$x_i \mathbb{P}_n = A_{n,i} \mathbb{P}_{n+1} + B_{n,i} \mathbb{P}_n + A_{n-1,i}^{\mathsf T} \mathbb{P}_{n-1}, \qquad 1 \le i \le d, \qquad (3.3.3)$$
where we define $\mathbb{P}_{-1} = 0$ and $A_{-1,i} = 0$. Moreover, each $B_{n,i}$ is symmetric.
The coefficients of the three-term relation satisfy several properties. First, recall that the leading-coefficient matrix of $\widetilde{\mathbb{P}}_n$ (or $\mathbb{P}_n$) is denoted by $G_n$. Comparing the highest-degree coefficient matrices on each side of (3.3.3), it follows that
$$A_{n,i} G_{n+1} = G_n L_{n,i}, \qquad 1 \le i \le d, \qquad (3.3.4)$$
where the $L_{n,i}$ are matrices of size $r_n^d \times r_{n+1}^d$ defined by
$$L_{n,i} \mathbb{x}^{n+1} = x_i \mathbb{x}^n, \qquad 1 \le i \le d. \qquad (3.3.5)$$
For example, for $d = 2$,
$$L_{n,1} = \begin{pmatrix} 1 & & & 0 \\ & \ddots & & \vdots \\ & & 1 & 0 \end{pmatrix} \qquad \text{and} \qquad L_{n,2} = \begin{pmatrix} 0 & 1 & & \\ \vdots & & \ddots & \\ 0 & & & 1 \end{pmatrix}.$$
We now adopt the following notation: if $M_1, \ldots, M_d$ are matrices of the same size $p \times q$ then we define their joint matrix $M$ by
$$M = (M_1^{\mathsf T} \ \cdots \ M_d^{\mathsf T})^{\mathsf T}, \qquad M : dp \times q \qquad (3.3.6)$$
(it is better to think of $M$ as a column matrix with entries $M_1, \ldots, M_d$). In particular, both $L_n$ and $A_n$ are joint matrices, of size $dr_n^d \times r_{n+1}^d$. The following proposition collects the properties of the $L_{n,i}$.
Proposition 3.3.3 For each $i$, $1 \le i \le d$, the matrix $L_{n,i}$ satisfies the equation $L_{n,i} L_{n,i}^{\mathsf T} = I$. Moreover,
$$\operatorname{rank} L_{n,i} = r_n^d, \quad 1 \le i \le d, \qquad \text{and} \qquad \operatorname{rank} L_n = r_{n+1}^d.$$

Proof By definition, each row of $L_{n,i}$ contains exactly one element equal to 1, the rest of its elements are 0, and distinct rows have their 1s in distinct columns. Hence $L_{n,i} L_{n,i}^{\mathsf T} = I$, which also implies that $\operatorname{rank} L_{n,i} = r_n^d$.
Moreover, let
$$\mathbb{N}^n = \{\alpha \in \mathbb{N}_0^d : |\alpha| = n\} \qquad \text{and} \qquad \mathbb{N}^{n,i} = \{\alpha \in \mathbb{N}_0^d : |\alpha| = n,\ \alpha_i \ne 0\},$$
and let $a = (a_\alpha)_{\alpha \in \mathbb{N}^{n+1}}$ be a vector of size $r_{n+1}^d \times 1$; then $L_{n,i}$ can be considered as a mapping which projects $a$ onto its restriction to $\mathbb{N}^{n+1,i}$, that is, $L_{n,i} a = a|_{\mathbb{N}^{n+1,i}}$. To prove that $L_n$ has full rank, we show that $L_n a = 0$ implies $a = 0$. By definition, $L_n a = 0$ implies that $L_{n,i} a = 0$, $1 \le i \le d$. Evidently $\bigcup_i \mathbb{N}^{n+1,i} = \mathbb{N}^{n+1}$. Hence $L_{n,i} a = 0$ for $1 \le i \le d$ implies that $a = 0$.
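For $d = 2$ the matrices $L_{n,i}$ of (3.3.5) are simply $(I \mid 0)$ and $(0 \mid I)$, and the assertions of Proposition 3.3.3 can be checked directly; the following sketch (illustrative only, with $n = 3$) does so.

```python
import numpy as np

# L_{n,i} for d = 2 via (3.3.5): L_{n,i} x^{n+1} = x_i x^n, with
# x^n = (x^n, x^{n-1} y, ..., y^n)^T.  Then L_{n,1} = (I | 0), L_{n,2} = (0 | I).
n = 3
I = np.eye(n + 1)
Ln1 = np.hstack([I, np.zeros((n + 1, 1))])
Ln2 = np.hstack([np.zeros((n + 1, 1)), I])
Ln = np.vstack([Ln1, Ln2])                 # joint matrix, size 2(n+1) x (n+2)

# Each row of L_{n,i} has exactly one 1, in distinct columns: L L^T = I
assert np.allclose(Ln1 @ Ln1.T, I) and np.allclose(Ln2 @ Ln2.T, I)
print(np.linalg.matrix_rank(Ln1), np.linalg.matrix_rank(Ln))   # n+1 and n+2
```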
Theorem 3.3.4 For $n \ge 0$ and $1 \le i \le d$,
$$\operatorname{rank} A_{n,i} = \operatorname{rank} C_{n+1,i} = r_n^d. \qquad (3.3.7)$$
Moreover, for the joint matrix $A_n$ of the $A_{n,i}$ and the joint matrix $C_n^{\mathsf T}$ of the $C_{n,i}^{\mathsf T}$,
$$\operatorname{rank} A_n = r_{n+1}^d \qquad \text{and} \qquad \operatorname{rank} C_{n+1}^{\mathsf T} = r_{n+1}^d. \qquad (3.3.8)$$
Proof From the relation (3.3.4) and the fact that $G_n$ is invertible, $\operatorname{rank} A_{n,i} = r_n^d$ follows from Proposition 3.3.3. Since $H_n$ is invertible, it follows from the third equation of (3.3.2) that $\operatorname{rank} C_{n+1,i} = r_n^d$. In order to prove (3.3.8), note that (3.3.4) implies that $A_n G_{n+1} = \operatorname{diag}\{G_n, \ldots, G_n\} L_n$, and $G_n$ being invertible implies that the matrix $\operatorname{diag}\{G_n, \ldots, G_n\}$ is invertible. It follows from Proposition 3.3.3 that $\operatorname{rank} A_n = r_{n+1}^d$. Furthermore, the third equation of (3.3.2) implies that $A_n H_{n+1} = \operatorname{diag}\{H_n, \ldots, H_n\} C_{n+1}^{\mathsf T}$, and $H_n$ is invertible; consequently, $\operatorname{rank} C_{n+1} = \operatorname{rank} A_n$.
Since the matrix $A_n$ has full rank, it has a generalized inverse $D_n^{\mathsf T}$, which we write as follows:
$$D_n^{\mathsf T} = (D_{n,1}^{\mathsf T} \ \cdots \ D_{n,d}^{\mathsf T}), \qquad D_n^{\mathsf T} : r_{n+1}^d \times dr_n^d,$$
where $D_{n,i}^{\mathsf T} : r_{n+1}^d \times r_n^d$; that is,
$$D_n^{\mathsf T} A_n = \sum_{i=1}^d D_{n,i}^{\mathsf T} A_{n,i} = I. \qquad (3.3.9)$$
We note that the matrix $D_n$ satisfying (3.3.9) is not unique. In fact, let $A$ be a matrix of size $s \times t$, $s > t$, with $\operatorname{rank} A = t$, and let the singular-value decomposition of $A$ be given by
$$A = W^{\mathsf T} \begin{pmatrix} \Sigma \\ 0 \end{pmatrix} U,$$
where $\Sigma : t \times t$ is an invertible diagonal matrix and $W : s \times s$ and $U : t \times t$ are unitary matrices. Then the matrix $D^{\mathsf T}$ defined by
$$D^{\mathsf T} = U^{\mathsf T} (\Sigma^{-1} \ \ \Sigma_1) W,$$
where $\Sigma_1 : t \times (s-t)$ can be any matrix, satisfies $D^{\mathsf T} A = I$. The matrix $D^{\mathsf T}$ is known as a generalized inverse of $A$. If $\Sigma_1 = 0$ then $D^{\mathsf T}$ is the unique Moore–Penrose generalized inverse, which is often denoted by $A^+$.
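The singular-value construction above translates directly into code. The sketch below is illustrative (it uses a random full-column-rank $A$): it forms $D^{\mathsf T} = U^{\mathsf T}(\Sigma^{-1}\ \Sigma_1)W$ for an arbitrary $\Sigma_1$, checks $D^{\mathsf T} A = I$, and confirms that $\Sigma_1 = 0$ reproduces the Moore–Penrose inverse $A^+$.

```python
import numpy as np

rng = np.random.default_rng(0)
s, t = 5, 3
A = rng.standard_normal((s, t))           # full column rank with probability 1

# SVD in the text's notation A = W^T [Sigma; 0] U (numpy: A = u @ S @ vt)
u, sv, vt = np.linalg.svd(A)              # u: s x s, vt: t x t
W, U = u.T, vt

# D^T = U^T (Sigma^{-1}  Sigma_1) W; any t x (s-t) block Sigma_1 works.
Sigma1 = rng.standard_normal((t, s - t))  # an arbitrary choice
DT = U.T @ np.hstack([np.diag(1.0 / sv), Sigma1]) @ W
print(np.round(DT @ A, 12))               # the identity matrix I_t

# Sigma_1 = 0 gives the Moore-Penrose inverse A^+
DT0 = U.T @ np.hstack([np.diag(1.0 / sv), np.zeros((t, s - t))]) @ W
print(np.allclose(DT0, np.linalg.pinv(A)))   # True
```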
Using the generalized inverse $D_n^{\mathsf T}$, we can deduce a recursive formula for orthogonal polynomials in several variables.
Theorem 3.3.5 Let $D_n^{\mathsf T}$ be a generalized inverse of $A_n$. Then there exist matrices $E_n : r_{n+1}^d \times r_n^d$ and $F_n : r_{n+1}^d \times r_{n-1}^d$ such that
$$\widetilde{\mathbb{P}}_{n+1} = \sum_{i=1}^d x_i D_{n,i}^{\mathsf T} \widetilde{\mathbb{P}}_n + E_n \widetilde{\mathbb{P}}_n + F_n \widetilde{\mathbb{P}}_{n-1}, \qquad (3.3.10)$$
where
$$\sum_{i=1}^d D_{n,i}^{\mathsf T} B_{n,i} = -E_n, \qquad \sum_{i=1}^d D_{n,i}^{\mathsf T} C_{n,i} = -F_n.$$

Proof Multiplying the three-term relation (3.3.1) by $D_{n,i}^{\mathsf T}$ and summing over $1 \le i \le d$, the desired equality follows from (3.3.9).
Equation (3.3.10) is an analogue of the recursive formula for orthogonal polynomials in one variable. However, unlike its one-variable counterpart, (3.3.10) does not imply the three-term relation; that is, if we choose matrices $A_{n,i}$, $B_{n,i}$ and $C_{n,i}$ and use (3.3.10) to generate a sequence of polynomials then, in general, the polynomials do not satisfy the three-term relation. We will discuss the conditions under which the equivalence does hold in Section 3.5.
3.3.2 Favard's theorem

Next we prove an analogue of Favard's theorem, which states roughly that any sequence of polynomials which satisfies a three-term relation (3.3.1) whose coefficient matrices satisfy the rank conditions (3.3.7) and (3.3.8) is necessarily a sequence of orthogonal polynomials. We need a definition.

Definition 3.3.6 A linear functional $\mathcal{L}$ defined on $\Pi^d$ is called quasi-definite if there is a basis $\mathcal{B}$ of $\Pi^d$ such that, for any $P, Q \in \mathcal{B}$,
$$\mathcal{L}(PQ) = 0 \ \text{ if } P \ne Q \qquad \text{and} \qquad \mathcal{L}(P^2) \ne 0.$$
From the discussion in Section 3.2 it is clear that if $\mathcal{L}$ is positive definite then it is quasi-definite. However, quasi-definiteness need not imply positive definiteness.

Theorem 3.3.7 Let $\{\mathbb{P}_n\}_{n=0}^\infty$, $\mathbb{P}_n = \{P_\alpha^n : |\alpha| = n,\ n \in \mathbb{N}_0\}$, $P_0 = 1$, be an arbitrary sequence in $\Pi^d$. Then the following statements are equivalent:
(i) there exists a linear functional $\mathcal{L}$ which defines a quasi-definite linear functional on $\Pi^d$ and which makes $\{\mathbb{P}_n\}_{n=0}^\infty$ an orthogonal basis in $\Pi^d$;
(ii) for $n \ge 0$, $1 \le i \le d$, there exist matrices $A_{n,i}$, $B_{n,i}$ and $C_{n,i}$ such that
(a) the polynomials $\mathbb{P}_n$ satisfy the three-term relation (3.3.1),
(b) the matrices in the relation satisfy the rank conditions (3.3.7) and (3.3.8).
Proof That statement (i) implies statement (ii) is contained in Theorems 3.3.1 and 3.3.4. To prove the converse, we first prove that the polynomial sequence $\{\mathbb{P}_n\}$ forms a basis of $\Pi^d$. It suffices to prove that the leading-coefficient matrix $G_n$ of $\mathbb{P}_n$ is invertible.

The matrix $\operatorname{diag}\{G_n, \ldots, G_n\}$ is of size $dr_n^d \times dr_n^d$ and has $d$ copies of the matrix $G_n$ as diagonal entries in the sense of block matrices. Since the polynomials $\mathbb{P}_n$ satisfy the three-term relation, comparing the coefficient matrices of $\mathbb{x}^{n+1}$ on each side of the relation leads to $A_{n,i} G_{n+1} = G_n L_{n,i}$, $1 \le i \le d$, from which it follows that $A_n G_{n+1} = \operatorname{diag}\{G_n, \ldots, G_n\} L_n$. We now proceed by induction on $n$ to prove that each $G_n$ is invertible. That $P_0 = 1$ implies $G_0 = 1$. Suppose that $G_n$ has been proved to be invertible. Then $\operatorname{diag}\{G_n, \ldots, G_n\}$ is invertible and, from the relation $\operatorname{rank} L_n = r_{n+1}^d$, we have
$$\operatorname{rank} A_n G_{n+1} = \operatorname{rank} \operatorname{diag}\{G_n, \ldots, G_n\} L_n = r_{n+1}^d.$$
Therefore, by (3.3.8) and the rank inequality for products of matrices (see, for example, p. 13 of Horn and Johnson [1985]),
$$\operatorname{rank} G_{n+1} \ge \operatorname{rank} A_n G_{n+1} \ge \operatorname{rank} A_n + \operatorname{rank} G_{n+1} - r_{n+1}^d = \operatorname{rank} G_{n+1}.$$
Thus it follows that $\operatorname{rank} G_{n+1} = \operatorname{rank} A_n G_{n+1} = r_{n+1}^d$. Hence $G_{n+1}$ is invertible and the induction is complete.

Since $\{\mathbb{P}_k\}$ is a basis of $\Pi^d$, the linear functional $\mathcal{L}$ defined on $\Pi^d$ by
$$\mathcal{L}(1) = 1, \qquad \mathcal{L}(\mathbb{P}_n) = 0, \quad n \ge 1,$$
is well defined. We now use induction to prove that
$$\mathcal{L}(\mathbb{P}_k \mathbb{P}_j^{\mathsf T}) = 0, \qquad k \ne j. \qquad (3.3.11)$$
Let $n \ge 0$ be an integer and assume that (3.3.11) holds for all $k, j$ such that $0 \le k \le n$ and $j > k$. Since the proof of (3.3.10) used only the three-term relation and the fact that $\operatorname{rank} A_n = r_{n+1}^d$, it follows from (ii)(a) and (ii)(b) that (3.3.10) holds for $\mathbb{P}_n$ under the present assumptions. Therefore, for $\gamma > n+1$, using (3.3.10) and the induction hypothesis (the terms involving $E_n$ and $F_n$ vanish),
$$\mathcal{L}(\mathbb{P}_{n+1} \mathbb{P}_\gamma^{\mathsf T}) = \mathcal{L}\Bigl(\sum_{i=1}^d x_i D_{n,i}^{\mathsf T} \mathbb{P}_n \mathbb{P}_\gamma^{\mathsf T}\Bigr) = \mathcal{L}\Bigl(\sum_{i=1}^d D_{n,i}^{\mathsf T} \mathbb{P}_n (A_{\gamma,i} \mathbb{P}_{\gamma+1} + B_{\gamma,i} \mathbb{P}_\gamma + C_{\gamma,i} \mathbb{P}_{\gamma-1})^{\mathsf T}\Bigr) = 0.$$
The induction is complete. Next we prove that $H_n = \mathcal{L}(\mathbb{P}_n \mathbb{P}_n^{\mathsf T})$ is invertible. Clearly, $H_n$ is symmetric. From (ii)(b) and (3.3.11), $A_{n,i} H_{n+1} = H_n C_{n+1,i}^{\mathsf T}$; thus
$$A_n H_{n+1} = \operatorname{diag}\{H_n, \ldots, H_n\} C_{n+1}^{\mathsf T}. \qquad (3.3.12)$$
Since $\mathcal{L}(1) = 1$, it follows that $H_0 = \mathcal{L}(P_0 P_0^{\mathsf T}) = 1$. Thus $H_0$ is invertible. Suppose that $H_n$ is invertible. Then $\operatorname{diag}\{H_n, \ldots, H_n\}$ is invertible and, by $\operatorname{rank} C_{n+1} = r_{n+1}^d$,
$$\operatorname{rank} A_n H_{n+1} = \operatorname{rank}\bigl(\operatorname{diag}\{H_n, \ldots, H_n\} C_{n+1}^{\mathsf T}\bigr) = r_{n+1}^d.$$
However, $\operatorname{rank} A_n = r_{n+1}^d$ by (ii)(b), and $A_n : dr_n^d \times r_{n+1}^d$, $H_{n+1} : r_{n+1}^d \times r_{n+1}^d$; we then have
$$\operatorname{rank} H_{n+1} \ge \operatorname{rank}(A_n H_{n+1}) \ge \operatorname{rank} A_n + \operatorname{rank} H_{n+1} - r_{n+1}^d = \operatorname{rank} H_{n+1}.$$
Therefore $\operatorname{rank} H_{n+1} = \operatorname{rank} A_n H_{n+1} = r_{n+1}^d$, which implies that $H_{n+1}$ is invertible. By induction, we have proved that $H_n$ is invertible for each $n \ge 0$. Since $H_n$ is symmetric and invertible, there exist an invertible matrix $S_n$ and an invertible diagonal matrix $\Lambda_n$ such that $H_n = S_n \Lambda_n S_n^{\mathsf T}$. For $\mathbb{Q}_n = S_n^{-1} \mathbb{P}_n$ it then follows that
$$\mathcal{L}(\mathbb{Q}_n \mathbb{Q}_n^{\mathsf T}) = S_n^{-1} \mathcal{L}(\mathbb{P}_n \mathbb{P}_n^{\mathsf T}) (S_n^{-1})^{\mathsf T} = S_n^{-1} H_n (S_n^{-1})^{\mathsf T} = \Lambda_n.$$
This proves that $\mathcal{L}$ defines a quasi-definite linear functional on $\Pi^d$ and makes $\{\mathbb{P}_n\}_{n=0}^\infty$ an orthogonal basis. The proof is complete.
If the polynomials in Theorem 3.3.7 satisfy (3.3.3) instead of (3.3.1) then they will be orthogonal with respect to a positive definite linear functional instead of a quasi-definite linear functional.

Theorem 3.3.8 Let $\{\mathbb{P}_n\}_{n=0}^\infty$, $\mathbb{P}_n = \{P_\alpha^n : |\alpha| = n,\ n \in \mathbb{N}_0\}$, $P_0 = 1$, be an arbitrary sequence in $\Pi^d$. Then the following statements are equivalent:
(i) there exists a linear functional $\mathcal{L}$ which defines a positive definite linear functional on $\Pi^d$ and which makes $\{\mathbb{P}_n\}_{n=0}^\infty$ an orthonormal basis in $\Pi^d$;
(ii) for $n \ge 0$, $1 \le i \le d$, there exist matrices $A_{n,i}$ and $B_{n,i}$ such that
(a) the polynomials $\mathbb{P}_n$ satisfy the three-term relation (3.3.3),
(b) the matrices in the three-term relation satisfy the rank conditions (3.3.7) and (3.3.8).
Proof As in the proof of the previous theorem, we only need to prove that statement (ii) implies statement (i). By Theorem 3.3.7 there exists a quasi-definite linear functional $\mathcal{L}$ which makes $\{\mathbb{P}_n\}_{n=0}^\infty$ orthogonal. We will prove that $\mathcal{L}$ is positive definite. It is sufficient to show that the matrix $H_n = \mathcal{L}(\mathbb{P}_n \mathbb{P}_n^{\mathsf T})$ is the identity matrix for every $n \in \mathbb{N}_0$. Indeed, since every nonzero polynomial $P$ of degree $n$ can be written as $P = a_n^{\mathsf T} \mathbb{P}_n + \cdots + a_0^{\mathsf T} \mathbb{P}_0$, it follows from $H_n = I$ that $\mathcal{L}(P^2) = \|a_n\|^2 + \cdots + \|a_0\|^2 > 0$. Since $C_{n+1} = A_n^{\mathsf T}$, equation (3.3.12) becomes $A_n H_{n+1} = \operatorname{diag}\{H_n, \ldots, H_n\} A_n$. Since $P_0 = 1$ and $\mathcal{L}(1) = 1$, $H_0 = 1$. Suppose that $H_n$ is an identity matrix. Then $\operatorname{diag}\{H_n, \ldots, H_n\}$ is also an identity matrix. Thus it follows from the rank condition and the above equation that $H_{n+1}$ is an identity matrix. The proof is completed by induction.
Both these theorems are extensions of Favard's theorem for orthogonal polynomials of one variable. Since the three-term relation and the rank conditions characterize orthogonality, we should be able to extract the essential information on orthogonal polynomials by studying the three-term relation; indeed, this has been carried out systematically for orthogonal polynomials in one variable. For several variables, the coefficients in (3.3.1) and (3.3.3) are matrices, and they are unique only up to matrix similarity (for (3.3.3), unitary similarity), since an orthogonal basis is not unique and there is no natural order among orthogonal polynomials of the same degree. Hence it is much harder to extract information from these coefficients.
3.3.3 Centrally symmetric integrals

In this subsection we show that a centrally symmetric linear functional can be characterized by a property of the coefficient matrices of the three-term relation. Let $\mathcal{L}$ be a positive definite linear functional. Of special interest are examples of $\mathcal{L}$ expressible as integrals with respect to a nonnegative weight function with finite moments, that is, $\mathcal{L}f = \int f(x) W(x)\,\mathrm{d}x$; in such cases we shall work with the weight function $W$ instead of the functional $\mathcal{L}$.
Definition 3.3.9 Let $\Omega \subset \mathbb{R}^d$ be the support set of the weight function $W$. Then $W$ is centrally symmetric if
$$x \in \Omega \implies -x \in \Omega \qquad \text{and} \qquad W(-x) = W(x).$$
We also call the linear functional $\mathcal{L}$ centrally symmetric if it satisfies
$$\mathcal{L}(x^\alpha) = 0, \qquad \alpha \in \mathbb{N}_0^d, \quad |\alpha| \text{ an odd integer}.$$
It is easily seen that when $\mathcal{L}$ is expressible as an integral with respect to $W$, the two definitions are equivalent. As examples we mention the rotation-invariant weight function $(1 - \|x\|^2)^{\mu - 1/2}$ on the unit ball $B^d$ and the product weight
function $\prod_{i=1}^d (1-x_i)^{a_i} (1+x_i)^{b_i}$ on the cube $[-1,1]^d$, which is centrally symmetric only if $a_i = b_i$. Weight functions on the simplex are not centrally symmetric since the simplex is not symmetric with respect to the origin.

A centrally symmetric $\mathcal{L}$ has a relatively simple structure, which allows us to gain information about the structure of orthogonal polynomials. The following theorem connects the central symmetry of $\mathcal{L}$ to the coefficient matrices in the three-term relation.
Theorem 3.3.10 Let $\mathcal{L}$ be a positive definite linear functional, and let $\{B_{n,i}\}$ be the coefficient matrices in (3.3.3) for the orthogonal polynomials with respect to $\mathcal{L}$. Then $\mathcal{L}$ is centrally symmetric if and only if $B_{n,i} = 0$ for all $n \in \mathbb{N}_0$ and $1 \le i \le d$.
Proof First assume that $\mathcal{L}$ is centrally symmetric. From (3.3.2),
$$B_{0,i} = \mathcal{L}(x_i P_0 P_0^{\mathsf T}) = \mathcal{L}(x_i) = 0.$$
Therefore, from (3.3.10), $\mathbb{P}_1 = \sum_i x_i D_{0,i}^{\mathsf T} \mathbb{P}_0$, which implies that the constant terms in the components of $\mathbb{P}_1$ vanish. Suppose we have proved that $B_{k,i} = 0$ for $1 \le k \le n-1$. Then, by (3.3.10), since $E_k = 0$ for $k = 0, 1, \ldots, n-1$, we can show by induction that the components of $\mathbb{P}_n$ are sums of monomials of even degree if $n$ is even and of odd degree if $n$ is odd. By definition,
$$B_{n,i} = \mathcal{L}(x_i \mathbb{P}_n \mathbb{P}_n^{\mathsf T}) = \sum_{j=1}^d \sum_{k=1}^d \mathcal{L}\bigl[x_i D_{n-1,j}^{\mathsf T} (x_j \mathbb{P}_{n-1} - A_{n-2,j}^{\mathsf T} \mathbb{P}_{n-2}) (x_k \mathbb{P}_{n-1} - A_{n-2,k}^{\mathsf T} \mathbb{P}_{n-2})^{\mathsf T} D_{n-1,k}\bigr].$$
Since the components of $(x_j \mathbb{P}_{n-1} - A_{n-2,j}^{\mathsf T} \mathbb{P}_{n-2})(x_k \mathbb{P}_{n-1} - A_{n-2,k}^{\mathsf T} \mathbb{P}_{n-2})^{\mathsf T}$ are polynomials that are sums of monomials of even degree, their multiples by $x_i$ are sums of monomials of odd degree. Since $\mathcal{L}$ is centrally symmetric, $B_{n,i} = 0$.
Conversely, assuming that $B_{n,i} = 0$ for all $n \in \mathbb{N}_0$, the above argument shows that the components of $\mathbb{P}_n$ are sums of monomials of even degree if $n$ is even and sums of monomials of odd degree if $n$ is odd. From $0 = B_{0,i} = \mathcal{L}(x_i P_0 P_0^{\mathsf T})$ it follows that $\mathcal{L}(x_i) = 0$. We now use induction to prove that
$$\{B_{k,i} = 0 : 1 \le k \le n\} \implies \{\mathcal{L}(x_1^{\alpha_1} \cdots x_d^{\alpha_d}) = 0 : \alpha_1 + \cdots + \alpha_d = 2n+1\}.$$
Suppose that the claim has been proved for $k \le n-1$. By (3.3.3) and (3.3.2),
$$A_{n-1,j} B_{n,i} A_{n-1,k}^{\mathsf T} = \mathcal{L}\bigl[x_i A_{n-1,j} \mathbb{P}_n (A_{n-1,k} \mathbb{P}_n)^{\mathsf T}\bigr] = \mathcal{L}\bigl[x_i (x_j \mathbb{P}_{n-1} - A_{n-2,j}^{\mathsf T} \mathbb{P}_{n-2})(x_k \mathbb{P}_{n-1} - A_{n-2,k}^{\mathsf T} \mathbb{P}_{n-2})^{\mathsf T}\bigr]$$
$$= \mathcal{L}(x_i x_j x_k \mathbb{P}_{n-1} \mathbb{P}_{n-1}^{\mathsf T}) - \mathcal{L}(x_i x_j \mathbb{P}_{n-1} \mathbb{P}_{n-2}^{\mathsf T}) A_{n-2,k} - A_{n-2,j}^{\mathsf T} \mathcal{L}(x_i x_k \mathbb{P}_{n-2} \mathbb{P}_{n-1}^{\mathsf T}) + A_{n-2,j}^{\mathsf T} B_{n-2,i} A_{n-2,k}.$$
Then $\mathcal{L}(x_i x_j \mathbb{P}_{n-1} \mathbb{P}_{n-2}^{\mathsf T}) = 0$ and $\mathcal{L}(x_i x_k \mathbb{P}_{n-2} \mathbb{P}_{n-1}^{\mathsf T}) = 0$, since the elements of these matrices are polynomials that are sums of monomials of odd degree. Therefore, from $B_{n,i} = 0$ we conclude that $\mathcal{L}(x_i x_j x_k \mathbb{P}_{n-1} \mathbb{P}_{n-1}^{\mathsf T}) = 0$. Multiplying the above matrix from the left and the right by $A_{l,p}$ and $A_{l,p}^{\mathsf T}$, respectively, for $1 \le p \le d$ and $l = n-2, \ldots, 1$, and using the three-term relation, we can repeat the above argument and derive $\mathcal{L}(x_1^{\alpha_1} \cdots x_d^{\alpha_d}) = 0$, $\alpha_1 + \cdots + \alpha_d = 2n+1$, in finitely many steps.
One important corollary of the above proof is the following result.

Theorem 3.3.11 Let $\mathcal{L}$ be a centrally symmetric linear functional. Then an orthogonal polynomial of degree $n$ with respect to $\mathcal{L}$ is a sum of monomials of even degree if $n$ is even and a sum of monomials of odd degree if $n$ is odd.

For examples see the various bases of the classical orthogonal polynomials on the unit disk in Section 2.3.
If a linear functional $\mathcal{L}$ or a weight function $W$ is not centrally symmetric but becomes centrally symmetric under a nonsingular affine transformation then the orthogonal polynomials should have a structure similar to that of the centrally symmetric case. The coefficients $B_{n,i}$ in the three-term relations, however, will not be zero. It turns out that they satisfy a commutativity condition, which suggests the following definition.

Definition 3.3.12 If the coefficient matrices $B_{n,i}$ in the three-term relation satisfy
$$B_{n,i} B_{n,j} = B_{n,j} B_{n,i}, \qquad 1 \le i, j \le d, \quad n \in \mathbb{N}_0, \qquad (3.3.13)$$
then the linear functional $\mathcal{L}$ (or the weight function $W$) is called quasi-centrally-symmetric.

Since the condition $B_{n,i} = 0$ implies (3.3.13), central symmetry implies quasi-central symmetry.
Theorem 3.3.13 Let W be a weight function dened on R
d
. Suppose that
W becomes centrally symmetric under the nonsingular linear transformation
u x, x = Tu+a, det T > 0, x, u, a R
d
.
Then, W is quasi-centrally-symmetric.
Proof   Let $W^*(u) = W(Tu+a)$ and $\Omega^* = \{u : x = Tu+a,\ x \in \Omega\}$. Denote
by $\mathbb{P}_n$ the orthonormal polynomials associated with $W$; then, making the change
of variables $x \mapsto Tx+a$ shows that the corresponding orthonormal polynomials
3.3 The Three-Term Relation 79
for $W^*$ are given by $\mathbb{P}_n^*(u) = \sqrt{\det T}\,\mathbb{P}_n(Tu+a)$. Let $B_{n,i}(W)$ denote the matrices
associated with $W$. Then, by (3.3.2),
$$B_{n,i}(W) = \int_\Omega x_i\,\mathbb{P}_n(x)\mathbb{P}_n^T(x)W(x)\,dx
= \int_{\Omega^*}\Big(\sum_{j=1}^d t_{ij}u_j + a_i\Big)\mathbb{P}_n^*(u)\mathbb{P}_n^{*T}(u)W^*(u)\,du
= \sum_{j=1}^d t_{ij}B_{n,j}(W^*) + a_iI.$$
By assumption $W^*$ is centrally symmetric on $\Omega^*$, which implies that $B_{n,i}(W^*) = 0$ by Theorem 3.3.10. Then $B_{n,i}(W) = a_iI$, which implies that the $B_{n,i}(W)$ satisfy
(3.3.13). Hence $W$ is quasi-centrally-symmetric.
How to characterize quasi-central symmetry through properties of the linear
functional $\mathcal{L}$ or the weight function is not known.
3.3.4 Examples
To get a sense of what the three-term relations are like, we give several examples
before continuing our study. The examples chosen are the classical orthogonal
polynomials of two variables on the square, the triangle and the disk. For a more
concise notation, we will write orthonormal polynomials of two variables as $P_k^n$.
The polynomials $P_k^n$, $0 \le k \le n$, constitute a basis of $\mathcal{V}_n^2$, and the polynomial
vector $\mathbb{P}_n$ is given by $\mathbb{P}_n = (P_0^n \cdots P_n^n)^T$. The three-term relations are
$$x_i\,\mathbb{P}_n = A_{n,i}\mathbb{P}_{n+1} + B_{n,i}\mathbb{P}_n + A_{n-1,i}^T\mathbb{P}_{n-1}, \qquad i = 1, 2,$$
where the sizes of the matrices are $A_{n,i} : (n+1)\times(n+2)$ and $B_{n,i} : (n+1)\times(n+1)$. Closed formulae for the coefficient matrices are derived using (3.3.2) and
the explicit formulae for an orthonormal basis. We leave the verification to the
reader.
Product polynomials on the square
In the simplest case, that of a square, we can work with general product polynomials. Let $p_n$ be the orthonormal polynomials with respect to a weight function $w$
on $[-1, 1]$; they satisfy the three-term relation
$$x\,p_n = a_np_{n+1} + b_np_n + a_{n-1}p_{n-1}, \qquad n \ge 0.$$
Let $W$ be the product weight function defined by $W(x, y) = w(x)w(y)$. One basis
of orthonormal polynomials is given by
$$P_k^n(x, y) = p_{n-k}(x)p_k(y), \qquad 0 \le k \le n.$$
The coefficient matrices of the three-term relation are then of the following form:
$$A_{n,1} = \begin{pmatrix} a_n & & & 0\\ & \ddots & & \vdots\\ & & a_0 & 0 \end{pmatrix}
\quad\text{and}\quad
A_{n,2} = \begin{pmatrix} 0 & a_0 & & \\ \vdots & & \ddots & \\ 0 & & & a_n \end{pmatrix};$$
$$B_{n,1} = \operatorname{diag}\{b_n, b_{n-1}, \ldots, b_0\}
\quad\text{and}\quad
B_{n,2} = \operatorname{diag}\{b_0, b_1, \ldots, b_n\}.$$
If $w$ is centrally symmetric then $b_k = 0$; hence $B_{n,i} = 0$.
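These matrices are easy to check numerically. The sketch below is an illustration, not part of the text: it takes $w$ to be the normalized Chebyshev weight $1/(\pi\sqrt{1-x^2})$, whose orthonormal polynomials $p_0 = 1$, $p_n = \sqrt2\,T_n$ have $a_0 = 1/\sqrt2$, $a_n = 1/2$ for $n \ge 1$ and $b_n = 0$, and recovers $A_{n,1}$ entrywise by Gauss-Chebyshev quadrature.

```python
import numpy as np

# Orthonormal Chebyshev polynomials for w(x) = 1/(pi*sqrt(1-x^2)):
# p_0 = 1, p_n = sqrt(2)*T_n, with a_0 = 1/sqrt(2), a_n = 1/2, b_n = 0.
def p(n, x):
    t = np.cos(n * np.arccos(np.clip(x, -1.0, 1.0)))
    return t if n == 0 else np.sqrt(2) * t

n, M = 3, 64
nodes = np.cos((2*np.arange(M) + 1) * np.pi / (2*M))   # Gauss-Chebyshev nodes

def inner(f):  # integral over the square against W(x, y) = w(x) w(y)
    return f(nodes[:, None], nodes[None, :]).mean()

# entries A_{n,1}[k, l] = integral of x * P^n_k * P^{n+1}_l * W
A1 = np.array([[inner(lambda x, y: x * p(n-k, x)*p(k, y) * p(n+1-l, x)*p(l, y))
                for l in range(n + 2)] for k in range(n + 1)])
a = [1/np.sqrt(2)] + [0.5] * n                         # a_0, ..., a_n
expected = np.hstack([np.diag(a[::-1]), np.zeros((n + 1, 1))])
assert np.allclose(A1, expected, atol=1e-12)
```

The same quadrature reproduces $A_{n,2}$ and confirms $B_{n,i} = 0$ for this centrally symmetric choice of $w$.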
Classical polynomials on the triangle
For the weight function
$$W(x, y) = x^{\alpha-1/2}y^{\beta-1/2}(1-x-y)^{\gamma-1/2},$$
with normalization constant given by (setting $|\alpha| = \alpha+\beta+\gamma$)
$$w_{\alpha,\beta,\gamma} = \Big(\int\!\!\int_{T^2} W(x, y)\,dx\,dy\Big)^{-1}
= \frac{\Gamma(|\alpha|+\frac32)}{\Gamma(\alpha+\frac12)\Gamma(\beta+\frac12)\Gamma(\gamma+\frac12)},$$
we consider the basis of orthonormal polynomials, see (2.4.2),
$$P_k^n(x, y) = (h_{k,n})^{-1}\,P_{n-k}^{(2k+\beta+\gamma,\,\alpha-1/2)}(2x-1)\,(1-x)^k\,
P_k^{(\gamma-1/2,\,\beta-1/2)}\Big(\frac{2y}{1-x}-1\Big),$$
where
$$(h_{k,n})^2 = \frac{w_{\alpha,\beta,\gamma}}{(2n+|\alpha|+\frac12)(2k+\beta+\gamma)}\,
\frac{\Gamma(n+k+\beta+\gamma+1)\Gamma(n-k+\alpha+\frac12)\Gamma(k+\beta+\frac12)\Gamma(k+\gamma+\frac12)}
{(n-k)!\,k!\,\Gamma(n+k+|\alpha|+\frac12)\,\Gamma(k+\beta+\gamma)}.$$
The coefficient matrices in the three-term relations are as follows:
$$A_{n,1} = \begin{pmatrix} a_{0,n} & & & 0\\ & a_{1,n} & & \vdots\\ & & \ddots & \\ & & & a_{n,n}\;\;0 \end{pmatrix},
\qquad
B_{n,1} = \operatorname{diag}\{b_{0,n}, b_{1,n}, \ldots, b_{n,n}\},$$
where
$$a_{k,n} = \frac{h_{k,n+1}}{h_{k,n}}\,
\frac{(n-k+1)(n+k+|\alpha|+\frac12)}{(2n+|\alpha|+\frac12)(2n+|\alpha|+\frac32)},$$
$$b_{k,n} = \frac12 - \frac{(\beta+\gamma+2k)^2 - (\alpha-\frac12)^2}
{2(2n+|\alpha|-\frac12)(2n+|\alpha|+\frac32)};$$
$$A_{n,2} = \begin{pmatrix}
e_{0,n} & d_{0,n} & & & 0\\
c_{1,n} & e_{1,n} & d_{1,n} & & \vdots\\
 & \ddots & \ddots & \ddots & \\
 & & c_{n,n} & e_{n,n} & d_{n,n}
\end{pmatrix},$$
where
$$c_{k,n} = \frac{h_{k-1,n+1}}{h_{k,n}}\,
\frac{(n-k+1)(n-k+2)(k+\beta-\frac12)(k+\gamma-\frac12)}
{(2n+|\alpha|+\frac12)(2n+|\alpha|+\frac32)(2k+\beta+\gamma)(2k+\beta+\gamma-1)},$$
$$e_{k,n} = \Big[1 + \frac{(\beta-\frac12)^2-(\gamma-\frac12)^2}
{(2k+\beta+\gamma+1)(2k+\beta+\gamma-1)}\Big]\frac{a_{k,n}}{2},$$
$$d_{k,n} = \frac{h_{k+1,n+1}}{h_{k,n}}\,
\frac{(n+k+|\alpha|+\frac12)(n+k+|\alpha|+\frac32)(k+1)(k+\beta+\gamma)}
{(2n+|\alpha|+\frac12)(2n+|\alpha|+\frac32)(2k+\beta+\gamma)(2k+\beta+\gamma+1)};$$
and finally
$$B_{n,2} = \begin{pmatrix}
f_{0,n} & g_{0,n} & & \\
g_{0,n} & f_{1,n} & g_{1,n} & \\
 & \ddots & \ddots & \ddots\\
 & & g_{n-1,n} & f_{n,n}
\end{pmatrix},$$
where
$$f_{k,n} = \Big[1 + \frac{(\beta-\frac12)^2-(\gamma-\frac12)^2}
{(2k+\beta+\gamma+1)(2k+\beta+\gamma-1)}\Big]\frac{1-b_{k,n}}{2},$$
$$g_{k,n} = -2\,\frac{h_{k+1,n}}{h_{k,n}}\,
\frac{(n-k+\frac12)(n+k+|\alpha|+\frac12)(k+1)(k+\beta+\gamma)}
{(2n+|\alpha|-\frac12)(2n+|\alpha|+\frac32)(2k+\beta+\gamma)(2k+\beta+\gamma+1)}.$$
Classical polynomials on the disk
With respect to the weight function $W(x, y) = (1-x^2-y^2)^{\mu-1/2}$, we consider the
orthonormal polynomials, see (2.3.1),
$$P_k^n(x, y) = (h_{k,n})^{-1}\,C_{n-k}^{k+\mu+1/2}(x)\,(1-x^2)^{k/2}\,
C_k^{\mu}\Big(\frac{y}{\sqrt{1-x^2}}\Big),$$
where
$$(h_{k,n})^2 = \frac{2\pi(2\mu+1)}{2^{2k+4\mu}[\Gamma(\mu)]^2}\,
\frac{\Gamma(n+k+2\mu+1)\Gamma(2\mu+k)}
{(2n+2\mu+1)(k+\mu)[\Gamma(k+\mu+\frac12)]^2(n-k)!\,k!}.$$
If $\mu = 0$, these formulae hold under the limit relation $\lim_{\mu\to0}\mu^{-1}C_n^{\mu}(x) =
(2/n)T_n(x)$. In this case $B_{n,i} = 0$, since the weight function is centrally symmetric. The matrices $A_{n,1}$ and $A_{n,2}$ are of the same form as in the case of classical
orthogonal polynomials on the triangle, with $e_{k,n} = 0$ and
$$a_{k,n} = \frac{h_{k,n+1}}{h_{k,n}}\,\frac{n-k+1}{2n+2\mu+1}, \qquad
d_{k,n} = \frac{h_{k+1,n+1}}{h_{k,n}}\,\frac{(k+1)(2k+2\mu+1)}{2(k+\mu)(2n+2\mu+1)},$$
$$c_{k,n} = \frac{h_{k-1,n+1}}{h_{k,n}}\,
\frac{(k+2\mu-1)(n-k+1)(n-k+2)}{2(k+\mu)(2k+2\mu-1)(2n+2\mu+1)}.$$
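Both the normalization constant and the diagonal structure of $A_{n,1}$ can be verified numerically. The sketch below is an illustration, not part of the text: it fixes the assumed concrete value $\mu = 1$, normalizes the weight by $(2\mu+1)/(2\pi)$ so that it has unit mass, and checks orthonormality of $\{P_k^2\}$ together with $A_{2,1} = [\operatorname{diag}\{a_{0,2}, a_{1,2}, a_{2,2}\} \mid 0]$.

```python
import numpy as np
from math import gamma, factorial, pi

mu = 1.0   # assumed value; weight (1 - x^2 - y^2)^(mu - 1/2), normalized

def gegenbauer(n, lam, x):
    """Gegenbauer C_n^lam(x) via the standard three-term recurrence."""
    if n == 0:
        return np.ones_like(x)
    c_prev, c = np.ones_like(x), 2.0 * lam * x
    for m in range(2, n + 1):
        c_prev, c = c, (2*(m + lam - 1)*x*c - (m + 2*lam - 2)*c_prev) / m
    return c

def h2(k, n):   # (h_{k,n})^2 as displayed above
    return (2*pi*(2*mu + 1) / (2**(2*k + 4*mu) * gamma(mu)**2)
            * gamma(n + k + 2*mu + 1) * gamma(2*mu + k)
            / ((2*n + 2*mu + 1) * (k + mu) * gamma(k + mu + 0.5)**2
               * factorial(n - k) * factorial(k)))

def P(k, n, x, t):   # P_k^n at (x, y), parametrized by y = sqrt(1-x^2)*t
    return (gegenbauer(n - k, k + mu + 0.5, x) * (1 - x**2)**(k/2)
            * gegenbauer(k, mu, t) / np.sqrt(h2(k, n)))

# substituting y = sqrt(1-x^2)*t gives the weight (1-x^2)^mu (1-t^2)^(mu-1/2);
# Gauss-Legendre in x, Gauss-Chebyshev (2nd kind; exact for mu = 1) in t
M = 40
xg, wx = np.polynomial.legendre.leggauss(M)
j = np.arange(1, M + 1)
tg, wt = np.cos(j*pi/(M + 1)), pi/(M + 1) * np.sin(j*pi/(M + 1))**2
X, T = np.meshgrid(xg, tg, indexing="ij")
WGT = (2*mu + 1)/(2*pi) * np.outer(wx * (1 - xg**2)**mu, wt)

def inner(f):
    return np.sum(WGT * f(X, T))

G = np.array([[inner(lambda x, t: P(k, 2, x, t) * P(l, 2, x, t))
               for l in range(3)] for k in range(3)])
assert np.allclose(G, np.eye(3), atol=1e-10)          # orthonormality

A1 = np.array([[inner(lambda x, t: x * P(k, 2, x, t) * P(l, 3, x, t))
                for l in range(4)] for k in range(3)])
a = [np.sqrt(h2(k, 3)/h2(k, 2)) * (3 - k) / (2*2 + 2*mu + 1) for k in range(3)]
assert np.allclose(A1, np.hstack([np.diag(a), np.zeros((3, 1))]), atol=1e-10)
```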
3.4 Jacobi Matrices and Commuting Operators
For a family of orthogonal polynomials in one variable we can use the coefficients of the three-term relation to define a Jacobi matrix $J$, as in Subsection
1.3.2. This semi-infinite matrix defines an operator on the sequence space $\ell^2$. It
is well known that the spectral measure of this operator is closely related to the
measure that defines the linear functional $\mathcal{L}$ with respect to which the polynomials
are orthogonal.
For orthogonal polynomials in several variables, we can use the coefficient
matrices to define a family of tridiagonal matrices $J_i$, $1 \le i \le d$, as follows:
$$J_i = \begin{pmatrix}
B_{0,i} & A_{0,i} & & \\
A_{0,i}^T & B_{1,i} & A_{1,i} & \\
 & A_{1,i}^T & B_{2,i} & \ddots\\
 & & \ddots & \ddots
\end{pmatrix}, \qquad 1 \le i \le d. \qquad (3.4.1)$$
We call the $J_i$ block Jacobi matrices. It should be pointed out that their entries
are coefficient matrices of the three-term relation, whose sizes increase to infinity
going down the main diagonal. As part of the definition of the block Jacobi matrices, we require that the matrices $A_{n,i}$ and $B_{n,i}$ satisfy the rank conditions (3.3.7)
and (3.3.8). This requirement, however, implies that the coefficient matrices have
to satisfy several other conditions. Indeed, the three-term relation and the rank
conditions imply that the $\mathbb{P}_n$ are orthonormal, for which we have
Theorem 3.4.1   Let $\{\mathbb{P}_n\}$ be a sequence of orthonormal polynomials satisfying the three-term relation (3.3.3). Then the coefficient matrices satisfy the
commutativity conditions
$$A_{k,i}A_{k+1,j} = A_{k,j}A_{k+1,i},$$
$$A_{k,i}B_{k+1,j} + B_{k,i}A_{k,j} = B_{k,j}A_{k,i} + A_{k,j}B_{k+1,i}, \qquad (3.4.2)$$
$$A_{k-1,i}^TA_{k-1,j} + B_{k,i}B_{k,j} + A_{k,i}A_{k,j}^T = A_{k-1,j}^TA_{k-1,i} + B_{k,j}B_{k,i} + A_{k,j}A_{k,i}^T,$$
for $i \ne j$, $1 \le i, j \le d$, and $k \ge 0$, where $A_{-1,i} = 0$.
Proof   By Theorem 3.3.8 there is a linear functional $\mathcal{L}$ that makes $\{\mathbb{P}_k\}$ orthonormal. Using the recurrence relation, there are two different ways of calculating
the matrices $\mathcal{L}(x_ix_j\mathbb{P}_k\mathbb{P}_{k+2}^T)$, $\mathcal{L}(x_ix_j\mathbb{P}_k\mathbb{P}_k^T)$ and $\mathcal{L}(x_ix_j\mathbb{P}_k\mathbb{P}_{k+1}^T)$. These calculations lead to the desired matrix equations. For example, by the recurrence relation
(3.3.3),
$$\mathcal{L}(x_ix_j\mathbb{P}_k\mathbb{P}_{k+2}^T) = \mathcal{L}(x_i\mathbb{P}_k\,x_j\mathbb{P}_{k+2}^T)
= \mathcal{L}\big[(A_{k,i}\mathbb{P}_{k+1}+\cdots)(\cdots+A_{k+1,j}^T\mathbb{P}_{k+1})^T\big] = A_{k,i}A_{k+1,j}$$
and
$$\mathcal{L}(x_ix_j\mathbb{P}_k\mathbb{P}_{k+2}^T) = \mathcal{L}(x_j\mathbb{P}_k\,x_i\mathbb{P}_{k+2}^T) = A_{k,j}A_{k+1,i},$$
which leads to the first equation in (3.4.2).
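For the product-weight example on the square these conditions can be verified directly. The sketch below is an illustration, assuming the concrete Chebyshev coefficients $a_0 = 1/\sqrt2$, $a_k = 1/2$, $b_k = 0$; it builds $A_{n,1}$, $A_{n,2}$ and checks the first and third conditions in (3.4.2) (the second is trivial here since $B_{n,i} = 0$).

```python
import numpy as np

# Coefficient matrices of the square (product Chebyshev) example:
# A_{n,1} = [diag(a_n, ..., a_0) | 0], A_{n,2} has a_k on the superdiagonal.
def a(k):
    return 1/np.sqrt(2) if k == 0 else 0.5

def A(n, i):
    M = np.zeros((n + 1, n + 2))
    for k in range(n + 1):
        if i == 1:
            M[k, k] = a(n - k)       # x-direction acts on p_{n-k}(x)
        else:
            M[k, k + 1] = a(k)       # y-direction acts on p_k(y)
    return M

for k in range(6):
    # first commutativity condition in (3.4.2)
    assert np.allclose(A(k, 1) @ A(k + 1, 2), A(k, 2) @ A(k + 1, 1))
    # third condition (B_{k,i} = 0 here)
    lhs = (A(k-1, 1).T @ A(k-1, 2) if k > 0 else 0) + A(k, 1) @ A(k, 2).T
    rhs = (A(k-1, 2).T @ A(k-1, 1) if k > 0 else 0) + A(k, 2) @ A(k, 1).T
    assert np.allclose(lhs, rhs)
```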
The reason that the relations in (3.4.2) are called commutativity conditions will
become clear soon. We take these conditions as part of the definition of the block
Jacobi matrices $J_i$.
The matrices $J_i$ can be considered as a family of linear operators which act
via matrix multiplication on $\ell^2$. The domain of the $J_i$ consists of all sequences
in $\ell^2$ for which matrix multiplication yields sequences in $\ell^2$. As we shall see
below, under proper conditions the matrices $J_i$ form a family of commuting self-adjoint operators and the spectral theorem implies that the linear functional $\mathcal{L}$ in
Theorem 3.3.7 has an integral representation. In order to proceed, we need some
notation from the spectral theory of self-adjoint operators in a Hilbert space (see,
for example, Riesz and Sz.-Nagy [1955] or Rudin [1991]).
Let $H$ be a separable Hilbert space and let $\langle\cdot,\cdot\rangle$ denote the inner product in $H$.
Each self-adjoint operator $T$ in $H$ is associated with a spectral measure $E$ on $\mathbb{R}$
such that $T = \int x\,dE(x)$; here $E$ is a projection-valued measure defined for the Borel
sets of $\mathbb{R}$ such that $E(\mathbb{R})$ is the identity operator in $H$ and $E(B\cap C) = E(B)E(C)$
for Borel sets $B, C \subset \mathbb{R}$. For any $\phi \in H$ the mapping $B \mapsto \langle E(B)\phi, \phi\rangle$ is an ordinary
measure defined for the Borel sets $B \subset \mathbb{R}$ and denoted $\langle E\phi, \phi\rangle$. The operators in
the family $T_1, \ldots, T_d$ in $H$ commute, by definition, if their spectral measures
commute: that is, if $E_i(B)E_j(C) = E_j(C)E_i(B)$ for any $i, j = 1, \ldots, d$ and any two
Borel sets $B, C \subset \mathbb{R}$. If $T_1, \ldots, T_d$ commute then $E = E_1\otimes\cdots\otimes E_d$ is a spectral
measure on $\mathbb{R}^d$ with values that are self-adjoint projections in $H$. In particular, $E$
is the unique measure such that
$$E(B_1\times\cdots\times B_d) = E_1(B_1)\cdots E_d(B_d)$$
for any Borel sets $B_1, \ldots, B_d \subset \mathbb{R}$. The measure $E$ is called the spectral measure
of the commuting family $T_1, \ldots, T_d$. A vector $\phi_0 \in H$ is a cyclic vector in $H$ with
respect to the commuting family of self-adjoint operators $T_1, \ldots, T_d$ in $H$ if the
linear manifold $\{P(T_1, \ldots, T_d)\phi_0 : P \in \Pi^d\}$ is dense in $H$. The spectral theorem
for $T_1, \ldots, T_d$ states:
Theorem 3.4.2   If $T_1, \ldots, T_d$ is a commuting family of self-adjoint operators
with cyclic vector $\phi_0$ then $T_1, \ldots, T_d$ are unitarily equivalent to multiplication
operators $X_1, \ldots, X_d$,
$$(X_if)(x) = x_if(x), \qquad 1 \le i \le d,$$
defined on $L^2(\mu)$, where the measure $\mu$ is defined by $\mu(B) = \langle E(B)\phi_0, \phi_0\rangle$ for
the Borel sets $B \subset \mathbb{R}^d$.
The unitary equivalence means that there exists a unitary mapping $U : H \to
L^2(\mu)$ such that $UT_iU^{-1} = X_i$, $1 \le i \le d$. If the operators in Theorem 3.4.2 are
bounded and their spectra are denoted by $S_i$ then the measure $\mu$ has support $S \subset
S_1\times\cdots\times S_d$. Note that the measure satisfies
$$\langle\phi_0, P(J_1, \ldots, J_d)\phi_0\rangle = \int_S P(x)\,d\mu(x).$$
We apply the spectral theorem to the block Jacobi operators $J_1, \ldots, J_d$ on
$\ell^2$. Let $\{\phi_\alpha\}_{\alpha\in\mathbb{N}_0^d}$ denote the canonical orthonormal basis for $H = \ell^2$. Introduce the notation $\overline\phi_k = (\phi_\alpha)_{|\alpha|=k}$ for the column vector whose elements are $\phi_\alpha$
ordered according to the lexicographical order. Then each $f \in H$ can be written as
$f = \sum a_k^T\overline\phi_k$. Let $\|\cdot\|_2$ be the matrix norm induced by the Euclidean norm for
vectors. First we consider bounded operators.
Lemma 3.4.3   The operator $J_i$ defined via (3.4.1) is bounded if and only if
$$\sup_{k\ge0}\|A_{k,i}\|_2 < \infty \quad\text{and}\quad \sup_{k\ge0}\|B_{k,i}\|_2 < \infty.$$
Proof   For any $f \in H$, $f = \sum a_k^T\overline\phi_k$, we have $\|f\|_H^2 = \langle f, f\rangle = \sum a_k^Ta_k$. It follows
from the definition of $J_i$ that
$$J_if = \sum_{k=0}^{\infty} a_k^T(A_{k,i}\overline\phi_{k+1} + B_{k,i}\overline\phi_k + A_{k-1,i}^T\overline\phi_{k-1})
= \sum_{k=0}^{\infty}(a_{k-1}^TA_{k-1,i} + a_k^TB_{k,i} + a_{k+1}^TA_{k,i}^T)\overline\phi_k,$$
where we define $A_{-1,i} = 0$. Hence, if $\sup_{k\ge0}\|A_{k,i}\|_2$ and $\sup_{k\ge0}\|B_{k,i}\|_2$ are
bounded then it follows from
$$\|J_if\|_H^2 = \sum_{k=0}^{\infty}\|a_{k-1}^TA_{k-1,i} + a_k^TB_{k,i} + a_{k+1}^TA_{k,i}^T\|_2^2
\le 3\Big(2\sup_{k\ge0}\|A_{k,i}\|_2^2 + \sup_{k\ge0}\|B_{k,i}\|_2^2\Big)\|f\|_H^2$$
that $J_i$ is a bounded operator. Conversely, suppose that $\|A_{k,i}\|_2$, say, approaches
infinity for a subsequence of $\mathbb{N}_0$. Let $a_k$ be vectors such that $\|a_k\|_2 = 1$ and
$\|a_k^TA_{k,i}\|_2 = \|A_{k,i}\|_2$. Then $\|a_k^T\overline\phi_k\|_H = \|a_k\|_2 = 1$. Therefore, it follows from
$$\|J_i\|^2 \ge \|J_ia_k^T\overline\phi_k\|_H^2
= \|a_k^TA_{k,i}\|_2^2 + \|a_k^TB_{k,i}\|_2^2 + \|a_k^TA_{k-1,i}^T\|_2^2 \ge \|A_{k,i}\|_2^2$$
that $J_i$ is unbounded.
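In the square example above the blocks stay uniformly bounded, consistent with Lemma 3.4.3 and with the compact support of the measure. A quick check, with the Chebyshev coefficients assumed as before:

```python
import numpy as np

# A_{n,1} = [diag(a_n, ..., a_0) | 0] for the product Chebyshev weight,
# with a_0 = 1/sqrt(2) and a_k = 1/2 for k >= 1 (b_k = 0, so B_{n,1} = 0).
def A(n):
    a = [1/np.sqrt(2)] + [0.5] * n
    return np.hstack([np.diag(a[::-1]), np.zeros((n + 1, 1))])

norms = [np.linalg.norm(A(n), 2) for n in range(20)]   # spectral norms
assert max(norms) <= 1/np.sqrt(2) + 1e-12              # sup_k ||A_{k,1}||_2 finite
```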
Lemma 3.4.4   Suppose that $J_i$, $1 \le i \le d$, is bounded. Then $J_1, \ldots, J_d$ are self-adjoint operators and they pairwise commute.
Proof   Since $J_i$ is bounded, it is self-adjoint if it is symmetric. That is, we need to
show that $\langle J_if, g\rangle = \langle f, J_ig\rangle$. Let $f = \sum a_k^T\overline\phi_k$ and $g = \sum b_k^T\overline\phi_k$. From the definition
of $J_i$,
$$\langle J_if, g\rangle = \sum(a_{k-1}^TA_{k-1,i} + a_k^TB_{k,i} + a_{k+1}^TA_{k,i}^T)b_k
= \sum b_k^T(A_{k-1,i}^Ta_{k-1} + B_{k,i}a_k + A_{k,i}a_{k+1}) = \langle f, J_ig\rangle.$$
Also, since the $J_i$ are bounded, to show that $J_1, \ldots, J_d$ commute we need only
verify that
$$J_kJ_jf = J_jJ_kf \qquad \text{for all } f \in H.$$
A simple calculation shows that this is equivalent to the commutativity conditions (3.4.2).
Lemma 3.4.5   Suppose that $J_i$, $1 \le i \le d$, is bounded. Then $\phi_0 = \overline\phi_0 \in H$ is a
cyclic vector with respect to $J_1, \ldots, J_d$, and
$$\overline\phi_n = \mathbb{P}_n(J_1, \ldots, J_d)\phi_0, \qquad (3.4.3)$$
where $\mathbb{P}_n(x)$ is a polynomial vector as in (3.2.4).
Proof   We need only prove (3.4.3), since it implies that $\phi_0$ is a cyclic vector. The
proof uses induction. Clearly $\mathbb{P}_0 = 1$. From the definition of $J_i$,
$$J_i\overline\phi_0 = A_{0,i}\overline\phi_1 + B_{0,i}\overline\phi_0, \qquad 1 \le i \le d.$$
Multiplying by $D_{0,i}^T$ from (3.3.9) and summing over $i = 1, \ldots, d$, we get
$$\overline\phi_1 = \sum_{i=1}^d D_{0,i}^TJ_i\overline\phi_0 - \sum_{i=1}^d D_{0,i}^TB_{0,i}\overline\phi_0
= \Big(\sum_{i=1}^d D_{0,i}^TJ_i - E_0\Big)\overline\phi_0,$$
where $E_0 = \sum D_{0,i}^TB_{0,i}$. Therefore $\mathbb{P}_1(x) = \sum_{i=1}^d x_iD_{0,i}^T - E_0$. Since $D_{0,i}^T$ is of size
$r_1^d\times r_0^d = d\times1$, $\mathbb{P}_1$ is of the desired form. Likewise, for $k \ge 1$,
$$J_i\overline\phi_k = A_{k,i}\overline\phi_{k+1} + B_{k,i}\overline\phi_k + A_{k-1,i}^T\overline\phi_{k-1}, \qquad 1 \le i \le d,$$
which implies, by induction, that
$$\overline\phi_{k+1} = \sum_{i=1}^d D_{k,i}^TJ_i\overline\phi_k - E_k\overline\phi_k - F_k\overline\phi_{k-1}
= \Big[\sum_{i=1}^d D_{k,i}^TJ_i\,\mathbb{P}_k(J) - E_k\mathbb{P}_k(J) - F_k\mathbb{P}_{k-1}(J)\Big]\phi_0,$$
where $J = (J_1, \ldots, J_d)$ and $E_k$ and $F_k$ are as in (3.3.10). Consequently
$$\mathbb{P}_{k+1}(x) = \sum_{i=1}^d x_iD_{k,i}^T\,\mathbb{P}_k(x) - E_k\mathbb{P}_k(x) - F_k\mathbb{P}_{k-1}(x).$$
Clearly, every component of $\mathbb{P}_{k+1}$ is a polynomial in $\Pi_{k+1}^d$.
The last two lemmas show that we can apply the spectral theorem to the
bounded operators $J_1, \ldots, J_d$. Hence, we can state the following.
Lemma 3.4.6   If $J_i$, $1 \le i \le d$, is bounded then there exists a measure $\mu \in \mathcal{M}$
with compact support such that $J_1, \ldots, J_d$ are unitarily equivalent to the multiplication operators $X_1, \ldots, X_d$ on $L^2(d\mu)$. Moreover, the polynomials $\{\mathbb{P}_n\}_{n=0}^{\infty}$ in
Lemma 3.4.5 are orthonormal with respect to $\mu$.
Proof   Since $\mu(B) = \langle E(B)\phi_0, \phi_0\rangle$ in Theorem 3.4.2,
$$\int \mathbb{P}_n(x)\mathbb{P}_m^T(x)\,d\mu(x) = \langle\mathbb{P}_n(J)\phi_0, \mathbb{P}_m^T(J)\phi_0\rangle = \langle\overline\phi_n, \overline\phi_m^T\rangle.$$
This proves that the $\mathbb{P}_n$ are orthonormal.
Unitary equivalence associates the cyclic vector $\phi_0$ with the function $P(x) = 1$
and the orthonormal basis $\{\overline\phi_n\}$ in $H$ with the orthonormal basis $\{\mathbb{P}_n\}$ in $L^2(d\mu)$.
Theorem 3.4.7   Let $\{\mathbb{P}_n\}_{n=0}^{\infty}$, $\mathbb{P}_0 = 1$, be a sequence in $\Pi^d$. Then the following
statements are equivalent.
(i) There exists a determinate measure $\mu \in \mathcal{M}$ with compact support in $\mathbb{R}^d$
such that $\{\mathbb{P}_n\}_{n=0}^{\infty}$ is orthonormal with respect to $\mu$.
(ii) Statement (ii) in Theorem 3.3.8 holds, together with
$$\sup_{k\ge0}\|A_{k,i}\|_2 < \infty \quad\text{and}\quad \sup_{k\ge0}\|B_{k,i}\|_2 < \infty, \qquad 1 \le i \le d. \qquad (3.4.4)$$
Proof   On the one hand, if $\mu$ has compact support then the multiplication operators $X_1, \ldots, X_d$ in $L^2(d\mu)$ are bounded. Since these operators have the $J_i$ as
representation matrices with respect to the orthonormal basis $\{\mathbb{P}_n\}$, (3.4.4) follows
from Lemma 3.4.3.
On the other hand, by Theorem 3.3.8 the three-term relation and the rank conditions imply that the $\mathbb{P}_n$ are orthonormal; hence, we can define Jacobi matrices
$J_i$. The condition (3.4.4) shows that the operators $J_i$ are bounded. The existence
of a measure with compact support follows from Lemma 3.4.6. That the measure
is determinate follows from Theorem 3.2.17.
For block Jacobi matrices, the formal commutativity $J_iJ_j = J_jJ_i$ is equivalent
to the conditions in (3.4.2), which is why we call them commutativity conditions.
For bounded self-adjoint operators, the commutation of the spectral measures is
equivalent to formal commutation. There are, however, examples of unbounded
self-adjoint operators with a common dense domain such that they formally commute but their spectral measures do not commute (see Nelson [1959]). Theorem
3.4.7 shows that measures with unbounded support give rise to unbounded
Jacobi matrices. There is a sufficient condition for the commutation of operators in Nelson [1959], which turns out to be applicable to a family of unbounded
operators $J_1, \ldots, J_d$ that satisfy an additional condition. The result is as follows.
Theorem 3.4.8   Let $\{\mathbb{P}_n\}_{n=0}^{\infty}$, $\mathbb{P}_0 = 1$, be a sequence in $\Pi^d$ that satisfies the
three-term relation (3.3.3) and the rank conditions (3.3.7) and (3.3.8). If
$$\sum_{k=0}^{\infty}\frac{1}{\|A_{k,i}\|_2} = \infty, \qquad 1 \le i \le d, \qquad (3.4.5)$$
then there exists a determinate measure $\mu \in \mathcal{M}$ such that the $\mathbb{P}_n$ are orthonormal
with respect to $\mu$.
The condition (3.4.5) is a well-known condition in one variable and implies the
classical Carleman condition (3.2.16) for the determinacy of a moment problem;
see p. 24 of Akhiezer [1965]. For the proof of this theorem and a discussion of
the condition (3.4.5), see Xu [1993b].
3.5 Further Properties of the Three-Term Relation
In this section we discuss further properties of the three-term relation. To
some extent, the properties discussed below indicate major differences between
three-term relations in one variable and in several variables. They indicate that
the three-term relation in several variables is not as strong as that in one variable;
for example, there is no analogue of orthogonal polynomials of the second kind.
These differences reflect the essential difficulties in higher dimensions.
3.5.1 Recurrence formula
For orthogonal polynomials in one variable, the three-term relation is equivalent
to a recursion relation. For any given sequences $\{a_n\}$ and $\{b_n\}$ we can use
$$p_{n+1} = \frac{1}{a_n}(x-b_n)p_n - \frac{a_{n-1}}{a_n}p_{n-1}, \qquad p_0 = 1, \quad p_{-1} = 0,$$
to generate a sequence of polynomials, which by Favard's theorem is a sequence
of orthogonal polynomials provided that $a_n > 0$.
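In code, the one-variable recurrence reads as follows. This is a minimal sketch: the Chebyshev coefficients $a_0 = 1/\sqrt2$, $a_n = 1/2$, $b_n = 0$ are an assumed concrete example, and orthonormality of the generated sequence is confirmed by Gauss-Chebyshev quadrature.

```python
import numpy as np

def orthonormal_polys(N, a, b, x):
    """Evaluate p_0, ..., p_N at x via the recurrence
    p_{n+1} = ((x - b_n) p_n - a_{n-1} p_{n-1}) / a_n,  p_0 = 1, p_{-1} = 0."""
    p = [np.ones_like(x), (x - b(0)) * np.ones_like(x) / a(0)]
    for n in range(1, N):
        p.append(((x - b(n)) * p[n] - a(n - 1) * p[n - 1]) / a(n))
    return p

a = lambda n: 1/np.sqrt(2) if n == 0 else 0.5   # orthonormal Chebyshev
b = lambda n: 0.0

M = 64
x = np.cos((2*np.arange(M) + 1) * np.pi / (2*M))   # Gauss-Chebyshev nodes
p = orthonormal_polys(5, a, b, x)
G = np.array([[np.mean(pk * pl) for pl in p] for pk in p])
assert np.allclose(G, np.eye(6), atol=1e-12)       # orthonormal w.r.t. w
```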
In several variables, however, the recurrence formula (3.3.10) without
additional conditions is not equivalent to the three-term relation. As a
consequence, although Theorem 3.3.8 is an extension of the classical Favard theorem for one variable, it is not as strong as that theorem. In fact, one direction of
the theorem says that if there is a sequence of polynomials $\{\mathbb{P}_n\}$ that satisfies the
three-term relation and the rank condition then it is orthonormal. But it does not
answer the question of when and which $\{\mathbb{P}_n\}$ will satisfy such a relation. The conditions under which the recurrence relation (3.3.10) is equivalent to the three-term
relation are discussed below.
Theorem 3.5.1   Let $\{\mathbb{P}_k\}_{k=0}^{\infty}$ be defined by (3.3.10). Then there is a linear functional $\mathcal{L}$ that is positive definite and makes $\{\mathbb{P}_k\}_{k=0}^{\infty}$ an orthonormal basis for
$\Pi^d$ if and only if the $B_{k,i}$ are symmetric, the $A_{k,i}$ satisfy the rank condition (3.3.7), and
together they satisfy the commutativity conditions (3.4.2).
This theorem reveals one major difference between orthogonal polynomials of
one variable and of several variables; namely, the three-term relation in the case
of several variables is different from the recurrence relation. For the recurrence
formula to generate a sequence of orthogonal polynomials it is necessary that its
coefficients satisfy the commutativity conditions. We note that, even if the set
$\{\mathbb{P}_n\}_{n=0}^{\infty}$ is only orthogonal rather than orthonormal, $S_n^{-1}\mathbb{P}_n$ will be orthonormal,
where $S_n$ is a nonsingular matrix that satisfies $S_nS_n^T = \mathcal{L}(\mathbb{P}_n\mathbb{P}_n^T)$. Therefore, our
theorem may be stated in terms of polynomials that are only orthogonal. However,
it has to be appropriately modified, since the three-term relation in this case is
(3.3.1) and the commutativity conditions become more complicated.
For the proof of Theorem 3.5.1 we will need several lemmas. One reason that
the recurrence relation (3.3.10) is not equivalent to the three-term relation (3.3.1)
lies in the fact that $D_n^T$ is only a left inverse of $A_n$; that is, $D_n^TA_n = I$, but $A_nD_n^T$
is not equal to the identity. We need to understand the kernel space of $I - A_nD_n^T$,
which is related to the kernel space associated with the matrices $A_{n-1,i}$, $1 \le i \le d$. In order to
describe the latter object, we introduce the following notation.
Definition 3.5.2   For any given sequence of matrices $C_1, \ldots, C_d$, where each $C_i$
is of size $s\times t$, a joint matrix $\Delta_C$ is defined as follows. Let $\Delta_{i,j}$, $1 \le i, j \le d$, $i \ne j$,
be block matrices defined by
$$\Delta_{i,j} = (\,0\mid C_j^T\mid\cdots\mid{-C_i^T}\mid0\,), \qquad \Delta_{i,j} : t\times ds,$$
that is, the only two nonzero blocks are $C_j^T$ at the $i$th block and $-C_i^T$ at the
$j$th block. The matrix $\Delta_C$ is then defined by using the $\Delta_{i,j}$ as blocks in the
lexicographical order of $\{(i, j) : 1 \le i < j \le d\}$,
$$\Delta_C = [\Delta_{1,2}^T\mid\Delta_{1,3}^T\mid\cdots\mid\Delta_{d-1,d}^T].$$
The matrix $\Delta_C$ is of size $ds\times\binom{d}{2}t$.
For example, for $d = 2$ and $d = 3$, respectively, $\Delta_C$ is given by
$$\begin{pmatrix} C_2\\ -C_1 \end{pmatrix}
\quad\text{and}\quad
\begin{pmatrix} C_2 & C_3 & 0\\ -C_1 & 0 & C_3\\ 0 & -C_1 & -C_2 \end{pmatrix}.$$
The following lemma follows readily from the definition of $\Delta_C$.
Lemma 3.5.3   Let $Y = (Y_1^T\cdots Y_d^T)^T$, $Y_i \in \mathbb{R}^s$. Then
$$\Delta_C^TY = 0 \quad\text{if and only if}\quad C_i^TY_j = C_j^TY_i, \qquad 1 \le i < j \le d.$$
By definition, $\Delta_{A_n}$ and $\Delta_{A_n^T}$ are matrices of different sizes. Indeed,
$$\Delta_{A_n} : dr_n^d\times\binom{d}{2}r_{n+1}^d
\quad\text{and}\quad
\Delta_{A_n^T} : dr_{n+1}^d\times\binom{d}{2}r_n^d.$$
We will need to find the ranks of these two matrices. Because of the relation
(3.3.4), we first look at $\Delta_{L_n}$ and $\Delta_{L_n^T}$, where the matrices $L_{n,i}$ are defined by (3.3.5).
Lemma 3.5.4   For $n \in \mathbb{N}_0$ and $1 \le i < j \le d$, $L_{n,i}L_{n+1,j} = L_{n,j}L_{n+1,i}$.
Proof   From (3.3.5) it follows that, for any $x \in \mathbb{R}^d$,
$$L_{n,i}L_{n+1,j}\mathbf{x}^{n+2} = x_ix_j\mathbf{x}^n = L_{n,j}L_{n+1,i}\mathbf{x}^{n+2}.$$
Taking partial derivatives, with respect to $x$, of the above identity we conclude
that $(L_{n,i}L_{n+1,j} - L_{n,j}L_{n+1,i})e_k = 0$ for every element $e_k$ of the standard basis of
$\mathbb{R}^{r_{n+2}^d}$. Hence the desired identity follows.
Lemma 3.5.5   For $d \ge 2$ and $n \ge 1$, $\operatorname{rank}\Delta_{L_n^T} = dr_{n+1}^d - r_{n+2}^d$.
Proof   We shall prove that the dimension of the null space of $\Delta_{L_n^T}^T$ is $r_{n+2}^d$. Let $Y =
(Y_1^T\cdots Y_d^T)^T \in \mathbb{R}^{dr_{n+1}}$, where $Y_i \in \mathbb{R}^{r_{n+1}}$. Consider the homogeneous equation
in $dr_{n+1}^d$ variables, $\Delta_{L_n^T}^TY = 0$, which, by Lemma 3.5.3, is equivalent to
$$L_{n,i}Y_j = L_{n,j}Y_i, \qquad 1 \le i < j \le d. \qquad (3.5.1)$$
There is exactly a single 1 in each row of $L_{n,i}$, and the rank of $L_{n,i}$ is $r_n^d$.
Define $\mathbb{N}_n = \{\alpha \in \mathbb{N}_0^d : |\alpha| = n\}$ and, for each $1 \le i \le d$, $\mathbb{N}_{n,i} = \{\alpha \in \mathbb{N}_0^d :
|\alpha| = n \text{ and } \alpha_i \ne 0\}$. Let $\#(\mathcal{N})$ denote the number of elements in the set $\mathcal{N}$.
Counting the number of integer solutions of $|\alpha| = n$ shows that $\#(\mathbb{N}_n) = r_n^d$ and
$\#(\mathbb{N}_{n,i}) = r_{n-1}^d$. We can consider the $L_{n,i}$ as transforms from $\mathbb{N}_n$ to $\mathbb{N}_{n,i}$. Fix a
one-to-one correspondence between the elements of $\mathbb{N}_n$ and the elements of a
vector in $\mathbb{R}^{r_n}$, and write $Y_i|_{\mathbb{N}_{n+1,j}} = L_{n,j}Y_i$. The linear system of equations (3.5.1)
can then be written as
$$Y_i|_{\mathbb{N}_{n+1,j}} = Y_j|_{\mathbb{N}_{n+1,i}}, \qquad 1 \le i < j \le d. \qquad (3.5.2)$$
This gives $\binom{d}{2}r_n^d$ equations in the $dr_{n+1}^d$ variables of $Y$, not all of which are
independent. For any distinct integers $i$, $j$ and $k$,
$$Y_i|_{\mathbb{N}_{n+1,j}\cap\mathbb{N}_{n+1,k}} = Y_j|_{\mathbb{N}_{n+1,i}\cap\mathbb{N}_{n+1,k}}, \qquad
Y_i|_{\mathbb{N}_{n+1,k}\cap\mathbb{N}_{n+1,j}} = Y_k|_{\mathbb{N}_{n+1,i}\cap\mathbb{N}_{n+1,j}},$$
$$Y_j|_{\mathbb{N}_{n+1,k}\cap\mathbb{N}_{n+1,i}} = Y_k|_{\mathbb{N}_{n+1,j}\cap\mathbb{N}_{n+1,i}}.$$
Thus, there are exactly $\#(\mathbb{N}_{n+1,j}\cap\mathbb{N}_{n+1,k}) = r_{n-1}^d$ duplicated equations among
these three systems of equations. Counting all combinations of the three systems
generated by the equations in (3.5.2) gives $\binom{d}{3}r_{n-1}^d$ equations; however, among
them some are counted more than once. Repeating the above argument for the
four distinct systems generated by the equations in (3.5.1), we have that there
are $\#(\mathbb{N}_{n,i}\cap\mathbb{N}_{n,j}\cap\mathbb{N}_{n,k}) = r_{n-2}^d$ equations that are duplicated. There are $\binom{d}{4}$
combinations of the four different systems of equations. We then need to consider
five systems of equations, and so on. In this way, it follows from the inclusion-exclusion formula that the number of duplicated equations in (3.5.2) is
$$\binom{d}{3}r_{n-1}^d - \binom{d}{4}r_{n-2}^d + \cdots + (-1)^{d+1}r_{n-d+2}^d
= \sum_{k=3}^d(-1)^{k+1}\binom{d}{k}r_{n-k+2}^d.$$
Thus, among the $\binom{d}{2}r_n^d$ equations (3.5.1), the number that are independent is
$$\binom{d}{2}r_n^d - \sum_{k=3}^d(-1)^{k+1}\binom{d}{k}r_{n-k+2}^d
= \sum_{k=2}^d(-1)^k\binom{d}{k}r_{n-k+2}^d.$$
Since the dimension of the null space is equal to the number of variables minus
the number of independent equations, and there are $dr_{n+1}^d$ variables, we have
$$\dim\ker\Delta_{L_n^T}^T = dr_{n+1}^d - \sum_{k=2}^d(-1)^k\binom{d}{k}r_{n-k+2}^d = r_{n+2}^d,$$
where the last equality follows from a Chu-Vandermonde sum (see Proposition 1.2.3). The proof is complete.
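The rank counts here are easy to confirm by building the matrices $L_{n,i}$ explicitly. The sketch below is an illustration (the multi-index ordering and helper names are mine); it checks Lemma 3.5.4 and both rank formulas for $d = 3$ and small $n$.

```python
import numpy as np
from itertools import combinations, product
from math import comb

d = 3

def monomials(n):   # multi-indices of degree n, in a fixed order
    return sorted(al for al in product(range(n + 1), repeat=d) if sum(al) == n)

def L(n, i):   # x_i * x^n = L_{n,i} x^{n+1}: a single 1 in each row
    rows, cols = monomials(n), monomials(n + 1)
    idx = {al: j for j, al in enumerate(cols)}
    M = np.zeros((len(rows), len(cols)))
    for r, al in enumerate(rows):
        M[r, idx[tuple(al[m] + (m == i) for m in range(d))]] = 1.0
    return M

def joint(C):   # Delta_C built from the blocks C_1, ..., C_d (Definition 3.5.2)
    s, t = C[0].shape
    cols = []
    for i, j in combinations(range(d), 2):
        blk = [np.zeros((s, t)) for _ in range(d)]
        blk[i], blk[j] = C[j], -C[i]
        cols.append(np.vstack(blk))
    return np.hstack(cols)

def r(m):   # r_m^d = dimension of homogeneous polynomials of degree m
    return comb(m + d - 1, d - 1) if m >= 0 else 0

for n in range(1, 4):
    # Lemma 3.5.4: L_{n,i} L_{n+1,j} = L_{n,j} L_{n+1,i}
    assert all(np.allclose(L(n, i) @ L(n + 1, j), L(n, j) @ L(n + 1, i))
               for i, j in combinations(range(d), 2))
    # Lemma 3.5.5: rank Delta_{L_n^T} = d r_{n+1} - r_{n+2}
    assert np.linalg.matrix_rank(joint([L(n, i).T for i in range(d)])) \
        == d * r(n + 1) - r(n + 2)
    # Lemma 3.5.6 (below): rank Delta_{L_n} = d r_n - r_{n-1}
    assert np.linalg.matrix_rank(joint([L(n, i) for i in range(d)])) \
        == d * r(n) - r(n - 1)
```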
Lemma 3.5.6   For $d \ge 2$ and $n \ge 1$, $\operatorname{rank}\Delta_{L_n} = dr_n^d - r_{n-1}^d$.
Proof   We shall prove that the dimension of the null space of $\Delta_{L_n}^T$ is $r_{n-1}^d$. Suppose
that $Y = (Y_1^T\cdots Y_d^T)^T \in \mathbb{R}^{dr_n}$, where $Y_i \in \mathbb{R}^{r_n}$. Consider the homogeneous
equation $\Delta_{L_n}^TY = 0$. By Lemma 3.5.3 this equation is equivalent to the system of
linear equations
$$L_{n,i}^TY_j = L_{n,j}^TY_i, \qquad 1 \le i < j \le d. \qquad (3.5.3)$$
We take $\mathbb{N}_n$ and $\mathbb{N}_{n,i}$ to be as in the proof of the previous lemma. Note that $L_{n,i}^TY_j$
is a vector in $\mathbb{R}^{r_{n+1}}$ whose coordinates corresponding to $\mathbb{N}_{n+1,i}$ are those of $Y_j$ and
whose other coordinates are zeros. Thus, equation (3.5.3) implies that the elements of $Y_j$ can be nonzero only when they correspond, in $L_{n,i}^TY_j$, to coordinates in
$\mathbb{N}_{n+1,i}\cap\mathbb{N}_{n+1,j}$, that is, to $(L_{n,i}^TY_j)|_{\mathbb{N}_{n+1,i}\cap\mathbb{N}_{n+1,j}}$; the nonzero elements of $Y_i$ are $(L_{n,j}^TY_i)|_{\mathbb{N}_{n+1,i}\cap\mathbb{N}_{n+1,j}}$. Moreover, these two vectors are equal. Since for any $X \in \mathbb{R}^{r_{n+1}}$ we have $L_{n-1,j}L_{n,i}X =
X|_{\mathbb{N}_{n+1,i}\cap\mathbb{N}_{n+1,j}}$, which follows from $L_{n-1,i}L_{n,j}\mathbf{x}^{n+1} = x_ix_j\mathbf{x}^{n-1}$, and from the fact that
$L_{n,i}L_{n,i}^T = I$, we have
$$(L_{n,i}^TY_j)|_{\mathbb{N}_{n+1,i}\cap\mathbb{N}_{n+1,j}} = L_{n-1,j}L_{n,i}(L_{n,i}^TY_j) = L_{n-1,j}Y_j.$$
Therefore, the nonzero elements of $Y_i$ and $Y_j$ satisfy $L_{n-1,i}Y_i = L_{n-1,j}Y_j$, $1 \le
i < j \le d$. Thus there are exactly $r_{n-1}^d$ independent variables in the
solution of (3.5.3); that is, the dimension of the null space of $\Delta_{L_n}^T$ is $r_{n-1}^d$.
From the last two lemmas we can derive the ranks of $\Delta_{A_n}$ and $\Delta_{A_n^T}$.
Proposition 3.5.7   Let $A_{n,i}$ be the matrices in the three-term relation. Then
$$\operatorname{rank}\Delta_{A_n} = dr_n^d - r_{n-1}^d
\quad\text{and}\quad
\operatorname{rank}\Delta_{A_n^T} = dr_{n+1}^d - r_{n+2}^d.$$
Proof   By (3.3.4) and the definition of $\Delta_C$ it readily follows that
$$\Delta_{A_n}\operatorname{diag}\{G_{n+1}, \ldots, G_{n+1}\} = \operatorname{diag}\{G_n, \ldots, G_n\}\,\Delta_{L_n},$$
where the block diagonal matrix on the left-hand side is of size $\binom{d}{2}r_{n+1}^d\times\binom{d}{2}r_{n+1}^d$
and that on the right-hand side is of size $dr_n^d\times dr_n^d$. Since $G_n$ and $G_{n+1}$ are both
invertible, so are these two block matrices. Therefore $\operatorname{rank}\Delta_{A_n} = \operatorname{rank}\Delta_{L_n}$. Thus,
the first equality follows from the corresponding equality in Lemma 3.5.6. The
second is proved similarly.
Our next lemma deals with the null space of $I - A_nD_n^T$.
Lemma 3.5.8   Let $A_{n,i}$ be the coefficient matrices in the three-term relation (3.3.1), and let $D_n^T$ be the left inverse of $A_n$ defined in (3.3.9). Then
$Y = (Y_1^T\cdots Y_d^T)^T$, $Y_i \in \mathbb{R}^{r_n}$, is in the null space of $I - A_nD_n^T$ if and only if
$A_{n-1,i}Y_j = A_{n-1,j}Y_i$, $1 \le i, j \le d$, or, equivalently, $\Delta_{A_{n-1}^T}^TY = 0$.
Proof   Let the singular value decompositions of $A_n$ and $D_n^T$ be of the form given
in Subsection 3.3.1. Both $A_n$ and $D_n$ are of size $dr_n^d\times r_{n+1}^d$. It follows that
$$I - A_nD_n^T = W^T\begin{pmatrix} 0 & \Sigma_1\\ 0 & I \end{pmatrix}W,$$
where $W$ is a unitary matrix, $\Sigma_1$ is built from the invertible diagonal factors of these
decompositions, and $I$ is the identity matrix of size $dr_n^d - r_{n+1}^d$, from which it follows that $\operatorname{rank}(I - A_nD_n^T) =
dr_n^d - r_{n+1}^d$. Hence
$$\dim\ker(I - A_nD_n^T) = dr_n^d - \operatorname{rank}(I - A_nD_n^T) = r_{n+1}^d.$$
Since $D_n^TA_n = I$, it follows that $(I - A_nD_n^T)A_n = A_n - A_n = 0$, which implies that
the columns of $A_n$ form a basis for the null space of $I - A_nD_n^T$. Therefore, if
$(I - A_nD_n^T)Y = 0$ then there exists a vector $c$ such that $Y = A_nc$. But the first
commutativity condition in (3.4.2) is equivalent to $\Delta_{A_{n-1}^T}^TA_n = 0$; it follows that
$\Delta_{A_{n-1}^T}^TY = 0$. This proves one direction. For the other direction, suppose that
$\Delta_{A_{n-1}^T}^TY = 0$. By Proposition 3.5.7,
$$\dim\ker\Delta_{A_{n-1}^T}^T = dr_n^d - \operatorname{rank}\Delta_{A_{n-1}^T}^T = r_{n+1}^d.$$
From $\Delta_{A_{n-1}^T}^TA_n = 0$ it also follows that the columns of $A_n$ form a basis for $\ker\Delta_{A_{n-1}^T}^T$.
Therefore, there exists a vector $\tilde c$ such that $Y = A_n\tilde c$, which, by $D_n^TA_n = I$,
implies that $(I - A_nD_n^T)Y = 0$.
Finally, we are in a position to prove Theorem 3.5.1.
Proof of Theorem 3.5.1   Evidently, we need only prove the "if" part of the theorem. Suppose that $\{\mathbb{P}_n\}_{n=0}^{\infty}$ is defined recursively by (3.3.10) and that the matrices
$A_{n,i}$ and $B_{n,i}$ satisfy the rank and commutativity conditions. We will use induction.
For $n = 0$, $\mathbb{P}_0 = 1$ and $\mathbb{P}_{-1} = 0$; thus
$$\mathbb{P}_1 = \sum_{i=1}^d D_{0,i}^Tx_i - \sum_{i=1}^d D_{0,i}^TB_{0,i}.$$
Since $A_0$ and $D_0$ are both $d\times d$ matrices and $D_0^TA_0 = I$, it follows that $A_0D_0^T = I$,
which implies that $A_{0,i}D_{0,j}^T = \delta_{i,j}$. Therefore
$$A_{0,j}\mathbb{P}_1 = x_j - B_{0,j} = x_j\mathbb{P}_0 - B_{0,j}\mathbb{P}_0.$$
Assuming we have proved that
$$A_{k,j}\mathbb{P}_{k+1} = x_j\mathbb{P}_k - B_{k,j}\mathbb{P}_k - A_{k-1,j}^T\mathbb{P}_{k-1}, \qquad 0 \le k \le n-1, \qquad (3.5.4)$$
we now show that this equation also holds for $k = n$. First we prove that
$$A_{n,j}\mathbb{P}_{n+1} = x_j\mathbb{P}_n - B_{n,j}\mathbb{P}_n - A_{n-1,j}^T\mathbb{P}_{n-1} + Q_{n-2,j}, \qquad (3.5.5)$$
where the components of $Q_{n-2,j}$ are elements of the polynomial space $\Pi_{n-2}^d$.
Since each component of $\mathbb{P}_n$ is a polynomial of degree at most $n$, we can write
$$\mathbb{P}_n = G_{n,n}\mathbf{x}^n + G_{n,n-1}\mathbf{x}^{n-1} + G_{n,n-2}\mathbf{x}^{n-2} + \cdots$$
for some matrices $G_{n,k}$, and we also write $G_n = G_{n,n}$. Upon expanding both sides
of (3.5.5) in powers of $\mathbf{x}^k$, it suffices to show that the highest three coefficients in the expansions are equal. Since $\mathbb{P}_{n+1}$ is defined by the recurrence
relation (3.3.10), comparing the coefficients shows that we need to establish that
$A_{n,j}\sum_{i=1}^d D_{n,i}^TX_{n,i} = X_{n,j}$, $1 \le j \le d$, where $X_{n,j}$ is one of $Q_{n,j}$, $R_{n,j}$ and $S_{n,j}$, which
are as follows:
$$Q_{n,j} = G_nL_{n,j},$$
$$R_{n,j} = G_{n,n-1}L_{n-1,j} - B_{n,j}G_n, \qquad (3.5.6)$$
$$S_{n,j} = G_{n,n-2}L_{n-2,j} - B_{n,j}G_{n,n-1} - A_{n-1,j}^TG_{n-1};$$
see (3.3.5) for the $L_{n,j}$. Written in a more compact form, the equations that we
need to show are
$$A_nD_n^TQ_n = Q_n, \qquad A_nD_n^TR_n = R_n, \qquad A_nD_n^TS_n = S_n.$$
We proceed by proving that the columns of $Q_n$, $R_n$ and $S_n$ belong to the null space
of $I - A_nD_n^T$. But, by Lemmas 3.5.8 and 3.5.3, this reduces to showing that
$$A_{n-1,i}Q_{n,j} = A_{n-1,j}Q_{n,i}, \qquad A_{n-1,i}R_{n,j} = A_{n-1,j}R_{n,i}, \qquad
A_{n-1,i}S_{n,j} = A_{n-1,j}S_{n,i}.$$
These equations can be seen to follow from Lemma 3.5.4, equations (3.4.2) and
the following:
$$G_{n-1}L_{n-1,i} = A_{n-1,i}G_n,$$
$$G_{n-1,n-2}L_{n-2,i} = A_{n-1,i}G_{n,n-1} + B_{n-1,i}G_{n-1},$$
$$G_{n-1,n-3}L_{n-3,i} = A_{n-1,i}G_{n,n-2} + B_{n-1,i}G_{n-1,n-2} + A_{n-2,i}^TG_{n-2},$$
which are obtained by comparing the coefficients of (3.5.4) for $k = n-1$. For
example,
$$A_{n-1,i}R_{n,j} = A_{n-1,i}(G_{n,n-1}L_{n-1,j} - B_{n,j}G_n)$$
$$= G_{n-1,n-2}L_{n-2,i}L_{n-1,j} - B_{n-1,i}G_{n-1}L_{n-1,j} - A_{n-1,i}B_{n,j}G_n$$
$$= G_{n-1,n-2}L_{n-2,i}L_{n-1,j} - (B_{n-1,i}A_{n-1,j} + A_{n-1,i}B_{n,j})G_n = A_{n-1,j}R_{n,i}.$$
The other two equations follow similarly. Therefore, we have proved (3.5.5). In
particular, from (3.5.4) and (3.5.5) we obtain
$$A_{k,i}G_{k+1} = G_kL_{k,i}, \qquad 0 \le k \le n, \quad 1 \le i \le d, \qquad (3.5.7)$$
which, since $A_{n,i}$ satisfies the rank condition, implies that $G_{n+1}$ is invertible, as in
the proof of Theorem 3.3.7.
To complete the proof, we now prove that $Q_{n-2,i} = 0$. On the one hand, multiplying equation (3.5.5) by $D_{n,i}^T$ and summing over $1 \le i \le d$, it follows from the
recurrence formula (3.3.10) and $D_n^TA_n = I$ that
$$\sum_{j=1}^d D_{n,j}^TQ_{n-2,j} = \sum_{j=1}^d D_{n,j}^TA_{n,j}\mathbb{P}_{n+1} - \mathbb{P}_{n+1} = 0. \qquad (3.5.8)$$
On the other hand, it follows from (3.5.4) and (3.5.5) that
$$A_{n-1,i}Q_{n-2,j} = -x_ix_j\mathbb{P}_{n-1} + A_{n-1,i}A_{n,j}\mathbb{P}_{n+1} + (A_{n-1,i}B_{n,j} + B_{n-1,i}A_{n-1,j})\mathbb{P}_n$$
$$\quad + (A_{n-1,i}A_{n-1,j}^T + B_{n-1,i}B_{n-1,j} + A_{n-2,i}^TA_{n-2,j})\mathbb{P}_{n-1}
+ (A_{n-2,i}^TB_{n-2,j} + B_{n-1,i}A_{n-2,j}^T)\mathbb{P}_{n-2} + A_{n-2,i}^TA_{n-3,j}^T\mathbb{P}_{n-3},$$
which implies, by the commutativity conditions (3.4.2), that
$$A_{n-1,i}Q_{n-2,j} = A_{n-1,j}Q_{n-2,i}.$$
We claim that this equation implies that there is a vector $Q_n$ such that $Q_{n-2,i} =
A_{n,i}Q_n$; once established, this completes the proof upon using (3.5.8) and
the fact that $D_n^TA_n = I$. Indeed, the above equation is, by (3.5.7), equivalent to
$L_{n-1,i}X_j = L_{n-1,j}X_i$, where $X_i = G_n^{-1}Q_{n-2,i}$. Define $Q_n$ by $Q_n = G_{n+1}X$, where $X$ is defined as a vector
that satisfies $L_{n,i}X = X_i$; in other words, recalling the
notation used in the proof of Lemma 3.5.5, $X$ is defined as a vector that satisfies
$X|_{\mathbb{N}_{n+1,i}} = X_i$. Since $\mathbb{N}_{n+1} = \bigcup_i\mathbb{N}_{n+1,i}$ and
$$X|_{\mathbb{N}_{n+1,i}\cap\mathbb{N}_{n+1,j}} = L_{n-1,i}L_{n,j}X = L_{n-1,i}X_j = L_{n-1,j}X_i = X|_{\mathbb{N}_{n+1,j}\cap\mathbb{N}_{n+1,i}},$$
we see that $X$, and hence $Q_n$, is well defined.
3.5.2 General solutions of the three-term relation

Next we turn our attention to another difference between orthogonal polynomials of one and of several variables. From the three-term relation and Favard's theorem, orthogonal polynomials in one variable can be considered as solutions of the difference equation
$$a_k y_{k+1} + b_k y_k + a_{k-1} y_{k-1} = x y_k, \qquad k \ge 1,$$
where $a_k > 0$ and the initial conditions are $y_0 = 1$ and $y_1 = a_0^{-1}(x - b_0)$. It is well known that this difference equation has another solution that satisfies the initial conditions $y_0 = 0$ and $y_1 = a_0^{-1}$. The components of this solution, denoted by $q_k$, are customarily called associated polynomials, or polynomials of the second kind. Together, these two sets of solutions share many interesting properties, and $\{q_n\}$ plays an important role in areas such as the problem of moments, the spectral theory of the Jacobi matrix and continued fractions.
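The two solutions can be generated side by side directly from the recurrence. The following sketch (an illustration, not from the text) does this for the orthonormal Chebyshev polynomials of the second kind, where $a_k = \tfrac12$ and $b_k = 0$, so that $p_n = U_n$ and the associated polynomials are $q_n = 2U_{n-1}$; the closed form $U_m(\cos\theta) = \sin((m+1)\theta)/\sin\theta$ is used as a check.

```python
import math

def recurrence_solutions(x, n, a, b):
    # Iterate a_k y_{k+1} + b_k y_k + a_{k-1} y_{k-1} = x y_k for the two
    # standard sets of initial conditions described above.
    p = [1.0, (x - b(0)) / a(0)]   # y_0 = 1, y_1 = a_0^{-1}(x - b_0)
    q = [0.0, 1.0 / a(0)]          # y_0 = 0, y_1 = a_0^{-1}
    for k in range(1, n):
        for y in (p, q):
            y.append(((x - b(k)) * y[k] - a(k - 1) * y[k - 1]) / a(k))
    return p, q

# Chebyshev weight of the second kind (normalized): a_k = 1/2, b_k = 0.
x, n = 0.3, 8
p, q = recurrence_solutions(x, n, a=lambda k: 0.5, b=lambda k: 0.0)
theta = math.acos(x)
U = lambda m: math.sin((m + 1) * theta) / math.sin(theta)
assert abs(p[n] - U(n)) < 1e-12         # p_n = U_n
assert abs(q[n] - 2 * U(n - 1)) < 1e-12  # q_n = 2 U_{n-1}
```

The same loop applies to any one-variable weight once its recurrence coefficients $a_k$, $b_k$ are known.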
By the three-term relation (3.3.3), we can consider orthogonal polynomials in several variables to be solutions of the multi-parameter finite difference equations
$$x_i Y_k = A_{k,i} Y_{k+1} + B_{k,i} Y_k + A_{k-1,i}^{\mathsf T} Y_{k-1}, \qquad 1 \le i \le d, \quad k \ge 1, \qquad (3.5.9)$$
where the $B_{n,i}$ are symmetric, the $A_{n,i}$ satisfy the rank conditions (3.3.8) and (3.3.7), and the initial values are given by
$$Y_0 = a, \quad Y_1 = b, \qquad a \in \mathbb{R}, \ b \in \mathbb{R}^d. \qquad (3.5.10)$$
One might expect that there would be other linearly independent solutions of (3.5.9) which could be defined as associated polynomials in several variables. However, it turns out rather surprisingly that (3.5.9) has no solutions other than the system of orthogonal polynomials. This result is formulated as follows.
Theorem 3.5.9 If the multi-parameter difference equation (3.5.9) has a solution $P = \{P_k\}_{k=0}^{\infty}$ for the particular initial values
$$Y_0^{*} = 1, \qquad Y_1^{*} = A_0^{-1}(x - B_0) \qquad (3.5.11)$$
then all other solutions of (3.5.9) and (3.5.10) are multiples of $P$, with the possible exception of the first component. More precisely, if $Y = \{Y_k\}_{k=0}^{\infty}$ is a solution of (3.5.9) and (3.5.10) then $Y_k = hP_k$ for all $k \ge 1$, where $h$ is a function independent of $k$.

Proof The assumption and the extension of Favard's theorem in Theorem 3.3.8 imply that $P = \{P_k\}_{k=0}^{\infty}$ forms a sequence of orthonormal polynomials. Therefore the coefficient matrices of (3.5.9) satisfy the commutativity conditions (3.4.2). Suppose that a sequence of vectors $\{Y_k\}$ satisfies (3.5.9) and the initial values (3.5.10). From equation (3.5.9),
$$A_{k,i} Y_{k+1} = x_i Y_k - B_{k,i} Y_k - A_{k-1,i}^{\mathsf T} Y_{k-1}, \qquad 1 \le i \le d.$$
Multiplying the $i$th equation by $A_{k-1,j}$ and the $j$th equation by $A_{k-1,i}$, it follows from the first equation of (3.4.2) that
$$A_{k-1,i}\big(x_j Y_k - B_{k,j} Y_k - A_{k-1,j}^{\mathsf T} Y_{k-1}\big) = A_{k-1,j}\big(x_i Y_k - B_{k,i} Y_k - A_{k-1,i}^{\mathsf T} Y_{k-1}\big), \qquad (3.5.12)$$
for $1 \le i, j \le d$ and $k \ge 1$. In particular, the case $k = 1$ gives a relation between $Y_0$ and $Y_1$, which, upon using the second equation of (3.4.2) for $k = 1$ and the fact that $A_{0,i}A_{0,j}^{\mathsf T}$ and $B_{0,i}$ are numbers, we can rewrite as
$$(x_i - B_{0,i}) A_{0,j} Y_1 = (x_j - B_{0,j}) A_{0,i} Y_1.$$
However, $x_i$ and $x_j$ are independent variables; $Y_1$ must be a function of $x$ and, moreover, has to satisfy $A_{0,i}Y_1 = (x_i - b_i)h(x)$, where $h$ is a function of $x$. Substituting the latter equation into the above displayed equation, we obtain $b_i = B_{0,i}$. The case $h(x) = 1$ corresponds to the orthogonal polynomial solution $P$. Since $D_0^{\mathsf T} = A_0^{-1}$, from (3.5.11) we have
$$Y_1 = \sum_{i=1}^{d} D_{0,i}^{\mathsf T}(x_i - B_{0,i})h(x) = A_0^{-1}(x - B_0)h(x) = h(x)P_1.$$
Therefore, upon using the left inverse $D_k^{\mathsf T}$ of $A_k$ from (3.3.9), we see that every solution of (3.5.9) satisfies
$$Y_{k+1} = \sum_{i=1}^{d} D_{k,i}^{\mathsf T} x_i Y_k - E_k Y_k - F_k Y_{k-1}, \qquad (3.5.13)$$
where $E_k$ and $F_k$ are defined as in (3.3.10); consequently,
$$Y_2 = \sum_{i=1}^{d} D_{1,i}^{\mathsf T}(x_i I - B_{1,i})h(x)P_1 - F_1 Y_0 = h(x)P_2 + F_1\big(h(x) - Y_0\big).$$
Since $P$ is a solution of equation (3.5.9) with initial conditions (3.5.11), it follows from (3.5.12) that
$$A_{1,i}\big(x_j P_2 - B_{2,j} P_2 - A_{1,j}^{\mathsf T} P_1\big) = A_{1,j}\big(x_i P_2 - B_{2,i} P_2 - A_{1,i}^{\mathsf T} P_1\big).$$
Multiplying this equation by $h$ and subtracting it from (3.5.12) with $k = 2$, we conclude from the formulae for $Y_1$ and $Y_2$ that
$$A_{1,i}(x_j I - B_{2,j}) F_1 \big(h(x) - Y_0\big) = A_{1,j}(x_i I - B_{2,i}) F_1 \big(h(x) - Y_0\big).$$
If $Y_0 = h(x)$ then it follows from (3.5.13) and $Y_1 = h(x)P_1$ that $Y_k = h(x)P_k$ for all $k \ge 0$, which is the conclusion of the theorem. Now assume that $h(x) \ne Y_0$. Then $h(x) - Y_0$ is a nonzero number, so, from the previous formula,
$$A_{1,i}(x_j I - B_{2,j}) F_1 = A_{1,j}(x_i I - B_{2,i}) F_1.$$
However, since $x_i$ and $x_j$ are independent variables, we conclude that for this equality to hold it is necessary that $A_{1,i}F_1 = 0$, which implies that $F_1 = 0$ because $D_n^{\mathsf T} A_n = I$. We then obtain from the formula for $Y_2$ that $Y_2 = h(x)P_2$. Thus, by (3.5.13), $Y_k = h(x)P_k$ for all $k \ge 1$, which concludes the proof.
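As a concrete check of the three-term relation underlying (3.5.9), the sketch below (an illustration, not from the text) uses the product Chebyshev weight of the second kind on the square, for which an orthonormal basis of $V_n^2$ is $P_n = (U_{n-k}(x_1)U_k(x_2))_{k=0}^{n}$, with $B_{n,i} = 0$ and the explicit matrices $A_{n,1} = \tfrac12(I \mid 0)$, $A_{n,2} = \tfrac12(0 \mid I)$:

```python
import numpy as np

def U(m, t):
    # Chebyshev polynomials of the second kind via their recurrence.
    u0, u1 = 1.0, 2.0 * t
    if m == 0:
        return u0
    for _ in range(m - 1):
        u0, u1 = u1, 2.0 * t * u1 - u0
    return u1

def P(k, x):
    # Orthonormal basis of V_k^2 for W(x) = (2/pi)^2 sqrt(1-x_1^2) sqrt(1-x_2^2).
    return np.array([U(k - j, x[0]) * U(j, x[1]) for j in range(k + 1)])

def A(k, i):
    # Coefficient matrices of the three-term relation; B_{k,i} = 0 for this weight.
    Z = np.zeros((k + 1, 1))
    I = 0.5 * np.eye(k + 1)
    return np.hstack([I, Z]) if i == 1 else np.hstack([Z, I])

x = np.array([0.3, -0.7])
for n in range(1, 6):
    for i in (1, 2):
        lhs = x[i - 1] * P(n, x)
        rhs = A(n, i) @ P(n + 1, x) + A(n - 1, i).T @ P(n - 1, x)
        assert np.allclose(lhs, rhs)  # x_i P_n = A_{n,i} P_{n+1} + A_{n-1,i}^T P_{n-1}
```

Any other explicit orthonormal system can be checked the same way once its coefficient matrices are written down.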
3.6 Reproducing Kernels and Fourier Orthogonal Series

Let $\mathcal{L}(f) = \int f \,\mathrm{d}\mu$ be a positive definite linear functional, where $\mathrm{d}\mu$ is a nonnegative Borel measure with finite moments. Let $\{P_\alpha^n\}$ be a sequence of orthonormal polynomials with respect to $\mathcal{L}$. For any function $f$ in $L^2(\mathrm{d}\mu)$, we can consider its Fourier orthogonal series with respect to $\{P_\alpha^n\}$:
$$f \sim \sum_{n=0}^{\infty} \sum_{|\alpha|=n} a_\alpha^n(f)\, P_\alpha^n \quad\text{with}\quad a_\alpha^n(f) = \int f(x) P_\alpha^n(x)\,\mathrm{d}\mu.$$
Although the bases of orthonormal polynomials are not unique, the Fourier series is in fact independent of the particular basis. Indeed, using the vector notation $P_n$, the above definition of the Fourier series may be written as
$$f \sim \sum_{n=0}^{\infty} a_n^{\mathsf T}(f) P_n \quad\text{with}\quad a_n(f) = \int f(x) P_n(x)\,\mathrm{d}\mu. \qquad (3.6.1)$$
Furthermore, recalling that $V_n^d$ denotes the space of orthogonal polynomials of degree exactly $n$, as in (3.1.2), it follows from the orthogonality that the expansion can be considered as
$$f \sim \sum_{n=0}^{\infty} \operatorname{proj}_{V_n^d} f, \quad\text{where}\quad \operatorname{proj}_{V_n^d} f = a_n^{\mathsf T}(f) P_n.$$
We shall denote the projection operator $\operatorname{proj}_{V_n^d} f$ by $\mathsf{P}_n(f)$. In terms of an orthonormal basis it can be written as
$$\mathsf{P}_n(f; x) = \int f(y)\,\mathsf{P}_n(x, y)\,\mathrm{d}\mu(y), \qquad \mathsf{P}_n(x, y) = P_n^{\mathsf T}(x) P_n(y). \qquad (3.6.2)$$
The projection operator is independent of the particular basis. In fact, since two different orthonormal bases differ by a factor that is an orthogonal matrix (Theorem 3.2.14), it is evident from (3.6.2) that $\mathsf{P}_n(f)$ depends on $V_n^d$ rather than on a particular basis of $V_n^d$. We define the $n$th partial sum of the Fourier orthogonal expansion (3.6.1) by
$$S_n(f) = \sum_{k=0}^{n} a_k^{\mathsf T}(f) P_k = \int K_n(\cdot, y) f(y)\,\mathrm{d}\mu, \qquad (3.6.3)$$
where the kernel $K_n(\cdot,\cdot)$ is defined by
$$K_n(x, y) = \sum_{k=0}^{n} \sum_{|\alpha|=k} P_\alpha^k(x) P_\alpha^k(y) = \sum_{k=0}^{n} \mathsf{P}_k(x, y) \qquad (3.6.4)$$
and is often called the $n$th reproducing kernel, for reasons to be given below.
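For a weight function whose orthonormal basis is explicit, the kernel (3.6.4) and the reproducing behaviour of the partial sum (3.6.3) can be checked numerically. The sketch below (an illustration, not from the text) uses the product Chebyshev weight of the second kind on the square, with the functional $\mathcal{L}$ evaluated by a tensor-product Gauss–Chebyshev rule:

```python
import numpy as np

def U(m, t):
    # Chebyshev polynomials of the second kind via their recurrence.
    u0, u1 = 1.0, 2.0 * t
    if m == 0:
        return u0
    for _ in range(m - 1):
        u0, u1 = u1, 2.0 * t * u1 - u0
    return u1

def P(k, x):
    # Orthonormal basis of V_k^2 for W(x) = (2/pi)^2 sqrt(1-x_1^2) sqrt(1-x_2^2).
    return np.array([U(k - j, x[0]) * U(j, x[1]) for j in range(k + 1)])

def K(n, x, y):
    # The reproducing kernel (3.6.4).
    return sum(P(k, x) @ P(k, y) for k in range(n + 1))

def gauss_chebyshevU(m):
    # Nodes/weights: (2/pi) * integral of f(t) sqrt(1-t^2) dt, exact for deg <= 2m-1.
    j = np.arange(1, m + 1)
    return np.cos(j * np.pi / (m + 1)), (2.0 / (m + 1)) * np.sin(j * np.pi / (m + 1)) ** 2

n, m = 3, 8
t, w = gauss_chebyshevU(m)
x0 = np.array([0.3, -0.5])
f = lambda z: z[0] ** 2 * z[1]   # a polynomial of degree 3 <= n
# the partial sum S_n(f)(x0) = integral of K_n(x0, y) f(y) dmu(y) reproduces f(x0)
val = sum(w[a] * w[b] * K(n, x0, np.array([t[a], t[b]])) * f((t[a], t[b]))
          for a in range(m) for b in range(m))
assert abs(val - f(x0)) < 1e-10
```

Since $\deg f \le n$, the partial sum returns $f(x_0)$ exactly (up to quadrature round-off), which is the reproducing property in action.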
3.6.1 Reproducing kernels

For the definition of $K_n$ it is not necessary to use an orthonormal basis; any orthogonal basis will do. Moreover, the definition makes sense for any moment functional.

Let $\mathcal{L}$ be a moment functional, and let $\{P_n\}$ be a sequence of orthogonal polynomials with respect to $\mathcal{L}$. In terms of $P_n$, the kernel function $K_n$ takes the form
$$K_n(x, y) = \sum_{k=0}^{n} P_k^{\mathsf T}(x) H_k^{-1} P_k(y), \qquad H_k = \mathcal{L}\big(P_k P_k^{\mathsf T}\big).$$

Theorem 3.6.1 Let $V_k^d$, $k \ge 0$, be defined by means of a moment functional $\mathcal{L}$. Then $K_n(\cdot,\cdot)$ depends only on $V_k^d$ rather than on a particular basis of $V_k^d$.

Proof Let $P_k$ be a basis of $V_k^d$. If $Q_k$ is another basis then there exists an invertible matrix $M_k$, independent of $x$, such that $P_k(x) = M_k Q_k(x)$. Let $H_k(P) = H_k = \mathcal{L}(P_k P_k^{\mathsf T})$ and $H_k(Q) = \mathcal{L}(Q_k Q_k^{\mathsf T})$. Then $H_k^{-1}(P) = (M_k^{\mathsf T})^{-1} H_k^{-1}(Q) M_k^{-1}$. Therefore $P_k^{\mathsf T} H_k^{-1}(P) P_k = Q_k^{\mathsf T} H_k^{-1}(Q) Q_k$, which proves the stated result.

Since the definition of $K_n$ does not depend on the particular basis in which it is expressed, it is often more convenient to work with an orthonormal basis when $\mathcal{L}$ is positive definite. The following theorem justifies the name reproducing kernel.
Theorem 3.6.2 Let $\mathcal{L}$ be a positive definite linear functional on the polynomial space $\Pi^d$. Then, for all $P \in \Pi_n^d$,
$$P(x) = \mathcal{L}\big[K_n(x, \cdot) P(\cdot)\big].$$

Proof Let $\{P_n\}$ be a sequence of orthogonal polynomials with respect to $\mathcal{L}$ and let $\mathcal{L}(P_n P_n^{\mathsf T}) = H_n$. For $P \in \Pi_n^d$ we can expand it in terms of the basis $P_0, P_1, \ldots, P_n$ by using the orthogonality property:
$$P(x) = \sum_{k=0}^{n} [a_k(P)]^{\mathsf T} H_k^{-1} P_k(x) \quad\text{with}\quad a_k(P) = \mathcal{L}(P P_k).$$
But this equation is equivalent to
$$P(x) = \sum_{k=0}^{n} \mathcal{L}(P P_k)^{\mathsf T} H_k^{-1} P_k(x) = \mathcal{L}\big[K_n(x, \cdot) P(\cdot)\big],$$
which is the desired result.
For orthogonal polynomials in one variable, the reproducing kernel enjoys a compact expression called the Christoffel–Darboux formula. The following theorem is an extension of this formula to several variables.

Theorem 3.6.3 Let $\mathcal{L}$ be a positive definite linear functional, and let $\{P_k\}_{k=0}^{\infty}$ be a sequence of orthogonal polynomials with respect to $\mathcal{L}$. Then, for any integer $n \ge 0$ and $x, y \in \mathbb{R}^d$,
$$\sum_{k=0}^{n} P_k^{\mathsf T}(x) H_k^{-1} P_k(y) = \frac{\big[A_{n,i} P_{n+1}(x)\big]^{\mathsf T} H_n^{-1} P_n(y) - P_n^{\mathsf T}(x) H_n^{-1} \big[A_{n,i} P_{n+1}(y)\big]}{x_i - y_i}, \qquad (3.6.5)$$
for $1 \le i \le d$, where $x = (x_1, \ldots, x_d)$ and $y = (y_1, \ldots, y_d)$.

Proof By Theorem 3.3.1, the orthogonal polynomials $P_n$ satisfy the three-term relation (3.3.1). Let us write
$$\Delta_k = \big[A_{k,i} P_{k+1}(x)\big]^{\mathsf T} H_k^{-1} P_k(y) - P_k(x)^{\mathsf T} H_k^{-1} \big[A_{k,i} P_{k+1}(y)\big].$$
From the three-term relation (3.3.1),
$$\begin{aligned}
\Delta_k = {} & \big[x_i P_k(x) - B_{k,i} P_k(x) - C_{k,i} P_{k-1}(x)\big]^{\mathsf T} H_k^{-1} P_k(y) \\
& - P_k^{\mathsf T}(x) H_k^{-1} \big[y_i P_k(y) - B_{k,i} P_k(y) - C_{k,i} P_{k-1}(y)\big] \\
= {} & (x_i - y_i)\, P_k^{\mathsf T}(x) H_k^{-1} P_k(y) - P_k^{\mathsf T}(x) \big[B_{k,i}^{\mathsf T} H_k^{-1} - H_k^{-1} B_{k,i}\big] P_k(y) \\
& - \big[P_{k-1}^{\mathsf T}(x) C_{k,i}^{\mathsf T} H_k^{-1} P_k(y) - P_k^{\mathsf T}(x) H_k^{-1} C_{k,i} P_{k-1}(y)\big].
\end{aligned}$$
By the definition of $H_k$ and (3.3.2), both $H_k$ and $B_{k,i} H_k$ are symmetric matrices; hence $B_{k,i} H_k = (B_{k,i} H_k)^{\mathsf T} = H_k B_{k,i}^{\mathsf T}$, which implies that $B_{k,i}^{\mathsf T} H_k^{-1} = H_k^{-1} B_{k,i}$. Therefore the second term on the right-hand side of the above expression for $\Delta_k$ is zero. From the third equation of (3.3.2), $C_{k,i}^{\mathsf T} H_k^{-1} = H_{k-1}^{-1} A_{k-1,i}$. Hence the third term on the right-hand side of the expression for $\Delta_k$ is
$$-P_{k-1}^{\mathsf T}(x) H_{k-1}^{-1} A_{k-1,i} P_k(y) + P_k^{\mathsf T}(x) A_{k-1,i}^{\mathsf T} H_{k-1}^{-1} P_{k-1}(y) = \Delta_{k-1}.$$
Consequently, the expression for $\Delta_k$ can be rewritten as
$$(x_i - y_i)\, P_k^{\mathsf T}(x) H_k^{-1} P_k(y) = \Delta_k - \Delta_{k-1}.$$
Summing this identity from $0$ to $n$ and noting that $\Delta_{-1} = 0$, we obtain the stated equation.
Corollary 3.6.4 Let $\{P_k\}_{k=0}^{\infty}$ be as in Theorem 3.6.3. Then
$$\sum_{k=0}^{n} P_k^{\mathsf T}(x) H_k^{-1} P_k(x) = P_n^{\mathsf T}(x) H_n^{-1} \big[A_{n,i}\, \partial_i P_{n+1}(x)\big] - \big[A_{n,i} P_{n+1}(x)\big]^{\mathsf T} H_n^{-1}\, \partial_i P_n(x),$$
where $\partial_i = \partial/\partial x_i$ denotes the partial derivative with respect to $x_i$.

Proof Since $P_n(x)^{\mathsf T} H_n^{-1}[A_{n,i} P_{n+1}(x)]$ is a scalar function, it is equal to its own transpose. Thus
$$P_n(x)^{\mathsf T} H_n^{-1} \big[A_{n,i} P_{n+1}(x)\big] = \big[A_{n,i} P_{n+1}(x)\big]^{\mathsf T} H_n^{-1} P_n(x).$$
Therefore the numerator of the right-hand side of the Christoffel–Darboux formula (3.6.5) can be written as
$$\big[A_{n,i} P_{n+1}(x)\big]^{\mathsf T} H_n^{-1} \big[P_n(y) - P_n(x)\big] - P_n(x)^{\mathsf T} H_n^{-1} A_{n,i} \big[P_{n+1}(y) - P_{n+1}(x)\big].$$
Thus the desired result follows from (3.6.5) on letting $y_i \to x_i$.
If the $P_n$ are orthonormal polynomials then the formulae in Theorem 3.6.3 and Corollary 3.6.4 hold with $H_k = I$. We state this case as follows, for easy reference.

Theorem 3.6.5 Let $\{P_n\}$ be a sequence of orthonormal polynomials. Then
$$K_n(x, y) = \frac{\big[A_{n,i} P_{n+1}(x)\big]^{\mathsf T} P_n(y) - P_n^{\mathsf T}(x) \big[A_{n,i} P_{n+1}(y)\big]}{x_i - y_i} \qquad (3.6.6)$$
and
$$K_n(x, x) = P_n^{\mathsf T}(x) \big[A_{n,i}\, \partial_i P_{n+1}(x)\big] - \big[A_{n,i} P_{n+1}(x)\big]^{\mathsf T} \partial_i P_n(x). \qquad (3.6.7)$$
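The orthonormal Christoffel–Darboux identity (3.6.6) can be verified directly in the product Chebyshev setting, where the coefficient matrices of the three-term relation are explicit. A sketch (an illustration, not from the text):

```python
import numpy as np

def U(m, t):
    # Chebyshev polynomials of the second kind via their recurrence.
    u0, u1 = 1.0, 2.0 * t
    if m == 0:
        return u0
    for _ in range(m - 1):
        u0, u1 = u1, 2.0 * t * u1 - u0
    return u1

def P(k, x):
    # Orthonormal basis of V_k^2 for the product Chebyshev weight of the second kind.
    return np.array([U(k - j, x[0]) * U(j, x[1]) for j in range(k + 1)])

def A(k, i):
    # Three-term-relation coefficients for this weight: B_{k,i} = 0 and
    # A_{k,1} = (1/2)[I | 0], A_{k,2} = (1/2)[0 | I].
    Z = np.zeros((k + 1, 1))
    I = 0.5 * np.eye(k + 1)
    return np.hstack([I, Z]) if i == 1 else np.hstack([Z, I])

def K(n, x, y):
    # The reproducing kernel as the sum (3.6.4).
    return sum(P(k, x) @ P(k, y) for k in range(n + 1))

x = np.array([0.31, -0.42])
y = np.array([-0.75, 0.2])
n = 4
for i in (1, 2):
    # right-hand side of (3.6.6); the value must not depend on i
    num = (A(n, i) @ P(n + 1, x)) @ P(n, y) - P(n, x) @ (A(n, i) @ P(n + 1, y))
    assert abs(K(n, x, y) - num / (x[i - 1] - y[i - 1])) < 1e-12
```

As the theorem states, both choices of $i$ give the same value, equal to the full double sum.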
It is interesting to note that, for each of the above formulae, although the right-hand side seems to depend on $i$, the left-hand side shows that it does not. By (3.6.3), the function $K_n$ is the kernel of the Fourier partial sum; thus it is important to derive a compact formula for $K_n$, that is, one that contains no summation. In the case of one variable, the right-hand side of the Christoffel–Darboux formula gives the desired compact formula. For several variables, however, the numerator on the right-hand side of (3.6.6) is still a sum, since $P_n^{\mathsf T}(x) A_{n,i} P_{n+1}(y)$ is a linear combination of the $P_\alpha^n(x) P_\beta^{n+1}(y)$. For some special weight functions, including the classical orthogonal polynomials on the ball and on the simplex, we are able to find a compact formula; this will be discussed in Chapter 8.

For one variable the function $1/K_n(x,x)$ is often called the Christoffel function. We will retain the name in the case of several variables, and we define
$$\Lambda_n(x) = \big[K_n(x, x)\big]^{-1}. \qquad (3.6.8)$$
Evidently this function is positive; moreover, it satisfies the following property.
Theorem 3.6.6 Let $\mathcal{L}$ be a positive definite linear functional. For an arbitrary point $x \in \mathbb{R}^d$,
$$\Lambda_n(x) = \min\big\{\mathcal{L}[P^2] : P(x) = 1, \ P \in \Pi_n^d\big\};$$
that is, the minimum is taken over all polynomials $P \in \Pi_n^d$ subject to the condition $P(x) = 1$.

Proof Since $\mathcal{L}$ is positive definite, there exists a corresponding sequence $\{P_k\}$ of orthonormal polynomials. If $P$ is a polynomial of degree at most $n$ then $P$ can be written in terms of the orthonormal basis as follows:
$$P(x) = \sum_{k=0}^{n} a_k^{\mathsf T}(P) P_k(x) \quad\text{where}\quad a_k(P) = \mathcal{L}(P P_k).$$
From the orthonormality of the $P_k$ it follows that
$$\mathcal{L}(P^2) = \sum_{k=0}^{n} a_k^{\mathsf T}(P) a_k(P) = \sum_{k=0}^{n} |a_k(P)|^2.$$
If $P(x) = 1$ then, by Cauchy's inequality,
$$1 = [P(x)]^2 = \Big(\sum_{k=0}^{n} a_k^{\mathsf T}(P) P_k(x)\Big)^2 \le \Big(\sum_{k=0}^{n} |a_k(P)|\,|P_k(x)|\Big)^2 \le \sum_{k=0}^{n} |a_k(P)|^2 \sum_{k=0}^{n} |P_k(x)|^2 = \mathcal{L}(P^2)\, K_n(x, x),$$
where equality holds if and only if $a_k(P) = [K_n(x,x)]^{-1} P_k(x)$. The proof is complete.
Theorem 3.6.6 states that the function $\Lambda_n$ is the solution of an extremum problem. One can also take the formula in the theorem as the definition of the Christoffel function.
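The extremum characterization can be tested numerically. In the sketch below (an illustration, not from the text; product Chebyshev weight of the second kind on the square) the candidate minimizer $P^{*}(y) = K_n(y, x_0)/K_n(x_0, x_0)$ attains $\mathcal{L}[(P^{*})^2] = \Lambda_n(x_0)$, while another admissible polynomial does no better:

```python
import numpy as np

def U(m, t):
    # Chebyshev polynomials of the second kind.
    u0, u1 = 1.0, 2.0 * t
    if m == 0:
        return u0
    for _ in range(m - 1):
        u0, u1 = u1, 2.0 * t * u1 - u0
    return u1

def P(k, x):
    # Orthonormal basis of V_k^2 for the product weight.
    return np.array([U(k - j, x[0]) * U(j, x[1]) for j in range(k + 1)])

def K(n, x, y):
    # The reproducing kernel.
    return sum(P(k, x) @ P(k, y) for k in range(n + 1))

m = 12
j = np.arange(1, m + 1)
t = np.cos(j * np.pi / (m + 1))
w = (2.0 / (m + 1)) * np.sin(j * np.pi / (m + 1)) ** 2  # Gauss-Chebyshev (2nd kind)

def L(f):
    # The positive definite functional, via tensor-product quadrature.
    return sum(w[a] * w[b] * f(np.array([t[a], t[b]]))
               for a in range(m) for b in range(m))

n, x0 = 3, np.array([0.2, -0.4])
lam = 1.0 / K(n, x0, x0)                      # Christoffel function Lambda_n(x0)
pstar = lambda y: K(n, y, x0) / K(n, x0, x0)  # candidate minimizer, P*(x0) = 1
assert abs(L(lambda y: pstar(y) ** 2) - lam) < 1e-12
# another polynomial of degree <= n with P(x0) = 1 cannot do better:
other = lambda y: (1.0 + y[0] - x0[0]) ** 2 * (1.0 + y[1] - x0[1])
assert abs(other(x0) - 1.0) < 1e-15
assert L(lambda y: other(y) ** 2) >= lam
```

The first assertion is the equality case of Theorem 3.6.6 (it uses the reproducing property of $K_n$), and the last is the minimality statement.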
3.6.2 Fourier orthogonal series

Let $\mathcal{L}$ be a positive definite linear functional, and let $\mathcal{H}_{\mathcal{L}}$ be the Hilbert space of real-valued functions defined on $\mathbb{R}^d$ with inner product $\langle f, g\rangle = \mathcal{L}(fg)$. Then the standard Hilbert space theory applies to the Fourier orthogonal series defined in (3.6.1).

Theorem 3.6.7 Let $\mathcal{L}$ be a positive definite linear functional, and let $f \in \mathcal{H}_{\mathcal{L}}$. Then, among all polynomials $P$ in $\Pi_n^d$, the value of $\mathcal{L}\big[|f(x) - P(x)|^2\big]$ becomes minimal if and only if $P = S_n(f)$.

Proof For each $k$, let $P_k$ be an orthonormal basis of $V_k^d$. For any $P \in \Pi_n^d$ there exist $b_k$ such that
$$P(x) = \sum_{k=0}^{n} b_k^{\mathsf T} P_k(x),$$
and $S_n(f)$ satisfies equation (3.6.3). Following a standard argument,
$$\begin{aligned}
0 \le \mathcal{L}\big[|f - P|^2\big] & = \mathcal{L}\big[f^2\big] - 2\sum_k b_k^{\mathsf T} \mathcal{L}(f P_k) + \sum_{j}\sum_{k} b_k^{\mathsf T} \mathcal{L}\big(P_k P_j^{\mathsf T}\big) b_j \\
& = \mathcal{L}\big[f^2\big] - 2\sum_{k=0}^{n} b_k^{\mathsf T} a_k(f) + \sum_{k=0}^{n} b_k^{\mathsf T} b_k \\
& = \mathcal{L}\big[f^2\big] - \sum_{k=0}^{n} a_k^{\mathsf T}(f) a_k(f) + \sum_{k=0}^{n} \big[a_k^{\mathsf T}(f) a_k(f) + b_k^{\mathsf T} b_k - 2 b_k^{\mathsf T} a_k(f)\big].
\end{aligned}$$
By Cauchy's inequality the third term on the right-hand side is nonnegative; moreover, the value of $\mathcal{L}(|f - P|^2)$ is minimal if and only if $b_k = a_k(f)$, that is, $P = S_n(f)$.

In the case when the minimum is attained, we have Bessel's inequality
$$\sum_{k=0}^{\infty} a_k^{\mathsf T}(f) a_k(f) \le \mathcal{L}\big[|f|^2\big].$$
Moreover, the following result holds, as in standard Hilbert space theory.

Theorem 3.6.8 If $\Pi^d$ is dense in $\mathcal{H}_{\mathcal{L}}$ then $S_n(f)$ converges to $f$ in $\mathcal{H}_{\mathcal{L}}$, and we have Parseval's identity
$$\sum_{k=0}^{\infty} a_k^{\mathsf T}(f) a_k(f) = \mathcal{L}\big[|f|^2\big].$$
The definition of the Fourier orthogonal series does not depend on the specific basis of $V_n^d$. If $\{P_n\}$ is a sequence of polynomials that are orthogonal rather than orthonormal then the Fourier series takes the form
$$f \sim \sum_{n=0}^{\infty} a_n^{\mathsf T}(f) H_n^{-1} P_n, \qquad a_n(f) = \mathcal{L}(f P_n), \quad H_n = \mathcal{L}\big(P_n P_n^{\mathsf T}\big). \qquad (3.6.9)$$
The presence of the inverse matrix $H_k^{-1}$ makes it difficult to find the Fourier orthogonal series using a basis that is orthogonal but not orthonormal.

An alternative method is to use biorthogonal polynomials to find the Fourier orthogonal series. Let $P_n = \{P_\alpha^n\}$ and $Q_n = \{Q_\alpha^n\}$ be two sequences of orthogonal polynomials with respect to $\mathcal{L}$. They are biorthogonal if
$$\mathcal{L}\big(P_\alpha^n Q_\beta^n\big) = d_\alpha\, \delta_{\alpha,\beta},$$
where $d_\alpha$ is nonzero. If all $d_\alpha = 1$, the two systems are called biorthonormal. If $P_n$ and $Q_n$ are biorthonormal then the coefficients of the Fourier orthogonal series in terms of $P_n$ can be found as follows:
$$f \sim \sum_{n=0}^{\infty} a_n^{\mathsf T}(f) P_n, \qquad a_n(f) = \mathcal{L}(f Q_n);$$
that is, the Fourier coefficient associated with $P_\alpha^n$ is given by $\mathcal{L}(f Q_\alpha^n)$. The idea of using biorthogonal polynomials to study Fourier orthogonal expansions was first applied to the case of the classical orthogonal polynomials on the unit ball (see Section 5.2), and can be traced back to the work of Hermite; see Chapter XII in Vol. II of Erdélyi et al. [1953].
We finish this section with an extremal problem for polynomials in several variables. Let $G_n$ be the leading-coefficient matrix corresponding to an orthonormal polynomial vector $P_n$, as before. Denote by $\|G_n\|$ the spectral norm of $G_n$; $\|G_n\|^2$ is equal to the largest eigenvalue of $G_n G_n^{\mathsf T}$.

Theorem 3.6.9 Let $\mathcal{L}$ be a positive definite linear functional. Then
$$\min\big\{\mathcal{L}(P^2) : P = \mathsf{a}^{\mathsf T} x^n + \cdots \in \Pi_n^d, \ \|\mathsf{a}\| = 1\big\} = \|G_n\|^{-2}.$$

Proof Since $P_0, \ldots, P_n$ forms a basis of $\Pi_n^d$, we can rewrite $P$ as
$$P(x) = \mathsf{a}^{\mathsf T} G_n^{-1} P_n + \mathsf{a}_{n-1}^{\mathsf T} P_{n-1} + \cdots + \mathsf{a}_0^{\mathsf T} P_0.$$
For $\|\mathsf{a}\| = 1$, it follows from Bessel's inequality that
$$\mathcal{L}(P^2) \ge \mathsf{a}^{\mathsf T} G_n^{-1} (G_n^{-1})^{\mathsf T} \mathsf{a} + \|\mathsf{a}_{n-1}\|^2 + \cdots + \|\mathsf{a}_0\|^2 \ge \mathsf{a}^{\mathsf T} G_n^{-1} (G_n^{-1})^{\mathsf T} \mathsf{a} \ge \lambda_{\min},$$
where $\lambda_{\min}$ is the smallest eigenvalue of $(G_n^{\mathsf T} G_n)^{-1}$, which is equal to the reciprocal of the largest eigenvalue of $G_n G_n^{\mathsf T}$. Thus we have proved that
$$\mathcal{L}(P^2) \ge \|G_n\|^{-2}, \qquad P(x) = \mathsf{a}^{\mathsf T} x^n + \cdots, \quad \|\mathsf{a}\| = 1.$$
Choosing $\mathsf{a}$ such that $\|\mathsf{a}\| = 1$ and $\mathsf{a}^{\mathsf T} G_n^{-1}(G_n^{-1})^{\mathsf T} \mathsf{a} = \lambda_{\min}$, we see that the polynomial $P = \mathsf{a}^{\mathsf T} G_n^{-1} P_n$ attains the lower bound.

In Theorem 3.2.14, $\det G_n$ can be viewed as an extension of the leading coefficient $k_n$ (see Proposition 1.3.7) of orthogonal polynomials in one variable in that context. The above theorem shows that $\|G_n\|$ is also a natural extension of $k_n$. In view of the formula (3.3.4), it is likely that $\|G_n\|$ is the more important of the two.
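For the product Chebyshev weight of the second kind on the square the leading-coefficient matrix is explicit: since $U_m$ has leading coefficient $2^m$, one gets $G_n = 2^n I$, so the minimum in Theorem 3.6.9 is $\|G_n\|^{-2} = 4^{-n}$. The sketch below (an illustration, not from the text) checks this by quadrature:

```python
import numpy as np

def U(m, t):
    # Chebyshev polynomials of the second kind.
    u0, u1 = 1.0, 2.0 * t
    if m == 0:
        return u0
    for _ in range(m - 1):
        u0, u1 = u1, 2.0 * t * u1 - u0
    return u1

m = 12
j = np.arange(1, m + 1)
t = np.cos(j * np.pi / (m + 1))
w = (2.0 / (m + 1)) * np.sin(j * np.pi / (m + 1)) ** 2

def L(f):
    # Tensor-product Gauss-Chebyshev quadrature for the functional.
    return sum(w[a] * w[b] * f(t[a], t[b]) for a in range(m) for b in range(m))

n = 3
rng = np.random.default_rng(1)
a = rng.normal(size=n + 1)
a /= np.linalg.norm(a)

# minimizer P* = a^T G_n^{-1} P_n with G_n = 2^n I:
pstar = lambda x, y: sum(a[k] * U(n - k, x) * U(k, y) for k in range(n + 1)) / 2 ** n
assert abs(L(lambda x, y: pstar(x, y) ** 2) - 4.0 ** (-n)) < 1e-12

# the bare monomial polynomial a^T x^n is admissible but not optimal:
mono = lambda x, y: sum(a[k] * x ** (n - k) * y ** k for k in range(n + 1))
assert L(lambda x, y: mono(x, y) ** 2) >= 4.0 ** (-n) - 1e-12
```

The first assertion is the equality case (orthonormality gives $\mathcal{L}[(P^{*})^2] = 4^{-n}\|\mathsf{a}\|^2$); the second is the lower bound of the theorem.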
3.7 Common Zeros of Orthogonal Polynomials in Several Variables

For polynomials in one variable the $n$th orthonormal polynomial $p_n$ has exactly $n$ distinct zeros, and these zeros are the eigenvalues of the truncated Jacobi matrix $J_n$. These facts have important applications in quadrature formulae, as we saw in Subsection 1.3.2.

The zero set of a polynomial in several variables is an example of an algebraic variety. In general, the zero set of a polynomial in two variables can be either a single point or an algebraic curve in the plane, and the structure of zero sets is far more complicated in several variables. However, if common zeros of orthogonal polynomials are used then at least part of the one-variable theory can be extended to several variables.

A common zero of a set of polynomials is a zero for every polynomial in the set. Let $\mathcal{L}$ be a positive definite linear functional, and let $P_n = \{P_\alpha^n\}$ be a sequence of orthonormal polynomials associated with $\mathcal{L}$. A common zero of $P_n$ is a zero of every $P_\alpha^n$. Clearly, we can consider the zeros of $\{P_\alpha^n\}$ as zeros of the polynomial subspace $V_n^d$.

We start with two simple properties of common zeros. First we need a definition: if $x$ is a zero of $P_n$ and at least one partial derivative of $P_n$ at $x$ is not zero, then $x$ is called a simple zero of $P_n$.

Theorem 3.7.1 All zeros of $P_n$ are distinct and simple. Two consecutive polynomials $P_n$ and $P_{n-1}$ do not have common zeros.

Proof Recall the Christoffel–Darboux formula in Theorem 3.6.3, which gives
$$\sum_{k=0}^{n-1} P_k^{\mathsf T}(x) P_k(x) = P_{n-1}^{\mathsf T}(x) A_{n-1,i}\, \partial_i P_n(x) - P_n^{\mathsf T}(x) A_{n-1,i}^{\mathsf T}\, \partial_i P_{n-1}(x),$$
where $\partial_i = \partial/\partial x_i$ denotes the partial derivative with respect to $x_i$. If $x$ is a zero of $P_n$ then
$$\sum_{k=0}^{n-1} P_k^{\mathsf T}(x) P_k(x) = P_{n-1}^{\mathsf T}(x) A_{n-1,i}\, \partial_i P_n(x).$$
Since the left-hand side is positive, neither $P_{n-1}(x)$ nor $\partial_i P_n(x)$ can be zero.
Using the coefficient matrices of the three-term relation (3.3.3), we define the truncated block Jacobi matrices $J_{n,i}$ as follows:
$$J_{n,i} = \begin{pmatrix}
B_{0,i} & A_{0,i} & & & \\
A_{0,i}^{\mathsf T} & B_{1,i} & A_{1,i} & & \\
& \ddots & \ddots & \ddots & \\
& & A_{n-3,i}^{\mathsf T} & B_{n-2,i} & A_{n-2,i} \\
& & & A_{n-2,i}^{\mathsf T} & B_{n-1,i}
\end{pmatrix}, \qquad 1 \le i \le d.$$
Note that $J_{n,i}$ is a square matrix of order $N = \dim \Pi_{n-1}^d$. We say that $\Lambda = (\lambda_1, \ldots, \lambda_d)^{\mathsf T} \in \mathbb{R}^d$ is a joint eigenvalue of $J_{n,1}, \ldots, J_{n,d}$ if there is $\xi \ne 0$, $\xi \in \mathbb{R}^N$, such that $J_{n,i}\xi = \lambda_i \xi$ for $i = 1, \ldots, d$; the vector $\xi$ is called a joint eigenvector associated with $\Lambda$.
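For the product Chebyshev weight of the second kind on the square the block matrices are explicit ($B_{k,i} = 0$, $A_{k,1} = \tfrac12(I \mid 0)$, $A_{k,2} = \tfrac12(0 \mid I)$), and for odd $n$ the origin is a common zero of $P_n$, so $\Lambda = (0,0)$ is a joint eigenvalue with joint eigenvector $(P_0^{\mathsf T}(0), \ldots, P_{n-1}^{\mathsf T}(0))^{\mathsf T}$. A sketch (an illustration, not from the text):

```python
import numpy as np

def U(m, t):
    # Chebyshev polynomials of the second kind.
    u0, u1 = 1.0, 2.0 * t
    if m == 0:
        return u0
    for _ in range(m - 1):
        u0, u1 = u1, 2.0 * t * u1 - u0
    return u1

def P(k, x):
    # Orthonormal basis of V_k^2 for the product Chebyshev weight (2nd kind).
    return np.array([U(k - j, x[0]) * U(j, x[1]) for j in range(k + 1)])

def A(k, i):
    Z = np.zeros((k + 1, 1))
    I = 0.5 * np.eye(k + 1)
    return np.hstack([I, Z]) if i == 1 else np.hstack([Z, I])

def jacobi_block(n, i):
    # Truncated block Jacobi matrix J_{n,i}; here B_{k,i} = 0.
    sizes = [k + 1 for k in range(n)]      # block sizes r_0, ..., r_{n-1}
    off = np.cumsum([0] + sizes)
    J = np.zeros((off[-1], off[-1]))       # order N = dim Pi^2_{n-1}
    for k in range(n - 1):
        Ak = A(k, i)
        J[off[k]:off[k + 1], off[k + 1]:off[k + 2]] = Ak
        J[off[k + 1]:off[k + 2], off[k]:off[k + 1]] = Ak.T
    return J

n = 5                                      # n odd: the origin is a common zero
z = np.array([0.0, 0.0])
assert np.allclose(P(n, z), 0.0)           # every entry of P_n vanishes at z
xi = np.concatenate([P(k, z) for k in range(n)])
for i in (1, 2):
    # z realizes the joint eigenvalue Lambda = (0, 0): J_{n,i} xi = lambda_i xi = 0
    assert np.allclose(jacobi_block(n, i) @ xi, 0.0)
```

This is exactly the correspondence that the next theorem establishes in general.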
The common zeros of $P_n$ are characterized in the following theorem.

Theorem 3.7.2 A point $\Lambda = (\lambda_1, \ldots, \lambda_d)^{\mathsf T} \in \mathbb{R}^d$ is a common zero of $P_n$ if and only if it is a joint eigenvalue of $J_{n,1}, \ldots, J_{n,d}$; moreover, the joint eigenvectors of $\Lambda$ are constant multiples of $\big(P_0^{\mathsf T}(\Lambda), \ldots, P_{n-1}^{\mathsf T}(\Lambda)\big)^{\mathsf T}$.

Proof If $P_n(\Lambda) = 0$ then it follows from the three-term relation that
$$\begin{aligned}
B_{0,i} P_0(\Lambda) + A_{0,i} P_1(\Lambda) & = \lambda_i P_0(\Lambda), \\
A_{k-1,i}^{\mathsf T} P_{k-1}(\Lambda) + B_{k,i} P_k(\Lambda) + A_{k,i} P_{k+1}(\Lambda) & = \lambda_i P_k(\Lambda), \qquad 1 \le k \le n-2, \\
A_{n-2,i}^{\mathsf T} P_{n-2}(\Lambda) + B_{n-1,i} P_{n-1}(\Lambda) & = \lambda_i P_{n-1}(\Lambda),
\end{aligned}$$
for $1 \le i \le d$. From the definition of $J_{n,i}$ it follows that, on the one hand,
$$J_{n,i}\xi = \lambda_i \xi, \qquad \xi = \big(P_0^{\mathsf T}(\Lambda), \ldots, P_{n-1}^{\mathsf T}(\Lambda)\big)^{\mathsf T}.$$
Thus $\Lambda$ is a joint eigenvalue of $J_{n,1}, \ldots, J_{n,d}$ with joint eigenvector $\xi$.

On the other hand, suppose that $\Lambda = (\lambda_1, \ldots, \lambda_d)$ is a joint eigenvalue and that $\xi$ is a joint eigenvector associated with $\Lambda$. Write $\xi = (\mathsf{x}_0^{\mathsf T}, \ldots, \mathsf{x}_{n-1}^{\mathsf T})^{\mathsf T}$, $\mathsf{x}_j \in \mathbb{R}^{r_j}$. Since $J_{n,i}\xi = \lambda_i \xi$, it follows that the $\mathsf{x}_j$ satisfy a three-term relation:
$$\begin{aligned}
B_{0,i}\mathsf{x}_0 + A_{0,i}\mathsf{x}_1 & = \lambda_i \mathsf{x}_0, \\
A_{k-1,i}^{\mathsf T}\mathsf{x}_{k-1} + B_{k,i}\mathsf{x}_k + A_{k,i}\mathsf{x}_{k+1} & = \lambda_i \mathsf{x}_k, \qquad 1 \le k \le n-2, \\
A_{n-2,i}^{\mathsf T}\mathsf{x}_{n-2} + B_{n-1,i}\mathsf{x}_{n-1} & = \lambda_i \mathsf{x}_{n-1},
\end{aligned}$$
for $1 \le i \le d$. First we show that $\mathsf{x}_0$, and thus $\xi$, is nonzero. Indeed, if $\mathsf{x}_0 = 0$ then it follows from the first equation in the three-term relation that $A_{0,i}\mathsf{x}_1 = 0$, which implies that $A_0 \mathsf{x}_1 = 0$. Since $A_0$ is a $d \times d$ matrix and it has full rank, it follows that $\mathsf{x}_1 = 0$. With $\mathsf{x}_0 = 0$ and $\mathsf{x}_1 = 0$, it then follows from the three-term relation that $A_{1,i}\mathsf{x}_2 = 0$, which leads to $A_1\mathsf{x}_2 = 0$. Since $A_1$ has full rank, $\mathsf{x}_2 = 0$. Continuing this process leads to $\mathsf{x}_i = 0$ for $i \ge 3$. Thus we end up with $\xi = 0$, which contradicts the assumption that $\xi$ is an eigenvector. Let us assume that $\mathsf{x}_0 = 1 = P_0$ and define $\mathsf{x}_n \in \mathbb{R}^{r_n}$ by $\mathsf{x}_n = 0$. We will prove that $\mathsf{x}_j = P_j(\Lambda)$ for all $1 \le j \le n$. Since the last equation in the three-term relation for the $\mathsf{x}_j$ can be written as
$$A_{n-2,i}^{\mathsf T}\mathsf{x}_{n-2} + B_{n-1,i}\mathsf{x}_{n-1} + A_{n-1,i}\mathsf{x}_n = \lambda_i \mathsf{x}_{n-1},$$
it follows that $\{\mathsf{x}_k\}_{k=0}^{n}$ and $\{P_k(\Lambda)\}_{k=0}^{n}$ satisfy the same three-term relation. Thus so does $\mathsf{y}_k = P_k(\Lambda) - \mathsf{x}_k$. But since $\mathsf{y}_0 = 0$, it follows from the previous argument that $\mathsf{y}_k = 0$ for all $1 \le k \le n$. In particular, $\mathsf{y}_n = P_n(\Lambda) = 0$. The proof is complete.
From this theorem follow several interesting corollaries.

Corollary 3.7.3 All the common zeros of $P_n$ are real, that is, they are points in $\mathbb{R}^d$.

Proof This follows because the $J_{n,i}$ are symmetric matrices and the eigenvalues of a symmetric matrix are real; hence the common zeros are points in $\mathbb{R}^d$.

Corollary 3.7.4 The polynomials in $P_n$ have at most $\dim \Pi_{n-1}^d$ common zeros.

Proof Since $J_{n,i}$ is a square matrix of size $\dim \Pi_{n-1}^d$, it can have at most that many linearly independent eigenvectors.
In one variable, the $n$th orthogonal polynomial $p_n$ has $n = \dim \Pi_{n-1}^1$ distinct real zeros. The situation becomes far more complicated in several variables. The following theorem characterizes when $P_n$ has the maximum number of common zeros.

Theorem 3.7.5 The orthogonal polynomial $P_n$ has $N = \dim \Pi_{n-1}^d$ distinct real common zeros if and only if
$$A_{n-1,i} A_{n-1,j}^{\mathsf T} = A_{n-1,j} A_{n-1,i}^{\mathsf T}, \qquad 1 \le i, j \le d. \qquad (3.7.1)$$

Proof According to Theorem 3.7.2, a zero of $P_n$ is a joint eigenvalue of $J_{n,1}, \ldots, J_{n,d}$ whose eigenvectors form a one-dimensional space. This implies that $P_n$ has $N = \dim \Pi_{n-1}^d$ distinct zeros if and only if $J_{n,1}, \ldots, J_{n,d}$ have $N$ distinct joint eigenvalues, which is equivalent to stating that $J_{n,1}, \ldots, J_{n,d}$ can be simultaneously diagonalized by an invertible matrix. Since a family of matrices is simultaneously diagonalizable if and only if it is a commuting family,
$$J_{n,i} J_{n,j} = J_{n,j} J_{n,i}, \qquad 1 \le i, j \le d.$$
From the definition of $J_{n,i}$ and the commutativity conditions (3.4.2), the above equation is equivalent to the condition
$$A_{n-2,i}^{\mathsf T} A_{n-2,j} + B_{n-1,i} B_{n-1,j} = A_{n-2,j}^{\mathsf T} A_{n-2,i} + B_{n-1,j} B_{n-1,i}.$$
The third equation in (3.4.2) then leads to the desired result.
106 General Properties of Orthogonal Polynomials in Several Variables
The condition (3.7.1) is trivial for orthogonal polynomials in one variable, as
then A
i
=a
i
are scalars. The condition is usually not satised, since matrix multi-
plication is not commutative; it follows that in general P
n
does not have dim
d
n1
common zeros. Further, the following theorem gives a necessary condition for P
n
to have the maximum number of common zeros. Recall the notation
C
for joint
matrices in Denition 3.5.2.
Theorem 3.7.6 A necessary condition for the orthogonal polynomial P
n
to have
dim
d
n1
distinct real common zeros is
rank
_

T
B
n
B
n
_
= rank
_

T
A
n1
A
n1
_
=
_
n+d 2
n
_
+
_
n+d 3
n1
_
.
In particular, for d = 2, the condition becomes
rank(B
n,1
B
n,2
B
n,2
B
n,1
) = rank(A
T
n1,1
A
n1,2
A
T
n1,2
A
n1,1
) = 2.
Proof If P
n
has dim
d
n1
distinct real common zeros then, by Theorem 3.7.5,
the matrices A
n1,i
satisfy equation (3.7.1). It follows that the third commutativity
condition in (3.4.2) is reduced to
B
n,i
B
n, j
B
n, j
B
n,i
= A
T
n1, j
A
n1,i
A
T
n1,i
A
n1, j
,
which, using the fact that the B
n,i
are symmetric matrices, can be written as

T
B
n
B
n
=
T
A
n1
A
n1
. Using the rank condition (3.3.8), Proposition 3.5.7 and
a rank inequality, we have
rank
T
B
n
B
n
= rank
T
A
n1
A
n1
rank A
n1
+rank
A
n1
dr
d
n1
r
d
n
+dr
d
n1
r
d
n2
dr
d
n1
= r
d
n
r
d
n2
.
To prove the desired result, we now prove the reverse inequality. On the one hand,
using the notation
C
the rst equation in (3.4.2) can be written as
T
A
T
n1
A
n
= 0.
By Proposition 3.5.7 and the rank condition (3.3.8), it follows that the columns
of A
n
form a basis for the null space of
T
A
T
n1
. On the other hand, the condition
(3.7.1) can be written as
T
A
T
n1
(A
n1,1
A
n1,d
)
T
=0, which shows that the
columns of the matrix (A
n1,1
A
n1,d
)
T
belong to the null space of
T
A
T
n1
.
So, there is a matrix S
n
: r
d
n+1
r
d
n1
such that (A
n1,1
A
n1,d
)
T
= A
n
S
n
. It
follows from
r
d
n1
= rank(A
n1,1
A
n1,d
)
T
= rank A
n
S
n
rank S
n
r
d
n1
that rank S
n
= r
d
n1
. Using the condition (3.7.1) with n in place of n1, we have

T
B
n+1
B
n+1
S
n
=
T
A
n
A
n
S
n
=
T
A
n
(A
n1,1
A
n1,d
)
T
= 0,
3.8 Gaussian Cubature Formulae 107
where the last equality follows from the rst equation in (3.4.2). Hence
rank
_

T
B
n+1
B
n+1
_
=r
d
n+1
dimker
T
B
n+1
B
n+1
r
d
n+1
rank S
n
= r
d
n+1
r
d
n1
,
which completes the proof.
If $\mathcal{L}$ is a quasi-centrally-symmetric linear functional then $\operatorname{rank}\big[{}^{\mathsf T}B_n\, B_n\big] = 0$, since it follows from Lemma 3.5.3 that
$${}^{\mathsf T}B_n\, B_n = 0 \iff B_{n,i}B_{n,j} = B_{n,j}B_{n,i},$$
where we have used the fact that the $B_{n,i}$ are symmetric. Therefore, an immediate corollary of the above result is the following.

Corollary 3.7.7 If $\mathcal{L}$ is a quasi-centrally-symmetric linear functional then $P_n$ does not have $\dim \Pi_{n-1}^d$ distinct common zeros.

The implications of these results are discussed in the following section.
3.8 Gaussian Cubature Formulae

In the theory of orthogonal polynomials in one variable, Gaussian quadrature formulae play an important role. Let $w$ be a weight function defined on $\mathbb{R}$. A quadrature formula with respect to $w$ is a weighted sum of a finite number of function values that gives an approximation to the integral $\int f w\,\mathrm{d}x$; it gives the exact value of the integral for polynomials up to a certain degree, which is called the degree of precision of the formula. Among all quadrature formulae a Gaussian quadrature formula has the maximum degree of precision, which is $2n-1$ for a formula that uses $n$ nodes, as discussed in Theorem 1.3.13. Furthermore, a Gaussian quadrature formula exists if and only if the nodes are the zeros of an orthogonal polynomial of degree $n$ with respect to $w$. Since such a polynomial always has $n$ distinct real zeros, the Gaussian quadrature formula exists for every suitable weight function.
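In one variable the nodes and weights are computable from the truncated Jacobi matrix (the Golub–Welsch procedure): the nodes are its eigenvalues and the weights come from the first components of the normalized eigenvectors. A sketch (an illustration, not from the text) for the normalized Chebyshev weight of the second kind, checked against the exact moments:

```python
import numpy as np
from math import comb

def gauss_rule(n):
    # Golub-Welsch: nodes = eigenvalues of the truncated Jacobi matrix;
    # weights = mu_0 * (first components of the normalized eigenvectors)^2.
    # Chebyshev weight of the second kind (2/pi) sqrt(1-x^2): a_k = 1/2, b_k = 0.
    J = np.diag(0.5 * np.ones(n - 1), 1) + np.diag(0.5 * np.ones(n - 1), -1)
    nodes, V = np.linalg.eigh(J)
    return nodes, V[0, :] ** 2   # mu_0 = L(1) = 1 for the normalized weight

n = 6
x, w = gauss_rule(n)
# degree of precision 2n - 1: compare with the exact moments of the weight,
# L(x^{2j}) = C_j / 4^j (Catalan numbers) and L(x^{2j+1}) = 0.
for m in range(2 * n):
    j = m // 2
    exact = comb(2 * j, j) / (j + 1) / 4.0 ** j if m % 2 == 0 else 0.0
    assert abs(np.dot(w, x ** m) - exact) < 1e-12
```

The nodes produced here are exactly the zeros $\cos(j\pi/(n+1))$ of $U_n$, in agreement with the equivalence just stated.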
Below we study the analogues of the Gaussian quadrature formulae for several variables, called Gaussian cubature formulae. The existence of such formulae depends on the existence of common zeros of orthogonal polynomials in several variables.

Let $\mathcal{L}$ be a positive definite linear functional. A cubature formula of degree $2n-1$ with respect to $\mathcal{L}$ is a linear functional
$$\mathcal{I}_n(f) = \sum_{k=1}^{N} \lambda_k f(x_k), \qquad \lambda_k \in \mathbb{R}, \quad x_k \in \mathbb{R}^d,$$
such that $\mathcal{L}(f) = \mathcal{I}_n(f)$ for all $f \in \Pi_{2n-1}^d$ and $\mathcal{L}(f^*) \ne \mathcal{I}_n(f^*)$ for at least one $f^* \in \Pi_{2n}^d$. The points $x_k$ are called the nodes, and the numbers $\lambda_k$ the weights, of the cubature formula. Such a formula is called minimal if $N$, the number of nodes, is minimal. If a cubature formula has all positive weights, it is called a positive cubature.

We are interested in minimal cubature formulae. To identify a cubature formula as minimal, it is necessary to know the minimal number of nodes in advance. The following theorem gives a lower bound for the number of nodes of cubature formulae; those formulae that attain the lower bound are minimal.

Theorem 3.8.1 The number of nodes of a cubature formula of degree $2n-1$ satisfies $N \ge \dim \Pi_{n-1}^d$.

Proof If a cubature formula $\mathcal{I}_n$ of degree $2n-1$ has fewer than $\dim \Pi_{n-1}^d$ nodes, then the linear system of equations $P(x_k) = 0$, where the $x_k$ are the nodes of the cubature formula and $P \in \Pi_{n-1}^d$, has $\dim \Pi_{n-1}^d$ unknowns, the coefficients of $P$, but fewer than $\dim \Pi_{n-1}^d$ equations. Therefore there will be at least one nonzero polynomial $P \in \Pi_{n-1}^d$ which vanishes on all the nodes. Since the formula is of degree $2n-1$, it follows that $\mathcal{L}(P^2) = \mathcal{I}_n(P^2) = 0$, which contradicts the fact that $\mathcal{L}$ is positive definite.

For $d = 1$, the lower bound in Theorem 3.8.1 is attained by the Gaussian quadrature formulae. In analogy with that case, we make the following definition.

Definition 3.8.2 A cubature formula of degree $2n-1$ with $\dim \Pi_{n-1}^d$ nodes is called a Gaussian cubature formula.

The main result in this section is the characterization of Gaussian cubature formulae. We start with a basic lemma of Mysovskikh [1976], which holds the key to the proof of the main theorem of this section and which is of considerable interest in itself. We need a definition: an algebraic hypersurface of degree $m$ in $\mathbb{R}^d$ is the zero set of a polynomial of degree $m$ in $d$ variables.
Lemma 3.8.3 The pairwise distinct points $x_k$, $1 \le k \le N = \dim \Pi_{n-1}^d$, in $\mathbb{R}^d$ do not lie on an algebraic hypersurface of degree $n-1$ if and only if there exist $r_n^d$ polynomials of degree $n$, whose $n$th-degree terms are linearly independent, for which these points are the common zeros.

Proof That the $x_k$ are not on a hypersurface of degree $n-1$ means that the $N \times N$ matrix $(x_k^\alpha)$, whose columns are arranged according to the lexicographical order of $\{\alpha : 0 \le |\alpha| \le n-1\}$ and whose rows correspond to $1 \le k \le N$, is nonsingular. Therefore we can construct $r_n^d$ polynomials
$$Q_\alpha^n = x^\alpha - R_\alpha(x), \quad |\alpha| = n, \qquad R_\alpha(x) = \sum_{|\beta| \le n-1} c_{\alpha,\beta}\, x^\beta \in \Pi_{n-1}^d \qquad (3.8.1)$$
by solving the linear system of equations
$$R_\alpha(x_k) = x_k^\alpha, \qquad 1 \le k \le N,$$
for each fixed $\alpha$. Clearly, the set of polynomials $\{Q_\alpha^n\}$ is linearly independent and the conditions $Q_\alpha^n(x_k) = 0$ are satisfied.

Conversely, suppose that there are $r_n^d$ polynomials $Q_\alpha^n$ of degree $n$ which have the $x_k$ as common zeros and whose $n$th-degree terms are linearly independent. Because of their linear independence, we can assume that these polynomials are of the form (3.8.1). If the points $x_k$ are on an algebraic hypersurface of degree $n-1$ then the matrix $X_n = (x_k^\alpha)$, where $1 \le k \le N$ and $|\alpha| \le n-1$, is singular. Suppose that $\operatorname{rank} X_n = M - 1$, $M \le N$, and assume that the first $M-1$ rows of this matrix are linearly independent. For each $k$ define a vector $U_k = (x_k^\alpha)_{|\alpha| \le n-1}$ in $\mathbb{R}^N$; then $U_1, \ldots, U_{M-1}$ are linearly independent and there exist scalars $c_1, \ldots, c_M$, not all zero, such that $\sum_{k=1}^{M} c_k U_k = 0$. Moreover, by considering the first component of this vector equation it follows that at least two $c_j$ are nonzero. Assume that $c_1 \ne 0$ and $c_M \ne 0$. By (3.8.1) the equations $Q_\alpha^n(x_k) = 0$ can be written as
$$x_k^\alpha - \sum_{|\beta| \le n-1} c_{\alpha,\beta}\, x_k^\beta = 0, \qquad |\alpha| = n, \quad k = 1, \ldots, N.$$
Since the summation term can be written as $C_\alpha^{\mathsf T} U_k$, with $C_\alpha$ a vector in $\mathbb{R}^N$, it follows from $\sum_{k=1}^{M} c_k U_k = 0$ that $\sum_{k=1}^{M} c_k x_k^\alpha = 0$, $|\alpha| = n$. Using the notation of the vectors $U_k$ again, it follows from the above equation that $\sum_{k=1}^{M} c_k x_{k,i} U_k = 0$, $1 \le i \le d$, where we have used the notation $x_k = (x_{k,1}, \ldots, x_{k,d})$. Multiplying the equation $\sum_{k=1}^{M} c_k U_k = 0$ by $x_{M,i}$ and then subtracting it from the equation $\sum_{k=1}^{M} c_k x_{k,i} U_k = 0$, we obtain
$$\sum_{k=1}^{M-1} c_k (x_{k,i} - x_{M,i}) U_k = 0, \qquad 1 \le i \le d.$$
Since $U_1, \ldots, U_{M-1}$ are linearly independent, it follows that $c_k(x_{k,i} - x_{M,i}) = 0$ for $1 \le k \le M-1$ and $1 \le i \le d$. Since $c_1 \ne 0$, we get $x_{1,i} = x_{M,i}$ for $1 \le i \le d$, which is to say that $x_1 = x_M$, a contradiction to the assumption that the points are pairwise distinct.
The existence of a Gaussian cubature formula of degree $2n-1$ is characterized by the common zeros of orthogonal polynomials. The following theorem was first proved by Mysovskikh [1970]; our proof is somewhat different.

Theorem 3.8.4  Let $\mathcal{L}$ be a positive definite linear functional, and let $\{\mathbb{P}_n\}$ be the corresponding orthogonal polynomials. Then $\mathcal{L}$ admits a Gaussian cubature formula of degree $2n-1$ if and only if $\mathbb{P}_n$ has $N = \dim \Pi_{n-1}^d$ common zeros.
110 General Properties of Orthogonal Polynomials in Several Variables

Proof  Suppose that $\mathbb{P}_n$ has $N$ common zeros. By Theorem 3.7.1 and Corollary 3.7.3, all these zeros are real and distinct; denote them by $x_1, x_2, \ldots, x_N$. From the Christoffel–Darboux formula (3.6.5) and the formula for $K_n(x, x)$, it follows that the polynomial $\ell_k(x) = K_n(x, x_k)/K_n(x_k, x_k)$ satisfies the condition $\ell_k(x_j) = \delta_{k,j}$, $1 \le k, j \le N$. Since the $\ell_k$ are of degree $n-1$ they serve as fundamental interpolation polynomials, and it follows that
$$\mathcal{L}_n(f; x) = \sum_{k=1}^N f(x_k)\, \frac{K_n(x, x_k)}{K_n(x_k, x_k)}, \qquad \mathcal{L}_n(f) \in \Pi_{n-1}^d, \tag{3.8.2}$$
is a Lagrange interpolation based on the $x_k$; that is, $\mathcal{L}_n(f; x_k) = f(x_k)$, $1 \le k \le N$. By Lemma 3.8.3 the points $x_k$ do not lie on an algebraic hypersurface of degree $n-1$. Therefore it readily follows that $\mathcal{L}_n(f)$ is the unique Lagrange interpolation polynomial. Thus the formula
$$\mathcal{L}[\mathcal{L}_n(f)] = \sum_{k=1}^N \lambda_k f(x_k), \qquad \lambda_k = [K_n(x_k, x_k)]^{-1}, \tag{3.8.3}$$
is a cubature formula for $\mathcal{L}$ which is exact for all polynomials in $\Pi_{n-1}^d$. However, since $\mathbb{P}_n$ is orthogonal to $\Pi_{n-1}^d$ and the nodes are zeros of $\mathbb{P}_n$, it follows that, for any vector $a \in \mathbb{R}^{r_n^d}$,
$$\mathcal{L}(a^{\mathsf T}\mathbb{P}_n) = 0 = \sum_{k=1}^N \lambda_k\, a^{\mathsf T}\mathbb{P}_n(x_k).$$
Since any polynomial $P \in \Pi_n^d$ can be written as $P = a^{\mathsf T}\mathbb{P}_n + R$ for some $a$ with $R \in \Pi_{n-1}^d$,
$$\mathcal{L}(P) = \mathcal{L}(R) = \sum_{k=1}^N \lambda_k R(x_k) = \sum_{k=1}^N \lambda_k P(x_k), \qquad P \in \Pi_n^d. \tag{3.8.4}$$
Thus the cubature formula (3.8.3) is exact for all polynomials in $\Pi_n^d$. Clearly, we can repeat this process. By orthogonality and the fact that $\mathbb{P}_n(x_k) = 0$,
$$\mathcal{L}(x_i\, a^{\mathsf T}\mathbb{P}_n) = 0 = \sum_{k=1}^N \lambda_k\, x_{k,i}\, a^{\mathsf T}\mathbb{P}_n(x_k), \qquad 1 \le i \le d,$$
which, by the recurrence relation (3.3.10) and the relation (3.8.4), implies that $\mathcal{L}(a^{\mathsf T}\mathbb{P}_{n+1}) = \sum_{k=1}^N \lambda_k\, a^{\mathsf T}\mathbb{P}_{n+1}(x_k)$. Therefore it readily follows that the cubature formula (3.8.3) is exact for all polynomials in $\Pi_{n+1}^d$. Because $\mathbb{P}_n$ is orthogonal to polynomials of degree less than $n$, we can apply this process with $x^\alpha$, $|\alpha| \le n-2$, and conclude that the cubature formula (3.8.3) is indeed exact for $\Pi_{2n-1}^d$.
Now suppose that a Gaussian cubature formula exists and has $x_k$, $1 \le k \le N = \dim \Pi_{n-1}^d$, as its nodes. Since $\mathcal{L}$ is positive definite, these nodes cannot all lie on an algebraic hypersurface of degree $n-1$: otherwise there would be a $Q \in \Pi_{n-1}^d$, not identically zero, such that $Q$ vanished on all the nodes, which would imply that $\mathcal{L}(Q^2) = 0$. By Lemma 3.8.3 there exist linearly independent polynomials $P_j^n$, $1 \le j \le r_n^d$, which have the $x_k$ as common zeros. By the Gaussian cubature formula, for any $R \in \Pi_{n-1}^d$,
$$\mathcal{L}(R P_j^n) = \sum_{k=1}^N \lambda_k\, (R P_j^n)(x_k) = 0.$$
Thus the polynomials $P_j^n$ are orthogonal to $\Pi_{n-1}^d$, and we can write them in the form (3.8.1).
It is worth mentioning that the above proof is similar to the proof of the Gaussian quadrature formulae in one variable. The use of interpolation polynomials leads to the following corollary.

Corollary 3.8.5  A Gaussian cubature formula takes the form (3.8.3); in particular, it is a positive cubature formula.

In the proof of Theorem 3.8.4, the Gaussian cubature formula (3.8.3) was derived by integrating the Lagrange interpolation polynomial $\mathcal{L}_n f$ in (3.8.2). This is similar to the case of one variable. For polynomial interpolation in one variable, it is known that the Lagrange interpolation polynomial based on distinct nodes always exists and is unique. The same, however, is not true for polynomial interpolation in several variables. A typical example is the interpolation of six points in the plane by a polynomial of degree 2. Assume that the six points lie on the unit circle $x^2 + y^2 = 1$; then the polynomial $p(x, y) = x^2 + y^2 - 1$ vanishes on the nodes. If $P$ is an interpolation polynomial of degree 2 on these nodes then so is $P + ap$ for any given number $a$. This shows that the Lagrange interpolation polynomial, if it exists, need not be unique. If the nodes are zeros of a Gaussian cubature formula, however, then they admit a unique Lagrange interpolation polynomial.
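The weight formula $\lambda_k = [K_n(x_k, x_k)]^{-1}$ in (3.8.3) can be illustrated in one variable, where the Gaussian cubature formula reduces to classical Gauss quadrature. The sketch below (assuming NumPy is available; the helper name `kernel_diag` is ours) checks that the Gauss–Legendre weights on $[-1,1]$ equal the reciprocal of the kernel $K_n(x,x) = \sum_{j=0}^{n-1} p_j(x)^2$ built from the orthonormal Legendre polynomials $p_j(t) = \sqrt{(2j+1)/2}\,P_j(t)$:

```python
import numpy as np
from numpy.polynomial import legendre

n = 6
nodes, weights = legendre.leggauss(n)  # zeros of P_n and the quadrature weights

def kernel_diag(x, n):
    """K_n(x, x) = sum_{j<n} p_j(x)^2 for the orthonormal Legendre polynomials."""
    total = np.zeros_like(np.asarray(x, dtype=float))
    for j in range(n):
        coef = np.zeros(j + 1)
        coef[j] = 1.0
        pj = legendre.legval(x, coef) * np.sqrt((2 * j + 1) / 2)
        total += pj ** 2
    return total

recovered = 1.0 / kernel_diag(nodes, n)  # lambda_k = 1/K_n(x_k, x_k)
assert np.allclose(recovered, weights)
```

The same identity is exactly what (3.8.3) asserts in several variables, with the multivariate kernel $K_n$ in place of the univariate one.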
Theorem 3.8.6  If $x_k$, $1 \le k \le \dim \Pi_{n-1}^d$, are nodes of a Gaussian cubature formula then for any given values $y_i$ there is a unique polynomial $P \in \Pi_{n-1}^d$ such that
$$P(x_i) = y_i, \qquad 1 \le i \le \dim \Pi_{n-1}^d.$$

Proof  The interpolating polynomial takes the form (3.8.2) with $y_k = f(x_k)$. If all $f(x_i) = 0$ then $P(x_i) = 0$, and the Gaussian cubature formula shows that $\mathcal{L}(P^2) = 0$, which implies $P = 0$.
Theorem 3.8.4 characterizes the Gaussian cubature formulae through the common zeros of orthogonal polynomials. Together with Theorem 3.7.5 we have the following.

Theorem 3.8.7  Let $\mathcal{L}$ be a positive definite linear functional, and let $\{\mathbb{P}_n\}$ be the corresponding orthonormal polynomials, which satisfy the three-term relation (3.3.3). Then a Gaussian cubature formula exists if and only if (3.7.1) holds.

Since Corollary 3.7.7 states that (3.7.1) does not hold for a quasi-centrally-symmetric linear functional, it follows that:

Corollary 3.8.8  A quasi-centrally-symmetric linear functional does not admit a Gaussian cubature formula.

In particular, there is no Gaussian cubature formula for the symmetric classical weight functions on the square, for the classical weight function on the unit disk, or for their higher-dimensional counterparts. An immediate question is whether there are any weight functions which admit a Gaussian cubature formula. It turns out that there are such examples, as will be shown in Subsection 5.4.2.
3.9 Notes

Section 3.2  The general properties of orthogonal polynomials in several variables were studied in Jackson [1936], Krall and Sheffer [1967], Bertran [1975], Kowalski [1982a, b], Suetin [1999], and a series of papers of Xu [1993a, b, 1994a–e] in which the vector–matrix notation is emphasized.

The classical references for the moment problem in one variable are Shohat and Tamarkin [1943] and Akhiezer [1965]. For results on the moment problem in several variables we refer to the surveys Fuglede [1983] and Berg [1987], as well as Putinar and Vasilescu [1999] and the references therein. Although in general positive polynomials in several variables cannot be written as sums of squares of polynomials, they can be written as sums of squares of rational functions (Hilbert's 17th problem); see the survey Reznick [2000] and the references therein.

Section 3.3  The three-term relation and Favard's theorem in several variables were first studied by Kowalski [1982a, b], in which the vector notation was used in the form $x\mathbb{P}_n = (x_1\mathbb{P}_n^{\mathsf T}, \ldots, x_d\mathbb{P}_n^{\mathsf T})$ and Favard's theorem was stated under an additional condition that takes a rather complicated form. The additional condition was shown to be equivalent to $\operatorname{rank} C_k = r_k^d$, where $C_k = (C_{k,1} \cdots C_{k,d})$, in Xu [1993a]. The relation for orthonormal polynomials is stated and proved in Xu [1994a]. The three-term relations were used by Barrio, Peña and Sauer [2010] for evaluating orthogonal polynomials. The matrices in the three-term relations were computed for further orthogonal polynomials of two variables in Area, Godoy, Ronveaux and Zarzo [2012]. The structure of orthogonal polynomials in lexicographic order, rather than graded lexicographic order, was studied in Delgado, Geronimo, Iliev and Marcellán [2006].

Section 3.4  The relation between orthogonal polynomials of several variables and self-adjoint commuting operators was studied in Xu [1993b, 1994a]; see also Gekhtman and Kalyuzhny [1994]. An extensive study from the operator-theory perspective was carried out in Cichoń, Stochel and Szafraniec [2005].

Sections 3.7 and 3.8  The study of Gaussian cubature formulae started with the classical paper of Radon [1948]. Mysovskikh and his school continued the study and made considerable progress; Mysovskikh obtained, for example, the basic properties of the common zeros of $\mathbb{P}_n$ and most of the results in Section 3.8; see Mysovskikh [1981] and the references therein. The fact that the common zeros are joint eigenvalues of block Jacobi matrices and Theorem 3.7.5 were proved in Xu [1993a, 1994b]. Mysovskikh [1976] proved a version of Theorem 3.7.5 in two variables without writing it in terms of the coefficient matrices of the three-term relation. A different necessary and sufficient condition for the existence of the Gaussian cubature formula was given recently in Lasserre [2012]. There are examples of weight functions that admit Gaussian cubature formulae for all $n$ (Schmid and Xu [1994], Berens, Schmid and Xu [1995b]).

Möller [1973] proved the following important results: the number of nodes, $N$, of a cubature formula of degree $2n-1$ in two variables must satisfy
$$N \ge \dim \Pi_{n-1}^2 + \tfrac12 \operatorname{rank}\bigl(A_{n-1,1} A_{n-1,2}^{\mathsf T} - A_{n-1,2} A_{n-1,1}^{\mathsf T}\bigr), \tag{3.9.1}$$
which we have stated in matrix notation; moreover, for a quasi-centrally-symmetric weight function, $\operatorname{rank}(A_{n-1,1} A_{n-1,2}^{\mathsf T} - A_{n-1,2} A_{n-1,1}^{\mathsf T}) = 2[n/2]$. Möller also showed that if a cubature formula attains the lower bound (3.9.1) then its nodes must be common zeros of a subset of orthogonal polynomials of degree $n$. Similar results hold in the higher-dimensional setting. Much of the later study in this direction has been influenced by Möller's work. There are, however, only a few examples in which the lower bound (3.9.1) is attained; see, for example, Möller [1976], Morrow and Patterson [1978], Schmid [1978, 1995], and Xu [2013]. The bound in (3.9.1) is still not sharp, even for radial weight functions on the disk; see Verlinden and Cools [1992]. For cubature formulae of even degree, see Schmid [1978] and Xu [1994d]. In general, all positive cubature formulae are generated by common zeros of quasi-orthogonal polynomials (Möller [1973], Xu [1994f, 1997a]), which are orthogonal to polynomials of a few degrees lower, a notion that can also be stated in terms of the $m$-orthogonality introduced by Möller. A polynomial $p$ is said to be $m$-orthogonal if $\int pq \,\mathrm{d}\mu = 0$ for all $q \in \Pi^d$ such that $pq \in \Pi_m^d$. The problem can be stated in the language of polynomial ideals and varieties. The essential result says that if the variety of an ideal of $m$-orthogonal polynomials consists of only real points, and the size of the variety is more or less equal to the codimension of the ideal, then there is a cubature formula of degree $m$; see Xu [1999c]. Concerning the basic question of finding a set of orthogonal polynomials or $m$-orthogonal polynomials with a large number of common zeros, there is no general method yet. For further results in this direction we refer to the papers cited above and to the references therein. Putinar [1997, 2000] studied cubature formulae using an operator-theory approach. The books by Stroud [1971], Engels [1980] and Mysovskikh [1981] deal with cubature formulae in general.
4
Orthogonal Polynomials on the Unit Sphere

In this chapter we consider orthogonal polynomials with respect to a weight function defined on the unit sphere, the structure of which is not covered by the discussion in the previous chapter. Indeed, if $\mathrm{d}\omega$ is a measure supported on the unit sphere of $\mathbb{R}^d$ then the linear functional $\mathcal{L}(f) = \int f \,\mathrm{d}\omega$ is not positive definite on the space of polynomials, since $\int (1-\|x\|^2)^2 \,\mathrm{d}\omega = 0$. It is positive definite on the space of polynomials restricted to the unit sphere, which is the space in which these orthogonal polynomials are defined.

We consider orthogonal polynomials with respect to the surface measure on the sphere first; these are the spherical harmonics. Our treatment will be brief, since most results and proofs will be given in a more general setting in Chapter 7. The general structure of orthogonal polynomials on the sphere will be derived from the close connection between the orthogonal structures on the sphere and on the unit ball. This connection goes both ways and can be used to study classical orthogonal polynomials on the unit ball. We will also discuss a connection between the orthogonal structures on the unit sphere and on the simplex.

4.1 Spherical Harmonics

The Fourier analysis of continuous functions on the unit sphere $S^{d-1} := \{x : \|x\| = 1\}$ in $\mathbb{R}^d$ is performed by means of spherical harmonics, which are the restrictions of homogeneous harmonic polynomials to the sphere. In this section we present a concise overview of the theory and a construction of an orthogonal basis by means of Gegenbauer polynomials. Further results can be deduced as special cases of theorems in Chapter 7, by taking the weight function there to be 1.

We assume $d \ge 3$ unless stated otherwise. Let $\Delta = \partial_1^2 + \cdots + \partial_d^2$ be the Laplace operator, where $\partial_i = \partial/\partial x_i$.
Definition 4.1.1  For $n = 0, 1, 2, \ldots$ let $\mathcal{H}_n^d$ be the linear space of harmonic polynomials, homogeneous of degree $n$, on $\mathbb{R}^d$; that is,
$$\mathcal{H}_n^d = \bigl\{ P \in \mathcal{P}_n^d : \Delta P = 0 \bigr\}.$$

Theorem 4.1.2  For $n = 0, 1, 2, \ldots$
$$\dim \mathcal{H}_n^d = \dim \mathcal{P}_n^d - \dim \mathcal{P}_{n-2}^d = \binom{n+d-1}{d-1} - \binom{n+d-3}{d-1}.$$

Proof  Briefly, one shows that $\Delta$ maps $\mathcal{P}_n^d$ onto $\mathcal{P}_{n-2}^d$; this uses the fact that $\Delta\bigl(\|x\|^2 P(x)\bigr) \ne 0$ whenever $P(x) \ne 0$.

There is a Pascal triangle for the dimensions. Let $a_{d,n} = \dim \mathcal{H}_n^d$; then $a_{d,0} = 1$, $a_{2,n} = 2$ for $n \ge 1$ and $a_{d+1,n} = a_{d+1,n-1} + a_{d,n}$ for $n \ge 1$. Note that $a_{3,n} = 2n+1$ and $a_{4,n} = (n+1)^2$. The basic device for constructing orthogonal bases is to set up the differential equation $\Delta\bigl[g(x_{m+1}, \ldots, x_d)\, f\bigl(\sum_{j=m}^d x_j^2,\, x_m\bigr)\bigr] = 0$, where $g$ is harmonic and homogeneous and $f$ is a polynomial.
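The dimension formula of Theorem 4.1.2 and the Pascal-triangle recurrence above are easy to check numerically; the following sketch uses only the standard library (the function name `dim_harmonic` is ours, introduced for illustration):

```python
from math import comb

def dim_harmonic(d, n):
    """dim H_n^d = C(n+d-1, d-1) - C(n+d-3, d-1), with the convention dim = 0 for n < 0."""
    if n < 0:
        return 0
    a = comb(n + d - 1, d - 1)
    b = comb(n + d - 3, d - 1) if n + d - 3 >= 0 else 0
    return a - b

# Pascal-triangle recurrence: a_{d+1,n} = a_{d+1,n-1} + a_{d,n} for n >= 1
for d in range(2, 7):
    for n in range(1, 12):
        assert dim_harmonic(d + 1, n) == dim_harmonic(d + 1, n - 1) + dim_harmonic(d, n)

# The special cases quoted in the text
assert all(dim_harmonic(2, n) == 2 for n in range(1, 10))
assert all(dim_harmonic(3, n) == 2 * n + 1 for n in range(10))
assert all(dim_harmonic(4, n) == (n + 1) ** 2 for n in range(10))
```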
Proposition 4.1.3  For $1 \le m < d$, suppose that $g(x_{m+1}, \ldots, x_d)$ is harmonic and homogeneous of degree $s$ in its variables, and that $f\bigl(\sum_{j=m}^d x_j^2,\, x_m\bigr)$ is homogeneous of degree $n$ in $x$; then $\Delta(gf) = 0$ implies that (see Definition 1.4.10)
$$f = c \Bigl(\sum_{j=m}^d x_j^2\Bigr)^{n/2} C_n^\lambda\biggl( x_m \Bigl(\sum_{j=m}^d x_j^2\Bigr)^{-1/2} \biggr)$$
for some constant $c$ and $\lambda = s + \tfrac12(d-m-1)$.

Proof  Let $\rho^2 = \sum_{j=m}^d x_j^2$ and apply the operator $\Delta$ to $g\, x_m^{n-2j}\rho^{2j}$. The product rule gives
$$\Delta\bigl(g\, x_m^{n-2j}\rho^{2j}\bigr) = x_m^{n-2j}\rho^{2j}\,\Delta g + g\,\Delta\bigl(x_m^{n-2j}\rho^{2j}\bigr) + 2\sum_{i=m+1}^d \frac{\partial g}{\partial x_i}\,\frac{\partial \rho^{2j}}{\partial x_i}\, x_m^{n-2j}$$
$$= 4gsj\, x_m^{n-2j}\rho^{2j-2} + g\bigl[\, 2j(d-m-1+2n-2j)\, x_m^{n-2j}\rho^{2j-2} + (n-2j)(n-2j-1)\, x_m^{n-2j-2}\rho^{2j} \,\bigr].$$
Setting $\Delta\bigl(g \sum_{j \le n/2} c_j x_m^{n-2j}\rho^{2j}\bigr) = 0$, we obtain the recurrence equation
$$2(j+1)(d-m+2n+2s-3-2j)\, c_{j+1} + (n-2j)(n-2j-1)\, c_j = 0,$$
with solution
$$c_j = \frac{\bigl(-\tfrac{n}{2}\bigr)_j \bigl(\tfrac{1-n}{2}\bigr)_j}{j!\,\bigl(1-n-\tfrac{d-m-1}{2}-s\bigr)_j}\, c_0.$$
Now let $f(x) = \sum_{j \le n/2} c_j x_m^{n-2j}\rho^{2j} = c'\rho^n C_n^\lambda(x_m/\rho)$, with $\lambda = s + (d-m-1)/2$, for some constant $c'$; then $\Delta(gf) = 0$. This uses the formula in Proposition 1.4.11 for Gegenbauer polynomials.
The easiest way to start this process is with the real and imaginary parts of $(x_{d-1} + \sqrt{-1}\,x_d)^s$, which can be written in terms of Chebyshev polynomials of the first and second kind; that is, $g_{s,0} = \rho^s T_s(x_{d-1}/\rho)$ and $g_{s-1,1} = x_d\,\rho^{s-1} U_{s-1}(x_{d-1}/\rho)$, where $\rho^2 = x_{d-1}^2 + x_d^2$, for $s \ge 1$ (and $g_{0,0} = 1$). Then Proposition 4.1.3, used inductively from $m = d-2$ down to $m = 1$, produces an orthogonal basis for $\mathcal{H}_n^d$: let $\mathbf{n} = (n_1, n_2, \ldots, n_{d-1}, 0)$ or $(n_1, n_2, \ldots, n_{d-1}, 1)$ and let
$$Y_{\mathbf n}(x) = g_{n_{d-1}, n_d}(x) \prod_{j=1}^{d-2} \Bigl[ (x_j^2 + \cdots + x_d^2)^{n_j/2}\, C_{n_j}^{\lambda_j}\bigl( x_j\,(x_j^2 + \cdots + x_d^2)^{-1/2} \bigr) \Bigr],$$
with $\lambda_j = \sum_{i=j+1}^d n_i + \tfrac12(d-j-1)$. Note that $Y_{\mathbf n}(x)$ is homogeneous of degree $|\mathbf n|$ and that the number of such $d$-tuples is
$$\binom{n+d-2}{d-2} + \binom{n+d-3}{d-2} = \binom{n+d-1}{d-1} - \binom{n+d-3}{d-1} = \dim \mathcal{H}_n^d.$$
The fact that these polynomials are pairwise orthogonal follows easily from their expression in spherical polar coordinates. Indeed, let
$$\begin{aligned} x_1 &= r\cos\theta_{d-1}, \\ x_2 &= r\sin\theta_{d-1}\cos\theta_{d-2}, \\ &\ \,\vdots \\ x_{d-1} &= r\sin\theta_{d-1}\cdots\sin\theta_2\cos\theta_1, \\ x_d &= r\sin\theta_{d-1}\cdots\sin\theta_2\sin\theta_1, \end{aligned} \tag{4.1.1}$$
with $r \ge 0$, $0 \le \theta_1 < 2\pi$ and $0 \le \theta_i \le \pi$ for $i \ge 2$. In these coordinates the surface measure on $S^{d-1}$ and the normalizing constant are
$$\mathrm{d}\omega = \prod_{j=1}^{d-2} \bigl(\sin\theta_{d-j}\bigr)^{d-j-1}\, \mathrm{d}\theta_1\,\mathrm{d}\theta_2 \cdots \mathrm{d}\theta_{d-1}, \tag{4.1.2}$$
$$\omega_{d-1} = \int_{S^{d-1}} \mathrm{d}\omega = \frac{2\pi^{d/2}}{\Gamma(\tfrac{d}{2})}, \tag{4.1.3}$$
where $\omega_{d-1}$ is the surface area of $S^{d-1}$.
Theorem 4.1.4  In the spherical polar coordinate system, let
$$Y_{\mathbf n}(x) = r^{|\mathbf n|}\, g'(x) \prod_{j=1}^{d-2} \bigl[ (\sin\theta_{d-j})^{\beta_j}\, C_{n_j}^{\lambda_j}(\cos\theta_{d-j}) \bigr],$$
where $\beta_j = \sum_{i=j+1}^d n_i$, the $\lambda_j$ are as above, and $g'(x) = \cos n_{d-1}\theta_1$ for $n_d = 0$ and $g'(x) = \sin(n_{d-1}+1)\theta_1$ for $n_d = 1$. Then $\{Y_{\mathbf n}\}$ is a mutually orthogonal basis of $\mathcal{H}_n^d$, and the $L^2$ norms of the polynomials are
$$\frac{1}{\omega_{d-1}} \int_{S^{d-1}} Y_{\mathbf n}^2\, \mathrm{d}\omega = a_{\mathbf n} \prod_{j=1}^{d-2} \frac{\lambda_j\, (2\lambda_j)_{n_j}\, \bigl(\tfrac{d-j}{2}\bigr)_{\beta_j}}{n_j!\, \bigl(\tfrac{d-j+1}{2}\bigr)_{\beta_j}\, (n_j + \lambda_j)},$$
where $a_{\mathbf n} = \tfrac12$ if $n_{d-1} + n_d > 0$, and $a_{\mathbf n} = 1$ otherwise.

The calculation of the norm uses the formulae in Subsection 1.4.3. We also note that in the expansion of $Y_{\mathbf n}(x)$ as a sum of monomials, the highest term in lexicographic order is (a multiple of) $x^{\mathbf n}$.
It is an elementary consequence of Green's theorem that homogeneous harmonic polynomials of different degrees are orthogonal with respect to the surface measure. In the following, $\partial/\partial n$ denotes the normal derivative.

Proposition 4.1.5  Suppose that $f, g$ are polynomials on $\mathbb{R}^d$; then
$$\int_{S^{d-1}} \Bigl( g\frac{\partial f}{\partial n} - f\frac{\partial g}{\partial n} \Bigr) \mathrm{d}\omega = \int_{B^d} (g\Delta f - f\Delta g)\, \mathrm{d}x,$$
and if $f, g$ are homogeneous then
$$(\deg f - \deg g) \int_{S^{d-1}} fg\, \mathrm{d}\omega = \int_{B^d} (g\Delta f - f\Delta g)\, \mathrm{d}x.$$

Proof  The first statement is Green's theorem specialized to the ball $B^d$ and its boundary $S^{d-1}$. On the sphere the normal derivative of a homogeneous polynomial is $\partial f/\partial n = \sum_{i=1}^d x_i\, \partial f/\partial x_i = (\deg f)\, f$, by Euler's formula.

Clearly, if $f, g$ are spherical harmonics of different degrees then they are orthogonal in $L^2\bigl(S^{d-1}, \mathrm{d}\omega\bigr)$.
There is another connection to the Laplace operator. For $x \in \mathbb{R}^d$, consider the spherical polar coordinates $(r, \xi)$ with $r > 0$ and $\xi \in S^{d-1}$ defined by $x = r\xi$.

Proposition 4.1.6  In spherical coordinates $(r, \theta_1, \ldots, \theta_{d-1})$, the Laplace operator can be written as
$$\Delta = \frac{\partial^2}{\partial r^2} + \frac{d-1}{r}\frac{\partial}{\partial r} + \frac{1}{r^2}\,\Delta_0, \tag{4.1.4}$$
where the spherical part $\Delta_0$ is given by
$$\Delta_0 = \Delta^{(\xi)} - \bigl\langle \xi, \nabla^{(\xi)} \bigr\rangle^2 - (d-2)\bigl\langle \xi, \nabla^{(\xi)} \bigr\rangle;$$
here $\Delta^{(\xi)}$ and $\nabla^{(\xi)}$ denote the Laplacian and the gradient with respect to $(\xi_1, \ldots, \xi_{d-1})$, respectively.
Proof  Write $r$ and $\xi_i$ in terms of $x$ via the change of variables $x \mapsto (r, \xi_1, \ldots, \xi_{d-1})$ and compute the partial derivatives; the chain rule implies that
$$\frac{\partial}{\partial x_i} = \xi_i \frac{\partial}{\partial r} + \frac{1}{r}\Bigl( \frac{\partial}{\partial \xi_i} - \xi_i \bigl\langle \xi, \nabla^{(\xi)} \bigr\rangle \Bigr), \qquad 1 \le i \le d,$$
where for $i = d$ we use the convention that $\xi_d^2 = 1 - \xi_1^2 - \cdots - \xi_{d-1}^2$ and define $\partial/\partial \xi_d = 0$. Repeating the above operation and simplifying, we obtain the second-order derivatives after some tedious computation. Adding these for $1 \le i \le d$, the terms that do not contribute cancel, since $\xi_1^2 + \cdots + \xi_d^2 = 1$, and we end up with the equation
$$\Delta = \frac{\partial^2}{\partial r^2} + \frac{d-1}{r}\frac{\partial}{\partial r} + \frac{1}{r^2}\Bigl[ \Delta^{(\xi)} - (d-1)\bigl\langle \xi, \nabla^{(\xi)} \bigr\rangle - \sum_{i=1}^{d-1} \xi_i \bigl\langle \xi, \nabla^{(\xi)} \bigr\rangle \frac{\partial}{\partial \xi_i} \Bigr].$$
The term with the summation sign can be verified to equal $\bigl\langle \xi, \nabla^{(\xi)} \bigr\rangle^2 - \bigl\langle \xi, \nabla^{(\xi)} \bigr\rangle$, from which the stated formula follows.
The differential operator $\Delta_0$ is called the Laplace–Beltrami operator.

Theorem 4.1.7  For $n = 0, 1, 2, \ldots$,
$$\Delta_0 Y = -n(n+d-2)\, Y, \qquad Y \in \mathcal{H}_n^d. \tag{4.1.5}$$

Proof  Let $Y \in \mathcal{H}_n^d$. Since $Y$ is homogeneous, $Y(x) = r^n Y(x')$, where $x' = x/\|x\|$. As $Y$ is harmonic, (4.1.4) shows that
$$0 = \Delta Y(x) = n(n-1)\, r^{n-2} Y(x') + (d-1)n\, r^{n-2} Y(x') + r^{n-2}\, \Delta_0 Y(x'),$$
which is, when restricted to the sphere, equation (4.1.5).
For a given $n$, let $P_n(x, y)$ be the reproducing kernel of $\mathcal{H}_n^d$ (as a space of functions on the sphere); then
$$P_n(x, y) = \sum_{\mathbf n} \|Y_{\mathbf n}\|^{-2}\, Y_{\mathbf n}(x)\, Y_{\mathbf n}(y), \qquad |\mathbf n| = n, \quad n_d = 0 \text{ or } 1.$$
However, the linear space $\mathcal{H}_n^d$ is invariant under the action of the orthogonal group $O(d)$ (that is, under $f(x) \mapsto f(xw)$ for $w \in O(d)$); the surface measure is also invariant, hence $P_n(xw, yw) = P_n(x, y)$ for all $w \in O(d)$. This implies that $P_n(x, y)$ depends only on the distance between $x$ and $y$ (both points being on the sphere) or, equivalently, on the inner product $\langle x, y \rangle = \sum_{i=1}^d x_i y_i$. Fix $y = (1, 0, \ldots, 0)$; the fact that $P_n(x, y) = f(\langle x, y\rangle) = f(x_1)$ must be the restriction of a homogeneous harmonic polynomial to the sphere shows that
$$P_n(x, y) = a_n \|x\|^n \|y\|^n\, C_n^{(d-2)/2}\bigl( \langle x, y\rangle / \|x\|\,\|y\| \bigr)$$
for some constant $a_n$. With respect to the normalized surface measure on $S^{d-1}$ we have $\omega_{d-1}^{-1} \int_{S^{d-1}} P_n(x, x)\, \mathrm{d}\omega(x) = \dim \mathcal{H}_n^d$; but $P_n(x, x)$ is constant, and thus $a_n C_n^{(d-2)/2}(1) = \dim \mathcal{H}_n^d$. Hence $a_n = (2n+d-2)/(d-2)$. Thus we have proved the following:

Theorem 4.1.8  The reproducing kernel $P_n(x, y)$ of $\mathcal{H}_n^d$ satisfies
$$P_n(x, y) = \frac{n+\lambda}{\lambda}\, \|x\|^n \|y\|^n\, C_n^\lambda\Bigl( \frac{\langle x, y\rangle}{\|x\|\,\|y\|} \Bigr), \qquad \lambda = \frac{d-2}{2}. \tag{4.1.6}$$

The role of the zonal harmonic $P_n$ in the Poisson summation kernel will be addressed in later chapters. Applying the reproducing kernel to the harmonic polynomial $x \mapsto \|x\|^n C_n^{(d-2)/2}\bigl(\langle x, z\rangle/\|x\|\bigr)$ for a fixed $z \in S^{d-1}$ implies the so-called Funk–Hecke formula
$$\frac{1}{\omega_{d-1}}\, \frac{2n+d-2}{d-2} \int_{S^{d-1}} C_n^{(d-2)/2}(\langle x, y\rangle)\, f(x)\, \mathrm{d}\omega(x) = f(y)$$
for any $f \in \mathcal{H}_n^d$ and $y \in S^{d-1}$.
4.2 Orthogonal Structures on $S^d$ and on $B^d$

Our main results on the orthogonal structure on the unit sphere will be deduced by lifting those on the unit ball $B^d$ to the unit sphere $S^d$, where $B^d := \{x \in \mathbb{R}^d : \|x\| \le 1\}$ and $S^d \subset \mathbb{R}^{d+1}$. We emphasize that we shall work with the unit sphere $S^d$ instead of $S^{d-1}$ in this section, since otherwise we would be working with $B^{d-1}$.

Throughout this section we fix the following notation: for $y \in \mathbb{R}^{d+1}$, write $y = (y_1, \ldots, y_d, y_{d+1}) = (y', y_{d+1})$ and use the polar coordinates
$$y = r(x, x_{d+1}), \qquad r = \|y\|, \quad (x, x_{d+1}) \in S^d. \tag{4.2.1}$$
Note that $(x, x_{d+1}) \in S^d$ implies immediately that $x \in B^d$. Here are the weight functions on $S^d$ that we shall consider:

Definition 4.2.1  A weight function $H$ defined on $\mathbb{R}^{d+1}$ is called S-symmetric if it is even with respect to $y_{d+1}$ and centrally symmetric with respect to the variables $y'$; that is, if
$$H(y', y_{d+1}) = H(y', -y_{d+1}) \quad\text{and}\quad H(y', y_{d+1}) = H(-y', y_{d+1}).$$
We further assume that the integral of $H$ over the sphere $S^d$ is nonzero.
Examples of S-symmetric functions include $H(y) = W(y_1^2, \ldots, y_{d+1}^2)$, which is even in each of its variables, and $H(y) = W(y_1, \ldots, y_d)\, h(y_{d+1})$, where $W$ is centrally symmetric on $\mathbb{R}^d$ and $h$ is even on $\mathbb{R}$. Note that the weight function $\prod_{i<j} |y_i - y_j|^\gamma$ is not an S-symmetric function, since it is not even with respect to $y_{d+1}$. Nevertheless, this function is centrally symmetric. In fact, it is easy to see that:

Lemma 4.2.2  If $H$ is an S-symmetric weight function on $\mathbb{R}^{d+1}$ then it is centrally symmetric; that is, $H(-y) = H(y)$ for $y \in \mathbb{R}^{d+1}$.
In association with a weight function $H$ defined on $\mathbb{R}^{d+1}$, define a weight function $W_H^B$ on $B^d$ by
$$W_H^B(x) = H\bigl(x, \sqrt{1-\|x\|^2}\,\bigr), \qquad x \in B^d.$$
If $H$ is S-symmetric then the assumption that $H$ is centrally symmetric with respect to the first $d$ variables implies that $W_H^B$ is centrally symmetric on $B^d$. Further, define a pair of weight functions on $B^d$,
$$W_1^B(x) = \frac{2\, W_H^B(x)}{\sqrt{1-\|x\|^2}} \quad\text{and}\quad W_2^B(x) = 2\, W_H^B(x)\, \sqrt{1-\|x\|^2},$$
respectively. We shall show that orthogonal polynomials with respect to $H$ on $S^d$ are closely related to those with respect to $W_1^B$ and $W_2^B$ on $B^d$. We need the following elementary lemma.
Lemma 4.2.3  For any integrable function $f$ defined on $S^d$,
$$\int_{S^d} f(y)\, \mathrm{d}\omega_d = \int_{B^d} \Bigl[ f\bigl(x, \sqrt{1-\|x\|^2}\,\bigr) + f\bigl(x, -\sqrt{1-\|x\|^2}\,\bigr) \Bigr] \frac{\mathrm{d}x}{\sqrt{1-\|x\|^2}}.$$

Proof  For $y \in S^d$ write $y = (\sqrt{1-t^2}\,x,\, t)$, where $x \in S^{d-1}$ and $-1 \le t \le 1$. It follows that
$$\mathrm{d}\omega_d(y) = (1-t^2)^{(d-2)/2}\, \mathrm{d}t\, \mathrm{d}\omega_{d-1}(x).$$
Making the change of variables $y \mapsto (\sqrt{1-t^2}\,x,\, t)$ gives
$$\int_{S^d} f(y)\, \mathrm{d}\omega_d = \int_{-1}^1 \int_{S^{d-1}} f\bigl(\sqrt{1-t^2}\,x,\, t\bigr)\, \mathrm{d}\omega_{d-1}\, (1-t^2)^{(d-2)/2}\, \mathrm{d}t$$
$$= \int_0^1 \int_{S^{d-1}} \Bigl[ f\bigl(\sqrt{1-t^2}\,x,\, t\bigr) + f\bigl(\sqrt{1-t^2}\,x,\, -t\bigr) \Bigr]\, \mathrm{d}\omega_{d-1}\, (1-t^2)^{(d-2)/2}\, \mathrm{d}t$$
$$= \int_0^1 \int_{S^{d-1}} \Bigl[ f\bigl(rx, \sqrt{1-r^2}\,\bigr) + f\bigl(rx, -\sqrt{1-r^2}\,\bigr) \Bigr]\, \mathrm{d}\omega_{d-1}\, \frac{r^{d-1}\, \mathrm{d}r}{\sqrt{1-r^2}},$$
from which the stated formula follows by the standard parameterization of the integral over $B^d$ in spherical polar coordinates $y = rx$ for $y \in B^d$, $0 < r \le 1$ and $x \in S^{d-1}$.
Recall the notation $\mathcal{V}_n^d(W)$ for the space of orthonormal polynomials of degree $n$ with respect to the weight function $W$. Denote by $\{P_\alpha\}_{|\alpha|=n}$ and $\{Q_\alpha\}_{|\alpha|=n}$ systems of orthonormal polynomials that form bases of $\mathcal{V}_n^d(W_1^B)$ and $\mathcal{V}_n^d(W_2^B)$, respectively. Keeping in mind the notation (4.2.1), define
$$Y_\alpha^{(1)}(y) = r^n P_\alpha(x) \quad\text{and}\quad Y_\beta^{(2)}(y) = r^n x_{d+1} Q_\beta(x), \tag{4.2.2}$$
where $|\alpha| = n$, $|\beta| = n-1$ and $\alpha, \beta \in \mathbb{N}_0^d$. These functions are, in fact, homogeneous orthogonal polynomials on $S^d$.

Theorem 4.2.4  Let $H$ be an S-symmetric weight function defined on $\mathbb{R}^{d+1}$. Then $Y_\alpha^{(1)}$ and $Y_\beta^{(2)}$ in (4.2.2) are homogeneous polynomials of degree $n$ on $\mathbb{R}^{d+1}$, and they satisfy
$$\int_{S^d} Y_\alpha^{(i)}(y)\, Y_\beta^{(j)}(y)\, H(y)\, \mathrm{d}\omega_d = \delta_{\alpha,\beta}\, \delta_{i,j}, \qquad i, j = 1, 2,$$
where if $i = j = 1$ then $|\alpha| = |\beta| = n$ and if $i = j = 2$ then $|\alpha| = |\beta| = n-1$.
Proof  Since both $W_1^B$ and $W_2^B$ are centrally symmetric, it follows from Theorem 3.3.11 that $P_\alpha$ and $Q_\beta$ are sums of monomials of even degree if $|\alpha|$ is even and sums of monomials of odd degree if $|\alpha|$ is odd. This allows us to write, for example,
$$P_\alpha(x) = \sum_{k=0}^{[n/2]} \sum_{|\beta|=n-2k} a_\beta\, x^\beta, \qquad a_\beta \in \mathbb{R}, \quad x \in B^d,$$
where $|\alpha| = n$, which implies that
$$Y_\alpha^{(1)}(y) = r^n P_\alpha(x) = \sum_{k=0}^{[n/2]} r^{2k} \sum_{|\beta|=n-2k} a_\beta\, (y')^\beta.$$
Since $r^2 = y_1^2 + \cdots + y_{d+1}^2$ and $y' = (y_1, \ldots, y_d)$, this shows that $Y_\alpha^{(1)}(y)$ is a homogeneous polynomial of degree $n$ in $y$. Upon using $r x_{d+1} = y_{d+1}$, a similar proof shows that $Y_\beta^{(2)}$ is homogeneous of degree $n$.

Since $Y_\alpha^{(1)}$, when restricted to $S^d$, is independent of $x_{d+1}$, and since $Y_\beta^{(2)}$ contains a single factor $x_{d+1}$, it follows that $Y_\alpha^{(1)}$ and $Y_\beta^{(2)}$ are orthogonal with respect to $H(y)\, \mathrm{d}\omega_d$ on $S^d$ for any $\alpha$ and $\beta$. Since $H$ is even with respect to its last variable, it follows from Lemma 4.2.3 that
$$\int_{S^d} Y_\alpha^{(1)}(y)\, Y_\beta^{(1)}(y)\, H(y)\, \mathrm{d}\omega_d = 2 \int_{B^d} P_\alpha(x) P_\beta(x)\, \frac{H\bigl(x, \sqrt{1-\|x\|^2}\,\bigr)}{\sqrt{1-\|x\|^2}}\, \mathrm{d}x = \int_{B^d} P_\alpha(x) P_\beta(x)\, W_1^B(x)\, \mathrm{d}x = \delta_{\alpha,\beta},$$
and, similarly, using the fact that $x_{d+1}^2 = 1 - \|x\|^2$, we see that the polynomials $Y_\beta^{(2)}$ are orthonormal.
In particular, the ordinary spherical harmonics (the case $H(y) = 1$) are related to the orthogonal polynomials with respect to the radial weight functions $W_0(x) = 1/\sqrt{1-\|x\|^2}$ and $W_1(x) = \sqrt{1-\|x\|^2}$ on $B^d$, both of which are special cases of the classical weight functions $W_\mu(x) = w_\mu (1-\|x\|^2)^{\mu-1/2}$; see Subsection 5.2. For $d = 2$, the spherical harmonics on $S^1$ are given in polar coordinates $(y_1, y_2) = r(x_1, x_2) = r(\cos\theta, \sin\theta)$ by the formulae $r^n \cos n\theta$ and $r^n \sin n\theta$, which can be written as
$$Y_n^{(1)}(y_1, y_2) = r^n T_n(x_1) \quad\text{and}\quad Y_n^{(2)}(y_1, y_2) = r^n x_2\, U_{n-1}(x_1),$$
where, with $t = \cos\theta$, $T_n(t) = \cos n\theta$ and $U_n(t) = \sin(n+1)\theta/\sin\theta$ are the Chebyshev polynomials of the first and the second kind, orthogonal with respect to $1/\sqrt{1-t^2}$ and $\sqrt{1-t^2}$, respectively; see Subsection 1.4.3.
Theorem 4.2.4 shows that the orthogonal polynomials with respect to an S-symmetric weight function are homogeneous. This suggests the following definition.

Definition 4.2.5  For a given weight function $H$, denote by $\mathcal{H}_n^{d+1}(H)$ the space of homogeneous orthogonal polynomials of degree $n$ with respect to $H\, \mathrm{d}\omega$ on $S^d$.

Using this definition, Theorem 4.2.4 can be restated as follows: the relation (4.2.2) defines a one-to-one correspondence between an orthonormal basis of $\mathcal{H}_n^{d+1}(H)$ and an orthonormal basis of $\mathcal{V}_n^d(W_1^B) \oplus x_{d+1} \mathcal{V}_{n-1}^d(W_2^B)$. Consequently, the orthogonal structure on $S^d$ can be studied using the orthogonal structure on $B^d$.
Theorem 4.2.6  Let $H$ be an S-symmetric function on $\mathbb{R}^{d+1}$. For each $n \in \mathbb{N}_0$,
$$\dim \mathcal{H}_n^{d+1}(H) = \binom{n+d}{d} - \binom{n+d-2}{d} = \dim \mathcal{P}_n^{d+1} - \dim \mathcal{P}_{n-2}^{d+1}.$$

Proof  From the orthogonality in Theorem 4.2.4, the polynomials in $\{Y_\alpha^{(1)}\} \cup \{Y_\beta^{(2)}\}$ are linearly independent. Hence it follows that
$$\dim \mathcal{H}_n^{d+1}(H) = r_n^d + r_{n-1}^d = \binom{n+d-1}{n} + \binom{n+d-2}{n-1},$$
where we use the convention that $\binom{k}{j} = 0$ if $j < 0$. Using the identity $\binom{n+m}{n} - \binom{n+m-1}{n} = \binom{n+m-1}{n-1}$, it is easy to verify that $\dim \mathcal{H}_n^{d+1}(H)$ is given by the formula stated in Theorem 4.1.2.
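The counting identity in this proof, $r_n^d + r_{n-1}^d = \binom{n+d}{d} - \binom{n+d-2}{d}$ with $r_n^d = \binom{n+d-1}{n}$ the number of monomials of exact degree $n$ in $d$ variables, can be checked exhaustively for small parameters (a stdlib-only sketch; the helper name `r` is ours):

```python
from math import comb

def r(n, d):
    """r_n^d = C(n+d-1, n): monomials of exact degree n in d variables (0 for n < 0)."""
    return comb(n + d - 1, n) if n >= 0 else 0

for d in range(1, 8):
    for n in range(12):
        lhs = r(n, d) + r(n - 1, d)
        rhs = comb(n + d, d) - (comb(n + d - 2, d) if n + d - 2 >= 0 else 0)
        assert lhs == rhs
```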
Theorem 4.2.7  Let $H$ be an S-symmetric function on $\mathbb{R}^{d+1}$. For each $n \in \mathbb{N}_0$,
$$\mathcal{P}_n^{d+1} = \bigoplus_{k=0}^{[n/2]} \|y\|^{2k}\, \mathcal{H}_{n-2k}^{d+1}(H);$$
that is, if $P \in \mathcal{P}_n^{d+1}$ then there is a unique decomposition
$$P(y) = \sum_{k=0}^{[n/2]} \|y\|^{2k}\, P_{n-2k}(y), \qquad P_{n-2k} \in \mathcal{H}_{n-2k}^{d+1}(H).$$

Proof  Since $P$ is homogeneous of degree $n$, we can write $P(y) = r^n P(x, x_{d+1})$, using the notation in (4.2.1). Using $x_{d+1}^2 = 1 - \|x\|^2$ whenever possible, we further write
$$P(y) = r^n P(x, x_{d+1}) = r^n \bigl[ p(x) + x_{d+1}\, q(x) \bigr], \qquad x \in B^d,$$
where $p$ and $q$ are polynomials of degree at most $n$ and $n-1$, respectively. Moreover, if $n$ is even then $p$ is a sum of monomials of even degree and $q$ is a sum of monomials of odd degree; if $n$ is odd then $p$ is a sum of monomials of odd degree and $q$ is a sum of monomials of even degree. Since $\{P_\alpha\}$ and $\{Q_\beta\}$ form bases of the appropriate polynomial spaces on $B^d$, and since the weight functions $W_1^B$ and $W_2^B$ are centrally symmetric, we have the unique expansions
$$p(x) = \sum_{k=0}^{[n/2]} \sum_{|\alpha|=n-2k} a_\alpha P_\alpha(x) \quad\text{and}\quad q(x) = \sum_{k=0}^{[(n-1)/2]} \sum_{|\beta|=n-2k-1} b_\beta Q_\beta(x).$$
Therefore, by the definitions of $Y_\alpha^{(1)}$ and $Y_\beta^{(2)}$,
$$P(y) = \sum_{k=0}^{[n/2]} r^{2k} \sum_{|\alpha|=n-2k} a_\alpha Y_\alpha^{(1)}(y) + \sum_{k=0}^{[(n-1)/2]} r^{2k} \sum_{|\beta|=n-2k-1} b_\beta Y_\beta^{(2)}(y),$$
which is the desired decomposition, since $r^{2k} = \|y\|^{2k}$. The uniqueness of the decomposition of $P(y)$ follows from the orthogonality in Theorem 4.2.4.
Let $H$ be an S-symmetric weight function and let $Y_\alpha^{(1)}$ and $Y_\beta^{(2)}$ be orthonormal polynomials with respect to $H$ that form an orthonormal basis of $\mathcal{H}_n^{d+1}(H)$. For $f \in L^2(H; S^d)$, its Fourier orthogonal expansion is defined by
$$f \sim \sum \bigl[ a_\alpha^{(1)}(f)\, Y_\alpha^{(1)} + a_\beta^{(2)}(f)\, Y_\beta^{(2)} \bigr],$$
where $a_\alpha^{(i)}(f) = \int_{S^d} f(x)\, Y_\alpha^{(i)}(x)\, H(x)\, \mathrm{d}\omega$. We define the $n$th component by
$$P_n(f; x) = \sum_{|\alpha|=n} a_\alpha^{(1)}(f)\, Y_\alpha^{(1)}(x) + \sum_{|\beta|=n-1} a_\beta^{(2)}(f)\, Y_\beta^{(2)}(x). \tag{4.2.3}$$
The expansion $f \sim \sum_{n=0}^\infty P_n(f)$ is also called the Laplace series. As in the case of orthogonal expansions for a weight function supported on a solid domain, the component $P_n(f; x)$ can be viewed as the orthogonal projection of $f$ onto the space $\mathcal{H}_n^{d+1}(H)$ and, being so, it is independent of the choice of basis of $\mathcal{H}_n^{d+1}(H)$. Define
$$P_n(H; x, y) = \sum_{|\alpha|=n} Y_\alpha^{(1)}(x)\, Y_\alpha^{(1)}(y) + \sum_{|\beta|=n-1} Y_\beta^{(2)}(x)\, Y_\beta^{(2)}(y); \tag{4.2.4}$$
this is the reproducing kernel of $\mathcal{H}_n^{d+1}(H)$, since it evidently satisfies
$$\int_{S^d} P_n(H; x, y)\, Y(y)\, H(y)\, \mathrm{d}\omega(y) = Y(x), \qquad Y \in \mathcal{H}_n^{d+1}(H),$$
which can be taken as the definition of $P_n(H; x, y)$. For ordinary spherical harmonic polynomials, $P_n$ is the zonal polynomial; see Section 4.1. The function $P_n(H)$ does not depend on the choice of a particular basis. It serves as an integral kernel for the component
$$S_n(f; x) = \int_{S^d} f(y)\, P_n(H; x, y)\, H(y)\, \mathrm{d}\omega(y). \tag{4.2.5}$$
As a consequence of Theorem 4.2.4, there is also a relation between the reproducing kernels of $\mathcal{H}_n^{d+1}(H)$ and $\mathcal{V}_n^d(W_1^B)$. Let us denote the reproducing kernel of $\mathcal{V}_n^d(W)$ by $P_n(W; x, y)$; see (3.6.2) for its definition.
Theorem 4.2.8  Let $H$ be an S-symmetric function and let $W_1^B$ be associated with $H$. Recall the notation (4.2.1). Then
$$P_n(W_1^B; x, y) = \tfrac12 \Bigl[ P_n\bigl(H; (x, x_{d+1}), \bigl(y, \sqrt{1-\|y\|^2}\,\bigr)\bigr) + P_n\bigl(H; (x, x_{d+1}), \bigl(y, -\sqrt{1-\|y\|^2}\,\bigr)\bigr) \Bigr], \tag{4.2.6}$$
where $x_{d+1} = \sqrt{1-\|x\|^2}$ and $\|x\|$ is the Euclidean norm of $x$.

Proof  From Theorem 4.2.4 and (4.2.4), we see that for $z = r(x, x_{d+1}) \in \mathbb{R}^{d+1}$ and $y \in B^d$ with $y_{d+1} = \sqrt{1-\|y\|^2}$,
$$P_n\bigl(H; z, (y, y_{d+1})\bigr) = r^n P_n(W_1^B; x, y) + z_{d+1}\, y_{d+1}\, r^{n-1} P_{n-1}(W_2^B; x, y),$$
from which the stated equation follows on using
$$r^n P_n(W_1^B; x, y) = \tfrac12 \Bigl[ P_n\bigl(H; z, \bigl(y, \sqrt{1-\|y\|^2}\,\bigr)\bigr) + P_n\bigl(H; z, \bigl(y, -\sqrt{1-\|y\|^2}\,\bigr)\bigr) \Bigr]$$
with $r = 1$.
In particular, as a consequence of the explicit formula (4.1.6) for the zonal harmonic, we have the following corollary.

Corollary 4.2.9  Let $W_0(x) := 1/\sqrt{1-\|x\|^2}$, $x \in B^d$. The reproducing kernel $P_n(W_0; \cdot, \cdot)$ of $\mathcal{V}_n^d(W_0)$ satisfies
$$P_n(W_0; x, y) = \frac{n + \tfrac{d-1}{2}}{d-1} \Bigl[ C_n^{(d-1)/2}\bigl( \langle x, y\rangle + \sqrt{1-\|x\|^2}\,\sqrt{1-\|y\|^2}\, \bigr) + C_n^{(d-1)/2}\bigl( \langle x, y\rangle - \sqrt{1-\|x\|^2}\,\sqrt{1-\|y\|^2}\, \bigr) \Bigr].$$

Moreover, by Theorem 4.2.4 and (4.2.2), the spherical harmonics in $\mathcal{H}_n^{d+1}$ that are even in $x_{d+1}$ form an orthogonal basis of $\mathcal{V}_n^d(W_0)$. In particular, an orthonormal basis of $\mathcal{V}_n^d(W_0)$ can be deduced from the explicit basis given in Theorem 4.1.4. The weight function $W_0$ is, however, a special case of the classical weight functions on the unit ball, to be discussed in the next chapter.
4.3 Orthogonal Structures on $B^d$ and on $S^{d+m-1}$
We consider a relation between the orthogonal polynomials on the ball $B^d$ and those on the sphere $S^{d+m-1}$, where $m \ge 1$.
A function $f$ defined on $\mathbb{R}^m$ is called positively homogeneous of order $\gamma$ if $f(tx) = t^{\gamma} f(x)$ for $t > 0$.
Definition 4.3.1  The weight function $H$ defined on $\mathbb{R}^{d+m}$ is called admissible if
$$ H(x) = H_1(x_1)\, H_2(x_2), \qquad x = (x_1, x_2) \in \mathbb{R}^{d+m}, \quad x_1 \in \mathbb{R}^d, \quad x_2 \in \mathbb{R}^m, $$
where we assume that $H_1$ is a centrally symmetric function and that $H_2$ is positively homogeneous of order $2\gamma$ and even in each of its variables.
Examples of admissible weight functions include $H(x) = c \prod_{i=1}^{d+m} |x_i|^{\gamma_i}$ for $\gamma_i \ge 0$, in which $H_1$ and $H_2$ are of the same form but with fewer variables, and $H(x) = c\, H_1(x_1) \prod |x_i^2 - x_j^2|^{\gamma_{i,j}}$, where the product is over $d+1 \le i < j \le d+m$ and $\gamma = \sum_{i,j} \gamma_{i,j}$.
Associated with $H$, we define a weight function $W_H^m$ on $B^d$ by
$$ W_H^m(x) = H_1(x)\,(1-|x|^2)^{\gamma+(m-2)/2}, \qquad x \in B^d. \qquad (4.3.1) $$
For convenience, we assume that $H$ has unit integral on $S^{d+m-1}$ and $W_H^m$ has unit integral on $B^d$. We show that the orthogonal polynomials with respect to $H$ and $W_H^m$ are related, using the following elementary lemma.
Lemma 4.3.2  Let $d$ and $m$ be positive integers and $m \ge 2$. Then
$$ \int_{S^{d+m-1}} f(y)\, d\omega = \int_{B^d} (1-|x|^2)^{(m-2)/2} \int_{S^{m-1}} f\big(x, \sqrt{1-|x|^2}\,\xi\big)\, d\omega(\xi)\, dx. $$
126 Orthogonal Polynomials on the Unit Sphere
Proof  Making the change of variables $y \mapsto \big(x, \sqrt{1-|x|^2}\,\xi\big)$, $x \in B^d$ and $\xi \in S^{m-1}$, in the integral over $S^{d+m-1}$ shows that
$$ d\omega_{d+m}(y) = (1-|x|^2)^{(m-2)/2}\, dx\, d\omega_m(\xi). $$
When $m = 1$, the lemma becomes, in a limiting sense, Lemma 4.2.3.
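The lemma can be sanity-checked numerically in the smallest nontrivial case. The sketch below (not from the text; the test function and quadrature sizes are illustrative choices) takes $d = 1$, $m = 2$ and $f(y) = y_2^2$, so the left-hand side is $\int_{S^2} y_2^2\, d\omega = 4\pi/3$; the right-hand side is evaluated by elementary midpoint quadrature.

```python
import math

# Check Lemma 4.3.2 for d = 1, m = 2 with f(y) = y_2^2 (second coordinate).
# Left-hand side: integral of y_2^2 over the sphere S^2, which equals 4*pi/3.
lhs = 4 * math.pi / 3

# Right-hand side: int_{B^1} (1-x^2)^{(m-2)/2} int_{S^1} f(x, sqrt(1-x^2) xi) domega(xi) dx.
# Here (m-2)/2 = 0 and f(x, sqrt(1-x^2) xi) = (1 - x^2) * xi_1^2 with xi = (cos t, sin t).
def inner(x):
    n, h = 500, 2 * math.pi / 500
    return sum((1 - x * x) * math.cos(i * h) ** 2 * h for i in range(n))

n, h = 500, 2.0 / 500
rhs = sum(inner(-1 + (i + 0.5) * h) * h for i in range(n))
```

Both sides agree to quadrature accuracy, which is the content of the lemma for this $f$.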
In the following we denote by $\{P_\alpha(W_H^m)\}$ a sequence of orthonormal polynomials with respect to $W_H^m$ on $B^d$. Since $H_1$, and thus $W_H^m$, is centrally symmetric, it follows from Theorem 3.3.11 that $P_\alpha(W_H^m)$ is a sum of monomials of even degree if $|\alpha|$ is even and a sum of monomials of odd degree if $|\alpha|$ is odd. These polynomials are related to homogeneous orthogonal polynomials with respect to $H$ on $S^{d+m-1}$.
Theorem 4.3.3  Let $H$ be an admissible weight function. Then the functions
$$ Y_\alpha(y) = |y|^{|\alpha|} P_\alpha(W_H^m; x_1), \qquad y = r(x_1, x_2) \in \mathbb{R}^{d+m}, \quad x_1 \in B^d, $$
are homogeneous polynomials in $y$ and are orthonormal with respect to $H(y)\, d\omega$ on $S^{d+m-1}$.
Proof  We first prove that $Y_\alpha$ is orthogonal to polynomials of lower degree. It is sufficient to prove that $Y_\alpha$ is orthogonal to $g_\beta(y) = y^\beta$ for $\beta \in \mathbb{N}_0^{d+m}$ and $|\beta| \le |\alpha| - 1$. From Lemma 4.3.2,
$$ \int_{S^{d+m-1}} Y_\alpha(y) g_\beta(y) H(y)\, d\omega = \int_{B^d} P_\alpha(W_H^m; x_1) \Big[ \int_{S^{m-1}} g_\beta\big(x_1, \sqrt{1-|x_1|^2}\,\xi\big) H_2(\xi)\, d\omega(\xi) \Big] W_H^m(x_1)\, dx_1. $$
If $g_\beta$ is odd with respect to at least one of the variables $y_{d+1}, \ldots, y_{d+m}$ then we can conclude, using the fact that $H_2$ is even in each of its variables, that the integral inside the square brackets is zero. Hence, $Y_\alpha$ is orthogonal to $g_\beta$ in this case. If $g_\beta$ is even in every one of the variables $y_{d+1}, \ldots, y_{d+m}$ then the function inside the square brackets is a polynomial in $x_1$ of degree at most $|\alpha| - 1$, from which we conclude that $Y_\alpha$ is orthogonal to $g_\beta$ by the orthogonality of $P_\alpha$ with respect to $W_H^m$ on $B^d$. Moreover,
$$ \int_{S^{d+m-1}} Y_\alpha(y) Y_\beta(y) H(y)\, d\omega = \int_{B^d} P_\alpha(W_H^m; x_1) P_\beta(W_H^m; x_1) W_H^m(x_1)\, dx_1, $$
which shows that $\{Y_\alpha\}$ is an orthonormal set. The fact that $Y_\alpha$ is homogeneous of degree $n$ in $y$ follows as in the proof of Theorem 4.2.4.
If, in the limiting case $m = 1$, the integral over $S^{m-1}$ is taken as point evaluation at $1$ and $-1$ with weight $\frac12$ for each point, then we recover a special case of Theorem 4.2.4. In this case $W_H^1(x) = H\big(x, \sqrt{1-|x|^2}\big)\big/\sqrt{1-|x|^2}$.
In the following we show that an orthogonal basis of $\mathcal{H}_n^{m+d}(H)$ can be derived from orthogonal polynomials on $B^d$ and on $S^{m-1}$. Let $H_2$ be defined as in Definition 4.3.1. Then, by Theorem 4.2.4, the space $\mathcal{H}_k^m(H_2)$ has an orthonormal basis $\{S_\beta^k\}$ such that each $S_\beta^k$ is homogeneous, and $S_\beta^k$ is even in each of its variables if $k$ is even and odd in each of its variables if $k$ is odd.
For $x \in \mathbb{R}^d$, we write $|x|_1 := x_1 + \cdots + x_d$ in the following.
Theorem 4.3.4  Let $\{S_\beta^k\}$ be an orthonormal basis of $\mathcal{H}_k^m(H_2)$ as above, and let $\{P_\alpha^{n-k}(W_H^{m+2k})\}$ be an orthonormal basis for $\mathcal{V}_{n-k}^d(W_H^{m+2k})$ with respect to the weight function $W_H^{m+2k}$ on $B^d$, where $0 \le k \le n$. Then the polynomials
$$ Y_{\alpha,\beta,k}^n(y) = r^n [A_{m,k}]^{-1} P_\alpha^{n-k}(W_H^{m+2k}; y_1')\, S_\beta^k(y_2'), \qquad y = r(y_1', y_2'), \quad r = |y|, $$
where $y_1' \in B^d$, $y_2' \in B^m$ and $[A_{m,k}]^2 = \int_{B^d} W_H^m(x)(1-|x|^2)^k\, dx$, are homogeneous of degree $n$ in $y$ and form an orthonormal basis of $\mathcal{H}_n^{d+m}(H)$.
Proof  Since the last property of $S_\beta^k$ in Theorem 4.2.4 implies that it has the same parity as $k$, and $P_\alpha^{n-k}(W_H^{m+2k})$ has the same parity as $n-k$, the fact that $Y_{\alpha,\beta,k}^n$ is homogeneous of degree $n$ in $y$ follows as in the proof of Theorem 4.3.3.
We prove that $Y_{\alpha,\beta,k}^n$ is orthogonal to all polynomials of lower degree. Again, it is sufficient to show that $Y_{\alpha,\beta,k}^n$ is orthogonal to $g_\gamma(y) = y^\gamma$, $|\gamma|_1 \le n-1$. Using the notation $y = r(y_1', y_2')$, we write $g_\gamma$ as
$$ g_\gamma(y) = r^{|\gamma|_1} (y_1')^{\gamma_1} (y_2')^{\gamma_2}, \qquad |\gamma_1|_1 + |\gamma_2|_1 = |\gamma|_1 \le n-1. $$
Using the fact that $|y_2'|^2 = 1 - |y_1'|^2$ and the integral formula in Lemma 4.3.2, we conclude that
$$ \int_{S^{d+m-1}} Y_{\alpha,\beta,k}^n(y) g_\gamma(y) H(y)\, d\omega_{d+m} = [A_{m,k}]^{-1} \int_{B^d} P_\alpha^{n-k}(W_H^{m+2k}; y_1')\, (y_1')^{\gamma_1} (1-|y_1'|^2)^{(|\gamma_2|_1-k)/2}\, W_H^{m+2k}(y_1')\, dy_1' \Big[ \int_{S^{m-1}} S_\beta^k(\xi)\, \xi^{\gamma_2} H_2(\xi)\, d\omega_m(\xi) \Big], $$
where we have used the fact that $W_H^{m+2k}(x) = W_H^m(x)(1-|x|^2)^k$. We can show that this integral is zero by considering the following cases. If $|\gamma_2|_1 < k$ then the integral in the square brackets is zero because of the orthogonality of $S_\beta^k$. If $|\gamma_2|_1 \ge k$ and $|\gamma_2|_1 - k$ is an odd integer then $|\gamma_2|_1$ and $k$ have different parity; hence, since $S_\beta^k$ is homogeneous, a change of variable $\xi \mapsto -\xi$ leads to the conclusion that the integral in the square brackets is zero. If $|\gamma_2|_1 \ge k$ and $|\gamma_2|_1 - k$ is an even integer then $(y_1')^{\gamma_1}(1-|y_1'|^2)^{(|\gamma_2|_1-k)/2}$ is a polynomial of degree $|\gamma_1|_1 + |\gamma_2|_1 - k \le n-1-k$; hence, the integral is zero by the orthogonality of $P_\alpha^{n-k}(W_H^{m+2k})$. The same consideration also shows that the polynomial $Y_{\alpha,\beta,k}^n$ has norm 1 in $L^2(H, S^{d+m-1})$, since $P_\alpha^{n-k}(W_H^{m+2k})$ and $S_\beta^k$ are normalized.
Finally, we show that $\{Y_{\alpha,\beta,k}^n\}$ forms a basis of $\mathcal{H}_n^{m+d}(H)$. By orthogonality, the elements of $\{Y_{\alpha,\beta,k}^n\}$ are linearly independent; since $\dim \mathcal{H}_k^m(H_2) = r_k^{m-1} + r_{k-1}^{m-1}$, their cardinality is equal to
$$ \sum_{k=0}^{n} r_{n-k}^d \big(r_k^{m-1} + r_{k-1}^{m-1}\big) = \sum_{k=0}^{n} r_{n-k}^d\, r_k^{m-1} + \sum_{k=0}^{n-1} r_{n-1-k}^d\, r_k^{m-1}, $$
which is the same as the dimension of $\mathcal{H}_n^{m+d}(H)$, as can be verified by the combinatorial identity
$$ \sum_{k=0}^{n} r_{n-k}^d\, r_k^{m-1} = \sum_{k=0}^{n} \binom{n-k+d-1}{n-k}\binom{k+m-2}{k} = \binom{n+d+m-2}{n} = r_n^{d+m-1}. $$
This completes the proof.
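The combinatorial identity used at the end of this proof can be verified directly; the short script below (illustrative, not from the text) checks it over a range of $d$, $m$, $n$, writing $r_n^d = \binom{n+d-1}{n}$.

```python
from math import comb

def r(d, n):
    # r^d_n = dimension of the space of homogeneous polynomials of degree n in d variables
    return comb(n + d - 1, n) if n >= 0 else 0

# sum_{k=0}^n r^d_{n-k} r^{m-1}_k should equal C(n+d+m-2, n) = r^{d+m-1}_n
ok = all(
    sum(r(d, n - k) * r(m - 1, k) for k in range(n + 1))
    == comb(n + d + m - 2, n) == r(d + m - 1, n)
    for d in range(1, 5) for m in range(2, 6) for n in range(0, 9)
)
```

This is the Vandermonde (Chu) convolution, reflecting the product of the generating functions $(1-x)^{-d}(1-x)^{-(m-1)} = (1-x)^{-(d+m-1)}$.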
There is also a relation between the reproducing kernels of $\mathcal{H}_n^{d+m+1}(H)$ and $\mathcal{V}_n^d(W_H^{m+1})$, which we now establish.
Theorem 4.3.5  Let $H$ be an admissible weight function defined on $\mathbb{R}^{d+m+1}$ and let $W_H^{m+1}$ be the associated weight function on $B^d$. Then
$$ P_n(W_H^{m+1}; x_1, y_1) = \int_{S^m} P_n\big(H; x, \big(y_1, \sqrt{1-|y_1|^2}\,\xi\big)\big)\, H_2(\xi)\, d\omega_m(\xi), $$
where $x = (x_1, x_2) \in S^{d+m}$ with $x_1 \in B^d$, and $y = (y_1, y_2) \in S^{d+m}$ with $y_1 \in B^d$, $y_2 = |y_2|\xi \in B^{m+1}$ and $\xi \in S^m$.
Proof  Since $P_n(H)$ is the reproducing kernel of $\mathcal{H}_n^{d+m+1}(H)$, we can write it in terms of the orthonormal basis $\{Y_{\alpha,\beta,k}^n\}$ of Theorem 4.3.4:
$$ P_n(H; x, y) = \sum Y_{\alpha,\beta,k}^n(x)\, Y_{\alpha,\beta,k}^n(y). $$
Integrating $P_n(H; x, y)$ with respect to $H_2(y_2)$ over $S^m$, we write $|y_2|^2 = 1-|y_1|^2$ and use the fact that $S_\beta^k$ is homogeneous and orthogonal with respect to $H_2(y_2)\, d\omega_m$ to conclude that the integral of $Y_{\alpha,\beta,k}^n$ over $S^m$ is zero for all $k \ne 0$, while for $k = 0$ we have $\beta = 0$ and the $Y_{\alpha,0,0}^n(y)$ are the polynomials $P_\alpha^n(W_H^{m+1}; y_1)$. Hence we conclude that
$$ \int_{S^m} P_n\big(H; x, \big(y_1, \sqrt{1-|y_1|^2}\,\xi\big)\big)\, H_2(\xi)\, d\omega_m(\xi) = \sum_\alpha P_\alpha^n(W_H^{m+1}; x_1)\, P_\alpha^n(W_H^{m+1}; y_1) = P_n(W_H^{m+1}; x_1, y_1), $$
by the definition of the reproducing kernel.
In the case that $H$, $H_1$ and $H_2$ are all constant, $W_H^m(x)$ becomes the case $\mu = \frac{m-1}{2}$ of the classical weight function $W_\mu$ on the ball $B^d$,
$$ W_\mu(x) := (1-|x|^2)^{\mu-1/2}, \qquad x \in B^d. \qquad (4.3.2) $$
As a consequence of the general result in Theorem 4.3.3, we see that, for $m = 1, 2, \ldots$, the orthogonal polynomials in $\mathcal{V}_n^d(W_{(m-1)/2})$ can be identified with the spherical harmonics in $\mathcal{H}_n^{d+m}$. In particular, it is possible to deduce an orthogonal basis of $\mathcal{V}_n^d(W_{(m-1)/2})$ from the basis of $\mathcal{H}_n^{d+m}$, which can be found from Theorem 4.1.4. Since the orthogonal polynomials for the classical weight function $W_\mu$ will be studied in the next chapter, we shall not derive this basis here. One consequence of this connection is the following corollary of Theorem 4.3.5 and the explicit formula for the zonal harmonic (4.1.6):
Corollary 4.3.6  For $m = 1, 2, 3, \ldots$, the reproducing kernel of $\mathcal{V}_n^d(W_\mu)$ with $\mu = \frac{m}{2}$ satisfies
$$ P_n(W_\mu; x, y) = c_\mu\, \frac{n+\lambda}{\lambda} \int_{-1}^{1} C_n^\lambda\big(\langle x, y\rangle + t\sqrt{1-|x|^2}\sqrt{1-|y|^2}\big)\, (1-t^2)^{\mu-1}\, dt, $$
where $\lambda = \mu + \frac{d-1}{2}$ and $c_\mu = \Gamma(\mu+\frac12)\big/\big(\sqrt{\pi}\,\Gamma(\mu)\big)$.
This formula holds for all $\mu > 0$, as will be proved in Chapter 8. When $\mu = 0$, the reproducing kernel for $W_0$ is given in Corollary 4.2.9, which can be derived from the above corollary by taking the limit $\mu \to 0$.
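For $d = 1$ the corollary can be tested numerically: with $\mu = 1$ we have $\lambda = 1$, $c_1 = \frac12$, and $C_n^1$ is the Chebyshev polynomial of the second kind, so the kernel built from the integral formula should reproduce $C_n^1$ against the normalized weight $\frac{2}{\pi}\sqrt{1-y^2}$. The sketch below (the quadrature sizes and the choices $\mu = 1$, $n = 4$ are illustrative assumptions, not from the text) checks this reproducing property.

```python
import math

def gegenbauer(n, lam, x):
    # three-term recurrence: k C_k = 2(k+lam-1) x C_{k-1} - (k+2lam-2) C_{k-2}
    c0, c1 = 1.0, 2.0 * lam * x
    if n == 0:
        return c0
    for k in range(2, n + 1):
        c0, c1 = c1, (2.0 * (k + lam - 1) * x * c1 - (k + 2 * lam - 2) * c0) / k
    return c1

mu = lam = 1.0          # d = 1, so lam = mu + (d-1)/2 = mu
n = 4
c_mu = math.gamma(mu + 0.5) / (math.sqrt(math.pi) * math.gamma(mu))   # = 1/2

def kernel(x, y):
    # c_mu (n+lam)/lam * int_{-1}^1 C_n^lam(xy + t sx sy) (1-t^2)^{mu-1} dt; (1-t^2)^0 = 1
    sx, sy = math.sqrt(1 - x * x), math.sqrt(1 - y * y)
    m, h = 400, 2.0 / 400
    s = sum(gegenbauer(n, lam, x * y + (-1 + (i + 0.5) * h) * sx * sy) * h for i in range(m))
    return c_mu * (n + lam) / lam * s

# reproducing property: int K(x0, y) C_n^1(y) w(y) dy = C_n^1(x0),
# with w(y) = (2/pi) sqrt(1 - y^2) of unit integral; substitute y = cos(theta).
x0 = 0.3
m, h = 400, math.pi / 400
integral = sum(kernel(x0, math.cos((i + 0.5) * h)) * gegenbauer(n, lam, math.cos((i + 0.5) * h))
               * (2.0 / math.pi) * math.sin((i + 0.5) * h) ** 2 * h for i in range(m))
```

For this one-dimensional case the integral formula reduces to the classical Gegenbauer product formula, which is why the check closes.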
4.4 Orthogonal Structures on the Simplex
In this section we consider the orthogonal polynomials on the simplex
$$ T^d := \{x \in \mathbb{R}^d : x_1 \ge 0, \ldots, x_d \ge 0,\ 1-|x|_1 \ge 0\}, $$
where $|x|_1 = x_1 + \cdots + x_d$. The structure of the orthogonal polynomials can be derived from that on the unit sphere or from that on the unit ball. The connection is based on the following lemma.
Lemma 4.4.1  Let $f(x_1^2, \ldots, x_d^2)$ be an integrable function defined on $B^d$. Then
$$ \int_{B^d} f(y_1^2, \ldots, y_d^2)\, dy = \int_{T^d} f(x_1, \ldots, x_d)\, \frac{dx}{\sqrt{x_1 \cdots x_d}}. $$
Proof  Since $f(x_1^2, \ldots, x_d^2)$ is an even function in each of its variables, the left-hand side of the stated formula can be written as $2^d$ times the integral over $B_+^d = \{x \in B^d : x_1 \ge 0, \ldots, x_d \ge 0\}$. It can be seen to be equal to the right-hand side upon making the change of variables $x_j = y_j^2$, $1 \le j \le d$, in the integral over $B_+^d$, for which $2^d\, dy = (x_1 \cdots x_d)^{-1/2}\, dx$.
Definition 4.4.2  Let $W^B(x) = W(x_1^2, \ldots, x_d^2)$ be a weight function defined on $B^d$. Associated with $W^B$, define a weight function on $T^d$ by
$$ W^T(y) = \frac{W(y_1, \ldots, y_d)}{\sqrt{y_1 \cdots y_d}}, \qquad y = (y_1, \ldots, y_d) \in T^d. $$
Evidently, the weight function $W^B$ is invariant under the group $\mathbb{Z}_2^d$, that is, invariant under sign changes. If a polynomial is invariant under $\mathbb{Z}_2^d$ then it is necessarily of even degree, since it is even in each of its variables.
Definition 4.4.3  Define $\mathcal{V}_{2n}^d(W^B, \mathbb{Z}_2^d) = \{P \in \mathcal{V}_{2n}^d(W^B) : P(x) = P(xw),\ w \in \mathbb{Z}_2^d\}$. Let $\{P_\alpha^{2n}\}$ be the elements of $\mathcal{V}_{2n}^d(W^B, \mathbb{Z}_2^d)$; define polynomials $R_\alpha^n$ by
$$ P_\alpha^{2n}(x) = R_\alpha^n(x_1^2, \ldots, x_d^2). \qquad (4.4.1) $$
Theorem 4.4.4  Let $W^B$ and $W^T$ be defined as above. The relation (4.4.1) defines a one-to-one correspondence between an orthonormal basis of $\mathcal{V}_n^d(W^T)$ and an orthonormal basis of $\mathcal{V}_{2n}^d(W^B, \mathbb{Z}_2^d)$.
Proof  Assume that $\{R_\alpha\}_{|\alpha|=n}$ is a set of orthonormal polynomials in $\mathcal{V}_n^d(W^T)$. If $\beta \in \mathbb{N}_0^d$ has one odd component then the integral of $P_\alpha(x) x^\beta$ with respect to $W^B$ over $B^d$ is zero. If all components of $\beta$ are even and $|\beta| < 2n$ then it can be written as $\beta = 2\gamma$ with $\gamma \in \mathbb{N}_0^d$ and $|\gamma| \le n-1$. Using Lemma 4.4.1, we obtain
$$ \int_{B^d} P_\alpha(x)\, x^{2\gamma}\, W^B(x)\, dx = \int_{T^d} R_\alpha(y)\, y^\gamma\, W^T(y)\, dy = 0, $$
by the orthogonality of $R_\alpha$. This relation also shows that $R_\alpha$ is orthogonal to all polynomials of degree at most $n-1$ if $P_\alpha$ is an orthogonal polynomial in $\mathcal{V}_{2n}^d(W^B, \mathbb{Z}_2^d)$. Moreover, Lemma 4.4.1 also shows that the $L^2(W^B; B^d)$ norm of $P_\alpha$ is the same as the $L^2(W^T; T^d)$ norm of $R_\alpha$.
The connection also extends to the reproducing kernels. Let us denote by $P_n(W^X; \cdot, \cdot)$ the reproducing kernel of $\mathcal{V}_n^d(W^X)$ on the domain $X^d$, for $X = B$ or $T$.
Theorem 4.4.5  For $n = 0, 1, 2, \ldots$ and $x, y \in T^d$,
$$ P_n(W^T; x, y) = 2^{-d} \sum_{\varepsilon \in \mathbb{Z}_2^d} P_{2n}\big(W^B; \sqrt{x}, \varepsilon\sqrt{y}\big), \qquad (4.4.2) $$
where $\sqrt{x} := (\sqrt{x_1}, \ldots, \sqrt{x_d})$ and $\varepsilon u := (\varepsilon_1 u_1, \ldots, \varepsilon_d u_d)$.
Proof  Directly from the definition of the reproducing kernel, it is not difficult to see that
$$ P(x, y) := 2^{-d} \sum_{\varepsilon \in \mathbb{Z}_2^d} P_{2n}(W^B; x, \varepsilon y), \qquad x, y \in B^d, $$
is the reproducing kernel of $\mathcal{V}_{2n}^d(W^B, \mathbb{Z}_2^d)$ and, as such, can be written as
$$ P(x, y) = \sum_{|\alpha|=2n} P_\alpha^{2n}(W^B; x)\, P_\alpha^{2n}(W^B; y) $$
in terms of an orthonormal basis of $\mathcal{V}_{2n}^d(W^B, \mathbb{Z}_2^d)$. The polynomial $P_\alpha^{2n}(W^B; x)$ is necessarily even in each of its variables, so that (4.4.2) follows from Theorem 4.4.4 and the definition of the reproducing kernel with respect to $W^T$.
As a consequence of this theorem and Theorem 4.2.4, we also obtain a relation between the orthogonal polynomials on $T^d$ and the homogeneous orthogonal polynomials on $S^d$. Let $H(y) = W(y_1^2, \ldots, y_{d+1}^2)$ be a weight function defined on $\mathbb{R}^{d+1}$ and assume that its restriction to $S^d$ is not zero. Then $H$ is an S-symmetric weight function; see Definition 4.2.1. Associated with $H$, define a weight function $W_H^T$ on the simplex $T^d$ by
$$ W_H^T(x) = \frac{2\, W(x_1, \ldots, x_d, 1-|x|_1)}{\sqrt{x_1 \cdots x_d\, (1-|x|_1)}}, \qquad x \in T^d. $$
Let $R_\alpha$ be an element of $\mathcal{V}_n^d(W_H^T)$. Using the coordinates $y = r(x, x_{d+1})$ for $y \in \mathbb{R}^{d+1}$, define
$$ S_\alpha^{2n}(y) = r^{2n} R_\alpha(x_1^2, \ldots, x_d^2). \qquad (4.4.3) $$
By Theorems 4.2.4 and 4.4.4, we see that $S_\alpha^{2n}$ is a homogeneous orthogonal polynomial in $\mathcal{H}_{2n}^{d+1}(H)$ and is invariant under $\mathbb{Z}_2^{d+1}$. Denote the subspace of $\mathbb{Z}_2^{d+1}$-invariant elements of $\mathcal{H}_{2n}^{d+1}(H)$ by $\mathcal{H}_{2n}^{d+1}(H, \mathbb{Z}_2^{d+1})$. Then $S_\alpha^{2n}$ is an element of $\mathcal{H}_{2n}^{d+1}(H, \mathbb{Z}_2^{d+1})$, and we conclude:
Theorem 4.4.6  Let $H$ and $W_H^T$ be defined as above. The relation (4.4.3) defines a one-to-one correspondence between an orthonormal basis of $\mathcal{H}_{2n}^{d+1}(H, \mathbb{Z}_2^{d+1})$ and an orthonormal basis of $\mathcal{V}_n^d(W_H^T)$.
As an immediate consequence of the above two theorems, we have
Corollary 4.4.7  Let $H$ and $W^B$ be defined as above. Then
$$ \dim \mathcal{V}_{2n}^d(W^B, \mathbb{Z}_2^d) = \binom{n+d-1}{n} \qquad\text{and}\qquad \dim \mathcal{H}_{2n}^{d+1}(H, \mathbb{Z}_2^{d+1}) = \binom{n+d-1}{n}. $$
Working on the simplex $T^d$, it is often convenient to use the homogeneous coordinates $x = (x_1, \ldots, x_{d+1})$ with $|x|_1 = 1$; that is, we identify $T^d$ with
$$ T_{\mathrm{hom}}^d := \{x \in \mathbb{R}^{d+1} : x_1 \ge 0, \ldots, x_{d+1} \ge 0,\ |x|_1 = 1\} $$
in $\mathbb{R}^{d+1}$. This allows us to derive a homogeneous basis of orthogonal polynomials. Indeed, since $H$ is even in each of its variables, the space $\mathcal{H}_{2n}^{d+1}(H, \mathbb{Z}_2^{d+1})$ has a basis that is even in each of its variables and whose elements can be written in the form $S_\alpha^n(x_1^2, \ldots, x_{d+1}^2)$.
Proposition 4.4.8  Let $H$ be defined as above and let $\{S_\alpha^n(x_1^2, \ldots, x_{d+1}^2) : |\alpha| = n,\ \alpha \in \mathbb{N}_0^d\}$ be an orthonormal homogeneous basis of $\mathcal{H}_{2n}^{d+1}(H, \mathbb{Z}_2^{d+1})$. Then $\{S_\alpha^n(x_1, \ldots, x_{d+1}) : |\alpha| = n,\ \alpha \in \mathbb{N}_0^d\}$ forms an orthonormal homogeneous basis of $\mathcal{V}_n^d(W_H^T)$.
Proof  It follows from Lemmas 4.2.3 and 4.4.1 that
$$ \int_{S^d} f(y_1^2, \ldots, y_{d+1}^2)\, d\omega = 2 \int_{|x|_1=1} f(x_1, \ldots, x_{d+1})\, \frac{dx}{\sqrt{x_1 \cdots x_{d+1}}}, \qquad (4.4.4) $$
where we use the homogeneous coordinates in the integral over $T^d$ and write the domain of integration as $|x|_1 = 1$ to make this clear. This formula can also be obtained by writing the integral over $S^d$ as $2^{d+1}$ times the integral over $S_+^d$ and then making the change of variables $x_i = y_i^2$, $1 \le i \le d+1$. The change of variables gives the stated result.
For reproducing kernels, this leads to the following relation:
Theorem 4.4.9  For $n = 0, 1, 2, \ldots$ and $x, y \in T_{\mathrm{hom}}^d$,
$$ P_n(W^T; x, y) = 2^{-d-1} \sum_{\varepsilon \in \mathbb{Z}_2^{d+1}} P_{2n}\big(H; \sqrt{x}, \varepsilon\sqrt{y}\big), \qquad (4.4.5) $$
where $\sqrt{x} := (\sqrt{x_1}, \ldots, \sqrt{x_{d+1}})$ and $\varepsilon u := (\varepsilon_1 u_1, \ldots, \varepsilon_{d+1} u_{d+1})$.
In particular, for $H(x) = 1$ the weight function $W_H^T$ becomes
$$ W_0(x) = \frac{1}{\sqrt{x_1 \cdots x_d\, (1-|x|_1)}}, \qquad x \in T^d. $$
Moreover, by Definition 4.4.2 this weight is also closely related to $W_0^B(x) = 1/\sqrt{1-|x|^2}$ on the unit ball. In particular, it follows that the orthogonal polynomials with respect to $W_0$ on $T^d$ are equivalent to those spherical harmonics on $S^d$ that are even in every variable and, in turn, are related to the orthogonal polynomials with respect to $W_0^B$ that are even in every variable. Furthermore, from the explicit formula for the zonal harmonic (4.1.6) we can deduce the following corollary.
Corollary 4.4.10  For $W_0$ on $T^d$, the reproducing kernel $P_n(W_0; \cdot, \cdot)$ of $\mathcal{V}_n^d(W_0)$ satisfies, with $\lambda = \frac{d-1}{2}$,
$$ P_n(W_0; x, y) = \frac{2n+\lambda}{\lambda}\, \frac{(\lambda)_n}{(\frac12)_n}\, \frac{1}{2^{d+1}} \sum_{\varepsilon \in \mathbb{Z}_2^{d+1}} P_n^{(\lambda-1/2,\,-1/2)}\big(2z(x, y, \varepsilon)^2 - 1\big) $$
for $x, y \in T^d$, where $z(x, y, \varepsilon) = \sqrt{x_1 y_1}\,\varepsilon_1 + \cdots + \sqrt{x_{d+1} y_{d+1}}\,\varepsilon_{d+1}$ with $x_{d+1} = 1-|x|_1$ and $y_{d+1} = 1-|y|_1$.
Proof  Taking $W^T = W_0$ in (4.4.5) and using (4.1.6), we conclude that
$$ P_n(W_0; x, y) = \frac{2n+\lambda}{\lambda}\, \frac{1}{2^{d+1}} \sum_{\varepsilon \in \mathbb{Z}_2^{d+1}} C_{2n}^\lambda\big(z(x, y, \varepsilon)\big), $$
which gives the stated formula by the relation $C_{2n}^\lambda(t) = \frac{(\lambda)_n}{(\frac12)_n} P_n^{(\lambda-1/2,\,-1/2)}(2t^2-1)$.
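The Gegenbauer–Jacobi relation invoked in the last step can be spot-checked numerically. The sketch below (the parameter choices $\lambda = 3/2$, $n = 3$ are illustrative) evaluates both sides from their standard recurrence and terminating-series representations.

```python
import math

def poch(a, k):
    # Pochhammer symbol (a)_k
    r = 1.0
    for i in range(k):
        r *= a + i
    return r

def gegenbauer(n, lam, x):
    c0, c1 = 1.0, 2.0 * lam * x
    if n == 0:
        return c0
    for k in range(2, n + 1):
        c0, c1 = c1, (2.0 * (k + lam - 1) * x * c1 - (k + 2 * lam - 2) * c0) / k
    return c1

def jacobi(n, a, b, x):
    # terminating 2F1 series for the Jacobi polynomial P_n^{(a,b)}
    s = sum(poch(-n, k) * poch(n + a + b + 1, k)
            / (poch(a + 1, k) * math.factorial(k)) * ((1 - x) / 2) ** k
            for k in range(n + 1))
    return poch(a + 1, n) / math.factorial(n) * s

lam, n = 1.5, 3
ratio = poch(lam, n) / poch(0.5, n)
# C_{2n}^lam(t) = (lam)_n / (1/2)_n * P_n^{(lam-1/2, -1/2)}(2 t^2 - 1)
max_err = max(abs(gegenbauer(2 * n, lam, t) - ratio * jacobi(n, lam - 0.5, -0.5, 2 * t * t - 1))
              for t in [i / 10.0 - 1.0 for i in range(21)])
```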
4.5 Van der Corput–Schaake Inequality
Just as there are Bernstein inequalities relating polynomials of one variable and their derivatives, there is an inequality relating homogeneous polynomials and their gradients. The inequality will be needed in Chapter 6. For $n = 1, 2, \ldots$ and $p \in \mathcal{P}_n^d$, we define a sup-norm on $\mathcal{P}_n^d$ by
$$ \|p\|_S := \sup\{|p(x)| : \|x\| = 1\}, $$
and a similar norm related to the gradient:
$$ \|p\|_\nabla := \frac{1}{n} \sup\{|\langle u, \nabla p(x)\rangle| : \|x\| = 1 = \|u\|\}. $$
Because $\sum_{i=1}^d x_i\, \partial p(x)/\partial x_i = n p(x)$, it is clear that $\|p\|_S \le \|p\|_\nabla$. The remarkable fact, proven by van der Corput and Schaake [1935], is that equality always holds for real polynomials. Their method was to first prove it in $\mathbb{R}^2$ and then extend it to any $\mathbb{R}^d$. The $\mathbb{R}^2$ case depends on polar coordinates and simple properties of trigonometric polynomials. In this section all polynomials are real-valued. The maximum number of sign changes of a homogeneous polynomial on $S^1$ provides a key device.
Lemma 4.5.1  Suppose that $p \in \mathcal{P}_n^2$; then $p$ can be written as
$$ c \prod_{j=1}^{n-2m} \langle x, u_j\rangle \prod_{i=1}^{m} \big(\|x\|^2 + \langle x, v_i\rangle^2\big) $$
for some $m$, $c \in \mathbb{R}$ and $u_1, \ldots, u_{n-2m} \in S^1$, $v_1, \ldots, v_m \in \mathbb{R}^2$. The trigonometric polynomial $p(\theta) = p(\cos\theta, \sin\theta)$ has no more than $2n$ sign changes in any interval $\theta_0 < \theta \le \theta_0 + 2\pi$ with $p(\theta_0) \ne 0$.
Proof  By a rotation of coordinates if necessary, we can assume that $p(1, 0) \ne 0$; thus $p(\theta) = (\sin\theta)^n g(\cot\theta)$, where $g$ is a polynomial of degree $n$. Hence $g(t) = c_0 \prod_{j=1}^{n-2m}(t - r_j) \prod_{i=1}^{m}(t^2 + b_i t + c_i)$ with real numbers $c_0, r_j, b_i, c_i$ and $b_i^2 - 4c_i < 0$ for each $i$ (and some $m$). This factorization implies the claimed expression for $p$ and also exhibits the possible sign changes of $p$.
Theorem 4.5.2  Let $p(x_1, x_2) \in \mathcal{P}_n^2$ and $M = \sup\{|p(x)| : x_1^2 + x_2^2 = 1\}$; then
$$ |\langle u, \nabla p(x)\rangle| \le M n \|u\| (x_1^2 + x_2^2)^{(n-1)/2} \quad\text{for any } u \in \mathbb{R}^2. $$
Proof  Since $\nabla p$ is homogeneous of degree $n-1$, it suffices to prove the claim for $\|x\| = 1$. Accordingly, let $x_0 = (\cos\theta_0, \sin\theta_0)$ for an arbitrary $\theta_0 \in (-\pi, \pi]$ and make the change of variables
$$ y = \big(r\cos(\theta-\theta_0),\ r\sin(\theta-\theta_0)\big) = (x_1\cos\theta_0 + x_2\sin\theta_0,\ -x_1\sin\theta_0 + x_2\cos\theta_0), $$
where $x = (r\cos\theta, r\sin\theta)$. Clearly $(\frac{\partial p}{\partial y_1})^2 + (\frac{\partial p}{\partial y_2})^2 = (\frac{\partial p}{\partial x_1})^2 + (\frac{\partial p}{\partial x_2})^2 = \|\nabla p\|^2$, all terms being evaluated at the same point. Express $p$ in terms of $y_1, y_2$ as $\sum_{j=0}^n a_j y_1^{n-j} y_2^j$; then $(\partial p/\partial y_1)^2 + (\partial p/\partial y_2)^2$, all terms being evaluated at $y = (1, 0)$, corresponding to $x = x_0$, equals $n^2 a_0^2 + a_1^2$. If $a_1 = 0$ then $\|\nabla p(x_0)\|^2 = n^2 a_0^2 = n^2 p(x_0)^2 \le n^2 M^2$, as required. Assume now that $a_1 \ne 0$ and, by way of contradiction, assume that $n^2 a_0^2 + a_1^2 > n^2 M^2$. There is an angle $\varphi \ne 0$ such that
$$ a_0 \cos n\varphi + \frac{a_1}{n} \sin n\varphi = \sqrt{a_0^2 + \frac{a_1^2}{n^2}}. $$
Let $f(\theta) = a_0 \cos n\theta + (a_1/n)\sin n\theta - \sum_{j=0}^n a_j (\cos\theta)^{n-j}(\sin\theta)^j$. Denote by $[x]$ the integer part of $x \in \mathbb{R}$. Because
$$ \cos n\theta = \cos^n\theta + \sum_{j=1}^{[n/2]} \binom{n}{2j} (-1)^j (\cos\theta)^{n-2j}(\sin\theta)^{2j}, $$
$$ \sin n\theta = n\cos^{n-1}\theta\, \sin\theta + \sum_{j=1}^{[(n-1)/2]} \binom{n}{2j+1} (-1)^j (\cos\theta)^{n-1-2j}(\sin\theta)^{2j+1}, $$
it follows that $f(\theta) = (\sin\theta)^2 g(\cos\theta, \sin\theta)$, where $g$ is a homogeneous polynomial of degree $n-2$. Consider the values
$$ f\Big(\varphi + \frac{j\pi}{n}\Big) = (-1)^j \sqrt{a_0^2 + \frac{a_1^2}{n^2}} - p(y_j), $$
where $y_j = (\cos\theta_j, \sin\theta_j)$ and $\theta_j = \varphi + j\pi/n + \theta_0$, $0 \le j \le 2n-1$. Since $|p(y_j)| \le M$, we have $(-1)^j f(\varphi + j\pi/n) \ge \sqrt{a_0^2 + a_1^2/n^2} - M > 0$. This implies that $g(\cos\theta, \sin\theta)$ changes sign $2n$ times in the interval $\varphi < \theta \le \varphi + 2\pi$. This contradicts the fact that $g \in \mathcal{P}_{n-2}^2$ can change its sign at most $2(n-2)$ times, by the lemma. Thus $\sqrt{a_0^2 + a_1^2/n^2} \le M$.
Van der Corput and Schaake proved a stronger result: if $\sqrt{a_0^2 + a_1^2/n^2} = M$ for some particular $x_0$ then either $|a_0| = |p(x_0)| = M$ (so that $\|\nabla p\|$ is maximized at the same point as $|p|$), or $\|\nabla p(x)\|$ is constant on the circle and $p(\cos\theta, \sin\theta) = M\cos n(\theta - \theta_0)$ for some $\theta_0$.
Theorem 4.5.3  Let $p \in \mathcal{P}_n^d$; then $|\langle u, \nabla p(x)\rangle| \le n \|p\|_S \|u\| \|x\|^{n-1}$ for any $u \in \mathbb{R}^d$.
Proof  For any $x_0 \in \mathbb{R}^d$ with $\|x_0\| = 1$ there is a plane $E$ through the origin containing both $x_0$ and $\nabla p(x_0)$ (it is unique, in general). By a rotation of coordinates transform $x_0$ to $(1, 0, \ldots)$ and $E$ to $\{(x_1, x_2, 0, \ldots)\}$. In this coordinate system $\nabla p(x_0) = (v_1, v_2, 0, \ldots)$ and, by Theorem 4.5.2, $(v_1^2 + v_2^2)^{1/2} \le n \sup\{|p(x)| : \|x\| = 1 \text{ and } x \in E\} \le n\|p\|_S$.
Corollary 4.5.4  For $p \in \mathcal{P}_n^d$ and $n \ge 1$, $\|p\|_S = \|p\|_\nabla$.
There is another way of defining $\|p\|_\nabla$.
Proposition 4.5.5  For $p \in \mathcal{P}_n^d$,
$$ \|p\|_\nabla = \frac{1}{n!} \sup\Big\{ \Big| \prod_{j=1}^n \langle y^{(j)}, \nabla\rangle\, p(x) \Big| : x \in S^{d-1},\ y^{(j)} \in S^{d-1} \text{ for all } j \Big\}. $$
Proof  The claim is true for $n = 1$. Assume that it is true for some $n$, and let $p \in \mathcal{P}_{n+1}^d$; for any $y^{(n+1)} \in S^{d-1}$ the function $g(x) = \langle y^{(n+1)}, \nabla\rangle p(x)$ is homogeneous of degree $n$. Thus
$$ \sup\{|\langle y^{(n+1)}, \nabla\rangle p(x)| : x \in S^{d-1}\} = \|g\|_S = \|g\|_\nabla = \frac{1}{n!} \sup\Big\{ \Big| \prod_{j=1}^n \langle y^{(j)}, \nabla\rangle\, g(x) \Big| : x \in S^{d-1},\ y^{(j)} \in S^{d-1} \text{ for all } j \Big\}, $$
by Corollary 4.5.4 and the inductive hypothesis. But
$$ \|p\|_\nabla = \frac{1}{n+1} \sup\{|\langle y^{(n+1)}, \nabla\rangle p(x)| : x, y^{(n+1)} \in S^{d-1}\}, $$
and this proves the claim for $n+1$.
4.6 Notes
There are several books that have sections on classical spherical harmonics. See, for example, Dunkl and Ramirez [1971], Stein and Weiss [1971], Helgason [1984], Axler, Bourdon and Ramey [1992], Groemer [1996] and Müller [1997]. Two recent books that contain the development of spherical harmonics and their applications are Atkinson and Han [2012], which includes applications in the numerical analysis of partial differential equations, and Dai and Xu [2013], which addresses approximation theory and harmonic analysis on the sphere. The Jacobi polynomials appear as spherical harmonics with an invariant property in Braaksma and Meulenbeld [1968] and Dijksma and Koornwinder [1971].
A study of orthogonal structures on the sphere and on the ball was carried out in Xu [1998a, 2001a]. The relation between the reproducing kernels of spherical harmonics and of orthogonal polynomials on the unit ball can be used to relate the behavior of the Fourier orthogonal expansions on the sphere to those on the ball, as will be discussed in Chapter 9. The relation between orthogonal polynomials with respect to $1/\sqrt{1-|x|^2}$ on $B^d$ and ordinary spherical harmonics on $S^d$, and more generally between $W_\mu$ with $\mu = (m-1)/2$ on $B^d$ and spherical harmonics on $S^{d+m-1}$, is observed in the explicit formulae for the biorthogonal polynomials (see Section 5.2) in the work of Hermite, Didon, Appell and Kampé de Fériet; see Chapter XII, Vol. II, of Erdélyi et al. [1953].
The relation between the orthogonal structures on the sphere, ball and simplex was studied in Xu [1998c]. The relation between the reproducing kernels will be used in Chapter 8 to derive a compact formula for the Jacobi weight on the ball and on the simplex. These relations have implications for other topics, such as cubature formulae and approximation on these domains; see Xu [2006c].
5
Examples of Orthogonal Polynomials in Several Variables
In this chapter we present a number of examples of orthogonal polynomials in several variables. Most of these polynomials, but not all, are separable in the sense that they are given in terms of the classical orthogonal polynomials of one variable. Many of our examples are extensions of orthogonal polynomials in two variables to more than two variables. We concentrate mostly on basic results in this chapter and discuss, for example, only classical results on regular domains, leaving further results to later chapters, where they can be developed and often extended in more general settings.
One essential difference between orthogonal polynomials in one variable and in several variables is that the bases of $\mathcal{V}_n^d$ are not unique for $d \ge 2$. For $d = 1$, there is essentially one element in $\mathcal{V}_n^1$, since it is one-dimensional. For $d \ge 2$, there can be infinitely many different bases. Besides orthonormal bases, several other types of basis are worth mentioning. One is the monic orthogonal basis defined by
$$ Q_\alpha(x) = x^\alpha + R_\alpha(x), \qquad |\alpha| = n, \quad R_\alpha \in \Pi_{n-1}^d; \qquad (5.0.1) $$
that is, each $Q_\alpha$ of degree $n$ contains exactly one monomial of degree $n$. Because the elements of a basis are not necessarily mutually orthogonal, it is sometimes useful to find a biorthogonal basis, that is, another basis $\{P_\alpha\}$ of $\mathcal{V}_n^d$ such that $\langle Q_\alpha, P_\beta\rangle = 0$ whenever $\alpha \ne \beta$. The relations between various orthogonal bases were discussed in Section 3.2.
5.1 Orthogonal Polynomials for Simple Weight Functions
In this section we consider weight functions in several variables that arise from weight functions of one variable and whose orthogonal polynomials can thus be easily constructed. We consider two cases: one is the product weight function and the other is the radial weight function.
5.1.1 Product weight functions
For $1 \le j \le d$, let $w_j$ be a weight function on a subset $I_j$ of $\mathbb{R}$ and let $\{p_{j,n}\}_{n=0}^\infty$ be a sequence of polynomials that are orthogonal with respect to $w_j$. Let $W$ be the product weight function
$$ W(x) := w_1(x_1) \cdots w_d(x_d), \qquad x \in I_1 \times \cdots \times I_d. $$
The product structure implies immediately that
$$ P_\alpha(x) := p_{1,\alpha_1}(x_1) \cdots p_{d,\alpha_d}(x_d), \qquad \alpha \in \mathbb{N}_0^d, $$
is an orthogonal polynomial of degree $|\alpha|$ with respect to $W$ on $I_1 \times \cdots \times I_d$. Furthermore, denote by $\mathcal{V}_n^d(W)$ the space of orthogonal polynomials with respect to $W$. We immediately have the following:
Theorem 5.1.1  For $n = 0, 1, 2, \ldots$, the set $\{P_\alpha : |\alpha| = n\}$ is a mutually orthogonal basis of $\mathcal{V}_n^d(W)$.
If the polynomials $p_{j,n}$ are orthonormal with respect to $w_j$ for each $j$ then the polynomials $P_\alpha$ are also orthonormal with respect to $W$. Furthermore, the polynomial $P_\alpha$ is also a constant multiple of a monic orthogonal polynomial. In fact, a monic orthogonal basis is, up to constant multiples, also an orthonormal basis only when the weight function is of product type. As an example we consider the multiple Jacobi weight function
$$ W_{a,b}(x) = \prod_{i=1}^d (1-x_i)^{a_i}(1+x_i)^{b_i} \qquad (5.1.1) $$
on the domain $[-1, 1]^d$. An orthogonal basis $\{P_\alpha\}$ for the space $\mathcal{V}_n^d(W_{a,b})$ is given in terms of the Jacobi polynomials $P_{\alpha_j}^{(a_j, b_j)}$ by
$$ P_\alpha(W_{a,b}; x) = P_{\alpha_1}^{(a_1, b_1)}(x_1) \cdots P_{\alpha_d}^{(a_d, b_d)}(x_d), \qquad |\alpha| = n, \qquad (5.1.2) $$
where, to emphasize the dependence of the polynomials $P_\alpha$ on the weight function, we include $W_{a,b}$ in the notation.
The product Hermite and Laguerre polynomials are two other examples. They are given in later subsections.
5.1.2 Rotation-invariant weight functions
A rotation-invariant weight function $W$ on $\mathbb{R}^d$ is a function of $\|x\|$, also called a radial function. Let $w$ be a weight function defined on $[0, \infty)$ that is determined by its moments. We consider the rotation-invariant weight function
$$ W_{\mathrm{rad}}(x) = w(\|x\|), \qquad x \in \mathbb{R}^d. $$
An orthogonal basis for $\mathcal{V}_n^d(W_{\mathrm{rad}})$ can be constructed from the spherical harmonics and orthogonal polynomials of one variable.
For $0 \le j \le n/2$, let $\{Y_\nu^{n-2j} : 1 \le \nu \le a_{n-2j}^d\}$ denote an orthonormal basis for the space $\mathcal{H}_{n-2j}^d$ of spherical harmonics, where $a_n^d = \dim \mathcal{H}_n^d$. Fix $j$ and $n$ with $0 \le j \le n/2$, and let
$$ w_{j,n}(t) := |t|^{2n-4j+d-1} w(|t|), \qquad t \in \mathbb{R}. $$
Denote by $p_m^{(2n-4j+d-1)}$ the polynomials orthogonal with respect to $w_{j,n}$.
Theorem 5.1.2  For $0 \le j \le n/2$ and $1 \le \nu \le a_{n-2j}^d$, define
$$ P_{j,\nu}^n(x) := p_{2j}^{(2n-4j+d-1)}(\|x\|)\, Y_\nu^{n-2j}(x). \qquad (5.1.3) $$
Then the $P_{j,\nu}^n$ form an orthonormal basis of $\mathcal{V}_n^d(W_{\mathrm{rad}})$.
Proof  Since the weight function $|t|^{2n-4j+d-1} w(|t|)$ is an even function, the polynomial $p_{2j}^{(2n-4j+d-1)}(t)$ is also even, so that $P_{j,\nu}^n(x)$ is indeed a polynomial of degree $n$ in $x$. To prove the orthogonality, we use the integral in polar coordinates,
$$ \int_{\mathbb{R}^d} f(x)\, dx = \int_0^\infty r^{d-1} \int_{S^{d-1}} f(rx')\, d\omega(x')\, dr. \qquad (5.1.4) $$
Since $Y_\nu^{n-2j}(x) = r^{n-2j} Y_\nu^{n-2j}(x')$, it follows from the orthonormality of the $Y_\nu^{n-2j}$ that
$$ \int_{\mathbb{R}^d} P_{j,\nu}^n(x) P_{j',\nu'}^m(x) W_{\mathrm{rad}}(x)\, dx = \delta_{n-2j,\, m-2j'}\, \delta_{\nu,\nu'}\, \sigma_{d-1} \int_0^\infty r^{2n-4j+d-1} p_{2j}^{(2n-4j+d-1)}(r)\, p_{2j'}^{(2n-4j+d-1)}(r)\, w(r)\, dr. $$
The last integral is 0 if $j \ne j'$, by the orthogonality with respect to $w_{j,n}$. This establishes the orthogonality of the $P_{j,\nu}^n$. To show that they form a basis of $\mathcal{V}_n^d(W_{\mathrm{rad}})$, we need to show that
$$ \sum_{0 \le j \le n/2} \dim \mathcal{H}_{n-2j}^d = \dim \mathcal{V}_n^d(W_{\mathrm{rad}}) = \dim \mathcal{P}_n^d, $$
which, however, follows directly from Theorem 4.2.7.
5.1.3 Multiple Hermite polynomials on $\mathbb{R}^d$
The multiple Hermite polynomials on $\mathbb{R}^d$ are polynomials orthogonal with respect to the classical weight function
$$ W^H(x) = e^{-\|x\|^2}, \qquad x \in \mathbb{R}^d. $$
The normalization constant of $W^H$ is $\pi^{-d/2}$, since
$$ \int_{\mathbb{R}^d} e^{-\|x\|^2}\, dx = \prod_{i=1}^d \int_{\mathbb{R}} e^{-x_i^2}\, dx_i = \pi^{d/2}. $$
The Hermite weight function is both a product weight function and a rotation-invariant function. The space $\mathcal{V}_n^d(W^H)$ of orthogonal polynomials therefore has two explicit mutually orthogonal bases.
First orthonormal basis  Since $W^H$ is a product of Hermite weight functions of one variable, the multiple Hermite polynomials defined by
$$ P_\alpha(W^H; x) = H_{\alpha_1}(x_1) \cdots H_{\alpha_d}(x_d), \qquad |\alpha| = n, \qquad (5.1.5) $$
form a mutually orthogonal basis for $\mathcal{V}_n^d$ with respect to $W^H$, as shown in Theorem 5.1.1. If we replace $H_{\alpha_i}$ by the orthonormal Hermite polynomials then the basis becomes orthonormal.
Second orthonormal basis  Since $W^H$ is rotation invariant, it has a basis given by
$$ P_{j,\nu}(W^H; x) = [h_{j,n}^H]^{-1} L_j^{n-2j+(d-2)/2}(\|x\|^2)\, Y_\nu^{n-2j}(x), \qquad (5.1.6) $$
where $L_j^a$ is a Laguerre polynomial and the constant $h_{j,n}^H$ is given by the formula
$$ [h_{j,n}^H]^2 = \big(\tfrac{d}{2}\big)_{n-j}\big/\, j!. $$
That the $P_{j,\nu}$ are mutually orthogonal follows from Theorem 5.1.2, since, by the orthogonality of the $L_m^a$, the polynomials $L_m^{n-2j+(d-2)/2}(t^2)$ can be seen to be orthogonal with respect to $|t|^{2n-4j+d-1} e^{-t^2}$ after the change of variable $s = t^2$. Since the $Y_\nu^{n-2j}$ are orthonormal with respect to $d\omega/\sigma_{d-1}$, the normalization constant is derived from
$$ [h_{j,n}^H]^2 = \frac{\sigma_{d-1}}{2\pi^{d/2}} \int_0^\infty \big[L_j^{n-2j+(d-2)/2}(t)\big]^2 e^{-t}\, t^{n-2j+(d-2)/2}\, dt $$
and the $L^2$ norm of the Laguerre polynomials.
The orthogonal polynomials in $\mathcal{V}_n^d(W^H)$ are eigenfunctions of a second-order differential operator: for $n = 0, 1, 2, \ldots$,
$$ \Big(\Delta - 2\sum_{i=1}^d x_i \frac{\partial}{\partial x_i}\Big) P = -2nP, \qquad P \in \mathcal{V}_n^d(W^H). \qquad (5.1.7) $$
This fact follows easily from the product structure of the basis (5.1.5) and the differential equation satisfied by the Hermite polynomials (Subsection 1.4.1).
5.1.4 Multiple Laguerre polynomials on $\mathbb{R}_+^d$
These are orthogonal polynomials with respect to the weight function
$$ W_\kappa^L(x) = x_1^{\kappa_1} \cdots x_d^{\kappa_d}\, e^{-|x|_1}, \qquad \kappa_i > -1, \quad x \in \mathbb{R}_+^d, $$
which is a product of Laguerre weight functions in one variable. The normalization constant follows from
$$ \int_{\mathbb{R}_+^d} W_\kappa^L(x)\, dx = \prod_{i=1}^d \Gamma(\kappa_i + 1). $$
The multiple Laguerre polynomials defined by
$$ P_\alpha(W_\kappa^L; x) = L_{\alpha_1}^{\kappa_1}(x_1) \cdots L_{\alpha_d}^{\kappa_d}(x_d), \qquad |\alpha| = n, \qquad (5.1.8) $$
are polynomials that are orthogonal with respect to $W_\kappa^L$ and, as shown in Theorem 5.1.1, the set $\{P_\alpha(W_\kappa^L) : |\alpha| = n\}$ is a mutually orthogonal basis for $\mathcal{V}_n^d(W_\kappa^L)$; it becomes an orthonormal basis if we replace $L_{\alpha_i}^{\kappa_i}$ by the orthonormal Laguerre polynomials.
The orthogonal polynomials in $\mathcal{V}_n^d(W_\kappa^L)$ are eigenfunctions of a second-order differential operator: for $n = 0, 1, 2, \ldots$,
$$ \sum_{i=1}^d x_i \frac{\partial^2 P}{\partial x_i^2} + \sum_{i=1}^d (\kappa_i + 1 - x_i) \frac{\partial P}{\partial x_i} = -nP. \qquad (5.1.9) $$
This fact follows from the product nature of (5.1.8) and the differential equation satisfied by the Laguerre polynomials (Subsection 1.4.2).
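Similarly, (5.1.9) can be checked pointwise from the one-variable identities $(L_n^\kappa)' = -L_{n-1}^{\kappa+1}$ and $(L_n^\kappa)'' = L_{n-2}^{\kappa+2}$; the parameters below are illustrative.

```python
import random

def laguerre(n, k, x):
    # (m+1) L_{m+1} = (2m+1+k-x) L_m - (m+k) L_{m-1}
    l0, l1 = 1.0, 1.0 + k - x
    if n == 0:
        return l0
    for m in range(1, n):
        l0, l1 = l1, ((2 * m + 1 + k - x) * l1 - (m + k) * l0) / (m + 1)
    return l1

def d1(n, k, x):    # (L_n^k)' = -L_{n-1}^{k+1}
    return -laguerre(n - 1, k + 1, x) if n > 0 else 0.0

def d2(n, k, x):    # (L_n^k)'' = L_{n-2}^{k+2}
    return laguerre(n - 2, k + 2, x) if n > 1 else 0.0

random.seed(0)
alpha, kappa = (3, 2), (0.5, 1.5)
n = sum(alpha)
max_err = 0.0
for _ in range(50):
    x = [random.uniform(0.1, 4.0) for _ in range(2)]
    f = [laguerre(alpha[i], kappa[i], x[i]) for i in range(2)]
    lhs = sum((x[i] * d2(alpha[i], kappa[i], x[i])
               + (kappa[i] + 1 - x[i]) * d1(alpha[i], kappa[i], x[i])) * f[1 - i]
              for i in range(2))
    # the operator applied to the product basis element should give -n P
    max_err = max(max_err, abs(lhs + n * f[0] * f[1]))
```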
5.2 Classical Orthogonal Polynomials on the Unit Ball
The classical orthogonal polynomials on the unit ball $B^d$ of $\mathbb{R}^d$ are defined with respect to the weight function
$$ W_\mu^B(x) = (1-|x|^2)^{\mu-1/2}, \qquad \mu > -\tfrac12, \quad x \in B^d. \qquad (5.2.1) $$
The normalization constant of $W_\mu^B$ is given by
$$ w_\mu^B = \Big(\int_{B^d} W_\mu^B(x)\, dx\Big)^{-1} = \frac{2}{\sigma_{d-1}} \frac{\Gamma(\mu + \frac{d+1}{2})}{\Gamma(\mu + \frac12)\Gamma(\frac{d}{2})} = \frac{\Gamma(\mu + \frac{d+1}{2})}{\pi^{d/2}\Gamma(\mu + \frac12)}. $$
The value of $w_\mu^B$ can be verified by the use of the polar coordinates $x = rx'$, $x' \in S^{d-1}$, which lead to the formula
$$ \int_{B^d} f(x)\, dx = \int_0^1 r^{d-1} \int_{S^{d-1}} f(rx')\, d\omega(x')\, dr \qquad (5.2.2) $$
and a beta integral. The polynomials P in V
d
n
with respect to W
B

are eigenfunc-
tions of the second-order differential operator D

, that is,
D

P =(n+d)(n+2 1)P, (5.2.3)


142 Examples of Orthogonal Polynomials in Several Variables
where
D

:=
d

j=1

x
j
x
j
_
(2 1) +
d

i=1
x
i

x
i
_
.
This fact will be proved in Subsection 8.1.1. In the following we will give sev-
eral explicit bases of V
d
n
with respect to W
B

. They all satisfy the differential


equation (5.2.3).
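The closed form for $w^B_\mu$ can be spot-checked numerically; the following sketch (an illustration added here, not from the original text) verifies it for $d = 2$ via the polar-coordinate formula (5.2.2):

```python
from math import gamma, pi
from scipy.integrate import quad

# For d = 2, int_{B^2}(1-|x|^2)^{mu-1/2} dx = 2*pi*int_0^1 r(1-r^2)^{mu-1/2} dr,
# which must equal 1/w_mu^B = pi^{d/2} Gamma(mu+1/2)/Gamma(mu+(d+1)/2).
d, mu = 2, 1.5
integral = 2 * pi * quad(lambda r: r * (1 - r**2)**(mu - 0.5), 0, 1)[0]
w_closed = gamma(mu + (d + 1) / 2) / (pi**(d / 2) * gamma(mu + 0.5))
print(integral, 1 / w_closed)   # both equal pi/(mu + 1/2)
```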
5.2.1 Orthonormal bases

We give two explicit orthonormal bases in this subsection.

First orthonormal basis Since the weight function $W^B_\mu$ is rotationally invariant, it has a basis given explicitly in terms of spherical polar coordinates, as in (5.1.3).

Proposition 5.2.1 For $0\le j\le\frac n2$ and $\{Y^{n-2j}_\nu : 1\le\nu\le a^d_{n-2j}\}$ an orthonormal basis of $\mathcal{H}^d_{n-2j}$, the polynomials
$$P^n_{j,\nu}(W^B_\mu;x) = (h_{j,n})^{-1}\,P_j^{(\mu-1/2,\,n-2j+(d-2)/2)}(2\|x\|^2-1)\,Y^{n-2j}_\nu(x) \qquad(5.2.4)$$
(cf. (5.1.2)) form an orthonormal basis of $\mathcal{V}^d_n(W^B_\mu)$; the constant is given by
$$(h_{j,n})^2 = \frac{(\mu+\frac12)_j\,(\frac d2)_{n-j}\,\big(n-j+\mu+\frac{d-1}{2}\big)}{j!\,(\mu+\frac{d+1}{2})_{n-j}\,\big(n+\mu+\frac{d-1}{2}\big)}.$$

Proof The orthogonality of the $P^n_{j,\nu}$ follows easily from Theorem 5.1.2 and the orthogonality of the Jacobi polynomials (Subsection 1.4.4). The constant, using (5.2.2) and the orthonormality of the $Y^{n-2j}_\nu$, comes from
$$\big(h^B_{j,n}\big)^2 = w^B_\mu\,\sigma_{d-1}\int_0^1\big[P_j^{(\mu-1/2,\,n-2j+(d-2)/2)}(2r^2-1)\big]^2\,r^{2n-4j+d-1}(1-r^2)^{\mu-1/2}\,dr,$$
which can be evaluated, upon changing the variable $2r^2-1$ to $t$, from the $L^2$ norm of the Jacobi polynomial $P^{(a,b)}_n(t)$ and the properties of the Pochhammer symbol.
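The constant $(h_{j,n})^2$ can be confirmed numerically against the radial integral from the proof. This sketch is an added illustration under the assumption that the spherical harmonics are normalized so that $\sigma_{d-1}^{-1}\int_{S^{d-1}}|Y|^2\,d\omega = 1$; the values chosen ($d=3$, $\mu=1$, $n=4$, $j=1$) are arbitrary:

```python
from math import gamma, pi
from scipy.integrate import quad
from scipy.special import eval_jacobi, poch

d, mu, n, j = 3, 1.0, 4, 1
a, b = mu - 0.5, n - 2 * j + (d - 2) / 2
sigma = 2 * pi**(d / 2) / gamma(d / 2)            # surface area of S^{d-1}
w_mu = gamma(mu + (d + 1) / 2) / (pi**(d / 2) * gamma(mu + 0.5))
radial = quad(lambda r: eval_jacobi(j, a, b, 2 * r**2 - 1)**2
              * r**(2 * n - 4 * j + d - 1) * (1 - r**2)**(mu - 0.5), 0, 1)[0]
h2_integral = w_mu * sigma * radial               # (h_{j,n})^2 from the proof
h2_closed = (poch(mu + 0.5, j) * poch(d / 2, n - j) * (n - j + mu + (d - 1) / 2)
             / (gamma(j + 1) * poch(mu + (d + 1) / 2, n - j)
                * (n + mu + (d - 1) / 2)))
print(h2_integral, h2_closed)                     # agree
```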
Second orthonormal basis We need the following notation. Associated with $x = (x_1,\dots,x_d)\in\mathbb{R}^d$, for each $j$ define a truncation of $x$, namely
$$\mathbf{x}_0 = 0, \qquad \mathbf{x}_j = (x_1,\dots,x_j), \quad 1\le j\le d.$$
Note that $\mathbf{x}_d = x$. Associated with $\alpha = (\alpha_1,\dots,\alpha_d)$, define
$$\alpha^j := (\alpha_j,\dots,\alpha_d), \quad 1\le j\le d, \qquad\text{and}\qquad \alpha^{d+1} := 0.$$
Recall that $C^\lambda_k$ denotes the Gegenbauer polynomial associated with the weight function $(1-x^2)^{\lambda-1/2}$ on $[-1,1]$; see Subsection 1.4.3.

Proposition 5.2.2 Let $\mu > 0$. For $\alpha\in\mathbb{N}^d_0$, the polynomials $P_\alpha$ given by
$$P_\alpha(W^B_\mu;x) = (h_\alpha)^{-1}\prod_{j=1}^d(1-\|\mathbf{x}_{j-1}\|^2)^{\alpha_j/2}\,C^{\lambda_j}_{\alpha_j}\Big(\frac{x_j}{\sqrt{1-\|\mathbf{x}_{j-1}\|^2}}\Big),$$
where $\lambda_j = \mu+|\alpha^{j+1}|+\frac{d-j}{2}$, form an orthonormal basis of $\mathcal{V}^d_n(W^B_\mu)$; the constants $h_\alpha$ are defined by
$$(h_\alpha)^2 = \frac{(\mu+\frac d2)_{|\alpha|}}{(\mu+\frac{d+1}{2})_{|\alpha|}}\prod_{j=1}^d\frac{(\mu+\frac{d-j}{2})_{|\alpha^j|}\,\big(2\mu+2|\alpha^{j+1}|+d-j\big)_{\alpha_j}}{(\mu+\frac{d-j+1}{2})_{|\alpha^j|}\,\alpha_j!}.$$
If $\mu = 0$, the basis comes from $\lim_{\lambda\to0}\lambda^{-1}C^\lambda_n(x) = \frac2n T_n(x)$ and the constant needs to be modified accordingly.

Proof The orthonormality and the normalization constants can both be verified by the use of the formula
$$\int_{B^d}f(x)\,dx = \int_{B^{d-1}}\int_{-1}^1 f\Big(\mathbf{x}_{d-1},\sqrt{1-\|\mathbf{x}_{d-1}\|^2}\,y\Big)\,dy\,\sqrt{1-\|\mathbf{x}_{d-1}\|^2}\,d\mathbf{x}_{d-1},$$
which follows from the simple change of variable $x_d\mapsto y\sqrt{1-\|\mathbf{x}_{d-1}\|^2}$. Using this formula repeatedly to reduce the number of iterations of the integral, that is, by reducing the dimension $d$ by 1 at each step, we end up with
$$w^B_\mu\int_{B^d}P_\alpha(W^B_\mu;x)P_\beta(W^B_\mu;x)\,W^B_\mu(x)\,dx = (h_\alpha)^{-2}\,w^B_\mu\prod_{j=1}^d\int_{-1}^1\Big[C^{\mu+|\alpha^{j+1}|+(d-j)/2}_{\alpha_j}(t)\Big]^2(1-t^2)^{\mu+|\alpha^{j+1}|+(d-j-1)/2}\,dt\;\delta_{\alpha,\beta},$$
where the last integral can be evaluated by using the normalization constant of the Gegenbauer polynomials.
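The Gegenbauer product basis of Proposition 5.2.2 can be checked directly in a small case. The sketch below (added here as an illustration, with an arbitrary choice $\mu = 1$) verifies on the disk ($d = 2$) that two unnormalized basis polynomials of the same degree with distinct $\alpha$ are orthogonal with respect to $W^B_\mu$:

```python
from math import sqrt
from scipy.integrate import dblquad
from scipy.special import eval_gegenbauer

# d = 2: Q_alpha = C^{lambda_1}_{alpha_1}(x1) * r^{alpha_2} C^{mu}_{alpha_2}(x2/r),
# with r = sqrt(1 - x1^2) and lambda_1 = mu + alpha_2 + 1/2.
mu = 1.0
W = lambda x1, x2: (1 - x1**2 - x2**2)**(mu - 0.5)

def Q(alpha, x1, x2):
    a1, a2 = alpha
    r = sqrt(max(1.0 - x1 * x1, 1e-300))
    return (eval_gegenbauer(a1, mu + a2 + 0.5, x1)
            * r**a2 * eval_gegenbauer(a2, mu, x2 / r))

def inner(alpha, beta):
    f = lambda x2, x1: Q(alpha, x1, x2) * Q(beta, x1, x2) * W(x1, x2)
    return dblquad(f, -1, 1, lambda x1: -sqrt(1 - x1**2),
                   lambda x1: sqrt(1 - x1**2))[0]

print(inner((1, 1), (0, 2)))   # distinct alpha, same degree -> ~0
print(inner((1, 1), (2, 0)))   # -> ~0
```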
5.2.2 Appell's monic orthogonal and biorthogonal polynomials

The definition of these polynomials can be traced back to the work of Hermite, Didon, Appell and Kampé de Fériet; see Appell and Kampé de Fériet [1926] and Chapter XII, Vol. II, of Erdélyi et al. [1953]. There are two families of these polynomials.

Monic orthogonal Appell polynomials $V_\alpha$ The first family of polynomials, $V_\alpha$, is defined by the generating function
$$\big(1-2\langle a,x\rangle+\|a\|^2\big)^{-\mu-(d-1)/2} = \sum_{\alpha\in\mathbb{N}^d_0}a^\alpha V_\alpha(x), \qquad(5.2.5)$$
where $a\in B^d$. We prove two properties of $V_\alpha$ in the following.
Proposition 5.2.3 The polynomials $V_\alpha$ are orthogonal with respect to $W^B_\mu$ on $B^d$.

Proof From the generating function of the Gegenbauer polynomials and (5.2.5),
$$\big(1-2\langle a,x\rangle+\|a\|^2\big)^{-\mu-(d-1)/2} = \sum_{n=0}^\infty\|a\|^n\,C^{\mu+(d-1)/2}_n\big(\langle a/\|a\|,x\rangle\big).$$
Replacing $a$ by $at$ and comparing the coefficients of $t^n$ gives
$$\|a\|^n\,C^{\mu+(d-1)/2}_n\big(\langle a/\|a\|,x\rangle\big) = \sum_{|\alpha|=n}a^\alpha V_\alpha(x).$$
Hence, in order to prove that $V_\alpha$ is orthogonal to polynomials of lower degree, it is sufficient to prove that
$$\int_{B^d}\|a\|^n\,C^{\mu+(d-1)/2}_n\big(\langle a/\|a\|,x\rangle\big)\,x^\beta\,W^B_\mu(x)\,dx = 0$$
for all monomials $x^\beta$ with $|\beta|\le n-1$. Since $W^B_\mu$ is rotation invariant, we can assume that $a = (0,\dots,0,c)$ without loss of generality, which leads to $C^{\mu+(d-1)/2}_n(\langle a/\|a\|,x\rangle) = C^{\mu+(d-1)/2}_n(x_d)$; making the change of variables $x_i = y_i\sqrt{1-x_d^2}$ for $i = 1,\dots,d-1$, the above integral is equal to
$$\int_{-1}^1 C^{\mu+(d-1)/2}_n(x_d)\,x_d^{\beta_d}\,(1-x_d^2)^{\mu+(|\beta|-\beta_d)/2+(d-2)/2}\,dx_d \times \int_{B^{d-1}}y_1^{\beta_1}\cdots y_{d-1}^{\beta_{d-1}}\,(1-\|y\|^2)^{\mu-1/2}\,dy.$$
If any $\beta_i$ for $1\le i\le d-1$ is odd then the second integral is zero. If all $\beta_i$ for $1\le i\le d-1$ are even then $x_d^{\beta_d}(1-x_d^2)^{|\beta|/2-\beta_d/2}$ is a polynomial of degree $|\beta|\le n-1$; hence, the first integral is equal to zero by the orthogonality of the Gegenbauer polynomials.
Proposition 5.2.4 The polynomials $V_\alpha$ are constant multiples of monic orthogonal polynomials; more precisely,
$$V_\alpha(x) = \frac{2^{|\alpha|}}{\alpha!}\Big(\mu+\frac{d-1}{2}\Big)_{|\alpha|}x^\alpha + R_\alpha(x), \qquad R_\alpha\in\Pi^d_{n-1}.$$

Proof Let $\lambda = \mu+(d-1)/2$ in this proof. We use the multinomial and negative binomial formulae to expand the generating function as follows:
$$\big(1-2\langle a,x\rangle+\|a\|^2\big)^{-\lambda} = \big[1-a_1(2x_1-a_1)-\dots-a_d(2x_d-a_d)\big]^{-\lambda}$$
$$= \sum_\gamma\frac{(\lambda)_{|\gamma|}}{\gamma!}\,a^\gamma(2x_1-a_1)^{\gamma_1}\cdots(2x_d-a_d)^{\gamma_d} = \sum_\gamma\frac{(\lambda)_{|\gamma|}}{\gamma!}\sum_\beta\frac{(-\gamma_1)_{\beta_1}\cdots(-\gamma_d)_{\beta_d}}{\beta!}\,2^{|\gamma|-|\beta|}\,x^{\gamma-\beta}a^{\gamma+\beta}.$$
Therefore, upon making the change of summation indices $\beta_i+\gamma_i = \alpha_i$ and comparing the coefficients of $a^\alpha$ on both sides, we obtain the following explicit formula for $V_\alpha(x)$:
$$V_\alpha(x) = 2^{|\alpha|}x^\alpha\sum_\beta\frac{(\lambda)_{|\alpha|-|\beta|}\,(-\alpha_1+\beta_1)_{\beta_1}\cdots(-\alpha_d+\beta_d)_{\beta_d}}{(\alpha-\beta)!\,\beta!\,2^{2|\beta|}}\,x^{-2\beta},$$
from which the fact that $V_\alpha$ is a constant multiple of a monic orthogonal polynomial is evident.

Moreover, using the formulae
$$(\lambda)_{m-k} = \frac{(-1)^k(\lambda)_m}{(1-\lambda-m)_k} \qquad\text{and}\qquad \frac{(-m+k)_k}{(m-k)!} = \frac{(-1)^k(-m)_{2k}}{m!},$$
as well as
$$2^{-2k}(-m)_{2k} = \Big(-\frac m2\Big)_k\Big(\frac{1-m}{2}\Big)_k,$$
which can all be verified directly from the expression for the Pochhammer symbol $(a)_b$ given in Definition 1.1.3, we can rewrite the formula as
$$V_\alpha(x) = \frac{2^{|\alpha|}x^\alpha}{\alpha!}\Big(\mu+\frac{d-1}{2}\Big)_{|\alpha|}\,F_B\Big(-\frac{\alpha}{2},\frac{1-\alpha}{2};\,-|\alpha|-\mu-\frac{d-3}{2};\,\frac{1}{x_1^2},\dots,\frac{1}{x_d^2}\Big),$$
where $1-\alpha = (1-\alpha_1,\dots,1-\alpha_d)$ and $F_B$ is the Lauricella hypergeometric series of $d$ variables defined in Section 1.2.
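The leading coefficient in Proposition 5.2.4 can be recovered symbolically from the generating function. The following sketch (an added illustration, with the arbitrary choice $\mu = 1$, $d = 2$, $\alpha = (2,1)$) extracts $V_{(2,1)}$ as a Taylor coefficient of (5.2.5) and compares its top coefficient with $2^{|\alpha|}(\lambda)_{|\alpha|}/\alpha!$:

```python
import sympy as sp

x1, x2, a1, a2 = sp.symbols('x1 x2 a1 a2')
lam = sp.Rational(3, 2)       # lambda = mu + (d-1)/2 with mu = 1, d = 2
G = (1 - 2 * (a1 * x1 + a2 * x2) + a1**2 + a2**2)**(-lam)

# V_(2,1) is the coefficient of a1^2 * a2 in the expansion of G
V = G.series(a1, 0, 3).removeO().coeff(a1, 2)
V = sp.expand(V.series(a2, 0, 2).removeO().coeff(a2, 1))
lead = V.coeff(x1, 2).coeff(x2, 1)             # coefficient of x1^2 x2
expected = 2**3 * sp.rf(lam, 3) / (sp.factorial(2) * sp.factorial(1))
print(lead, expected)                          # both 105/2
```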
Biorthogonal Appell polynomials $U_\alpha$ The second family of polynomials, $U_\alpha$, can be defined by the generating function
$$\big[(1-\langle a,x\rangle)^2+\|a\|^2(1-\|x\|^2)\big]^{-\mu} = \sum_{\alpha\in\mathbb{N}^d_0}a^\alpha U_\alpha(x). \qquad(5.2.6)$$
An alternative definition is given by the following Rodrigues-type formula.
Proposition 5.2.5 The polynomials $U_\alpha$ satisfy
$$U_\alpha(x) = \frac{(-1)^{|\alpha|}(2\mu)_{|\alpha|}}{2^{|\alpha|}\,(\mu+\frac12)_{|\alpha|}\,\alpha!}\,(1-\|x\|^2)^{-\mu+1/2}\,\frac{\partial^{|\alpha|}}{\partial x_1^{\alpha_1}\cdots\partial x_d^{\alpha_d}}(1-\|x\|^2)^{|\alpha|+\mu-1/2}.$$

Proof Let $\langle a,\nabla\rangle = \sum_{i=1}^d a_i\,\partial/\partial x_i$, where $\nabla$ denotes the gradient operator. The following formula can be easily verified by induction:
$$\langle a,\nabla\rangle^n(1-\|x\|^2)^b = \sum_{0\le j\le n/2}\frac{n!\,2^{n-2j}\,(-b)_{n-j}}{j!\,(n-2j)!}\,\langle a,x\rangle^{n-2j}(1-\|x\|^2)^{b-n+j}\,\|a\|^{2j}.$$
Putting $b = n+\mu-\frac12$ and multiplying the above formula by $(1-\|x\|^2)^{-\mu+1/2}$ gives
$$(1-\|x\|^2)^{-\mu+1/2}\langle a,\nabla\rangle^n(1-\|x\|^2)^{n+\mu-1/2} = \sum_{0\le j\le n/2}\frac{(-1)^{n-j}\,n!\,2^{n-2j}\,(j+\mu+\frac12)_{n-j}}{j!\,(n-2j)!}\,\langle a,x\rangle^{n-2j}\,\|a\|^{2j}(1-\|x\|^2)^j,$$
where we have used $(-n-\mu+\frac12)_{n-j} = (-1)^{n-j}(j+\mu+\frac12)_{n-j}$. However, expanding the generating function shows that it is equal to
$$(1-\langle a,x\rangle)^{-2\mu}\Big[1+\frac{\|a\|^2(1-\|x\|^2)}{(1-\langle a,x\rangle)^2}\Big]^{-\mu} = \sum_{i=0}^\infty\sum_{j=0}^\infty\frac{(-1)^j\,(\mu)_j\,(2\mu+2j)_i}{i!\,j!}\,\|a\|^{2j}\langle a,x\rangle^i(1-\|x\|^2)^j.$$
Making use of the formula
$$(\mu)_j(2\mu+2j)_i = \frac{(\mu)_j(2\mu)_{i+2j}}{(2\mu)_{2j}} = \frac{(2\mu)_{i+2j}}{2^{2j}\,(\mu+\frac12)_j},$$
and setting $i+2j = n$, we obtain
$$\big[(1-\langle a,x\rangle)^2+\|a\|^2(1-\|x\|^2)\big]^{-\mu} = \sum_{n=0}^\infty\frac{(2\mu)_n(-1)^n}{n!\,(\mu+\frac12)_n\,2^n}\sum_{0\le j\le n/2}\frac{(-1)^{n-j}\,n!\,2^{n-2j}\,(j+\mu+\frac12)_{n-j}}{j!\,(n-2j)!}\,\langle a,x\rangle^{n-2j}\,\|a\|^{2j}(1-\|x\|^2)^j.$$
Comparing these formulae gives
$$\big[(1-\langle a,x\rangle)^2+\|a\|^2(1-\|x\|^2)\big]^{-\mu} = \sum_{n=0}^\infty\frac{(2\mu)_n(-1)^n}{n!\,(\mu+\frac12)_n\,2^n}\,(1-\|x\|^2)^{-\mu+1/2}\langle a,\nabla\rangle^n(1-\|x\|^2)^{n+\mu-1/2}.$$
Consequently, comparing the coefficients of $a^\alpha$ in the above formula and in (5.2.6) finishes the proof.
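The Rodrigues-type formula of Proposition 5.2.5 can be cross-checked symbolically against the generating function (5.2.6). The sketch below is an added illustration with the arbitrary choice $d = 2$, $\mu = 1$, $\alpha = (1,1)$; both sides should give the polynomial $6x_1x_2$:

```python
import sympy as sp

x1, x2, a1, a2 = sp.symbols('x1 x2 a1 a2')
mu = sp.Integer(1)
r2 = x1**2 + x2**2
n = 2                                   # |alpha| for alpha = (1, 1)

# Rodrigues side of Proposition 5.2.5
pref = (-1)**n * sp.rf(2 * mu, n) / (2**n * sp.rf(mu + sp.Rational(1, 2), n))
U_rod = sp.expand(pref * (1 - r2)**(-mu + sp.Rational(1, 2))
                  * sp.diff((1 - r2)**(n + mu - sp.Rational(1, 2)), x1, 1, x2, 1))

# Generating-function side: coefficient of a1 * a2 in (5.2.6)
G = ((1 - (a1 * x1 + a2 * x2))**2 + (a1**2 + a2**2) * (1 - r2))**(-mu)
U_gen = G.series(a1, 0, 2).removeO().coeff(a1, 1)
U_gen = sp.expand(U_gen.series(a2, 0, 2).removeO().coeff(a2, 1))
print(sp.simplify(U_rod - U_gen))       # -> 0
```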
The most important property of the polynomials $U_\alpha$ is that they are biorthogonal to the $V_\alpha$. More precisely, the following result holds.

Proposition 5.2.6 The polynomials $U_\alpha$ and $V_\beta$ are biorthogonal:
$$w^B_\mu\int_{B^d}V_\beta(x)U_\alpha(x)\,W^B_\mu(x)\,dx = \frac{\mu+\frac{d-1}{2}}{|\alpha|+\mu+\frac{d-1}{2}}\,\frac{(2\mu)_{|\alpha|}}{\alpha!}\,\delta_{\alpha,\beta}.$$

Proof Since the set $\{V_\beta\}$ forms an orthogonal basis, we only need to consider the case $|\beta|\le|\alpha|$, which will also show that $\{U_\alpha\}$ is an orthogonal basis. Using the Rodrigues formula and integration by parts,
$$w^B_\mu\int_{B^d}V_\beta(x)U_\alpha(x)\,W^B_\mu(x)\,dx = w^B_\mu\,\frac{(2\mu)_{|\alpha|}}{2^{|\alpha|}(\mu+\frac12)_{|\alpha|}\,\alpha!}\int_{B^d}\Big[\frac{\partial^{|\alpha|}}{\partial x_1^{\alpha_1}\cdots\partial x_d^{\alpha_d}}V_\beta(x)\Big](1-\|x\|^2)^{|\alpha|+\mu-1/2}\,dx.$$
However, since $V_\beta$ is a constant multiple of a monic orthogonal polynomial and $|\beta|\le|\alpha|$,
$$\frac{\partial^{|\alpha|}}{\partial x_1^{\alpha_1}\cdots\partial x_d^{\alpha_d}}V_\beta(x) = 2^{|\alpha|}\Big(\mu+\frac{d-1}{2}\Big)_{|\alpha|}\delta_{\alpha,\beta},$$
from which we can invoke the formula for the normalization constant $w^B_\mu$ of $W^B_\mu$ to conclude that
$$w^B_\mu\int_{B^d}V_\beta(x)U_\alpha(x)\,W^B_\mu(x)\,dx = \frac{w^B_\mu}{w^B_{|\alpha|+\mu}}\,\frac{(2\mu)_{|\alpha|}\,(\mu+\frac{d-1}{2})_{|\alpha|}}{(\mu+\frac12)_{|\alpha|}\,\alpha!}\,\delta_{\alpha,\beta}.$$
The constant can be simplified using the formula for $w^B_\mu$.
We note that the biorthogonality also shows that $U_\alpha$ is an orthogonal polynomial with respect to $W^B_\mu$. However, $U_\alpha$ is not a monic orthogonal polynomial. To see what $U_\alpha$ is, we derive an explicit formula for it.

Proposition 5.2.7 The polynomial $U_\alpha$ is given by
$$U_\alpha(x) = \frac{(2\mu)_{|\alpha|}}{\alpha!}\sum_\beta\frac{(-1)^{|\beta|}\,(-\alpha_1)_{2\beta_1}\cdots(-\alpha_d)_{2\beta_d}}{2^{2|\beta|}\,\beta!\,(\mu+\frac12)_{|\beta|}}\,x^{\alpha-2\beta}(1-\|x\|^2)^{|\beta|}.$$

Proof Let $\phi:\mathbb{R}\to\mathbb{R}$ be a real differentiable function. Using the chain rule and induction gives
$$\frac{d^n}{dx^n}\phi(x^2) = \sum_{j=0}^{[n/2]}\frac{(2x)^{n-2j}\,(-n)_{2j}}{j!}\,\phi^{(n-j)}(x^2),$$
where $\phi^{(m)}$ denotes the $m$th derivative of $\phi$. Hence, for a function $\phi:\mathbb{R}^d\to\mathbb{R}$, we have
$$\frac{\partial^{|\alpha|}}{\partial x_1^{\alpha_1}\cdots\partial x_d^{\alpha_d}}\phi(x_1^2,\dots,x_d^2) = 2^{|\alpha|}\sum_\beta\frac{(-\alpha_1)_{2\beta_1}\cdots(-\alpha_d)_{2\beta_d}}{2^{2|\beta|}\,\beta!}\,x^{\alpha-2\beta}\,(\partial^{\alpha-\beta}\phi)(x_1^2,\dots,x_d^2),$$
where $\partial^{\alpha-\beta}\phi$ denotes the partial derivative of order $|\alpha|-|\beta|$ of $\phi$. In particular, applying the formula to $\phi(x) = (1-x_1-\dots-x_d)^{|\alpha|+\mu-1/2}$ gives
$$\frac{\partial^{|\alpha|}}{\partial x_1^{\alpha_1}\cdots\partial x_d^{\alpha_d}}(1-\|x\|^2)^{|\alpha|+\mu-1/2} = (-1)^{|\alpha|}2^{|\alpha|}\Big(\mu+\frac12\Big)_{|\alpha|}(1-\|x\|^2)^{\mu-1/2}\sum_\beta\frac{(-1)^{|\beta|}\,(-\alpha_1)_{2\beta_1}\cdots(-\alpha_d)_{2\beta_d}}{2^{2|\beta|}\,\beta!\,(\mu+\frac12)_{|\beta|}}\,x^{\alpha-2\beta}(1-\|x\|^2)^{|\beta|}.$$
The stated formula follows from the Rodrigues-type formula for $U_\alpha$.

Using the definition of the Lauricella hypergeometric series $F_B$ (see Subsection 1.2.1), we can rewrite the above formula as
$$U_\alpha(x) = \frac{(2\mu)_{|\alpha|}\,x^\alpha}{\alpha!}\,F_B\Big(-\frac{\alpha}{2},\frac{1-\alpha}{2};\,\mu+\frac12;\,-\frac{1-\|x\|^2}{x_1^2},\dots,-\frac{1-\|x\|^2}{x_d^2}\Big).$$
There are a great number of formulae for both Appell families, $V_\alpha$ and $U_\alpha$; we refer to Appell and Kampé de Fériet [1926] and Erdélyi et al. [1953].
5.2.3 Reproducing kernel with respect to $W^B_\mu$ on $B^d$

Let $P_n(W^B_\mu;\cdot,\cdot)$ denote the reproducing kernel of $\mathcal{V}^d_n(W^B_\mu)$ as defined in (3.6.2). This kernel satisfies a remarkable and concise formula.

Theorem 5.2.8 For $\mu > 0$ and $x,y\in B^d$,
$$P_n(W^B_\mu;x,y) = c_\mu\,\frac{n+\lambda}{\lambda}\int_{-1}^1 C^\lambda_n\Big(\langle x,y\rangle + t\sqrt{1-\|x\|^2}\sqrt{1-\|y\|^2}\Big)(1-t^2)^{\mu-1}\,dt, \qquad(5.2.7)$$
where $\lambda = \mu+\frac{d-1}{2}$ and $c_\mu = \Gamma(\mu+\frac12)/[\sqrt{\pi}\,\Gamma(\mu)]$.

In the case $\mu = m/2$ with $m = 1,2,3,\dots$ this expression was established in Corollary 4.3.6. For $\mu > 0$, the proof will appear as a special case of a more general result in Theorem 8.1.16. When $\mu = 0$, the formula for $P(W_0;x,y)$ is found in Corollary 4.2.9.

It is worth mentioning that the kernel $P_n(W^B_\mu;x,y)$ is rotation invariant, in the sense that $P_n(W^B_\mu;x\rho,y\rho) = P_n(W^B_\mu;x,y)$ for each rotation $\rho$; this is obviously a consequence of the rotation invariance of $W^B_\mu$.
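A small consistency check of (5.2.7), added here as an illustration: for $n = 1$ the Gegenbauer polynomial is linear, the odd part of the integrand drops out, and the formula forces $P_1(x,y) = 2(\lambda+1)\langle x,y\rangle$, which in turn pins down the second moment of $W^B_\mu$. The sketch verifies this moment identity for $d = 2$ with an arbitrary $\mu$:

```python
from math import gamma, pi, sqrt
from scipy.integrate import dblquad

# n = 1, d = 2: P_1(x,y) = 2(lambda+1)<x,y> with lambda = mu + 1/2 requires
# w_mu^B * int_{B^2} x1^2 W_mu^B dx = 1/(2*lambda + 2) = 1/(2*mu + 3).
mu = 0.75
lam = mu + 0.5
w_mu = gamma(mu + 1.5) / (pi * gamma(mu + 0.5))
I = dblquad(lambda y, x: x**2 * (1 - x**2 - y**2)**(mu - 0.5),
            -1, 1, lambda x: -sqrt(1 - x**2), lambda x: sqrt(1 - x**2))[0]
print(w_mu * I, 1 / (2 * lam + 2))   # both equal 1/(2*mu + 3)
```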
For the constant weight $W_{1/2}(x) = 1$ there is another formula for the reproducing kernel. Let $P_n(x,y) = P_n(W_{1/2};x,y)$.
Theorem 5.2.9 For $n = 1,2,3,\dots$,
$$P_n(x,y) = \frac{n+\frac d2}{\frac d2}\,\frac{1}{\sigma_{d-1}}\int_{S^{d-1}}C^{d/2}_n\big(\langle x,\xi\rangle\big)\,C^{d/2}_n\big(\langle\xi,y\rangle\big)\,d\omega(\xi). \qquad(5.2.8)$$

Proof Using the explicit orthonormal basis $P^n_{j,\nu} := P^n_{j,\nu}(W_{1/2})$ of Proposition 5.2.1, with $\mu = \frac12$, we obtain on the one hand
$$P_n(x,y) = \sum_{0\le2j\le n}\sum_\nu P^n_{j,\nu}(x)\,P^n_{j,\nu}(y). \qquad(5.2.9)$$
On the other hand, by (5.2.7) we have, for $\xi\in S^{d-1}$,
$$P_n(x,\xi) = \frac{n+\frac d2}{\frac d2}\,C^{d/2}_n\big(\langle x,\xi\rangle\big), \qquad \xi\in S^{d-1},\ x\in B^d. \qquad(5.2.10)$$
Furthermore, when restricted to $S^{d-1}$, $P^n_{j,\nu}$ becomes a spherical harmonic,
$$P^n_{j,\nu}(\xi) = H_n\,Y_{\nu,n-2j}(\xi), \qquad \xi\in S^{d-1},$$
where, using the fact that $P^{(0,b)}_k(1) = 1$, we see that the constant
$$H_n = (h_{j,n})^{-1}P_j^{(0,\,n-2j+(d-2)/2)}(1) = \sqrt{\frac{n+\frac d2}{\frac d2}}$$
is independent of $j$. Consequently, integrating over $S^{d-1}$ we obtain
$$\sigma_{d-1}^{-1}\int_{S^{d-1}}P^n_{j,\nu}(\xi)\,P^n_{j',\nu'}(\xi)\,d\omega(\xi) = H_n^2\,\delta_{j,j'}\delta_{\nu,\nu'} = \frac{n+\frac d2}{\frac d2}\,\delta_{j,j'}\delta_{\nu,\nu'}.$$
Multiplying the above equation by $P^n_{j,\nu}(x)$ and $P^n_{j',\nu'}(y)$ and summing over all $j,j',\nu,\nu'$, the right-hand side becomes a multiple of $P_n(x,y)$, by (5.2.9), whereas the left-hand side becomes, by (5.2.9) and (5.2.10), the same multiple of the right-hand side of (5.2.8). This completes the proof.
5.3 Classical Orthogonal Polynomials on the Simplex

Let $T^d$ denote the simplex in the Euclidean space $\mathbb{R}^d$,
$$T^d = \{x\in\mathbb{R}^d : x_1\ge0,\dots,x_d\ge0,\ 1-x_1-\dots-x_d\ge0\}.$$
For $d = 2$, the region $T^2$ is the triangle with vertices at $(0,0)$, $(0,1)$ and $(1,0)$. The classical orthogonal polynomials on $T^d$ are orthogonal with respect to the weight function
$$W^T_\kappa(x) = x_1^{\kappa_1-1/2}\cdots x_d^{\kappa_d-1/2}\,(1-|x|)^{\kappa_{d+1}-1/2}, \qquad \kappa_i > -\tfrac12, \qquad(5.3.1)$$
where $x\in T^d$ and $|x| = x_1+\dots+x_d$ is the usual $\ell_1$ norm for $x\in T^d$. Upon using the formula
$$\int_{T^d}f(x)\,dx = \int_{T^{d-1}}\int_0^1 f\big(\mathbf{x}_{d-1},(1-|\mathbf{x}_{d-1}|)y\big)\,dy\,(1-|\mathbf{x}_{d-1}|)\,d\mathbf{x}_{d-1}, \qquad(5.3.2)$$
which follows from the change of variables $x_d\mapsto y(1-|\mathbf{x}_{d-1}|)$, we see that the normalization constant $w^T_\kappa$ of $W^T_\kappa$ is given by the Dirichlet integral
$$(w^T_\kappa)^{-1} = \int_{T^d}x_1^{\kappa_1-1/2}\cdots x_d^{\kappa_d-1/2}(1-|x|)^{\kappa_{d+1}-1/2}\,dx = \frac{\Gamma(\kappa_1+\frac12)\cdots\Gamma(\kappa_{d+1}+\frac12)}{\Gamma\big(|\kappa|+\frac{d+1}{2}\big)}, \qquad(5.3.3)$$
where $|\kappa| = \kappa_1+\dots+\kappa_{d+1}$. The orthogonal polynomials in $\mathcal{V}^d_n(W^T_\kappa)$ are eigenfunctions of a second-order differential operator: for $n = 0,1,2,\dots$,
$$\sum_{i=1}^d x_i(1-x_i)\frac{\partial^2P}{\partial x_i^2} - 2\sum_{1\le i<j\le d}x_ix_j\frac{\partial^2P}{\partial x_i\partial x_j} + \sum_{i=1}^d\Big[\Big(\kappa_i+\frac12\Big)-\Big(|\kappa|+\frac{d+1}{2}\Big)x_i\Big]\frac{\partial P}{\partial x_i} = -\lambda_nP, \qquad(5.3.4)$$
where $\lambda_n = n\big(n+|\kappa|+\frac{d-1}{2}\big)$ and $\kappa_i > -\frac12$ for $1\le i\le d+1$. The proof of equation (5.3.4) will be given in Subsection 8.2.1. For this weight function we give three explicit orthogonal bases.
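The Dirichlet integral (5.3.3) is easy to confirm numerically; the sketch below (added here as an illustration, with arbitrary $\kappa$ values) checks it on the triangle $T^2$:

```python
from math import gamma
from scipy.integrate import dblquad

# (5.3.3) for d = 2: int_{T^2} x^(k1-1/2) y^(k2-1/2) (1-x-y)^(k3-1/2) dx dy
k1, k2, k3 = 1.0, 0.5, 2.0
num = dblquad(lambda y, x: x**(k1 - 0.5) * y**(k2 - 0.5)
              * (1 - x - y)**(k3 - 0.5), 0, 1,
              lambda x: 0.0, lambda x: 1 - x)[0]
closed = (gamma(k1 + 0.5) * gamma(k2 + 0.5) * gamma(k3 + 0.5)
          / gamma(k1 + k2 + k3 + 1.5))
print(num, closed)   # agree; for these kappa the value is pi/64
```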
An orthonormal basis To state this basis we use the notation $\mathbf{x}_j$ and $\alpha^j$ of the second orthonormal basis on $B^d$ (Subsection 5.2.1). For $\kappa = (\kappa_1,\dots,\kappa_{d+1})$ and $0\le j\le d+1$, let $\kappa^j := (\kappa_j,\dots,\kappa_{d+1})$.

Proposition 5.3.1 For $\alpha\in\mathbb{N}^d_0$ and $|\alpha| = n$, the polynomials
$$P_\alpha(W^T_\kappa;x) = (h_\alpha)^{-1}\prod_{j=1}^d(1-|\mathbf{x}_{j-1}|)^{\alpha_j}\,P^{(a_j,b_j)}_{\alpha_j}\Big(\frac{2x_j}{1-|\mathbf{x}_{j-1}|}-1\Big),$$
where $a_j = 2|\alpha^{j+1}|+|\kappa^{j+1}|+\frac{d-j-1}{2}$ and $b_j = \kappa_j-\frac12$, form an orthonormal basis of $\mathcal{V}^d_n(W^T_\kappa)$, in which the constant is given by
$$(h_\alpha)^2 = \frac{1}{\big(|\kappa|+\frac{d+1}{2}\big)_{2|\alpha|}}\prod_{j=1}^d\frac{(a_j+b_j+1)_{2\alpha_j}\,(a_j+1)_{\alpha_j}\,(b_j+1)_{\alpha_j}}{(a_j+b_j+1)_{\alpha_j}\,\alpha_j!}.$$

Proof Using the second expression for the Jacobi polynomial as a hypergeometric function in Proposition 1.4.14, we see that $P_\alpha(W^T_\kappa;x)$ is indeed a polynomial of degree $|\alpha|$ in $x$. To verify the orthogonality and the value of the constant we use the formula (5.3.2), which allows us to reduce the integral over $T^d$ to a multiple of $d$ integrals of one variable, so that the orthogonality will follow from that of the Jacobi polynomials. Indeed, using the formula (5.3.2) repeatedly gives
$$w^T_\kappa\int_{T^d}P_\alpha(W^T_\kappa;x)P_\beta(W^T_\kappa;x)\,W^T_\kappa(x)\,dx = w^T_\kappa\,(h_\alpha)^{-2}\,\delta_{\alpha,\beta}\prod_{j=1}^d\int_0^1\big[P^{(a_j,b_j)}_{\alpha_j}(2t-1)\big]^2(1-t)^{a_j}t^{b_j}\,dt.$$
Hence, using the norm of the Jacobi polynomials, we obtain
$$[h_\alpha]^2 = w^T_\kappa\prod_{j=1}^d\frac{\Gamma(a_j+1)\Gamma(b_j+1)}{\Gamma(a_j+b_j+2)}\,\frac{(a_j+1)_{\alpha_j}\,(b_j+1)_{\alpha_j}\,(a_j+b_j+\alpha_j+1)}{(a_j+b_j+2)_{\alpha_j}\,(a_j+b_j+2\alpha_j+1)\,\alpha_j!},$$
which simplifies, using $a_d = \kappa_{d+1}-\frac12$, $a_{j-1}+1 = a_j+b_j+2\alpha_j+2$ and $a_1+b_1+2\alpha_1+2 = |\kappa|+2|\alpha|+\frac{d+1}{2}$, to the stated formula.
Next we give two bases that are biorthogonal, each of which has its own distinguished feature. The first is the monic orthogonal basis; the second is given by a Rodrigues formula.

Monic orthogonal basis For $\alpha\in\mathbb{N}^d_0$ and $|\alpha| = n$, we define
$$V^T_\alpha(x) = \sum_{\beta\le\alpha}(-1)^{n+|\beta|}\prod_{i=1}^d\binom{\alpha_i}{\beta_i}\frac{(\kappa_i+\frac12)_{\alpha_i}}{(\kappa_i+\frac12)_{\beta_i}}\,\frac{\big(|\kappa|+\frac{d-1}{2}\big)_{n+|\beta|}}{\big(|\kappa|+\frac{d-1}{2}\big)_{n+|\alpha|}}\,x^\beta,$$
where $\beta\le\alpha$ means $\beta_1\le\alpha_1,\dots,\beta_d\le\alpha_d$.

Proposition 5.3.2 For $V^T_\alpha$ as defined above, the set $\{V^T_\alpha : |\alpha| = n\}$ forms a monic orthogonal basis of $\mathcal{V}^d_n$ with respect to $W^T_\kappa$.

Proof The fact that $V^T_\alpha(x)$ equals $x^\alpha$ plus a polynomial of lower degree is evident. We will prove that $V^T_\alpha$ is orthogonal to polynomials of lower degree with respect to $W^T_\kappa$, for which a different basis of the space $\Pi^d_n$ is used. Let
$$X_\gamma(x) = x_1^{\gamma_1}\cdots x_d^{\gamma_d}\,(1-|x|)^{\gamma_{d+1}},$$
where $\gamma\in\mathbb{N}^{d+1}_0$. Then the $X_\gamma$ with $|\gamma| = n$ form a basis of $\Pi^d_n$; that is,
$$\Pi^d_n = \mathrm{span}\{x^\beta : \beta\in\mathbb{N}^d_0,\ |\beta|\le n\} = \mathrm{span}\{X_\gamma : \gamma\in\mathbb{N}^{d+1}_0,\ |\gamma| = n\}.$$
This can be seen easily from applying the binomial theorem to $(1-|x|)^{\gamma_{d+1}}$. Thus, to prove the orthogonality of $V^T_\alpha$ it is sufficient to show that $V^T_\alpha$ is orthogonal to $X_\gamma$ for all $\gamma\in\mathbb{N}^{d+1}_0$ satisfying $|\gamma| = n-1$. Using (5.3.3) and the formula for $V^T_\alpha$, it follows for $|\gamma| = n-1$ and $|\alpha| = n$ that
$$w^T_\kappa\int_{T^d}V^T_\alpha(x)X_\gamma(x)\,W^T_\kappa(x)\,dx = \sum_{\beta\le\alpha}(-1)^{n+|\beta|}\prod_{i=1}^d\binom{\alpha_i}{\beta_i}\frac{(\kappa_i+\frac12)_{\alpha_i}}{(\kappa_i+\frac12)_{\beta_i}}\,\frac{\big(|\kappa|+\frac{d-1}{2}\big)_{n+|\beta|}}{\big(|\kappa|+\frac{d-1}{2}\big)_{n+|\alpha|}}\,\frac{w^T_\kappa\prod_{i=1}^d\Gamma(\kappa_i+\beta_i+\gamma_i+\frac12)\,\Gamma(\kappa_{d+1}+\gamma_{d+1}+\frac12)}{\Gamma\big(|\kappa|+|\beta|+|\gamma|+\frac{d+1}{2}\big)}$$
$$= (-1)^n\,w^T_\kappa\,\frac{\prod_{i=1}^d\Gamma(\kappa_i+\frac12)\,\Gamma(\gamma_i+1)\;\Gamma(\kappa_{d+1}+\gamma_{d+1}+\frac12)}{\Gamma\big(2n+|\kappa|+\frac{d-1}{2}\big)}\prod_{i=1}^d\sum_{\beta_i=0}^{\alpha_i}(-1)^{\beta_i}\binom{\alpha_i}{\beta_i}\binom{\kappa_i+\beta_i+\gamma_i-\frac12}{\gamma_i}.$$
A Chu–Vandermonde sum shows that, for $m,n\in\mathbb{N}$ and $a\in\mathbb{R}$,
$$\sum_{k=0}^n(-1)^k\binom nk\binom{a+k}{m} = \frac{(-1)^m(-a)_m}{m!}\,{}_2F_1\Big(\begin{matrix}-n,\ 1+a\\ 1+a-m\end{matrix};1\Big) = \frac{(-1)^m(-a)_m\,(-m)_n}{m!\,(1+a-m)_n} = \frac{(-m)_n}{m!\,(1+a)_{n-m}}.$$
In particular, the sum becomes zero, because of the factor $(-m)_n$, if $m < n$. Since $|\gamma| = n-1$, there is at least one $i$ for which $\gamma_i < \alpha_i$; consequently, with $n = \alpha_i$, $m = \gamma_i$ and $a = \kappa_i+\gamma_i-\frac12$, we conclude that $V^T_\alpha$ is orthogonal to $X_\gamma$.

Further results on $V^T_\alpha$, including a generating function and its $L^2$ norm, will be given in Subsection 8.2.3.
Rodrigues formula and biorthogonal basis It is easy to state another basis, $U^T_\alpha$, that is biorthogonal to the basis $V^T_\alpha$. Indeed, we can define $U^T_\alpha$ by
$$U^T_\alpha(x) = x_1^{-\kappa_1+1/2}\cdots x_d^{-\kappa_d+1/2}\,(1-|x|)^{-\kappa_{d+1}+1/2}\,\frac{\partial^{|\alpha|}}{\partial x_1^{\alpha_1}\cdots\partial x_d^{\alpha_d}}\Big[x_1^{\alpha_1+\kappa_1-1/2}\cdots x_d^{\alpha_d+\kappa_d-1/2}\,(1-|x|)^{|\alpha|+\kappa_{d+1}-1/2}\Big].$$
These are analogues of the Rodrigues formula for orthogonal polynomials on the simplex. For $d = 2$ and up to a constant multiple, the $U^T_\alpha$ are also called Appell polynomials; see Appell and Kampé de Fériet [1926] and Chapter XII, Vol. II, of Erdélyi et al. [1953].

Proposition 5.3.3 The polynomials $U^T_\alpha$ are orthogonal polynomials and they are biorthogonal to the polynomials $V^T_\beta$:
$$w^T_\kappa\int_{T^d}V^T_\beta(x)U^T_\alpha(x)\,W^T_\kappa(x)\,dx = (-1)^{|\alpha|}\,\frac{\prod_{i=1}^d(\kappa_i+\frac12)_{\alpha_i}\,(\kappa_{d+1}+\frac12)_{|\alpha|}}{\big(|\kappa|+\frac{d+1}{2}\big)_{2|\alpha|}}\,\alpha!\,\delta_{\alpha,\beta}.$$

Proof It is evident from the definition that $U^T_\alpha$ is a polynomial of degree $n$. Integrating by parts leads to
$$w^T_\kappa\int_{T^d}V^T_\beta(x)U^T_\alpha(x)\,W^T_\kappa(x)\,dx = (-1)^{|\alpha|}\,w^T_\kappa\int_{T^d}\Big[\frac{\partial^{|\alpha|}}{\partial x_1^{\alpha_1}\cdots\partial x_d^{\alpha_d}}V^T_\beta(x)\Big]\prod_{i=1}^d x_i^{\alpha_i+\kappa_i-1/2}\,(1-|x|)^{|\alpha|+\kappa_{d+1}-1/2}\,dx.$$
Since $V^T_\beta$ is an orthogonal polynomial with respect to $W^T_\kappa$, the integral on the left-hand side is zero for $|\beta| > |\alpha|$. For $|\beta|\le|\alpha|$ the fact that $V^T_\beta$ is a monic orthogonal polynomial gives
$$\frac{\partial^{|\alpha|}}{\partial x_1^{\alpha_1}\cdots\partial x_d^{\alpha_d}}V^T_\beta(x) = \alpha!\,\delta_{\alpha,\beta},$$
from which the stated formula follows. The fact that the $U^T_\alpha$ are biorthogonal to the $V^T_\beta$ shows that they are orthogonal polynomials with respect to $W^T_\kappa$.
Reproducing kernel with respect to $W^T_\kappa$ on $T^d$ Let $P_n(W^T_\kappa;\cdot,\cdot)$ denote the reproducing kernel of $\mathcal{V}^d_n(W^T_\kappa)$ as defined in (3.6.2). This kernel satisfies a remarkable and concise formula.

Theorem 5.3.4 For $\kappa_i > 0$, $1\le i\le d+1$, $x,y\in T^d$ and $\lambda_\kappa := |\kappa|+\frac{d-1}{2}$,
$$P_n(W^T_\kappa;x,y) = \frac{(2n+\lambda_\kappa)\,(\lambda_\kappa)_n}{\lambda_\kappa\,(\frac12)_n}\,c_\kappa\int_{[-1,1]^{d+1}}P^{(\lambda_\kappa-1/2,\,-1/2)}_n\big(2z(x,y,t)^2-1\big)\prod_{i=1}^{d+1}(1-t_i^2)^{\kappa_i-1}\,dt, \qquad(5.3.5)$$
where $z(x,y,t) = \sqrt{x_1y_1}\,t_1+\dots+\sqrt{x_dy_d}\,t_d+\sqrt{1-|x|}\sqrt{1-|y|}\,t_{d+1}$ and $[c_\kappa]^{-1} = \int_{[-1,1]^{d+1}}\prod_{i=1}^{d+1}(1-t_i^2)^{\kappa_i-1}\,dt$. If some $\kappa_i = 0$ then the formula holds under the limit relation (1.5.1).

This theorem will be proved in Subsection 8.2.2. We note that when $\kappa = 0$ we have the limiting case $P(W_0;x,y)$, which was given in Corollary 4.4.10.
5.4 Orthogonal Polynomials via Symmetric Functions

We now present a non-classical example: two families of orthogonal polynomials derived from symmetric polynomials.

5.4.1 Two general families of orthogonal polynomials

These orthogonal polynomials are derived from symmetric and antisymmetric polynomials. Although they are orthogonal with respect to unusual measures, they share an interesting property, that of having the largest number of distinct common zeros, which will be discussed in the next subsection.

We recall some basic facts about symmetric polynomials. A polynomial $f\in\Pi^d$ is called symmetric if $f$ is invariant under every permutation of its variables. The elementary symmetric polynomials in $\Pi^d_k$ are given by
$$e_k = e_k(x_1,\dots,x_d) = \sum_{1\le i_1<\dots<i_k\le d}x_{i_1}\cdots x_{i_k}, \qquad k = 1,2,\dots,d,$$
and they satisfy the generating function
$$\prod_{i=1}^d(1+rx_i) = \sum_{k=0}^d e_k(x_1,\dots,x_d)\,r^k.$$
The polynomials $e_1,\dots,e_d$ generate the algebra of symmetric polynomials; hence, any symmetric polynomial $f$ can be uniquely represented in terms of the elementary symmetric polynomials. Consider the mapping
$$u : x = (x_1,\dots,x_d)\mapsto\big(e_1(x),\dots,e_d(x)\big), \qquad(5.4.1)$$
where the $e_i$ are the elementary symmetric polynomials in $x$; this mapping is one-to-one on the region $S = \{x\in\mathbb{R}^d : x_1<x_2<\dots<x_d\}$. Denote the image of $S$ by $R$; that is,
$$R = \{u\in\mathbb{R}^d : u = u(x),\ x_1<x_2<\dots<x_d,\ x\in\mathbb{R}^d\}. \qquad(5.4.2)$$
Let $D(x)$ denote the Jacobian of this mapping. We claim that
$$D(x) = \det\Big[\Big(\frac{\partial e_i}{\partial x_j}\Big)_{i,j}\Big] = \prod_{1\le i<j\le d}(x_i-x_j). \qquad(5.4.3)$$
To prove this equation we use the following identities, which follow from the definition of the elementary symmetric functions:
$$\frac{\partial e_k}{\partial x_j} = e_{k-1}(x_1,\dots,x_{j-1},x_{j+1},\dots,x_d)$$
and
$$e_{k-1}(x_1,\dots,x_{j-1},x_{j+1},\dots,x_d) - e_{k-1}(x_2,\dots,x_d) = (x_1-x_j)\,e_{k-2}(x_2,\dots,x_{j-1},x_{j+1},\dots,x_d).$$
Subtracting the first column from the other columns in the determinant, using the above two identities and taking out the factors, we obtain
$$D(x) = \prod_{i=2}^d(x_1-x_i)\,D(x_2,\dots,x_d),$$
from which the formula (5.4.3) follows by induction.

Since the square of the Jacobian is a symmetric polynomial in $x$, under the mapping (5.4.1) it becomes a polynomial in $u$, which we denote by $\Delta(u)$; that is,
$$\Delta(u) = \prod_{1\le i<j\le d}(x_i-x_j)^2, \qquad u\in R.$$
Let $d\mu$ be a nonnegative measure on $\mathbb{R}$. Define a measure $\nu$ on $R$ as the image of the product measure $d\mu(x) = d\mu(x_1)\cdots d\mu(x_d)$ under the mapping (5.4.1); that is,
$$d\nu(u) = d\mu(x_1)\cdots d\mu(x_d), \qquad u\in R.$$
Our examples are orthogonal polynomials with respect to the measures
$$[\Delta(u)]^{-1/2}\,d\nu \qquad\text{and}\qquad [\Delta(u)]^{1/2}\,d\nu,$$
denoted by $P^{n,-1/2}_\alpha$ and $P^{n,1/2}_\alpha$, respectively.
The explicit formulae for these polynomials are given below. Let $\{p_n\}$ be the orthonormal polynomials with respect to $d\mu$ on $\mathbb{R}$. For $\alpha = (\alpha_1,\dots,\alpha_d)$ with $n = \alpha_1\ge\alpha_2\ge\dots\ge\alpha_d\ge0$ and $n\in\mathbb{N}_0$, set
$$P^{n,-1/2}_\alpha(u) = \sum_{\sigma\in S_d}p_{\alpha_1}(x_{\sigma_1})\cdots p_{\alpha_d}(x_{\sigma_d}), \qquad u\in R, \qquad(5.4.4)$$
where $S_d$ denotes the symmetric group on $d$ objects, so that the summation is performed over all permutations of $\{1,2,\dots,d\}$. By the fundamental theorem on symmetric functions, $P^{n,-1/2}_\alpha$ is a polynomial in $u$ of degree $n$.

We now define the second set of orthogonal polynomials. For $\alpha = (\alpha_1,\dots,\alpha_d)$ with $n = \alpha_1\ge\alpha_2\ge\dots\ge\alpha_d\ge0$ and $n\in\mathbb{N}_0$, set
$$P^{n,1/2}_\alpha(u) = \frac{E^n_\alpha(x)}{D(x)}, \qquad\text{where}\quad E^n_\alpha(x) = \det\big(p_{\alpha_i+d-i}(x_j)\big)^d_{i,j=1}. \qquad(5.4.5)$$
Since $E^n_\alpha(x)$ vanishes whenever $x_i = x_j$, $i\ne j$, the polynomial $D(x) = \prod_{1\le i<j\le d}(x_i-x_j)$ is a factor of $E^n_\alpha(x)$. A permutation of the variables can only change the signs of $E^n_\alpha$ and of $D$; since this happens to both, $E^n_\alpha/D$ is a symmetric polynomial. Since the polynomial $D$ is of degree $d-1$ in each of its variables $x_i$ and the polynomial $E^n_\alpha$ is of degree $n+d-1$ in $x_i$, it follows that $E^n_\alpha/D$ is a symmetric polynomial of degree $n$ in $x_i$ or, equivalently, a polynomial in $u$ of degree $n$.
Let us now verify that the systems $\{P^{n,-1/2}_\alpha\}$ and $\{P^{n,1/2}_\alpha\}$ are orthogonal on $R$ with respect to $\Delta^{-1/2}\,d\nu$ and $\Delta^{1/2}\,d\nu$, respectively. For any polynomial $f$ in $u$, that is, for any symmetric polynomial in $x$, changing variables gives
$$\int_R f(u)\,[\Delta(u)]^{-1/2}\,d\nu(u) = \int_S f(u(x))\,d\mu(x) = \frac{1}{d!}\int_{\mathbb{R}^d}f(u(x))\,d\mu(x),$$
and, similarly,
$$\int_R f(u)\,[\Delta(u)]^{1/2}\,d\nu(u) = \int_S f(u(x))\,D^2(x)\,d\mu(x).$$
In particular, it follows that
$$\int_R P^{n,-1/2}_\alpha(u)\,P^{m,-1/2}_\beta(u)\,[\Delta(u)]^{-1/2}\,d\nu(u) = \frac{1}{d!}\int_{\mathbb{R}^d}P^{n,-1/2}_\alpha(u(x))\,P^{m,-1/2}_\beta(u(x))\,d\mu(x).$$
From the last expression and (5.4.4) we see that the polynomials of the system $\{P^{n,-1/2}_\alpha\}$ are mutually orthogonal: the above integral is nonzero only when $n = m$ and $\alpha = \beta$. Moreover, if $\alpha'_1,\dots,\alpha'_{d'}$, $1\le d'\le d$, are the distinct elements of $\alpha$, and $m_i$ is the number of occurrences of $\alpha'_i$, then
$$\int_R\big[P^{n,-1/2}_\alpha(u)\big]^2\,[\Delta(u)]^{-1/2}\,d\nu(u) = m_1!\cdots m_{d'}!.$$
Hence, $\big\{(m_1!\cdots m_{d'}!)^{-1/2}\,P^{n,-1/2}_\alpha\big\}$ forms an orthonormal family on $R$. Similarly, for the system $\{P^{n,1/2}_\alpha\}$,
$$\int_R P^{n,1/2}_\alpha(u)\,P^{m,1/2}_\beta(u)\,[\Delta(u)]^{1/2}\,d\nu(u) = \int_S E^n_\alpha(x)\,E^m_\beta(x)\,d\mu(x) = \frac{1}{d!}\int_{\mathbb{R}^d}E^n_\alpha(x)\,E^m_\beta(x)\,d\mu(x),$$
from which we can proceed as above. The system $\{P^{n,1/2}_\alpha\}$ is orthonormal.

In the case $d = 2$ the mapping (5.4.1) takes the form
$$u = (u_1,u_2): \qquad u_1 = x+y, \quad u_2 = xy,$$
and the function $\Delta$ takes the form $\Delta(u_1,u_2) = (x-y)^2 = u_1^2-4u_2$. The orthogonal polynomials (5.4.4) are exactly the Koornwinder polynomials, discussed in Section 2.7, on the domain bounded by two lines and a parabola.
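The normalization $m_1!\cdots m_{d'}!$ for the symmetrized family (5.4.4) can be seen numerically. The sketch below (added here as an illustration) takes $d = 2$ with $d\mu$ the Lebesgue measure on $[-1,1]$ and $p_n$ the orthonormal Legendre polynomials, and evaluates $\frac{1}{2!}\iint P_\alpha P_\beta\,d\mu\,d\mu$ by tensor Gauss–Legendre quadrature:

```python
import numpy as np
from numpy.polynomial.legendre import leggauss

nodes, weights = leggauss(12)
X, Y = np.meshgrid(nodes, nodes)
WW = np.outer(weights, weights)

def p(n, x):                      # orthonormal Legendre polynomial
    c = np.zeros(n + 1); c[n] = 1.0
    return np.polynomial.legendre.legval(x, c) * np.sqrt((2 * n + 1) / 2)

def P(alpha, x, y):               # symmetrized product, eq. (5.4.4), d = 2
    return p(alpha[0], x) * p(alpha[1], y) + p(alpha[1], x) * p(alpha[0], y)

def inner(alpha, beta):
    return np.sum(WW * P(alpha, X, Y) * P(beta, X, Y)) / 2.0

print(inner((2, 1), (2, 0)))      # -> 0 (distinct indices)
print(inner((2, 1), (2, 1)))      # -> 1 (all parts of alpha distinct)
print(inner((1, 1), (1, 1)))      # -> 2 (a repeated part: 2! = 2)
```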
5.4.2 Common zeros and Gaussian cubature formulae

The orthogonal polynomials with respect to $\Delta^{\pm1/2}\,d\nu$ have the maximal number of common zeros. Let $\mathbb{P}^{(-1/2)}_n$ and $\mathbb{P}^{(1/2)}_n$ denote orthogonal bases of degree $n$ with respect to $[\Delta(u)]^{-1/2}\,d\nu$ and $[\Delta(u)]^{1/2}\,d\nu$, respectively.

Theorem 5.4.1 The orthogonal polynomials in $\mathbb{P}^{(-1/2)}_n$ or $\mathbb{P}^{(1/2)}_n$, orthogonal with respect to the measure $[\Delta(u)]^{-1/2}\,d\nu$ or $[\Delta(u)]^{1/2}\,d\nu$, respectively, have $\dim\Pi^d_{n-1}$ distinct common zeros.

Proof Let $p_n(x)$ be the orthonormal polynomials with respect to $d\mu(x)$ on $\mathbb{R}$, as in Subsection 5.4.1, and let $\{x_{k,n}\}^n_{k=1}$ be the zeros of $p_n$, ordered by $x_{1,n}<\dots<x_{n,n}$. From the formula (5.4.4) it follows that $P^{n,-1/2}_\alpha$ vanishes on every $u_{\gamma,n}$, the image of $(x_{\gamma_1,n},\dots,x_{\gamma_d,n})$ under the map (5.4.1), where the $x_{\gamma_i,n}$ are zeros of $p_n$. Using the symmetric mapping (5.4.1) and taking into account the restriction $x_1\le x_2\le\dots\le x_d$ in the definition of $R$, we see that the number of distinct common zeros of $\mathbb{P}^{(-1/2)}_n$ is equal to the cardinality of the set $\{\beta\in\mathbb{N}^d_0 : 0\le\beta_d\le\dots\le\beta_1\le n-1\}$. Denote this cardinality by $N_{n-1,d}$. It follows, by setting $\beta_1 = k$, that
$$N_{n-1,d} = \sum_{k=0}^{n-1}\#\{\beta\in\mathbb{N}^{d-1}_0 : 0\le\beta_{d-1}\le\dots\le\beta_1\le k\} = \sum_{k=0}^{n-1}N_{k,d-1},$$
which is the same as the recursion $\dim\Pi^d_{n-1} = \sum_{k=0}^{n-1}\dim\Pi^{d-1}_k$, and induction shows that $N_{n-1,d} = \dim\Pi^d_{n-1}$. Therefore $\mathbb{P}^{(-1/2)}_n$ has $\dim\Pi^d_{n-1}$ distinct common zeros.

Similarly, let $\{t_{k,n+d-1}\}$ be the zeros of the orthogonal polynomial $p_{n+d-1}$. From the formula (5.4.5) it follows that $P^{n,1/2}_\alpha$ vanishes on every $u_{\gamma,n+d-1}$, the image of $(t_{\gamma_1,n+d-1},\dots,t_{\gamma_d,n+d-1})$ under the mapping (5.4.1). The number of distinct common zeros of $\mathbb{P}^{(1/2)}_n$ is equal to the cardinality of the set $\{\beta\in\mathbb{N}^d_0 : 0\le\beta_d<\dots<\beta_1\le n+d-2\}$, which is easily seen to be equal to the cardinality of the set defined in the previous paragraph. Hence $\mathbb{P}^{(1/2)}_n$ also has $\dim\Pi^d_{n-1}$ distinct common zeros.

Together with Theorem 3.8.4, this shows that Gaussian cubature formulae exist for all the weight functions $\Delta(u)^{\pm1/2}\,d\nu$. Moreover, we can write down the cubature formulae explicitly. Recall from Subsection 5.4.1 that $p_n$ is the $n$th orthogonal polynomial with respect to $d\mu$ on $\mathbb{R}$ and that the set $\{x_{k,n}\}^n_{k=1}$ contains the zeros of $p_n$. The Gaussian quadrature formula with respect to $d\mu$ takes the form
$$\int_{\mathbb{R}}f(x)\,d\mu = \sum_{k=1}^n f(x_{k,n})\,\lambda_{k,n}, \qquad f\in\Pi^1_{2n-1}. \qquad(5.4.6)$$
Let $u_{\gamma,n}$ denote the image of $x_{\gamma,n} = (x_{\gamma_1,n},\dots,x_{\gamma_d,n})$ under the map (5.4.1).
Theorem 5.4.2 Using the notation of Subsection 5.4.1, both the measures $[\Delta(u)]^{-1/2}\,d\nu$ and $[\Delta(u)]^{1/2}\,d\nu$ admit Gaussian cubature formulae. Moreover, the formulae of degree $2n-1$ take the form
$$\int_R f(u)\,[\Delta(u)]^{-1/2}\,d\nu(u) = \sum_{\gamma_1=1}^n\sum_{\gamma_2=1}^{\gamma_1}\dots\sum_{\gamma_d=1}^{\gamma_{d-1}}\lambda^{-1/2}_{\gamma,n}\,f(u_{\gamma,n})$$
and
$$\int_R f(u)\,[\Delta(u)]^{1/2}\,d\nu(u) = \sum_{\gamma_1=1}^{n+d-1}\sum_{\gamma_2=1}^{\gamma_1-1}\dots\sum_{\gamma_d=1}^{\gamma_{d-1}-1}\lambda^{1/2}_{\gamma,n}\,f(u_{\gamma,n+d-1}),$$
where
$$\lambda^{-1/2}_{\gamma,n} = \frac{\lambda^{m_1}_{\gamma'_1,n}\cdots\lambda^{m_{d'}}_{\gamma'_{d'},n}}{m_1!\cdots m_{d'}!} \qquad\text{and}\qquad \lambda^{1/2}_{\gamma,n} = D(x_{\gamma,n+d-1})^2\,\lambda_{\gamma_1,n+d-1}\cdots\lambda_{\gamma_d,n+d-1},$$
and where, for a given multi-index $\gamma\in\mathbb{N}^d_0$, $\gamma'_1,\dots,\gamma'_{d'}$ are its distinct elements and $m_1,\dots,m_{d'}$ their respective multiplicities.

Proof We need only determine the weights $\lambda^{-1/2}_{\gamma,n}$ and $\lambda^{1/2}_{\gamma,n}$ in the two formulae.

First notice that, by changing variables, the first cubature formula can be rewritten as
$$\frac{1}{d!}\int_{\mathbb{R}^d}f(x)\,d\mu(x) = \sum_{\gamma_1=1}^n\sum_{\gamma_2=1}^{\gamma_1}\dots\sum_{\gamma_d=1}^{\gamma_{d-1}}\lambda^{-1/2}_{\gamma,n}\,f(x_{\gamma,n}),$$
which is exact for symmetric polynomials $f$ whose degree in each variable is at most $2n-1$. Let $\ell_{k,n}$ be the polynomial of degree $n-1$ uniquely determined by $\ell_{k,n}(x_{j,n}) = \delta_{k,j}$, $1\le j\le n$. Let
$$f(x) = \sum_{\sigma\in S_d}\ell_{\gamma_1,n}(x_{\sigma_1})\cdots\ell_{\gamma_d,n}(x_{\sigma_d}),$$
which is symmetric with degree $n-1$ in each of its variables. There are $d!$ summands in the definition of $f$. On the one hand we can write the integral on the left-hand side of the first cubature formula as a sum of products of one-variable integrals; using (5.4.6), the integral is equal to $\lambda_{\gamma_1,n}\cdots\lambda_{\gamma_d,n}$. On the other hand, the right-hand side of the first cubature formula equals $m_1!\cdots m_{d'}!\,\lambda^{-1/2}_{\gamma,n}$, since only those terms with $\beta = \gamma$ count.

In the same way as above, the second cubature formula can be rewritten as
$$\frac{1}{d!}\int_{\mathbb{R}^d}f(x)\,D(x)^2\,d\mu(x) = \sum_{\gamma_1=1}^{n+d-1}\sum_{\gamma_2=1}^{\gamma_1-1}\dots\sum_{\gamma_d=1}^{\gamma_{d-1}-1}\lambda^{1/2}_{\gamma,n}\,f(x_{\gamma,n+d-1}),$$
which is exact for symmetric polynomials $f$ whose degree in each variable is at most $2n-1$. Now let
$$f(x) = \Big[\det\big(\ell_{\gamma_i,n+d-1}(x_j)\big)^d_{i,j=1}\Big]^2\Big/D(x)^2,$$
which is a symmetric polynomial with degree at most $2n-2$ in each of its variables. Clearly, the right-hand side of the second cubature formula is simply $\lambda^{1/2}_{\gamma,n}/D(x_{\gamma,n+d-1})^2$, while, using the Lagrange expansion of the determinant,
$$\det\big(\ell_{\gamma_i,n+d-1}(x_j)\big)^d_{i,j=1} = \sum_{\sigma\in S_d}\mathrm{sign}(\sigma)\,\ell_{\gamma_1,n+d-1}(x_{\sigma_1})\cdots\ell_{\gamma_d,n+d-1}(x_{\sigma_d}),$$
we see that the left-hand side of the formula is equal to $\lambda_{\gamma_1,n+d-1}\cdots\lambda_{\gamma_d,n+d-1}$, which proves the theorem.
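The first cubature formula of Theorem 5.4.2 can be demonstrated concretely. The sketch below (added here as an illustration) takes $d = 2$ with $d\mu$ the Lebesgue measure on $[-1,1]$, so the $x_{k,n}$ and $\lambda_{k,n}$ are Gauss–Legendre nodes and weights, and checks the rewritten form $\frac{1}{2!}\iint g = \sum_{\gamma_1\ge\gamma_2}\lambda^{-1/2}_{\gamma,n}\,g(x_{\gamma,n})$ on a symmetric polynomial of per-variable degree $\le 2n-1$:

```python
from numpy.polynomial.legendre import leggauss

n = 4
x, lam = leggauss(n)

def cub(g):
    # sum over gamma_1 >= gamma_2; repeated node gets weight lam_i^2 / 2!
    s = 0.0
    for i in range(n):
        for j in range(i + 1):
            w = lam[i] * lam[j] / (2.0 if i == j else 1.0)
            s += w * g(x[i], x[j])
    return s

g = lambda s, t: (s * t)**2 + (s + t)**3   # symmetric, degree 3 per variable
print(cub(g))   # -> 2/9 = (1/2) * int_{[-1,1]^2} g ds dt
```

Here $\iint[(st)^2+(s+t)^3]\,ds\,dt = 4/9$ (the cubic part integrates to zero), so half of it is $2/9$.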
5.5 Chebyshev Polynomials of Type $A_d$

The Chebyshev polynomials of type $A_d$ are orthogonal polynomials derived from orthogonal exponential functions that are invariant under the reflection group $A_d$, just as the Chebyshev polynomials of one variable are derived from $e^{i\theta}\pm e^{-i\theta}$, which is invariant or skew-invariant under $\mathbb{Z}_2$. They are the $d$-variable extension of the second family of Koornwinder polynomials, discussed in Section 2.9.

The space $\mathbb{R}^d$ can be identified with the hyperplane
$$\mathbb{R}^{d+1}_H := \{\mathbf{t}\in\mathbb{R}^{d+1} : t_1+\dots+t_{d+1} = 0\}\subset\mathbb{R}^{d+1}.$$
In this section we shall adopt the convention of using a bold letter, such as $\mathbf{t} = (t_1,\dots,t_{d+1})$, to denote the homogeneous coordinates in $\mathbb{R}^{d+1}_H$, which satisfy $t_1+\dots+t_{d+1} = 0$. The root lattice of the group $A_d$ is generated by the reflections $\sigma_{ij}$, defined in homogeneous coordinates by
$$\mathbf{t}\sigma_{ij} := \mathbf{t} - 2\,\frac{\langle\mathbf{t},e_{i,j}\rangle}{\langle e_{i,j},e_{i,j}\rangle}\,e_{i,j} = \mathbf{t}-(t_i-t_j)\,e_{i,j}, \qquad\text{where } e_{i,j} := e_i-e_j.$$
This group can be identified with the symmetric group on $d+1$ elements. The fundamental domain of $A_d$ is defined by
$$\Omega = \big\{\mathbf{t}\in\mathbb{R}^{d+1}_H : -1\le t_i-t_j\le1,\ 1\le i<j\le d+1\big\},$$
which is the domain that tiles $\mathbb{R}^{d+1}_H$ under $A_d$ in the sense that
$$\sum_{\mathbf{k}\in\mathbb{Z}^{d+1}_H}\chi_\Omega(\mathbf{t}+\mathbf{k}) = 1, \qquad\text{for almost all } \mathbf{t}\in\mathbb{R}^{d+1}_H. \qquad(5.5.1)$$
The dual lattice of $\mathbb{Z}^{d+1}_H$ is given by $\{\mathbf{k}/(d+1) : \mathbf{k}\in\mathbb{H}\}$, where
$$\mathbb{H} := \{\mathbf{k}\in\mathbb{Z}^{d+1}\cap\mathbb{R}^{d+1}_H : k_1\equiv\dots\equiv k_{d+1}\bmod (d+1)\}.$$
There is a close relation between tiling and exponential functions. We define the exponential functions
$$\phi_{\mathbf{k}}(\mathbf{t}) := \exp\Big(\frac{2\pi i}{d+1}\,\mathbf{k}^{\mathsf T}\mathbf{t}\Big), \qquad \mathbf{k}\in\mathbb{H}.$$
It is easy to see that these functions are periodic in the sense that $\phi_{\mathbf{k}}(\mathbf{t}) = \phi_{\mathbf{k}}(\mathbf{t}+\mathbf{j})$ for all $\mathbf{j}\in\mathbb{Z}^{d+1}\cap\mathbb{R}^{d+1}_H$. Furthermore, they are orthogonal:
$$\frac{1}{\sqrt{d+1}}\int_\Omega\phi_{\mathbf{k}}(\mathbf{t})\,\overline{\phi_{\mathbf{j}}(\mathbf{t})}\,d\mathbf{t} = \delta_{\mathbf{k},\mathbf{j}}. \qquad(5.5.2)$$
This orthogonality can be deduced by taking the Fourier transform of (5.5.1) and using the Poisson summation formula. For $d = 2$ the orthogonality is verified directly in Proposition 2.9.1. However, the choice of $\Omega$ here is different from that of Section 2.9, which is the reason for the different normalization of the integral over $\Omega$.
160 Examples of Orthogonal Polynomials in Several Variables
Define the invariant operator $\mathcal{P}^+$ and the skew-invariant operator $\mathcal{P}^-$ by
$$\mathcal{P}^{\pm}f(\mathbf{t}) := \frac{1}{(d+1)!}\Big[\sum_{\sigma\in A^+} f(\mathbf{t}\sigma) \pm \sum_{\sigma\in A^-} f(\mathbf{t}\sigma)\Big],$$
where $A^+$ ($A^-$) contains the even (odd) elements in $A_d$. The invariant and skew-invariant functions defined by, respectively,
$$\mathrm{TC}_{\mathbf{k}}(\mathbf{t}) := \mathcal{P}^+\phi_{\mathbf{k}}(\mathbf{t}),\ \ \mathbf{k}\in\Lambda, \quad\text{and}\quad \mathrm{TS}_{\mathbf{k}}(\mathbf{t}) := \frac{1}{i}\,\mathcal{P}^-\phi_{\mathbf{k}}(\mathbf{t}),\ \ \mathbf{k}\in\Lambda^{\circ},$$
are analogues of cosine and sine functions; here
$$\Lambda := \{\mathbf{k}\in\mathbb{H} : k_1\ge k_2\ge\dots\ge k_{d+1}\}, \qquad \Lambda^{\circ} := \{\mathbf{k}\in\mathbb{H} : k_1>k_2>\dots>k_{d+1}\}.$$
The fundamental domain of the $A_d$ lattice is the union of congruent simplexes under the action of the group $A_d$. We fix one simplex as
$$\triangle := \big\{\mathbf{t}\in\mathbb{R}^{d+1}_H : 0\le t_i-t_j\le 1,\ 1\le i<j\le d+1\big\}$$
and define an inner product on $\triangle$ by
$$\langle f,g\rangle_{\triangle} := \frac{1}{|\triangle|}\int_{\triangle} f(\mathbf{t})\overline{g(\mathbf{t})}\,\mathrm{d}\mathbf{t} = \frac{(d+1)!}{\sqrt{d+1}}\int_{\triangle} f(\mathbf{t})\overline{g(\mathbf{t})}\,\mathrm{d}\mathbf{t}.$$
Proposition 5.5.1 For $\mathbf{k},\mathbf{j}\in\Lambda$, we have
$$\langle \mathrm{TC}_{\mathbf{k}},\mathrm{TC}_{\mathbf{j}}\rangle_{\triangle} = \frac{\delta_{\mathbf{k},\mathbf{j}}}{|\mathbf{k}A_d|} \quad\text{and}\quad \langle \mathrm{TS}_{\mathbf{k}},\mathrm{TS}_{\mathbf{j}}\rangle_{\triangle} = \frac{\delta_{\mathbf{k},\mathbf{j}}}{(d+1)!},\quad \mathbf{k},\mathbf{j}\in\Lambda^{\circ},$$
where $|\mathbf{k}A_d|$ denotes the cardinality of the orbit $\mathbf{k}A_d := \{\mathbf{k}\sigma : \sigma\in A_d\}$.

Proof Both these relations follow from the fact that if $f(\mathbf{t})\overline{g(\mathbf{t})}$ is invariant under $A_d$ then
$$\frac{1}{\sqrt{d+1}}\int_{\Omega} f(\mathbf{t})\overline{g(\mathbf{t})}\,\mathrm{d}\mathbf{t} = \langle f,g\rangle_{\triangle}$$
and from the orthogonality of the $\phi_{\mathbf{k}}(\mathbf{t})$ over $\Omega$.
The Chebyshev polynomials of type $A_d$ are images of the generalized cosine and sine functions under the change of variables $\mathbf{t}\mapsto z$, where $z_1,\dots,z_d$ denote the first $d$ elementary symmetric functions of $\mathrm{e}^{2\pi i t_1},\dots,\mathrm{e}^{2\pi i t_{d+1}}$ and are defined by
$$z_k := \binom{d+1}{k}^{-1}\sum_{J\subset\mathbb{N}_{d+1},\,|J|=k} \mathrm{e}^{2\pi i\sum_{j\in J}t_j},$$ (5.5.3)
where $\mathbb{N}_{d+1}=\{1,2,\dots,d+1\}$. Let
$$\mathbf{v}^k := \big((d+1-k)^k,\ (-k)^{d+1-k}\big),$$
where $t^k$ means that $t$ is repeated $k$ times. Then $\mathbf{v}^k/(d+1)$, $k=1,2,\dots,d$, are vertices of the simplex $\triangle$ and $z_k$ equals $\mathrm{TC}_{\mathbf{v}^k}(\mathbf{t})$, as can be seen from $(\mathbf{v}^k)^{\mathsf{T}}\mathbf{t} = (d+1)(t_1+\dots+t_k)$, by the homogeneity of $\mathbf{t}$. Thus $\mathrm{TC}_{\mathbf{k}}$ is a symmetric polynomial in $\mathrm{e}^{2\pi i t_1},\dots,\mathrm{e}^{2\pi i t_{d+1}}$ and hence a polynomial in $z_1,\dots,z_d$. For $\mathrm{TS}_{\mathbf{k}}$ we need the following lemma:
5.5 Chebyshev Polynomials of Type $A_d$ 161

Lemma 5.5.2 Let
$$\mathbf{v}^{\circ} := \Big(\frac{d(d+1)}{2},\ \frac{(d-2)(d+1)}{2},\ \dots,\ \frac{(2-d)(d+1)}{2},\ -\frac{d(d+1)}{2}\Big).$$
Then
$$\mathrm{TS}_{\mathbf{v}^{\circ}}(\mathbf{t}) = \frac{(2i)^{d(d+1)/2}}{(d+1)!}\prod_{1\le\mu<\nu\le d+1}\sin\pi(t_{\mu}-t_{\nu}).$$ (5.5.4)
Proof Let $\rho := (d, d-1, \dots, 1, 0)\in\mathbb{N}_0^{d+1}$. Then
$$\exp\Big(\frac{2\pi i}{d+1}\,\mathbf{v}^{\circ\mathsf{T}}\mathbf{t}\Big) = \exp[-\pi i\,d(t_1+\dots+t_{d+1})]\exp(2\pi i\,\rho^{\mathsf{T}}\mathbf{t}) = \exp(2\pi i\,\rho^{\mathsf{T}}\mathbf{t}).$$
Using the Vandermonde determinant (see, for example, p. 40 of Macdonald [1995]),
$$\sum_{\sigma\in S_{d+1}}\operatorname{sign}(\sigma)\,x^{\rho\sigma} = \det\big(x_i^{d+1-j}\big)_{1\le i,j\le d+1} = \prod_{1\le i<j\le d+1}(x_i-x_j),$$
and setting $x_j=\mathrm{e}^{2\pi i t_j}$, we obtain
$$\mathrm{TS}_{\mathbf{v}^{\circ}}(\mathbf{t}) = \frac{1}{|G|}\sum_{\sigma\in G}\operatorname{sign}(\sigma)\exp\Big(\frac{2\pi i}{d+1}\,\mathbf{v}^{\circ\mathsf{T}}\mathbf{t}\sigma\Big) = \frac{1}{(d+1)!}\prod_{1\le\mu<\nu\le d+1}\big(\mathrm{e}^{2\pi i t_{\mu}}-\mathrm{e}^{2\pi i t_{\nu}}\big) = \frac{1}{(d+1)!}\prod_{1\le\mu<\nu\le d+1}\big(\mathrm{e}^{\pi i t_{\mu\nu}}-\mathrm{e}^{-\pi i t_{\mu\nu}}\big),$$
where $t_{\mu\nu} := t_{\mu}-t_{\nu}$ and the second equality follows from $t_1+\dots+t_{d+1}=0$; from this we have (5.5.4).
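Since the chain of equalities above is easy to mis-transcribe, here is a small numerical sanity check of (5.5.4) for $d=2$, where $\mathbf{v}^{\circ}=(3,0,-3)$ (this sketch is our addition, not part of the original text; the names `sign`, `lhs`, `rhs` are ours):

```python
import cmath, math
from itertools import permutations

def sign(perm):
    # parity of a permutation of (0, ..., n-1), computed by sorting with swaps
    s, p = 1, list(perm)
    for i in range(len(p)):
        while p[i] != i:
            j = p[i]
            p[i], p[j] = p[j], p[i]
            s = -s
    return s

d = 2
v0 = (3.0, 0.0, -3.0)        # v° for d = 2: entries (d - 2j)(d+1)/2, j = 0, 1, 2
t = (0.1, 0.25, -0.35)       # a point of the hyperplane: coordinates sum to 0

# alternating sum from the proof: (1/(d+1)!) sum_sigma sign(sigma) e^{2πi v°·(tσ)/(d+1)}
lhs = sum(sign(p) * cmath.exp(2j * cmath.pi / (d + 1)
          * sum(v0[k] * t[p[k]] for k in range(d + 1)))
          for p in permutations(range(d + 1))) / math.factorial(d + 1)

# right-hand side of (5.5.4)
rhs = (2j) ** (d * (d + 1) // 2) / math.factorial(d + 1) * math.prod(
    math.sin(math.pi * (t[a] - t[b]))
    for a in range(d + 1) for b in range(a + 1, d + 1))

assert abs(lhs - rhs) < 1e-12
```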
The same argument that proves (5.5.4) also shows that
$$\mathrm{TS}_{\mathbf{k}+\mathbf{v}^{\circ}}(\mathbf{t}) = \frac{1}{(d+1)!}\det\big(x_i^{\lambda_j+\rho_j}\big)_{1\le i,j\le d+1}, \qquad \mathbf{k}\in\Lambda,$$
where $x_i=\mathrm{e}^{2\pi i t_i}$ and $\lambda := (k_1-k_{d+1},\ k_2-k_{d+1},\ \dots,\ k_d-k_{d+1},\ 0)$. Since $\lambda$ is a partition, $\mathrm{TS}_{\mathbf{k}+\mathbf{v}^{\circ}}(\mathbf{t})$ is divisible by $\mathrm{TS}_{\mathbf{v}^{\circ}}$ in the ring $\mathbb{Z}[x_1,\dots,x_d]$ (cf. p. 40 of Macdonald [1995]) and the quotient
$$s_{\lambda}(x_1,\dots,x_d) = \mathrm{TS}_{\mathbf{k}+\mathbf{v}^{\circ}}(\mathbf{t})/\mathrm{TS}_{\mathbf{v}^{\circ}}(\mathbf{t})$$
is a symmetric polynomial in $x_1,\dots,x_d$, which is the Schur function in the variables $x_1,\dots,x_d$ corresponding to the partition $\lambda$. In particular, $s_{\lambda}$ is a polynomial in the elementary symmetric polynomials $z_1,\dots,z_d$; more precisely, it is a polynomial in $z$ of degree $(k_1-k_{d+1})/(d+1)$, as shown by formula (3.5) in Macdonald [1995]. This is our analogue of the Chebyshev polynomials of the second kind.
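The quotient formula can likewise be tested numerically (our sketch, not from the text): for $d=2$, i.e. three variables, and $\lambda=(2,1,0)$ the ratio of alternants must equal $s_{(2,1,0)} = e_1e_2 - e_3$ by the dual Jacobi–Trudi identity.

```python
def det3(m):
    # 3x3 determinant by cofactor expansion
    return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
          - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
          + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))

x = [0.3 + 0.4j, -0.7 + 0.2j, 1.1 - 0.5j]          # generic sample points
lam, rho = (2, 1, 0), (2, 1, 0)
num = det3([[xi ** (lam[j] + rho[j]) for j in range(3)] for xi in x])
den = det3([[xi ** rho[j] for j in range(3)] for xi in x])
e1 = x[0] + x[1] + x[2]
e2 = x[0] * x[1] + x[0] * x[2] + x[1] * x[2]
e3 = x[0] * x[1] * x[2]
assert abs(num / den - (e1 * e2 - e3)) < 1e-9
```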
Definition 5.5.3 Define the index mapping $\alpha\colon\Lambda\to\mathbb{N}_0^d$ by
$$\alpha_i = \alpha_i(\mathbf{k}) := \frac{k_i-k_{i+1}}{d+1}, \qquad 1\le i\le d.$$ (5.5.5)
Under the change of variables $\mathbf{t}\mapsto z=(z_1,\dots,z_d)$ in (5.5.3), define
$$T_{\alpha}(z) := \mathrm{TC}_{\mathbf{k}}(\mathbf{t}) \quad\text{and}\quad U_{\alpha}(z) := \frac{\mathrm{TS}_{\mathbf{k}+\mathbf{v}^{\circ}}(\mathbf{t})}{\mathrm{TS}_{\mathbf{v}^{\circ}}(\mathbf{t})}, \qquad \alpha\in\mathbb{N}_0^d.$$
The polynomials $T_{\alpha}(z)$ and $U_{\alpha}(z)$ are called the Chebyshev polynomials of type $A_d$, of the first and the second kind, respectively.

The polynomials $T_{\alpha}$ and $U_{\alpha}$ are of degree $|\alpha|$ in $z$. They are analogues of the classical Chebyshev polynomials of the first kind and the second kind, respectively. In particular, we shall show that they are respectively orthogonal on the domain $\triangle^*$, the image of $\triangle$ under $\mathbf{t}\mapsto z$. We need the Jacobian of the change of variables. Let us define
$$w(x) = w(x(\mathbf{t})) := \prod_{1\le\mu<\nu\le d+1}\sin^2\pi(t_{\mu}-t_{\nu})$$
under this change of variables.
Lemma 5.5.4 The Jacobian of the change of variables $\mathbf{t}\mapsto z$ is given by
$$\det\frac{\partial(z_1,z_2,\dots,z_d)}{\partial(t_1,t_2,\dots,t_d)} = \prod_{k=1}^d \frac{2\pi i}{\binom{d+1}{k}}\prod_{1\le\mu<\nu\le d+1}\big|\sin\pi(t_{\mu}-t_{\nu})\big|.$$

Proof Regarding $t_1,t_2,\dots,t_{d+1}$ as independent variables, one sees that
$$\frac{\partial z_k}{\partial t_j} = \frac{2\pi i}{\binom{d+1}{k}}\sum_{j\in I\subset\mathbb{N}_{d+1},\,|I|=k} \mathrm{e}^{2\pi i\sum_{\nu\in I}t_{\nu}}.$$
For each fixed $j$, let $\mathbb{N}_m^{(j)}$ denote the set $\mathbb{N}_m\setminus\{j\}$ and split $I\subset\mathbb{N}_{d+1}$ into two parts, one containing $\{j,d+1\}$ and the other not. We then obtain, after canceling the common factor,
$$\frac{\partial z_k}{\partial t_j} - \frac{\partial z_k}{\partial t_{d+1}} = \frac{2\pi i}{\binom{d+1}{k}}\Bigg[\sum_{j\in I\subset\mathbb{N}_{d+1}^{(d+1)},\,|I|=k} - \sum_{d+1\in I\subset\mathbb{N}_{d+1}^{(j)},\,|I|=k}\Bigg]\mathrm{e}^{2\pi i\sum_{\nu\in I}t_{\nu}} = \frac{2\pi i}{\binom{d+1}{k}}\big(\mathrm{e}^{2\pi i t_j}-\mathrm{e}^{2\pi i t_{d+1}}\big)\sum_{I\subset\mathbb{N}_d^{(j)},\,|I|=k-1}\mathrm{e}^{2\pi i\sum_{\nu\in I}t_{\nu}}.$$
Hence, setting $f^d_{j,k} := \sum_{I\subset\mathbb{N}_d^{(j)},\,|I|=k-1}\mathrm{e}^{2\pi i\sum_{\nu\in I}t_{\nu}}$ and defining the matrix $F_d := \big(f^d_{j,k}\big)_{1\le j,k\le d}$, we have
$$\det\frac{\partial(z_1,z_2,\dots,z_d)}{\partial(t_1,t_2,\dots,t_d)} = \prod_{k=1}^d\frac{2\pi i}{\binom{d+1}{k}}\prod_{j=1}^d\big(\mathrm{e}^{2\pi i t_j}-\mathrm{e}^{2\pi i t_{d+1}}\big)\det F_d.$$
By definition $f^d_{j,1}=1$; splitting $\mathbb{N}_d^{(j)}$ into a part containing $d$ and a part not containing $d$, it follows that
$$f^d_{j,k} - f^d_{d,k} = \big(\mathrm{e}^{2\pi i t_d}-\mathrm{e}^{2\pi i t_j}\big)\sum_{I\subset\mathbb{N}_d^{(j,d)},\,|I|=k-2}\mathrm{e}^{2\pi i\sum_{\nu\in I}t_{\nu}} = \big(\mathrm{e}^{2\pi i t_d}-\mathrm{e}^{2\pi i t_j}\big)f^{d-1}_{j,k-1},$$
which allows us to use induction and show that
$$\det F_d = \prod_{1\le\mu\le d-1}\big(\mathrm{e}^{2\pi i t_{\mu}}-\mathrm{e}^{2\pi i t_d}\big)\det F_{d-1} = \dots = \prod_{1\le\mu<\nu\le d}\big(\mathrm{e}^{2\pi i t_{\mu}}-\mathrm{e}^{2\pi i t_{\nu}}\big).$$
The use of (5.5.4) now proves the lemma.
From the homogeneity of $\mathbf{t}$, the $z_k$ in (5.5.3) satisfy $\bar z_k = z_{d+1-k}$. Hence, we can define real coordinates $x_1,\dots,x_d$ by
$$x_k = \frac{z_k+z_{d+1-k}}{2}, \qquad x_{d+1-k} = \frac{z_k-z_{d+1-k}}{2i} \qquad\text{for } 1\le k\le\lfloor d/2\rfloor,$$
and we define $x_{(d+1)/2} = z_{(d+1)/2}/\sqrt{2}$ when $d$ is odd. Instead of making the change of variables $\mathbf{t}\mapsto z$, we consider the map $\mathbf{t}\mapsto x$ of $\mathbb{R}^{d+1}_H\to\mathbb{R}^d$. The image of the simplex $\triangle$ under this mapping is
$$\triangle^* := \Big\{x = x(\mathbf{t})\in\mathbb{R}^d : \mathbf{t}\in\mathbb{R}^{d+1}_H,\ \prod_{1\le i<j\le d+1}\sin\pi(t_i-t_j)\ge 0\Big\}.$$
For $d=2$, this is the second family of Koornwinder polynomials, presented in Section 2.9, and $\triangle^*$ is the region bounded by the hypocycloid.
The Chebyshev polynomials are orthogonal with respect to the weight functions $W_{-1/2}$ and $W_{1/2}$, respectively, on $\triangle^*$, where
$$W_{\alpha}(z) := \prod_{1\le\mu<\nu\le d+1}\big|\sin\pi(t_{\mu}-t_{\nu})\big|^{2\alpha}.$$
We will need the cases $\alpha=-\tfrac12$ and $\alpha=\tfrac12$ of the weighted inner product
$$\langle f,g\rangle_{W_{\alpha}} := c_{\alpha}\int_{\triangle^*} f(z)\overline{g(z)}\,W_{\alpha}(z)\,\mathrm{d}z,$$
where $c_{\alpha}$ is a normalization constant, $c_{\alpha} := 1\big/\int_{\triangle^*} W_{\alpha}(z)\,\mathrm{d}z$.

The orthogonality of $T_{\alpha}$ and $U_{\alpha}$, respectively, then follows from the orthogonality of $\mathrm{TC}_{\mathbf{k}}$ and $\mathrm{TS}_{\mathbf{k}}$ with the change of variables. More precisely, Proposition 5.5.1 leads to the following theorem:
Theorem 5.5.5 The polynomials $T_{\alpha}$ and $U_{\alpha}$ are orthogonal polynomials with respect to $W_{-1/2}$ and $W_{1/2}$, respectively, and
$$\langle T_{\alpha},T_{\beta}\rangle_{W_{-1/2}} = \frac{\delta_{\alpha,\beta}}{|\mathbf{k}A_d|} \quad\text{and}\quad \langle U_{\alpha},U_{\beta}\rangle_{W_{1/2}} = \frac{\delta_{\alpha,\beta}}{(d+1)!}$$
for $\alpha,\beta\in\mathbb{N}_0^d$, where $\mathbf{k}$ is defined by $\alpha=\alpha(\mathbf{k})$.

Both $T_{\alpha}(z)$ and $U_{\alpha}(z)$ are polynomials of degree $|\alpha|=\alpha_1+\dots+\alpha_d$ in $z$. Moreover, they both satisfy a simple recursion relation.
Theorem 5.5.6 Let $P_{\alpha}$ denote either $T_{\alpha}$ or $U_{\alpha}$. Then
$$\overline{P_{\alpha}}(z) = P_{\alpha_d,\alpha_{d-1},\dots,\alpha_1}(z), \qquad \alpha\in\mathbb{N}_0^d,$$ (5.5.6)
and $P_{\alpha}$ satisfies the recursion relation
$$\binom{d+1}{i}z_i P_{\alpha}(z) = \sum_{\mathbf{j}\in\mathbf{v}^i A_d} P_{\alpha+\alpha(\mathbf{j})}(z), \qquad \alpha\in\mathbb{N}_0^d,$$ (5.5.7)
in which the components of $\alpha(\mathbf{j})$, $\mathbf{j}\in\mathbf{v}^i A_d$, have values in $\{-1,0,1\}$, $U_{\alpha}(z)=0$ whenever $\alpha$ has a component $\alpha_i=-1$, and
$$T_0(z)=1, \qquad T_{e_k}(z)=z_k, \quad 1\le k\le d,$$
$$U_0(z)=1, \qquad U_{e_k}(z)=\binom{d+1}{k}z_k, \quad 1\le k\le d.$$
Proof The relation (5.5.6) follows readily from (5.5.5) and the fact that complex conjugation replaces the differences $k_i-k_j$ by $k_j-k_i$. For a proof of the recursion relation we need
$$\mathrm{TC}_{\mathbf{j}}(\mathbf{t})\,\mathrm{TC}_{\mathbf{k}}(\mathbf{t}) = \frac{1}{|A_d|}\sum_{\sigma\in A_d}\mathrm{TC}_{\mathbf{k}+\mathbf{j}\sigma}(\mathbf{t}), \qquad \mathbf{j},\mathbf{k}\in\Lambda,$$ (5.5.8)
$$\mathrm{TC}_{\mathbf{j}}(\mathbf{t})\,\mathrm{TS}_{\mathbf{k}}(\mathbf{t}) = \frac{1}{|A_d|}\sum_{\sigma\in A_d}\mathrm{TS}_{\mathbf{k}+\mathbf{j}\sigma}(\mathbf{t}), \qquad \mathbf{j},\mathbf{k}\in\Lambda,$$ (5.5.9)
which follow, using simple computation, from the definitions of $\mathrm{TC}_{\mathbf{k}}$ and $\mathrm{TS}_{\mathbf{k}}$. The relation (5.5.7) follows immediately from (5.5.8) and (5.5.9). Further, using (5.5.9) it is not difficult to verify that
$$z_k\,\mathrm{TS}_{\mathbf{v}^{\circ}}(\mathbf{t}) = \frac{k!\,(d+1-k)!}{(d+1)!}\,\mathrm{TS}_{\mathbf{v}^{\circ}+\mathbf{v}^k}(\mathbf{t}), \qquad 1\le k\le d,$$
from which the values of $P_0$ and $P_{e_k}$ are obtained readily. If $\alpha$ has a component $\alpha_i=-1$ then, by (5.5.5), $k_i(\alpha) = k_{i+1}(\alpha)-(d+1)$. A quick computation shows that then $k_i(\alpha)+v^{\circ}_i = k_{i+1}(\alpha)+v^{\circ}_{i+1}$, which implies that $\mathbf{k}+\mathbf{v}^{\circ}\notin\Lambda^{\circ}$, so that $\mathrm{TS}_{\mathbf{k}+\mathbf{v}^{\circ}}(\mathbf{t})=0$ and $U_{\alpha}(z)=0$.
By (5.5.6) together with $\bar z_k = z_{d-k+1}$ we can derive a sequence of real orthogonal polynomials from either $T_{\alpha}$ or $U_{\alpha}$.
5.6 Sobolev Orthogonal Polynomials on the Unit Ball
Sobolev orthogonal polynomials are orthogonal with respect to an inner product that involves derivatives. In this section we consider two types of Sobolev orthogonal polynomials on the unit ball $B^d$.

5.6.1 Sobolev orthogonal polynomials defined via the gradient operator
In this subsection we consider the inner product defined by
$$\langle f,g\rangle_{\nabla} := \frac{\lambda}{\omega_d}\int_{B^d}\nabla f(x)\cdot\nabla g(x)\,\mathrm{d}x + \frac{1}{\omega_d}\int_{S^{d-1}} f(x)g(x)\,\mathrm{d}\omega(x),$$ (5.6.1)
where $\lambda$ is a fixed positive number and $x\cdot y$ stands for the inner product of the vectors $x$ and $y$. Since the positive definiteness of the bilinear form (5.6.1) can be easily verified, it is an inner product and consequently polynomials orthogonal with respect to $\langle f,g\rangle_{\nabla}$ exist. Let $\mathcal{V}_n^d(\nabla)$ denote the space of orthogonal polynomials of degree $n$ with respect to (5.6.1).

In analogy to (5.2.4), a basis of $\mathcal{V}_n^d(\nabla)$ can be given in terms of polynomials of the form
$$Q^n_{j,\ell}(x) := q_j(2\|x\|^2-1)\,Y_{\ell}^{n-2j}(x), \qquad 0\le 2j\le n,$$ (5.6.2)
where $\{Y_{\ell}^{n-2j} : 1\le\ell\le a^d_{n-2j}\}$, with $a^d_m := \dim\mathcal{H}^d_m$, is an orthonormal basis of $\mathcal{H}^d_{n-2j}$ and $q_j$ is a polynomial of degree $j$ in one variable. We need the following lemma.
Lemma 5.6.1 Let $Q^n_{j,\ell}$ be defined as above. Then
$$\Delta\big[(1-\|x\|^2)Q^n_{j,\ell}(x)\big] = 4\big(J_{\mu}q_j\big)(2r^2-1)\,Y_{\ell}^{n-2j}(x),$$
where $\Delta$ is the Laplacian operator, $\mu = n-2j+\frac{d-2}{2}$ and
$$(J_{\mu}q_j)(s) = (1-s^2)q_j''(s) + \big[\mu-1-(\mu+3)s\big]q_j'(s) - (\mu+1)q_j(s).$$
Proof Using spherical polar coordinates, it follows from (4.1.4) and (4.1.5) that
$$\Delta\big[(1-\|x\|^2)Q^n_{j,\ell}(x)\big] = \Delta\big[(1-r^2)q_j(2r^2-1)\,r^{n-2j}Y_{\ell}^{n-2j}(x')\big]$$
$$= 4r^{n-2j}\big\{4r^2(1-r^2)q_j''(2r^2-1) + 2\big[(\mu+1)-(\mu+3)r^2\big]q_j'(2r^2-1) - (\mu+1)q_j(2r^2-1)\big\}\,Y_{\ell}^{n-2j}(x').$$
Setting $s=2r^2-1$ gives the stated result.

It is interesting that $\lambda$ does not appear in the $Q^n_{j,\ell}$.
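Because the computation above is easy to get wrong by a constant, the following sketch (ours, not from the text) checks Lemma 5.6.1 numerically for $d=3$, $n=4$, $j=1$, with the solid harmonic $Y(x)=x_1x_2$ and $q(s)=s$; the Laplacian is approximated by central differences.

```python
d, n, jj = 3, 4, 1
mu = n - 2*jj + (d - 2)/2            # μ = n - 2j + (d-2)/2 = 2.5

def f(x):
    r2 = sum(c*c for c in x)
    return (1 - r2) * (2*r2 - 1) * x[0]*x[1]   # (1-||x||^2) q(2r^2-1) Y(x)

def laplacian(g, x, h=1e-3):
    # second-order central-difference approximation of the Laplacian
    out = 0.0
    for i in range(len(x)):
        xp = list(x); xm = list(x)
        xp[i] += h; xm[i] -= h
        out += (g(xp) - 2*g(x) + g(xm)) / h**2
    return out

x = [0.2, 0.1, -0.3]
s = 2*sum(c*c for c in x) - 1
# for q(s) = s the lemma gives 4 (J_mu q)(s) Y(x) with q'' = 0, q' = 1
rhs = 4 * ((mu - 1 - (mu + 3)*s) - (mu + 1)*s) * x[0]*x[1]
assert abs(laplacian(f, x) - rhs) < 1e-5
```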
Theorem 5.6.2 For $0\le j\le n/2$, let $\{Y_{\ell}^{n-2j} : 1\le\ell\le a^d_{n-2j}\}$ be an orthonormal basis of $\mathcal{H}^d_{n-2j}$. Then a mutually orthogonal basis $\{Q^n_{j,\ell} : 0\le j\le\frac n2,\ 1\le\ell\le a^d_{n-2j}\}$ for $\mathcal{V}^d_n(\nabla)$ is given by
$$Q^n_{0,\ell}(x) = Y_{\ell}^n(x),$$
$$Q^n_{j,\ell}(x) = (1-\|x\|^2)\,P^{(1,\,n-2j+(d-2)/2)}_{j-1}(2\|x\|^2-1)\,Y_{\ell}^{n-2j}(x)$$ (5.6.3)
for $0<j\le n/2$. Furthermore, the norms of these polynomials are
$$\langle Q^n_{0,\ell},Q^n_{0,\ell}\rangle_{\nabla} = \lambda n+1, \qquad \langle Q^n_{j,\ell},Q^n_{j,\ell}\rangle_{\nabla} = \frac{2\lambda j^2}{n+\frac{d-2}{2}}.$$ (5.6.4)
Proof As in Proposition 5.2.1, we only need to prove the orthogonality. We start with Green's identity
$$\int_{B^d}\nabla f(x)\cdot\nabla g(x)\,\mathrm{d}x = \int_{S^{d-1}} f(x)\frac{\mathrm{d}}{\mathrm{d}r}g(x)\,\mathrm{d}\omega - \int_{B^d} f(x)\Delta g(x)\,\mathrm{d}x,$$
where $\mathrm{d}/\mathrm{d}r$ is the normal derivative, which coincides with the derivative in the radial direction. The above identity can be used to rewrite the inner product $\langle\cdot,\cdot\rangle_{\nabla}$ as
$$\langle f,g\rangle_{\nabla} = \frac{1}{\omega_d}\int_{S^{d-1}} f(x)\Big[\lambda\frac{\mathrm{d}}{\mathrm{d}r}g(x)+g(x)\Big]\,\mathrm{d}\omega - \frac{\lambda}{\omega_d}\int_{B^d} f(x)\Delta g(x)\,\mathrm{d}x.$$
First we consider the case $j=0$, that is, the orthogonality of $Q^n_{0,\ell}=Y_{\ell}^n$. Setting $j=0$ in (5.2.4) shows that $Y_{\ell}^n$ is an orthogonal polynomial in $\mathcal{V}^d_n(W^B_{\mu})$. Since
$$Q^n_{j,\ell}(x)\big|_{r=1}=0, \qquad \frac{\mathrm{d}}{\mathrm{d}r}Q^n_{j,\ell}(x)\Big|_{r=1} = -2P^{(1,\,n-2j+(d-2)/2)}_{j-1}(1)\,Y_{\ell}^{n-2j}(x')$$
and $\Delta Q^m_{j,\ell}\in\Pi^d_{m-2}$, it follows from the above expression for $\langle f,g\rangle_{\nabla}$ that, for $m\le n$, $j>0$ and $1\le\ell'\le a^d_{m-2j}$,
$$\langle Q^n_{0,\ell},Q^m_{j,\ell'}\rangle_{\nabla} = -2P^{(1,\,m-2j+(d-2)/2)}_{j-1}(1)\,\frac{\lambda}{\omega_d}\int_{S^{d-1}} Y_{\ell}^n(x')Y_{\ell'}^{m-2j}(x')\,\mathrm{d}\omega(x') = 0.$$
Furthermore, using the fact that $(\mathrm{d}/\mathrm{d}r)Y_{\ell}^n(x)\big|_{r=1} = nY_{\ell}^n(x')$, the same consideration shows that
$$\langle Q^n_{0,\ell},Q^m_{0,\ell}\rangle_{\nabla} = (\lambda n+1)\,\frac{1}{\omega_d}\int_{S^{d-1}}\big[Y_{\ell}^n(x')\big]^2\,\mathrm{d}\omega(x')\,\delta_{n,m} = (\lambda n+1)\,\delta_{n,m}.$$
Next we consider $Q^n_{j,\ell}$ for $j\ge 1$. In this case $Q^n_{j,\ell}(x)\big|_{r=1}=0$ since it contains the factor $1-\|x\|^2$, which is zero on $S^{d-1}$. Consequently, the first term in
$$\langle Q^n_{j,\ell},Q^m_{l,\ell'}\rangle_{\nabla} = \frac{1}{\omega_d}\int_{S^{d-1}} Q^n_{j,\ell}(x')\Big[\lambda\frac{\mathrm{d}}{\mathrm{d}r}Q^m_{l,\ell'}+Q^m_{l,\ell'}\Big](x')\,\mathrm{d}\omega(x') - \frac{\lambda}{\omega_d}\int_{B^d} Q^n_{j,\ell}(x)\Delta Q^m_{l,\ell'}(x)\,\mathrm{d}x$$
is zero. For the second term, we use Lemma 5.6.1 to derive a formula for $\Delta Q^n_{j,\ell}$.
The formula in the lemma gives
$$\Delta Q^n_{j,\ell}(x) = 4\big(J_{\mu}P^{(1,\mu)}_{j-1}\big)(2r^2-1)\,Y_{\ell}^{n-2j}(x), \qquad \mu = n-2j+\tfrac{d-2}{2}.$$
On the other hand, the differential equation satisfied by the Jacobi polynomial becomes, for $P^{(1,\mu)}_{j-1}$,
$$(1-s^2)y'' + \big[\mu-1-(\mu+3)s\big]y' + (j-1)(j+\mu+1)y = 0,$$
which implies that $(J_{\mu}P^{(1,\mu)}_{j-1})(s) = -j(j+\mu)P^{(1,\mu)}_{j-1}(s)$. Consequently, denoting by $P^n_{j,\ell}(W^B_{\mu};x)$ the polynomials in (5.2.4) without their normalization constant, we obtain
$$\Delta Q^n_{j,\ell}(x) = -4j(j+\mu)\,P^{(1,\mu)}_{j-1}(2r^2-1)\,Y_{\ell}^{n-2j}(x) = -4j\big(n-j+\tfrac{d-2}{2}\big)\,P^{n-2}_{j-1,\ell}(W_{3/2};x).$$ (5.6.5)
Hence, using the fact that $Q^m_{l,\ell'}(x) = (1-\|x\|^2)P^{m-2}_{l-1,\ell'}(W_{3/2};x)$, we derive from (5.6.5) that
$$\int_{B^d} Q^m_{l,\ell'}(x)\Delta Q^n_{j,\ell}(x)\,\mathrm{d}x = -4j\big(n-j+\tfrac{d-2}{2}\big)\int_{B^d} P^{m-2}_{l-1,\ell'}(W_{3/2};x)\,P^{n-2}_{j-1,\ell}(W_{3/2};x)\,(1-\|x\|^2)\,\mathrm{d}x$$
$$= -4j\big(n-j+\tfrac{d-2}{2}\big)\int_{B^d}\big[P^{n-2}_{j-1,\ell}(W_{3/2};x)\big]^2(1-\|x\|^2)\,\mathrm{d}x\;\delta_{n,m}\,\delta_{j,l}\,\delta_{\ell,\ell'}.$$
Using the norm of $P^n_{j,\ell}(W^B_{\mu};x)$ computed in the proof of Proposition 5.2.1, we can then verify (5.6.4) for $j\ge 1$.
From the explicit form of the basis (5.6.3) it follows that $Q^n_{j,\ell}$ is related to polynomials orthogonal with respect to $W_{3/2}(x) = 1-\|x\|^2$. In fact, we have
$$Q^n_{j,\ell}(x) = (1-\|x\|^2)\,P^{n-2}_{j-1,\ell}(W_{3/2};x), \qquad j\ge 1,$$
which has already been used in the above proof. An immediate consequence is the following corollary.

Corollary 5.6.3 For $n\ge 1$,
$$\mathcal{V}^d_n(\nabla) = \mathcal{H}^d_n \oplus (1-\|x\|^2)\,\mathcal{V}^d_{n-2}(W_{3/2}).$$
Recall that the classical orthogonal polynomials in $\mathcal{V}^d_n(W^B_{\mu})$ are eigenfunctions of a second-order differential operator $D_{\mu}$ for $\mu>-\tfrac12$; see (5.2.3). Using the relation (5.6.5), we can deduce the following result.

Corollary 5.6.4 The polynomials $P$ in $\mathcal{V}^d_n(\nabla)$ are eigenfunctions of $D_{-1/2}$, and
$$D_{-1/2}P = -(n+d)(n-2)P \qquad \forall P\in\mathcal{V}^d_n(\nabla).$$
5.6.2 Sobolev orthogonal polynomials defined via the Laplacian operator
We consider the inner product defined by
$$\langle f,g\rangle_{\Delta} := a_d\int_{B^d}\Delta\big[(1-\|x\|^2)f(x)\big]\,\Delta\big[(1-\|x\|^2)g(x)\big]\,\mathrm{d}x,$$
where $a_d = (4d\sigma_{d-1})^{-1}$, so that $\langle 1,1\rangle_{\Delta}=1$. To see that it is an inner product, we need to show only that $\langle f,f\rangle_{\Delta}>0$ if $f\ne 0$. However, if $\Delta[(1-\|x\|^2)f(x)] = 0$ then $(1-\|x\|^2)f(x)$ is a solution of the Dirichlet problem with zero boundary condition for the Laplace operator on the unit ball, so that by the uniqueness of the Dirichlet problem $f$ must be the zero function. Let $\mathcal{V}^d_n(\Delta)$ denote the space of orthogonal polynomials of degree $n$ with respect to $\langle\cdot,\cdot\rangle_{\Delta}$.
As in the case of $\langle\cdot,\cdot\rangle_{\nabla}$, a basis of $\mathcal{V}^d_n(\Delta)$ can be given in terms of polynomials of the form
$$R^n_{j,\ell}(x) := q_j(2\|x\|^2-1)\,Y_{\ell}^{n-2j}(x), \qquad 0\le 2j\le n,$$ (5.6.6)
where $\{Y_{\ell}^{n-2j} : 1\le\ell\le a^d_{n-2j}\}$, with $a^d_m := \dim\mathcal{H}^d_m$, is an orthonormal basis of $\mathcal{H}^d_{n-2j}$ and $q_j$ is a polynomial of degree $j$ in one variable. We need the following lemma, in which $J_{\mu}$ is defined as in Lemma 5.6.1.
Lemma 5.6.5 Let $\mu>-1$. The polynomials $p^{\mu}_j$ defined by
$$p^{\mu}_0(s) = 1, \qquad p^{\mu}_j(s) = (1-s)P^{(2,\mu)}_{j-1}(s), \quad j\ge 1,$$
are orthogonal with respect to the inner product $(f,g)_{\mu}$ defined by
$$(f,g)_{\mu} := \int_{-1}^1 (J_{\mu}f)(s)(J_{\mu}g)(s)(1+s)^{\mu}\,\mathrm{d}s, \qquad \mu>-1.$$
Proof The three-term relation of the Jacobi polynomials gives
$$(1-s)P^{(2,\mu)}_{j-1}(s) = \frac{2}{2j+\mu+1}\Big[(j+1)P^{(1,\mu)}_{j-1}(s) - jP^{(1,\mu)}_j(s)\Big]$$
and the differential equation satisfied by $P^{(1,\mu)}_{j-1}$ is
$$(1-s^2)y'' + \big[\mu-1-(\mu+3)s\big]y' + (j-1)(j+\mu+1)y = 0.$$
Using these two facts, we easily deduce that
$$\tfrac12(2j+\mu+1)\,J_{\mu}\big[(1-s)P^{(2,\mu)}_{j-1}(s)\big] = (j+1)\,J_{\mu}P^{(1,\mu)}_{j-1}(s) - j\,J_{\mu}P^{(1,\mu)}_j(s)$$
$$= -(j+1)\big[(j-1)(j+\mu+1)+(\mu+1)\big]P^{(1,\mu)}_{j-1}(s) + j\big[j(j+\mu+2)+(\mu+1)\big]P^{(1,\mu)}_j(s)$$
$$= -j(j+1)\Big[(j+\mu)P^{(1,\mu)}_{j-1}(s) - (j+\mu+1)P^{(1,\mu)}_j(s)\Big].$$
We need one more formula for the Jacobi polynomials (p. 782, formula (22.7.8), in Abramowitz and Stegun [1970]):
$$(2j+\mu+1)P^{(0,\mu)}_j(s) = (j+\mu+1)P^{(1,\mu)}_j(s) - (j+\mu)P^{(1,\mu)}_{j-1}(s),$$
which implies immediately that
$$J_{\mu}\big[(1-s)P^{(2,\mu)}_{j-1}(s)\big] = 2j(j+1)P^{(0,\mu)}_j(s).$$
Hence, for $j,j'\ge 1$, we conclude that
$$(p^{\mu}_j,p^{\mu}_{j'})_{\mu} = \int_{-1}^1 J_{\mu}\big[(1-s)P^{(2,\mu)}_{j-1}(s)\big]\,J_{\mu}\big[(1-s)P^{(2,\mu)}_{j'-1}(s)\big]\,(1+s)^{\mu}\,\mathrm{d}s$$
$$= 4j(j+1)j'(j'+1)\int_{-1}^1 P^{(0,\mu)}_j(s)P^{(0,\mu)}_{j'}(s)(1+s)^{\mu}\,\mathrm{d}s = 0$$
whenever $j\ne j'$. Furthermore, for $j\ge 1$ we have
$$(p^{\mu}_0,p^{\mu}_j)_{\mu} = -2j(j+1)(\mu+1)\int_{-1}^1 P^{(0,\mu)}_j(s)(1+s)^{\mu}\,\mathrm{d}s = 0,$$
since $(J_{\mu}p^{\mu}_0)(s) = (J_{\mu}1)(s) = -(\mu+1)$.
Theorem 5.6.6 A mutually orthogonal basis for $\mathcal{V}^d_n(\Delta)$ is given by
$$R^n_{0,\ell}(x) = Y_{\ell}^n(x),$$
$$R^n_{j,\ell}(x) = (1-\|x\|^2)\,P^{(2,\,n-2j+(d-2)/2)}_{j-1}(2\|x\|^2-1)\,Y_{\ell}^{n-2j}(x), \qquad 1\le j\le\tfrac n2,$$ (5.6.7)
where $\{Y_{\ell}^{n-2j} : 1\le\ell\le a^d_{n-2j}\}$ is an orthonormal basis of $\mathcal{H}^d_{n-2j}$. Furthermore,
$$\langle R^n_{0,\ell},R^n_{0,\ell}\rangle_{\Delta} = \frac{2n+d}{d}, \qquad \langle R^n_{j,\ell},R^n_{j,\ell}\rangle_{\Delta} = \frac{8j^2(j+1)^2}{d\big(n+\frac d2\big)}.$$ (5.6.8)
Proof First we show that $R^n_{j,\ell}\in\mathcal{V}^d_n(\Delta)$. Let $p^{\mu}_j$ be defined as in the previous lemma. Set $q_j := p^{\mu_{n-2j}}_j$ with $\mu_k = k+(d-2)/2$. Using (5.2.2), Lemma 5.6.1 and the orthonormality of the $Y_{\ell}^{n-2j}$, we obtain
$$\langle R^n_{j,\ell},R^{n'}_{j',\ell'}\rangle_{\Delta} = \delta_{\ell,\ell'}\,\delta_{n-2j,\,n'-2j'}\,\frac{1}{4d}\int_0^1 r^{d+2(n-2j)-1}\,4^2\,\big(J_{\mu_{n-2j}}q_j\big)(2r^2-1)\,\big(J_{\mu_{n'-2j'}}q_{j'}\big)(2r^2-1)\,\mathrm{d}r.$$
The change of variable $r\mapsto\sqrt{\tfrac12(1+s)}$ then shows that
$$\langle R^n_{j,\ell},R^{n'}_{j',\ell'}\rangle_{\Delta} = \delta_{\ell,\ell'}\,\delta_{n-2j,\,n'-2j'}\,\frac{1}{2^{\mu_{n-2j}}d}\,(q_j,q_{j'})_{\mu_{n-2j}},$$ (5.6.9)
which proves the orthogonality by Lemma 5.6.5. To compute the norm of $R^n_{0,\ell}$ we use the fact that
$$\Delta\big[(1-\|x\|^2)Y_{\ell}^n(x)\big] = -2dY_{\ell}^n(x) - 4\langle x,\nabla\rangle Y_{\ell}^n = -2(d+2n)Y_{\ell}^n(x),$$
by Euler's formula on homogeneous polynomials, which shows that
$$\langle R^n_{0,\ell},R^n_{0,\ell}\rangle_{\Delta} = \frac{(2n+d)^2}{\sigma_{d-1}\,d}\int_0^1 r^{d-1+2n}\,\mathrm{d}r\int_{S^{d-1}}\big[Y_{\ell}^n(x')\big]^2\,\mathrm{d}\omega = \frac{2n+d}{d}.$$
Furthermore, using equation (5.6.9), the proof of Lemma 5.6.5 shows that
$$\langle R^n_{j,\ell},R^n_{j,\ell}\rangle_{\Delta} = \frac{1}{2^{\mu_{n-2j}}d}\,(p_j,p_j)_{\mu_{n-2j}} = \frac{4j^2(j+1)^2}{2^{\mu_{n-2j}}d}\int_{-1}^1\big[P^{(0,\mu_{n-2j})}_j(s)\big]^2(1+s)^{\mu_{n-2j}}\,\mathrm{d}s$$
$$= \frac{8j^2(j+1)^2}{(\mu_{n-2j}+2j+1)d} = \frac{8j^2(j+1)^2}{\big(n+\frac d2\big)d},$$
using the expression for the norm of the Jacobi polynomial.
From the explicit formula for the basis (5.6.7) it follows that $R^n_{j,\ell}$ is related to the polynomials orthogonal with respect to $W_{5/2}(x) = (1-\|x\|^2)^2$ on $B^d$. In fact we have
$$R^n_{j,\ell}(x) = (1-\|x\|^2)\,P^{n-2}_{j-1,\ell}(W_{5/2};x), \qquad j\ge 1,$$
which leads to the following corollary.

Corollary 5.6.7 For $n\ge 1$,
$$\mathcal{V}^d_n(\Delta) = \mathcal{H}^d_n \oplus (1-\|x\|^2)\,\mathcal{V}^d_{n-2}(W_{5/2}).$$
This corollary should be compared with Corollary 5.6.3. The polynomials in $\mathcal{V}^d_n(\Delta)$, however, are not all eigenfunctions of a second-order differential operator.
5.7 Notes
By the classical orthogonal polynomials we mean the polynomials that are orthogonal with respect to the Jacobi weight functions on the cube, ball or simplex in $\mathbb{R}^d$, the Laguerre weight function on $\mathbb{R}^d_+$, and the Hermite weight on $\mathbb{R}^d$. These polynomials are among the first that were studied.

The study of the Hermite polynomials of several variables was begun by Hermite. He was followed by many other authors; see Appell and de Fériet [1926] and Chapter XII, Vol. II, of Erdélyi et al. [1953]. Analogues of the Hermite polynomials for $W^H$ can be defined more generally for the weight function
$$W(x) = (\det A)^{1/2}\pi^{-d/2}\exp\big(-x^{\mathsf{T}}Ax\big),$$ (5.7.1)
where $A$ is a positive definite matrix. Since $A$ is positive definite it can be written as $A=B^{\mathsf{T}}B$. Thus the orthogonal polynomials for $W$ in (5.7.1) can be derived from the Hermite polynomials for $W^H$ by a change of variables. One interesting result for such generalized Hermite polynomials is that two families of generalized Hermite polynomials, defined with respect to the matrices $A$ and $A^{-1}$, can be biorthogonal.
For the history of the orthogonal polynomials $V_{\alpha}$ and $U_{\alpha}$ on $B^d$, we refer to Chapter XII, Vol. II of Erdélyi et al. [1953]. These polynomials were studied in detail in Appell and de Fériet [1926]. They are sometimes used to derive explicit formulae for Fourier expansions. Rosier [1995] used them to study Radon transforms. Orthogonal bases for ridge polynomials were discussed in Xu [2000b], together with a Funk–Hecke type formula for orthogonal polynomials. Compact formulae (5.2.7) for the reproducing kernels were proved in Xu [1999a] and used to study expansion problems. The formula (5.2.8) was proved in Petrushev [1999] in the context of approximation by ridge functions and in Xu [2007] in connection with Radon transforms. The orthogonal polynomials for the unit weight function on $B^d$ were used in Maiorov [1999] to study questions related to neural networks.
The orthogonal polynomials $U_{\alpha}$ on $T^d$ were defined in Appell and de Fériet [1926] for $d=2$. The polynomials $V_{\alpha}$ on $T^d$ are extensions of those for the unit weight function on $T^d$ defined in Grundmann and Möller [1978]. The formula (5.3.5) for the reproducing kernel appeared in Xu [1998d]. A product formula for orthogonal polynomials on the simplex was established in Koornwinder and Schwartz [1997]. The orthogonal polynomials on the simplex also appeared in the work of Griffiths [1979], Griffiths and Spanò [2011, 2013] and Rosengren [1998, 1999]. A basis for the orthogonal polynomials on the simplex can be given in terms of Bernstein polynomials, as shown in Farouki, Goodman and Sauer [2003] and Waldron [2006]. Further properties of the Rodrigues formulae on the simplex were given in Aktaş and Xu [2013]. A probability interpretation of these and other related polynomials was given in Griffiths and Spanò [2011].
The orthogonal polynomials defined via symmetric functions were studied in Berens, Schmid and Xu [1995b]. The family of Gaussian cubature formulae based on their common zeros is the first family for all $n$ and in all dimensions to be discovered. The polynomials $E_n(x)$ were studied in Karlin and McGregor [1962, 1975].
The Chebyshev polynomials of type $A_d$ were studied systematically by Beerends [1991]. They are generalizations of the Koornwinder polynomials for $d=2$ and of the partial results in Eier and Lidl [1974, 1982], Dunn and Lidl [1982], Ricci [1978] and Bacry [1984]. We have followed the presentation in Li and Xu [2010], who studied these polynomials from the point of view of tiling and discrete Fourier analysis and also studied their common zeros. It turned out that the set of orthogonal polynomials $\{U_{\alpha} : |\alpha|=n\}$ of degree $n$ has $\dim\Pi^d_{n-1}$ distinct real common zeros in $\triangle^*$, so that Gaussian cubature formulae exist for $W_{1/2}$ on $\triangle^*$ by Theorem 3.8.4. Gaussian cubature, however, does not exist for $W_{-1/2}$. For further results and references, we refer to Beerends [1991] and Li and Xu [2010].
The symmetric orthogonal polynomials associated with $A_d$ are related to the $BC_n$-type orthogonal polynomials in several variables; see, for example, Vretare [1984], Beerends and Opdam [1993] and van Diejen [1999]. They are special cases of the Jacobi polynomials for root systems studied in Opdam [1988]. The Chebyshev polynomials in the form of symmetric trigonometric orthogonal polynomials have also been studied recently for compact simple Lie groups; see Nesterenko, Patera, Szajewska and Tereszkiewicz [2010] and Moody and Patera [2011], and the references therein, but only a root system of the $A_d$ type leads to a full basis of algebraic orthogonal polynomials.
For the Sobolev orthogonal polynomials on the unit ball, the orthogonal basis for the inner product $\langle\cdot,\cdot\rangle_{\nabla}$ was constructed in Xu [2006b], in response to a problem in the numerical solution of a Poisson equation on the disk raised by Atkinson and Hansen [2005]. The orthogonal basis for $\langle\cdot,\cdot\rangle_{\Delta}$ was constructed in Xu [2008], in which an orthogonal basis was also constructed for a second inner product defined via the gradient,
$$\langle f,g\rangle_{\nabla,\mathrm{II}} := \frac{1}{\omega_d}\int_{B^d}\nabla f(x)\cdot\nabla g(x)\,\mathrm{d}x + f(0)g(0).$$
As shown in Corollary 5.6.4, orthogonal polynomials with respect to $\langle\cdot,\cdot\rangle_{\nabla}$ are eigenfunctions of the differential operator $D_{\mu}$ in the limiting case $\mu=-\tfrac12$. Eigenfunctions of $D_{\mu}$ for further singular cases, $\mu = -\tfrac32, -\tfrac52, \dots$, were studied in Piñar and Xu [2009].

Despite extensive research into the Sobolev orthogonal polynomials in one variable, study of the Sobolev polynomials in several variables started only recently. We refer to Lee and Littlejohn [2005], Bracciali, Delgado, Fernández, Pérez and Piñar [2010], Aktaş and Xu [2013], and Pérez, Piñar and Xu [2013].
Besides the unit ball, the only other case that has been studied carefully is that of the simplex. Let $D_{\kappa}$ denote the differential operator on the right-hand side of (5.3.4). Then the orthogonal polynomials in $\mathcal{V}^d_n(W_{\kappa})$ on the simplex are eigenfunctions of $D_{\kappa}$ for $\kappa_i\ge-\tfrac12$, $1\le i\le d$. The limiting cases where some, or all, $\kappa_i=-1$ are studied in Aktaş and Xu [2013]. In each case a complete basis was found for the differential equation and an inner product of the Sobolev type was constructed such that the eigenfunctions are orthogonal with respect to the inner product.
6 Root Systems and Coxeter Groups

There is a far-reaching extension of classical orthogonal polynomials that uses finite reflection groups. This chapter presents the part of the theory needed for our analysis. We refer to the books by Coxeter [1973], Grove and Benson [1985] and Humphreys [1990] for the algebraic structure theorems. We will begin with the orthogonal groups, definitions of reflections and root systems and descriptions of the infinite families of finite reflection groups. A key part of the chapter is the definition and fundamental theorems for the differential–difference (Dunkl) operators.
6.1 Introduction and Overview
For $x,y\in\mathbb{R}^d$ the inner product is $\langle x,y\rangle = \sum_{j=1}^d x_jy_j$ and the norm is $\|x\| = \langle x,x\rangle^{1/2}$. A matrix $w = (w_{ij})_{i,j=1}^d$ is called orthogonal if $ww^{\mathsf{T}} = I_d$, where $w^{\mathsf{T}}$ denotes the transpose of $w$ and $I_d$ is the $d\times d$ identity matrix. Equivalent conditions for orthogonality are the following:

1. $w$ is invertible and $w^{-1} = w^{\mathsf{T}}$;
2. for each $x\in\mathbb{R}^d$, $\|x\| = \|xw\|$;
3. for each $x,y\in\mathbb{R}^d$, $\langle x,y\rangle = \langle xw,yw\rangle$;
4. the rows of $w$ form an orthonormal basis for $\mathbb{R}^d$.

The set of orthogonal matrices is closed under multiplication and inverses (by condition (2), for example) and forms the orthogonal group, denoted $O(d)$. Condition (4) shows that $O(d)$ is a closed bounded subset of all $d\times d$ matrices and hence is a compact group. If $w\in O(d)$ then $\det w = \pm1$. The subgroup $SO(d) = \{w\in O(d) : \det w = 1\}$ is called the special orthogonal group.
Definition 6.1.1 The right regular representation of $O(d)$ is the homomorphism $w\mapsto R(w)$ of linear maps of $\Pi^d$ to itself (endomorphisms), given by $R(w)p(x) = p(xw)$ for all $x\in\mathbb{R}^d$, $p\in\Pi^d$.

Further, $R(w)$ is an automorphism of $\mathcal{P}^d_n$ for each $n$, $w\in O(d)$. Note that $R(w_1w_2) = R(w_1)R(w_2)$ (by the homomorphism property of $R$). The Laplacian $\Delta$ commutes with each $w\in O(d)$, that is, $\Delta(R(w)p)(x) = (\Delta p)(xw)$ for any $x\in\mathbb{R}^d$.
In the present context the basic tool for constructing orthogonal transformations is the reflection.

Definition 6.1.2 For a nonzero $u\in\mathbb{R}^d$, the reflection along $u$, denoted by $\sigma_u$, is defined by
$$x\sigma_u = x - 2\frac{\langle x,u\rangle}{\|u\|^2}\,u.$$

Writing $\sigma_u = I_d - 2\big(uu^{\mathsf{T}}\big)^{-1}u^{\mathsf{T}}u$ shows that $\sigma_u = \sigma_u^{\mathsf{T}}$ and $\sigma_u\sigma_u = I_d$ (note that $uu^{\mathsf{T}} = \|u\|^2$ while $u^{\mathsf{T}}u$ is a matrix). The matrix entries of $\sigma_u$ are $(\sigma_u)_{ij} = \delta_{ij} - 2u_iu_j/\|u\|^2$. It is clear that $x\sigma_u = x$ exactly when $\langle x,u\rangle = 0$, that is, the invariant set for $\sigma_u$ is the hyperplane $u^{\perp} = \{x : \langle x,u\rangle = 0\}$. Also, $u\sigma_u = -u$ and any nonzero multiple of $u$ determines the same reflection. Since $\sigma_u$ has one eigenvector for the eigenvalue $-1$ and $d-1$ independent eigenvectors for the eigenvalue $+1$ (any basis for $u^{\perp}$), it follows that $\det\sigma_u = -1$.
Proposition 6.1.3 Suppose that $x^{(i)},y^{(i)}\in\mathbb{R}^d$, $\|x^{(i)}\|^2 = \|y^{(i)}\|^2 = 1$ for $i=1,2$, and $\langle x^{(1)},y^{(1)}\rangle = \langle x^{(2)},y^{(2)}\rangle$; then there is a product $w$ of reflections such that $x^{(1)}w = x^{(2)}$ and $y^{(1)}w = y^{(2)}$.
Proof If $x^{(1)} = x^{(2)}$ then put $y^{(3)} = y^{(1)}$; else let $u = x^{(1)}-x^{(2)}\ne 0$, so that $x^{(1)}\sigma_u = x^{(2)}$, and let $y^{(3)} = y^{(1)}\sigma_u$. If $y^{(3)} = y^{(2)}$ then the construction is finished. Otherwise, let $v = y^{(3)}-y^{(2)}$; then $y^{(3)}\sigma_v = y^{(2)}$ and $x^{(2)}\sigma_v = x^{(2)}$ since $\langle x^{(2)},v\rangle = \langle x^{(2)},y^{(3)}\rangle - \langle x^{(2)},y^{(2)}\rangle = \langle x^{(1)},y^{(1)}\rangle - \langle x^{(2)},y^{(2)}\rangle = 0$. One of $\sigma_u$, $\sigma_v$ and $\sigma_u\sigma_v$ is the desired product.
The following is crucial for analyzing groups generated by reflections.

Proposition 6.1.4 Suppose that $u,v$ are linearly independent in $\mathbb{R}^d$, and set $\cos\theta = \langle u,v\rangle/(\|u\|\|v\|)$; then $\sigma_u\sigma_v$ is a plane rotation in $\operatorname{span}\{u,v\}$ through an angle $2\theta$.

Proof Assume that $\|u\| = \|v\| = 1$; thus $\cos\theta = \langle u,v\rangle$ and $\|v-\langle u,v\rangle u\| = \sin\theta$, where $0<\theta<\pi$. Let $v' = (\sin\theta)^{-1}(v-\langle u,v\rangle u)$, so that $\{u,v'\}$ is an orthonormal basis for $\operatorname{span}\{u,v\}$. With respect to this basis $\sigma_u$, $\sigma_v$ and $\sigma_u\sigma_v$ have the matrix representations
$$\sigma_u = \begin{pmatrix} -1 & 0\\ 0 & 1\end{pmatrix}, \qquad \sigma_v = \begin{pmatrix} -\cos2\theta & -\sin2\theta\\ -\sin2\theta & \cos2\theta\end{pmatrix}, \qquad \sigma_u\sigma_v = \begin{pmatrix} \cos2\theta & \sin2\theta\\ -\sin2\theta & \cos2\theta\end{pmatrix},$$
and $\sigma_u\sigma_v$ is a rotation.
For two nonzero vectors $u,v$, denote $\cos\phi(u,v) = \langle u,v\rangle/(\|u\|\|v\|)$. Consequently, for a given $m = 1,2,3,\dots$, $(\sigma_u\sigma_v)^m = I_d$ if and only if $\cos\phi(u,v) = \cos(j\pi/m)$ for some integer $j$. Since $(\sigma_u\sigma_v)^{-1} = \sigma_v\sigma_u$ for any two reflections $\sigma_u$ and $\sigma_v$, we see that $\sigma_u$ and $\sigma_v$ commute if and only if $\langle u,v\rangle = 0$. The conjugate of a reflection is also a reflection:

Lemma 6.1.5 Let $u\in\mathbb{R}^d$, $u\ne 0$, and let $w\in O(d)$; then $w^{-1}\sigma_uw = \sigma_{uw}$.
Proof For $x\in\mathbb{R}^d$,
$$xw^{-1}\sigma_uw = x - \frac{2\langle xw^{-1},u\rangle}{\|u\|^2}\,uw = x - \frac{2\langle x,uw\rangle}{\|u\|^2}\,uw.$$
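The lemma is easy to confirm numerically; the sketch below (ours) takes $d=2$, a rotation $w$ and $u=(0.6,-0.8)$:

```python
import math

def reflection(u):
    n2 = u[0]*u[0] + u[1]*u[1]
    return [[(1.0 if i == k else 0.0) - 2*u[i]*u[k]/n2 for k in range(2)]
            for i in range(2)]

def matmul(a, b):
    return [[sum(a[i][k]*b[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

phi = 0.4
w = [[math.cos(phi), math.sin(phi)], [-math.sin(phi), math.cos(phi)]]
winv = [[math.cos(phi), -math.sin(phi)], [math.sin(phi), math.cos(phi)]]  # w^T = w^{-1}
u = (0.6, -0.8)
uw = (u[0]*w[0][0] + u[1]*w[1][0], u[0]*w[0][1] + u[1]*w[1][1])           # row vector u times w
lhs = matmul(winv, matmul(reflection(u), w))                              # w^{-1} sigma_u w
rhs = reflection(uw)                                                      # sigma_{uw}
assert all(abs(lhs[i][j] - rhs[i][j]) < 1e-12 for i in range(2) for j in range(2))
```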
6.2 Root Systems
The idea of a root system seems very simple, yet in fact it is a remarkably deep concept, with many ramifications in algebra and analysis.

Definition 6.2.1 A root system is a finite set $R$ of nonzero vectors in $\mathbb{R}^d$ such that $u,v\in R$ implies that $u\sigma_v\in R$. If, additionally, $u,v\in R$ and $v = cu$ for some scalar $c\in\mathbb{R}$ implies that $c = \pm1$, then $R$ is said to be reduced.
Clearly $u\in R$ implies that $-u = u\sigma_u\in R$ for any root system. The set $\{u^{\perp} : u\in R\}$ is a finite set of hyperplanes; thus there exists $u_0\in\mathbb{R}^d$ such that $\langle u,u_0\rangle\ne 0$ for all $u\in R$. With respect to $u_0$ define the set of positive roots
$$R_+ = \{u\in R : \langle u,u_0\rangle>0\},$$
so that $R = R_+\cup(-R_+)$.

Definition 6.2.2 The Coxeter group $W = W(R)$ generated by the root system $R$ is the subgroup of $O(d)$ generated by $\{\sigma_u : u\in R\}$.

Note that for the purpose of studying the group $W$ one can replace $R$ by a reduced root system (for example $\{u/\|u\| : u\in R\}$). There is a very useful polynomial associated with $R$.
Definition 6.2.3 For any reduced root system $R$, the discriminant, or alternating polynomial, is given by
$$a_R(x) = \prod_{u\in R_+}\langle x,u\rangle.$$

Theorem 6.2.4 For $v\in R$ and $w\in W(R)$,
$$a_R(x\sigma_v) = -a_R(x) \quad\text{and}\quad a_R(xw) = \det w\;a_R(x).$$
Proof Assume that $v\in R_+$; by definition $R_+\setminus\{v\}$ is a disjoint union $E_1\cup E_2$, where $E_1 = \{u\in R_+ : u\sigma_v = u\}$ and $u\in E_2$ implies that $u\sigma_v = c_uu'$ for some $u'\in E_2$ with $u'\ne u$ and $c_u = \pm1$. Thus $u'\sigma_v = c_uu$ and so $\langle x,u\rangle\langle x,u'\rangle$ is invariant under $\sigma_v$ (note that $\langle x\sigma_v,u\rangle = \langle x,u\sigma_v\rangle$ because $\sigma_v^{\mathsf{T}} = \sigma_v$). Then $a_R(x\sigma_v) = -\langle x,v\rangle\,\pi_1\pi_2$, where $\pi_1 = \prod_{u\in E_1}\langle x,u\rangle$ and $\pi_2 = \prod_{u\in E_2}\langle x,u\sigma_v\rangle$. The second product is a permutation, having an even number of sign changes, of $\prod_{u\in E_2}\langle x,u\rangle$. This shows that $a_R(x\sigma_v) = -a_R(x)$. Let $w = \sigma_{u_1}\sigma_{u_2}\cdots\sigma_{u_n}$ be a product of $n$ reflections, where $u_1,\dots,u_n\in R_+$; then $a_R(xw) = (-1)^na_R(x)$. Obviously $\det w = (-1)^n$.
There is an amusing connection with harmonic polynomials: a product $\prod_{v\in E}\langle x,v\rangle$ is harmonic exactly when the elements of $E$ are nonzero multiples of some system $R_+$ of positive roots.
Lemma 6.2.5 Suppose that $u$ is a nonzero vector in $\mathbb{R}^d$ and $p(x)$ is a polynomial such that $p(x\sigma_u) = -p(x)$ for all $x\in\mathbb{R}^d$; then $p(x)$ is divisible by $\langle x,u\rangle$.

Proof The divisibility property is invariant under $O(d)$, so one can assume that $u = (1,0,\dots,0)$. Any polynomial $p$ can be written as $\sum_{j=0}^n x_1^jp_j(x_2,\dots,x_d)$; then $p(x\sigma_u) = \sum_{j=0}^n(-x_1)^jp_j(x_2,\dots,x_d)$. Thus $p(x\sigma_u) = -p(x)$ implies that $p_j = 0$ unless $j$ is odd; hence every term of $p$ contains the factor $x_1 = \langle x,u\rangle$.
Theorem 6.2.6 Suppose $E$ is a finite set of nonzero vectors in $\mathbb{R}^d$; then $\Delta\prod_{v\in E}\langle x,v\rangle = 0$ if and only if there are scalars $c_v$, $v\in E$, such that $\{c_vv : v\in E\} = R_+$ for some reduced root system and such that no vector in $E$ is a scalar multiple of another vector in $E$.
Proof First we show that $\Delta a_R = 0$ for any reduced root system. The polynomial $p = \Delta a_R$ satisfies $p(x\sigma_u) = -p(x)$ for any $u\in R_+$ because $\Delta$ commutes with $R(\sigma_u)$. By Lemma 6.2.5, $p(x)$ is divisible by $\langle x,u\rangle$ for each $u\in R_+$, but $\deg p\le\deg a_R-2$ and thus $p = 0$.

Now suppose that $\Delta p = 0$ for $p(x) = \prod_{v\in E}\langle x,v\rangle$, for a set $E$. Assume that $\|v\| = 1$ for every $v\in E$. Let $u$ be an arbitrary element of $E$. Without loss of generality assume that $u = (1,0,\dots,0)$ and expand $p(x)$ as $\sum_{j=0}^nx_1^jp_j(x_2,\dots,x_d)$ for some $n$. Then $\Delta p(x) = \sum_{j=0}^nx_1^j\big[\Delta p_j+(j+2)(j+1)p_{j+2}\big]$. Since $p$ is a multiple of $\langle x,u\rangle$, $p_0 = 0$. This implies that $p_{2j} = 0$ for each $j = 0,1,2,\dots$ and thus that $p(x)/\langle x,u\rangle$ is even in $x_1$. Further, $p_1\ne 0$ or else $p = 0$; thus $x_1 = \langle x,u\rangle$ is not a repeated factor of $p$. The product $\prod\{\langle x,v\rangle : v\in E,\ v\ne u\}$ is invariant under $\sigma_u$. This shows that the set $\{v\sigma_u : v\in E\}$ has the same elements as $\{v : v\in E\}$ up to multiplication of each by $\pm1$. Hence $E\cup(-E)$ satisfies the definition of a root system.
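For the root system of type $A_2$, with $R_+$ proportional to $\{e_i-e_j : i<j\}$ in $\mathbb{R}^3$, the first part of the proof can be illustrated numerically: $a_R(x)=(x_1-x_2)(x_1-x_3)(x_2-x_3)$ has vanishing Laplacian (our sketch, central differences):

```python
def a(x):
    # alternating polynomial of A_2 in R^3
    return (x[0] - x[1]) * (x[0] - x[2]) * (x[1] - x[2])

def laplacian(g, x, h=1e-4):
    # second-order central-difference approximation of the Laplacian
    out = 0.0
    for i in range(len(x)):
        xp = list(x); xm = list(x)
        xp[i] += h; xm[i] -= h
        out += (g(xp) - 2*g(x) + g(xm)) / h**2
    return out

assert abs(laplacian(a, [0.3, -0.2, 0.9])) < 1e-6
```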
Theorem 6.2.7 For any root system $R$, the group $W(R)$ is finite; if also $R$ is reduced then the set of reflections contained in $W(R)$ is exactly $\{\sigma_u : u\in R_+\}$.

Proof Let $E = \operatorname{span}(R)$ have dimension $r$, so that $W$ can be considered as a subgroup of $O(r)$, and every element of $E^{\perp}$ is invariant under $W$. The definition of a root system shows that $uw\in R$ for every $u\in R$ and $w\in W$. Since $R$ contains a basis for $E$, this shows that any basis element $v$ of $E$ has a finite orbit $\{vw : w\in W\}$; hence $W$ is finite. If $w = \sigma_v\in W$ is a reflection then $\det w = -1$ and so $a_R(xw) = -a_R(x)$. By Lemma 6.2.5, $a_R(x)$ is divisible by $\langle x,v\rangle$. But linear factors are irreducible and the unique factorization theorem shows that some multiple of $v$ is an element of $R$.
Groups of the type $W(R)$ are called finite reflection groups or Coxeter groups. The dimension of $\operatorname{span}(R)$ is called the rank of $R$. If $R$ can be expressed as a disjoint union of non-empty sets $R_1 \cup R_2$ with $\langle u, v\rangle = 0$ for every $u \in R_1$, $v \in R_2$, then each $R_i$ ($i = 1, 2$) is itself a root system and $W(R) = W(R_1) \times W(R_2)$, a direct product. Further, $W(R_1)$ and $W(R_2)$ act on the orthogonal subspaces $\operatorname{span}(R_1)$ and $\operatorname{span}(R_2)$, respectively. In this case the root system $R$ and the reflection group $W(R)$ are called decomposable; otherwise the system and group are indecomposable or irreducible. There is a complete classification of indecomposable finite reflection groups. Some more concepts are now needed for the discussion.
Assume that the rank of $R$ is $d$, that is, $\operatorname{span}(R) = \mathbb{R}^d$. The set of hyperplanes $H = \bigcup\{v^\perp : v \in R_+\}$ divides $\mathbb{R}^d$ into connected open components called chambers. A theorem states that the order of the group equals the number of chambers. Recall that the positive roots are defined in terms of some vector $u_0 \in \mathbb{R}^d$. The connected component of the complement of $H$ which contains $u_0$ is called the fundamental chamber. The roots corresponding to the bounding hyperplanes of this chamber are called simple roots, and they form a basis of $\mathbb{R}^d$. The (simple) reflections corresponding to the simple roots are denoted $s_i$, $i = 1, \ldots, d$. Let $m_{ij}$ be the order of $s_i s_j$ (clearly $m_{ii} = 1$, and $m_{ij} = 2$ if and only if $s_i s_j = s_j s_i$, for $i \ne j$). The following is the fundamental theorem in this topic, proved by H. S. M. Coxeter; see Coxeter and Moser [1965, p. 122].
Theorem 6.2.8 The group $W(R)$ is isomorphic to the abstract group generated by $\{s_i : 1 \le i \le d\}$ subject to the relations $(s_i s_j)^{m_{ij}} = 1$.

The Coxeter diagram is a graphical way of displaying the above relations: it is a graph with $d$ nodes corresponding to the simple reflections; nodes $i$ and $j$ are joined with an edge when $m_{ij} > 2$; an edge is labeled by $m_{ij}$ when $m_{ij} > 3$. The root system is indecomposable if and only if the Coxeter diagram is connected. We proceed to the description of the infinite families of indecomposable root systems and two exceptional cases. The systems are named by a letter and a subscript indicating the rank. It is convenient to introduce the standard basis vectors $\varepsilon_i = (0, \ldots, 0, \overset{i}{1}, 0, \ldots, 0)$ for $\mathbb{R}^d$, $1 \le i \le d$; the superscript $i$ above the element $1$ indicates that it occurs in the $i$th position.
6.2.1 Type $A_{d-1}$

For a type $A_{d-1}$ system, the roots are given by $R = \{v(i,j) = \varepsilon_i - \varepsilon_j : i \ne j\} \subset \mathbb{R}^d$. The span is exactly $(1, 1, \ldots, 1)^\perp$ and thus the rank is $d - 1$. The reflection $\sigma_{ij} = \sigma_{v(i,j)}$ interchanges the components $x_i$ and $x_j$ of each $x \in \mathbb{R}^d$ and is called a transposition, often denoted by $(i, j)$. Thus $W(R)$ is exactly the symmetric or permutation group $S_d$ on $d$ objects. Choose $u_0 = (d, d-1, \ldots, 1)$; then $R_+ = \{v(i,j) = \varepsilon_i - \varepsilon_j : i < j\}$ and the simple roots are $\{\varepsilon_i - \varepsilon_{i+1} : 1 \le i \le d-1\}$. The corresponding reflections are the adjacent transpositions $(i, i+1)$. The general fact about reflection groups, that simple reflections generate $W(R)$, specializes to the well-known statement that adjacent transpositions generate the symmetric group. Since $(i, i+1)(i+1, i+2)$ is a permutation of period 3, the structure constants satisfy $m_{i,i+1} = 3$ and $m_{ij} = 2$ otherwise. The Coxeter diagram is a chain of $d - 1$ nodes with all edges unlabeled:

o---o--- · · · ---o

It is clear that for any two roots $u, v$ there is a permutation $w \in W(R)$ such that $uw = v$; hence any two reflections are conjugate in the group, $w^{-1}\sigma_u w = \sigma_{uw}$. The alternating polynomial is
\[ a_R(x) = \prod_{u\in R_+}\langle x, u\rangle = \prod_{1\le i<j\le d}(x_i - x_j). \]
The fundamental chamber is $\{x : x_1 > x_2 > \cdots > x_d\}$; if desired, the chambers can be restricted to $\operatorname{span}(R) = \{x : \sum_{i=1}^d x_i = 0\}$.
6.2.2 Type $B_d$

The root system for a type $B_d$ system is $R = \{v(i,j) = \varepsilon_i - \varepsilon_j : i \ne j\} \cup \{u(i,j) = \operatorname{sign}(j-i)(\varepsilon_i + \varepsilon_j) : i \ne j\} \cup \{\pm\varepsilon_i\}$. The reflections corresponding to the three different sets of roots are
\[ x\sigma_{ij} = \big(x_1, \ldots, \overset{i}{x_j}, \ldots, \overset{j}{x_i}, \ldots\big), \quad x\tau_{ij} = \big(x_1, \ldots, \overset{i}{-x_j}, \ldots, \overset{j}{-x_i}, \ldots\big), \quad x\sigma_i = \big(x_1, \ldots, \overset{i}{-x_i}, \ldots\big), \]
for $v(i,j)$, $u(i,j)$ and $\varepsilon_i$, respectively. The group $W(R)$ is denoted $W_d$; it is the full symmetry group of the hyperoctahedron $\{\pm\varepsilon_1, \pm\varepsilon_2, \ldots, \pm\varepsilon_d\} \subset \mathbb{R}^d$ (also of the hypercube) and is thus called the hyperoctahedral group. Its elements are exactly the $d \times d$ permutation matrices with entries $\pm1$ (that is, each row and each column has exactly one nonzero element, $\pm1$). With the same $u_0 = (d, d-1, \ldots, 1)$ as in the previous subsection, the positive root system is $R_+ = \{\varepsilon_i - \varepsilon_j,\ \varepsilon_i + \varepsilon_j : i < j\} \cup \{\varepsilon_i : 1 \le i \le d\}$ and the simple roots are to be found in $\{\varepsilon_i - \varepsilon_{i+1} : i < d\} \cup \{\varepsilon_d\}$. The order of $\sigma_{d-1,d}\,\sigma_d$ is 4. The Coxeter diagram is a chain of $d$ nodes with the last edge labeled 4:

o---o--- · · · ---o--4--o

This group has two conjugacy classes of reflections, $\{\sigma_{ij}, \tau_{ij} : i < j\}$ and $\{\sigma_i : 1 \le i \le d\}$. The alternating polynomial is
\[ a_R(x) = \prod_{i=1}^d x_i \prod_{1\le i<j\le d}\big(x_i^2 - x_j^2\big). \]
The fundamental chamber is $\{x : x_1 > x_2 > \cdots > x_d > 0\}$.
6.2.3 Type $I_2(m)$

The type $I_2(m)$ systems are dihedral and correspond to the symmetry groups of regular $m$-gons in $\mathbb{R}^2$ for $m \ge 3$. Using a complex coordinate system $z = x_1 + \mathrm{i}x_2$ and $\bar z = x_1 - \mathrm{i}x_2$, a rotation through the angle $\theta$ can be expressed as $z \mapsto ze^{\mathrm{i}\theta}$, and the reflection along $(\sin\theta, -\cos\theta)$ is $z \mapsto \bar z e^{2\mathrm{i}\theta}$. Now let $\omega = e^{2\pi\mathrm{i}/m}$; then the reflection along
\[ v^{(j)} = \Big(\sin\frac{j\pi}{m}, -\cos\frac{j\pi}{m}\Big) \]
corresponds to $\sigma_j : z \mapsto \bar z\omega^j$ for $1 \le j \le 2m$; note that $v^{(m+j)} = -v^{(j)}$. Choose
\[ u_0 = \Big(\cos\frac{\pi}{2m}, \sin\frac{\pi}{2m}\Big); \]
then the positive roots are $\{v^{(j)} : 1 \le j \le m\}$ and the simple roots are $v^{(1)}, v^{(m)}$. Then $\sigma_m\sigma_1$ maps $z$ to $z\omega$ and has period $m$. The Coxeter diagram is two nodes joined by an edge labeled $m$:

o--m--o

A simple calculation shows that $\sigma_j\sigma_n\sigma_j = \sigma_{2j-n}$ for any $n, j$; thus there are two conjugacy classes of reflections, $\{\sigma_{2i}\}$ and $\{\sigma_{2i+1}\}$, when $m$ is even but only one class when $m$ is odd. There are three special cases for $m$: $I_2(3)$ is isomorphic to $A_2$; $I_2(4) = B_2$; and $I_2(6)$ is called $G_2$ in the context of Weyl groups. To compute the alternating polynomial, note that
\[ x_1\sin\frac{j\pi}{m} - x_2\cos\frac{j\pi}{m} = \frac12\exp\Big[\mathrm{i}\Big(\frac{\pi}{2} - \frac{j\pi}{m}\Big)\Big]\big(z - \bar z\omega^j\big) \]
for each $j = 1, \ldots, m$; thus
\[ a_R(x) = 2^{-m}\mathrm{i}^{-1}\big(z^m - \bar z^m\big). \]
The fundamental chamber is $\{(r\cos\theta, r\sin\theta) : r > 0,\ 0 < \theta < \pi/m\}$.
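The relations just stated are easy to test numerically in the complex coordinate; a small sketch (assuming nothing beyond the formulas above):

```python
import cmath

m = 5                                    # the group I_2(5)
omega = cmath.exp(2j * cmath.pi / m)

def sigma(j, z):
    # reflection sigma_j : z -> conj(z) * omega^j
    return z.conjugate() * omega**j

z = 0.3 + 1.7j
for j in range(m):
    for n in range(m):
        # x sigma_j sigma_n sigma_j means apply sigma_j, then sigma_n, then sigma_j
        lhs = sigma(j, sigma(n, sigma(j, z)))
        rhs = sigma((2 * j - n) % m, z)
        assert abs(lhs - rhs) < 1e-12    # sigma_j sigma_n sigma_j = sigma_{2j-n}

# sigma_m sigma_1 (apply sigma_m first) is the rotation z -> z*omega
w = sigma(1, sigma(m, z))
assert abs(w - z * omega) < 1e-12
```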
6.2.4 Type $D_d$

For our purposes, the type $D_d$ system can be considered as a special case of the system for type $B_d$, $d \ge 4$. The root system is a subset of that for $B_d$, namely $R = \{\varepsilon_i - \varepsilon_j : i \ne j\} \cup \{\operatorname{sign}(j-i)(\varepsilon_i + \varepsilon_j) : i \ne j\}$. The associated group is the subgroup of $W_d$ of elements with an even number of sign changes (equivalently, the subgroup fixing the polynomial $\prod_{j=1}^d x_j$). The simple roots are $\{\varepsilon_i - \varepsilon_{i+1} : i < d\} \cup \{\varepsilon_{d-1} + \varepsilon_d\}$ and the Coxeter diagram is a chain of $d - 2$ nodes with a fork at the end: the nodes for $\varepsilon_{d-1} - \varepsilon_d$ and $\varepsilon_{d-1} + \varepsilon_d$ are both joined to the node for $\varepsilon_{d-2} - \varepsilon_{d-1}$. The alternating polynomial is
\[ a_R(x) = \prod_{1\le i<j\le d}\big(x_i^2 - x_j^2\big) \]
and the fundamental chamber is $\{x : x_1 > x_2 > \cdots > x_{d-1} > |x_d|\}$.
6.2.5 Type $H_3$

The type $H_3$ system generates the symmetry group of the regular dodecahedron and of the regular icosahedron. The algebraic number (the golden ratio) $\tau = \frac12(1+\sqrt5)$, which satisfies $\tau^2 = \tau + 1$, is crucial here. Note that $\tau^{-1} = \tau - 1 = \frac12(\sqrt5 - 1)$. For the choice $u_0 = (3, 2\tau, \tau)$ the positive root system is
\[ R_+ = \big\{(2,0,0),\ (0,2,0),\ (0,0,2),\ (\tau, \pm\tau^{-1}, \pm1),\ (\pm1, \tau, \pm\tau^{-1}),\ (\tau^{-1}, \pm1, \tau),\ (-\tau^{-1}, 1, \tau),\ (\tau^{-1}, 1, -\tau)\big\}, \]
where the choices of signs in $\pm$ are independent of each other. The full root system $R = R_+ \cup (-R_+)$ is called the icosidodecahedron as a configuration in $\mathbb{R}^3$. Thus there are 15 reflections in the group $W(R)$. It is the symmetry group of the icosahedron $Q_{12} = \{(0, \pm\tau, \pm1), (\pm1, 0, \pm\tau), (\pm\tau, \pm1, 0)\}$ (which has 12 vertices and 20 triangular faces) and of the dodecahedron $Q_{20} = \big\{(0, \pm\tau^{-1}, \pm\tau), (\pm\tau, 0, \pm\tau^{-1}), (\pm\tau^{-1}, \pm\tau, 0), (\pm1, \pm1, \pm1)\big\}$ (which has 20 vertices and 12 pentagonal faces); see Coxeter [1973] for the details.

To understand the geometry of this group, consider the spherical Coxeter complex of $R$, namely the great circles on the unit sphere which are intersections with the planes $\{v^\perp : v \in R_+\}$. There are fivefold intersections at the vertices of the icosahedron (so the subgroup of $W(R)$ fixing such a vertex, the so-called stabilizer, is of the type $I_2(5)$), and there are threefold intersections at the vertices of the dodecahedron (with stabilizer group $I_2(3)$). The fundamental chamber meets the sphere in a spherical triangle with vertices at $(\tau+2)^{-1/2}(\tau, 1, 0)$, $3^{-1/2}(1, 1, 1)$, $2^{-1}(1, \tau, \tau^{-1})$ and the simple roots are $v_1 = (\tau, -\tau^{-1}, -1)$, $v_2 = (-1, \tau, -\tau^{-1})$ and $v_3 = (\tau^{-1}, -1, \tau)$. The angles between the simple roots are calculated as $\cos\theta(v_1, v_2) = -\frac12 = \cos\frac{2\pi}{3}$, $\cos\theta(v_1, v_3) = 0$, $\cos\theta(v_2, v_3) = -\frac12\tau = \cos\frac{4\pi}{5}$. Thus the Coxeter diagram is

o---o--5--o

The alternating polynomial can be expanded, by computer algebra for instance, but we do not need to write it down here.
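The listed data can be checked numerically: all fifteen positive roots have positive inner product with $u_0$, and the simple roots have the stated angles. A floating-point sketch (names ours):

```python
import math

t = (1 + math.sqrt(5)) / 2                 # the golden ratio tau
u0 = (3, 2 * t, t)

pm = [1, -1]
R_plus = ([(2, 0, 0), (0, 2, 0), (0, 0, 2)]
          + [(t, s / t, q) for s in pm for q in pm]          # (tau, +-1/tau, +-1)
          + [(s, t, q / t) for s in pm for q in pm]          # (+-1, tau, +-1/tau)
          + [(s / t, q, t) for s in pm for q in pm
             if (s, q) != (-1, -1)]                          # (1/tau,+-1,tau),(-1/tau,1,tau)
          + [(1 / t, 1, -t)])

dot = lambda a, b: sum(x * y for x, y in zip(a, b))
assert len(R_plus) == 15
assert all(dot(u0, v) > 0 for v in R_plus)

v1 = (t, -1 / t, -1); v2 = (-1, t, -1 / t); v3 = (1 / t, -1, t)
cos = lambda a, b: dot(a, b) / math.sqrt(dot(a, a) * dot(b, b))
assert abs(cos(v1, v2) + 0.5) < 1e-12      # angle 2*pi/3
assert abs(cos(v1, v3)) < 1e-12            # angle pi/2
assert abs(cos(v2, v3) + t / 2) < 1e-12    # angle 4*pi/5
```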
6.2.6 Type $F_4$

For a type $F_4$ system the Coxeter diagram is a chain of four nodes with the middle edge labeled 4:

o---o--4--o---o

The group contains $W_4$ as a subgroup (of index 3). One conjugacy class of roots consists of $R_1 = \{\varepsilon_i - \varepsilon_j : i \ne j\} \cup \{\operatorname{sign}(j-i)(\varepsilon_i + \varepsilon_j) : i \ne j\}$ and the other class is $R_2 = \{\pm2\varepsilon_i : 1 \le i \le 4\} \cup \{\pm\varepsilon_1 \pm \varepsilon_2 \pm \varepsilon_3 \pm \varepsilon_4\}$. Then $R = R_1 \cup R_2$ and there are 24 positive roots. With the orthogonal coordinates $y_1 = (x_1 + x_2)/\sqrt2$, $y_2 = (x_1 - x_2)/\sqrt2$, $y_3 = (x_3 + x_4)/\sqrt2$, $y_4 = (x_3 - x_4)/\sqrt2$, the alternating polynomial is
\[ a_R(x) = 2^6\prod_{1\le i<j\le 4}\big(x_i^2 - x_j^2\big)\big(y_i^2 - y_j^2\big). \]
6.2.7 Other types

There is a four-dimensional version of the icosahedron corresponding to the root system $H_4$, with diagram o---o---o--5--o; it is the symmetry group of the 600-cell (Coxeter [1973]) and was called the hecatonicosahedroidal group by Grove [1974]. In addition there are the exceptional Weyl groups $E_6$, $E_7$, $E_8$, but they will not be studied in this book.
6.2.8 Miscellaneous results

For any root system, the subgroup generated by a subset of simple reflections (the result of deleting one or more nodes from the Coxeter diagram) is called a parabolic subgroup of $W(R)$. For example, the parabolic subgroups of the type $A_{d-1}$ diagram are Young subgroups, with $S_m \times S_{d-m}$ maximal for each $m$ with $1 \le m < d$. Removing one interior node from the $B_d$ diagram results in a subgroup of the form $S_m \times W_{d-m}$.
As mentioned before, the number of chambers of the Coxeter complex (the complement of $\bigcup\{v^\perp : v \in R_+\}$) equals the order of the group $W(R)$. For each chamber $C$ there is a unique $w \in W(R)$ such that $C_0 w = C$, where $C_0$ is the fundamental chamber. Thus for each $v \in R$ there exists $w \in W(R)$ such that $vw$ is a simple root, assuming that $R$ is reduced, and so each $\sigma_v$ is conjugate to a simple reflection. (Any chamber could be chosen as the fundamental chamber and its walls would then determine a set of simple roots.) It is also true that the positive roots, when expressed as linear combinations of the simple roots (recall that they form a basis for $\operatorname{span}(R)$), have all coefficients nonnegative.
The number of conjugacy classes of reflections equals the number $c$ of connected components of the Coxeter diagram after all edges with an even label have been removed. Also, the quotient $G = W/W'$ of $W$ by the commutator subgroup $W'$, that is, the maximal abelian image, is $\mathbb{Z}_2^c$. The argument is easy: $G$ is the group subject to the same relations but now with commuting generators. Denote these generators by $s'_1, s'_2, \ldots$, so that $(s'_i)^2 = 1$ and $G$ is a direct product of $\mathbb{Z}_2$ factors. Consider two reflections $s_1, s_2$ linked by an edge with an odd label $2m+1$ (recall that the label 3 is to be understood when no label is indicated); this means that $(s_1 s_2)^{2m+1} = 1$ and thus $s_2 = w^{-1}s_1 w$ with $w = (s_2 s_1)^m$; also $(s'_1)^{2m+1}(s'_2)^{2m+1} = s'_1 s'_2 = 1$. This implies that $s'_1 = s'_2$ and thus that $s_1, s_2$ are conjugate in $W$. Relations of the form $(s_2 s_3)^{2m} = 1$ have no effect on $s'_2, s'_3$. Thus simple reflections in different parts of the modified diagram have different images in $G$ and cannot be conjugate in $W$. By the above remarks it suffices to consider the conjugacy classes of simple reflections.
There is a length function on $W(R)$: for any $w$ in the group it equals both the number of factors in the shortest product $w = s_{i_1}s_{i_2}\cdots s_{i_m}$ in terms of simple reflections and the number of positive roots made negative by $w$, that is, $|R_+ w \cap (-R_+)|$. For the group $S_d$ it is the number of adjacent transpositions required to express a permutation; it is also the number of inversions in a permutation considered as a listing of the numbers $1, 2, \ldots, d$. For example, the permutation $(4, 1, 3, 2)$ has length $3 + 0 + 1 + 0 = 4$ (the first entry is larger than the three following entries, and so on), and $(1, 2, 3, 4)$, $(4, 3, 2, 1)$ have lengths 0 and 6 respectively.
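The inversion count is immediate to compute; for example:

```python
def length(perm):
    # number of inversions = Coxeter length in S_d
    return sum(1 for i in range(len(perm))
                 for j in range(i + 1, len(perm))
                 if perm[i] > perm[j])

assert length((4, 1, 3, 2)) == 4
assert length((1, 2, 3, 4)) == 0
assert length((4, 3, 2, 1)) == 6
```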
6.3 Invariant Polynomials

Any subgroup of $O(d)$ has representations on the spaces of homogeneous polynomials $\mathcal{P}^d_n$ of any given degree $n$. These representations are defined as $w \mapsto R(w)$ where $R(w)p(x) = p(xw)$; sometimes this equation is written as $wp(x)$, when there is no danger of confusion. The effect on the gradient is as follows: let $\nabla p(x)$ denote the row vector $(\partial p/\partial x_1, \ldots, \partial p/\partial x_d)$; then
\[ \nabla[R(w)p](x) = \Big(\frac{\partial p}{\partial x_1}(xw)\ \cdots\ \frac{\partial p}{\partial x_d}(xw)\Big)w^{\mathrm T} = R(w)\nabla p(x)\,w^{\mathrm T}. \]
Definition 6.3.1 For a finite subgroup $W$ of $O(d)$, let $\Pi^W$ denote the space of $W$-invariant polynomials $\{p \in \Pi^d : R(w)p = p$ for all $w \in W\}$ and set $\mathcal{P}^W_n = \mathcal{P}^d_n \cap \Pi^W$, the space of invariant homogeneous polynomials of degree $n = 0, 1, 2, \ldots$
There is an obvious projection onto the space of invariants:
\[ \pi_W p = \frac{1}{|W|}\sum_{w\in W} R(w)p. \]
When $W$ is a finite reflection group, $\Pi^W$ has a uniquely elegant structure; there is a set of algebraically independent homogeneous generators whose degrees are fundamental constants associated with $W$. The following statement is known as Chevalley's theorem.
Theorem 6.3.2 Suppose that $R$ is a root system in $\mathbb{R}^d$ and that $W = W(R)$; then there exist $d$ algebraically independent $W$-invariant homogeneous polynomials $\{q_j : 1 \le j \le d\}$ of degrees $n_j$, such that $\Pi^W$ is the ring of polynomials generated by $\{q_j\}$, $|W| = n_1 n_2 \cdots n_d$ and the number of reflections in $W$ is $\sum_{j=1}^d (n_j - 1)$.
Corollary 6.3.3 The Poincaré series for $W$ has the factorization
\[ \sum_{n=0}^\infty \big(\dim\mathcal{P}^W_n\big)t^n = \prod_{j=1}^d \big(1 - t^{n_j}\big)^{-1}. \]

Proof The algebraic independence means that the set of homogeneous polynomials $\{q_1^{m_1}q_2^{m_2}\cdots q_d^{m_d} : \sum_{j=1}^d n_j m_j = k\}$ is linearly independent for any $k = 0, 1, 2, \ldots$; thus it is a basis for $\mathcal{P}^W_k$. The cardinality of the set equals the coefficient of $t^k$ in the product.
It suffices to study the invariants of indecomposable root systems; note that if $\dim\operatorname{span}(R) = m < d$ then the orthogonal complement of the span provides $d - m$ invariants (the coordinate functions) of degree 1. When the rank of $R$ is $d$ and $R$ is indecomposable, the group $W(R)$ is irreducibly represented on $\mathbb{R}^d$; this is called the reflection representation. The numbers $\{n_j : 1 \le j \le d\}$ are called the fundamental degrees of $W(R)$ in this situation. The coefficient of $t^k$ in the product $\prod_{j=1}^d [1 + (n_j - 1)t]$ is the number of elements of $W(R)$ whose fixed-point set is of codimension $k$ (according to a theorem of Shephard and Todd [1954]).
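For $B_2$ the fundamental degrees are 2, 4, so $\prod_j[1+(n_j-1)t] = (1+t)(1+3t) = 1 + 4t + 3t^2$: one element with fixed-point codimension 0 (the identity), four with codimension 1 (the reflections) and three with codimension 2 (the nontrivial rotations). A brute-force check (names ours):

```python
from itertools import permutations, product

# elements of W(B_2): the 2x2 signed permutation matrices
group = []
for perm in permutations(range(2)):
    for signs in product([1, -1], repeat=2):
        group.append(tuple(tuple(signs[i] if perm[i] == j else 0
                                 for j in range(2)) for i in range(2)))

def codim_fixed(w):
    # codimension of the fixed-point set = rank of I - w (2x2 case)
    a = 1 - w[0][0]; b = -w[0][1]; c = -w[1][0]; d = 1 - w[1][1]
    if a == b == c == d == 0:
        return 0
    return 1 if a * d - b * c == 0 else 2

counts = [0, 0, 0]
for w in group:
    counts[codim_fixed(w)] += 1
assert counts == [1, 4, 3]    # coefficients of (1 + t)(1 + 3t)
```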
Lemma 6.3.4 If the rank of a root system $R$ is $d$ then the lowest degree of a nonconstant invariant polynomial is 2, and if $R$ is indecomposable then there are no proper invariant subspaces of $\mathbb{R}^d$.

Proof Identify $\mathbb{R}^d$ with $\mathcal{P}^d_1$, and suppose that $E$ is an invariant subspace. Any polynomial in $E$ has the form $p(x) = \langle x, u\rangle$ for some $u \in \mathbb{R}^d$. By assumption $R(\sigma_v)p(x) = \langle x\sigma_v, u\rangle = \langle x, u\sigma_v\rangle \in E$ for any $v \in R$; thus $p(x) - R(\sigma_v)p(x) = \langle x, u - u\sigma_v\rangle \in E$ and $\langle x, u - u\sigma_v\rangle = 2\langle x, v\rangle\langle u, v\rangle/\|v\|^2$. If $p$ is $W$-invariant then this implies that $\langle u, v\rangle = 0$ for all $v \in R$, but $\operatorname{span}(R) = \mathbb{R}^d$ and so $u = 0$. However, if $R$ is indecomposable and $u \ne 0$ then there exists $v \in R$ with $\langle u, v\rangle \ne 0$ and the equation for $\langle x, u - u\sigma_v\rangle$ shows that $\langle x, v\rangle \in E$. Let
\[ R_1 = \{v_1 \in R : \langle x, v_1\rangle \in E\}. \]
Any root $v_2 \in R$ satisfies either $v_2 \in R_1$ or $\langle v_2, v_1\rangle = 0$ for any $v_1 \in R_1$. Since $R$ is indecomposable, $R_1 = R$ and $E = \mathcal{P}^d_1$.
Clearly $q(x) = \sum_{j=1}^d x_j^2$ is invariant for any subgroup of $O(d)$. Here is another general theorem about the fundamental invariants.

Theorem 6.3.5 Let $\{q_j : 1 \le j \le d\}$ be fundamental invariants for the root system $R$, and let $J(x)$ be the Jacobian matrix $\big(\partial q_i(x)/\partial x_j\big)_{i,j=1}^d$; then $\det J(x) = c\,a_R(x)$ (see Definition 6.2.3) for some scalar multiple $c$.

Proof The algebraic independence implies that $\det J \ne 0$. For any invariant $q$ and $v \in R$, $\nabla q(x\sigma_v) = \nabla q(x)\sigma_v$; thus any row in $J(x\sigma_v)$ equals the corresponding row of $J(x)$ multiplied by $\sigma_v$. Now $\det J(x\sigma_v) = \det J(x)\det\sigma_v = -\det J(x)$, and so $\det J(x)$ has the alternating property. By Lemma 6.2.5, $\det J$ is a multiple of $a_R$. Further, $\det J$ is a homogeneous polynomial of degree $\sum_{j=1}^d(n_j - 1)$, which equals the number of reflections in $W(R)$, the degree of $a_R$.
6.3.1 Type $A_{d-1}$ invariants

The type $A_{d-1}$ invariants are of course the classical symmetric polynomials. The fundamental invariants are usually taken to be the elementary symmetric polynomials $e_1, e_2, \ldots, e_d$, defined by $\sum_{j=0}^d e_j(x)t^j = \prod_{i=1}^d (1 + tx_i)$. Restricted to the span $(1, 1, \ldots, 1)^\perp$ of the root system, $e_1 = 0$; the fundamental degrees of the group are $2, 3, \ldots, d$. By way of verification note that the product of the degrees is $d!$ and the sum $\sum_{j=2}^d (j-1) = \frac12 d(d-1)$, the number of transpositions. In this case $\det J(x) = \prod_{i<j}(x_i - x_j)$, the Vandermonde determinant. To see this, argue by induction on $d$ (this is trivial for $d = 1$). Let $e'_i$ denote the elementary symmetric function of degree $i$ in $(x_1, x_2, \ldots, x_{d-1})$; then $e_i = e'_i + x_d e'_{i-1}$. The $i$th row of $J(x)$ is
\[ \Big(\frac{\partial}{\partial x_j}e'_i + x_d\frac{\partial}{\partial x_j}e'_{i-1} \quad\ e'_{i-1}\Big). \]
The first row is $(1\ 1\ \cdots\ 1)$ and the term $e'_d$ in the last row is 0. Now subtract $x_d$ times the first row from the second row, subtract $x_d$ times the resulting second row from the third row and so on. The resulting matrix has $i$th row (for $1 \le i \le d-1$, with $e'_0 = 1$) equal to
\[ \Big(\frac{\partial}{\partial x_j}e'_i \quad\ \sum_{k=0}^{i-1} e'_{i-1-k}(-x_d)^k\Big), \]
and last row equal to $\big(0\ \cdots\ 0\ \ \sum_{k=0}^{d-1}e'_{d-1-k}(-x_d)^k\big)$. The last entry is $\prod_{j=1}^{d-1}(x_j - x_d)$, and the proof is complete by induction.
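The identity $\det J(x) = \prod_{i<j}(x_i - x_j)$ can be verified at any point in exact rational arithmetic, using $\partial e_i/\partial x_j = e_{i-1}$ of the remaining variables; a sketch (names ours):

```python
from fractions import Fraction
from itertools import combinations
from math import prod

x = [Fraction(v) for v in (2, 5, -1, 7)]   # any distinct rationals
d = len(x)

def e(vals, i):
    # elementary symmetric function e_i evaluated at vals
    return sum(prod(c) for c in combinations(vals, i)) if i else 1

# Jacobian entries: d e_i / d x_j = e_{i-1}(x with x_j removed)
J = [[e(x[:j] + x[j + 1:], i - 1) for j in range(d)] for i in range(1, d + 1)]

def det(m):
    # Gaussian elimination over the rationals, tracking the row-swap sign
    m = [row[:] for row in m]; n = len(m); sign = 1
    for c in range(n):
        p = next((r for r in range(c, n) if m[r][c] != 0), None)
        if p is None:
            return 0
        if p != c:
            m[c], m[p] = m[p], m[c]; sign = -sign
        for r in range(c + 1, n):
            f = m[r][c] / m[c][c]
            m[r] = [a - f * b for a, b in zip(m[r], m[c])]
    return sign * prod(m[i][i] for i in range(n))

vandermonde = prod(x[i] - x[j] for i in range(d) for j in range(i + 1, d))
assert det(J) == vandermonde
```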
6.3.2 Type $B_d$ invariants

The type $B_d$ invariants are generated by the elementary symmetric functions $q_j$ in the squares of the $x_i$, that is, $\sum_{j=0}^d q_j(x)t^j = \prod_{i=1}^d\big(1 + tx_i^2\big)$. The fundamental degrees are $2, 4, 6, \ldots, 2d$, the order of the group is $2^d d!$ and the number of reflections is $\sum_{j=1}^d (2j - 1) = d^2$. Also, $\det J(x) = 2^d\prod_{j=1}^d x_j\prod_{i<j}\big(x_i^2 - x_j^2\big)$. To see this, note that $\partial/\partial x_i = 2x_i\,\partial/\partial\big(x_i^2\big)$ for each $i$, and use the result for $A_{d-1}$.
6.3.3 Type $D_d$ invariants

The system with type $D_d$ invariants has all but one of the fundamental invariants of the type $B_d$ system, namely $q_1, q_2, \ldots, q_{d-1}$, and it also has $e_d = x_1 x_2 \cdots x_d$. The fundamental degrees are $2, 4, 6, \ldots, 2d-2, d$, the order of the group is $2^{d-1}d!$ and the number of reflections is $\sum_{j=1}^{d-1}(2j - 1) + (d - 1) = d^2 - d$.
6.3.4 Type $I_2(m)$ invariants

For a system with type $I_2(m)$ invariants, in complex coordinates $z$ and $\bar z$ for $\mathbb{R}^2$, where $z = x_1 + \mathrm{i}x_2$, the fundamental invariants are $z\bar z$ and $z^m + \bar z^m$. The fundamental degrees are 2 and $m$, the order of the group is $2m$ and the number of reflections is $m$.
6.3.5 Type $H_3$ invariants

The fundamental invariants for a type $H_3$ system are
\[ q_1(x) = \sum_{j=1}^3 x_j^2, \]
\[ q_2(x) = \prod\{\langle x, u\rangle : u \in Q_{12},\ \langle(3, 2\tau, \tau), u\rangle > 0\}, \]
\[ q_3(x) = \prod\{\langle x, u\rangle : u \in Q_{20},\ \langle(3, 2\tau, \tau), u\rangle > 0\}. \]
That is, the invariants $q_2, q_3$ are products of inner products with half the vertices of the icosahedron $Q_{12}$ or of the dodecahedron $Q_{20}$, respectively. The same argument used for the alternating polynomial $a_R$ shows that the products $q_2(x\sigma_v)$ and $q_3(x\sigma_v)$ for any $v \in R$ have the same factors as $q_2(x)$, $q_3(x)$ (respectively) with an even number of sign changes. By direct verification, $q_2$ is not a multiple of $q_1$ and $q_3$ is not a polynomial in $q_1$ and $q_2$. Thus the fundamental degrees are 2, 6, 10, the order of the group is 120 and there are $2 + 6 + 10 - 3 = 15$ reflections.
6.3.6 Type $F_4$ invariants

For a type $F_4$ system, using the notation from type $B_4$ (in which $q_i(x)$ denotes the elementary symmetric function of degree $i$ in $x_1^2, \ldots, x_4^2$), a set of fundamental invariants is given by
\[ g_1 = q_1, \]
\[ g_2 = 6q_3 - q_1 q_2, \]
\[ g_3 = 24q_4 + 2q_2^2 - q_1^2 q_2, \]
\[ g_4 = 288q_2 q_4 - 8q_2^3 - 27q_1^3 q_3 + 12q_1^2 q_2^2. \]
These formulae are based on the paper by Ignatenko [1984]. The fundamental degrees are 2, 6, 8, 12 and the order of the group is 1152.
6.4 Differential–Difference Operators

Keeping in mind the goal of studying orthogonal polynomials for weights which are invariant under some finite reflection group, we now consider invariant differential operators. This rather quickly leads to difficulties: it may be hard to construct a sufficient number of such operators for an arbitrary group (although this will be remedied), but the biggest problem is that they do not map polynomials to polynomials unless the polynomials are themselves invariant. So, to deal with arbitrary polynomials a new class of operators is needed. There are two main concepts: first, the reflection operator $p \mapsto [p(x) - p(x\sigma_v)]/\langle x, v\rangle$ for a given nonzero $v \in \mathbb{R}^d$, the numerator of which is divisible by $\langle x, v\rangle$ because it has the alternating property for $\sigma_v$ (see Lemma 6.2.5); second, the reflection operators corresponding to the positive roots of some root system need to be assembled in a way which incorporates parameters and mimics the properties of derivatives. It turns out that this is possible to a large extent, except for the customary product rule, and the number of independent parameters equals the number of conjugacy classes of reflections.

Fix a reduced root system $R$, with positive roots $R_+$ and associated reflection group $W = W(R)$. The parameters are specified in the form of a multiplicity function (the terminology comes from analysis on homogeneous spaces formed from compact Lie groups).

Definition 6.4.1 A multiplicity function is a function $v \mapsto \kappa_v$ defined on a root system with the property that $\kappa_u = \kappa_v$ whenever $\sigma_u$ is conjugate to $\sigma_v$ in $W$, that is, when there exists $w \in W$ such that $uw = v$. The values can be numbers or formal parameters (transcendental extensions of $\mathbb{Q}$).
Definition 6.4.2 The first-order differential–difference operators $\mathcal{D}_i$ are defined, in coordinate form for $1 \le i \le d$ and in coordinate-free form for $u \in \mathbb{R}^d$, by, for $p \in \Pi^d$,
\[ \mathcal{D}_i p(x) = \frac{\partial p(x)}{\partial x_i} + \sum_{v\in R_+}\kappa_v\,\frac{p(x) - p(x\sigma_v)}{\langle x, v\rangle}\,v_i, \]
\[ \mathcal{D}_u p(x) = \langle u, \nabla p(x)\rangle + \sum_{v\in R_+}\kappa_v\,\frac{p(x) - p(x\sigma_v)}{\langle x, v\rangle}\,\langle u, v\rangle. \]
Clearly, $\mathcal{D}_i = \mathcal{D}_{\varepsilon_i}$ and $\mathcal{D}_u$ maps $\mathcal{P}^d_n$ into $\mathcal{P}^d_{n-1}$ for $n \ge 1$; that is, $\mathcal{D}_u$ is a homogeneous operator of degree $-1$. Under the action of $W$, $\mathcal{D}_u$ transforms like the directional derivative.
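In the rank-one case $d = 1$, $R_+ = \{1\}$ and $W = \mathbb{Z}_2$, the definition reduces to $\mathcal{D}p(x) = p'(x) + \kappa\,[p(x) - p(-x)]/x$, so that $\mathcal{D}x^n = \big(n + \kappa(1 - (-1)^n)\big)x^{n-1}$. A sketch with an arbitrary rational $\kappa$ (the representation and names are ours):

```python
from fractions import Fraction

kappa = Fraction(3, 4)   # an arbitrary multiplicity value

def dunkl(p):
    # rank-one Dunkl operator: Dp = p' + kappa*(p(x) - p(-x))/x
    # p is a coefficient list, p[n] = coefficient of x^n
    out = [Fraction(0)] * max(len(p) - 1, 1)
    for n, c in enumerate(p):
        if n == 0:
            continue
        out[n - 1] += n * c                       # derivative part
        out[n - 1] += kappa * (1 - (-1)**n) * c   # difference part
    return out

assert dunkl([0, 0, 1]) == [0, 2]                  # D x^2 = 2x
assert dunkl([0, 0, 0, 1]) == [0, 0, 3 + 2 * kappa]  # D x^3 = (3 + 2k) x^2
```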
Proposition 6.4.3 For $u \in \mathbb{R}^d$ and $w \in W$, $\mathcal{D}_u R(w)p = R(w)\mathcal{D}_{uw}p$ for $p \in \Pi^d$.

Proof It is convenient to express $\mathcal{D}_u$ as a sum over $R$ and to divide by 2; thus
\[ \mathcal{D}_u R(w)p(x) = \langle uw, \nabla p(xw)\rangle + \frac12\sum_{v\in R}\kappa_v\,\frac{p(xw) - p(x\sigma_v w)}{\langle x, v\rangle}\,\langle u, v\rangle \]
\[ = \langle uw, \nabla p(xw)\rangle + \frac12\sum_{v\in R}\kappa_v\,\frac{p(xw) - p(xw\sigma_{vw})}{\langle xw, vw\rangle}\,\langle uw, vw\rangle \]
\[ = \langle uw, \nabla p(xw)\rangle + \frac12\sum_{z\in R}\kappa_{zw^{-1}}\,\frac{p(xw) - p(xw\sigma_z)}{\langle xw, z\rangle}\,\langle uw, z\rangle \]
\[ = R(w)\mathcal{D}_{uw}p(x). \]
In the third line we changed the summation variable from $v$ to $z = vw$ and then used the property of $\kappa$ from Definition 6.4.1 giving $\kappa_{zw^{-1}} = \kappa_z$.
The important aspect of differential–difference operators is their commutativity. Some lemmas will be established before the main proof. An auxiliary operator is convenient.

Definition 6.4.4 For $v \in \mathbb{R}^d$ and $v \ne 0$ define the operator $\rho_v$ by
\[ \rho_v f(x) = \frac{f(x) - f(x\sigma_v)}{\langle x, v\rangle}. \]
Lemma 6.4.5 For $u, v \in \mathbb{R}^d$ and $v \ne 0$,
\[ \langle u, \nabla\rangle\rho_v f(x) - \rho_v\langle u, \nabla\rangle f(x) = \frac{\langle u, v\rangle}{\langle x, v\rangle}\Big[\frac{2\langle v, \nabla f(x\sigma_v)\rangle}{\langle v, v\rangle} - \frac{f(x) - f(x\sigma_v)}{\langle x, v\rangle}\Big]. \]

Proof Recall that $\langle u, \nabla R(\sigma_v)f\rangle = \langle u\sigma_v, R(\sigma_v)\nabla f\rangle$; thus
\[ \langle u, \nabla\rangle\rho_v f(x) = \frac{\langle u, \nabla f(x)\rangle - \langle u\sigma_v, \nabla f(x\sigma_v)\rangle}{\langle x, v\rangle} - \frac{\langle u, v\rangle[f(x) - f(x\sigma_v)]}{\langle x, v\rangle^2} \]
and
\[ \rho_v\langle u, \nabla\rangle f(x) = \frac{\langle u, \nabla f(x)\rangle - \langle u, \nabla f(x\sigma_v)\rangle}{\langle x, v\rangle}. \]
Subtracting the second quantity from the first gives the claimed equation since $u - u\sigma_v = (2\langle u, v\rangle/\langle v, v\rangle)v$.
In the proof below that $\mathcal{D}_u\mathcal{D}_z = \mathcal{D}_z\mathcal{D}_u$, Lemma 6.4.5 takes care of the interaction between differentiation and differencing. The next lemma handles the difference parts. Note that the product of two reflections is either the identity or a plane rotation (the plane being the span of the corresponding roots).

Lemma 6.4.6 Suppose that $B(x, y)$ is a bilinear form on $\mathbb{R}^d$ such that $B(x\sigma_v, y\sigma_v) = B(y, x)$ whenever $v \in \operatorname{span}(x, y)$, and let $w$ be a plane rotation in $W$; then
\[ \sum\Big\{\kappa_u\kappa_v\,\frac{B(u, v)}{\langle x, u\rangle\langle x, v\rangle} : u, v \in R_+,\ \sigma_u\sigma_v = w\Big\} = 0, \quad x \in \mathbb{R}^d, \]
\[ \sum\{\kappa_u\kappa_v\,B(u, v)\,\rho_u\rho_v : u, v \in R_+,\ \sigma_u\sigma_v = w\} = 0, \]
where the two equations refer to rational functions and operators, respectively.
Proof Let $E$ be the plane of $w$ (the orthogonal complement of $E$ is pointwise fixed under $w$); thus $\sigma_u\sigma_v = w$ implies that $u, v \in E$. Let $R_0 = R_+ \cap E$ and let $W_0 = W(R_0)$, the reflection group generated by $R_0$. Because $W_0$ is a dihedral group, $\sigma_v w\sigma_v = w^{-1}$ for any $v \in R_0$. Denote the first sum above by $s(x)$. We will show that $s$ is $W_0$-invariant. Fix $z \in R_0$; it suffices to show that $s(x\sigma_z) = s(x)$. Consider the effect of $\sigma_z$ on $R_0$: the map $v \mapsto v\sigma_z$ is an involution, and so define $\tilde v = \varepsilon(v)v\sigma_z$ for $v \in R_0$, where $\tilde v \in R_0$ and $\varepsilon(v) = \pm1$. Then $\sigma_z\sigma_v\sigma_z = \sigma_{\tilde v}$; also $\tilde{\tilde v} = v$ and $\varepsilon(\tilde v) = \varepsilon(v)$. Then
\[ s(x\sigma_z) = \sum\Big\{\kappa_u\kappa_v\,\frac{B(u, v)}{\langle x\sigma_z, u\rangle\langle x\sigma_z, v\rangle} : u, v \in R_0,\ \sigma_u\sigma_v = w\Big\} \]
\[ = \sum\Big\{\kappa_{\tilde u}\kappa_{\tilde v}\,\frac{B(\tilde u, \tilde v)}{\langle x\sigma_z, \tilde u\rangle\langle x\sigma_z, \tilde v\rangle} : u, v \in R_0,\ \sigma_{\tilde u}\sigma_{\tilde v} = w\Big\} \]
\[ = \sum\Big\{\kappa_u\kappa_v\,\frac{B(\varepsilon(u)u\sigma_z, \varepsilon(v)v\sigma_z)}{\langle x, \varepsilon(u)u\rangle\langle x, \varepsilon(v)v\rangle} : u, v \in R_0,\ \sigma_z\sigma_u\sigma_v\sigma_z = w\Big\} \]
\[ = \sum\Big\{\kappa_u\kappa_v\,\frac{B(v, u)}{\langle x, u\rangle\langle x, v\rangle} : u, v \in R_0,\ \sigma_u\sigma_v = \sigma_z w\sigma_z = w^{-1}\Big\} = s(x). \]
The second line follows by a change in summation variables and the third line uses the multiplicity function property that $\kappa_{\tilde u} = \kappa_u$ because $\sigma_{\tilde u} = \sigma_z\sigma_u\sigma_z$, and also $\langle x\sigma_z, \tilde u\rangle = \langle x, \varepsilon(u)u\rangle$; the fourth line uses the assumption about the form $B$ and also $(\sigma_u\sigma_v)^{-1} = \sigma_v\sigma_u$. Now let $q(x) = s(x)\prod_{v\in R_0}\langle x, v\rangle$; then $q(x)$ is a polynomial of degree $|R_0| - 2$ and it has the alternating property for $W_0$. Thus $q(x) = 0$.
Let $f(x)$ be an arbitrary polynomial and, summing over $u, v \in R_0$, $\sigma_u\sigma_v = w$, consider
\[ \sum\kappa_u\kappa_v B(u, v)\rho_u\rho_v f(x) = \sum\kappa_u\kappa_v B(u, v)\Big[\frac{f(x) - f(x\sigma_v)}{\langle x, u\rangle\langle x, v\rangle} - \frac{f(x\sigma_u) - f(xw)}{\langle x, u\rangle\langle x, v\sigma_u\rangle}\Big]. \]
The coefficient of $f(x)$ in the sum is 0 by the first part of the lemma. For a fixed $z \in R_0$ the term $f(x\sigma_z)$ appears twice, for the case $\sigma_u\sigma_z = w$ with coefficient $-\kappa_u\kappa_z B(u, z)/(\langle x, u\rangle\langle x, z\rangle)$ and for the case $\sigma_z\sigma_v = w$ with coefficient $-\kappa_z\kappa_v B(z, v)/(\langle x, z\rangle\langle x, v\sigma_z\rangle)$. But $\sigma_v = \sigma_z\sigma_u\sigma_z$ and thus $v = \tilde u$ (using the notation of the previous paragraph), and so the second coefficient equals
\[ -\kappa_z\kappa_u\,\frac{B(z, \tilde u)}{\langle x, z\rangle\langle x, \varepsilon(u)u\rangle} = -\kappa_z\kappa_u\,\frac{B((-z)\sigma_z, \varepsilon(u)u\sigma_z)}{\langle x, z\rangle\langle x, \varepsilon(u)u\rangle} = \kappa_z\kappa_u\,\frac{B(u, z)}{\langle x, u\rangle\langle x, z\rangle}, \]
which cancels out the first coefficient. To calculate the coefficient of $f(xw)$ we note that, for any $z \in R_0$, $\sigma_z\sigma_v = w$ if and only if $\sigma_{\tilde v}\sigma_z = w$; thus
\[ \sum_v\kappa_z\kappa_v\,\frac{B(z, v)}{\langle x, z\rangle\langle x, v\sigma_z\rangle} = \sum_v\kappa_z\kappa_{\tilde v}\,\frac{B(z, \varepsilon(v)\tilde v\sigma_z)}{\langle x, z\rangle\langle x, \varepsilon(v)\tilde v\rangle} = -\sum_{\tilde v}\kappa_z\kappa_{\tilde v}\,\frac{B(\tilde v, z)}{\langle x, z\rangle\langle x, \tilde v\rangle}, \]
which shows that the coefficient is $-s(x)$ and hence 0.
Corollary 6.4.7 Under the same hypotheses,
\[ \sum\{\kappa_u\kappa_v\,B(u, v)\,\rho_u\rho_v : u, v \in R_+\} = 0. \]

Proof Decompose the sum into parts $\sigma_u\sigma_v = w$ for rotations $w \in W$ and a part $\sigma_u\sigma_v = 1$ (that is, $u = v$). Each rotation contributes 0 by the lemma, and $\rho_u^2 = 0$ for each $u$ because $\rho_u f(x\sigma_u) = \rho_u f(x)$ for any polynomial $f$.
Theorem 6.4.8 For $t, u \in \mathbb{R}^d$, $\mathcal{D}_t\mathcal{D}_u = \mathcal{D}_u\mathcal{D}_t$.

Proof For any polynomial $f$, the action of the commutator can be expressed as follows:
\[ (\mathcal{D}_t\mathcal{D}_u - \mathcal{D}_u\mathcal{D}_t)f = (\langle t, \nabla\rangle\langle u, \nabla\rangle - \langle u, \nabla\rangle\langle t, \nabla\rangle)f \]
\[ + \sum_{v\in R_+}\kappa_v\big[\langle u, v\rangle(\langle t, \nabla\rangle\rho_v - \rho_v\langle t, \nabla\rangle) - \langle t, v\rangle(\langle u, \nabla\rangle\rho_v - \rho_v\langle u, \nabla\rangle)\big]f \]
\[ + \sum_{v,z\in R_+}\kappa_v\kappa_z\big(\langle t, v\rangle\langle u, z\rangle - \langle u, v\rangle\langle t, z\rangle\big)\rho_v\rho_z f. \]
The first line of the right-hand side is trivially zero, the second line vanishes by Lemma 6.4.5 and the third line vanishes by Lemma 6.4.6 applied to the bilinear form $B(x, y) = \langle t, x\rangle\langle u, y\rangle - \langle u, x\rangle\langle t, y\rangle$. To see that $B$ satisfies the hypothesis, let $v = ax + by$ with $a, b \in \mathbb{R}$ and $v \ne 0$; then $B(v, y) = aB(x, y)$, $B(x, v) = bB(x, y)$ and
\[ B(x\sigma_v, y\sigma_v) = B(x, y)\Big[1 - \frac{2a\langle x, v\rangle}{\langle v, v\rangle} - \frac{2b\langle y, v\rangle}{\langle v, v\rangle}\Big] = B(x, y)(1 - 2) = B(y, x). \]
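Theorem 6.4.8 can be tested directly for the $B_2$ root system, with independent multiplicities on the two conjugacy classes. The sketch below (all helper names ours) implements $\mathcal{D}_u$ on polynomials in two variables with exact arithmetic and checks $\mathcal{D}_t\mathcal{D}_u = \mathcal{D}_u\mathcal{D}_t$ on a sample polynomial:

```python
from fractions import Fraction

k0, k1 = Fraction(1, 2), Fraction(2, 3)   # multiplicities for the two classes

# polynomials in x, y stored as {(i, j): coefficient};
# each B_2 reflection maps monomials to signed monomials
ROOTS = [
    ((1, 0),  k0, lambda i, j: ((i, j), (-1)**i)),        # sigma_{e1}: x -> -x
    ((0, 1),  k0, lambda i, j: ((i, j), (-1)**j)),        # sigma_{e2}: y -> -y
    ((1, -1), k1, lambda i, j: ((j, i), 1)),              # swap x, y
    ((1, 1),  k1, lambda i, j: ((j, i), (-1)**(i + j))),  # (x, y) -> (-y, -x)
]

def add(p, q, c=1):
    r = dict(p)
    for m, v in q.items():
        nv = r.get(m, Fraction(0)) + c * v
        if nv: r[m] = nv
        elif m in r: del r[m]
    return r

def reflect(p, f):
    r = {}
    for (i, j), v in p.items():
        m, s = f(i, j)
        r[m] = r.get(m, Fraction(0)) + s * v
    return {m: v for m, v in r.items() if v}

def deriv(p, axis):
    r = {}
    for (i, j), v in p.items():
        n = (i, j)[axis]
        if n:
            m = (i - 1, j) if axis == 0 else (i, j - 1)
            r[m] = r.get(m, Fraction(0)) + n * v
    return r

def divide_linear(p, a, b):
    # exact division by a*x + b*y; p - p(x sigma) is always divisible
    p, q = dict(p), {}
    while p:
        if a:
            i, j = max(p, key=lambda m: (m[0], m[1]))
            qm, c = (i - 1, j), p[(i, j)] / a
        else:
            i, j = max(p, key=lambda m: (m[1], m[0]))
            qm, c = (i, j - 1), p[(i, j)] / b
        q[qm] = c
        for dm, coef in (((1, 0), a), ((0, 1), b)):
            if coef:
                m2 = (qm[0] + dm[0], qm[1] + dm[1])
                nv = p.get(m2, Fraction(0)) - c * coef
                if nv: p[m2] = nv
                elif m2 in p: del p[m2]
    return q

def dunkl(p, u):
    # D_u p = <u, grad p> + sum_v k_v * <u, v> * (p - p o sigma_v)/<x, v>
    out = add(add({}, deriv(p, 0), u[0]), deriv(p, 1), u[1])
    for v, k, f in ROOTS:
        diff = add(p, reflect(p, f), -1)
        out = add(out, divide_linear(diff, v[0], v[1]),
                  k * (u[0] * v[0] + u[1] * v[1]))
    return out

p = {(3, 1): Fraction(2), (2, 2): Fraction(-1), (1, 0): Fraction(5)}
t, u = (1, 2), (3, -1)
assert dunkl(dunkl(p, u), t) == dunkl(dunkl(p, t), u)   # commutativity
```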
The norm $\|x\|^2$ is invariant for every reflection group. The corresponding invariant operator is called the $h$-Laplacian (the prefix $h$ refers to the weight function $\prod_{v\in R_+}|\langle x, v\rangle|^{\kappa_v}$, discussed later on in this book) and is defined as $\Delta_h = \sum_{i=1}^d \mathcal{D}_i^2$. The definition is independent of the choice of orthogonal basis; this is a consequence of the following formulation (recall that $\varepsilon_i$ denotes the $i$th unit basis vector).
Theorem 6.4.9 For any polynomial $f(x)$,
\[ \sum_{i=1}^d \mathcal{D}_i^2 f(x) = \Delta f(x) + \sum_{v\in R_+}\kappa_v\Big[\frac{2\langle v, \nabla f(x)\rangle}{\langle v, x\rangle} - \|v\|^2\,\frac{f(x) - f(x\sigma_v)}{\langle v, x\rangle^2}\Big]. \]

Proof Write $\partial_i$ for $\partial/\partial x_i$; then
\[ \mathcal{D}_i^2 f(x) = \partial_i^2 f(x) + \sum_{v\in R_+}\kappa_v v_i(\partial_i\rho_v + \rho_v\partial_i)f(x) + \sum_{u,v\in R_+}\kappa_u\kappa_v u_i v_i\,\rho_u\rho_v f(x). \]
The middle term equals
\[ \sum_{v\in R_+}\kappa_v v_i\Big[\frac{2\partial_i f(x)}{\langle v, x\rangle} - v_i\,\frac{f(x) - f(x\sigma_v)}{\langle v, x\rangle^2} - \frac{\langle\varepsilon_i + \varepsilon_i\sigma_v, \nabla f(x\sigma_v)\rangle}{\langle v, x\rangle}\Big]. \]
Summing the first two parts of this term over $1 \le i \le d$ clearly yields the reflection part (the sum over $R_+$) of the claim. Since $v\sigma_v = -v$, the third part sums to
\[ -\sum_{v\in R_+}\kappa_v\langle v + v\sigma_v, \nabla f(x\sigma_v)\rangle\big/\langle v, x\rangle = 0. \]
Finally, on summing over $i$, the double sum that is the last term in $\mathcal{D}_i^2 f(x)$ produces $\sum_{u,v\in R_+}\kappa_u\kappa_v\langle u, v\rangle\rho_u\rho_v f(x)$, which vanishes by Lemma 6.4.6 applied to the bilinear form $B(x, y) = \langle x, y\rangle$.
The product rule for $\mathcal{D}_u$ is more complicated than that for differentiation. We begin with a case involving a first-degree factor.

Proposition 6.4.10 Let $t, u \in \mathbb{R}^d$; then
\[ \mathcal{D}_u(\langle\cdot, t\rangle f)(x) - \langle x, t\rangle\mathcal{D}_u f(x) = \langle u, t\rangle f(x) + 2\sum_{v\in R_+}\kappa_v\,\frac{\langle u, v\rangle\langle t, v\rangle}{\|v\|^2}\,f(x\sigma_v). \]

Proof This is a routine calculation which uses the relation $\langle x, t\rangle - \langle x\sigma_v, t\rangle = 2\langle x, v\rangle\langle v, t\rangle/\|v\|^2$.
Proposition 6.4.11 If $W$ acts irreducibly on $\mathbb{R}^d$ ($\operatorname{span}(R) = \mathbb{R}^d$ and $R$ is indecomposable) then $\mathcal{D}_u\langle x, t\rangle = \big(1 + \frac{2}{d}\sum_{v\in R_+}\kappa_v\big)\langle u, t\rangle$ for any $t, u \in \mathbb{R}^d$.

Proof From the product rule it suffices to consider the bilinear form $B(t, u) = \sum_{v\in R_+}\kappa_v\langle u, v\rangle\langle t, v\rangle/\|v\|^2$. This is clearly symmetric and invariant under $W$, since $B(t\sigma_v, u\sigma_v) = B(t, u)$ for each $v \in R_+$. Suppose that positive values are chosen for the $\kappa_v$; then the form can be diagonalized so that $B(t, u) = \sum_{i=1}^d \lambda_i\langle t, \varepsilon'_i\rangle\langle u, \varepsilon'_i\rangle$ for some orthonormal basis $\{\varepsilon'_i\}$ of $\mathbb{R}^d$ and numbers $\lambda_i$. But the subspace $\{x : B(x, x) = \lambda_1\langle x, x\rangle\}$ is $W$-invariant since it includes $\varepsilon'_1$; by the irreducibility assumption it must be the whole of $\mathbb{R}^d$. Thus $B(t, u) = \lambda_1\langle t, u\rangle$ for any $t, u \in \mathbb{R}^d$. To evaluate the constant, consider $\sum_{i=1}^d B(\varepsilon_i, \varepsilon_i) = d\lambda_1 = \sum_{i=1}^d\sum_{v\in R_+}\kappa_v v_i^2/\|v\|^2 = \sum_{v\in R_+}\kappa_v$.
Here is a more general product rule, which is especially useful when one of the polynomials is $W$-invariant.

Proposition 6.4.12 For polynomials $f, g$ and $u \in \mathbb{R}^d$,
\[ \mathcal{D}_u(fg)(x) = g(x)\mathcal{D}_u f(x) + f(x)\langle u, \nabla g(x)\rangle + \sum_{v\in R_+}\kappa_v f(x\sigma_v)\langle u, v\rangle\,\frac{g(x) - g(x\sigma_v)}{\langle v, x\rangle}. \]

For conciseness in certain proofs, the gradient corresponding to $\mathcal{D}_u$ is useful: let $\nabla_h f(x) = (\mathcal{D}_i f(x))_{i=1}^d$, considered as a row vector. Then $\nabla_h R(w)f(x) = \nabla_h f(xw)\,w^{-1}$ for $w \in W$, and $\mathcal{D}_u f = \langle u, \nabla_h f\rangle$.
6.5 The Intertwining Operator

There are classical fractional integral transforms which map one family of orthogonal polynomials onto another. Some of these transforms can be interpreted as special cases of the intertwining operator for differential–difference operators. Such an object should be a linear map $V$ on polynomials with at least the property that $\mathcal{D}_i Vp(x) = V(\partial/\partial x_i)p(x)$ for each $i$; it turns out that it is uniquely defined by the additional assumptions $V1 = 1$ and $V\mathcal{P}^d_n \subset \mathcal{P}^d_n$, that is, homogeneous polynomials are preserved. It is easy to see that the formal inverse always exists, that is, an operator $T$ such that $T\mathcal{D}_i = (\partial/\partial x_i)T$.

Heuristically, let $Tp(x) = \exp\big(\sum_{i=1}^d x_i\mathcal{D}_i^{(y)}\big)p(y)\big|_{y=0}$. Now apply the formal series to a polynomial, where $\mathcal{D}_i^{(y)}$ acts on $y$; then set $y = 0$, obtaining a polynomial in $x$. Here is the precise definition.
Definition 6.5.1 For $n = 0, 1, 2, \ldots$ define $T$ on $\mathcal{P}^d_n$ by
\[ Tp(x) = \frac{1}{n!}\Big(\sum_{i=1}^d x_i\mathcal{D}_i^{(y)}\Big)^n p(y) \]
for $p \in \mathcal{P}^d_n$. Then $T$ is extended to $\Pi^d$ by linearity.

Of course, the commutativity of the $\mathcal{D}_i$ allows us to write down the $n$th power and to prove the intertwining property.

Proposition 6.5.2 For any polynomial $p$ and $1 \le i \le d$, $\dfrac{\partial}{\partial x_i}Tp(x) = T\mathcal{D}_i p(x)$.

Proof It suffices to let $p \in \mathcal{P}^d_n$. Then
\[ \frac{\partial}{\partial x_i}Tp(x) = \frac{n}{n!}\mathcal{D}_i^{(y)}\Big(\sum_{j=1}^d x_j\mathcal{D}_j^{(y)}\Big)^{n-1}p(y) = T\mathcal{D}_i p(x), \]
because $\mathcal{D}_i^{(y)}$ commutes with multiplication by $x_j$.
Note that the existence of $T$ has been proven for any multiplicity function, but nothing has been shown about the existence of an inverse. The situation where $T$ has a nontrivial kernel was studied by Dunkl, de Jeu and Opdam [1994]. The $\kappa_v$ values which occur are called singular values; they are related to the structure of the group $W$ and involve certain negative rational numbers. We will show that the inverse exists whenever $\kappa_v \ge 0$ (in fact, for values corresponding to integrable weight functions). The construction depends on a detailed analysis of the operator $\sum_{j=1}^d x_j\mathcal{D}_j = \langle x, \nabla_h\rangle$. A certain amount of algebra and group theory will be required, but nothing really more complicated than the concepts of group algebra and some matrix theory. The following is a routine calculation.

Proposition 6.5.3 For any smooth function $f$ on $\mathbb{R}^d$,
\[
\sum_{j=1}^d x_j\mathcal{D}_j f(x) = \sum_{j=1}^d x_j\frac{\partial}{\partial x_j}f(x) + \sum_{v\in R_+}\kappa_v\big[f(x) - f(x\sigma_v)\big].
\]
Note that if $f \in \mathcal{P}_n^d$ then $\sum_{j=1}^d x_j\mathcal{D}_j f(x) = \big(n + \sum_{v\in R_+}\kappa_v\big)f(x) - \sum_{v\in R_+}\kappa_v f(x\sigma_v)$. We will construct functions $c_n(w)$ on $W$ with the property that
\[
\sum_{w\in W} c_n(w)\langle xw, \nabla_h f(xw)\rangle = f(x)
\]
for $n \ge 1$. This requires the use of (matrix) exponentials in the group algebra.

Let $K$ be some extension field of the rational numbers $\mathbb{Q}$ containing at least $\{\kappa_v\}$; then $KW$ is the algebra $\{\sum_w a(w)w : a(w) \in K\}$ with the multiplication rule
\[
\Big(\sum_w a(w)w\Big)\Big(\sum_{w'} b(w')w'\Big) = \sum_w\Big(\sum_z a(z)\,b(z^{-1}w)\Big)w.
\]
The center of $KW$ is the span
of the conjugacy classes (a conjugacy class is a subset $\tau$ of $W$ with the following property: $w, w' \in \tau$ if and only if there exists $z \in W$ such that $w' = z^{-1}wz$). We use $\tau$ as the name both of the subset of $W$ and of the corresponding element $\sum_{w\in\tau}w$ of $KW$. Since $z^{-1}\sigma_v z = \sigma_{vz}$ for $v \in R_+$, the reflections lie in conjugacy classes by themselves. Label these classes $\tau_1, \tau_2, \ldots, \tau_r$ (recall that $r$ is the number of connected components of the Coxeter diagram after all edges with an even label have been removed); further, let $\kappa'_i = \kappa_v$ for any $v$ with $\sigma_v \in \tau_i$, $1 \le i \le r$.

The group algebra $KW$ can be interpreted as an algebra of $|W|\times|W|$ matrices $\sum_w a(w)M(w)$, where $M(w)$ is a permutation matrix with $M(w)_{y,z} = 1$ if $y^{-1}z = w$, else $M(w)_{y,z} = 0$, for $y, z \in W$. On the one hand, a class $\tau_i$ corresponds to $\sum\{M(\sigma_v) : \sigma_v \in \tau_i\}$, which has integer entries; thus all eigenvalues of $\tau_i$ are algebraic integers (that is, zeros of monic polynomials with integer coefficients). On the other hand, $\tau_i$ is a (real) symmetric matrix and can be diagonalized. Thus there is an orthogonal change of basis of the underlying vector space, and $\mathbb{R}^{|W|} = \sum_\mu\oplus E_\mu$, where $\tau_i$ acts as $\mu 1$ on each eigenspace $E_\mu$. Since each $\sigma_v$ commutes with $\tau_i$, the transformed matrices for $\sigma_v$ have a corresponding block structure (and we have $\sigma_v E_\mu = E_\mu$, equality as spaces). Further, since all the group elements commute with $\tau_i$, the projections of the $\sigma_v$ onto $E_\mu$ for $\sigma_v \in \tau_i$ are all conjugate to each other. Hence each projection has the same multiplicities of its eigenvalues, say $1$ with multiplicity $n_0$ and $-1$ with multiplicity $n_1$ (the projections are certainly involutions); thus $\mu(n_0 + n_1) = |\tau_i|(n_0 - n_1)$, taking the trace of the projection of $\tau_i$ to $E_\mu$. This shows that $\mu$ is rational, satisfies $|\mu| \le |\tau_i|$ and is an algebraic integer, and so all the eigenvalues of $\tau_i$ are integers in the interval $[-|\tau_i|, |\tau_i|]$. Note that this is equivalent to the eigenvalues of $|\tau_i|1 - \tau_i$ being integers in $[0, 2|\tau_i|]$.
Definition 6.5.4 For $s \in \mathbb{R}$ and $1 \le i \le r$, the functions $q_i(w; s)$ are defined by
\[
\exp\big[s(|\tau_i|1 - \tau_i)\big] = \sum_{w\in W} q_i(w; s)\,w.
\]

Proposition 6.5.5 For $1 \le i \le r$ and $w \in W$:
(i) $q_i(1; 0) = 1$ and $q_i(w; 0) = 0$ for $w \ne 1$;
(ii) $s < 0$ implies $q_i(w; s) \ge 0$;
(iii) $q_i(w; s)$ is a linear combination of the members of $\{e^{\lambda s} : \lambda$ is an eigenvalue of $|\tau_i|1 - \tau_i\}$;
(iv) $\partial q_i(w; s)/\partial s = \sum_{\sigma\in\tau_i}\big[q_i(w; s) - q_i(w\sigma; s)\big]$;
(v) $\sum_w q_i(w; s) = 1$.
Proof The values $q_i(w; 0)$ are determined by $\exp(0) = 1 \in \mathbb{Q}W$. For $s < 0$, $\sum_w q_i(w; s)w = \exp(s|\tau_i|)\exp(-s\tau_i)$. The matrix for $-s\tau_i$ has all entries nonnegative, and the product of any two conjugacy classes is a nonnegative integer linear combination of other classes; thus $q_i(w; s) \ge 0$. Part (iii) is a standard linear algebra result (and recall that $\tau_i$ corresponds to a self-adjoint matrix). For part (iv),
\begin{align*}
\sum_w\frac{\partial}{\partial s}q_i(w; s)\,w &= \big(\exp[s(|\tau_i|1 - \tau_i)]\big)\,(|\tau_i|1 - \tau_i)\\
&= \sum_{\sigma\in\tau_i}\Big[\sum_w q_i(w; s)\,w - \sum_w q_i(w; s)\,w\sigma\Big]\\
&= \sum_{\sigma\in\tau_i}\sum_w\big[q_i(w; s) - q_i(w\sigma; s)\big]w,
\end{align*}
replacing $w$ by $w\sigma$ in the last sum. Finally, the trivial homomorphism $w \mapsto 1$ extended to $KW$ maps $|\tau_i|1 - \tau_i$ to $0$, and $\exp(0) = 1$.
The classes of reflections are combined in the following.

Definition 6.5.6 The functions $q_\kappa(w; s)$ are given by
\[
\sum_{w\in W} q_\kappa(w; s)\,w = \prod_{i=1}^r\Big(\sum_w q_i(w; s\kappa'_i)\,w\Big).
\]

Since the classes $\tau_i$ are in the center of $KW$, this is a product of commuting factors. Proposition 6.5.5 shows that $q_\kappa(w; s)$ is a linear combination of products of terms like $\exp(\lambda\kappa'_i s)$, where $\lambda$ is an integer and $0 \le \lambda \le 2|\tau_i|$. Also,
\[
\sum_{w\in W} q_\kappa(w; s)\,w = \exp\Big(s\sum_{v\in R_+}\kappa_v(1 - \sigma_v)\Big)
\]
and
\[
\frac{\partial}{\partial s}q_\kappa(w; s) = \sum_{v\in R_+}\kappa_v\big[q_\kappa(w; s) - q_\kappa(w\sigma_v; s)\big].
\]
Definition 6.5.7 For $\kappa_v \ge 0$, $n \ge 1$ and $w \in W$, set
\[
c_n(w) = \int_{-\infty}^0 q_\kappa(w; s)\,e^{ns}\,\mathrm{d}s.
\]

Proposition 6.5.8 With the hypotheses of Definition 6.5.7 we have
(i) $c_n(w) \ge 0$;
(ii) $nc_n(w) + \sum_{v\in R_+}\kappa_v\big[c_n(w) - c_n(w\sigma_v)\big] = \delta_{1,w}$ ($1$ if $w = 1$, else $0$);
(iii) $\sum_{w\in W} c_n(w) = 1/n$.
Proof The positivity follows from that of $q_\kappa(w; s)$. Multiply both sides of the differential relation just before Definition 6.5.7 by $e^{ns}$ and integrate by parts over $-\infty < s \le 0$, obtaining
\[
q_\kappa(w; 0) - \lim_{s\to-\infty}e^{ns}q_\kappa(w; s) - nc_n(w) = \sum_{v\in R_+}\kappa_v\big[c_n(w) - c_n(w\sigma_v)\big].
\]
Since $q_\kappa(w; s)$ is a sum of terms $e^{\lambda\kappa'_i s}$ with $\lambda \ge 0$ and $n \ge 1$, the integral for $c_n$ is absolutely convergent and the above limit is zero. From Proposition 6.5.5, $q_\kappa(w; 0) = 1$ if $w = 1$, else $q_\kappa(w; 0)$ is $0$. Finally,
\[
\sum_w c_n(w) = \int_{-\infty}^0\sum_w q_\kappa(w; s)\,e^{ns}\,\mathrm{d}s = \int_{-\infty}^0 e^{ns}\,\mathrm{d}s = 1/n
\]
(Proposition 6.5.5, part (v)).
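These objects can be computed in closed form for the two-element group. The sketch below (my own illustration, not from the book) takes $W = \mathbb{Z}_2 = \{1, \sigma\}$, for which $q_\kappa(1; s) = (1 + e^{2\kappa s})/2$ and $q_\kappa(\sigma; s) = (1 - e^{2\kappa s})/2$, evaluates the integrals defining $c_n(w)$ exactly, and checks Proposition 6.5.8:

```python
from fractions import Fraction

# Illustrative sketch for W = Z_2 = {1, sigma}: the integrals defining c_n(w)
# reduce to integrals of exponentials over (-inf, 0] and can be done exactly
# for rational kappa.
def c_n(n, kappa):
    """Return (c_n(1), c_n(sigma)) for Z_2 with multiplicity kappa."""
    a = Fraction(1, n)                  # integral of e^{ns}
    b = Fraction(1) / (n + 2 * kappa)   # integral of e^{(n + 2 kappa)s}
    return (a + b) / 2, (a - b) / 2

kappa = Fraction(5, 2)
for n in range(1, 6):
    c1, cs = c_n(n, kappa)
    # Proposition 6.5.8(ii): n c_n(w) + kappa [c_n(w) - c_n(w sigma)] = delta_{1,w}
    assert n * c1 + kappa * (c1 - cs) == 1
    assert n * cs + kappa * (cs - c1) == 0
    # Proposition 6.5.8(i) and (iii)
    assert c1 >= 0 and cs >= 0
    assert c1 + cs == Fraction(1, n)
print("Proposition 6.5.8 verified for Z_2")
```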
Proposition 6.5.9 For $n \ge 1$ let $f \in \mathcal{P}_n^d$; then
\[
\sum_{w\in W} c_n(w)\langle xw, \nabla_h f(xw)\rangle = f(x).
\]

Proof Indeed,
\begin{align*}
\sum_{w\in W} c_n(w)\langle xw, \nabla_h f(xw)\rangle
&= \sum_{w\in W} c_n(w)\Big(nf(xw) + \sum_{v\in R_+}\kappa_v\big[f(xw) - f(xw\sigma_v)\big]\Big)\\
&= \sum_{w\in W} c_n(w)\,nf(xw) + \sum_{v\in R_+}\kappa_v\sum_{w\in W} f(xw)\big[c_n(w) - c_n(w\sigma_v)\big]\\
&= \sum_{w\in W} q_\kappa(w; 0)\,f(xw) = f(x).
\end{align*}
The last line uses (ii) in Proposition 6.5.8.
Corollary 6.5.10 For any polynomial $f \in \Pi^d$,
\[
\sum_{w\in W}\int_{-\infty}^0 q_\kappa(w; s)\langle xw, \nabla_h f(e^s xw)\rangle e^s\,\mathrm{d}s = f(x) - f(0).
\]

Proof Express $f$ as a sum of homogeneous components; if $g$ is homogeneous of degree $n$ then $\nabla_h g(e^s xw) = e^{(n-1)s}\nabla_h g(xw)$.

The above results can be phrased in the language of forms.

Definition 6.5.11 An $\mathbb{R}^d$-valued polynomial $f(x) = (f_i(x))_{i=1}^d$ (with each $f_i \in \Pi^d$) is a $\nabla_h$-closed 1-form if $\mathcal{D}_i f_j = \mathcal{D}_j f_i$ for each $i, j$, or a $\nabla_h$-exact 1-form if there exists $g \in \Pi^d$ such that $f = \nabla_h g$.

The commutativity of the $\mathcal{D}_i$ shows that each $\nabla_h$-exact 1-form is $\nabla_h$-closed. The corollary shows that $g$ is determined up to a constant by $\nabla_h g$. The construction of the intertwining operator depends on the fact that $\nabla_h$-closed 1-forms are $\nabla_h$-exact. This will be proven using the functions $c_n(w)$.
Lemma 6.5.12 If $f$ is a $\nabla_h$-closed 1-form and $u \in \mathbb{R}^d$ then
\[
\mathcal{D}_u\langle x, f(x)\rangle = \Big(1 + \sum_{i=1}^d x_i\frac{\partial}{\partial x_i}\Big)\langle u, f(x)\rangle + \sum_{v\in R_+}\kappa_v\langle u, f(x) - f(x\sigma_v)\sigma_v\rangle.
\]
Proof Apply the product rule (Proposition 6.4.12) to $x_i f_i(x)$ to obtain
\begin{align*}
\mathcal{D}_u\langle x, f(x)\rangle &= \sum_{i=1}^d\big[x_i\,\mathcal{D}_u f_i(x) + u_i f_i(x)\big] + 2\sum_{v\in R_+}\kappa_v\frac{\langle u, v\rangle\langle v, f(x\sigma_v)\rangle}{\|v\|^2}\\
&= \langle u, f(x)\rangle + 2\sum_{v\in R_+}\kappa_v\frac{\langle u, v\rangle\langle v, f(x\sigma_v)\rangle}{\|v\|^2} + \sum_{i,j=1}^d x_i u_j\,\mathcal{D}_i f_j(x)\\
&= \Big(1 + \sum_{i=1}^d x_i\frac{\partial}{\partial x_i}\Big)\langle u, f(x)\rangle + 2\sum_{v\in R_+}\kappa_v\frac{\langle u, v\rangle\langle v, f(x\sigma_v)\rangle}{\|v\|^2}\\
&\qquad + \sum_{v\in R_+}\kappa_v\langle u, f(x) - f(x\sigma_v)\rangle.
\end{align*}
In the third line we replaced $\mathcal{D}_j f_i$ by $\mathcal{D}_i f_j$ (employing the $\nabla_h$-closed hypothesis). The use of Proposition 6.5.3 produces the fourth line. The terms involving $f(x\sigma_v)$ add up to $-\sum_{v\in R_+}\kappa_v\langle u, f(x\sigma_v)\sigma_v\rangle$.
Theorem 6.5.13 Suppose that $f$ is a homogeneous $\nabla_h$-closed 1-form with each $f_i \in \mathcal{P}_n^d$ for some $n \ge 0$; then the homogeneous polynomial defined by
\[
F(x) = \sum_{w\in W} c_{n+1}(w)\langle xw, f(xw)\rangle
\]
satisfies $\nabla_h F = f$, and $f$ is $\nabla_h$-exact.

Proof We will show that $\mathcal{D}_u F(x) = \langle u, f(x)\rangle$ for any $u \in \mathbb{R}^d$. Note that
\[
\mathcal{D}_u\big[R(w)\langle x, f(x)\rangle\big] = R(w)\big[\mathcal{D}_{uw}\langle x, f(x)\rangle\big];
\]
thus
\begin{align*}
&\sum_{w\in W} c_{n+1}(w)\,\mathcal{D}_u\big[\langle xw, f(xw)\rangle\big]\\
&\quad= \sum_{w\in W} c_{n+1}(w)\Big((n+1)\langle uw, f(xw)\rangle + \sum_{v\in R_+}\kappa_v\langle uw, f(xw) - f(xw\sigma_v)\sigma_v\rangle\Big)\\
&\quad= \sum_{w\in W}\langle uw, f(xw)\rangle\Big(c_{n+1}(w)(n+1) + \sum_{v\in R_+}\kappa_v\big[c_{n+1}(w) - c_{n+1}(w\sigma_v)\big]\Big)\\
&\quad= \sum_{w\in W}\langle uw, f(xw)\rangle\,q_\kappa(w; 0) = \langle u, f(x)\rangle.
\end{align*}
The second line comes from the lemma; then, in the sum involving $f(xw\sigma_v)\sigma_v$, the summation variable $w$ is replaced by $w\sigma_v$.
Corollary 6.5.14 For any $\nabla_h$-closed 1-form $f$ the polynomial $F$ defined by
\[
F(x) = \sum_{w\in W}\int_{-\infty}^0 q_\kappa(w; s)\langle xw, f(e^s xw)\rangle e^s\,\mathrm{d}s
\]
satisfies $\nabla_h F = f$, and $f$ is $\nabla_h$-exact.

In the following we recall the notation $\partial_i f(x) = \partial f(x)/\partial x_i$; thus $\partial_i f(xw)$ denotes the $i$th derivative evaluated at $xw$.
Definition 6.5.15 The operators $V_n : \mathcal{P}_n^d \to \mathcal{P}_n^d$ are defined inductively by $V_0(a) = a$ for any constant $a \in \mathbb{R}$ and
\[
V_n f(x) = \sum_{w\in W} c_n(w)\Big(\sum_{i=1}^d (xw)_i\,V_{n-1}\big[\partial_i f(xw)\big]\Big), \quad n \ge 1,\ f \in \mathcal{P}_n^d.
\]
Theorem 6.5.16 For $f \in \mathcal{P}_n^d$, $n \ge 1$ and $1 \le i \le d$, $\mathcal{D}_i V_n f = V_{n-1}\partial_i f$; the operators $V_n$ are uniquely defined by these conditions together with $V_0 1 = 1$.

Proof Clearly $\mathcal{D}_i V_0 a = 0$. Suppose that the statement is true for $n-1$; then $g = (V_{n-1}\partial_i f)_{i=1}^d$ is a $\nabla_h$-closed 1-form, homogeneous of degree $n-1$, because $\mathcal{D}_j g_i = V_{n-2}\partial_j\partial_i f = \mathcal{D}_i g_j$. Hence the polynomial
\[
F(x) = \sum_{w\in W} c_n(w)\langle xw, g(xw)\rangle
\]
is homogeneous of degree $n$ and satisfies $\mathcal{D}_i F = g_i = V_{n-1}\partial_i f$ for each $i$, by Theorem 6.5.13. The uniqueness property is also proved inductively: suppose that $V_{n-1}$ is uniquely determined; then by Proposition 6.5.9 there is a unique homogeneous polynomial $F$ such that $\mathcal{D}_i F = V_{n-1}\partial_i f$ for each $i$.
Definition 6.5.17 The intertwining operator $V$ is defined on $\Pi^d$ as the linear extension of the formal sum $\sum_{n=0}^\infty V_n$; that is, if $f = \sum_{n=0}^m f_n$ with $f_n \in \mathcal{P}_n^d$ then $Vf = \sum_{n=0}^m V_n f_n$.
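For $W = \mathbb{Z}_2$ the operator $V$ is diagonal on monomials, $Vx^n = b_n x^n$, and Definition 6.5.15 collapses to a two-term recursion, since in that case $c_n(1) - c_n(\sigma) = 1/(n + 2\kappa)$ and $c_n(1) + c_n(\sigma) = 1/n$. The following sketch (an illustration, not the book's notation) computes the $b_n$ and verifies the intertwining property of Theorem 6.5.16 coefficientwise:

```python
from fractions import Fraction

# Illustrative sketch: for W = Z_2, V x^n = b_n x^n.  Definition 6.5.15 reads
# V_n f(x) = sum_w c_n(w) (xw) V_{n-1}[f'(xw)], and the factor (xw)^n picks up
# w^n = (+-1)^n, giving b_n = n b_{n-1}/(n + 2 kappa) for odd n, b_n = b_{n-1}
# for even n.
def intertwining_coeffs(N, kappa):
    b = [Fraction(1)]
    for n in range(1, N + 1):
        factor = Fraction(1) / (n + 2 * kappa) if n % 2 == 1 else Fraction(1, n)
        b.append(n * b[n - 1] * factor)
    return b

kappa = Fraction(1, 2)
b = intertwining_coeffs(4, kappa)
# Theorem 6.5.16 coefficientwise: D(V x^n) = n V x^{n-1}, while
# D(b_n x^n) = b_n (n + 2 kappa [n odd]) x^{n-1}.
for n in range(1, 5):
    jump = n + 2 * kappa if n % 2 == 1 else Fraction(n)
    assert b[n] * jump == n * b[n - 1]
print(b)  # values 1, 1/2, 1/2, 3/8, 3/8 for kappa = 1/2
```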
The operator $V$, as expected, commutes with each $R(w)$.

Proposition 6.5.18 Let $f \in \Pi^d$ and $w \in W$; then $VR(w)f = R(w)Vf$.

Proof For any $u \in \mathbb{R}^d$, $\mathcal{D}_u Vf(x) = V\langle u, \nabla f(x)\rangle$. Substitute $uw$ for $u$ and $xw$ for $x$ to obtain, on the one hand,
\begin{align*}
R(w)(\mathcal{D}_{uw}Vf)(x) &= V\langle uw, \nabla f(xw)\rangle\\
&= V\langle uw, \nabla[R(w)f](x)\,w\rangle\\
&= V\langle u, \nabla[R(w)f](x)\rangle\\
&= \mathcal{D}_u V[R(w)f](x);
\end{align*}
on the other hand, $\mathcal{D}_u[R(w)Vf](x) = R(w)(\mathcal{D}_{uw}Vf)(x)$. Therefore
\[
\nabla_h[R(w)Vf] = \nabla_h[VR(w)f],
\]
which shows that $VR(w)f - R(w)Vf$ is a constant, by Proposition 6.5.9. In the homogeneous decomposition of $f$, the constant term is clearly invariant under $V$ and $R(w)$; thus $VR(w)f = R(w)Vf$.
The inductive type of definition used for $V$ suggests that there are boundedness properties which involve similarly defined norms. The following can be considered as an analogue of power series with summable coefficients.

Definition 6.5.19 For $f \in \Pi^d$ let $\|f\|_A = \sum_{n=0}^\infty\|f_n\|_S$, where $f = \sum_{n=0}^\infty f_n$ with each $f_n \in \mathcal{P}_n^d$ and $\|g\|_S = \sup_{\|x\|=1}|g(x)|$. Let $A(B^d)$ be the closure of $\Pi^d$ in the $A$-norm.

Clearly $A(B^d)$ is a commutative Banach algebra under pointwise operations and is contained in $C(B^d)\cap C^\infty(\{x : \|x\| < 1\})$. Also, $\|f\|_S \le \|f\|_A$. We will show that $V$ is bounded in the $A$-norm. The van der Corput–Schaake inequality (see Theorem 4.5.3) motivates the definition of a gradient-type norm associated with $\nabla_h$.
Definition 6.5.20 Suppose that $f \in \mathcal{P}_n^d$; for $n = 0$ let $\|f\|_\kappa = |f|$, and for $n \ge 1$ let
\[
\|f\|_\kappa = \frac{1}{n!}\sup\Big\{\Big|\prod_{i=1}^n\langle y^{(i)}, \nabla_h\rangle f(x)\Big| : y^{(1)}, \ldots, y^{(n)} \in S^{d-1}\Big\}.
\]
Proposition 6.5.21 For $n \ge 0$ and $f \in \mathcal{P}_n^d$, $\|f\|_S \le \|f\|_\kappa$.

Proof Proceeding inductively, suppose that the statement is true for $\mathcal{P}_{n-1}^d$ and $n \ge 1$. Let $x \in S^{d-1}$ and, by Proposition 6.5.9, write
\[
f(x) = \sum_{w\in W} c_n(w)\langle xw, \nabla_h f(xw)\rangle.
\]
Thus $|f(x)| \le \sum_w c_n(w)|\langle xw, \nabla_h f(xw)\rangle|$, with $c_n(w) \ge 0$ by Proposition 6.5.8. For each $w$, $|\langle xw, \nabla_h f(xw)\rangle| \le \sup_{\|u\|=1}|\langle u, \nabla_h f(xw)\rangle|$. But, for given $u$ and $w$, let $g(x) = \langle u, \nabla_h f(xw)\rangle = \langle uw^{-1}, \nabla_h R(w)f(x)\rangle \in \mathcal{P}_{n-1}^d$; the inductive hypothesis implies that
\begin{align*}
|g(x)| &\le \frac{1}{(n-1)!}\sup\Big\{\Big|\prod_{i=1}^{n-1}\langle y^{(i)}, \nabla_h\rangle g(x)\Big| : y^{(1)}, \ldots, y^{(n-1)} \in S^{d-1}\Big\}\\
&= \frac{1}{(n-1)!}\sup\Big\{\Big|\prod_{i=1}^{n-1}\langle y^{(i)}, \nabla_h\rangle\langle uw^{-1}, \nabla_h\rangle R(w)f(x)\Big| : y^{(1)}, \ldots, y^{(n-1)} \in S^{d-1}\Big\}\\
&\le n\|R(w)f\|_\kappa = n\|f\|_\kappa.
\end{align*}
Note that $\|R(w)f\|_\kappa = \|f\|_\kappa$ for any $w \in W$, because $\nabla_h R(w)f(x) = R(w)\nabla_h f(x)\,w^{-1}$. Thus $|f(x)| \le \sum_w c_n(w)\,n\|f\|_\kappa$, and $\sum_w c_n(w) = 1/n$ by Proposition 6.5.8.
The norm $\|f\|_*$ was defined in van der Corput and Schaake [1935]; it is the same as $\|f\|_\kappa$ but with all parameters $\kappa_v = 0$.

Proposition 6.5.22 For $n \ge 0$ and $f \in \mathcal{P}_n^d$, $\|Vf\|_\kappa = \|f\|_*$.

Proof For any $y^{(1)}, \ldots, y^{(n)} \in S^{d-1}$,
\[
\prod_{i=1}^n\langle y^{(i)}, \nabla_h\rangle Vf(x) = V\prod_{i=1}^n\langle y^{(i)}, \nabla\rangle f(x) = \prod_{i=1}^n\langle y^{(i)}, \nabla\rangle f(x);
\]
the last equation follows from $V1 = 1$. Taking the supremum over the $y^{(i)}$ shows that $\|Vf\|_\kappa = \|f\|_*$.
Corollary 6.5.23 For $f \in \Pi^d$ and $x \in B^d$, we have $|Vf(x)| \le \|f\|_A$ and $\|Vf\|_A \le \|f\|_A$.

Proof Write $f = \sum_{n=0}^\infty f_n$ with $f_n \in \mathcal{P}_n^d$; then $\|Vf_n\|_\kappa = \|f_n\|_* = \|f_n\|_S$ by Theorem 4.5.3, and $|Vf_n(x)| \le \|Vf_n\|_\kappa$ for $\|x\| \le 1$ and each $n \ge 0$. Thus $\|Vf_n\|_S \le \|f_n\|_S$ and $\|Vf\|_A \le \|f\|_A$.

It was originally conjectured that a stronger bound held: namely, $|Vf(x)| \le \sup\{|f(y)| : y \in \mathrm{co}(xW)\}$, where $\mathrm{co}(xW)$ is the convex hull of $\{xw : w \in W\}$. This was later proven by Rösler [1999].
6.6 The κ-Analogue of the Exponential

For $x, y \in \mathbb{C}^d$ the function $\exp(\langle x, y\rangle)$ has important applications, such as its use in Fourier transforms and as a reproducing kernel for polynomials (Taylor's theorem in disguise): for $f \in \Pi^d$, the formal series $\exp\big(\sum_{i=1}^d x_i\,\partial/\partial y_i\big)f(y)\big|_{y=0} = f(x)$. Also $(\partial/\partial x_i)\exp(\langle x, y\rangle) = y_i\exp(\langle x, y\rangle)$ for each $i$. There is an analogous function for the differential–difference operators. In this section $\kappa_v \ge 0$ is assumed throughout. A superscript such as $(x)$ on an operator refers to the $\mathbb{R}^d$ variable on which it acts.

Definition 6.6.1 For $x, y \in \mathbb{R}^d$ let $K(x, y) = V^{(x)}\exp(\langle x, y\rangle)$ and $K_n(x, y) = (1/n!)\,V^{(x)}\big(\langle x, y\rangle^n\big)$ for $n \ge 0$.

It is clear that for each $y \in \mathbb{R}^d$ the function $f_y(x) = \exp(\langle x, y\rangle)$ is in $A(B^d)$ and that $\|f_y\|_A = \exp(\|y\|)$; thus the sum $K(x, y) = \sum_{n=0}^\infty K_n(x, y)$ converges absolutely, and uniformly on closed bounded sets. Here are the important properties of $K_n(x, y)$. Of course, $K_0(x, y) = 1$.
Proposition 6.6.2 For $n \ge 1$ and $x, y \in \mathbb{R}^d$:
(i) $K_n(x, y) = \sum_{w\in W} c_n(w)\langle xw, y\rangle K_{n-1}(xw, y)$;
(ii) $|K_n(x, y)| \le \frac{1}{n!}\max_{w\in W}|\langle xw, y\rangle|^n$;
(iii) $K_n(xw, yw) = K_n(x, y)$ for any $w \in W$;
(iv) $K_n(y, x) = K_n(x, y)$;
(v) $\mathcal{D}_u^{(x)}K_n(x, y) = \langle u, y\rangle K_{n-1}(x, y)$ for any $u \in \mathbb{R}^d$.
Proof Fix $y \in \mathbb{R}^d$ and let $g_y(x) = \langle x, y\rangle^n$. By the construction of $V$ we have
\[
K_n(x, y) = \frac{1}{n!}Vg_y(x) = \frac{1}{n!}\sum_{w\in W} c_n(w)\sum_{i=1}^d (xw)_i\,V\partial_i g_y(xw)
\]
and $\partial_i g_y(x) = ny_i\langle x, y\rangle^{n-1}$; thus $V\partial_i g_y(xw) = n(n-1)!\,y_i K_{n-1}(xw, y)$, implying part (i). Part (ii) is proved inductively using part (i) and the facts that $c_n(w) \ge 0$ and $\sum_w c_n(w) = 1/n$.

For part (iii),
\[
K_n(xw, yw) = \frac{1}{n!}R(w)Vg_{yw}(x) = \frac{1}{n!}VR(w)g_{yw}(x)
\]
and $R(w)g_{yw}(x) = \langle xw, yw\rangle^n = \langle x, y\rangle^n$, hence $K_n(xw, yw) = K_n(x, y)$.

Suppose that $K_{n-1}(y, x) = K_{n-1}(x, y)$ for all $x, y$; then, by (i) and (iii),
\begin{align*}
K_n(y, x) &= \sum_{w\in W} c_n(w)\langle yw, x\rangle K_{n-1}(yw, x)\\
&= \sum_{w\in W} c_n(w)\langle xw^{-1}, y\rangle K_{n-1}(xw^{-1}, y)\\
&= \sum_{w\in W} c_n(w^{-1})\langle xw, y\rangle K_{n-1}(xw, y)\\
&= K_n(x, y).
\end{align*}
Note that $\sum_w q_\kappa(w; s)w = \exp\big(s\sum_{v\in R_+}\kappa_v(1 - \sigma_v)\big)$ is in the center of the group algebra $KW$ and is mapped onto itself under the transformation $w \to w^{-1}$ (because each $\sigma_v^{-1} = \sigma_v$); hence $q_\kappa(w^{-1}; s) = q_\kappa(w; s)$ for each $w, s$, which implies that $c_n(w^{-1}) = c_n(w)$.

Let $u \in \mathbb{R}^d$; then
\[
\mathcal{D}_u^{(x)}K_n(x, y) = \frac{1}{n!}\mathcal{D}_u Vg_y(x) = \frac{1}{n!}V\langle u, \nabla\rangle g_y(x)
\]
by the intertwining property of $V$, so that
\[
\mathcal{D}_u^{(x)}K_n(x, y) = \frac{n}{n!}\langle u, y\rangle V^{(x)}\big(\langle x, y\rangle^{n-1}\big) = \langle u, y\rangle K_{n-1}(x, y).
\]
Corollary 6.6.3 For $x, y, u \in \mathbb{R}^d$ the following hold:
(i) $|K(x, y)| \le \exp(\max_{w\in W}|\langle xw, y\rangle|)$;
(ii) $K(x, 0) = K(0, y) = 1$;
(iii) $\mathcal{D}_u^{(x)}K(x, y) = \langle u, y\rangle K(x, y)$.
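For $W = \mathbb{Z}_2$, where $Vx^n = b_n x^n$, one has $K_n(x, y) = b_n(xy)^n/n!$, and part (iii) of the corollary can be checked termwise. A small sketch (illustrative only; `b_coeffs` is the rank-one specialization of Definition 6.5.15, not a function from the book):

```python
from fractions import Fraction
import math

# Illustrative sketch for W = Z_2: K_n(x, y) = b_n (x y)^n / n! with V x^n = b_n x^n.
# Corollary 6.6.3(iii), D^{(x)} K(x, y) = y K(x, y), holds termwise as
# D^{(x)} K_n(x, y) = y K_{n-1}(x, y).
def b_coeffs(N, kappa):
    b = [Fraction(1)]
    for n in range(1, N + 1):
        b.append(b[n - 1] * n / (n + 2 * kappa) if n % 2 == 1 else b[n - 1])
    return b

kappa = Fraction(2)
b = b_coeffs(6, kappa)
for n in range(1, 7):
    # D multiplies the coefficient of x^n by (n + 2 kappa) for odd n, by n for even n
    jump = n + 2 * kappa if n % 2 == 1 else Fraction(n)
    lhs = b[n] / math.factorial(n) * jump      # coefficient of x^{n-1} y^n in D K_n
    rhs = b[n - 1] / math.factorial(n - 1)     # coefficient of x^{n-1} y^{n-1} in K_{n-1}
    assert lhs == rhs                          # the remaining factor is exactly y
print("termwise Dunkl-kernel identity verified for Z_2")
```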
Observe that part (iii) of Proposition 6.6.2 can also be given an inductive proof:
\begin{align*}
K_n(xw, yw) &= \sum_{z\in W} c_n(z)\langle xwz, yw\rangle K_{n-1}(xwz, yw)\\
&= \sum_{z\in W} c_n(z)\langle xwzw^{-1}, y\rangle K_{n-1}(xwzw^{-1}, y)\\
&= \sum_{z\in W} c_n(w^{-1}zw)\langle xz, y\rangle K_{n-1}(xz, y),
\end{align*}
and $c_n(w^{-1}zw) = c_n(z)$ because $\sum_w q_\kappa(w; s)w$ is in the center of $KW$.
The projection of $K(x, y)$ onto invariants is called the κ-Bessel function for the particular root system and multiplicity function.

Definition 6.6.4 For $x, y \in \mathbb{R}^d$ let
\[
K^W(x, y) = \frac{1}{|W|}\sum_{w\in W} K(x, yw).
\]

Thus $K^W(x, 0) = 1$ and $K^W(xw, y) = K^W(x, yw) = K^W(x, y)$ for each $w \in W$. The name Bessel stems from the particular case $W = \mathbb{Z}_2$, where $K^W$ is expressed as a classical Bessel function.
6.7 Invariant Differential Operators

This section deals with the large class of differential–difference operators of arbitrary degree as well as with the method for finding certain commutative sets of differential operators which are related to the multiplicity function $\kappa_v$. Since the operators $\mathcal{D}_u$ commute, they generate a commutative algebra of polynomials in $\{\mathcal{D}_i : 1 \le i \le d\}$ (see Definition 6.4.2 for $\mathcal{D}_u$ and $\mathcal{D}_i$).

Definition 6.7.1 For $p \in \Pi^d$ set $p(\mathcal{D}) = p(\mathcal{D}_1, \mathcal{D}_2, \ldots, \mathcal{D}_d)$; that is, each $x_i$ is replaced by $\mathcal{D}_i$.

For example, the polynomial $\|x\|^2$ corresponds to $\Delta_h$. The action of $W$ on these operators is dual to its action on polynomials.
Proposition 6.7.2 Let $p \in \Pi^d$ and $w \in W$; then
\[
p(\mathcal{D})R(w) = R(w)\big[R(w^{-1})p\big](\mathcal{D}).
\]
If $p \in \Pi^W$, the algebra of invariant polynomials, then $p(\mathcal{D})R(w) = R(w)p(\mathcal{D})$.

Proof Each polynomial is a sum of terms such as $q(x) = \prod_{i=1}^n\langle u^{(i)}, x\rangle$, with each $u^{(i)} \in \mathbb{R}^d$. By Proposition 6.4.3, $q(\mathcal{D})R(w) = R(w)\prod_{i=1}^n\langle u^{(i)}w, \nabla_h\rangle$. Also, $R(w^{-1})q(x) = \prod_{i=1}^n\langle u^{(i)}, xw^{-1}\rangle = \prod_{i=1}^n\langle u^{(i)}w, x\rangle$. This shows that $q(\mathcal{D})R(w) = R(w)\big[R(w^{-1})q\big](\mathcal{D})$. If a polynomial $p$ is invariant then $R(w^{-1})p = p$, which implies that $p(\mathcal{D})R(w) = R(w)p(\mathcal{D})$.

The effect of $p(\mathcal{D})$ on $K(x, y)$ is easy to calculate:
\[
p\big(\mathcal{D}^{(x)}\big)K(x, y) = p(y)K(x, y), \quad x, y \in \mathbb{R}^d.
\]
The invariant polynomials for several reflection groups were described in Subsections 4.3.1–4.3.6. The motivation for the rest of this section comes from the observation that $\Delta_h$ restricted to $\Pi^W$ acts as a differential operator:
\[
\Delta_h p(x) = \Delta p(x) + \sum_{v\in R_+}\kappa_v\frac{2\langle v, \nabla p(x)\rangle}{\langle v, x\rangle}, \quad p \in \Pi^W.
\]
It turns out that this holds for any operator $q(\mathcal{D})$ with $q \in \Pi^W$. However, explicit forms such as that for $\Delta_h$ are not easily obtainable (one reason is that there are many different invariants for different groups; only $\|x\|^2$ is invariant for every group). Heckman [1991a] proved this differential-operator result. Some background is needed to present the theorem. The basic problem is that of characterizing differential operators among all linear operators on polynomials.

Definition 6.7.3 Let $\mathcal{L}(\Pi^d)$ be the set of linear transformations of $\Pi^d$ into $\Pi^d$ (endomorphisms). For $n \ge 0$ let
\[
\mathcal{L}_{\partial,n}(\Pi^d) = \mathrm{span}\{a(x)\partial^\alpha : a \in \Pi^d,\ \alpha \in \mathbb{N}_0^d,\ |\alpha| \le n\},
\]
where $\partial^\alpha = \partial_1^{\alpha_1}\cdots\partial_d^{\alpha_d}$; this is the space of differential operators of degree at most $n$ with polynomial coefficients. Further, let $\mathcal{L}_\partial(\Pi^d) = \bigcup_{n=0}^\infty\mathcal{L}_{\partial,n}(\Pi^d)$.
Definition 6.7.4 For $S, T \in \mathcal{L}(\Pi^d)$ define an operator $\mathrm{ad}(S)T \in \mathcal{L}(\Pi^d)$ by $\mathrm{ad}(S)Tf = (ST - TS)f$ for $f \in \Pi^d$.

The name ad refers to the adjoint map from Lie algebra theory. The elements of $\mathcal{L}_{\partial,0}$ are just multiplier operators on $\Pi^d$: $f \mapsto pf$ for a polynomial $p \in \Pi^d$; we use the same symbol $p$ for this operator. The characterization of $\mathcal{L}_\partial(\Pi^d)$ involves the action of $\mathrm{ad}(p)$ for arbitrary $p \in \Pi^d$.

Proposition 6.7.5 If $T \in \mathcal{L}(\Pi^d)$ and $\mathrm{ad}(x_i)T = 0$ for each $i$ then $T \in \mathcal{L}_{\partial,0}(\Pi^d)$ and $Tf = (T1)f$ for each $f \in \Pi^d$.

Proof By the hypothesis, $x_iTf(x) = T(x_if)(x)$ for each $i$ and $f \in \Pi^d$. The set $E = \{f \in \Pi^d : Tf = (T1)f\}$ is a linear space, contains $1$ and is closed under multiplication by each $x_i$; hence $E = \Pi^d$.
The usual product rule holds for ad: for $S, T_1, T_2 \in \mathcal{L}(\Pi^d)$, a trivial verification shows that
\[
\mathrm{ad}(S)T_1T_2 = [\mathrm{ad}(S)T_1]T_2 + T_1[\mathrm{ad}(S)T_2].
\]
There is a commutation fact:
\[
S_1S_2 = S_2S_1 \ \text{implies that}\ \mathrm{ad}(S_1)\,\mathrm{ad}(S_2) = \mathrm{ad}(S_2)\,\mathrm{ad}(S_1).
\]
This holds because both sides of the second equation applied to $T$ equal $S_1S_2T - S_1TS_2 - S_2TS_1 + TS_2S_1$. In particular, $\mathrm{ad}(p)\,\mathrm{ad}(q) = \mathrm{ad}(q)\,\mathrm{ad}(p)$ for multipliers $p, q \in \Pi^d$. For any $p \in \Pi^d$, $\mathrm{ad}(p)\partial_i = -(\partial_i p) \in \mathcal{L}_{\partial,0}(\Pi^d)$, a multiplier.
It is not as yet proven that $\mathcal{L}_\partial(\Pi^d)$ is an algebra; this is a consequence of the following.

Proposition 6.7.6 Suppose that $m, n \ge 0$ and $\alpha, \beta \in \mathbb{N}_0^d$ with $|\alpha| = m$, $|\beta| = n$, and $p, q \in \Pi^d$; then $p\partial^\alpha q\partial^\beta = pq\,\partial^{\alpha+\beta} + T$, with $T \in \mathcal{L}_{\partial,m+n-1}(\Pi^d)$.

Proof Proceed inductively with the degree-1 factors of $\partial^\alpha$; a typical step uses $\partial_i qS = q\partial_i S + (\partial_i q)S$ for some operator $S$.

Corollary 6.7.7 Suppose that $n \ge 1$ and $T \in \mathcal{L}_{\partial,n}(\Pi^d)$; then $\mathrm{ad}(p)T \in \mathcal{L}_{\partial,n-1}(\Pi^d)$ and $\mathrm{ad}(p)^{n+1}T = 0$, for any $p \in \Pi^d$.

In fact Corollary 6.7.7 contains the characterization announced previously. The effect of $\mathrm{ad}(x_i)$ on operators of the form $q\partial^\alpha$ can be calculated explicitly.
Lemma 6.7.8 Suppose that $\alpha \in \mathbb{N}_0^d$, $q \in \Pi^d$ and $1 \le i \le d$; then
\[
\mathrm{ad}(x_i)\,q\partial^\alpha = -\alpha_i\,q\,\partial_1^{\alpha_1}\cdots\partial_i^{\alpha_i-1}\cdots\partial_d^{\alpha_d}.
\]

Proof Since $x_i$ commutes with $q$ and with $\partial_j^{\alpha_j}$ for $j \ne i$, it suffices to find $\mathrm{ad}(x_i)\partial_i^{\alpha_i}$. Indeed, by the Leibniz rule,
\[
x_i\partial_i^{\alpha_i}f - \partial_i^{\alpha_i}(x_if) = -\sum_{j=1}^{\alpha_i}\binom{\alpha_i}{j}\partial_i^j(x_i)\,\partial_i^{\alpha_i-j}(f) = -\alpha_i\,\partial_i^{\alpha_i-1}(f)
\]
for any $f \in \Pi^d$.
Theorem 6.7.9 For $n \ge 1$ and $T \in \mathcal{L}(\Pi^d)$ the following are equivalent:
(i) $T \in \mathcal{L}_{\partial,n}(\Pi^d)$;
(ii) $\mathrm{ad}(p)^{n+1}T = 0$ for each $p \in \Pi^d$;
(iii) $\mathrm{ad}(x_1)^{\beta_1}\mathrm{ad}(x_2)^{\beta_2}\cdots\mathrm{ad}(x_d)^{\beta_d}T = 0$ for each $\beta \in \mathbb{N}_0^d$ with $|\beta| = n+1$.

Proof We will show that (i) $\Rightarrow$ (ii) $\Rightarrow$ (iii) $\Rightarrow$ (i). The part (i) $\Rightarrow$ (ii) is already proven. Suppose now that (ii) holds; for any $y \in \mathbb{R}^d$ let $p = \sum_{i=1}^d y_ix_i$. The multinomial theorem applies to the expansion of $\big(\sum_{i=1}^d y_i\,\mathrm{ad}(x_i)\big)^{n+1}$ because the terms commute with each other. Thus
\[
0 = \mathrm{ad}(p)^{n+1}T = \sum_{|\beta|=n+1}\binom{|\beta|}{\beta}y^\beta\,\mathrm{ad}(x_1)^{\beta_1}\mathrm{ad}(x_2)^{\beta_2}\cdots\mathrm{ad}(x_d)^{\beta_d}T.
\]
Considering this as a polynomial identity in the variable $y$, we deduce that each coefficient is zero. Thus (ii) $\Rightarrow$ (iii).
Suppose that (iii) holds. Proposition 6.7.5 shows that (i) is true for $n = 0$. Proceed by induction and suppose that (iii) implies (i) for $n-1$. Note that Lemma 6.7.8 shows that there is a biorthogonality relation
\[
\mathrm{ad}(x_1)^{\beta_1}\mathrm{ad}(x_2)^{\beta_2}\cdots\mathrm{ad}(x_d)^{\beta_d}\big(q\partial^\alpha\big) = \prod_{i=1}^d(-\alpha_i)_{\beta_i}\,q\,\partial^{\alpha-\beta},
\]
where $\partial^{\alpha-\beta} = \prod_{i=1}^d\partial_i^{\alpha_i-\beta_i}$ and negative powers of $\partial_i$ are considered to be zero (of course, the Pochhammer symbol $(-\alpha_i)_{\beta_i} = 0$ for $\beta_i > \alpha_i$). For each $\beta \in \mathbb{N}_0^d$ with $|\beta| = n$, let
\[
S_\beta = (-1)^n\prod_{i=1}^d(\beta_i!)^{-1}\,\mathrm{ad}(x_1)^{\beta_1}\mathrm{ad}(x_2)^{\beta_2}\cdots\mathrm{ad}(x_d)^{\beta_d}T.
\]
By hypothesis, $\mathrm{ad}(x_i)S_\beta = 0$ for each $i$ and, by Proposition 6.7.5, $S_\beta$ is a multiplier, say $q_\beta \in \Pi^d$. Now let $T_n = \sum_{|\beta|=n}q_\beta\partial^\beta$. By construction and the biorthogonality relation, we have
\[
\mathrm{ad}(x_1)^{\beta_1}\mathrm{ad}(x_2)^{\beta_2}\cdots\mathrm{ad}(x_d)^{\beta_d}(T - T_n) = 0
\]
for any $\beta \in \mathbb{N}_0^d$ with $|\beta| = n$. By the inductive hypothesis, $T - T_n \in \mathcal{L}_{\partial,n-1}(\Pi^d)$; thus $T \in \mathcal{L}_{\partial,n}(\Pi^d)$ and (i) is true.
We now apply this theorem to polynomials in $\{q_j : 1 \le j \le d\}$, the fundamental invariant polynomials of $W$ (see Chevalley's theorem 6.3.2). Let $\mathcal{L}_\partial(\Pi^W)$ denote the algebra generated by $\{\partial/\partial q_i : 1 \le i \le d\}\cup\Pi^W$, where the latter are considered as multipliers.

Theorem 6.7.10 Let $p \in \Pi^W$; then the restriction of $p(\mathcal{D})$ to $\Pi^W$ coincides with an operator $D_p \in \mathcal{L}_\partial(\Pi^W)$. The correspondence $p \mapsto D_p$ is an algebra homomorphism.
Proof By the previous theorem we need to show that there is a number $n$ such that $\mathrm{ad}(g)^{n+1}p(\mathcal{D})f = 0$ for each $f, g \in \Pi^W$. Let $n$ be the degree of $p$ as a polynomial in $x$; then $p$ is a sum of terms such as $\prod_{i=1}^m\langle u^{(i)}, x\rangle$ with $u^{(i)} \in \mathbb{R}^d$ and $m \le n$. By the multinomial Leibniz rule,
\[
\mathrm{ad}(g)^{n+1}\prod_{i=1}^m\langle u^{(i)}, \nabla_h\rangle = \sum\Big\{\binom{n+1}{\beta}\prod_{i=1}^m\big[\mathrm{ad}(g)^{\beta_i}\langle u^{(i)}, \nabla_h\rangle\big] : \beta \in \mathbb{N}_0^m,\ |\beta| = n+1\Big\}.
\]
For any $\beta$ in the sum there must be at least one index $i$ with $\beta_i \ge 2$, since $n+1 > m$. But if $g \in \Pi^W$ and $q \in \Pi^d$ then $\langle u, \nabla_h\rangle(gq) = \mathcal{D}_u(gq) = g\mathcal{D}_u q + q\langle u, \nabla\rangle g$; that is, $\mathrm{ad}(g)\langle u, \nabla_h\rangle$ is the same as multiplication by $-\langle u, \nabla\rangle g$, and so $\mathrm{ad}(g)^2\langle u, \nabla_h\rangle = 0$. Thus $D_p \in \mathcal{L}_\partial(\Pi^W)$. Furthermore, if $p_1, p_2 \in \Pi^W$ then the restriction of $p_1(\mathcal{D})p_2(\mathcal{D})$ to $\Pi^W$ is clearly the product of the respective restrictions, so that $D_{p_1p_2} = D_{p_1}D_{p_2}$.
This shows that $D_p$ is a sum of terms such as
\[
f(q_1, q_2, \ldots, q_d)\prod_{i=1}^d\Big(\frac{\partial}{\partial q_i}\Big)^{\alpha_i}
\]
with $\alpha \in \mathbb{N}_0^d$ and $|\alpha| \le n$. This can be expressed in terms of $x$. Recall the Jacobian matrix $J(x) = \big(\partial q_i(x)/\partial x_j\big)_{i,j=1}^d$ with $\det J(x) = ca_R(x)$, a scalar multiple of the alternating polynomial $a_R(x)$ (see Theorem 6.3.5). The inverse of $J(x)$ exists for each $x$ such that $\langle x, v\rangle \ne 0$ for each $v \in R_+$, and the entries are rational functions whose denominators are products of factors $\langle x, v\rangle$. Since $\partial/\partial q_j = \sum_{i=1}^d\big(J(x)^{-1}\big)_{i,j}\partial/\partial x_i$, we see that each $D_p$ can be expressed as a differential operator on $x \in \mathbb{R}^d$ with rational coefficients (and singularities on $\{x : \prod_{v\in R_+}\langle x, v\rangle = 0\}$, the union of the reflecting hyperplanes). This is obvious for $\Delta_h$.

The κ-Bessel function is an eigenfunction of each $D_p$, $p \in \Pi^W$; this implies that the correspondence $p \mapsto D_p$ is one to one.
Proposition 6.7.11 For $x, y \in \mathbb{R}^d$, if $p \in \Pi^W$ then
\[
D_p^{(x)}K^W(x, y) = p(y)K^W(x, y).
\]

Proof It is clear that $K^W$ is the absolutely convergent sum over $n \ge 0$ of the sequence $|W|^{-1}\sum_{w\in W}K_n(x, yw)$ and that $p\big(\mathcal{D}^{(x)}\big)$ can be applied term by term. Indeed,
\begin{align*}
D_p^{(x)}K^W(x, y) &= p\big(\mathcal{D}^{(x)}\big)K^W(x, y) = \frac{1}{|W|}\sum_{w\in W}p\big(\mathcal{D}^{(x)}\big)K(x, yw)\\
&= \frac{1}{|W|}\sum_{w\in W}p(yw)K(x, yw) = \frac{1}{|W|}p(y)\sum_{w\in W}K(x, yw)\\
&= p(y)K^W(x, y).
\end{align*}
The third equals sign uses (iii) in Corollary 6.6.3.
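In the rank-one case the proposition can be confirmed by hand. The sketch below (illustrative; `b_coeffs` reproduces the $\mathbb{Z}_2$ intertwining coefficients $Vx^n = b_n x^n$, a hypothetical helper rather than the book's notation) checks $D_p^{(x)}K^W = p(y)K^W$ termwise for $p(x) = x^2$, where $K^W(x, y) = \sum_m b_{2m}(xy)^{2m}/(2m)!$:

```python
from fractions import Fraction
import math

# Illustrative sketch for W = Z_2 with p(x) = x^2, so p(D) = D^2: Proposition
# 6.7.11 says D^2 in x applied to K^W(x, y) = sum_m b_{2m} (xy)^{2m}/(2m)!
# gives y^2 K^W(x, y).
def b_coeffs(N, kappa):
    b = [Fraction(1)]
    for n in range(1, N + 1):
        b.append(b[n - 1] * n / (n + 2 * kappa) if n % 2 == 1 else b[n - 1])
    return b

kappa = Fraction(3, 4)
b = b_coeffs(8, kappa)
for m in range(1, 5):
    # D^2 x^{2m} = 2m (2m - 1 + 2 kappa) x^{2m-2} for W = Z_2
    lhs = b[2 * m] * 2 * m * (2 * m - 1 + 2 * kappa) / math.factorial(2 * m)
    rhs = b[2 * m - 2] / math.factorial(2 * m - 2)   # coefficient in y^2 K^W
    assert lhs == rhs
print("D_p K^W = p(y) K^W verified termwise for p(x) = x^2")
```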
Opdam [1991] established the existence of $K^W$ as the unique real-entire joint eigenfunction of all the invariant differential operators commuting with $D_p$ for $p(x) = \|x\|^2$, that is, with the differential part of $\Delta_h$, the eigenvalues being those specified in Proposition 6.7.11.
6.8 Notes

The fundamental characterization of reflection groups in terms of generators and relations was established by Coxeter [1935]. Chevalley [1955] proved that the ring of invariants of a finite reflection group is a polynomial ring (that is, it is isomorphic to the ordinary algebra of polynomials in $d$ variables). It is a heuristic feeling of the authors that this is the underlying reason why reflection groups and orthogonal polynomials go together. For background information, geometric applications and proofs of theorems about reflection groups omitted in the present book, the reader is referred to Coxeter and Moser [1965], Grove and Benson [1985] and Humphreys [1990].

Historically, one of the first results that involved reflection-invariant operators and spherical harmonics appeared in Laporte [1948] (our Theorem 6.2.6 for the case $\mathbb{R}^3$).

Differential–difference operators, intertwining operators and κ-exponential functions were developed in a series of papers by Dunkl [1988, 1989a, 1990, 1991]. The construction of the differential–difference operators came as a result of several years of research aimed at understanding the structure of non-invariant orthogonal polynomials with respect to invariant weight functions. Differential operators do not suffice, owing to their singularities on the reflecting hyperplanes. Subsequently to Dunkl's 1989 paper, Heckman [1991a] constructed operators of a trigonometric type, associated with Weyl groups. There are similar operators in which elliptic functions replace the linear denominators (Buchstaber, Felder and Veselov [1994]). Rösler [1999] showed that the intertwining operator can be expressed as an integral transform with positive kernel; this was an existence proof, so the finding of explicit forms is, as of the time of writing, an open problem. In some simple cases, however, an explicit form is known; see the next chapter and its notes.
7
Spherical Harmonics Associated with Reflection Groups

In this chapter we study orthogonal polynomials on spheres associated with weight functions that are invariant under reflection groups. A theory of homogeneous orthogonal polynomials in this setting, called h-harmonics, can be developed in almost complete analogy to that for the ordinary spherical harmonics. The results include several inner products under which orthogonality holds, explicit expressions for the reproducing kernels given in terms of the intertwining operator, and an integration formula which shows that the average of the intertwining operator over the sphere removes the action of the operator. As examples, we discuss the h-harmonics associated with $\mathbb{Z}_2^d$ and those in two variables associated with dihedral groups. Finally, we discuss the analogues of the Fourier transform associated with reflection groups.
7.1 h-Harmonic Polynomials

We use the notation of the previous chapter and start with definitions.

Definition 7.1.1 Let $R_+$ be the system of positive roots of a reflection group $W$ acting in $\mathbb{R}^d$. Let $\kappa$ be a multiplicity function as in Definition 6.4.1. Associated with $R_+$ and $\kappa$ we define the invariant weight functions
\[
h_\kappa(x) := \prod_{v\in R_+}|\langle v, x\rangle|^{\kappa_v}, \quad x \in \mathbb{R}^d.
\]
Throughout this chapter we use the notation $\gamma_\kappa$ and $\lambda_\kappa$ as follows:
\[
\gamma_\kappa := \sum_{v\in R_+}\kappa_v \quad\text{and}\quad \lambda_\kappa := \gamma_\kappa + \frac{d-2}{2}. \tag{7.1.1}
\]
Sometimes we will write $\gamma$ for $\gamma_\kappa$ and $\lambda$ for $\lambda_\kappa$, particularly in the proof sections when no confusion is likely. Note that the weight function $h_\kappa$ is positively homogeneous of degree $\gamma_\kappa$.

Recall that the operators $\mathcal{D}_i$, $1 \le i \le d$, from Definition 6.4.2 commute, and that the h-Laplacian is defined by $\Delta_h = \sum_{i=1}^d\mathcal{D}_i^2$.

Definition 7.1.2 A polynomial $P$ is h-harmonic if $\Delta_h P = 0$.
We split the differential and difference parts of $\Delta_h$, denoting them by $L_h$ and $D_h$, respectively; that is, $\Delta_h = L_h + D_h$ with
\[
L_h f = \frac{\Delta(fh_\kappa) - f\,\Delta h_\kappa}{h_\kappa}, \qquad
D_h f(x) = -\sum_{v\in R_+}\kappa_v\,\frac{f(x) - f(x\sigma_v)}{\langle v, x\rangle^2}\,\|v\|^2. \tag{7.1.2}
\]
Lemma 7.1.3 Both $L_h$ and $D_h$ commute with the action of the reflection group $W$.

Proof We have $L_h f = \Delta f + 2\langle\nabla h_\kappa, \nabla f\rangle/h_\kappa$. We need to prove that $R(w)L_h f = L_h R(w)f$ for every $w \in W$. The Laplacian commutes with $R(w)$ since $W$ is a subgroup of $O(d)$. Every term in $2\langle\nabla h_\kappa, \nabla f\rangle/h_\kappa = \frac{1}{2}\big[\Delta(h_\kappa^2 f) - f\,\Delta(h_\kappa^2) - h_\kappa^2\,\Delta f\big]/h_\kappa^2$ commutes with $R(w)$, since $h_\kappa^2$ is $W$-invariant. Since $\Delta_h$ commutes with $R(w)$ and $D_h = \Delta_h - L_h$, the operator $D_h$ also commutes with the action of $W$.

In the proofs of the following lemmas and theorem assume that $\kappa_v \ge 1$. Analytic continuation can be used to extend the range of validity to $\kappa_v \ge 0$. Moreover, we will write $\mathrm{d}\omega$ for $\mathrm{d}\omega^{d-1}$, the surface measure on $S^{d-1}$, whenever no confusion can result from this abbreviation.
Lemma 7.1.4 The operator $D_h$ is symmetric on polynomials in $L^2(h_\kappa^2\,\mathrm{d}\omega)$.

Proof For any polynomial $f$, $D_h f \in L^2(h_\kappa^2\,\mathrm{d}\omega)$, since each factor $\langle x, v\rangle^{-2}$ in the denominators of the terms of $D_h f$ is canceled by $h_\kappa^2(x) = \prod_{v\in R_+}|\langle x, v\rangle|^{2\kappa_v}$ when each $\kappa_v \ge 1$. For polynomials $f$ and $g$,
\begin{align*}
\int_{S^{d-1}} f\,D_h g\,h_\kappa^2\,\mathrm{d}\omega
&= -\sum_{v\in R_+}\kappa_v\Big(\int_{S^{d-1}} f(x)g(x)\langle x, v\rangle^{-2}h_\kappa^2(x)\,\mathrm{d}\omega\\
&\qquad - \int_{S^{d-1}} f(x)g(x\sigma_v)\langle x, v\rangle^{-2}h_\kappa^2(x)\,\mathrm{d}\omega\Big)\|v\|^2\\
&= \int_{S^{d-1}}(D_h f)\,g\,h_\kappa^2\,\mathrm{d}\omega,
\end{align*}
where in the second integral in the sum we have replaced $x$ by $x\sigma_v$; this step is valid by the $W$-invariance of $h_\kappa^2$ under each $\sigma_v$ and the relation $\langle x\sigma_v, v\rangle = \langle x, v\sigma_v\rangle = -\langle x, v\rangle$.
Lemma 7.1.5 For $f, g \in C^2(B^d)$,
\[
\int_{S^{d-1}}\frac{\partial f}{\partial n}\,g\,h_\kappa^2\,\mathrm{d}\omega = \int_{B^d}\big(gL_h f + \langle\nabla f, \nabla g\rangle\big)h_\kappa^2\,\mathrm{d}x,
\]
where $\partial f/\partial n$ is the normal derivative of $f$.

Proof Green's first identity states that
\[
\int_{S^{d-1}}\frac{\partial f_1}{\partial n}\,f_2\,\mathrm{d}\omega = \int_{B^d}\big(f_2\,\Delta f_1 + \langle\nabla f_1, \nabla f_2\rangle\big)\mathrm{d}x
\]
for $f_1, f_2 \in C^2(B^d)$. In this identity, first put $f_1 = fh_\kappa$ and $f_2 = gh_\kappa$ to get
\[
\int_{S^{d-1}}\Big(fg\frac{\partial h_\kappa}{\partial n} + gh_\kappa\frac{\partial f}{\partial n}\Big)h_\kappa\,\mathrm{d}\omega = \int_{B^d}\big[gh_\kappa\,\Delta(fh_\kappa) + \langle\nabla(fh_\kappa), \nabla(gh_\kappa)\rangle\big]\mathrm{d}x
\]
and then put $f_1 = h_\kappa$ and $f_2 = fgh_\kappa$ to get
\[
\int_{S^{d-1}}fg\frac{\partial h_\kappa}{\partial n}\,h_\kappa\,\mathrm{d}\omega = \int_{B^d}\big[fgh_\kappa\,\Delta h_\kappa + \langle\nabla(fgh_\kappa), \nabla h_\kappa\rangle\big]\mathrm{d}x.
\]
Subtracting the second equation from the first, we obtain
\[
\int_{S^{d-1}}g\frac{\partial f}{\partial n}\,h_\kappa^2\,\mathrm{d}\omega = \int_{B^d}\big\{gh_\kappa\big[\Delta(fh_\kappa) - f\,\Delta h_\kappa\big] + h_\kappa^2\langle\nabla f, \nabla g\rangle\big\}\mathrm{d}x
\]
by using the product rule for $\nabla$ repeatedly.
Theorem 7.1.6 Suppose that $f$ and $g$ are h-harmonic homogeneous polynomials of different degrees; then
\[
\int_{S^{d-1}} f(x)g(x)h_\kappa^2(x)\,\mathrm{d}\omega = 0.
\]

Proof Since $f$ is homogeneous, by Euler's formula $\partial f/\partial n = (\deg f)f$ on $S^{d-1}$. Hence, by Green's identity in Lemma 7.1.5,
\begin{align*}
(\deg f - \deg g)\int_{S^{d-1}} fg\,h_\kappa^2\,\mathrm{d}\omega &= \int_{B^d}(gL_h f - fL_h g)h_\kappa^2\,\mathrm{d}x\\
&= -\int_{B^d}(gD_h f - fD_h g)h_\kappa^2\,\mathrm{d}x = 0,
\end{align*}
using the symmetry of $D_h$ from Lemma 7.1.4.
Hence, the h-harmonics are homogeneous orthogonal polynomials with respect to $h_\kappa^2\,\mathrm{d}\omega$. Denote the space of h-harmonic homogeneous polynomials of degree $n$ by
\[
\mathcal{H}_n^d(h_\kappa^2) = \mathcal{P}_n^d\cap\ker\Delta_h,
\]
using notation from Section 4.2. If $h_\kappa^2$ is S-symmetric then it follows from Theorem 4.2.7 that $\mathcal{P}_n^d$ admits a unique decomposition in terms of the spaces $\mathcal{H}_j^d(h_\kappa^2)$. The same result also holds for the general weight function $h_\kappa$ of Definition 7.1.1.
Theorem 7.1.7 For each $n \in \mathbb{N}_0$, $\mathcal{P}_n^d = \bigoplus_{j=0}^{[n/2]}\|x\|^{2j}\mathcal{H}_{n-2j}^d(h_\kappa^2)$; that is, there is a unique decomposition $P(x) = \sum_{j=0}^{[n/2]}\|x\|^{2j}P_{n-2j}(x)$ for $P \in \mathcal{P}_n^d$ with $P_{n-2j} \in \mathcal{H}_{n-2j}^d(h_\kappa^2)$. In particular,
\[
\dim\mathcal{H}_n^d(h_\kappa^2) = \binom{n+d-1}{n} - \binom{n+d-3}{n-2}.
\]
Proof We proceed by induction. From the fact that $\Delta_h\mathcal{P}_n^d \subset \mathcal{P}_{n-2}^d$ we obtain $\mathcal{P}_0^d = \mathcal{H}_0^d$, $\mathcal{P}_1^d = \mathcal{H}_1^d$ and $\dim\mathcal{H}_n^d(h_\kappa^2) \ge \dim\mathcal{P}_n^d - \dim\mathcal{P}_{n-2}^d$. Suppose that the statement is true for $m = 0, 1, \ldots, n-1$ for some $n$. Then $\|x\|^2\mathcal{P}_{n-2}^d$ is a subspace of $\mathcal{P}_n^d$ and is isomorphic to $\mathcal{P}_{n-2}^d$, since homogeneous polynomials are determined by their values on $S^{d-1}$. By the induction hypothesis $\|x\|^2\mathcal{P}_{n-2}^d = \bigoplus_{j=0}^{[n/2]-1}\|x\|^{2j+2}\mathcal{H}_{n-2-2j}^d(h_\kappa^2)$, and by the previous theorem $\mathcal{H}_n^d(h_\kappa^2) \perp \mathcal{H}_{n-2-2j}^d(h_\kappa^2)$ for each $j = 0, 1, \ldots, [n/2]-1$. Hence $\mathcal{H}_n^d(h_\kappa^2) \perp \|x\|^2\mathcal{P}_{n-2}^d$ in $L^2(h_\kappa^2\,\mathrm{d}\omega)$. Thus $\dim\mathcal{H}_n^d(h_\kappa^2) + \dim\mathcal{P}_{n-2}^d \le \dim\mathcal{P}_n^d$, and so $\dim\mathcal{H}_n^d(h_\kappa^2) + \dim(\|x\|^2\mathcal{P}_{n-2}^d) = \dim\mathcal{P}_n^d$.
Corollary 7.1.8 $\mathcal{P}_n^d \cap (\mathcal{P}_{n-2}^d)^\perp = \mathcal{H}_n^d(h_\kappa^2)$; that is, if $p \in \mathcal{P}_n^d$ and $p \perp \mathcal{P}_{n-2}^d$ then $p$ is $h$-harmonic.
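The dimension count in Theorem 7.1.7 is pure combinatorics and is easy to confirm numerically. The following Python sketch is our addition (not from the text); the helper names `dim_P` and `dim_H` are ours. It checks the dimension formula against the classical count $2n+1$ for $d = 3$ and verifies that the decomposition of $\mathcal{P}_n^d$ is consistent dimension-wise.

```python
from math import comb

def dim_P(d, n):
    # dim P^d_n: homogeneous polynomials of degree n in d variables
    return comb(n + d - 1, n) if n >= 0 else 0

def dim_H(d, n):
    # Theorem 7.1.7: dim H^d_n(h^2) = C(n+d-1, n) - C(n+d-3, n-2)
    return dim_P(d, n) - dim_P(d, n - 2)

# d = 3 recovers the classical count 2n + 1 for spherical harmonics
assert [dim_H(3, n) for n in range(5)] == [1, 3, 5, 7, 9]
# the decomposition P^d_n = sum_j |x|^{2j} H^d_{n-2j} is consistent dimension-wise
for d in (2, 3, 4, 5):
    for n in range(8):
        assert dim_P(d, n) == sum(dim_H(d, n - 2 * j) for j in range(n // 2 + 1))
print("ok")
```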
A basis of $\mathcal{H}_n^d(h_\kappa^2)$ can be generated by a simple procedure. First, we state
Lemma 7.1.9 Let $s$ be a real number and $g \in \mathcal{P}_n^d$. Then
$$D_i(|x|^s g) = s x_i |x|^{s-2} g + |x|^s D_i g,$$
$$\Delta_h(|x|^s g) = 2s\Big( \frac{d}{2} + \frac{s}{2} - 1 + n + \gamma_\kappa \Big) |x|^{s-2} g + |x|^s \Delta_h g, \qquad (7.1.3)$$
where, if $s < 2$, both these identities are restricted to $\mathbb{R}^d \setminus \{0\}$.
Proof Since $|x|^s$ is invariant under the action of the reflection group, it follows from the product formula in Proposition 6.4.12 that
$$D_i(|x|^s g) = \partial_i(|x|^s)\, g + |x|^s D_i g,$$
from which the first identity follows. To prove the second identity, use the split $\Delta_h = L_h + D_h$. It is easy to see that $D_h(|x|^s g) = |x|^s D_h g$. Hence, using the fact that $L_h f = \Delta f + 2\langle \nabla h_\kappa, \nabla f\rangle / h_\kappa$, the second identity follows from
$$\Delta(|x|^s g) = 2s\Big( \frac{d}{2} + \frac{s}{2} - 1 + n \Big) |x|^{s-2} g + |x|^s \Delta g$$
and
$$\langle \nabla(|x|^s g), \nabla h_\kappa\rangle = s |x|^{s-2} g\, \langle x, \nabla h_\kappa\rangle + |x|^s \langle \nabla g, \nabla h_\kappa\rangle,$$
where we have used the relation $\sum_i x_i\, \partial h_\kappa/\partial x_i = \gamma_\kappa h_\kappa$.
Lemma 7.1.10 For $1 \le i \le d$,
$$\Delta_h[x_i f(x)] = x_i \Delta_h f(x) + 2 D_i f(x).$$
Proof By the product rules for $\Delta$ and for $\nabla$,
$$\Delta_h[x_i f(x)] = x_i \Delta f(x) + 2\frac{\partial f}{\partial x_i} + \sum_{v\in R_+} \kappa_v \Big[ \frac{2\langle v, \nabla f(x)\rangle x_i + 2 f(x) v_i}{\langle v, x\rangle} - |v|^2\, \frac{x_i f(x) - (x\sigma_v)_i f(x\sigma_v)}{\langle v, x\rangle^2} \Big],$$
which, upon using the identities
$$x_i f(x) - (x\sigma_v)_i f(x\sigma_v) = x_i \big[ f(x) - f(x\sigma_v) \big] + \big( x_i - (x\sigma_v)_i \big) f(x\sigma_v)$$
and
$$x_i - (x\sigma_v)_i = \frac{2\langle x, v\rangle v_i}{|v|^2},$$
is seen to be equal to $x_i \Delta_h f(x) + 2 D_i f(x)$.
Definition 7.1.11 For any $\alpha \in \mathbb{N}_0^d$, define homogeneous polynomials $H_\alpha$ by
$$H_\alpha(x) := |x|^{2\lambda_\kappa + 2|\alpha|}\, D^\alpha |x|^{-2\lambda_\kappa}, \qquad (7.1.4)$$
where $D^\alpha = D_1^{\alpha_1} \cdots D_d^{\alpha_d}$ and $\lambda_\kappa$ is as in (7.1.1).

Recall that $\varepsilon_i = (0, \ldots, 1, \ldots, 0)$, $1 \le i \le d$, denotes the standard basis elements of $\mathbb{R}^d$. In the following proof we write $\lambda$ for $\lambda_\kappa$.
Theorem 7.1.12 For each $\alpha \in \mathbb{N}_0^d$, $H_\alpha$ is an $h$-harmonic polynomial of degree $|\alpha|$, that is, $H_\alpha \in \mathcal{H}_{|\alpha|}^d(h_\kappa^2)$. Moreover, the $H_\alpha$ satisfy the recursive relation
$$H_{\alpha+\varepsilon_i}(x) = -2(\lambda_\kappa + |\alpha|)\, x_i H_\alpha(x) + |x|^2 D_i H_\alpha(x). \qquad (7.1.5)$$
Proof First we prove that $H_\alpha$ is a homogeneous polynomial of degree $|\alpha|$, using induction on $n = |\alpha|$. Clearly $H_0(x) = 1$. Assume that $H_\alpha$ has been proved to be a homogeneous polynomial of degree $n$ for $|\alpha| = n$. Using the first identity in (7.1.3) it follows that
$$D_i H_\alpha = (2\lambda + 2|\alpha|)\, x_i |x|^{2\lambda + 2|\alpha| - 2} D^\alpha(|x|^{-2\lambda}) + |x|^{2\lambda + 2|\alpha|} D_i D^\alpha(|x|^{-2\lambda}),$$
from which the recursive formula (7.1.5) follows from the definition of $H_\alpha$. Since $D_i : \mathcal{P}_n^d \to \mathcal{P}_{n-1}^d$, it follows from the recursive formula that $H_{\alpha+\varepsilon_i}$ is a homogeneous polynomial of degree $n+1 = |\alpha|+1$.
Next we prove that $H_\alpha$ is an $h$-harmonic polynomial, that is, that $\Delta_h H_\alpha = 0$. Setting $s = -2n - 2\lambda$ in the second identity of (7.1.3), we conclude that
$$\Delta_h\big( |x|^{-2n-2\lambda} g \big) = |x|^{-2n-2\lambda} \Delta_h g,$$
for $g \in \mathcal{P}_n^d$. In particular, for $g = 1$ and $n = 0$, $\Delta_h(|x|^{-2\lambda}) = 0$ for $x \in \mathbb{R}^d \setminus \{0\}$. Hence, setting $g = H_\alpha$ and $|\alpha| = n$, and using the fact that $D_1, \ldots, D_d$ commute, it follows that
$$\Delta_h H_\alpha = |x|^{2\lambda + 2n}\, D^\alpha \Delta_h(|x|^{-2\lambda}) = 0,$$
which holds for all $x \in \mathbb{R}^d$ since $H_\alpha$ is a polynomial.
The formula (7.1.4) defines a one-to-one correspondence between $x^\alpha$ and $H_\alpha$. Since every $h$-harmonic in $\mathcal{H}_n^d(h_\kappa^2)$ can be written as a linear combination of $x^\alpha$ with $|\alpha| = n$, the set $\{H_\alpha : |\alpha| = n\}$ contains a basis of $\mathcal{H}_n^d(h_\kappa^2)$. However, the $h$-harmonics in this set are not linearly independent, since there are $\dim \mathcal{P}_n^d$ of them, which is more than $\dim \mathcal{H}_n^d(h_\kappa^2)$. Nevertheless, it is not hard to derive a basis from them.
Corollary 7.1.13 The set $\{H_\alpha : |\alpha| = n\}$ contains bases of $\mathcal{H}_n^d(h_\kappa^2)$; one particular basis can be taken as $\{H_\alpha : |\alpha| = n,\ \alpha_d = 0, 1\}$.
Proof The linear dependence relations among the members of $\{H_\alpha : |\alpha| = n\}$ are given by
$$H_{\beta+2\varepsilon_1} + \cdots + H_{\beta+2\varepsilon_d} = |x|^{2\lambda_\kappa + 2n}\, D^\beta \Delta_h(|x|^{-2\lambda_\kappa}) = 0$$
for every $\beta \in \mathbb{N}_0^d$ such that $|\beta| = n-2$. These relations number exactly $\dim \mathcal{P}_{n-2}^d = \#\{\beta \in \mathbb{N}_0^d : |\beta| = n-2\}$. For each relation we can exclude one polynomial from the set $\{H_\alpha : |\alpha| = n\}$. The remaining $\dim \mathcal{H}_n^d(h_\kappa^2)$ $(= \dim \mathcal{P}_n^d - \dim \mathcal{P}_{n-2}^d)$ harmonics still span $\mathcal{H}_n^d(h_\kappa^2)$; hence they form a basis for $\mathcal{H}_n^d(h_\kappa^2)$. The basis is not unique, since we can exclude any of the polynomials $H_{\beta+2\varepsilon_1}, \ldots, H_{\beta+2\varepsilon_d}$ for each dependence relation. Excluding $H_{\beta+2\varepsilon_d}$ for all $|\beta| = n-2$ from $\{H_\alpha : |\alpha| = n\}$ is one way to obtain a basis, and it generalizes the spherical harmonics from Section 4.1.
The polynomials in the above basis are not mutually orthogonal, however. In fact, constructing orthonormal bases with explicit formulae is a difficult problem for most reflection groups. At present such bases are known only in the case of the abelian group $\mathbb{Z}_2^d$ (see Section 7.5) and the dihedral groups acting on polynomials of two variables (see Section 7.6).
Since (7.1.4) defines a map which takes $x^\alpha$ to $H_\alpha$, this map must be closely related to the projection operator from $\mathcal{P}_n^d$ to $\mathcal{H}_n^d(h_\kappa^2)$.
Definition 7.1.14 An operator $\operatorname{proj}_{n,h}$ is defined on the space of homogeneous polynomials $\mathcal{P}_n^d$ by
$$\operatorname{proj}_{n,h} P(x) = \frac{(-1)^n}{2^n (\lambda_\kappa)_n}\, |x|^{2\lambda_\kappa + 2n}\, P(D) |x|^{-2\lambda_\kappa},$$
where $(\lambda_\kappa)_n$ is a Pochhammer symbol.
Theorem 7.1.15 The operator $\operatorname{proj}_{n,h}$ is the projection operator from $\mathcal{P}_n^d$ to $\mathcal{H}_n^d(h_\kappa^2)$, and it is given by
$$\operatorname{proj}_{n,h} P(x) = \sum_{j=0}^{[n/2]} \frac{1}{4^j j! (-\lambda_\kappa - n + 1)_j}\, |x|^{2j} \Delta_h^j P; \qquad (7.1.6)$$
moreover, every $P \in \mathcal{P}_n^d$ has the expansion
$$P(x) = \sum_{j=0}^{[n/2]} \frac{1}{4^j j! (\lambda_\kappa + 1 + n - 2j)_j}\, |x|^{2j} \operatorname{proj}_{n-2j,h} \Delta_h^j P(x). \qquad (7.1.7)$$
Proof Use induction on $n = |\alpha|$. The case $n = 0$ is evident. Suppose that the equation has been proved for all $\alpha$ such that $|\alpha| = n$. Then, with $P(x) = x^\alpha$ and $|\alpha| = n$,
$$D^\alpha |x|^{-2\lambda} = (-1)^n 2^n (\lambda)_n\, |x|^{-2\lambda - 2n} \sum_{j=0}^{[n/2]} \frac{1}{4^j j! (-\lambda - n + 1)_j}\, |x|^{2j} \Delta_h^j(x^\alpha).$$
Applying $D_i$ to this equation and using the first identity in (7.1.3) with $g = \Delta_h^j(x^\alpha)$, we conclude, after carefully computing the coefficients, that
$$D_i D^\alpha |x|^{-2\lambda} = (-1)^n 2^n (\lambda)_n (-2\lambda - 2n)\, |x|^{-2\lambda - 2n - 2} \sum_{j=0}^{[(n+1)/2]} \frac{1}{4^j j! (-\lambda - n)_j}\, |x|^{2j} \big[ x_i \Delta_h^j(x^\alpha) + 2j\, \Delta_h^{j-1} D_i(x^\alpha) \big],$$
where we have also used the fact that $D_i$ commutes with $\Delta_h$. Now, using the identity
$$\Delta_h^j[x_i f(x)] = x_i \Delta_h^j f(x) + 2j D_i \Delta_h^{j-1} f(x), \qquad j = 1, 2, 3, \ldots,$$
which follows from Lemma 7.1.10, and the fact that $(-1)^n 2^n (\lambda)_n (-2\lambda - 2n) = (-1)^{n+1} 2^{n+1} (\lambda)_{n+1}$, we conclude that equation (7.1.6) holds for $P(x) = x_i x^\alpha$, which completes the induction.
To prove (7.1.7), recall that every $P \in \mathcal{P}_n^d$ has a unique decomposition $P(x) = \sum_{j=0}^{[n/2]} |x|^{2j} P_{n-2j}$ by Theorem 7.1.7, where $P_{n-2j} \in \mathcal{H}_{n-2j}^d(h_\kappa^2)$. It follows from the second identity of (7.1.3) that
$$\Delta_h^j P = \sum_{i=j}^{[n/2]} 4^j (-i)_j (-\lambda_\kappa - n + i)_j\, |x|^{2i-2j} P_{n-2i}, \qquad (7.1.8)$$
from which we obtain that
$$\operatorname{proj}_{n,h} P = \sum_{j=0}^{[n/2]} |x|^{2j} \sum_{i=j}^{[n/2]} \frac{4^j (-i)_j (-\lambda_\kappa - n + i)_j}{4^j j! (-\lambda_\kappa - n + 1)_j}\, |x|^{2i-2j} P_{n-2i} = \sum_{i=0}^{[n/2]} |x|^{2i} P_{n-2i} \sum_{j=0}^{i} \frac{(-i)_j (-\lambda_\kappa - n + i)_j}{j! (-\lambda_\kappa - n + 1)_j} = \sum_{i=0}^{[n/2]} |x|^{2i} P_{n-2i}\, {}_2F_1\Big( {-i,\ -\lambda_\kappa - n + i \atop -\lambda_\kappa - n + 1};\ 1 \Big).$$
By the Chu–Vandermonde identity the hypergeometric function ${}_2F_1$ is zero except when $i = 0$, which yields $\operatorname{proj}_{n,h} P = P_n$. Finally, using $\Delta_h^j P$ in place of $P$ and taking into account (7.1.8), we conclude that
$$\operatorname{proj}_{n-2j,h} \Delta_h^j P = 4^j (-j)_j (-\lambda_\kappa - n + j)_j\, P_{n-2j};$$
a simple conversion of Pochhammer symbols completes the proof.
An immediate consequence of this theorem is the following corollary.
Corollary 7.1.16 For each $\alpha \in \mathbb{N}_0^d$ with $n = |\alpha|$,
$$H_\alpha(x) = (-1)^n 2^n (\lambda_\kappa)_n \operatorname{proj}_{n,h}(x^\alpha).$$
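As a concrete sanity check of (7.1.6) — our addition, not part of the text — the following Python sketch evaluates the projection in the classical case $\kappa = 0$, where $\Delta_h$ reduces to the ordinary Laplacian and $\lambda_\kappa = (d-2)/2$, and verifies with exact rational arithmetic that the result is harmonic. All helper names are ours.

```python
from fractions import Fraction as F

def laplacian(p, d):
    # ordinary Laplacian acting on {exponent tuple: coefficient}
    out = {}
    for a, c in p.items():
        for i in range(d):
            if a[i] >= 2:
                b = list(a); b[i] -= 2; b = tuple(b)
                out[b] = out.get(b, F(0)) + c * a[i] * (a[i] - 1)
    return {a: c for a, c in out.items() if c}

def times_r2(p, d):
    # multiply a polynomial by |x|^2 = x_1^2 + ... + x_d^2
    out = {}
    for a, c in p.items():
        for i in range(d):
            b = list(a); b[i] += 2; b = tuple(b)
            out[b] = out.get(b, F(0)) + c
    return out

def poch(x, j):
    # Pochhammer symbol (x)_j
    r = F(1)
    for k in range(j):
        r *= x + k
    return r

def proj(p, d, n):
    # (7.1.6) with kappa = 0: sum_j |x|^{2j} Delta^j P / (4^j j! (-lam - n + 1)_j)
    lam = F(d - 2, 2)
    out, dj = {}, dict(p)
    for j in range(n // 2 + 1):
        coeff = F(1) / (F(4) ** j * poch(F(1), j) * poch(-lam - n + 1, j))
        term = dj
        for _ in range(j):
            term = times_r2(term, d)
        for a, c in term.items():
            out[a] = out.get(a, F(0)) + coeff * c
        dj = laplacian(dj, d)
    return {a: c for a, c in out.items() if c}

# project x_1^2 x_2^2 (degree 4, d = 3) and x_1^2 (degree 2): both results harmonic
assert laplacian(proj({(2, 2, 0): F(1)}, 3, 4), 3) == {}
assert laplacian(proj({(2, 0, 0): F(1)}, 3, 2), 3) == {}
print("ok")
```

For instance, `proj({(2,0,0): 1}, 3, 2)` returns the monomial dictionary of $x_1^2 - |x|^2/3$, the familiar harmonic component of $x_1^2$ in three variables.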
The projection operator turns out to be related to the adjoint operator $D_i^*$ of $D_i$ in $L^2(h_\kappa^2\,d\omega)$. The adjoint $D_i^*$ on $\mathcal{H}_n^d(h_\kappa^2)$ is defined by
$$\int_{S^{d-1}} p\,(D_i q)\, h_\kappa^2 \,d\omega = \int_{S^{d-1}} (D_i^* p)\, q\, h_\kappa^2 \,d\omega, \qquad p \in \mathcal{H}_n^d(h_\kappa^2),\ q \in \mathcal{H}_{n+1}^d(h_\kappa^2).$$
It follows that $D_i^*$ is a linear operator that maps $\mathcal{H}_n^d(h_\kappa^2)$ into $\mathcal{H}_{n+1}^d(h_\kappa^2)$.
Theorem 7.1.17 For $p \in \mathcal{H}_n^d(h_\kappa^2)$,
$$D_i^* p = 2(n + \lambda_\kappa + 1) \big[ x_i p - (2n + 2\lambda_\kappa)^{-1} |x|^2 D_i p \big].$$
Proof First assume that $|\alpha| = |\beta| = n$. It follows from (7.1.5) that
$$(2\lambda + 2n) \int_{S^{d-1}} x_i H_\alpha H_\beta h_\kappa^2 \,d\omega = \int_{S^{d-1}} (D_i H_\alpha) H_\beta h_\kappa^2 \,d\omega - \int_{S^{d-1}} H_{\alpha+\varepsilon_i} H_\beta h_\kappa^2 \,d\omega = 0.$$
Since $\{H_\alpha : |\alpha| = n\}$ contains a basis for $\mathcal{H}_n^d(h_\kappa^2)$, it follows that
$$\int_{S^{d-1}} x_i\, p\, q\, h_\kappa^2 \,d\omega = 0, \qquad 1 \le i \le d, \quad p, q \in \mathcal{H}_n^d(h_\kappa^2). \qquad (7.1.9)$$
Using this equation and the recursive relation (7.1.5) twice we conclude that, for $|\alpha| = n$ and $|\beta| = n+1$,
$$\int_{S^{d-1}} H_\alpha (D_i H_\beta) h_\kappa^2 \,d\omega = (2\lambda + 2n + 2) \int_{S^{d-1}} x_i H_\alpha H_\beta h_\kappa^2 \,d\omega = -\frac{2\lambda + 2n + 2}{2\lambda + 2n} \int_{S^{d-1}} H_{\alpha+\varepsilon_i} H_\beta h_\kappa^2 \,d\omega,$$
which gives, using the definition of $D_i^*$,
$$D_i^* H_\alpha = -\frac{n + \lambda + 1}{n + \lambda}\, H_{\alpha+\varepsilon_i}.$$
Using the recurrence relation (7.1.5) again, we obtain the desired result for $p = H_\alpha$, which completes the proof by Corollary 7.1.13.
We can also obtain a formula for the adjoint operator defined on $L^2(h_\kappa^2\,d\mu; \mathbb{R}^d)$, where we define the measure by
$$d\mu = (2\pi)^{-d/2} e^{-|x|^2/2} \,dx.$$
Let $\mathcal{V}_n^d(h_\kappa^2\,d\mu)$ denote the space of orthogonal polynomials of degree $n$ with respect to $h_\kappa^2\,d\mu$. An orthogonal basis of $\mathcal{V}_n^d(h_\kappa^2\,d\mu)$ is given by
$$P_{\beta,j}^n(x) = L_j^{n-2j+\lambda_\kappa}\big( \tfrac{1}{2}|x|^2 \big)\, Y_{n-2j,\beta}^h(x), \qquad (7.1.10)$$
where $Y_{n-2j,\beta}^h \in \mathcal{H}_{n-2j}^d(h_\kappa^2)$ and $0 \le 2j \le n$. Its verification follows as in (5.1.6).
We introduce the following normalization constants for convenience:
$$c_h = c_{h,d} = \Big( \int_{\mathbb{R}^d} h_\kappa^2(x) \,d\mu \Big)^{-1}, \qquad c_h' = c_{h,d}' = \Big( \int_{S^{d-1}} h_\kappa^2 \,d\omega \Big)^{-1}. \qquad (7.1.11)$$
Using polar coordinates we have for $p \in \mathcal{P}_{2m}$ that
$$\int_{\mathbb{R}^d} p(x) h_\kappa^2(x) \,d\mu = \frac{1}{(2\pi)^{d/2}} \int_0^\infty r^{2\gamma_\kappa + 2m + d - 1} e^{-r^2/2} \,dr \int_{S^{d-1}} p\, h_\kappa^2 \,d\omega = 2^{m+\gamma_\kappa}\, \frac{\Gamma(\lambda_\kappa + m + 1)}{\Gamma(\frac{d}{2})}\, \sigma_{d-1}^{-1} \int_{S^{d-1}} p\, h_\kappa^2 \,d\omega,$$
where $\sigma_{d-1} = 2\pi^{d/2}/\Gamma(\frac{d}{2})$ is the surface area of $S^{d-1}$. This implies, in particular, that
$$c_{h,d}' = 2^{\gamma_\kappa}\, \frac{\Gamma(\lambda_\kappa + 1)}{\Gamma(\frac{d}{2})}\, \sigma_{d-1}^{-1}\, c_{h,d}.$$
We denote the adjoint operator of $D_i$ on $L^2(h_\kappa^2\,d\mu; \mathbb{R}^d)$ by $\widetilde{D}_i^*$, to distinguish it from the $L^2(h_\kappa^2\,d\omega; S^{d-1})$ adjoint $D_i^*$.
Theorem 7.1.18 The adjoint $\widetilde{D}_i^*$ acting on $L^2(h_\kappa^2\,d\mu; \mathbb{R}^d)$ is given by
$$\widetilde{D}_i^* p(x) = x_i p(x) - D_i p(x), \qquad p \in \Pi^d.$$
Proof Assume that $\kappa_v \ge 1$. Analytic continuation can be used to extend the range of validity to $\kappa_v \ge 0$. Let $p$ and $q$ be two polynomials. Integrating by parts, we obtain
$$\int_{\mathbb{R}^d} \big[ \partial_i p(x) \big] q(x) h_\kappa^2(x) \,d\mu = -\int_{\mathbb{R}^d} p(x) \big[ \partial_i q(x) \big] h_\kappa^2(x) \,d\mu + \int_{\mathbb{R}^d} p(x) q(x) \big[ -2 h_\kappa(x) \partial_i h_\kappa(x) + h_\kappa^2(x) x_i \big] \,d\mu.$$
For a fixed root $v$,
$$\int_{\mathbb{R}^d} \frac{p(x) - p(x\sigma_v)}{\langle x, v\rangle}\, q(x) h_\kappa^2(x) \,d\mu = \int_{\mathbb{R}^d} \frac{p(x) q(x)}{\langle x, v\rangle}\, h_\kappa^2(x) \,d\mu - \int_{\mathbb{R}^d} \frac{p(x\sigma_v) q(x)}{\langle x, v\rangle}\, h_\kappa^2(x) \,d\mu = \int_{\mathbb{R}^d} \frac{p(x) q(x)}{\langle x, v\rangle}\, h_\kappa^2(x) \,d\mu + \int_{\mathbb{R}^d} \frac{p(x) q(x\sigma_v)}{\langle x, v\rangle}\, h_\kappa^2(x) \,d\mu,$$
where in the second integral we have replaced $x$ by $x\sigma_v$, which changes $\langle x, v\rangle$ to $\langle x\sigma_v, v\rangle = -\langle x, v\rangle$ and leaves $h_\kappa^2$ invariant. Note also that
$$h_\kappa(x) \partial_i h_\kappa(x) = \sum_{v\in R_+} \kappa_v \frac{v_i}{\langle x, v\rangle}\, h_\kappa^2(x).$$
Combining these ingredients, we obtain
$$\int_{\mathbb{R}^d} D_i p(x)\, q(x)\, h_\kappa^2(x) \,d\mu = \int_{\mathbb{R}^d} \Big[ p(x) \big( x_i q(x) - \partial_i q(x) \big) + \sum_{v\in R_+} \kappa_v v_i\, p(x)\, \frac{-2q(x) + q(x) + q(x\sigma_v)}{\langle x, v\rangle} \Big] h_\kappa^2(x) \,d\mu;$$
the term inside the large square brackets is exactly $p(x) \big[ x_i q(x) - D_i q(x) \big]$.
7.2 Inner Products on Polynomials
For polynomials in $\Pi^d$, a natural inner product is $\langle p, q\rangle_\partial = p(\partial) q(0)$, $p, q \in \mathcal{P}_n^d$, where $p(\partial)$ means that $x_i$ has been replaced by $\partial_i$ in $p(x)$. The reproducing kernel of this inner product is $\langle x, y\rangle^n / n!$, since $\langle \langle x, \cdot\rangle^n / n!,\, q\rangle_\partial = q(x)$ for $q \in \mathcal{P}_n^d$ and $x \in \mathbb{R}^d$. With the goal of constructing the Poisson kernel for $h$-harmonics, we consider the action of the intertwining operator $V$ (Section 6.5) on this inner product. As in the previous chapter, a superscript such as $(x)$ on an operator refers to the $\mathbb{R}^d$ variable on which it acts.
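On monomials the pairing satisfies $\langle x^\alpha, x^\beta\rangle_\partial = \alpha!\,\delta_{\alpha,\beta}$, and the reproducing property of $\langle x, y\rangle^n/n!$ then follows from the multinomial theorem. The following exact-arithmetic Python sketch is our addition (helper names are ours) and checks the reproducing property on a sample degree-3 polynomial.

```python
from fractions import Fraction as F
from math import factorial

def prodfact(a):
    r = 1
    for ai in a:
        r *= factorial(ai)
    return r

def pairing(p, q):
    # <p, q>_partial for monomial dicts, using <x^a, x^b>_partial = a! delta_{ab}
    return sum(ca * q.get(a, F(0)) * prodfact(a) for a, ca in p.items())

def exponents(d, n):
    # all multi-indices in N_0^d with |a| = n
    if d == 1:
        yield (n,)
        return
    for k in range(n + 1):
        for rest in exponents(d - 1, n - k):
            yield (k,) + rest

def prodmono(x, a):
    r = F(1)
    for xi, ai in zip(x, a):
        r *= F(xi) ** ai
    return r

def kernel(x, n):
    # <x, .>^n / n! expanded by the multinomial theorem: sum_a x^a y^a / a!
    return {a: prodmono(x, a) / prodfact(a) for a in exponents(len(x), n)}

q = {(2, 1, 0): F(3), (0, 3, 0): F(-1), (1, 1, 1): F(5)}   # 3x1^2 x2 - x2^3 + 5x1 x2 x3
x = (2, -1, 3)
assert pairing(kernel(x, 3), q) == sum(c * prodmono(x, a) for a, c in q.items()) == F(-41)
print("ok")
```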
Proposition 7.2.1 If $p \in \mathcal{P}_n^d$ then $K_n(x, D^{(y)}) p(y) = p(x)$ for all $x \in \mathbb{R}^d$, where $K_n(x, D^{(y)})$ is the operator formed by replacing $y_i$ by $D_i$ in $K_n(x, y)$.
Proof If $p \in \mathcal{P}_n^d$ then $p(x) = (\langle x, \partial^{(y)}\rangle^n / n!)\, p(y)$. Applying $V^{(x)}$ leads to $V^{(x)} p(x) = K_n(x, \partial^{(y)}) p(y)$. The left-hand side is independent of $y$, so applying $V^{(y)}$ to both sides gives $V^{(x)} p(x) = K_n(x, D^{(y)}) V^{(y)} p(y)$. Thus the desired identity holds for all $Vp$ with $p \in \mathcal{P}_n^d$, which completes the proof since $V$ is one to one.
Definition 7.2.2 The bilinear form $\langle p, q\rangle_h$ (known as an $h$ inner product) equals $p(D^{(x)}) q(x)$ for $p, q \in \mathcal{P}_n^d$ and $n \in \mathbb{N}_0$; since homogeneous polynomials of different degrees are orthogonal to each other, this extends by linearity to all polynomials.

We remark that if $\kappa = 0$, that is, $h_\kappa(x) = 1$, then $\langle p, q\rangle_h$ is the same as $\langle p, q\rangle_\partial$.
Theorem 7.2.3 For $p, q \in \mathcal{P}_n^d$,
$$\langle p, q\rangle_h = K_n(D^{(x)}, D^{(y)})\, p(x) q(y) = \langle q, p\rangle_h.$$
Proof By Proposition 7.2.1, $p(x) = K_n(x, D^{(y)}) p(y)$. The operators $D^{(x)}$ and $D^{(y)}$ commute and thus
$$\langle p, q\rangle_h = K_n(D^{(x)}, D^{(y)})\, p(y) q(x) = K_n(D^{(y)}, D^{(x)})\, p(y) q(x)$$
by part (iv) of Lemma 6.6.2. The latter expression equals $\langle q, p\rangle_h$.
Theorem 7.2.4 If $p \in \mathcal{P}_n^d$ and $q \in \mathcal{H}_n^d(h_\kappa^2)$ then
$$\langle p, q\rangle_h = c_h \int_{\mathbb{R}^d} p\, q\, h_\kappa^2 \,d\mu = 2^n \Big( \gamma_\kappa + \frac{d}{2} \Big)_n c_h' \int_{S^{d-1}} p\, q\, h_\kappa^2 \,d\omega.$$
Proof Since $p(D) q(x)$ is a constant, it follows from Theorem 7.1.18 that
$$\langle p, q\rangle_h = c_h \int_{\mathbb{R}^d} p(D) q(x)\, h_\kappa^2(x) \,d\mu = c_h \int_{\mathbb{R}^d} q(x) \big[ p(\widetilde{D}^*) 1 \big] h_\kappa^2(x) \,d\mu = c_h \int_{\mathbb{R}^d} q(x) \big[ p(x) + s(x) \big] h_\kappa^2(x) \,d\mu$$
for a polynomial $s$ of degree less than $n$; here using the relation $\widetilde{D}_i^* g(x) = x_i g(x) - D_i g(x)$ repeatedly and the fact that the degree of $D_i g$ is lower than that of $g$ shows that $p(\widetilde{D}^*) 1 = p(x) + s(x)$. But since $q \in \mathcal{H}_n^d(h_\kappa^2)$, it follows from the polar integral that $\int_{\mathbb{R}^d} q(x) s(x) h_\kappa^2(x) \,d\mu = 0$. This gives the first equality in the statement. The second follows from the use of polar coordinates as in the formula below equations (7.1.11).
Thus $\langle p, q\rangle_h$ is positive definite. We can also give an integral representation of $\langle p, q\rangle_h$ for all $p, q \in \Pi^d$. We need two lemmas.

Lemma 7.2.5 Let $p, q \in \mathcal{P}_n^d$ and write
$$p(x) = \sum_{j=0}^{[n/2]} |x|^{2j} p_{n-2j}(x) \quad\text{and}\quad q(x) = \sum_{j=0}^{[n/2]} |x|^{2j} q_{n-2j}(x),$$
with $p_{n-2j}, q_{n-2j} \in \mathcal{H}_{n-2j}^d(h_\kappa^2)$. Then
$$\langle p, q\rangle_h = \sum_{j=0}^{[n/2]} 4^j j! (n - 2j + \lambda_\kappa + 1)_j\, \langle p_{n-2j}, q_{n-2j}\rangle_h.$$
Proof The expansions of $p$ and $q$ in $h$-harmonics are unique by Theorem 7.1.7. Using the definition of $\Delta_h$ (see the text before Definition 7.1.2) and of $\langle p_{n-2j}, q_{n-2j}\rangle_h$,
$$\langle p, q\rangle_h = \sum_{j=0}^{[n/2]} \sum_{i=0}^{[n/2]} \Delta_h^i p_{n-2i}(D) \big[ |x|^{2j} q_{n-2j}(x) \big].$$
By the second identity of (7.1.3),
$$\Delta_h^i \big[ |x|^{2j} q_{n-2j}(x) \big] = 4^i (-j)_i (-\lambda_\kappa - n + j)_i\, |x|^{2j-2i} q_{n-2j}(x),$$
which is zero if $i > j$. If $i < j$ then $\langle |x|^{2j} q_{n-2j}(x),\, |x|^{2i} p_{n-2i}(x)\rangle_h = 0$ by the same argument and the fact that the pairing is symmetric (Theorem 7.2.3). Hence, the only remaining terms are those with $j = i$, which are given by $4^j j! (n - 2j + \lambda_\kappa + 1)_j\, p_{n-2j}(D) q_{n-2j}(x)$.
Lemma 7.2.6 Let $p \in \mathcal{H}_n^d(h_\kappa^2)$ and $n, m \in \mathbb{N}_0$; then
$$e^{-\Delta_h/2} \big[ |x|^{2m} p(x) \big] = (-1)^m m!\, 2^m L_m^{n+\lambda_\kappa}\big( \tfrac{1}{2}|x|^2 \big)\, p(x).$$
Proof We observe that, for $p \in \mathcal{P}_n^d$, $e^{-\Delta_h/2} p$ is a finite sum. From (7.1.3),
$$e^{-\Delta_h/2} \big[ |x|^{2m} p(x) \big] = \sum_{j=0}^m \frac{(-1)^j 2^j}{j!} (-m)_j (-\lambda_\kappa - n - m)_j\, |x|^{2m-2j} p(x).$$
Using the fact that $(-1)^j (1 - m - a)_j = (a)_m / (a)_{m-j}$ with $a = n + \lambda_\kappa + 1$, the stated identity follows from the expansion of the Laguerre polynomial in Subsection 1.4.2.
Theorem 7.2.7 For polynomials p, q
p, q)
h
= c
h
_
R
d
_
e

h
/2
p
__
e

h
/2
q
_
h
2

d.
220 Spherical Harmonics Associated with Reection Groups
Proof By Lemma 7.2.5, it sufces to establish this identity for p, q of the form
p(x) =|x|
2 j
p
m
(x), q(x) =|x|
2 j
q
m
(x), with p
m
, q
m
H
d
m
(h
2

).
By the second identity of (7.1.3),
p, q)
h
= 4
j
j!
_
m+ +
d
2
_
j
p
m
, q
m
)
h
= 4
j
j!(m+ +1)
j
2
m
( +1)
j
c
/
h
_
S
d1
p
m
q
m
h
2

d
= 2
m+2 j
j!( +1)
m+j
c
/
h
_
S
d1
p
m
q
m
h
2

d.
The right-hand side of the stated formula, by Lemma 7.2.6, is seen to equal
c
h
_
R
d
( j!2
j
)
2
[L
m+
j
_
1
2
|x|
2
_
]
2
p
m
(x)q
m
(x)h
2

(x)d
= 2


_
d
2
_
( +1)
c
/
h
( j!)
2
2
m++2 j
( +1+m+ j)
j!
_
d
2
_
_
S
d1
p
m
q
m
h
2

d
upon using polar coordinates in the integral and the relation between c
h
and c
/
h
.
These two expressions are clearly the same.
As an application of Theorem 7.2.4, we state the following result.
Proposition 7.2.8 For p P
d
n
and q H
d
n
, where H
d
n
is the space of ordinary
harmonics of degree n, we have
c
/
h
_
S
d1
pVqh
2

d =
_
d
2
_
n
_

+
d
2
_
n

d1
_
S
d1
pqd.
Proof Since
h
V = V, q H
d
n
implies that Vq H
d
n
(h
2

). Apply Theorem
7.2.4 with Vq in place of q; then
2
n
_
+
d
2
_
n
c
/
h
_
S
d1
pVqh
2

d = p(D)Vq(x)
=V p()q(x) = p()q(x)
= 2
n
_
d
2
_
n

d1
_
S
d1
pqd,
where the rst equality follows from Theorem 7.2.4, the second follows from
the intertwining property of V, the third follows from the fact that p()q(x) is a
constant and V1 = 1 and the fourth follows from Theorem 7.2.4 with = 0.
An immediate consequence of this theorem is the following biorthogonal
relation.
Corollary 7.2.9 Let S

be an orthonormal basis of H
d
n
. Then VS

and
S

are biorthogonal; more precisely,


7.3 Reproducing Kernels and the Poisson Kernel 221
_
S
d1
(VS

)S

h
2

d =
_
d
2
_
n
_

+
d
2
_
n
c
/
h

d1

,
.
In particular, VS

is a basis of H
d
n
(h
2

).
The corollary follows from setting p = S

and q = S

in the proposition. It
should be pointed out that the biorthogonality is restricted to elements of H
d
n
and H
d
n
(h
2

) for the same n. In general S

is not orthogonal to VS

with respect
to h
2

d if [[ <[[.
7.3 Reproducing Kernels and the Poisson Kernel
If f L
2
(h
2

d) then we can expand f as an h-harmonic series. Its nth component


is the orthogonal projection dened by
S
n
(h
2

; f , x) := c
/
h
_
S
d1
P
n
(h
2

; x, y) f (y)h
2

(y)d(y), (7.3.1)
where P
n
(h
2

; x, y) denotes the reproducing kernel of H


d
n
(h
2

) and is consistent
with the notation (4.2.4). Recall K
n
(x, y) dened in Denition 6.6.1.
Theorem 7.3.1 For n N
0
and for x, y R
d
,
P
n
(h
2

; x, y) =

0jn/2
(

+1)
n
2
n2 j
_
1n

)
j
j!
|x|
2 j
|y|
2 j
K
n2 j
(x, y).
Proof The kernel P
n
(h
2

; x, y) is uniquely dened by the fact that it repro-


duces the space H
d
n
(h
2

). Let f H
d
n
(h
2

). Then by Proposition 7.2.1 f (y) =


K
n
(D
(x)
, y) f (x). Fix y and let p(x) = K
n
(x, y), so that f (y) = p, f )
h
. Expand
p(x) as p(x) =
0jn/2
|x|
2 j
p
n2 j
(x) with p
n2 j
H
d
n2 j
(h
2

); it then follows
from Theorem 7.2.4 that
f (y) =p, f )
h
=p
n
, f )
h
= 2
n
( +1)
n
c
/
h
_
S
d1
p
n
f h
2

d.
Thus, by the reproducing property, P
n
(h
2

; x, y) = 2
n
( + 1)
n
p
n
(x) with p
n
=
proj
n,h
p. Hence, the stated identity follows from Theorem 7.1.15 and the fact
that
j
h
p(x) =
j
h
K
n
(x, y) =|y|
2 j
K
n2 j
(x, y); see (v) in Proposition 6.6.2.
Corollary 7.3.2 For n N
0
and |y| |x| = 1,
P
n
(h
2

; x, y) =
n+

V
_
C

n
__
,
y
|y|
___
(x)|y|
n
.
Proof Since the intertwining operator V is linear and , y)
m
is homogeneous of
degree m in y, for |x| = 1 write
222 Spherical Harmonics Associated with Reection Groups
P
n
(h
2

, x, y) =V
_
[n/2]

j=0
()
n
2
n2 j
(1n)
j
j!(n2 j)!
_
,
y
|y|
_
n2 j
_
(x)|y|
n
.
Set t = , y/|y|) and denote the expression inside the braces by L
n
(t). Since
(1 n )
j
= (1)
j
()
n
/()
nj
, the proof of Proposition 1.4.11 shows that
L
n
(t) = [( +1)
n
/()
n
]C

n
(t), which gives the stated formula.
The Poisson or reproducing kernel P(h
2

; x, y) is dened by the property


f (y) = c
/
h
_
S
d1
f (x)P(h
2

; x, y)h
2

(x)d(x)
for each f H
d
n
(h
2

), n N
0
and |y| < 1.
Theorem 7.3.3 Fix y B
d
; then
P(h
2

; x, y) =V
_
1|y|
2
(12, y) +|y|
2
)

+1
_
(x)
for |y| < 1 =|x|.
Proof Denote by f
y
the function which is the argument of V in the state-
ment. We claim that f
y
A(B
d
) (see Denition 6.5.19) with |f |
A
= (1 |y|
2
)
(1|y|)
22
. Indeed,
f
y
(x) = (1|y|
2
)(1+|y|
2
)
1
_
1
2x, y)
1+|y|
2
_
d/2
= (1|y|
2
)(1+|y|
2
)
1

n=0
(1+)
n
2
n
n!(1+|y|
2
)
n
x, y)
n
.
But |x, y)
n
|

=|y|
n
and 0 2|y| < 1+|y|
2
for |y| < 1, so that
| f
y
|
A
= (1|y|
2
)(1+|y|
2
)
1
_
1
2|y|
1+|y|
2
_
1
= (1|y|
2
)(1|y|)
22
.
Thus, V f
y
is dened and continuous for |x| 1. Since |y| < 1, the stated iden-
tity follows from P(h
2

; x, y) =

n=0
P
n
(h
2

; x, y), Corollary 7.3.2 and the Poisson


kernel of the Gegenbauer polynomials.
The Poisson kernel P(h
2

; x, y) satises the following two properties:


0 P(h
2

; x, y), |y| |x| = 1,


c
/
h
_
S
d1
P(h
2

; x, y)h
2

(y)d(y) = 1.
The rst follows from the fact that V is a positive operator (R osler [1998]), and
the second is a consequence of the reproducing property.
7.3 Reproducing Kernels and the Poisson Kernel 223
The formula in Corollary 7.3.2 also allows us to prove a formula that is the
analogue of the classical FunkHecke formula for harmonics. Denote by w

the
normalized weight function
w

(t) = B( +
1
2
,
1
2
)
1
(1t
2
)
1/2
, t [1, 1],
whose orthogonal polynomials are the Gegenbauer polynomials. Then the Funk
Hecke formula for h-harmonics is as follows.
Theorem 7.3.4 Let f be a continuous function on [1, 1]. Let Y
h
n
H
d
n
(h
2

).
Then
c
/
h
_
S
d1
V f (x, ))(y)Y
h
n
(y)h
2

(y)d =
n
( f )Y
h
n
(x), x S
d1
,
where
n
( f ) is a constant dened by

n
( f ) =
1
C

n
(1)
_
1
1
f (t)C

n
(t)w

(t)dt.
Proof First assume that f is a polynomial of degree m. Then we can write f in
terms of the Gegenbauer polynomials as
f (t) =
m

k=0
a
k
+k

k
(t) =
m

k=0
a
k

k
(1)

k
(t),
where

C

k
stands for a Gegenbauer polynomial that is orthonormal with respect
to w

. Using this orthonormality of the



C

k
, the coefcients a
k
are given by the
formula
a
k
=
1
C

k
(1)
_
1
1
f (t)C

k
(t)w

(t)dt,
where we have used the fact that [

k
(1)]
2
= [(k +)/]C

k
(1). For x, y S
d1
, it
follows from the formula for P
n
(h
2

) in Corollary 7.3.2 that


V f (x, ))(y) =
m

k=0
a
k
+k

VC

k
(x, ))(y) =
m

k=0
a
k
P
h
k
(x, y).
Since P
h
n
is the reproducing kernel of the space H
d
n
(h
2

) we have, for any Y


h
n

H
d
n
(h
2

),
_
S
d1
V f (x, ))(y)Y
h
n
(y)h
2

(y)d =
n
Y
h
n
(x), x S
d1
,
where if m < n then
n
= 0, so that both sides of the equation become zero.
Together with the formula for
n
( f ) this gives the stated formula for f a
polynomial.
If f is a continuous function on [1, 1] then choose P
m
to be a sequence of
polynomials such that P
m
converges to f uniformly on [1, 1]. The intertwin-
ing operator V is positive and [V(g)(x)[ supg(y) : [y[ 1 for [x[ 1. Let
224 Spherical Harmonics Associated with Reection Groups
g
x
(y) = f (x, y)) P
m
(x, y)). It follows that, for m sufciently large, [g
x
(y)[
for [y[ 1 and [V(g
x
)(y)[ for all x S
d1
, from which we deduce from the
dominated convergence theorem that the stated result holds for f .
If = 0 then V = id and the theorem reduces to the classical Funk-Hecke
formula.
7.4 Integration of the Intertwining Operator
Although a compact formula for the intertwining operator is not known in the
general case, its integral over S
d1
can be computed. This will have several
applications in the later development. We start with
Lemma 7.4.1 Let m 0 and S
m
H
d
m
be an ordinary harmonic. Then
c
/
h
_
S
d1
V(| |
2 j
S
m
)(x)h
2

(x)d =
(
d
2
)
j
(

+1)
j

m,0
.
Proof Since |x|
2 j
S
m
is a homogeneous polynomial of degree m + 2j, so is
V(| |
2 j
S
m
). Using the projection operator proj
k,h
: P
d
k
H
h
k
(h
2

) and (7.1.7),
we obtain
V(| |
2 j
S
m
) =
[m/2]+j

i=0
_
|x|
2i
1
4
i
i!
1
( +m+2 j 2i +1)
i
proj
m+2 j2i,h
[
i
h
V(| |
2 j
S
m
)]
_
.
Since H
d
k
(h
2

) 1 for k > 0 with respect to h


2

d, it follows that the integral


of the function proj
m+2 j2i,h

i
h
V(| |
2 j
S
m
) is zero unless m+2 j 2i = 0, which
shows that m is even. In the case where 2i = m+2 j, we have
_
S
d1
V(| |
2 j
S
m
)(x)h
2

(x)d
=
1
4
j+m/2
( j +
m
2
)!
( +1)
j+m/2
_
S
d1

j+m/2
h
V(| |
2 j
S
m
)h
2

(x)d,
since now proj
0,h
= id. By the intertwining property of V, it follows that

j+m/2
h
V(| |
2 j
S
m
) =V
j+m/2
(| |
2 j
S
m
).
Using the identity
(|x|
2 j
g
m
) = 4 j(m+ j 1+
d
2
)|x|
2 j2
g
m
+|x|
2 j
g
m
for g
m
P
d
m
, m = 0, 1, 2, . . . and the fact that S
m
= 0, we see that

j+m/2
(| |
2 j
S
m
)(x) = 4
j
j!
_
d
2
_
j

m/2
S
m
(x),
which is zero if m>0. For m=0, use the fact that S
0
(x) =1 and put the constants
together to nish the proof.
7.4 Integration of the Intertwining Operator 225
Recall from Section 5.2 the normalized weight function W

on the unit ball B


d
and the notation V
d
n
(W

) for the space of polynomials orthogonal with respect to


W

on B
d
.
Lemma 7.4.2 For P
n
V
d
n
(W
1/2
), =

and n N
0
and P
0
(x) = 1,
c
/
h
_
S
d1
V(P
n
)(x)h
2

(x)d =
0,n
.
Proof Recall that an orthonormal basis for the space V
d
n
(W
1/2
) was given in
Proposition 5.2.1; it sufces to show that
c
/
h
_
S
d1
V
_
P
(1,n2 j+(d2)/2)
j
(2| |
2
1)Y
n2 j,
_
(x)h
2

(x)d =
0,n
for 0 2j n andY
n2 j,
H
d
n2 j
. If 2j <n, this follows fromLemma 7.4.1. For
the remaining case, 2 j = n, use the fact that P
(a,b)
j
(t) = (1)
j
P
(b,a)
j
(t), together
with
(1)
j
P
((d2)/2,1)
j
(12t
2
) = (1)
j
(
d
2
)
j
j!
2
F
1
_
j, j +
d
2
;t
2
_
,
and the expansion of
2
F
1
to derive that
c
/
h
_
S
d1
V
_
P
(1,n2 j+(d2)/2)
j
(2| |
2
1)S
n2 j,
_
(x)h
2

(x)d
= (1)
j
(
d
2
)
j
j!
j

i=0
(j)
i
( j +)
i
(
d
2
)
i
_
S
d1
V(| |
2 j
)h
2

d
= (1)
j
(
d
2
)
j
j!
2
F
1
_
j, j +
+1
; 1
_
= (1)
j
(
d
2
)
j
(1 j)
j
j!( +1)
j
,
by the ChuVandermonde identity. This is zero if j > 0 or n = 2 j > 0. For n = 0
use the facts that P
(a,b)
0
(x) = 1 and V1 = 1.
Theorem 7.4.3 Let V be the intertwining operator. Then
_
S
d1
V f (x)h
2

(x)d = A

_
B
d
f (x)(1|x|
2
)

1
dx
for f L
2
(h

; S
d1
) such that both integrals are nite; here A

= w

/c
/
h
and w

is the normalization constant of W


1
.
226 Spherical Harmonics Associated with Reection Groups
Proof Expand f into an orthogonal series with respect to the orthonormal basis
P
n
j,
(W
1/2
) in Proposition 5.2.1. Since V is a linear operator,
_
S
d1
V f (x)h
2

(x)d =

n=0

, j
a
n
j,
( f )
_
S
d1
V
_
P
n
j,
(W
1/2
)

(x)h
2

(x)d.
By Lemma 7.4.2, only the constant term is nonzero; hence
c
/
h
_
S
d1
V f (x)h
2

(x)d = a
0
0,0
( f ) = w

_
B
d
f (x)(1|x|
2
)
1
dx,
which is the stated result.
Lemma 7.4.4 For an integrable function f : R R,
_
S
d1
f (x, y))d(y) =
d2
_
1
1
f (s|x|)(1s
2
)
(d3)/2
ds, x R
d
.
Proof The left-hand side is evidently invariant under rotations, which means that
it is a radial function, that is, a function that depends only on r =|x|. Hence, we
may assume that x = (r, 0, . . . , 0), so that
_
S
d1
f (x, y))d(y) =
d2
_

0
f (r cos)(sin)
d2
d
where y = (cos, sin y
/
), y
/
S
d2
, which gives the stated formula.
The following special case of Theorem 7.4.3 is of interest in itself.
Corollary 7.4.5 Let g : R R be a function such that all the integrals below
are dened. Then
_
S
d1
Vg(x, ))(y)h
2

(y)d(y) = B

_
1
1
g(t|x|)(1t
2
)

1
dt,
where B

= [B(

+
1
2
,
1
2
)c
/
h
]
1
.
Proof Using the formulae in Lemma 7.4.4 and Theorem 7.4.3,
I(x) =
_
S
d1
Vg(x, ))(y)h
2

(y)d = A

_
B
d
g(x, y))(1|y|
2
)
1
dy
= A

d2
_
1
0
r
d1
_
S
d1
g(rx, y
/
))d(y
/
)(1r
2
)
1
dr
= A

d2
_
1
0
r
d1
_
1
1
g(sr|x|)(1s
2
)
(d3)/2
ds(1r
2
)
1
dr.
Making the change of variable s t/r in the last formula and interchanging the
order of the integrations, we have
7.4 Integration of the Intertwining Operator 227
I(x) = A

d2
_
1
1
g(t|x|)
_
1
[t[
(r
2
t
2
)
(d3)/2
r(1r
2
)
1
dr dt
= A

d2

1
2
_
1
0
u
(d3)/2
(1u)
1
du
_
1
1
g(t|x|)(1t
2
)
+(d3)/2
dt,
where the last step follows from setting u = (1 r
2
)/(1 t
2
). This is the stated
formula. Instead of keeping track of the constant, we can determine it by setting
g = 1 and using the fact that V1 = 1.
These formulae will be useful for studying the convergence of the h-harmonic
series, as will be seen in Chapter 9. The integration formula for V in Theorem
7.4.3 also implies the following formula.
Theorem 7.4.6 Suppose that f is a polynomial and is a function such that
both the integrals below are nite. Then
_
R
d
V f (x)(|x|)h
2

(x)dx = A

_
R
d
f (x)(|x|)dx,
where (t) =
_

t
r(r
2
t
2
)

1
(r)dr and A

is as in Theorem 7.4.3.
Proof Write f in terms of its homogeneous components, f =

n=0
f
n
, where
f
n
P
d
n
. Then, using polar coordinates x = rx
/
,
_
R
d
V f (x)(|x|)h
2

(x)dx
=

n=0
_

0
r
d1+2+n
(r)dr
_
S
d1
V f
n
(x
/
)h
2

(x
/
)d
= A

n=0
_

0
r
d1+2+n
(r)dr
_
B
d
f
n
(x)(1|x|
2
)
1
dx
= A

_

0
r
d1+2
(r)
_
B
d
f (rx)(1|x|
2
)
1
dxdr,
where we have used the integral formula in Theorem 7.4.3. Using the polar coor-
dinates x = x
/
in the integral over B
d
and setting t = r, we see that the last
integral is equal to
A

_

0
r(r)
_
r
0
t
d1
(r
2
t
2
)
1
_
S
d1
f (tx
/
)ddt dr
= A

_

0
t
d1
_
S
d1
f (tx
/
)d
_
_

t
r(r
2
t
2
)
1
(r)dr
_
dt,
which can be seen to be the stated formula upon using the polar coordinates again.
Corollary 7.4.7 Let f be a polynomial. Then
_
R
d
V f (x)h
2

(x)e
|x|
2
/2
dx = b

_
R
d
f (x)e
|x|
2
/2
dx,
where b

= 2

1
(

)A

.
228 Spherical Harmonics Associated with Reection Groups
Proof Taking (t) = e
t
2
/2
in Theorem 7.4.3 we have
(t) =
_

t
2
/2
(u
1
2
t
2
)
1
e
u
du = 2
1
e
t
2
/2
_

0
v
1
e
v
dv,
from which the stated formula follows.
7.5 Example: Abelian Group Z
d
2
Here we consider h-harmonics for the abelian group Z
d
2
, the group of sign
changes. The weight function is dened by
h

(x) =
d

i=1
[x
i
[

i
,
i
0, x R
d
, (7.5.1)
and it is invariant under the sign changes of the group Z
d
2
. This case serves as an
example of the general results. Moreover, its simple structure allows us to make
many results more specic than those available for a generic reection group.
This is especially true in connection with an orthogonal basis and the intertwining
operator.
For the weight function h

dened in (7.5.1), we have

=[[ :=
1
+ +
d
and

=[[ +
d2
2
.
The normalization constant c
/
h,d
dened in (7.1.11) is given by
c
/
h
=

d/2
(
d
2
)
([[ +
d
2
)
(
1
+
1
2
) (
d
+
1
2
)
which follows from
_
S
d1
[x
1
[
2
1
[x
d
[
2
d
d = 2
(
1
+
1
2
) (
d
+
1
2
)
([[ +
d
2
)
.
7.5.1 Orthogonal basis for h-harmonics
Associated with h
2

in (7.5.1) are the Dunkl operators, given by


D
j
f (x) =
j
f (x) +
j
f (x) f (x
1
, . . . , x
j
, . . . , x
d
)
x
j
(7.5.2)
for 1 j d and the h-Laplacian, given by

h
f (x) =f (x) +
d

j=1

j
_
2
x
j
f
x
j

f (x) f (x
1
, . . . , x
j
, . . . , x
d
)
x
2
j
_
. (7.5.3)
We rst state an orthonormal basis for H
d
n
(h
2

) in terms of the general-


ized Gegenbauer polynomials C
(,)
n
dened in Subsection 1.5.2. For d = 2, the
orthonormal basis is given as follows.
7.5 Example: Abelian Group Z
d
2
229
Theorem 7.5.1 Let d = 2 and h

(x
1
, x
2
) = c
/
2,h
[x
1
[

1
[x
2
[

2
. A mutually orthog-
onal basis for H
2
n
(h
2

) is given by
Y
1
n
(x) = r
n
C
(
2
,
1
)
n
(cos),
Y
2
n
(x) = r
n
sin C
(
2
+1,
1
)
n1
(cos),
(7.5.4)
where we use the polar coordinates x = r(cos, sin) and set Y
2
0
(x) = 0.
Proof Since Y
1
n
is even in x
2
and Y
2
m
is odd in x
2
, we see that Y
1
n
and Y
2
n
are
orthogonal. The integral of a function f (x
1
, x
2
) can be written as
_
S
1
f (x
1
, x
2
)d =
_

f (cos, sin)d,
which can be converted to an integral over [1, 1] if f (x
1
, x
2
) is even in x
2
.
Clearly, Y
1
n
Y
1
m
is even in x
2
and so is Y
2
n
Y
2
m
. The orthogonality of Y
1
n
and Y
2
m
follows
from that of C
(
2
,
1
)
n
(t) and C
(
2
+1,
1
)
n
(t), respectively.
To state a basis for d 3, we use the spherical coordinates in (4.1.1) and the
notation in Section 5.2. Associated with = (
1
, . . . ,
d
), dene

j
= (
j
, . . . ,
d
), 1 j d.
Since
d
consists of only the last element of , write
d
=
d
. Dene
j
for
N
d1
0
similarly.
Theorem 7.5.2 For d > 2 and N
d
0
, dene
Y

(x) := (h

)
1
r
[[
g

(
1
)
d2

j=1
(sin
dj
)
[
j+1
[
C
(
j
,
j
)

j
(cos
dj
); (7.5.5)
g

() equals C
(
d
,
d1
)

d1
(cos) for
d
=0 and sin C
(
d
+1,
d1
)

d1
1
(cos) for
d
=1;
[
j
[ =
j
+ +
d
, [
j
[ =
j
+ +
d
,
j
=[
j+1
[ +[
j+1
[ +
dj1
2
and
[h
n

]
2
=
a

([[ +
d
2
)
n
d1

j=1
h
(
j
,
j
)

j
(
j
+
j
)

j
, a

=
_
1 if
d
= 0,

d
+
1
2
if
d
= 1.
Here h
(,)
n
denotes the normalization constant of C
(,)
n
. Then Y

: [[ =n,
d
=
0, 1 is an orthonormal basis of H
d
n
(h
2

).
Proof These formulae could be veried directly by the use of the spherical coor-
dinates. We choose, however, to give a different proof, making use of
h
. We start
with the following decomposition of P
d
n
:
P
d
n
=
n

l=0
x
nl
1
H
d1
l
(h
2

/
) +r
2
P
d
n2
,
230 Spherical Harmonics Associated with Reection Groups
where
/
= (
2
, . . . ,
d
). This follows from the fact that f P
d
n
can be written as
f (x) =|x|
2
F(x) +x
1

1
(x
/
) +
2
(x
/
), x = (x
1
, x
/
),
where F P
d
n2
,
1
P
d
n1
and
2
P
d
n
. We then apply the canonical decom-
position in Theorem 7.1.7 to
i
and collect terms according to the power of
x
1
, after replacing |x
/
|
2
by |x|
2
x
2
1
. For any p H
d
n
(h
2

), the decomposition
allows us to write
p(x) =
n

l=0
x
nl
1
p
l
(x
/
) +|x|
2
q(x), q P
d
n2
.
Therefore, using the formula for the harmonic projection operator proj
n,h
in
(7.1.6), it follows that
p(x) =
n

l=0
proj
n,h
[x
nl
1
p
l
(x
/
)] =
n

l=0
[n/2]

j=0
|x|
2 j

j
h
[x
nl
1
p
l
(x
/
)]
4
j
j!(

n+1)
j
.
Write
h,d
for
h
to indicate the dependence on d. Since our h

is a product
of mutually orthogonal factors, it follows from the denition of
h
that
h,d
=
D
2
1
+
h,d1
, where
h,d1
acts with respect to x
/
= (x
2
, . . . , x
d
). Therefore, since
p
l
H
d1
l
(h
2

/
), we have that

h,d
[x
nl
1
p
l
(x
/
)] =D
2
1
(x
nl
1
)p
l
(x
/
).
By the denition of D
1
, see (7.5.2), it follows easily that
D
2
1
(x
m
1
) =
_
m+[1(1)
m
]
1
__
m1+[1(1)
m1
]
1
_
x
m2
1
.
Using this formula repeatedly, for m = nl even we obtain

j
h,d
(x
nl
1
p
l
(x
/
)) = 2
2 j
_

nl
2
_
j
_

nl1
2

1
_
j
x
nl2 j
1
p
l
(x
/
).
Therefore, for nl even,
proj
n,h
(x
nl
1
p
l
(x
/
))
=
[nl/2]

j=0
(
nl
2
)
j
(
nl1
2

1
)
j
(n

+1)
j
j!
|x|
2 j
x
nl2 j
1
p
l
(x
/
)
= p
l
(x
/
)x
nl
1
2
F
1
_

nl
2
,
nl1
2

1
n

+1
;
|x|
2
x
2
1
_
=
_
n+[[ 2+
d
2
nl
2
_1
p
l
(x
/
)|x|
nl
P
(l+

1
1/2,
1
1/2)
(nl)/2
_
2
x
2
1
|x|
2
1
_
.
A similar equation holds for n l odd. By the denition of the generalized
Gegenbauer polynomials,
proj
n,h
[x
nl
1
p
l
(x
/
)] = cp
l
(x
/
)|x|
nl
C
(l+[[
1
+(d2)/2,
1
)
nl
(cos
1
),
7.5 Example: Abelian Group Z
d
2
231
where c is a constant; since p
l
H
d
l
(h
2

/
) it admits a similar decomposition to F

.
This process can be continued until we reach the case of h-harmonic polynomials
of two variables, x
1
and x
2
, which can be written as linear combinations of the
spherical h-harmonics in (7.5.4). Therefore, taking into account that in spherical
coordinates x
j
/r
j
= cos
dj
and r
j+1
/r
j
= sin
j1
where r
2
j
= x
2
j
+ +x
2
d
, we
conclude that any polynomial in H
d
n
(h
2

) can be uniquely presented as a linear


combination of functions of the form |x|
n
Y

. The value of h

is determined by
h
2

= c
/
h
_
S
d1
_
g

(
1
)
d2

j=1
(sin
dj
)
[
j+1
[
C
(
j
,
j
)

j
(cos
dj
)
_
2
h
2

(x)d,
where the integrand is given in spherical coordinates; this allows us to convert
into a product of integrals, which can be evaluated by the L
2
norm of C
(,)
n
in
Subsection 1.5.2 and gives a formula for the constant h

.
Let us look at the case d = 2 again. According to Theorem 7.5.1, both the
functions r
n
C
(
1
,
2
)
n
(cos) and r
n
sin C
(
1
+1,
2
)
n1
(cos) are h-harmonic polyno-
mials. Hence, they satisfy the same second order differentialdifference equation,

h
f (x) = 0. Furthermore, since r
n
C
(
1
,
2
)
2n
(cos) is even in both x
1
and x
2
it is
invariant under Z
2
2
; it follows that the equation when applied to this function
becomes a differential equation,
f +
2
1
x
1
f
x
1
+
2
2
x
2
f
x
2
= 0.
Changing variables to polar coordinates by setting x
1
= r cos and x
2
= r sin
leads to
f
x
1
=cos
f
r

sin
r
f

,
f
x
1
= sin
f
r
+
cos
r
f

,
f =
1
r
2

2
f

2
+
1
r

r
_
r
f
r
_
,
from which we conclude that the differential equation takes the form
1
r
2

2
f

2
+
1
r

r
_
r
f
r
_
+
2(
1
+
2
)
r
f
r
+
2
r
2
_

2
cos
sin

1
sin
cos
_
f

= 0.
Consequently, applying this equation to f = r
2n
g(cos) we end up with a dif-
ferential equation satised by the polynomial C
(
1
,
2
)
2n
(cos), which is a Jacobi
polynomial by denition. We summarize the result as the following.
Proposition 7.5.3 The Jacobi polynomial P
(
1
1/2,
2
1/2)
n
(cos2) satises the
differential equation
d
2
g
d
2
+2
_

2
cos
sin

1
sin
cos
_
dg
d
4n(n+
1
+
2
)g = 0.
232 Spherical Harmonics Associated with Reection Groups
The idea of deriving a differential equation in this manner will be used in the
case of several variables in Section 8.1, where the differential equations satised
by the orthogonal polynomials on the ball will be derived from the eigenequations
of the h-LaplaceBeltrami operator.
7.5.2 Intertwining and projection operators
For the weight function h

invariant under Z
d
2
in (7.5.1), the intertwining operator
V enjoys an explicit formula, given below:
Theorem 7.5.4 For the weight function h

associated with Z
d
2
,
V f (x) =
_
[1,1]
d
f (x
1
t
1
, . . . , x
d
t
d
)
d

i=1
c

i
(1+t
i
)(1t
2
i
)

i
1
dt,
where c

= [B(
1
2
, )]
1
. If any
i
= 0, the formula holds under the limit (1.5.1).
Proof The facts that V1 = 1 and that V : P
d
n
P
d
n
are evident from the de-
nition of V in Section 6.5. Thus, we need only verify that D
i
V = V
i
. From the
denition of D
i
, write
D
i
f =
i
f +

D
i
f ,

D
i
f (x) =
i
f (x) f (x 2x
i

i
)
x
i
.
Taking i = 1, for example, we consider

D
1
V f (x) =
1
_
[1,1]
d
f (x
1
t
1
, . . . , x
d
t
d
) f (x
1
t
1
, x
2
t
2
, . . . , x
d
t
d
)
x
1

i=1
(1+t
i
)
d

i=1
c

i
(1t
2
i
)

i
1
dt.
Since the difference in the integral is an odd function of t
1
, it follows from
integration by parts that

D
1
V f (x) =
2
1
x
1
_
[1,1]
d
f (x
1
t
1
, . . . , x
d
t
d
)t
1
d

i=2
(1+t
i
)
d

i=1
c

i
(1t
2
i
)

i
1
dt
=
_
[1,1]
d

1
f (x
1
t
1
, . . . , x
d
t
d
)(1t
1
)
d

i=1
(1+t
i
)
d

i=1
c

i
(1t
2
i
)

i
1
dt.
Furthermore, directly from the denition of V, we have

1
V f (x) =
_
[1,1]
d

1
f (x
1
t
1
, . . . , x
d
t
d
)t
1
d

i=1
(1+t
i
)
d

i=1
c

i
(1t
2
i
)

i
1
dt.
The stated result follows from adding the last two equations together.
7.5 Example: Abelian Group Z
d
2
233
As an immediate consequence of the explicit formula for V in Theorem 7.5.4
and of Corollary 7.3.2, we obtain an integral formula for the reproducing kernel
of H
d
n
(h
2

) when h

is invariant under Z
d
2
.
Theorem 7.5.5 For h
2

d on S
d1
and

=[[ +
d2
2
,
P
n
(h
2

; x, y) =
n+

_
[1,1]
d
C

n
(x
1
y
1
t
1
+ +x
d
y
d
t
d
)

i=1
(1+t
i
)
d

i=1
c

i
(1t
2
i
)

i
1
dt.
In the case d = 2, the dimension of H
2
n
(h
2

) is 2 and the orthonormal basis is


that given in (7.5.4). The theorem when restricted to polynomials that are even in
x
2
gives
Corollary 7.5.6 For , 0,
C
(,)
n
(cos)C
(,)
n
(cos)
=
n+ +
+
C
(,)
n
(1)c

_
1
1
_
1
1
C
+
n
(t cos cos +ssin sin)(1+t)
(1t
2
)
1
(1s
2
)
1
dt ds.
In particular, as 0 the above equation becomes the classical product
formula for the Gegenbauer polynomials,
C

n
(x)C

n
(y)
C

n
(1)
= c

_
1
1
C

n
(xy +s
_
1x
2
_
1y
2
)(1s
2
)
1
ds.
As another consequence of the corollary, set =0 in the formula to conclude that
C
(,)
n
(x) =
n+ +
+
c

_
1
1
C
+
n
(xt)(1+t)(1t
2
)
1
dt.
Comparing with the formulae in Subsection 1.5.2, we see that the above expres-
sion is the same as VC
+
n
(x) =C
(,)
n
(x), and this explains why we denoted the
integral transform in Subsection 1.5.2 by V.
Many identities and transforms for Jacobi and Gegenbauer polynomials can be
derived from these examples. Let us mention just one more. By Proposition 7.2.8
and Corollary 7.2.9,
r
n
C
(
1
,
2
)
n
(cos) = const V
_
(x
2
1
+x
2
2
)
n
T
n
_
x
1
/
_
x
2
1
+x
2
2
__
,
since V maps ordinary harmonics to h-harmonics; here T
n
is a Chebyshev
polynomial of the rst kind. Using the formula for V, the above equation becomes
234 Spherical Harmonics Associated with Reection Groups
C
(
1
,
2
)
n
(cos)
= const
_
1
1
_
1
1
(t
2
1
sin
2
+t
2
2
cos
2
)
n/2
T
n
_
t
2
cos
(t
2
2
cos
2
+t
2
1
sin
2
)
1/2
_
(1+t
2
)(1t
2
1
)

1
1
(1t
2
2
)

2
1
dt
1
dt
2
,
where the constant can be determined by setting = 0. In particular, for
2
= 0
this formula is a special case of an integral due to Feldheim and Vilenkin (see, for
example, Askey [1975, p. 24]).
The integral formula for V in Theorem 7.5.4 also gives an explicit formula for
the Poisson kernel, which can be used to study the Abel means dened by
S( f , rx
/
) = c
/
h
_
S
d1
f (y)P(h
2

; rx
/
, y)h
2

(y)d(y), x = rx
/
, x
/
S
d1
,
where r < 1 for integrable functions f on S
d1
. We have the following theorem.
Theorem 7.5.7 If f is continuous on S
d1
then
lim
r1
S( f , rx
/
) = f (x
/
), x = rx
/
, x
/
S
d1
.
Proof Let A

=y S
d1
: |x
/
y| < in this proof. Since P(h
2

) 0 and its
integral over S
d1
is 1,
[S( f , rx
/
) f (x
/
)[ = c
/
h
_
S
d1
_
f (x) f (y)

P(h
2

; rx
/
, y)h
2

(y)d(y)
c
/
h
_
_
A

+
_
S
d1
A

_
[ f (x) f (y)[P(h
2

; rx
/
, y)h
2

(y)d(y)
sup
|x
/
y|
[ f (rx) f (y)[ +2| f |

c
/
h
_
S
d1
A

P
h
(h
2

; rx
/
, y)h
2

(y)d(y),
where | f |

is the maximum of f over S


d1
. Since f is continuous over S
d1
,
we need to prove only that the last integral converges to zero when r 1. By the
explicit formula for P(h
2

) in Theorem 7.3.3, it sufces to show that


limsup
r1
_
S
d1
_
[0,1]
d
1
[12r(x
/
1
y
1
t
1
+ +x
/
d
y
d
t
d
) +r
2
]
[[+d/2

i=1
c

i
(1t
2
i
)

i
1
dt h
2

(y)dy
is nite for every x
/
S
d1
. Let B

=t = (t
1
, . . . , t
d
) : t
i
> 1, 1 i d with
=
1
4

2
. For y S
d1
A

and t B

, use the fact that x


/
, y) = 1
1
2
|x
/
y|
2
to get
[x
/
1
y
1
t
1
+ +x
/
d
y
d
t
d
[ =[x
/
, y) x
/
1
y
1
(1t
1
) x
/
d
y
d
(1t
d
)[
1
1
2

2
+max
i
(1t
i
) 1
1
4

2
,
7.5 Example: Abelian Group Z
d
2
235
from which it follows readily that
12r(x
/
1
y
1
t
1
+ +x
/
d
y
d
t
d
)+r
2
1+r
2
2r
_
1
1
4

2
_
= (1r)
2
+
1
2
r
2

1
4

2
.
For t [0, 1]
d
B

, we have that t
i
1 for at least one i. Assume that t
1

1 . We can assume also that x


/
1
,= 0, since otherwise t
1
does not appear in
the integral and we can repeat the above argument for (t
2
, . . . , t
d
) [0, 1]
d1
. It
follows that
12r(x
/
1
y
1
t
1
+ +x
/
d
y
d
t
d
) +r
2
= r
2
x
/
2
1
(1t
2
1
) +
d

i=1
(y
i
rx
/
i
t
i
)
2
+r
2
_
1x
/
2
1

i=2
x
/
2
i
t
2
i
_
r
2
x
/
2
1
(1t
2
1
) r
2
x
/
2
1
> 0.
Therefore, for each x = rx
/
the denominator of the integrand is nonzero and the
expression is nite as r 1.
7.5.3 Monic orthogonal basis
Our denition of the monomial orthogonal polynomials is an analogue of that
for the generating function of the Gegenbauer polynomials, and it uses the
intertwining operator V.
Denition 7.5.8 Dene polynomials

R

(x) by
V
_
1
(12b, ) +|b|
2
|x|
2
)

_
(x) =

N
d
0
b

(x), x R
d
.
Let F
B
be the Lauricella hypergeometric series of type B dened in Section 1.2.
Write 1 = (1, . . . , 1). We derive the properties of

R

in what follows.
Proposition 7.5.9 The polynomials

R

satisfy the following properties:


(i)

R

P
d
n
and

(x) =
2
[[
(

)
[[
!

2
)

(
+1
2
)

([[

+1)
[[
!
|x|
2[[
V

(x
2
),
where the series terminates, as the summation is over all such that


2
;
(ii)

R

H
d
n
(h
2

) and

(x) =
2
[[
(

)
[[
!
V

[S

()](x) for |x| = 1,


236 Spherical Harmonics Associated with Reection Groups
where
S

(y) = y

F
B
_

2
,
+1
2
; [[

+1;
1
y
2
1
, . . . ,
1
y
2
d
_
.
Furthermore,

[[=n
b

(x) =

n+

P
n
(h
2

; b, x), |x| = 1,
where P
n
(h
2

; y, x) is the reproducing kernel of H


d
n
(h
2

).
Proof Using the multinomial and binomial formulae, we write
(12a, y) +|a|
2
)

= [1a
1
(2y
1
a
1
) a
d
(2y
d
a
d
)]

()
[[
!
a

(2y
1
a
1
)

1
(2y
d
a
d
)

d
=

()
[[
!

(
1
)

1
(
d
)

d
!
2
[[[[
y

a
+
.
Changing summation indices according to
i
+
i
=
i
and using the expressions
()
mk
=
(1)
k
()
m
(1 m)
k
and
(m+k)
k
(mk)!
=
(1)
k
(m)
2k
m!
as well as 2
2k
(m)
2k
= (
m
2
)
k
(
1
2
(1m))
k
, we can rewrite the formula as
(12a, y) +|a|
2
)

2
[[
()
[[
!

2
)

(
+1
2
)

([[ +1)
[[
!
y
2
=

2
[[
()
[[
!
y

F
B
_

2
,
1
2
; [[ +1;
1
y
2
1
, . . . ,
1
y
2
d
_
.
Using the left-hand side of the rst equation of this proof with the function
(12b, y) +|x|
2
|b|
2
)

= (12|x|b, y/|x|) +
_
_
|x|b
_
_
2
)

and applying V with respect to y gives the expression for



R

in (i). If |x| = 1
then the second equation gives the expression for

R

in (ii). We still need to


show that

R

H
d
n
(h
2

). Let |x| = 1. For |y| 1 the generating function of the


Gegenbauer polynomials gives
(12b, y) +|b|
2
)

n=0
|b|
n
C

n
(b/|b|, y)).
Applying V

to y in the above equation gives

[[=n
b

(x) =|b|
n
V

[C

n
(b/|b|, ))](x), |x| = 1.
7.5 Example: Abelian Group Z
d
2
237
Using Corollary 7.3.2 we can see that

[[=n
b

(x) is a constant multiple


of P
n
(h
2

; x, b). Consequently, for any b, b

(x) is an element in H
d
n
(h
2

);
therefore, so is

R

.
In the following let

2
| denote (

1
2
|, . . . ,

d
2
|) for N
d
0
.
Proposition 7.5.10 For N
d
0
, let :=
+1
2
|. Then

(x) =
2
[[
(

)
[[
!
(
1
2
)

( +
1
2
)

(x),
where R

is given by
R

(x) = x

F
B
_
, + +
1
2
; [[

+1;
|x|
2
x
2
1
, . . . ,
|x|
2
x
2
d
_
.
In particular, R

(x) = x

+|x|
2
Q

(x) for Q

P
d
n2
.
Proof By considering m even and m odd separately, it is easy to verify that
c

_
1
1
t
m2k
(1+t)(1t
2
)
1
dt =
(
1
2
)
(m+1)/2|
( +
1
2
)
(m+1)/2|
(
m+1
2
| +
1
2
)
k
(
m+1
2
| +
1
2
)
k
for 0. Hence, again using the explicit formula for V, the formula for

R

in
part (i) of Proposition 7.5.9 becomes

(x) =
2
[[
()
[[
!
(
1
2
)
(+1)/2|
( +
1
2
)
(+1)/2|

2
)

(
+1
2
)

([[ +1)
[[
!
(
+1
2
| +
1
2
)

(
+1
2
| +
1
2
)

|x|
2[[
x
2
.
Using the fact that
_

2
_

_
+1
2
_

=
_
+
_
+1
2
__

_
+1
2
_
+
1
2
_

,
the above expression for

R

can be written in terms of F


B
as stated in the propo-
sition. Note that the function F
B
is a nite series, since (n)
m
= 0 if m > n, from
which the last assertion of the proposition follows.
If all components of are even then we can write R

using a Lauricella
function of type A, denoted by F
A
(c, ; ; x) in Section 1.2.
238 Spherical Harmonics Associated with Reection Groups
Proposition 7.5.11 Let N
d
0
. Then
R
2
(x) =(1)
[[
( +
1
2
)

(n+

)
[[
|x|
[2[
F
A
_
, [[ +

; +
1
2
;
x
2
1
|x|
2
, . . . ,
x
2
d
|x|
2
_
.
Proof For = 2 the formula in terms of F
B
becomes
R
2
(x) =

()

( +
1
2
)

(2[[ +1)
[[
!
|x|
2[[
x
22
,
where means that
1
<
1
, . . . ,
d+1
<
d+1
; note that ()

= 0 if >.
Changing the summation index according to
i

i
and using the formula
(a)
nm
= (1)
m
(a)
n
/(1 n a)
m
to rewrite the Pochhammer symbols, so that
we have, for example,
( +
1
2
)

=
(1)

( +
1
2
)

( +
1
2
)

, ( )! = (1)

=
(1)
[[
!
()

,
we can rewrite the summation as the stated formula in F
A
.
The restriction of R

on the sphere is monic: R

(x) = x

+Q

(x). Recall the


h-harmonics H

dened in Denition 7.1.11.


Proposition 7.5.12 The polynomials R

satisfy the following:


(i) R

(x) = proj
n,h
x

, n =[[ and R

(x) =
(1)
n
2
n
(

)
n
H

(x);
(ii) |x|
2
D
i
R

(x) =2(n+

)[R
+e
i
(x) x
i
R

(x)] ;
(iii) R

: [[ = n,
d
= 0, 1 is a basis of H
d
n
(h
2

).
Proof Since R

H
d
n
(h
2

) and R

(x) = x

|x|
2
Q(x), where Q P
d
n2
, it
follows that R

(x) = proj
n
x

. The relation to H

follows from Denition 7.1.14.


The second and third assertions are reformulations of the properties of H

.
In the following we compute the L
2
norm of R

. Let
|f |
2,
:=
_
c
/
h
_
S
d1
[R

(x)[
2
d
_
1/2
.
Theorem 7.5.13 For N
d
0
, let = +
1
2
|. Then
|R

|
2
2,
=

_
+
1
2
_

)
[[

()

_
+ +
1
2
_

_
+
1
2
_

!([[ [[ +

)
= 2

!
_
+
1
2
_

)
[[
_
1
0
d

i=1
C
(1/2,
i
)

i
(t)t
[[+2

1
dt. (7.5.6)
7.5 Example: Abelian Group Z
d
2
239
Proof Using a beta-type integral,
c
/
h
_
S
d
x
2
h
2

(x)d =
(

+1
([[ +

+1)
d+1

i=1
(
i
+
i
+
1
2
)
(
i
+
1
2
)
=
( +
1
2
)

( +1)
[[
,
it follows from the explicit formula for R

in Proposition 7.5.10 that


c
/
h
_
S
d
[R

(x)[
2
h
2

(x)d = c
/
h
_
S
d
R

(x)x

h
2

(x)d
=

()

( + +
1
2
)

( +
1
2
)

([[ +1)
[[
!( +1)
[[[[
.
Nowusing (a)
nm
=(1)
m
(a)
n
/(1na)
m
and (a)
n
/(a+1)
n
=a/(an) to
rewrite the sum gives the rst equation in (7.5.6). To derive the second equation,
we show that the sum in the rst equation can be written as an integral. We dene
a function
F(r) =

()

( + +
1
2
)

( +
1
2
)

!([[ [[ +)
r
[[[[+
.
Evidently, F(1) is the sum in the rst equation in (7.5.6). Moreover, the latter
equation is a nite sum over as ()

= 0 for > ; it follows that


F(0) = 0. Hence the sum F(1) is given by F(1) =
_
1
0
F
/
(r)dr. The derivative
of F can be written as
F
/
(r) =

()

( + +
1
2
)

( +
1
2
)

!
r
[[[[+1
= r
[[+1
d

i=1

i
(
i
)

i
(
i
+
i

i
+
1
2
)

i
(
i

i
+
1
2
)

i
!
r

i
= r
[[+1
d

i=1
2
F
1
_

i
,
i
+
i

i
+
1
2

i
+
1
2
;
1
r
_
.
The Jacobi polynomial P
(a,b)
n
can be written in terms of the hypergeometric series
2
F
1
, (4.22.1) of Szeg o [1975]:
P
(a,b)
n
(t) =
_
2n+a+b
n
_
_
t 1
2
_
n
2
F
1
_
n, na
2nab
;
2
1t
_
.
Using this formula with n =
i
, a =
i
2
i
+
i

1
2
, b =0 and r =
1
2
(1t), and
then using P
(a,b)
n
(t) = (1)
n
P
(b,a)
n
(t), we conclude that
F
/
(r) =
( +
1
2
)

!
( +
1
2
)

r
[[[[+1
d

i=1
P
(0,
i
2
i
+
i
1/2)

i
(2r 1).
240 Spherical Harmonics Associated with Reection Groups
Consequently, it follows that
F(1) =
( +
1
2
)

!
( +
1
2
)

_
1
0
d

i=1
P
(0,
i
2
i
+
i
1/2)

i
(2r 1)r
[[[[+1
dr.
By the denition of C
(,)
n
(see Denition 1.5.5), P
(0,
i
2
i
+
1
1/2)

i
(2t
2
1) =
C
(1/2,
i
)

(t) if
i
is even and tP
(0,
i
2
i
+
1
1/2)

i
(2t
2
1) =C
(1/2,
i
)

(t) if
i
is odd.
Hence, making the change of variables r t
2
in the above integral leads to the
second equation in (7.5.6).
Since R

is monic, x

(x) = Q

is a polynomial of lower degree when


restricted to the sphere, and R

is orthogonal to lower-degree polynomials, it


follows from standard Hilbert space theory that Q

is the best approximation to


x

among all polynomials of lower degree. In other words R

has the smallest L


2
norm among all polynomials of the form x

P(x), P
d
n1
on S
d1
:
|R

|
2
= min
P
d+1
n1
|x

P|
2
, [[ = n.
Corollary 7.5.14 Let N
d
0
and n =[[. Then
inf
P
d
n1
|x

P(x)|
2
2
=
2
_
+
1
2
_

()
[[
_
1
0
d

i=1
C
(1/2,
i
)

i
(t)
k
(1/2,
i
)

i
t
[[+21
dt,
where k
(,)
n
is the leading coefcient of C
(,)
n
(t).
7.6 Example: Dihedral Groups
We now consider the h-harmonics associated with the dihedral group, denoted
by I
k
, k 2, which is the group of symmetries of the regular k-gon with root
system I
2
(k) given in Subsection 6.3.4. We use complex coordinates z = x
1
+i x
2
and identify R
2
with C. For a xed k, dene = e
i/k
. Then the rotations in I
k
consist of z z
2 j
and the reections in I
k
consist of z z
2j
, 0 j k 1.
The associated weight function is a product of powers of the linear functions
whose zero sets are the mirrors of the reections in I
k
. For parameters , 0,
or 0 and = 0 when k is odd, dene
h
,
(z) =

z
k
z
k
2i

z
k
+ z
k
2

.
As a function in (x
1
, x
2
), this is positively homogeneous of degree = k( +).
Since z
k
z
k
=

k1
j=0
(z z
2 j
) and z
k
+ z
k
=

k1
j=0
(z z
2 j+1
), the reection
in the line z z
j
= 0 is given by z z
j
, 0 j 2k 1. The corresponding
group is I
2k
when > 0 and I
k
when = 0.
7.6 Example: Dihedral Groups 241
The normalization constant of h
2
,
is determined by
c
,
_

h
2
,
(e
i
)d = 1 with c
,
=
_
2B
_
+
1
2
, +
1
2
__
1
.
From z = x
1
+i x
2
, it follows that

z
=
1
2
_

x
1
i

x
2
_
,

z
=
1
2
_

x
1
+i

x
2
_
.
Denition 7.6.1 For a dihedral group W on R
2

=Cand associated weight func-


tion h
,
, let D =
1
2
(D
1
iD
2
) and D =
1
2
(D
1
+iD
2
). Also, let K
h
denote the
kernel of D.
Note that
h
=D
2
1
+D
2
2
= (D
1
+iD
2
)(D
1
iD
2
) = 4DD.
Lemma 7.6.2 For C, [[ = 1, let v = (Im, Re) R
2
; then, for a
polynomial in x or z,
1
2
f (x) f (x
v
)
x, v)
(Im +i Re) =
f (z) f ( z
2
)
z z
2
,
where = 1 for = 1 and =
2
for =1.
Proof An elementary calculation shows that 2x, v)(Im +i Re) = z z
2
and x
v
= x 2x, v)v = z
2
, which yields the stated formula.
As a consequence of the factorization of z
k
z
k
and Lemma 7.6.2, we have
D f (z) =
f
z
+
k1

j=0
f (z) f ( z
2 j
)
z z
2 j
+
k1

j=0
f (z) f ( z
2 j+1
)
z z
2 j+1
,
D f (z) =
f
z

k1

j=0
f (z) f ( z
2 j
)
z z
2 j

2j

k1

j=0
f (z) f ( z
2 j+1
)
z z
2 j+1

2 j+1
.
We denote the space of h-harmonics associated with h
2
,
by H
n
(h
2
,
).
7.6.1 An orthonormal basis of H
n
(h
2
,
)
Let Q
h
denote the closed span of of ker D
2
in L
2
(h
2
,
(e
i
)d). For = =
0, the usual orthogonal basis for H
n
(h
2
,
), n 1, is z
n
, z
n
; the members of
the basis are annihilated by / z and /z, respectively. This is not in general
possible for > 0. Instead, we have
Proposition 7.6.3 If
n
H
n
(h
2
,
)Q
h
, n 0, then z

n
(z) H
n+1
(h
2
,
) and
z

n
(z) is orthogonal to ker D in L
2
(h
2
,
(e
i
)d).
242 Spherical Harmonics Associated with Reection Groups
Proof By the above denition of D and Theorem 7.1.17, the adjoint D

satises
D

f (z) = (1+n+)
_
z f (z) (n+)
1
[z[
2
D f (z)

if f H
n
(h
2
,
) and D

f H
n+1
(h
2
,
). Since D

n
(z) =
_
D
n
(z)
_

= 0, this
shows that D

n
(z) = (1+n+) z

n
(z). By the denition of an adjoint, the latter
function is orthogonal to ker D.
Corollary 7.6.4 If
n
H
n
(h
2
,
) Q
h
then
n
(z), z

n
(z) : n = 0, 1, 2. . . is
an orthogonal basis for L
2
(h
2
,
(e
i
)d).
Proof This follows from the facts that dimH
0
(h
2
,
) = 1, dimH
n
(h
2
,
) = 2,
n 1 and L
2
(h
2
,
(e
i
)d) =

n=0
H
n
(h
2
,
).
Since Dp = 0 implies that
h
p = 0, the corollary shows that we only need to
nd homogeneous polynomials in K
h
= ker D. Most of the work in describing
K
h
reduces to the case k = 1; indeed, we have
Proposition 7.6.5 Let f (z) = g(z
k
) and = z
k
. Then
D f (z) = kz
k1
_
g

+
g() g(

+
g() g(

)
+

_
,
D f (z) = k z
k1
_
g


g() g(

+
g() g(

)
+

_
.
Proof Substituting g into the formulae for D f (z) and D f (z) and making use of
_
z
2 j
_
k
= z
k
=

,
_
z
2 j+1
_
k
=

,
we use the following two formulae to evaluate the sum of the resulting equations:
k1

j=0
1
t
2 j
=
kt
k1
t
k
1
,
k1

j=0

2 j
t
2 j
=
k
t
k
1
, t C.
These formulae can be veried as follows. Multiplying the rst formula by t
k
1,
the sum on the left-hand side becomes the Lagrange interpolation polynomial of
kt
k1
based on the points
2 j
, 0 j k 1, which is equal to the expression
kt
k1
on the right-hand side by the uniqueness of the interpolation problem. The
second formula follows from the rst on using
2 j
= t (t
2 j
). We then use
these formulae with t = z/ z and t = z/( z), respectively, to nish the proof.
Moreover, the following proposition shows that if g(z
k
) K
h
then z
j
g(z
k
)
K
h
for 0 j < k, which allows the complete listing of the homogeneous
polynomials in K
h
.
7.6 Example: Dihedral Groups 243
Proposition 7.6.6 If Dg(z
k
) = 0 then D
_
z
j
g(z
k
)

= 0 for 0 j k 1.
Proof We use induction, based on the formula
D[z f (z)] = zD f (z)
k1

j=0

2 j
f ( z
2 j
)
k1

j=0

2 j
f ( z
2 j+1
),
which is veried by noting that a typical term in D is of the form

z f (z) z f ( z
j
)
z z
j
= z
j
f (z) f ( z
j
)
z z
j
+
j
f ( z
j
).
Let f (z) = z
j
g(z
k
) with 0 j k 2 and assume that D f = 0. Then the above
formula becomes
D[z
j+1
g(z
k
)] = zD f (z)
_
z
j
g(z
k
) +z
j
g(z
k
)
_
k1

l=0

2l( j+1)
.
The latter sum is zero if 0 j +1 k 1 since
2
is a kth root of unity. Thus,
D(z
j+1
g(z
k
)) = 0 by the induction hypothesis.
We are now ready to state bases for K
h
, and thus bases for H (h
2
,
).
The groups I
1
=Z
2
and I
2
=Z
2
2
Set k = 1 and , 0. In polar coordinates the corresponding measure on the
circle is (sin
2
)

(cos
2
)

d, associated with the group I


2
. If = 0 then the
measure is associated with the group I
1
.
The basis of H
n
(h
2
,
) is given in Theorem 7.5.1 in terms of the generalized
Gegenbauer polynomials dened in Subsection 1.5.2. For each n, some linear
combination of the two elements of the basis is in K
h
. Dene
f
n
(z) = r
n
_
n+2 +
n
2 +2
C
(,)
n
(cos) +i sin C
(+1,)
n1
(cos)
_
, (7.6.1)
where
n
= 2 if n is even and
n
= 0 if n is odd, and z = re
i
. We will show that
f
n
is in K
h
. For this purpose it is convenient to express f
n
in terms of products of
powers of
1
2
(z + z) and
1
2
(z z).
Proposition 7.6.7 Let =
1
2
(z + z), =
1
2
(z z) and let
g
2n
=
( +
1
2
)
n
( + +1)
n
f
2n
and g
2n+1
=
( +
1
2
)
n+1
( + +1)
n
f
2n+1
.
Then, for n 0,
g
2n
(z) =(1)
n
n

j=0
_
n+
1
2

_
nj
_
n+
1
2

_
j
(n j)! j!

2n2 j

2 j
+(1)
n1
n1

j=0
_
n+
1
2

_
n1j
_
n+
1
2

_
j
(n1 j)! j!

2n12 j

2 j+1
244 Spherical Harmonics Associated with Reection Groups
and
g
2n+1
(z) =(1)
n+1
n

j=0
_
n
1
2

_
n+1j
_
n
1
2

_
j
(n j)! j!

2n+12j

2 j
+(1)
n+1
n

j=0
_
n
1
2

_
nj
_
n
1
2

_
j+1
(n j)! j!

2n2 j

2 j+1
.
Proof By the denition of generalized Gegenbauer polynomials, write g
n
in
terms of Jacobi polynomials. Recall from the proof of Proposition 1.4.14 that
P
(a,b)
n
(t) = (1)
n
n

j=0
(na)
nj
(nb)
j
(n j)! j!
_
1+t
2
_
nj
_
t 1
2
_
j
.
Set t = (z
2
+ z
2
)/(2z z); then
1+t
2
=
1
z z
_
z + z
2
_
2
and
t 1
2
=
1
z z
_
z z
2
_
2
.
Hence, for parameters a, b and z = re
i
,
r
2n
P
(a,b)
n
(cos2) = (1)
n
n

j=0
(na)
nj
(nb)
j
(n j)! j!
_
z + z
2
_
2n2 j
_
z z
2
_
2 j
.
Writing g
2n
and g
2n+1
in terms of Jacobi polynomials and applying the above
formula gives the stated results.
From Proposition 7.6.5 with k = 1 we have
D[(z +z)
n
(z z)
m
] =[n+(1(1)
n
)] (z + z)
n1
(z z)
m
+[m+(1(1)
m
)] (z + z)
n
(z z)
m1
;
D[(z + z)
n
(z z)
m
] =[n+(1(1)
n
)] (z + z)
n1
(z z)
m
[m+(1(1)
m
)] (z + z)
n
(z z)
m1
.
Thus D +D and D D have simple expressions on these basis elements (indi-
cating the abelian nature of the group I
2
). We can now prove some properties of
f
n
, (7.6.1).
Proposition 7.6.8 Let , 0. Then we have the following.
(i) D f
n
(z) = 0, n = 0, 1, . . . .
(ii) D f
2n
(z) = 2
( + +1)
n
( +
1
2
)
n
f
2n1
(z) and
D f
2n+1
(z) = 2(n+ +
1
2
)
( + +1)
n
( +
1
2
)
n
f
2n
(z).
7.6 Example: Dihedral Groups 245
(iii) Let h
n
=
_
2B( +
1
2
, +
1
2
)

1
_

[ f
n
(e
i
)[
2
(sin
2
)

(cos
2
)

d; then
h
2n
=
[ + +1]
n
( +
1
2
)
n
n!( +
1
2
)
n
and h
2n+1
=
( + +1)
n
( +
1
2
)
n+1
n!( +
1
2
)
n+1
.
Proof Observe that the symbolic involution given by , z + z z z leaves
each f
n
invariant and interchanges D +D with D D. We will show that
_
D+D
_
g
2n
= 2g
2n1
, apply the involution to show that
_
DD
_
g
2n
= 2g
2n1
also and then use the same method for g
2n+1
. This will prove (i) and (ii). Thus we
write
_
D+D
_
g
2n
(z)
= (1)
n
n

j=0
_
_
n+
1
2

_
nj
_
n+
1
2

_
j
(n j)! j!
(2n2j)

_
z + z
2
_
2n12 j
_
z z
2
_
2 j
_
+(1)
n1
n1

j=0
_
_
n+
1
2

_
n1j
_
n+
1
2

_
j
(n1 j)! j!
(2n12 j +2)

_
z + z
2
_
2n22 j
_
z z
2
_
2 j+1
_
= 2g
2n1
(z).
The rst sum loses the j = n term; in the second sum we use
_
n+
1
2

_
j
(2n12 j +2) =2
_
n+
1
2

_
j+1
.
Now we have
_
D+D
_
g
2n+1
(z)
= (1)
n+1
n

j=0
_
_
n
1
2

_
n+1j
_
n
1
2

_
j
(n j)! j!
(2n+12 j +2)
_
z + z
2
_
2n2 j
_
z z
2
_
2 j
_
+(1)
n+1
n

j=0
_
_
n
1
2

_
nj
_
n
1
2

_
j+1
(n j)! j!
(2n2 j)
_
z + z
2
_
2n12 j
_
z z
2
_
2 j+1
_
= 2
_
n+ +
1
2
__
n+ +
1
2
_
g
2n
.
246 Spherical Harmonics Associated with Reection Groups
Here the second sum loses the j = n term, and we used
_
n
1
2

_
nj
_
n
1
2

_
j+1
=
_
n+ +
1
2
__
n+ +
1
2
__
n+
1
2

_
n1j
_
n+
1
2

_
j
.
In the rst sumwe used
_
n
1
2

_
j
(2n+12 j +2) =2
_
n
1
2

_
j+1
.
Like the second sum, the rst sum equals 2
_
n+ +
1
2
__
n+ +
1
2
_
times the
corresponding sum in the formula for g
2n
.
The proof of (iii) follows from the formula
_

g(cos)(sin
2
)

(cos
2
)

d = 2
_
1
1
g(t)[t[
2
(1t
2
)
1/2
dt,
and the norm of the generalized Gegenbauer polynomials.
Note that if = = 0 then f
n
(z) = z
n
, which follows from using the for-
mula for the generalized Gegenbauer polynomials and taking limits after setting
( + +1)
n
/( + +n) = ( +)
n
/( +).
The real and imaginary parts of f
n
comprise an orthogonal basis for H
n
(h
2
,
).
That they are orthogonal to each other follows from their parity in x
2
. However,
unlike the case of = = 0, f
n
(z) and

f
n
(z) are not orthogonal. Corollary 7.6.4
shows that in fact f
n
(z) and z

f
n
are orthogonal.
Proposition 7.6.9 For n 0,
z

f
2n1
(z) =
1
2
+
n+ +
f
2n
(z) +
1
2
2n+ +
n+ +

f
2n
(z),
z

f
2n
(z) =
1
2n+2 +1
_
( ) f
2n+1
(z) +(2n+ + +1)

f
2n+1
(z)

.
Proof By Corollary 7.6.4 we can write z

f
n1
(z) as a linear combination of f
n
(z)
and

f
n
(z). From the formula in Proposition 7.6.7, it follows that the leading
coefcients of z
n
and z
n
are given by
f
2n
(z) =
( + +1)
2n1
2
2n
( +
1
2
)
n
n!
_
(2n+ +)z
2n
( +) z
2n

+ ,
f
2n+1
(z) =
( + +1)
2n
2
2n+1
( +
1
2
)
n+1
n!
_
(2n+ + +1)z
2n+1
+( ) z
2n+1

+ ,
where the rest of the terms are linear combinations of powers of z and z, each
homogeneous of degree n in z and z. Using these formulae to compare the coef-
cients of z
n
and z
n
in the expression z

f
n1
(z) =af
n
(z) +b

f
n
(z) leads to the stated
formulae.
7.6 Example: Dihedral Groups 247
These relations allow us to derive some interesting formulae for classical
orthogonal polynomials. For example, the coefcient f
n
satises the following
integral expression.
Proposition 7.6.10 For n 0,
c

_
1
1
_
1
1
_
scos +it sin
_
n
(1+s)(1+t)(1s
2
)
1
(1t
2
)
1
dsdt
= a
n
f
n
(e
i
) = a
n
_
n+2 +
n
2 +2
C
(,)
n
(cos) +i sin C
(+1,)
n1
(cos)
_
,
where
a
2n
=
(2n)!
2
2n
( + +1)
n
( +
1
2
)
n
,
a
2n+1
=
(2n+1)!
2
2n+1
( + +1)
n
( +
1
2
)
n+1
.
Proof Let V denote the intertwining operator associated with h
2
,
. Since the real
and imaginary parts of z
n
are ordinary harmonics, the real and imaginary parts
of V(z
n
) are elements of H
n
(h
2
,
). It follows that V(z
n
) = a
n
f
n
(z) +b
n
z

f
n1
(z).
Since f
n
(z) and z

f
n
(z) are orthogonal,
a
n
h
n
=
_

(Vz
n
)(e
i
)

f
n
(e
i
)d
,
=
n!
( + +1)
n
2
_
S
1
f
n
(z) z
n
d,
where h
n
is as in part (iii) of Proposition 7.6.8 and we have used Proposition
7.2.8 in the second equation. Using Theorem 7.2.4 and the expression for p, q)
h
in Denition 7.2.2, we conclude that
a
n
h
n
=
1
2
n
( + +1)
n
_

x
1
i

x
2
_
n
f
n
(z)
=
1
2
n
( + +1)
n
_
2

z
_
n
f
n
(z).
The value of (/z)
n
f
n
(z)[n!]
1
is equal to the coefcient of z
n
in f
n
, which is
given in the proof of Proposition 7.6.9. This gives the value of a
n
. In a similar way
we can compute b
n
: it is given by a constant multiple of (/z)
n
z

f
n
(z). Hence,
we conclude that Vz
n
= a
n
f
n
(z). The stated formula then follows from the closed
formula for V in Theorem 7.5.4.
In particular, if 0 then the formula in Proposition 7.6.10 becomes a
classical formula of the Dirichlet type:
c

_
1
1
_
cos +it sin
_
n
(1+t)(1t
2
)
1
dt
=
n!
(2)
n
C

n
(cos) +i
n!
(2 +1)
n
sin C
+1
n1
(cos)
for n = 1, 2, . . . This is a result of Erd elyi [1965] for functions that are even in .
248 Spherical Harmonics Associated with Reection Groups
The group I
k
If k is an odd integer then the weight function h

associated with the group I


k
is
the same as the weight function h
,
associated with the group I
2k
with = 0.
Thus, we only need to consider the case of I
k
for even k.
Set k = 2m. By Proposition 7.6.6, the homogeneous polynomials of degree
mn + j in K
h
are given by z
j
f
n
(z
m
), 0 j m1. Their real and imaginary
parts comprise an orthogonal basis of H
mn+j
(h
2
,
); that is, if we dene
p
mn+j
() =
n+2 +
n
2 +2
cos j C
(,)
n
(cosm) sin sin j C
(+1,)
n1
(cosm),
q
mn+j
() =
n+2 +
n
2 +2
sin j C
(,)
n
(cosm) +sin cos j C
(+1,)
n1
(cosm),
where
n
= 2 if n is even and = 0 if n is odd, then
Y
n,1
(x
1
, x
2
) = r
mn+j
p
mn+j
() and Y
n,2
(x
1
, x
2
) = r
mn+j
q
mn+j
(),
with polar coordinates x
1
= r cos and x
2
= r sin, form an orthogonal basis
for the space H
mn+j
(h
2
,
), 0 j m1. The two polynomials are indeed
orthogonal since Y
n,1
is even in x
2
and Y
n,2
is odd in x
2
.
Let us mention that in terms of the variable t = cos, the restriction of h
2
,
associated with the group I
2m
on the circle can be transformed into a weight
function
w
,
(t) =

sinm[
2
[ cosm[
2
=

U
m1
(t)
_
1t
2

2
[T
m
(t)[
2
(7.6.2)
dened on [1, 1], where T
m
and U
m
are Chebyshev polynomials of the rst and
the second kind, respectively. In particular, under the map cos t, p
mn+j
is the
orthogonal polynomial of degree mn+ j with respect to w
,
.
7.6.2 Cauchy and Poisson kernels
Let
n
, z

n0
denote an orthonormal basis for L
2
(h
2
,
(e
i
)d) associated
with the group I
k
; see Corollary 7.6.4. The Cauchy kernel is related to the
projection onto a closed span of
n
: n 0. It is dened by
C(I
k
; z, w) =

n=0

n
(z)
n
(w), [zw[ < 1.
For any polynomial f K
h
= ker D,
f (z) = c
,
_

f (e
i
)C(I
k
; z, e
i
)h
2
,
(e
i
)d, [z[ < 1.
7.6 Example: Dihedral Groups 249
The Poisson kernel, which reproduces any h-harmonic in the disk, was defined in Section 7.3. In terms of $\phi_n$ it is given by
$$P(I_k; z, w) = \sum_{n=0}^\infty \phi_n(z)\,\overline{\phi_n(w)} + \sum_{n=0}^\infty \bar z\,\overline{\phi_n(z)}\; w\,\phi_n(w) = C(I_k; z, w) + \bar z w\, C(I_k; w, z).$$
The following theorem shows that finding closed formulae for these kernels reduces to the cases $k = 1$ and $k = 2$.
Theorem 7.6.11  For each weight function $h_{\alpha,\beta}$ associated with the group $I_{2k}$,
$$P(I_{2k}; z, w) = \frac{1-|z|^2|w|^2}{1-\bar z w}\, C(I_{2k}; z, w).$$
Moreover,
$$P(I_{2k}; z, w) = \frac{1-|z|^2|w|^2}{1-|z^k|^2|w^k|^2}\; \frac{|1-z^k\bar w^k|^2}{|1-\bar z w|^2}\; P(I_2; z^k, w^k), \qquad (7.6.3)$$
where the Poisson kernel $P(I_2; z, w)$ associated with $h(x+iy) = |x|^\alpha |y|^\beta$ is given by
$$P(I_2; z, w) = (1-|zw|^2)\, c_\alpha c_\beta \int_{-1}^{1}\!\int_{-1}^{1} \bigl[1+2(\operatorname{Im} z)(\operatorname{Im} w)s + 2(\operatorname{Re} z)(\operatorname{Re} w)t + |zw|^2\bigr]^{-\alpha-\beta-1} (1+s)(1+t)(1-s^2)^{\beta-1}(1-t^2)^{\alpha-1}\,ds\,dt.$$
Proof  Write $C(I_k; z, w) = C_{\mathrm{Re}}(z,w) + iC_{\mathrm{Im}}(z,w)$ with $C_{\mathrm{Re}}$ and $C_{\mathrm{Im}}$ real and symmetric. Then
$$\operatorname{Re} P(I_k; z, w) = (1+\operatorname{Re}\bar z w)\, C_{\mathrm{Re}}(z,w) - (\operatorname{Im}\bar z w)\, C_{\mathrm{Im}}(z,w),$$
$$\operatorname{Im} P(I_k; z, w) = (\operatorname{Im}\bar z w)\, C_{\mathrm{Re}}(z,w) - (1-\operatorname{Re}\bar z w)\, C_{\mathrm{Im}}(z,w).$$
Since $\operatorname{Im} P(z,w) = 0$, we have $C_{\mathrm{Im}}(z,w) = [(\operatorname{Im}\bar z w)/(1-\operatorname{Re}\bar z w)]\, C_{\mathrm{Re}}(z,w)$, and thus $P(I_k; z, w) = [(1-|z|^2|w|^2)/(1-\bar z w)]\, C(I_k; z, w)$.
Now let $\{\psi_n : n \ge 0\}$ be an orthonormal basis of $\ker\overline D$ associated with $h(x+iy) = |x|^\alpha |y|^\beta$. By Proposition 7.6.6 we conclude that the polynomials $z^j \psi_n(z^k)$ are in $K_h$ and, in fact, are orthonormal h-harmonics in $\mathcal{H}_n(h_{\alpha,\beta}^2)$. Thus
$$C(I_{2k}; z, w) = \sum_{n=0}^\infty \psi_n(z^k)\,\overline{\psi_n(w^k)} \sum_{j=0}^{k-1} z^j \bar w^j = \frac{1-z^k\bar w^k}{1-z\bar w}\, C(I_2; z^k, w^k).$$
Hence, using $P(I_2; z, w) = [(1-|z|^2|w|^2)/(1-\bar z w)]\, C(I_2; z, w)$ and these formulae, the relation (7.6.3) between $P(I_{2k}; z, w)$ and $P(I_2; z, w)$ follows. The integral representation of $P(I_2; z, w)$ follows from Theorem 7.3.3 and the explicit formula for the intertwining operator $V$ for $I_2 = \mathbb{Z}_2^2$ in Theorem 7.5.4.
In particular, if $\beta = 0$ then the kernel $P(I_2; z, w)$ becomes the kernel $P(I_1; z, w)$ for the weight function $|x|^\alpha$ associated with $I_1 = \mathbb{Z}_2$:
$$P(I_1; z, w) = c_\alpha \int_{-1}^{1} \frac{1-|zw|^2}{\bigl(1+2(\operatorname{Im} z)(\operatorname{Im} w) + 2(\operatorname{Re} z)(\operatorname{Re} w)t + |zw|^2\bigr)^{\alpha+1}}\, (1+t)(1-t^2)^{\alpha-1}\,dt.$$
For the general odd-$k$ dihedral group, we can use the formula
$$P(I_k; z, w) = \frac{1-|z|^2|w|^2}{1-|z^k|^2|w^k|^2}\; \frac{|1-z^k\bar w^k|^2}{|1-\bar z w|^2}\; P(I_1; z^k, w^k)$$
to write down an explicit formula for $P(I_k; z, w)$.
In terms of the Lauricella function $F_A$ of two variables (also called Appell's function),
$$F_A(a_1, a_2;\, b_1, b_2;\, c;\, x, y) = \sum_{m=0}^\infty \sum_{n=0}^\infty \frac{(c)_{m+n}\,(a_1)_m\,(a_2)_n}{(b_1)_m\,(b_2)_n\, m!\, n!}\, x^m y^n$$
(see Section 1.2), the Poisson kernel $P(I_2; w, z)$ can be written as
$$P(I_2; w, z) = \frac{1}{|1+\bar z w|^{2(\alpha+\beta+1)}}\, F_A\Bigl(\beta+1, \alpha+1;\, 2\beta+1, 2\alpha+1;\, \alpha+\beta+1;\, \frac{4(\operatorname{Im} z)(\operatorname{Im} w)}{|1+\bar z w|^2}, \frac{4(\operatorname{Re} z)(\operatorname{Re} w)}{|1+\bar z w|^2}\Bigr).$$
This follows from a formula in Bailey [1935, p. 77].
On the one hand, the identity (7.6.3) gives an explicit formula for the Poisson kernel associated with the dihedral groups. On the other hand, the Poisson kernel is given in terms of the intertwining operator $V$ in Theorem 7.3.3; the two formulae taken together suggest that the intertwining operator $V$ might constitute an integral transform for any dihedral group.
7.7 The Dunkl Transform
The Fourier transform plays an important role in analysis. It is an isometry of $L^2(\mathbb{R}^d, dx)$ onto itself. It is a remarkable fact that there is a Fourier transform for reflection-invariant measures which is an isometry of $L^2(\mathbb{R}^d, h_\kappa^2\,dx)$ onto itself.
Recall that $K(x, y) = V^{(x)} e^{\langle x, y\rangle}$. Since $V$ is a positive operator, $|K(x, iy)| \le 1$. It plays a role similar to that of $e^{i\langle x, y\rangle}$ in classical Fourier analysis. Also recall that $d\omega(x) = (2\pi)^{-d/2} e^{-\|x\|^2/2}\,dx$.

Definition 7.7.1  For $f \in L^1(h_\kappa^2\,dx)$ and $y \in \mathbb{R}^d$, let
$$\widehat f(y) = (2\pi)^{-d/2}\, c_h \int_{\mathbb{R}^d} f(x)\, K(x, -iy)\, h_\kappa^2(x)\,dx$$
denote the Dunkl transform of $f$ at $y$.
By the dominated convergence theorem, $\widehat f$ is continuous on $\mathbb{R}^d$. Definition 7.7.1 was motivated by the following proposition.
Proposition 7.7.2  Let $p$ be a polynomial on $\mathbb{R}^d$ and $\gamma(y) = \sum_{j=1}^d y_j^2$ for $y \in \mathbb{C}^d$; then
$$c_h \int_{\mathbb{R}^d} \bigl(e^{-\Delta_h/2} p\bigr)(x)\, K(x, y)\, h_\kappa^2(x)\,d\omega(x) = e^{\gamma(y)/2}\, p(y).$$
Proof  Let $m$ be an integer larger than the degree of $p$, fix $y \in \mathbb{C}^d$ and let $q_m(x) = \sum_{j=0}^m K_j(x, y)$. Breaking $p$ up into homogeneous components, it follows that $\langle q_m, p\rangle_h = p(y)$. By the formula in Theorem 7.2.7,
$$\langle q_m, p\rangle_h = c_h \int_{\mathbb{R}^d} \bigl(e^{-\Delta_h/2} p\bigr)\bigl(e^{-\Delta_h/2} q_m\bigr)\, h_\kappa^2\,d\omega.$$
However, $\Delta_h^{(x)} K_n(x, y) = \gamma(y)\, K_{n-2}(x, y)$ and so
$$e^{-\Delta_h/2} q_m(x) = \sum_{j=0}^m \sum_{l \le j/2} \frac{1}{l!}\Bigl(-\frac{\gamma(y)}{2}\Bigr)^{l} K_{j-2l}(x, y) = \sum_{l \le m/2} \frac{1}{l!}\Bigl(-\frac{\gamma(y)}{2}\Bigr)^{l} \sum_{s=0}^{m-2l} K_s(x, y).$$
Now let $m \to \infty$. The double sum converges to $e^{-\gamma(y)/2} K(x, y)$ since it is dominated termwise by
$$\sum_{l=0}^\infty \frac{\|y\|^{2l}}{l!\, 2^l} \sum_{s=0}^\infty \frac{\|x\|^s \|y\|^s}{s!} = \exp\Bigl(\frac{\|y\|^2}{2} + \|x\|\,\|y\|\Bigr),$$
which is integrable with respect to $d\omega(x)$. By the dominated convergence theorem,
$$p(y) = e^{-\gamma(y)/2}\, c_h \int_{\mathbb{R}^d} \bigl[e^{-\Delta_h/2} p(x)\bigr]\, K(x, y)\, h_\kappa^2(x)\,d\omega(x).$$
Multiplying both sides by $e^{\gamma(y)/2}$ completes the proof.
An orthogonal basis for $L^2(\mathbb{R}^d, h_\kappa^2\,dx)$ is given as follows (compare with (7.1.10)).

Definition 7.7.3  For $m, n = 0, 1, 2, \ldots$ and $p \in \mathcal{H}_n^d(h_\kappa^2)$, let $\lambda_\kappa := \gamma_\kappa + \frac{d-2}{2}$. Then
$$\phi_m(p; x) = p(x)\, L_m^{n+\lambda_\kappa}(\|x\|^2)\, e^{-\|x\|^2/2}, \qquad x \in \mathbb{R}^d.$$
Proposition 7.7.4  For $k, l, m, n \in \mathbb{N}_0$, $p \in \mathcal{H}_n^d(h_\kappa^2)$ and $q \in \mathcal{H}_l^d(h_\kappa^2)$,
$$(2\pi)^{-d/2}\, c_h \int_{\mathbb{R}^d} \phi_m(p; x)\, \phi_k(q; x)\, h_\kappa^2(x)\,dx = \delta_{mk}\,\delta_{nl}\; 2^{-\lambda_\kappa-1}\, \frac{(\lambda_\kappa+1)_{n+m}}{m!}\, c_h' \int_{S^{d-1}} p\, q\, h_\kappa^2\,d\omega.$$
Proof  Using spherical polar coordinates and Definition 7.7.3, the integral on the left-hand side equals
$$c_h \int_0^\infty L_m^{n+\lambda_\kappa}(r^2)\, L_k^{l+\lambda_\kappa}(r^2)\, e^{-r^2}\, r^{n+l+2\gamma_\kappa+d-1}\,dr\; \frac{2^{1-d/2}}{\Gamma(\frac d2)} \int_{S^{d-1}} p\, q\, h_\kappa^2\,d\omega.$$
The integral on the right-hand side is zero if $n \ne l$. Assume $n = l$ and make the change of variable $r^2 = t$. The first integral then equals $\frac12\, \delta_{mk}\, \Gamma(n+\lambda_\kappa+1+m)/m!$.
By Theorem 3.2.18 the linear span of $\{\phi_m(p) : m, n \ge 0,\ p \in \mathcal{H}_n^d(h_\kappa^2)\}$ is dense in $L^2(\mathbb{R}^d, h_\kappa^2\,dx)$. Moreover, the $\phi_m(p)$ form a basis of eigenfunctions of the Dunkl transform.
Theorem 7.7.5  For $m, n = 0, 1, 2, \ldots$, $p \in \mathcal{H}_n^d(h_\kappa^2)$ and $y \in \mathbb{R}^d$,
$$\bigl(\phi_m(p)\bigr)^\wedge(y) = (-i)^{n+2m}\, \phi_m(p; y).$$
Proof  For brevity, let $A = n+\lambda_\kappa$. By Proposition 7.7.2 and Lemma 7.2.6,
$$(2\pi)^{-d/2}\, c_h \int_{\mathbb{R}^d} L_j^A\bigl(\tfrac12\|x\|^2\bigr)\, p(x)\, K(x, y)\, h_\kappa^2(x)\, e^{-\|x\|^2/2}\,dx = (-1)^j (j!\, 2^j)^{-1}\, e^{\gamma(y)/2}\, \gamma(y)^j\, p(y), \qquad y \in \mathbb{C}^d.$$
We can change the argument of the Laguerre polynomial by using the identity
$$L_m^A(t) = \sum_{j=0}^m \frac{2^j (A+1)_m}{(A+1)_j}\, \frac{(-1)^{m-j}}{(m-j)!}\, L_j^A\bigl(\tfrac t2\bigr), \qquad t \in \mathbb{R},$$
which can be proved by writing the generating function for the Laguerre polynomials as
$$(1-r)^{-A-1} \exp\Bigl(-\frac{xr}{1-r}\Bigr) = (1-r)^{-A-1} \exp\Bigl(-\frac12\, \frac{xt}{1-t}\Bigr)$$
with $t = 2r/(1+r)$.
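The argument-halving identity just stated is easy to verify numerically for small degrees (the values of $A$ and $t$ are arbitrary samples):

```python
import math
from scipy.special import eval_genlaguerre, poch

A, t = 1.7, 0.9  # arbitrary sample values
for m in range(6):
    rhs = sum(2**j * poch(A + 1, m) * (-1)**(m - j)
              / (poch(A + 1, j) * math.factorial(m - j))
              * eval_genlaguerre(j, A, t / 2) for j in range(m + 1))
    assert abs(rhs - eval_genlaguerre(m, A, t)) < 1e-10
```

For $m = 1$, for instance, the right-hand side is $-(A+1) + 2L_1^A(t/2) = A+1-t = L_1^A(t)$, matching the direct evaluation.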
Use this expression together with the above integral to obtain
$$(2\pi)^{-d/2}\, c_h \int_{\mathbb{R}^d} L_m^A(\|x\|^2)\, p(x)\, e^{-\|x\|^2/2}\, K(x, y)\, h_\kappa^2(x)\,dx = e^{\gamma(y)/2}\, p(y)\, (-1)^m\, \frac{(A+1)_m}{m!} \sum_{j=0}^m \frac{(-m)_j}{(A+1)_j}\, \frac{[-\gamma(y)]^j}{j!}.$$
Now replace $y$ by $-iy$ with $y \in \mathbb{R}^d$; then $\gamma(y)$ becomes $-\|y\|^2$ and $p(y)$ becomes $(-i)^n p(y)$, and the sum yields a Laguerre polynomial. The integral equals
$$(-1)^m (-i)^n\, p(y)\, L_m^A(\|y\|^2)\, e^{-\|y\|^2/2}.$$
Since the eigenvalues of the Dunkl transform are powers of $i$, this proves its isometry properties.
Corollary 7.7.6  The Dunkl transform extends to an isometry of $L^2(\mathbb{R}^d, h_\kappa^2\,dx)$ onto itself. The square of the transform is the central involution; that is, if $f \in L^2(\mathbb{R}^d, h_\kappa^2\,dx)$ and $\widehat f = g$, then $\widehat g(x) = f(-x)$ for almost all $x \in \mathbb{R}^d$.
The following proposition gives another interesting integral formula, as well as a method for finding the eigenfunctions of $\Delta_h$. First, recall the definition of the Bessel function of index $A > -1/2$:
$$J_A(t) = \frac{(t/2)^A}{\Gamma(A+\frac12)\,\Gamma(\frac12)} \int_{-1}^1 e^{its}\, (1-s^2)^{A-1/2}\,ds = \frac{(t/2)^A}{\Gamma(A+1)} \sum_{m=0}^\infty \frac{1}{m!\,(A+1)_m}\Bigl(-\frac{t^2}{4}\Bigr)^m, \qquad t > 0.$$
Proposition 7.7.7  Let $f \in \mathcal{H}_n^d(h_\kappa^2)$, $n = 0, 1, 2, \ldots$, and $y \in \mathbb{C}^d$; then
$$c_h' \int_{S^{d-1}} f(x)\, K(x, y)\, h_\kappa^2(x)\,d\omega = f(y) \sum_{m=0}^\infty \frac{\gamma(y)^m}{2^{n+2m}\, m!\, (\lambda_\kappa+1)_{m+n}}.$$
Moreover, if $y \in \mathbb{R}^d$ and $\lambda > 0$, then the function
$$g(y) = c_h' \int_{S^{d-1}} f(x)\, K(x, -i\lambda y)\, h_\kappa^2(x)\,d\omega(x)$$
satisfies $\Delta_h g = -\lambda^2 g$ and
$$g(y) = (-i)^n\, \Gamma(\lambda_\kappa+1)\, \Bigl(\frac{\lambda\|y\|}{2}\Bigr)^{-\lambda_\kappa} f\Bigl(\frac{y}{\|y\|}\Bigr)\, J_{n+\lambda_\kappa}(\lambda\|y\|).$$
Proof  Since $f$ is h-harmonic, $e^{-\Delta_h/2} f = f$. In the formula of Proposition 7.7.2,
$$c_h \int_{\mathbb{R}^d} f(x)\, K(x, y)\, h_\kappa^2(x)\,d\omega(x) = e^{\gamma(y)/2}\, f(y),$$
the part homogeneous of degree $n+2m$ in $y$, $m = 0, 1, 2, \ldots$, yields the equation
$$c_h \int_{\mathbb{R}^d} f(x)\, K_{n+2m}(x, y)\, h_\kappa^2(x)\,d\omega(x) = \frac{\gamma(y)^m}{2^m\, m!}\, f(y).$$
Then, using the integral formula in Theorem 7.2.4 and the fact that
$$\int_{S^{d-1}} f(x)\, K_j(x, y)\, h_\kappa^2(x)\,d\omega(x) = 0$$
if $j < n$ or $j \not\equiv n$ modulo 2, we conclude that
$$c_h' \int_{S^{d-1}} f(x)\, K(x, y)\, h_\kappa^2(x)\,d\omega(x) = \sum_{m=0}^\infty \frac{1}{2^{n+m}\,(\frac d2+\gamma_\kappa)_{n+m}}\; c_h \int_{\mathbb{R}^d} f(x)\, K_{n+2m}(x, y)\, h_\kappa^2(x)\,d\omega(x),$$
which gives the stated formula.
Now replace $y$ by $-i\lambda y$ for $\lambda > 0$, $y \in \mathbb{R}^d$, and let $A = n+\lambda_\kappa$. This leads to an expression for $g$ in terms of the Bessel function $J_A$. To find $\Delta_h g$ we can interchange the integral and $\Delta_h^{(y)}$, because the resulting integral of the series $\sum_{n=0}^\infty \Delta_h^{(y)} K_n(x, -i\lambda y)$ converges absolutely. Indeed,
$$\Delta_h^{(y)} K(x, -i\lambda y) = \Delta_h^{(y)} K(-i\lambda x, y) = \sum_{j=1}^{d} (-i\lambda x_j)^2\, K(-i\lambda x, y) = -\lambda^2 \|x\|^2\, K(x, -i\lambda y);$$
but $\|x\|^2 = 1$ on $S^{d-1}$ and so $\Delta_h g = -\lambda^2 g$.
Proposition 7.7.8  Suppose that $f$ is a radial function in $L^1(\mathbb{R}^d, h_\kappa^2\,dx)$, say $f(x) = f_0(\|x\|)$ for almost all $x \in \mathbb{R}^d$. Then the Dunkl transform $\widehat f$ is also radial and has the form $\widehat f(x) = F_0(\|x\|)$ for all $x \in \mathbb{R}^d$, with
$$F_0(\|x\|) = F_0(r) = \frac{1}{r^{\lambda_\kappa}} \int_0^\infty f_0(s)\, J_{\lambda_\kappa}(rs)\, s^{\lambda_\kappa+1}\,ds.$$
Proof  Using polar coordinates and Corollary 7.4.5 we get
$$\widehat f(y) = \frac{c_h}{(2\pi)^{d/2}} \int_0^\infty f_0(r)\, r^{d-1+2\gamma_\kappa} \int_{S^{d-1}} K(rx', -iy)\, h_\kappa^2(x')\,d\omega(x')\,dr = \frac{c_h}{(2\pi)^{d/2}}\, B_{\lambda_\kappa} \int_0^\infty f_0(r)\, r^{2\lambda_\kappa+1} \int_{-1}^1 e^{-ir\|y\|t}\, (1-t^2)^{\lambda_\kappa-1/2}\,dt\,dr,$$
from which the stated result follows from the definition of $J_A(r)$ and putting the constants together.
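As a sanity check, the Hankel-type integral in Proposition 7.7.8 should fix a Gaussian profile, since $e^{-\|x\|^2/2} = \phi_0(1; \cdot)$ is an eigenfunction of the Dunkl transform with eigenvalue $1$ (Theorem 7.7.5 with $n = m = 0$). A minimal numerical sketch (the value of $\lambda_\kappa$ is an arbitrary choice):

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import jv

lam = 1.3  # arbitrary sample value standing in for lambda_kappa
for r in (0.5, 1.0, 2.0):
    val, _ = quad(lambda s: np.exp(-s * s / 2) * jv(lam, r * s) * s**(lam + 1),
                  0, np.inf)
    # F_0(r) = r^(-lam) * integral should reproduce the Gaussian profile
    assert abs(val / r**lam - np.exp(-r * r / 2)) < 1e-8
```

This is the classical identity $\int_0^\infty e^{-s^2/2} J_\nu(rs)\, s^{\nu+1}\,ds = r^\nu e^{-r^2/2}$, consistent with the normalization displayed in the proposition.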
The Dunkl transform diagonalizes each $\mathcal{D}_i$ (just as the Fourier-Plancherel transform diagonalizes $\partial/\partial x_i$). First we prove a symmetry relation with some technical restrictions.
Lemma 7.7.9  Let $f\mathcal{D}_j g$ and $g\mathcal{D}_j f$ be in $L^1(\mathbb{R}^d, h_\kappa^2)$, and suppose that $fg$ tends to zero at infinity. Then
$$\int_{\mathbb{R}^d} (\mathcal{D}_j f)\, g\, h_\kappa^2\,dx = -\int_{\mathbb{R}^d} f\, (\mathcal{D}_j g)\, h_\kappa^2\,dx, \qquad j = 1, \ldots, d.$$
Proof  The following integration by parts is justified by the assumption on $fg$ at infinity. We also require $\kappa_v \ge 1$ for each $v \in R_+$, so that $1/\langle x, v\rangle$ is integrable for $h_\kappa^2\,dx$. After the formula is established, it can be extended to $\kappa_v \ge 0$ by analytic continuation. Now
$$\int_{\mathbb{R}^d} (\mathcal{D}_j f)\, g\, h_\kappa^2\,dx = -\int_{\mathbb{R}^d} f(x)\, \frac{\partial}{\partial x_j}\bigl[g(x)\, h_\kappa(x)^2\bigr]\,dx + \sum_{v \in R_+} \kappa_v v_j \int_{\mathbb{R}^d} \frac{f(x) - f(x\sigma_v)}{\langle x, v\rangle}\, g(x)\, h_\kappa^2(x)\,dx$$
$$= -\int_{\mathbb{R}^d} \Bigl[f(x)\, \frac{\partial}{\partial x_j} g(x) + 2 f(x) \sum_{v \in R_+} \kappa_v \frac{v_j}{\langle x, v\rangle}\, g(x)\Bigr]\, h_\kappa^2(x)\,dx + \sum_{v \in R_+} \kappa_v v_j \int_{\mathbb{R}^d} f(x)\, \frac{g(x) + g(x\sigma_v)}{\langle x, v\rangle}\, h_\kappa^2(x)\,dx = -\int_{\mathbb{R}^d} f\, (\mathcal{D}_j g)\, h_\kappa^2\,dx.$$
In the above, the substitution $x \to x\sigma_v$, for which $\langle x, v\rangle$ becomes $\langle x\sigma_v, v\rangle = \langle x, v\sigma_v\rangle = -\langle x, v\rangle$, was used to show that
$$\int_{\mathbb{R}^d} \frac{f(x\sigma_v)\, g(x)}{\langle x, v\rangle}\, h_\kappa^2(x)\,dx = -\int_{\mathbb{R}^d} \frac{f(x)\, g(x\sigma_v)}{\langle x, v\rangle}\, h_\kappa^2(x)\,dx,$$
since $h_\kappa^2\,dx$ is $W$-invariant.
Theorem 7.7.10  For $f, \mathcal{D}_j f \in L^1(\mathbb{R}^d, h_\kappa^2\,dx)$ and $y \in \mathbb{R}^d$, we have $\widehat{\mathcal{D}_j f}(y) = i y_j\, \widehat f(y)$ for $j = 1, \ldots, d$. The operator $i\mathcal{D}_j$ is densely defined on $L^2(\mathbb{R}^d, h_\kappa^2\,dx)$ and is self-adjoint.
Proof  For fixed $y \in \mathbb{R}^d$ put $g(x) = K(x, -iy)$ in Lemma 7.7.9. Then $\mathcal{D}_j g(x) = -iy_j\, K(x, -iy)$ and $\widehat{\mathcal{D}_j f}(y) = (-1)(-iy_j)\, \widehat f(y)$. The multiplication operator defined by $M_j f(y) = y_j f(y)$, $j = 1, \ldots, d$, is densely defined and self-adjoint on $L^2(\mathbb{R}^d, h_\kappa^2\,dx)$. Further, $i\mathcal{D}_j$ is the inverse image of $M_j$ under the Dunkl transform, an isometric isomorphism.
Corollary 7.7.11  For $f \in L^1(\mathbb{R}^d)$, $j = 1, \ldots, d$ and $g_j(x) = x_j f(x)$, the transform satisfies $\widehat{g_j}(y) = i\mathcal{D}_j \widehat f(y)$, $y \in \mathbb{R}^d$.
Let us consider the example of $\mathbb{Z}_2^2$ on $\mathbb{R}^2$, for which the weight function is $h_{\alpha,\beta}(x+iy) = |x|^\alpha |y|^\beta$. Recall the formula for the intertwining operator $V$ in Theorem 7.5.4. Let us define
$$E_\alpha(x) = c_\alpha \int_{-1}^1 e^{isx}\, (1+s)(1-s^2)^{\alpha-1}\,ds$$
for $\alpha > 0$. Integrating by parts,
$$E_\alpha(x) = c_\alpha \int_{-1}^1 e^{isx}(1-s^2)^{\alpha-1}\,ds + \frac{ix}{2\alpha}\, c_\alpha \int_{-1}^1 e^{isx}(1-s^2)^{\alpha}\,ds = \Gamma\bigl(\alpha+\tfrac12\bigr)\Bigl(\tfrac12|x|\Bigr)^{-\alpha+1/2}\bigl[J_{\alpha-1/2}(|x|) + i\,\operatorname{sign}(x)\, J_{\alpha+1/2}(|x|)\bigr].$$
Using the definition of $K$ and the formula for $V$, we then obtain
$$K(x, iy) = V e^{i\langle x, y\rangle} = E_\alpha(x_1 y_1)\, E_\beta(x_2 y_2).$$
If $\beta = 0$, so that we are dealing with the weight function $h_\alpha(x) = |x|^\alpha$ associated with $\mathbb{Z}_2$, then $K(x, iy) = E_\alpha(x_1 y_1)$. In particular, recall the $\kappa$-Bessel function defined by
$$K_W(x, y) = \frac{1}{|W|} \sum_{w \in W} K(x, yw).$$
Then, in the case of $\mathbb{Z}_2$, we obtain
$$K_W(x, iy) = \tfrac12\bigl[K(x, iy) + K(x, -iy)\bigr] = \Gamma\bigl(\alpha+\tfrac12\bigr)\Bigl(\tfrac12|x_1 y_1|\Bigr)^{-\alpha+1/2} J_{\alpha-1/2}(|x_1 y_1|),$$
so that the transform $\widehat f$ specializes to the classical Hankel transform on $\mathbb{R}_+$.
Let us mention one interesting formula that follows from this set-up. By Corollary 7.4.5,
$$c_h' \int_{S^1} K(x, iy)\, h_{\alpha,\beta}^2(y)\,d\omega(y) = c_{\alpha+\beta+1/2} \int_{-1}^1 e^{i\|x\|s}\, (1-s^2)^{\alpha+\beta-1/2}\,ds.$$
Using the definition of the Bessel function and verifying the constants, we end up with the following formula:
$$\Bigl(\frac{\|x\|}{2}\Bigr)^{-(\alpha+\beta+1/2)} J_{\alpha+\beta+1/2}(\|x\|) = \frac{1}{2\pi} \int_{S^1} E_\alpha(x_1 y_1)\, E_\beta(x_2 y_2)\, h_{\alpha,\beta}^2(y_1, y_2)\,d\omega,$$
which gives an integral transform between Bessel functions.
7.8 Notes
If $\kappa_v = 0$ for all $v$, then h-harmonics reduce to ordinary harmonics. For ordinary harmonics, the definition in (7.1.4) is called Maxwell's representation (see, for example, Müller [1997, p. 69]). For h-harmonics, (7.1.4) is studied in Xu [2000a]. The proofs of Theorems 7.1.15 and 7.1.17 are different from the original proof in Dunkl [1988].
A maximum principle for h-harmonic polynomials was given in Dunkl [1988]: if $f$ is a real non-constant h-harmonic polynomial then it cannot have an absolute minimum in $B^d$; if, in addition, $f$ is $W$-invariant then $f$ cannot have a local minimum in $B^d$.
The inner products were studied in Dunkl [1991]. The idea of using $e^{-\Delta_h/2}$ given in Theorem 7.2.7 first appeared in Macdonald [1982] in connection with the pairing $\langle p, q\rangle$. The integration of the intertwining operator in Section 7.4 was proved and used in Xu [1997a] to study the summability of orthogonal expansions; see Chapter 9.
The $\mathbb{Z}_2^d$ case was studied in Xu [1997b]. The explicit expression for the intertwining operator $V$ in Theorem 7.5.4 is essential for deeper analysis on the sphere; see Dai and Xu [2013] and its references. It remains essentially the only case for which a useful integral expression is known. The monic h-harmonics were studied in Xu [2005c]. An interesting connection with the orthogonal polynomials of the $\mathbb{Z}_2^d$ case appears in Volkmer [1999]. See also Kalnins, Miller and Tratnik [1991] for related families of orthogonal polynomials on the sphere.
The dihedral case was studied in Dunkl [1989a, b]. When $\beta = 0$, the polynomials $f_n$ in Section 7.6 are a special case of the Heisenberg polynomials $E_n^{(\alpha,\beta)}(z)$, defined in Dunkl [1982] through the generating function
$$\sum_{n=0}^\infty E_n^{(\alpha,\beta)}(z)\, t^n = (1-t\bar z)^{-\alpha}(1-tz)^{-\beta}.$$
Many properties of the polynomials $f_n$ can be derived using this generating function, for example the properties in Proposition 7.6.8.
The weight functions in (7.6.2) are special cases of the weights of the so-called generalized Jacobi polynomials; see Badkov [1974] as well as Nevai [1979]. The dihedral symmetry allows us to write down explicit formulae for the orthogonal polynomials, which are available only for some generalized Jacobi weight functions.
The Dunkl transform for reflection-invariant measures was defined and studied in Dunkl [1992]. The positivity of $V$ is not necessary for the proof. When the Dunkl transform is restricted to even functions on the real line, it becomes the Hankel transform. For the classical results on Fourier transforms and Hankel transforms, see Stein and Weiss [1971] and Helgason [1984]. One can define a convolution structure on the weighted space $L^2(h_\kappa^2, \mathbb{R}^d)$ and study harmonic analysis in the Dunkl setting; most deeper results in analysis, however, have been established only in the case of $\mathbb{Z}_2^d$, using the explicit integral formula for $V$; see Thangavelu and Xu [2005] and Dai and Wang [2010].
The only other case for which an explicit formula for the intertwining operator $V$ is known is that of the symmetric group $S_3$, discovered in Dunkl [1995]. It is natural to speculate that $V$ is an integral transform in all cases. However, even in the case of dihedral groups we do not know the complete answer. A partial result in the case of the dihedral group $I_4$ was given in Xu [2000e]. For the case $B_2$, partial results can be found in Dunkl [2007].
8
Generalized Classical Orthogonal Polynomials
In this chapter we study orthogonal polynomials that generalize the classical orthogonal polynomials. These polynomials are orthogonal with respect to weight functions that contain a number of parameters, and they reduce to the classical orthogonal polynomials when some of the parameters go to zero. Further, they satisfy a second-order differential-difference equation which reduces to the differential equation for the classical orthogonal polynomials. The generalized orthogonal polynomials that we consider are those on the unit ball and the simplex, and the generalized Hermite and Laguerre polynomials.
8.1 Generalized Classical Orthogonal Polynomials on the Ball

The classical orthogonal polynomials on the ball $B^d$ were discussed in Section 5.2. We study their generalizations by multiplying the weight function by a reflection-invariant function.
8.1.1 Definition and differential-difference equations

Definition 8.1.1  Let $h_\kappa$ be a reflection-invariant weight function as in Definition 7.1.1. Define a weight function on $B^d$ by
$$W_{\kappa,\mu}^B(x) = h_\kappa^2(x)\,(1-\|x\|^2)^{\mu-1/2}, \qquad \mu > -\tfrac12.$$
The polynomials that are orthogonal with respect to $W_{\kappa,\mu}^B$ are called generalized classical orthogonal polynomials on $B^d$.
Recall that $c_h'$ is the normalization constant for $h_\kappa^2$ over $S^{d-1}$; it can be seen by using polar coordinates that the normalization constant of $W_{\kappa,\mu}^B$ is given by
$$w_{\kappa,\mu}^B = \Bigl(\int_{B^d} W_{\kappa,\mu}^B(x)\,dx\Bigr)^{-1} = \frac{2 c_h'}{\sigma_{d-1}\, B\bigl(\gamma_\kappa+\frac d2,\ \mu+\frac12\bigr)},$$
where $\sigma_{d-1}$ is the surface area of $S^{d-1}$ and $\gamma_\kappa = \sum_{v \in R_+} \kappa_v$ is the homogeneous degree of $h_\kappa$.
If $\kappa = 0$, that is, if $W$ is the orthogonal group, then $h_\kappa(x) = 1$ and the weight function $W_{\kappa,\mu}^B(x)$ reduces to $W_\mu^B(x) = (1-\|x\|^2)^{\mu-1/2}$, the weight function of the classical orthogonal polynomials on the ball.
We will study these polynomials by relating them to h-harmonics. Recall Theorem 4.2.4, which gives a correspondence between orthogonal polynomials on $B^d$ and those on $S^d$; using the same notation for polar coordinates,
$$y = r(x, x_{d+1}), \qquad r = \|y\|, \quad (x, x_{d+1}) \in S^d,$$
we associate with $W_{\kappa,\mu}^B$ a weight function
$$h_{\kappa,\mu}(x, x_{d+1}) = h_\kappa(x)\, |x_{d+1}|^\mu = \prod_{v \in R_+} |\langle x, v\rangle|^{\kappa_v}\, |x_{d+1}|^\mu \qquad (8.1.1)$$
defined on $S^d$, which is invariant under the reflection group $W \times \mathbb{Z}_2$. Throughout this section we shall set
$$\lambda_{\kappa,\mu} = \gamma_\kappa + \mu + \frac{d-1}{2} \qquad (8.1.2)$$
and will write $\gamma$ and $\lambda$ for $\gamma_\kappa$ and $\lambda_{\kappa,\mu}$ in the proofs.
The weight function $h_{\kappa,\mu}$ is homogeneous and is S-symmetric on $\mathbb{R}^{d+1}$ (Definition 4.2.1). By Theorem 4.2.4 the orthogonal polynomials
$$Y_\alpha(y) = r^n P_\alpha(x), \qquad P_\alpha \in \mathcal{V}_n^d(W_{\kappa,\mu}^B)$$
(denoted $Y_\alpha^{(1)}$ in Section 4.2) are h-harmonics associated with $h_{\kappa,\mu}$. Consequently they satisfy the equation $\Delta_h^{W\times\mathbb{Z}_2} Y_\alpha = 0$, where we denote the h-Laplacian associated with $h_{\kappa,\mu}$ by $\Delta_h^{W\times\mathbb{Z}_2}$ to distinguish it from the usual h-Laplacian $\Delta_h$.
For $v \in R_+$ (the set of positive roots of $W$), write $\tilde v = (v, 0)$, which is a vector in $\mathbb{R}^{d+1}$. By Theorem 6.4.9, for $y \in \mathbb{R}^{d+1}$,
$$\Delta_h^{W\times\mathbb{Z}_2} f(y) = \Delta f(y) + \sum_{v \in R_+} \kappa_v \Bigl[\frac{2\langle \nabla f(y), \tilde v\rangle}{\langle y, \tilde v\rangle} - \frac{f(y) - f(y\sigma_v)}{\langle y, \tilde v\rangle^2}\, \|\tilde v\|^2\Bigr] + \mu\Bigl[\frac{2}{y_{d+1}}\, \frac{\partial f(y)}{\partial y_{d+1}} - \frac{f(y) - f(y_1, \ldots, y_d, -y_{d+1})}{y_{d+1}^2}\Bigr] \qquad (8.1.3)$$
for $h_{\kappa,\mu}$, where $\nabla = (\partial_1, \ldots, \partial_{d+1})^{\mathsf T}$ is the gradient operator and the reflection $\sigma_v$ acts on the first $d$ variables of $y$. We use $\Delta_h^{W\times\mathbb{Z}_2}$ to derive a second-order differential-difference equation for polynomials orthogonal with respect to $W_{\kappa,\mu}^B$, by a change of variables.
Since the polynomials $Y_\alpha$ defined above are even in $y_{d+1}$, we need to deal only with the upper half space $\{y \in \mathbb{R}^{d+1} : y_{d+1} \ge 0\}$. In order to write the operator for $P_\alpha^n \in \mathcal{V}_n^d(W_{\kappa,\mu}^B)$ in terms of $x \in B^d$, we choose the following mapping:
$$y \mapsto (r, x): \quad y_1 = rx_1, \ \ldots, \ y_d = rx_d, \quad y_{d+1} = r\sqrt{1-x_1^2-\cdots-x_d^2},$$
which is one-to-one from $\{y \in \mathbb{R}^{d+1} : y_{d+1} \ge 0\}$ to itself. We rewrite the h-Laplacian $\Delta_h$ in terms of the new coordinates $(r, x)$.

Proposition 8.1.2  Acting on functions on $\mathbb{R}^{d+1}$ that are even in $y_{d+1}$, the operator $\Delta_h^{W\times\mathbb{Z}_2}$ takes the form
$$\Delta_h^{W\times\mathbb{Z}_2} = \frac{\partial^2}{\partial r^2} + \frac{2\lambda_{\kappa,\mu}+1}{r}\, \frac{\partial}{\partial r} + \frac{1}{r^2}\, \Delta_{h,0}^{W\times\mathbb{Z}_2}$$
in the coordinates $(r, x)$ on $\{y \in \mathbb{R}^{d+1} : y_{d+1} \ge 0\}$, where the spherical part $\Delta_{h,0}^{W\times\mathbb{Z}_2}$, acting on functions of the variables $x$, is given by
$$\Delta_{h,0}^{W\times\mathbb{Z}_2} = \Delta_h - \langle x, \nabla^{(x)}\rangle^2 - 2\lambda_{\kappa,\mu}\, \langle x, \nabla^{(x)}\rangle,$$
in which $\Delta_h$ denotes the h-Laplacian for $h_\kappa$ in $W_{\kappa,\mu}^B$.
Proof  We make the change of variables $y \mapsto (r, x)$ on $\{y \in \mathbb{R}^{d+1} : y_{d+1} \ge 0\}$. Then, by Proposition 4.1.6, we conclude that
$$\Delta = \frac{\partial^2}{\partial r^2} + \frac{d}{r}\, \frac{\partial}{\partial r} + \frac{1}{r^2}\, \Delta_0,$$
where the spherical part $\Delta_0$ is given by
$$\Delta_0 = \Delta^{(x)} - \langle x, \nabla^{(x)}\rangle^2 - (d-1)\langle x, \nabla^{(x)}\rangle; \qquad (8.1.4)$$
here $\Delta^{(x)}$ and $\nabla^{(x)}$ denote the Laplacian and the gradient with respect to $x$, respectively. As shown in the proof of Proposition 4.1.6, the change of variables implies that
$$\frac{\partial}{\partial y_i} = x_i\, \frac{\partial}{\partial r} + \frac1r\Bigl[\frac{\partial}{\partial x_i} - x_i\, \langle x, \nabla^{(x)}\rangle\Bigr], \qquad 1 \le i \le d+1,$$
with $\partial/\partial x_{d+1} = 0$. Since we are assuming that the operator acts on functions that are even in $y_{d+1}$, it follows from (8.1.1) that the last term in $\Delta_h^{W\times\mathbb{Z}_2}$, see (8.1.3), becomes
$$2\mu\, \frac{1}{y_{d+1}}\, \frac{\partial}{\partial y_{d+1}} = 2\mu\Bigl[\frac1r\, \frac{\partial}{\partial r} - \frac{1}{r^2}\, \langle x, \nabla^{(x)}\rangle\Bigr],$$
where we have used the fact that $y_{d+1} = r x_{d+1}$ and $x_{d+1}^2 = 1 - x_1^2 - \cdots - x_d^2$.
Moreover, under the change of variables $y \mapsto (r, x)$ we can use (8.1.1) to verify that
$$\frac{\langle \tilde v, \nabla\rangle}{\langle y, \tilde v\rangle} = \frac1r\, \frac{\partial}{\partial r} + \frac{1}{r^2}\, \frac{\langle v, \nabla^{(x)}\rangle}{\langle x, v\rangle} - \frac{1}{r^2}\, \langle x, \nabla^{(x)}\rangle.$$
Since $\sigma_v$ acts on the first $d$ variables of $y$, the difference part in the sum in equation (8.1.3) becomes $r^{-2}\,\|v\|^2\,(1-\sigma_v)/\langle x, v\rangle^2$. Upon summing over $v \in R_+$, we conclude from (8.1.3) that $\Delta_h^{W\times\mathbb{Z}_2}$ takes the stated form, with $\Delta_{h,0}^{W\times\mathbb{Z}_2}$ given by
$$\Delta_{h,0}^{W\times\mathbb{Z}_2} = \Delta_0 - 2(\mu+\gamma)\langle x, \nabla^{(x)}\rangle + \sum_{v \in R_+} \kappa_v \Bigl[\frac{2\langle v, \nabla^{(x)}\rangle}{\langle x, v\rangle} - \frac{1-\sigma_v}{\langle x, v\rangle^2}\, \|v\|^2\Bigr].$$
The desired formula for $\Delta_{h,0}^{W\times\mathbb{Z}_2}$ then follows from the formula for $\Delta_0$ in (8.1.4) and from the formula for the h-Laplacian in Theorem 6.4.9.
Recall that $\mathcal{V}_n^d(W_{\kappa,\mu}^B)$ stands for the space of orthogonal polynomials of degree $n$ with respect to the weight function $W_{\kappa,\mu}^B$ on $B^d$.
Theorem 8.1.3  The orthogonal polynomials in the space $\mathcal{V}_n^d(W_{\kappa,\mu}^B)$ satisfy the differential-difference equation
$$\bigl(\Delta_h - \langle x, \nabla\rangle^2 - 2\lambda_{\kappa,\mu}\, \langle x, \nabla\rangle\bigr) P = -n(n+2\lambda_{\kappa,\mu})\, P,$$
where $\nabla$ is the ordinary gradient operator acting on $x \in \mathbb{R}^d$.
Proof  Let $P_\alpha$ be an orthogonal polynomial in $\mathcal{V}_n^d(W_{\kappa,\mu}^B)$. The homogeneous polynomial $Y_\alpha(y) = r^n P_\alpha(x)$ satisfies the equation $\Delta_h^{W\times\mathbb{Z}_2} Y_\alpha = 0$. Since $Y_\alpha$ is homogeneous of degree $n$, using Proposition 8.1.2 we conclude that
$$0 = \Delta_h^{W\times\mathbb{Z}_2} Y_\alpha(y) = r^{n-2}\bigl[n(n+d+2\gamma+2\mu-1)\, P_\alpha(x) + \Delta_{h,0}^{W\times\mathbb{Z}_2} P_\alpha(x)\bigr],$$
and the stated result follows from the formula for $\Delta_{h,0}^{W\times\mathbb{Z}_2}$.
Written explicitly, the equation in Theorem 8.1.3 takes the form
$$\Delta_h P - \sum_{j=1}^d \frac{\partial}{\partial x_j}\, x_j \Bigl[(2\gamma_\kappa+2\mu-1)\, P + \sum_{i=1}^d x_i\, \frac{\partial P}{\partial x_i}\Bigr] = -(n+d)(n+2\gamma_\kappa+2\mu-1)\, P.$$
In particular, if $W$ is the orthogonal group $O(d)$ or $h_\kappa(x) = 1$, then $\Delta_h = \Delta$ and the difference part of the equation disappears. In that case the weight function $W_{\kappa,\mu}^B$ becomes the classical weight function $W_\mu(x) = (1-\|x\|^2)^{\mu-1/2}$, and the differential-difference equation becomes the second-order differential equation (5.2.3) satisfied by the classical orthogonal polynomials in Section 5.2.
Corollary 8.1.4  Under the correspondence $Y(y) = r^n P(x)$, the partial differential equation satisfied by the classical orthogonal polynomials on $B^d$ is equivalent to $\Delta_h Y = 0$ for $h_\mu(y) = |y_{d+1}|^\mu$.
The weight function given below is invariant under an abelian group and is more general than $W_\mu$.

The abelian group $\mathbb{Z}_2^d$  The weight function is
$$W_{\kappa,\mu}^B(x) = \prod_{i=1}^d |x_i|^{2\kappa_i}\, (1-\|x\|^2)^{\mu-1/2}. \qquad (8.1.5)$$
The normalization constant of this weight function is given by
$$\int_{B^d} W_{\kappa,\mu}^B(x)\,dx = \frac{\Gamma(\kappa_1+\frac12)\cdots\Gamma(\kappa_d+\frac12)\, \Gamma(\mu+\frac12)}{\Gamma\bigl(|\kappa|+\mu+\frac{d+1}{2}\bigr)}.$$
The h-Laplacian associated with $h_\kappa^2(x) := \prod_{i=1}^d |x_i|^{2\kappa_i}$ is given by
$$\Delta_h = \Delta + \sum_{i=1}^d \kappa_i \Bigl[\frac{2}{x_i}\, \frac{\partial}{\partial x_i} - \frac{1-\sigma_i}{x_i^2}\Bigr], \qquad (8.1.6)$$
where $\sigma_i$ is defined by $x\sigma_i = x - 2x_i e_i$ and $e_i$ is the $i$th coordinate vector. We just give $\Delta_h$ here, since the differential-difference equation can easily be written down using $\Delta_h$.
Two other interesting cases are the symmetric group $S_d$ and the hyperoctahedral group $W_d$. We state the corresponding weight functions and operators below.
The symmetric group $S_d$  The weight function is
$$W_{\kappa,\mu}^B(x) = \prod_{1\le i<j\le d} |x_i - x_j|^{2\kappa}\, (1-\|x\|^2)^{\mu-1/2} \qquad (8.1.7)$$
and the corresponding h-Laplacian is given by
$$\Delta_h = \Delta + 2\kappa \sum_{1\le i<j\le d} \frac{1}{x_i - x_j}\Bigl[(\partial_i - \partial_j) - \frac{1-(i,j)}{x_i - x_j}\Bigr], \qquad (8.1.8)$$
where $(i,j) f(x)$ means that $x_i$ and $x_j$ in $f(x)$ are exchanged.
The hyperoctahedral group $W_d$  The weight function is
$$W_{\kappa,\mu}^B(x) = \prod_{i=1}^d |x_i|^{2\kappa'} \prod_{1\le i<j\le d} |x_i^2 - x_j^2|^{2\kappa}\, (1-\|x\|^2)^{\mu-1/2}, \qquad (8.1.9)$$
where $\kappa$ and $\kappa'$ are two nonnegative real numbers.
Since the group $W_d$ has two conjugacy classes of reflections (see Subsection 6.2.2), there are two parameters for $h_\kappa$. The corresponding h-Laplacian is given by
$$\Delta_h = \Delta + \kappa' \sum_{i=1}^d \Bigl[\frac{2}{x_i}\, \frac{\partial}{\partial x_i} - \frac{1-\sigma_i}{x_i^2}\Bigr] + 2\kappa \sum_{1\le i<j\le d} \Bigl[\frac{\partial_i - \partial_j}{x_i - x_j} - \frac{1-\sigma_{ij}}{(x_i - x_j)^2}\Bigr] + 2\kappa \sum_{1\le i<j\le d} \Bigl[\frac{\partial_i + \partial_j}{x_i + x_j} - \frac{1-\tau_{ij}}{(x_i + x_j)^2}\Bigr], \qquad (8.1.10)$$
where $\sigma_i$, $\sigma_{ij}$ and $\tau_{ij}$ denote the reflections in the hyperoctahedral group $W_d$ defined by $x\sigma_i = x - 2x_i e_i$, $x\sigma_{ij} = x(i,j)$ and $\tau_{ij} = \sigma_i\sigma_{ij}\sigma_i$, and the same notation is used to denote the operators defined by the action of these reflections.
Note that the equations on $B^d$ are derived from the corresponding h-Laplacian in a similar way to the deduction of Proposition 7.5.3 for Jacobi polynomials.
8.1.2 Orthogonal basis and reproducing kernel
Just as in Section 5.2 for the classical orthogonal polynomials, an orthonormal basis of $\mathcal{V}_n^d(W_{\kappa,\mu}^B)$ can be given in terms of the h-harmonics associated with $h_\kappa$ on $S^{d-1}$ and the generalized Gegenbauer polynomials (Subsection 1.5.2), using polar coordinates. Let $\lambda_\kappa = \gamma_\kappa + \frac{d-1}{2}$, so that $\lambda_{\kappa,\mu} = \lambda_\kappa + \mu$.
Proposition 8.1.5  For $0 \le j \le \frac n2$ let $\{Y_{\beta, n-2j}^h\}$ denote an orthonormal basis of $\mathcal{H}_{n-2j}^d(h_\kappa^2)$; then the polynomials
$$P_{\beta, j}^n(W_{\kappa,\mu}^B; x) = (c_{j,n}^B)^{-1}\, P_j^{(\mu-1/2,\ n-2j+\lambda_\kappa-1/2)}(2\|x\|^2-1)\, Y_{\beta, n-2j}^h(x)$$
form an orthonormal basis of $\mathcal{V}_n^d(W_{\kappa,\mu}^B)$; the constant is given by
$$[c_{j,n}^B]^2 = \frac{(\mu+\frac12)_j\, (\lambda_\kappa+\frac12)_{n-j}\, (n-j+\lambda_{\kappa,\mu})}{j!\, (\lambda_{\kappa,\mu}+1)_{n-j}\, (n+\lambda_{\kappa,\mu})}.$$
The proof of this proposition is almost identical to that of Proposition 5.2.1.
Using the correspondence between orthogonal polynomials on $B^d$ and on $S^d$ in (4.2.2), an orthogonal basis of $\mathcal{V}_n^d(W_{\kappa,\mu}^B)$ can also be derived from the h-harmonics associated with $h_{\kappa,\mu}$ on $S^d$. For a general reflection group, however, it is not easy to write down an explicit orthogonal basis. We give one explicit basis in a Rodrigues-type formula, in which the partial derivatives are replaced by Dunkl operators.
Definition 8.1.6  For $\alpha \in \mathbb{N}_0^d$, define
$$U_\alpha(x) := (1-\|x\|^2)^{-\mu+1/2}\, \mathcal{D}^\alpha (1-\|x\|^2)^{|\alpha|+\mu-1/2}.$$
When all the $\kappa_v$ are $0$, $\mathcal{D}^\alpha = \partial^\alpha$ and we are back to the orthogonal polynomials of degree $n$ in $d$ variables seen in Proposition 5.2.5.
Proposition 8.1.7  Let $\alpha \in \mathbb{N}_0^d$ and $|\alpha| = n$. Then $U_\alpha$ is a polynomial of degree $n$, and we have the recursive formula
$$U_{\alpha+e_i}(x) = -(2\mu+1)\, x_i\, U_\alpha^{\mu+1}(x) + (1-\|x\|^2)\, \mathcal{D}_i U_\alpha^{\mu+1}(x),$$
where the superscript indicates that $U_\alpha^{\mu+1}$ is defined with $\mu$ replaced by $\mu+1$.
Proof Since (1|x|
2
)
a
is invariant under the reection group, a simple compu-
tation shows that
264 Generalized Classical Orthogonal Polynomials
D
i
U
+1

(x) =
i
_
(1|x|
2
)
1/2
_
D

(1|x|
2
)
[[++1/2
+(1|x|
2
)
1/2
D
+e
i
(1|x|
2
)
[[++1/2
=(2 +1)x
i
(1|x|
2
)
1
U
+1

(x) +(1|x|
2
)
1
U

+e
i
(x),
which proves the recurrence relation. That U

is a polynomial of degree [[ is a
consequence of this relation, which can be proved by induction if necessary.
The following proposition plays a key role in proving that $U_\alpha$ is in fact an orthogonal polynomial with respect to $W_{\kappa,\mu}^B$ on $B^d$.
Proposition 8.1.8  Assume that $p$ and $q$ are continuous functions and that $p$ vanishes on the boundary of $B^d$. Then
$$\int_{B^d} \mathcal{D}_i p(x)\, q(x)\, h_\kappa^2(x)\,dx = -\int_{B^d} p(x)\, \mathcal{D}_i q(x)\, h_\kappa^2(x)\,dx.$$
Proof  The proof is similar to that of Theorem 7.1.18, so we shall be brief. Assume that $\kappa_v \ge 1$; analytic continuation can be used to extend the range of validity to $\kappa_v \ge 0$. Integration by parts shows that
$$\int_{B^d} \partial_i p(x)\, q(x)\, h_\kappa^2(x)\,dx = -\int_{B^d} p(x)\, \partial_i\bigl[q(x)\, h_\kappa^2(x)\bigr]\,dx = -\int_{B^d} p(x)\, \partial_i q(x)\, h_\kappa^2(x)\,dx - 2\int_{B^d} p(x)\, q(x)\, h_\kappa(x)\, \partial_i h_\kappa(x)\,dx.$$
Let $\widetilde{\mathcal{D}}_i$ denote the difference part of $\mathcal{D}_i$, so that $\mathcal{D}_i = \partial_i + \widetilde{\mathcal{D}}_i$. A simple computation shows that
$$\int_{B^d} \widetilde{\mathcal{D}}_i p(x)\, q(x)\, h_\kappa^2(x)\,dx = \int_{B^d} p(x) \sum_{v \in R_+} \kappa_v v_i\, \frac{q(x) + q(x\sigma_v)}{\langle x, v\rangle}\, h_\kappa^2(x)\,dx.$$
Adding these two equations and using the fact that
$$h_\kappa(x)\, \partial_i h_\kappa(x) = \sum_{v \in R_+} \kappa_v v_i\, \frac{1}{\langle v, x\rangle}\, h_\kappa^2(x)$$
completes the proof.
Theorem 8.1.9  For $\alpha \in \mathbb{N}_0^d$ with $|\alpha| = n$, the polynomials $U_\alpha$ are elements of $\mathcal{V}_n^d(W_{\kappa,\mu}^B)$.
Proof  For each $\beta \in \mathbb{N}_0^d$ with $|\beta| < n$, we claim that
$$\mathcal{D}^\beta (1-\|x\|^2)^{n+\mu-1/2} = (1-\|x\|^2)^{n-|\beta|+\mu-1/2}\, Q_\beta(x) \qquad (8.1.11)$$
for some $Q_\beta \in \Pi_{|\beta|}^d$. This follows by induction. The case $\beta = 0$ is trivial, with $Q_0(x) = 1$. Assume that (8.1.11) holds for some $\beta$ with $|\beta| < n-1$. Then
$$\mathcal{D}^{\beta+e_i} (1-\|x\|^2)^{n+\mu-1/2} = \mathcal{D}_i\bigl[(1-\|x\|^2)^{n-|\beta|+\mu-1/2}\, Q_\beta(x)\bigr] = \partial_i\bigl[(1-\|x\|^2)^{n-|\beta|+\mu-1/2}\bigr]\, Q_\beta(x) + (1-\|x\|^2)^{n-|\beta|+\mu-1/2}\, \mathcal{D}_i Q_\beta(x)$$
$$= (1-\|x\|^2)^{n-|\beta|+\mu-3/2}\bigl[-2\bigl(n-|\beta|+\mu-\tfrac12\bigr)\, x_i\, Q_\beta(x) + (1-\|x\|^2)\, \mathcal{D}_i Q_\beta(x)\bigr] = (1-\|x\|^2)^{n-|\beta+e_i|+\mu-1/2}\, Q_{\beta+e_i}(x),$$
where $Q_{\beta+e_i}$, as defined above, is clearly a polynomial of degree $|\beta|+1$; this completes the inductive proof. The identity (8.1.11) shows, in particular, that $\mathcal{D}^\beta (1-\|x\|^2)^{n+\mu-1/2}$ is a function that vanishes on the boundary of $B^d$ if $|\beta| < n$.
For any polynomial $p \in \Pi_{n-1}^d$, using Proposition 8.1.8 repeatedly gives
$$\int_{B^d} U_\alpha(x)\, p(x)\, h_\kappa^2(x)\, (1-\|x\|^2)^{\mu-1/2}\,dx = \int_{B^d} \mathcal{D}^\alpha\bigl[(1-\|x\|^2)^{n+\mu-1/2}\bigr]\, p(x)\, h_\kappa^2(x)\,dx = (-1)^n \int_{B^d} \mathcal{D}^\alpha p(x)\, (1-\|x\|^2)^{n+\mu-1/2}\, h_\kappa^2(x)\,dx = 0,$$
since $\mathcal{D}_i : \Pi_n^d \to \Pi_{n-1}^d$ implies that $\mathcal{D}^\alpha p(x) = 0$ for $p \in \Pi_{n-1}^d$.
There are other orthogonal bases of $\mathcal{V}_n^d(W_{\kappa,\mu}^B)$, but few can be given explicitly for a general reflection group. For $d \ge 2$ the only exceptional case is $\mathbb{Z}_2^d$, which will be discussed in the next subsection.
The correspondence in (4.2.2) allows us to state a closed formula for the reproducing kernel $P_n(W_{\kappa,\mu}^B)$ of $\mathcal{V}_n^d(W_{\kappa,\mu}^B)$ in terms of the intertwining operator $V_\kappa$ associated with $h_\kappa$.
Theorem 8.1.10  Let $V_\kappa$ be the intertwining operator associated with $h_\kappa$. Then, for the normalized weight function $W_{\kappa,\mu}^B$ in Definition 8.1.1,
$$P_n(W_{\kappa,\mu}^B; x, y) = \frac{n+\lambda_{\kappa,\mu}}{\lambda_{\kappa,\mu}}\, c_\mu\, V_\kappa\Bigl[\int_{-1}^1 C_n^{\lambda_{\kappa,\mu}}\bigl(\langle x, \cdot\rangle + t\sqrt{1-\|x\|^2}\sqrt{1-\|y\|^2}\bigr)\, (1-t^2)^{\mu-1}\,dt\Bigr](y),$$
where $c_\mu = [B(\frac12, \mu)]^{-1}$.
Proof  The weight function $h_{\kappa,\mu}$ in (8.1.1) is invariant under the group $W \times \mathbb{Z}_2$. Its intertwining operator, denoted by $V_{\kappa,\mu}$, satisfies the relation $V_{\kappa,\mu} = V_\kappa \otimes V_\mu^{\mathbb{Z}_2}$, where $V_\kappa$ is the intertwining operator associated with $h_\kappa$ and $V_\mu^{\mathbb{Z}_2}$ is the intertwining operator associated with $h_\mu(y) = |y_{d+1}|^\mu$. Hence, it follows from the closed formula for $V_\mu^{\mathbb{Z}_2}$ in Theorem 7.5.4 that
$$V_{\kappa,\mu} f(x, x_{d+1}) = c_\mu \int_{-1}^1 V_\kappa[f(\cdot, x_{d+1} t)](x)\, (1+t)(1-t^2)^{\mu-1}\,dt.$$
Using the fact that $P_n(W_{\kappa,\mu}^B)$ is the kernel corresponding to h-harmonics that are even in $y_{d+1}$, as in Theorem 4.2.8, the formula follows from Corollary 7.3.2.
8.1.3 Orthogonal polynomials for $\mathbb{Z}_2^d$-invariant weight functions
-invariant weight functions
In this subsection we consider the orthogonal polynomials associated with the
weight function W
B
,
dened in (8.1.5), which are invariant under Z
d
2
. Here

=[[ +
d 1
2
and
,
=

+.
We start with an orthonormal basis for $\mathcal{V}_n^d(W_{\kappa,\mu}^B)$ derived from the basis of h-harmonics in Theorem 7.5.2. Use the same notation as in Section 5.2: associated with $x = (x_1, \ldots, x_d) \in \mathbb{R}^d$, define $x_j = (x_1, \ldots, x_j)$ for $1 \le j \le d$ and $x_0 = 0$. Associated with $\kappa = (\kappa_1, \ldots, \kappa_d)$, define $\kappa^j = (\kappa_j, \ldots, \kappa_d)$, and similarly for $\alpha \in \mathbb{N}_0^d$.
Theorem 8.1.11  For $d \ge 2$, $\alpha \in \mathbb{N}_0^d$ and $|\alpha| = n$, define
$$P_\alpha^n(W_{\kappa,\mu}^B; x) = (h_{\alpha,n}^B)^{-1} \prod_{j=1}^d \Bigl[(1-\|x_{j-1}\|^2)^{\alpha_j/2}\, C_{\alpha_j}^{(\mu+a_j,\ \kappa_j)}\Bigl(\frac{x_j}{\sqrt{1-\|x_{j-1}\|^2}}\Bigr)\Bigr],$$
where $|\alpha^j| = \alpha_j + \cdots + \alpha_d$, $a_j = |\alpha^{j+1}| + |\kappa^{j+1}| + \frac{d-j}{2}$ and
$$\bigl[h_{\alpha,n}^B\bigr]^2 = \frac{1}{\bigl(|\kappa|+\mu+\frac{d+1}{2}\bigr)_n} \prod_{j=1}^d h_{\alpha_j}^{(\mu+a_j,\ \kappa_j)}\, (\kappa_j + a_j)_{\alpha_j},$$
$h_n^{(\lambda,\mu)}$ being the normalization constant of $C_n^{(\lambda,\mu)}$. Then $\{P_\alpha^n(W_{\kappa,\mu}^B) : |\alpha| = n\}$ is an orthonormal basis of $\mathcal{V}_n^d(W_{\kappa,\mu}^B)$ for $W_{\kappa,\mu}^B$ in (8.1.5).
Proof  Using the correspondence in (4.2.2), the statement follows from Theorem 7.5.2 upon replacing $d$ by $d+1$ and taking $\kappa_{d+1} = \mu$; we need only consider $\alpha_{d+1} = 0$.
In particular, when $\kappa = 0$ this basis reduces to the basis in Proposition 5.2.2 for the classical orthogonal polynomials.
The correspondence in Theorem 4.2.4 can also be used to define monic orthogonal polynomials. Since $x_{d+1}^2 = 1-\|x\|^2$ for $(x, x_{d+1}) \in S^d$, the monic h-harmonics $\widetilde R_\alpha(y)$ in Definition 7.5.8, with $d$ replaced by $d+1$, become monic polynomials in $x$ if $\alpha = (\alpha_1, \ldots, \alpha_d, 0)$ and $y = r(x, x_{d+1})$, by the correspondence (4.2.2). This suggests the following definition.
Definition 8.1.12  For $b \in \mathbb{R}^d$ such that $\max_j |b_j| \le 1$, define polynomials $\widetilde R_\alpha^B(x)$, $\alpha \in \mathbb{N}_0^d$ and $x \in B^d$, by
$$c_\kappa \int_{[-1,1]^d} \frac{1}{\bigl[1-2z(b, x, t)+\|b\|^2\bigr]^{\lambda_{\kappa,\mu}}} \prod_{i=1}^d (1+t_i)(1-t_i^2)^{\kappa_i-1}\,dt = \sum_{\alpha \in \mathbb{N}_0^d} b^\alpha\, \widetilde R_\alpha^B(x),$$
where $z(b, x, t) = b_1 x_1 t_1 + \cdots + b_d x_d t_d$.
The polynomials $\widetilde R_\alpha^B$ with $|\alpha| = n$ form a basis of the subspace of orthogonal polynomials of degree $n$ with respect to $W_{\kappa,\mu}^B$. It is given by an explicit formula, as follows.
Proposition 8.1.13 For N
d
0
, let =
+1
2
|. Then

R
B

(x) =
2
[[
(
,
)
[[
!
(
1
2
)

( +
1
2
)

R
B

(x),
where R

(x) is dened by
R

(x) = x

F
B
_
, + +
1
2
; [[
,
+1;
1
x
2
1
, . . . ,
1
x
2
d
_
.
In particular, R
B

(x) = x

(x), Q


d
n1
, is the monic orthogonal polyno-
mial with respect to W
B

on B
d
.
Proof Setting b
d+1
=0 and |x| =1 in the generating function in Denition 7.5.8
shows that the generating function for

R
B

is the same as that for



R
(,0)
(x). Conse-
quently

R
B

(x) =

R
(,0)
(x, x
d+1
) for (x, x
d+1
) S
d
. Since

R
(,0)
(x, x
d+1
) is even in
its (d +1)th variable, the correspondence in (4.2.2) shows that

R
B

is orthogonal
and that its properties can be derived from those of

R

.
In particular, if $\kappa_i = 0$ for $i = 1,\dots,d$ and $\kappa_{d+1} = \mu$, so that $W^B_\kappa$ becomes the classical weight function $(1-\|x\|^2)^{\mu-1/2}$, then the limit relation (1.5.1) shows that in the limit the generating function becomes the generating function (5.2.5), so that $R^B_\alpha$ coincides with Appell's orthogonal polynomials $V_\alpha$.
The $L^2$ norm of $R^B_\alpha$ can also be deduced from the correspondence (4.2.2). Recall that $w^B_{\kappa,\mu}$ is the normalization constant of the weight function $W^B_{\kappa,\mu}$ in (8.1.5). We denote the normalized $L^2$ norm with respect to $W^B_{\kappa,\mu}$ by

$$\|f\|_{2,B} = \Bigl( w^B_{\kappa,\mu} \int_{B^d} |f(x)|^2\, W^B_{\kappa,\mu}(x)\,dx \Bigr)^{1/2}.$$
As in the case of $h$-spherical harmonics, the polynomial $x^\alpha - R^B_\alpha = Q_\alpha$ is the best approximation to $x^\alpha$ from the polynomial space $\Pi^d_{n-1}$ in the $L^2$ norm. A restatement is the first part of the following theorem.

Theorem 8.1.14 The polynomial $R^B_\alpha$ has the smallest $\|\cdot\|_{2,B}$ norm among all polynomials of the form $x^\alpha - P(x)$, $P \in \Pi^d_{n-1}$. Furthermore, for $\alpha \in \mathbb{N}_0^d$,

$$\|R^B_\alpha\|^2_{2,B} = \frac{2\lambda_{\kappa,\mu} \prod_{i=1}^d \bigl(\kappa_i + \frac12\bigr)_{\alpha_i}}{(\lambda_{\kappa,\mu})_{|\alpha|}} \int_0^1 \prod_{i=1}^d \frac{C^{(1/2,\kappa_i)}_{\alpha_i}(t)}{k^{(1/2,\kappa_i)}_{\alpha_i}}\, t^{|\alpha| + 2\lambda_{\kappa,\mu} - 1}\,dt,$$

where $k^{(1/2,\kappa_i)}_{\alpha_i}$ denotes the leading coefficient of $C^{(1/2,\kappa_i)}_{\alpha_i}$.
Proof Since the monic orthogonal polynomial $R^B_\alpha$ is related to the $h$-harmonic polynomial $R_{(\alpha,0)}$ in Subsection 7.5.3 by the formula $R^B_\alpha(x) = R_{(\alpha,0)}(x, x_{d+1})$,
268 Generalized Classical Orthogonal Polynomials
$(x, x_{d+1}) \in S^d$, it follows readily from Lemma 4.2.3 that the norm of $R^B_\alpha$ can be deduced from that of $R_{(\alpha,0)}$ in Theorem 7.5.13 directly.
For the classical weight function $W_\mu(x) = (1-\|x\|^2)^{\mu-1/2}$, the norm of $R^B_\alpha$ can be expressed as an integral of a product of Legendre polynomials $P_n(t) = C^{1/2}_n(t)$. Equivalently, as the best approximation in the $L^2$ norm, it gives the following.

Corollary 8.1.15 Let $\lambda_\mu = \mu + \frac{d-1}{2} > 0$ and $n = |\alpha|$ for $\alpha \in \mathbb{N}_0^d$. For the classical weight function $W_\mu(x) = (1-\|x\|^2)^{\mu-1/2}$ on $B^d$,

$$\min_{Q \in \Pi^d_{n-1}} \|x^\alpha - Q(x)\|^2_{2,B} = \frac{\alpha!\,\lambda_\mu}{2^{n-1}(\lambda_\mu)_n} \int_0^1 \prod_{i=1}^d P_{\alpha_i}(t)\, t^{n + 2\lambda_\mu - 1}\,dt.$$

Proof Set $\kappa_i = 0$ for $1 \le i \le d$ and $\mu = \kappa_{d+1}$ in the formula in Theorem 8.1.14. The stated formula follows from $(1)_{2n} = 2^{2n}(\frac12)_n(1)_n$, $\alpha! = (1)_\alpha$ and the fact that $C^{(1/2,0)}_m(t) = C^{1/2}_m(t) = P_m(t)$.
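For $d = 1$ and $\mu = 1$ (so $\lambda_\mu = 1$, weight $\sqrt{1-x^2}$), both sides of the corollary can be computed independently; with $\alpha = n = 3$ the common value is $1/64$, the squared norm of the monic Chebyshev polynomial $U_3/8 = x^3 - x/2$. The sketch below assumes only Gauss–Chebyshev and Gauss–Legendre quadrature, both exact for the polynomial integrands involved.

```python
import numpy as np
from math import factorial, pi

# d = 1, mu = 1: weight (1 - x^2)^{1/2} on [-1, 1], normalization 2/pi,
# lambda_mu = mu + (d-1)/2 = 1.  Gauss-Chebyshev (second kind) nodes give an
# exact normalized quadrature for this weight.
N = 20
theta = np.arange(1, N + 1) * pi / (N + 1)
x, w = np.cos(theta), (2.0 / (N + 1)) * np.sin(theta) ** 2  # weights sum to 1

n = 3
V = np.vander(x, n, increasing=True)                 # 1, x, x^2
G = (V * w[:, None]).T @ V
coef = np.linalg.solve(G, (V * w[:, None]).T @ x**n)
lhs = np.sum((x**n - V @ coef) ** 2 * w)             # min ||x^n - Q||^2, deg Q <= 2

lam = 1.0
t, wt = np.polynomial.legendre.leggauss(20)
t01 = (t + 1) / 2                                    # map nodes to [0, 1]
P3 = 0.5 * (5 * t01**3 - 3 * t01)                    # Legendre P_3
integral = np.sum(P3 * t01 ** (n + 2 * lam - 1) * wt) / 2
rhs = factorial(n) * lam / (2 ** (n - 1) * (lam * (lam + 1) * (lam + 2))) * integral
print(lhs, rhs)  # both equal 1/64
```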
8.1.4 Reproducing kernel for $\mathbb{Z}_2^d$-invariant weight functions

In the case of the space $\mathcal{V}^d_n(W^B_{\kappa,\mu})$ associated with $\mathbb{Z}_2^d$, we can use the explicit formula for $V$ in Theorem 7.5.4 to give an explicit formula for the reproducing kernel.

Theorem 8.1.16 For the normalized weight function $W^B_{\kappa,\mu}$ in (8.1.5) associated with the abelian group $\mathbb{Z}_2^d$,

$$P_n(W^B_{\kappa,\mu}; x, y) = \frac{n+\lambda_{\kappa,\mu}}{\lambda_{\kappa,\mu}} \int_{-1}^1 \int_{[-1,1]^d} C^{\lambda_{\kappa,\mu}}_n\Bigl( t_1x_1y_1 + \dots + t_dx_dy_d + s\sqrt{1-\|x\|^2}\sqrt{1-\|y\|^2} \Bigr) \prod_{i=1}^d c_{\kappa_i}(1+t_i)(1-t_i^2)^{\kappa_i-1}\,dt\; c_\mu(1-s^2)^{\mu-1}\,ds.$$

Proof This is a consequence of Theorem 4.2.8 and the explicit formula in Theorem 7.5.5, in which we again replace $d$ by $d+1$ and let $\kappa_{d+1} = \mu$.

In particular, taking the limit $\kappa_i \to 0$ for $i = 1,\dots,d$ and using (1.5.1) repeatedly, the above theorem becomes Theorem 5.2.8, which was stated without proof in Section 5.2.
In the case of $\mathbb{Z}_2^d$, we can also derive an analogue of the Funk–Hecke formula for $W^B_{\kappa,\mu}$ on the ball. Let us first recall that the intertwining operator associated with the weight function $h_{\kappa,\mu}(x) = \prod_{i=1}^d |x_i|^{\kappa_i}\, |x_{d+1}|^{\mu}$ is given by

$$Vf(x) = \int_{[-1,1]^{d+1}} f(t_1x_1,\dots,t_{d+1}x_{d+1}) \prod_{i=1}^{d+1} c_{\kappa_i}(1+t_i)(1-t_i^2)^{\kappa_i-1}\,dt,$$
with $\kappa_{d+1} = \mu$. Associated with this operator, define

$$V^B f(x) = \int_{[-1,1]^d} f(t_1x_1,\dots,t_dx_d) \prod_{i=1}^d c_{\kappa_i}(1+t_i)(1-t_i^2)^{\kappa_i-1}\,dt.$$

Then the Funk–Hecke formula on the ball $B^d$ is given by the following theorem.

Theorem 8.1.17 Let $W^B_{\kappa,\mu}$ be defined as in (8.1.5). Let $f$ be a continuous function on $[-1,1]$ and $P_n \in \mathcal{V}^d_n(W^B_{\kappa,\mu})$. Then

$$w^B_{\kappa,\mu} \int_{B^d} V^Bf(\langle x, \cdot\rangle)(y)\, P_n(y)\, W^B_{\kappa,\mu}(y)\,dy = \lambda_n(f)\, P_n(x), \qquad \|x\| = 1,$$

where $\lambda_n(f)$ is the same as in Theorem 7.3.4 with $\lambda_\kappa$ replaced by $\lambda_{\kappa,\mu}$.
Proof From the definition of $V^Bf$ and the definition of $Vf$ with $\kappa_{d+1} = \mu$, it follows that

$$V^Bf\bigl(\langle x, \cdot\rangle\bigr)(y) = Vf\bigl(\langle (x, x_{d+1}), \cdot\rangle\bigr)(y, 0) = Vf\bigl(\langle (x, 0), \cdot\rangle\bigr)(y, y_{d+1}),$$

where we take $x \in B^d$ and $x_{d+1} = \sqrt{1-\|x\|^2}$, so that $(x, x_{d+1}) \in S^d$. In the following we also write $y \in B^d$ and $(y, y_{d+1}) \in S^d$. Then, using the relation between $P_n$ and the $h$-harmonic $Y_n$ in (4.2.2) and the integration formula in Lemma 4.2.3, it follows that

$$\int_{B^d} V^Bf(\langle x, \cdot\rangle)(y)\, P_n(y)\, W^B_{\kappa,\mu}(y)\,dy = \int_{S^d} Vf(\langle (x,0), \cdot\rangle)(y, y_{d+1})\, Y_n(y, y_{d+1})\, h^2_\kappa(y, y_{d+1})\,d\omega = \lambda_n(f)\, Y_n(x, 0) = \lambda_n(f)\, P_n(x),$$

where the second equality follows from the Funk–Hecke formula of Theorem 7.3.4, which requires that $\|(x,0)\| = \|x\| = 1$.
The most interesting case of the formula occurs when $\kappa = 0$, that is, in the case of the classical weight function $W^B_\mu$ on $B^d$. Indeed, taking the limit $\kappa_i \to 0$, we have that $V^B = \mathrm{id}$ for $W^B_\mu$. Hence we have the following corollary, in which $\lambda_\mu = \mu + \frac{d-1}{2}$.

Corollary 8.1.18 Let $f$ be a continuous function on $[-1,1]$. For $W^B_\mu(x) = (1-\|x\|^2)^{\mu-1/2}$, let $P_n \in \mathcal{V}^d_n(W^B_\mu)$. Then

$$w^B_\mu \int_{B^d} f(\langle x, y\rangle)\, P_n(y)\, W^B_\mu(y)\,dy = \lambda_n(f)\, P_n(x), \qquad \|x\| = 1,$$

where $\lambda_n(f)$ is the same as in Theorem 7.3.4 but with $\lambda_\kappa$ replaced by $\lambda_\mu$.
An interesting consequence of this corollary and Theorem 5.2.8 is the following proposition.

Proposition 8.1.19 Let $\xi$ satisfy $\|\xi\| = 1$; then $C^{\lambda_\mu}_n(\langle x, \xi\rangle)$ is an element of $\mathcal{V}^d_n(W^B_\mu)$. Further, if $\eta$ also satisfies $\|\eta\| = 1$ then

$$w^B_\mu \int_{B^d} C^{\lambda_\mu}_n(\langle x, \xi\rangle)\, C^{\lambda_\mu}_n(\langle x, \eta\rangle)\, W^B_\mu(x)\,dx = \frac{\lambda_\mu}{n+\lambda_\mu}\, C^{\lambda_\mu}_n(\langle \xi, \eta\rangle).$$

Proof Taking $y = \xi$ in the formula of Theorem 5.2.8 gives

$$P_n\bigl(W^B_\mu; x, \xi\bigr) = \frac{n+\lambda_\mu}{\lambda_\mu}\, C^{\lambda_\mu}_n(\langle x, \xi\rangle),$$

which is an element of $\mathcal{V}^d_n(W^B_\mu)$ since $P_n(W^B_\mu)$ is the reproducing kernel. The displayed equation follows from choosing $f = C^{\lambda_\mu}_n$ and $P_n(x) = C^{\lambda_\mu}_n(\langle x, \eta\rangle)$ in Corollary 8.1.18. The constant is

$$\lambda_n = \bigl[C^{\lambda_\mu}_n(1)\bigr]^{-1} \int_{-1}^1 \bigl[C^{\lambda_\mu}_n(t)\bigr]^2\, w_{\lambda_\mu}(t)\,dt,$$

which can be shown, from the structure constants for $C^{\lambda_\mu}_n$ in Subsection 1.4.3, to equal $\lambda_\mu/(n+\lambda_\mu)$.
One may ask how to choose a set $\{\xi_i\} \subset S^{d-1}$ such that the polynomials $C^{\lambda_\mu}_n(\langle x, \xi_i\rangle)$ form a basis for $\mathcal{V}^d_n(W^B_\mu)$ for a given $n$.
Proposition 8.1.20 The set $\{C^{\lambda_\mu}_n(\langle x, \xi_i\rangle) : \|\xi_i\| = 1,\ 1 \le i \le r^d_n\}$ is a basis for $\mathcal{V}^d_n(W^B_\mu)$ if the matrix $A^\mu_n = \bigl(C^{\lambda_\mu}_n(\langle \xi_i, \xi_j\rangle)\bigr)_{i,j=1}^{r^d_n}$ is nonsingular.

Proof This set is a basis for $\mathcal{V}_n(W^B_\mu)$ if and only if the $C^{\lambda_\mu}_n(\langle x, \xi_i\rangle)$ are linearly independent. If $A^\mu_n$ is nonsingular then $\sum_k c_k C^{\lambda_\mu}_n(\langle x, \xi_k\rangle) = 0$, upon setting $x = \xi_j$, implies that $c_k = 0$ for all $k$.
Choosing $\xi_i$ such that $A^\mu_n$ is nonsingular has a strong connection with the distribution of points on the sphere, which is a difficult problem. One may further ask whether it is possible to choose the $\xi_i$ such that the basis is orthonormal. Proposition 8.1.19 shows that the polynomials $C^{\lambda_\mu}_n(\langle x, \xi_i\rangle)$, $1 \le i \le N$, are mutually orthogonal if and only if $C^{\lambda_\mu}_n(\langle \xi_i, \xi_j\rangle) = 0$ for every pair $i, j$ with $i \ne j$; in other words, if the $\langle \xi_i, \xi_j\rangle$ are zeros of the Gegenbauer polynomial $C^{\lambda_\mu}_n(t)$. While it is unlikely that such a system of points exists in $S^{d-1}$ for $d > 2$, the following special result for $d = 2$ is interesting. Recall that $U_n(t)$ denotes the Chebyshev polynomial of the second kind, which is the Gegenbauer polynomial of index 1; see Subsection 1.4.3.
Proposition 8.1.21 For the Lebesgue measure on $B^2$, an orthonormal basis for the space $\mathcal{V}^2_n$ is given by the polynomials

$$U^n_k(x_1, x_2) = \frac{1}{\sqrt{\pi}}\, U_n\Bigl( x_1 \cos\frac{k\pi}{n+1} + x_2 \sin\frac{k\pi}{n+1} \Bigr), \qquad 0 \le k \le n.$$
Proof Recall that $U_n(t) = \sin(n+1)\theta/\sin\theta$ with $t = \cos\theta$; the zeros of $U_n$ are $t_k = \cos\bigl(k\pi/(n+1)\bigr)$ for $k = 1,\dots,n$. Let $\xi_i = (\cos\theta_i, \sin\theta_i)$ with $\theta_i = i\pi/(n+1)$ for $i = 0,1,\dots,n$. Then $\langle \xi_i, \xi_j\rangle = \cos(\theta_i - \theta_j)$ and $\theta_i - \theta_j = \theta_{i-j}$ for $i > j$. Hence, by Proposition 8.1.19, $U_n(\langle x, \xi_i\rangle)$ is orthogonal with respect to the weight function $W_{1/2}(x) = 1/\pi$.
The same argument shows that there does not exist a system of points $\{\xi_i : 1 \le i \le n+1\}$ on $S^1$ such that the polynomials $C^{\mu+1/2}_n(\langle x, \xi_i\rangle)$ are orthonormal for $\mu \ne \frac12$.
8.2 Generalized Classical Orthogonal Polynomials on the Simplex

As was shown in Section 4.4, the orthogonal polynomials on the simplex are closely related to those on the sphere and on the ball. The classical orthogonal polynomials on the simplex were discussed in Section 5.3.
8.2.1 Weight function and differential–difference equation

Here we consider orthogonal polynomials with respect to the weight functions defined below.

Definition 8.2.1 Let $h_\kappa$ be a reflection-invariant weight function and assume that $h_\kappa$ is also invariant under $\mathbb{Z}_2^d$. Define a weight function on $T^d$ by

$$W^T_{\kappa,\mu}(x) := h^2_\kappa(\sqrt{x_1},\dots,\sqrt{x_d})\,(1-|x|)^{\mu-1/2}\big/\sqrt{x_1 \cdots x_d}, \qquad \mu > -\tfrac12.$$

We call polynomials that are orthogonal with respect to $W^T_{\kappa,\mu}$ generalized classical orthogonal polynomials on $T^d$.
In Definition 4.4.2 the weight function $W^T = W^T_{\kappa,\mu}$ corresponds to the weight function $W^B_{\kappa,\mu}$ in Definition 8.1.1. Let the normalization constant of $W^T_{\kappa,\mu}$ be denoted by $w^T_{\kappa,\mu}$. Then Lemma 4.4.1 implies that $w^T_{\kappa,\mu} = w^B_{\kappa,\mu}$.
The requirement that $h_\kappa$ is also $\mathbb{Z}_2^d$-invariant implies that the reflection group $G$, under which $h_\kappa$ is defined to be invariant, is a semi-direct product of another reflection group $G_0$ with $\mathbb{Z}_2^d$. Essentially, in the indecomposable case this limits $G$ to two classes, $\mathbb{Z}_2^d$ itself and the hyperoctahedral group. We list the corresponding weight functions below.
The abelian group $\mathbb{Z}_2^d$ The weight function is

$$W^T_{\kappa,\mu}(x; \mathbb{Z}_2^d) = x_1^{\kappa_1-1/2} \cdots x_d^{\kappa_d-1/2}\,(1-|x|)^{\mu-1/2}, \tag{8.2.1}$$

where $\kappa_i \ge 0$ and $\mu \ge 0$. The orthogonal polynomials associated with $W^T_{\kappa,\mu}$ are the classical orthogonal polynomials in Section 5.3. This weight function corresponds to $W^B_{\kappa,\mu}$ in (8.1.5).
The hyperoctahedral group The weight function is

$$W^T_{\kappa,\mu}(x) = \prod_{i=1}^d x_i^{\kappa'-1/2} \prod_{1\le i<j\le d} |x_i - x_j|^{2\kappa}\,(1-|x|)^{\mu-1/2}, \tag{8.2.2}$$

where $\kappa', \kappa \ge 0$ and $\mu \ge 0$. This weight function corresponds to $W^B_{\kappa,\mu}(x)$ in (8.1.9).
The relation described in Theorem 4.4.4 allows us to derive differential–difference equations for orthogonal polynomials on $T^d$. Denote the orthogonal polynomials with respect to $W^T_{\kappa,\mu}$ by $P^n_\alpha(W^T_{\kappa,\mu}; x)$. By (4.4.1) these polynomials correspond to the orthogonal polynomials $P^{2n}_\alpha(W^B_{\kappa,\mu}; x)$ of degree $2n$ on $B^d$ which are even in each of their variables. Making the change of variables $x_i \mapsto \sqrt{z_i}$ gives

$$\frac{\partial}{\partial x_i} = 2\sqrt{z_i}\,\frac{\partial}{\partial z_i} \qquad\text{and}\qquad \frac{\partial^2}{\partial x_i^2} = 2\Bigl(\frac{\partial}{\partial z_i} + 2z_i\,\frac{\partial^2}{\partial z_i^2}\Bigr).$$

Using these formulae and the explicit formula for the $h$-Laplacian given in Subsection 8.1.1, we can derive a differential–difference equation for the orthogonal polynomials on $T^d$ from the equation in Theorem 8.1.3.
In the case of $\mathbb{Z}_2^d$, the Dunkl operator $\mathcal{D}_j$, see (7.5.2), takes a particularly simple form; it becomes a purely differential operator when acting on functions that are even in each of their variables. Hence, upon making the change of variables $x_i \mapsto \sqrt{x_i}$ and using the explicit formula for $\Delta_h$ in (7.5.3), we can derive from the equation in Theorem 8.1.3 a differential equation for orthogonal polynomials with respect to $W^T_{\kappa,\mu}$.
Theorem 8.2.2 The classical orthogonal polynomials in $\mathcal{V}^d_n(W^T_{\kappa,\mu})$ satisfy the partial differential equation

$$\sum_{i=1}^d x_i(1-x_i)\frac{\partial^2 P}{\partial x_i^2} - 2\!\!\sum_{1\le i<j\le d}\!\! x_ix_j \frac{\partial^2 P}{\partial x_i \partial x_j} + \sum_{i=1}^d \Bigl[\bigl(\kappa_i + \tfrac12\bigr) - \bigl(|\kappa| + \mu + \tfrac{d+1}{2}\bigr)x_i\Bigr]\frac{\partial P}{\partial x_i} = -\lambda_n P, \tag{8.2.3}$$

where $\lambda_n = n\bigl(n + |\kappa| + \mu + \frac{d-1}{2}\bigr)$ and $\kappa_i \ge 0$, $\mu \ge 0$ for $1 \le i \le d$.
This is the same as equation (5.3.4), satisfied by the classical orthogonal polynomials on the simplex. Although the above deduction requires $\kappa_i \ge 0$ and $\mu \ge 0$, analytic continuation shows that the equation holds for all $\kappa_i > -\frac12$ and $\mu > -\frac12$. The above derivation of (8.2.3) explains the role played by the symmetry of $T^d$.
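For $d = 2$ with $\kappa_1 = \kappa_2 = \mu = \frac12$ the weight (8.2.1) is constant on the triangle, the first-order coefficients become $1 - 3x_i$, and the eigenvalue is $-n(n+2)$. The sympy sketch below (which assumes only exact polynomial integration over the triangle) Gram–Schmidts the monomials and checks (8.2.3) for every degree-2 orthogonal polynomial.

```python
import sympy as sp

x, y = sp.symbols("x y")

def inner(p, q):  # L^2 inner product over T^2 with constant weight
    return sp.integrate(sp.integrate(p * q, (y, 0, 1 - x)), (x, 0, 1))

# Gram-Schmidt the monomials in graded order; the last three outputs span V_2
basis, monos = [], [sp.Integer(1), x, y, x**2, x*y, y**2]
for m in monos:
    p = m - sum(inner(m, b) / inner(b, b) * b for b in basis)
    basis.append(sp.expand(p))

# the operator of (8.2.3) with kappa_1 = kappa_2 = mu = 1/2, d = 2
L = lambda p: sp.expand(
    x*(1 - x)*sp.diff(p, x, 2) + y*(1 - y)*sp.diff(p, y, 2)
    - 2*x*y*sp.diff(p, x, y)
    + (1 - 3*x)*sp.diff(p, x) + (1 - 3*y)*sp.diff(p, y))

n = 2
checks = [sp.expand(L(p) + n*(n + 2)*p) for p in basis[3:]]
print(checks)  # each entry should be 0
```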
In the case of the hyperoctahedral group, we use the explicit formula for $\Delta_h$ given in (8.1.10). The fact that the equation in Theorem 8.1.3 is applied to functions that are even in each variable allows us to drop the first difference part in $\Delta_h$ and combine the other two difference parts. The result is as follows.
Theorem 8.2.3 The orthogonal polynomials in $\mathcal{V}^d_n(W^T_{\kappa,\mu})$ associated with the hyperoctahedral group satisfy the differential–difference equation

$$\biggl\{ \sum_{i=1}^d x_i(1-x_i)\frac{\partial^2}{\partial x_i^2} - 2\!\!\sum_{1\le i<j\le d}\!\! x_ix_j \frac{\partial^2}{\partial x_i\partial x_j} + \sum_{i=1}^d \Bigl[\kappa' + \tfrac12 - \bigl(\gamma_\kappa + \mu + \tfrac{d}{2}\bigr)x_i\Bigr]\frac{\partial}{\partial x_i}$$
$$\qquad + \sum_{1\le i<j\le d} \frac{\kappa}{x_i-x_j}\Bigl[ 2\Bigl(x_i\frac{\partial}{\partial x_i} - x_j\frac{\partial}{\partial x_j}\Bigr) - \frac{x_i+x_j}{x_i-x_j}\bigl(1 - \sigma_{ij}\bigr)\Bigr]\biggr\} P = -n\Bigl(n + \gamma_\kappa + \mu + \frac{d-1}{2}\Bigr)P,$$

where $\gamma_\kappa = 2\binom{d}{2}\kappa + d\kappa'$ and $\sigma_{ij}$ denotes the transposition of $x_i$ and $x_j$.
8.2.2 Orthogonal basis and reproducing kernel

According to Theorem 4.4.6, the orthogonal polynomials associated with $W^T_{\kappa,\mu}$ are related to the $h$-harmonics associated with $h_{\kappa,\mu}$ in (8.1.1), as shown in Proposition 4.4.8. In the case of the classical weight function $W^T_{\kappa,\mu}$ in (8.2.1), the orthonormal basis derived from Theorems 4.4.4 and 8.1.11 turns out to be exactly the set of classical orthogonal polynomials given in Section 5.3. There is another way of obtaining an orthonormal basis, based on the following formula for changing variables.
Lemma 8.2.4 Let $f$ be integrable over $T^d$. Then

$$\int_{T^d} f(x)\,dx = \int_0^1 s^{d-1} \int_{|u|=1} f(su)\,du\,ds.$$

The formula is obtained by setting $x = su$, with $0 \le s \le 1$ and $|u| = 1$; these can be called the $\ell^1$-radial coordinates of the simplex. Let us denote by $\{Q^m_\beta\}_{|\beta|=m}$ a sequence of orthonormal homogeneous polynomials of degree $m$ associated with the weight function $h^2_\kappa(\sqrt{u_1},\dots,\sqrt{u_d})/\sqrt{u_1\cdots u_d}$ on the simplex in homogeneous coordinates, as in Proposition 4.4.8.
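The $\ell^1$-radial change of variables in Lemma 8.2.4 can be made concrete for $d = 2$, where $\{|u| = 1\}$ is the segment $u = (v, 1-v)$, $0 \le v \le 1$, and $du = dv$. The sympy sketch below (the test function $f = x^2 y$ is an arbitrary choice) confirms the two sides agree exactly.

```python
import sympy as sp

x, y, s, v = sp.symbols("x y s v", nonnegative=True)
f = x**2 * y  # any polynomial test function

# direct integral over the triangle T^2
lhs = sp.integrate(sp.integrate(f, (y, 0, 1 - x)), (x, 0, 1))

# l1-radial coordinates: x = s u with u = (v, 1-v), d = 2, Jacobian s^{d-1} = s
rhs = sp.integrate(
    s * sp.integrate(f.subs({x: s*v, y: s*(1 - v)}), (v, 0, 1)), (s, 0, 1))
print(lhs, rhs)  # both 1/60
```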
Proposition 8.2.5 Let $Q^m_\beta$ be defined as above. Then the polynomials

$$P^n_{\beta,m}(x) = b_{m,n}\, p^{(\gamma_\kappa + 2m + (d-2)/2,\ \mu-1/2)}_{n-m}(2s-1)\, Q^m_\beta(x),$$

for $0 \le m \le n$ and $|\beta| = m$, in the $\ell^1$-radial coordinates $x = su$, where

$$[b_{m,n}]^2 = \bigl(\gamma_\kappa + \tfrac{d}{2}\bigr)_{2m} \big/ \bigl(\gamma_\kappa + \tfrac{d}{2} + \mu + \tfrac12\bigr)_{2m},$$

form an orthonormal basis of $\mathcal{V}^d_n(W^T_{\kappa,\mu})$, and the $P^n_{\beta,m}$ are homogeneous in the homogeneous coordinates of $T^d$.
Proof For brevity, let $\lambda = \gamma_\kappa + \frac{d-2}{2}$. It follows from Lemma 8.2.4 that

$$\int_{T^d} P^n_{\beta,m}(x)\, P^n_{\beta',m'}(x)\, h^2_\kappa(\sqrt{x_1},\dots,\sqrt{x_d})\,(1-|x|)^{\mu-1/2}\, \frac{dx}{\sqrt{x_1\cdots x_d}}$$
$$= c \int_0^1 p^{(\lambda+2m,\,\mu-1/2)}_{n-m}(2s-1)\, p^{(\lambda+2m',\,\mu-1/2)}_{n-m'}(2s-1)\, s^{\lambda+m+m'}(1-s)^{\mu-1/2}\,ds \times \int_{|u|=1} Q^m_\beta(u)\, Q^{m'}_{\beta'}(u)\, h^2_\kappa(\sqrt{u_1},\dots,\sqrt{u_d})\, \frac{du}{\sqrt{u_1\cdots u_d}},$$

where $c = b_{m,n} b_{m',n}$, from which the pairwise orthogonality follows from that of the $Q^m_\beta$ and of the Jacobi polynomials. It follows from Lemma 8.2.4 that the normalization constant $w^T_{\kappa,\mu}$ of $W^T_{\kappa,\mu}$ is also given by

$$\frac{1}{w^T_{\kappa,\mu}} = \int_{T^d} W^T_{\kappa,\mu}(x)\,dx = \frac12\, B\bigl(\lambda+1,\ \mu+\tfrac12\bigr) \int_{|u|=1} h^2_\kappa(\sqrt{u_1},\dots,\sqrt{u_d})\, \frac{du}{\sqrt{u_1\cdots u_d}}.$$

Hence, multiplying the integral of $[P^n_{\beta,m}(x)]^2$ by $w^T_{\kappa,\mu}$, we can find $b_{m,n}$ from the equation $1 = b^2_{m,n}\, B(\lambda+1, \mu+\frac12)/B(\lambda+2m+1, \mu+\frac12)$.
Using the relation to orthogonal polynomials on $B^d$ and Theorem 4.4.4, an orthonormal basis of $\mathcal{V}^d_n(W^T_{\kappa,\mu})$ can be derived from the basis given in Proposition 8.1.5, in which $\lambda_\kappa = \gamma_\kappa + \frac{d-1}{2}$.
Proposition 8.2.6 For $0 \le j \le n$ let $\{Y^h_{\beta,2n-2j}\}$ denote an orthonormal basis of $\mathcal{H}^d_{2n-2j}(h^2_\kappa, \mathbb{Z}_2^d)$; then the polynomials

$$P^n_{\beta,j}(W^T_{\kappa,\mu}; x) = [c^T_{j,n}]^{-1}\, P^{(\mu-1/2,\ 2n-2j+\lambda_\kappa-1/2)}_j(2|x|-1)\, Y^h_{\beta,2n-2j}(\sqrt{x_1},\dots,\sqrt{x_d})$$

form an orthonormal basis of $\mathcal{V}^d_n(W^T_{\kappa,\mu})$, where $c^T_{j,n} = c^B_{j,2n}$.
The relation between the orthogonal polynomials on $B^d$ and those on $T^d$ also allows us to derive an explicit formula for the reproducing kernel $P_n(W^T_{\kappa,\mu})$ of $\mathcal{V}^d_n(W^T_{\kappa,\mu})$. Let us introduce the notation $x^{1/2} = (\sqrt{x_1},\dots,\sqrt{x_d})$ for $x \in T^d$.
Theorem 8.2.7 Let $V_\kappa$ be the intertwining operator associated with $h_\kappa$ and let $\lambda = \gamma_\kappa + \mu + \frac{d-1}{2}$. Then

$$P_n(W^T_{\kappa,\mu}; x, y) = \frac{2n+\lambda}{\lambda}\, V_\kappa\Bigl[ \int_{-1}^1 C^\lambda_{2n}\Bigl( \langle x^{1/2}, \cdot\rangle + t\sqrt{1-|x|}\sqrt{1-|y|} \Bigr)\, c_\mu(1-t^2)^{\mu-1}\,dt \Bigr](y^{1/2}).$$
Proof Using Theorem 4.4.4, we see that the reproducing kernel $P_n(W^T_{\kappa,\mu})$ is related to the reproducing kernel $P_{2n}(W^B_{\kappa,\mu})$ on the unit ball by

$$P_n(W^T_{\kappa,\mu}; x, y) = \frac{1}{2^d} \sum_{\varepsilon \in \mathbb{Z}_2^d} P_{2n}\bigl(W^B_{\kappa,\mu};\ (\varepsilon_1\sqrt{x_1},\dots,\varepsilon_d\sqrt{x_d}),\ y^{1/2}\bigr).$$

In fact, since the reproducing kernel on the left-hand side is unique, it is sufficient to show that the right-hand side has the reproducing property; that is, if we denote the right-hand side temporarily by $Q_n(x,y)$ then

$$\int_{T^d} P(y)\, Q_n(x,y)\, W^T_{\kappa,\mu}(y)\,dy = P(x)$$

for any polynomial $P \in \mathcal{V}^d_n(W^T_{\kappa,\mu})$. We can take $P = P^n_{\beta,j}(W^T_{\kappa,\mu})$ in Proposition 8.2.6. Upon making the change of variables $y_i \mapsto y_i^2$, $1 \le i \le d$, and using Lemma 4.4.1, it is easily seen that this equation follows from the reproducing property of $P_{2n}(W^B_{\kappa,\mu})$. Hence, we can use Theorem 8.1.10 to get an explicit formula for $P_n(W^T_{\kappa,\mu})$. Since the weight function is invariant under both the reflection group and $\mathbb{Z}_2^d$, and $V_\kappa$ commutes with the action of the group, it follows that $R(\varepsilon)V_\kappa = V_\kappa R(\varepsilon)$ for $\varepsilon \in \mathbb{Z}_2^d$, where $R(\varepsilon)f(x) = f(x\varepsilon)$. Therefore the summation over $\mathbb{Z}_2^d$ does not appear in the final formula.
In the case of $\mathbb{Z}_2^d$, we can use the formula for the intertwining operator $V = V_\kappa$ in Theorem 7.5.4 to write down an explicit formula for the reproducing kernel. The result is a closed formula for the classical orthogonal polynomials on $T^d$.
Theorem 8.2.8 For the classical orthogonal polynomials associated with the normalized weight function $W^T_{\kappa,\mu}$ in (8.2.1), with $\lambda_{\kappa,\mu} = |\kappa| + \mu + \frac{d-1}{2}$,

$$P_n(W^T_{\kappa,\mu}; x, y) = \frac{2n+\lambda_{\kappa,\mu}}{\lambda_{\kappa,\mu}} \int_{-1}^1 \int_{[-1,1]^d} C^{\lambda_{\kappa,\mu}}_{2n}\bigl(z(x,y,t,s)\bigr) \prod_{i=1}^d c_{\kappa_i}(1-t_i^2)^{\kappa_i-1}\,dt\; c_\mu(1-s^2)^{\mu-1}\,ds, \tag{8.2.4}$$

where $z(x,y,t,s) = \sqrt{x_1y_1}\,t_1 + \dots + \sqrt{x_dy_d}\,t_d + s\sqrt{1-|x|}\sqrt{1-|y|}$; if a particular $\kappa_i$ or $\mu$ is zero then the formula holds under the limit relation (1.5.1).
Using the fact that $C^\lambda_{2n}(t) = \bigl[(\lambda)_n/(\tfrac12)_n\bigr] P^{(\lambda-1/2,\,-1/2)}_n(2t^2-1)$ and setting $\mu = \kappa_{d+1}$, the identity (8.2.4) becomes (5.3.5), and so proves the latter. Some applications of this identity in the summability of orthogonal expansions will be given in Chapter 9.
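The quadratic transformation used above is easy to confirm numerically. The sketch below assumes only the Gegenbauer recurrence and the explicit finite sum for Jacobi polynomials (with generalized binomial coefficients) and compares both sides on a grid for $\lambda = 1.5$, $n = 3$.

```python
import numpy as np

def poch(a, n):  # Pochhammer symbol (a)_n
    r = 1.0
    for k in range(n): r *= a + k
    return r

def binom(a, k):  # generalized binomial coefficient
    r = 1.0
    for i in range(1, k + 1): r *= (a - k + i) / i
    return r

def jacobi(n, a, b, x):
    # P_n^{(a,b)}(x) = 2^{-n} sum_k C(n+a,k) C(n+b,n-k) (x-1)^{n-k} (x+1)^k
    return sum(binom(n + a, k) * binom(n + b, n - k)
               * (x - 1) ** (n - k) * (x + 1) ** k for k in range(n + 1)) / 2 ** n

def gegenbauer(n, lam, t):
    c0, c1 = np.ones_like(t), 2 * lam * t
    if n == 0: return c0
    for k in range(2, n + 1):
        c0, c1 = c1, (2 * (k + lam - 1) * t * c1 - (k + 2 * lam - 2) * c0) / k
    return c1

lam, n = 1.5, 3
t = np.linspace(-1, 1, 201)
lhs = gegenbauer(2 * n, lam, t)
rhs = poch(lam, n) / poch(0.5, n) * jacobi(n, lam - 0.5, -0.5, 2 * t**2 - 1)
err = np.max(np.abs(lhs - rhs))
print(err)
```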
8.2.3 Monic orthogonal polynomials

The correspondence between the orthogonal polynomials on the simplex and those on the sphere and on the ball can be used to define monic orthogonal polynomials on the simplex. In this subsection we let $\mu = \kappa_{d+1}$ and write $W^T_{\kappa,\mu}$ as $W^T_\kappa$. In particular, $\lambda_{\kappa,\mu}$ becomes $\lambda_\kappa$, defined by

$$\lambda_\kappa = |\kappa| + \tfrac{d-1}{2}, \qquad |\kappa| = \kappa_1 + \dots + \kappa_{d+1}.$$

Since the simplex $T^d$ is symmetric in $(x_1,\dots,x_d,x_{d+1})$, $x_{d+1} = 1-|x|$, we use the homogeneous coordinates $X := (x_1,\dots,x_d,x_{d+1})$. For the monic $h$-harmonics defined in Definition 7.5.8, with $d$ replaced by $d+1$, the polynomial $\widetilde R_{2\alpha}$ is even in each of its variables, and this corresponds, using (4.4.1), to monic orthogonal polynomials $\widetilde R^T_\alpha$ in $\mathcal{V}^d_n(W^T_\kappa)$ in the homogeneous coordinates $X$. This leads to the following definition.
Definition 8.2.9 For $b \in \mathbb{R}^{d+1}$ with $\max_j |b_j| < 1$ and $\alpha \in \mathbb{N}_0^{d+1}$, define polynomials $\widetilde R^T_\alpha(x)$ by

$$c_\kappa \int_{[-1,1]^{d+1}} \frac{1}{[1 - 2(b_1x_1t_1 + \dots + b_{d+1}x_{d+1}t_{d+1}) + \|b\|^2]^{\lambda_\kappa}} \prod_{i=1}^{d+1} (1-t_i^2)^{\kappa_i-1}\,dt = \sum_{\alpha \in \mathbb{N}_0^{d+1}} b^{2\alpha}\, \widetilde R^T_\alpha(x), \qquad x \in T^d.$$
The main properties of $\widetilde R^T_\alpha$ are given in the following proposition.

Proposition 8.2.10 For each $\alpha \in \mathbb{N}_0^{d+1}$ with $|\alpha| = n$, the polynomials

$$\widetilde R^T_\alpha(x) = \frac{2^{2|\alpha|}(\lambda_\kappa)_{2|\alpha|}}{(2\alpha)!}\, \frac{(\frac12)_\alpha}{(\kappa+\frac12)_\alpha}\, R^T_\alpha(x),$$

where

$$R^T_\alpha(x) = X^\alpha F_B\Bigl(-\alpha,\ -\alpha-\kappa+\tfrac12;\ -2|\alpha| - \lambda_\kappa + 1;\ \frac{1}{x_1},\dots,\frac{1}{x_{d+1}}\Bigr) = \frac{(-1)^n (\kappa+\frac12)_\alpha}{\bigl(n+|\kappa|+\frac{d-1}{2}\bigr)_n}\, F_A\Bigl(|\alpha| + |\kappa| + \tfrac{d-1}{2},\ -\alpha;\ \kappa+\tfrac12;\ X\Bigr),$$

are polynomials orthogonal with respect to $W^T_\kappa$ on the simplex $T^d$. Moreover, $R^T_\alpha(x) = X^\alpha - Q_\alpha(x)$, where $Q_\alpha$ is a polynomial of degree at most $n-1$, and $\{R^T_\alpha : \alpha = (\alpha', 0),\ |\alpha| = n\}$ is a basis for $\mathcal{V}^d_n(W^T_\kappa)$.
Proof We return to the generating function for the $h$-harmonics $\widetilde R_\alpha$ in Definition 7.5.8, with $d$ replaced by $d+1$. The explicit formula for $\widetilde R_\alpha(x)$ shows that it is even in each variable only if each $\alpha_i$ is even. Let $\varepsilon \in \{-1,1\}^{d+1}$. Then $\widetilde R_\alpha(\varepsilon_1x_1,\dots,\varepsilon_{d+1}x_{d+1}) = \varepsilon^\alpha \widetilde R_\alpha(x)$. It follows that, on the one hand,

$$\sum_{\alpha \in \mathbb{N}_0^{d+1}} b^{2\alpha}\, \widetilde R_{2\alpha}(x) = \frac{1}{2^{d+1}} \sum_{\alpha \in \mathbb{N}_0^{d+1}} b^\alpha \sum_{\varepsilon \in \{-1,1\}^{d+1}} \varepsilon^\alpha\, \widetilde R_\alpha(x).$$

On the other hand, using the explicit formula for $V_\kappa$, the generating function gives

$$\frac{1}{2^{d+1}} \sum_{\varepsilon \in \{-1,1\}^{d+1}} \sum_{\alpha \in \mathbb{N}_0^{d+1}} b^\alpha \varepsilon^\alpha\, \widetilde R_\alpha(x) = c_\kappa \int_{[-1,1]^{d+1}} \frac{1}{2^{d+1}} \sum_{\varepsilon \in \{-1,1\}^{d+1}} \frac{\prod_{i=1}^{d+1}(1+t_i)(1-t_i^2)^{\kappa_i-1}}{[1 - 2(b_1x_1t_1\varepsilon_1 + \dots + b_{d+1}x_{d+1}t_{d+1}\varepsilon_{d+1}) + \|b\|^2]^{\lambda_\kappa}}\,dt$$

for $\|x\| = 1$. Making the change of variables $t_i \mapsto t_i\varepsilon_i$ in the integral and using the fact that $\sum_{\varepsilon \in \{-1,1\}^{d+1}} \prod_{i=1}^{d+1}(1+\varepsilon_it_i) = 2^{d+1}$, we see that the generating function for $\widetilde R_{2\alpha}(x)$ agrees with the generating function for $\widetilde R^T_\alpha(x_1^2,\dots,x_{d+1}^2)$ in Definition 8.2.9. Consequently, the formulae for $R^T_\alpha$ follow from the corresponding ones for $R_{2\alpha}$ in Propositions 7.5.10 and 7.5.11, both with $d$ replaced by $d+1$. The polynomial $R^T_\alpha$ is homogeneous in $X$. Using the correspondence between the orthogonal polynomials on $S^d$ and on $T^d$, we see that the $R^T_\alpha$ are orthogonal with respect to $W^T_\kappa$. If $\alpha_{d+1} = 0$ then $R^T_\alpha(x) = x^{\alpha'} - Q_\alpha(x)$ with $Q_\alpha \in \Pi^d_{n-1}$, which proves the last assertion of the proposition.
In the case $\alpha_{d+1} = 0$, the explicit formula for $R^T_\alpha$ shows that $R^T_{(\alpha,0)}(x) = x^\alpha - Q_\alpha(x)$, so that it agrees with the monic orthogonal polynomial $V^T_\alpha$ in Section 5.3. Setting $b_{d+1} = 0$ in Definition 8.2.9 gives the generating function of $R^T_{(\alpha,0)} = V^T_\alpha(x)$.
In the case of the simplex, the polynomial $R^T_\alpha$ is the orthogonal projection of $X^\alpha = x_1^{\alpha_1} \cdots x_d^{\alpha_d}(1-|x|)^{\alpha_{d+1}}$. We compute its $L^2$ norm. Let

$$\|f\|_{2,T} := \Bigl( w^T_\kappa \int_{T^d} |f(x)|^2\, W^T_\kappa(x)\,dx \Bigr)^{1/2}.$$
Theorem 8.2.11 Let $\alpha \in \mathbb{N}_0^{d+1}$. The polynomial $R^T_\alpha$ has the smallest $\|\cdot\|_{2,T}$ norm among all polynomials of the form $X^\alpha - P$, $P \in \Pi^d_{|\alpha|-1}$, and the norm is given by

$$\|R^T_\alpha\|^2_{2,T} = \frac{\alpha!\,\bigl(\kappa+\frac12\bigr)_\alpha}{(\lambda_\kappa)_{2|\alpha|}} \int_0^1 \prod_{i=1}^{d+1} P^{(0,\,\kappa_i-1/2)}_{\alpha_i}(2r-1)\, r^{|\alpha|+\lambda_\kappa-1}\,dr.$$
Proof By (4.4.4), the norm of $R^T_\alpha$ is related to the norm of the $h$-harmonic $R_{2\alpha}$ on $S^d$. Since $R^T_\alpha(x_1^2,\dots,x_{d+1}^2) = R_{2\alpha}(x_1,\dots,x_{d+1})$, the norm of $R^T_\alpha$ can be derived from the norm of $R_{2\alpha}$. Using the fact that $C^{(1/2,\kappa_i)}_{2\alpha_i}(t) = P^{(0,\,\kappa_i-1/2)}_{\alpha_i}(2t^2-1)$, the norm $\|R^T_\alpha\|_{2,T}$ follows from making the change of variable $t^2 = r$ in the integral in Theorem 7.5.13 and replacing $d$ by $d+1$.
In particular, if $\alpha_{d+1} = 0$ then the norm of $R_{(\alpha,0)}$ is the smallest norm among all polynomials of the form $x^\alpha - P$, $P \in \Pi^d_{n-1}$.
Corollary 8.2.12 Let $\alpha \in \mathbb{N}_0^d$ and $n = |\alpha|$. Then

$$\inf_{Q \in \Pi^d_{n-1}} \|x^\alpha - Q(x)\|^2_{2,T} = \frac{\alpha! \prod_{i=1}^d \bigl(\kappa_i+\frac12\bigr)_{\alpha_i}}{(\lambda_\kappa)_{2|\alpha|}} \int_0^1 \prod_{i=1}^d P^{(0,\,\kappa_i-1/2)}_{\alpha_i}(2r-1)\, r^{|\alpha|+\lambda_\kappa-1}\,dr.$$
We note that this is a highly nontrivial identity. In fact, even the positivity of
the integral on the right-hand side is not obvious.
8.3 Generalized Hermite Polynomials
The multiple Hermite polynomials are discussed in Subsection 5.1.3.
Definition 8.3.1 Let $h_\kappa$ be a reflection-invariant weight function. The generalized Hermite weight function is defined by

$$W^H_\kappa(x) = h^2_\kappa(x)\, e^{-\|x\|^2}, \qquad x \in \mathbb{R}^d.$$
Again recall that $c'_h$ is the normalization constant of $h^2_\kappa$ over $S^{d-1}$. Using polar coordinates, it follows that the normalization constant of $W^H_\kappa$ is given by

$$w^H_\kappa = \Bigl( \int_{\mathbb{R}^d} W^H_\kappa(x)\,dx \Bigr)^{-1} = \frac{2c'_h}{\sigma_{d-1}\, \Gamma\bigl(\gamma_\kappa + \frac{d}{2}\bigr)}.$$

If $\kappa = 0$ or $W$ is the orthogonal group then $h_\kappa(x) = 1$ and the weight function $W^H_\kappa(x)$ reduces to the classical Hermite weight function $e^{-\|x\|^2}$.
An orthonormal basis for the generalized Hermite weight function can be given in terms of $h$-harmonics and the polynomials $H^\mu_n$ on $\mathbb{R}$ from Subsection 1.5.1. We call it the spherical-polar Hermite polynomial basis. Denote by $\widetilde H^\mu_n$ the normalized $H^\mu_n$ and again let $\lambda_\kappa = \gamma_\kappa + \frac{d-1}{2}$.
Proposition 8.3.2 For $0 \le 2j \le n$, let $\{Y^h_{\nu,n-2j}\}$ denote an orthonormal basis of $\mathcal{H}^d_{n-2j}(h^2_\kappa)$; then the polynomials

$$P^n_{\nu,j}(W^H_\kappa; x) = [c^H_{j,n}]^{-1}\, \widetilde H^{\,n-2j+\lambda_\kappa}_{2j}(\|x\|)\, Y^h_{\nu,n-2j}(x)$$

form an orthonormal basis of $\mathcal{V}^d_n(W^H_\kappa)$, where $[c^H_{j,n}]^2 = \bigl(\lambda_\kappa + \frac12\bigr)_{n-2j}$.
In the case where $h_\kappa$ is invariant under $\mathbb{Z}_2^d$, an orthonormal basis is given by products of the generalized Hermite polynomials. We call it the Cartesian Hermite polynomial basis.

Proposition 8.3.3 For $W^H_\kappa(x) = w^H_\kappa \prod_{i=1}^d |x_i|^{2\kappa_i}\, e^{-\|x\|^2}$, an orthonormal basis of $\mathcal{V}^d_n(W^H_\kappa)$ is given by $\widetilde H_\alpha(x) = \widetilde H^{\kappa_1}_{\alpha_1}(x_1) \cdots \widetilde H^{\kappa_d}_{\alpha_d}(x_d)$ for $\alpha \in \mathbb{N}_0^d$ and $|\alpha| = n$.

In particular, specializing to $\kappa = 0$ leads us back to the classical multiple Hermite polynomials.
There is a limit relation between the polynomials in $\mathcal{V}^d_n(W^H_\kappa)$ and the generalized classical orthogonal polynomials on $B^d$.
Theorem 8.3.4 For $P^n_{\nu,j}(W^B_{\kappa,\mu})$ in Proposition 8.1.5 and $P^n_{\nu,j}(W^H_\kappa)$ in Proposition 8.3.2,

$$\lim_{\mu\to\infty} P^n_{\nu,j}\bigl(W^B_{\kappa,\mu};\ x/\sqrt{\mu}\bigr) = P^n_{\nu,j}(W^H_\kappa; x).$$
Proof First, using the definition of the generalized Gegenbauer polynomials, we can rewrite $P^n_{\nu,j}(W^B_{\kappa,\mu})$ as

$$P^n_{\nu,j}(W^B_{\kappa,\mu}; x) = [b^B_{j,n}]^{-1}\, \widetilde C^{(\mu,\,n-2j+\lambda_\kappa)}_{2j}(\|x\|)\, Y^h_{\nu,n-2j}(x),$$

where the $\widetilde C^{(\mu,\lambda)}_{2j}$ are the orthonormal polynomials and

$$[b^B_{j,n}]^{-1} = \frac{(\lambda_\kappa + \frac12)_{n-2j}}{(\lambda_\kappa + \mu + 1)_{n-2j}}.$$

Using the limit relation in Proposition 1.5.7 and the normalization constants of both $H^\mu_n$ and $C^{(\lambda,\mu)}_n$, we obtain

$$\widetilde H^\lambda_n(x) = \lim_{\mu\to\infty} \widetilde C^{(\mu,\lambda)}_n(x/\sqrt{\mu}). \tag{8.3.1}$$

Since $h$-harmonics are homogeneous, $Y^h_{\nu,j}(x/\sqrt{\mu}) = \mu^{-j/2}\, Y^h_{\nu,j}(x)$. It can also be verified that $\mu^{(n-2j)/2}\, b^B_{j,n} \to c^H_{j,n}$ as $\mu \to \infty$. The stated result follows from these relations and the explicit formulae for the bases.
In the case where $h_\kappa$ is invariant under $\mathbb{Z}_2^d$, a similar limit relation holds for the other orthonormal basis; see Theorem 8.1.11. The proof uses (8.3.1) and is left to the reader.
Theorem 8.3.5 For $P^n_\alpha(W^B_{\kappa,\mu})$ in Theorem 8.1.11,

$$\lim_{\mu\to\infty} P^n_\alpha\bigl(W^B_{\kappa,\mu};\ x/\sqrt{\mu}\bigr) = \widetilde H^{\kappa_1}_{\alpha_1}(x_1) \cdots \widetilde H^{\kappa_d}_{\alpha_d}(x_d).$$
As a consequence of the limit relation, we can derive a differential–difference equation for the polynomials in $\mathcal{V}^d_n(W^H_\kappa)$ from the equation satisfied by the classical orthogonal polynomials on $B^d$.

Theorem 8.3.6 The polynomials in $\mathcal{V}^d_n(W^H_\kappa)$ satisfy the differential–difference equation

$$\bigl(\Delta_h - 2\langle x, \nabla\rangle\bigr)P = -2nP. \tag{8.3.2}$$
Proof Since the polynomials $P^n_{\nu,j}(W^H_\kappa)$ form a basis for $\mathcal{V}^d_n(W^H_\kappa)$, it suffices to prove that these polynomials satisfy the stated equation. The proof is based on the limit relation in Theorem 8.3.4. The $P^n_{\nu,j}(W^B_{\kappa,\mu})$ satisfy the equation in Theorem 8.1.3. Making the change of variables $x_i \mapsto x_i/\sqrt{\mu}$, it follows that $\Delta_h$ becomes $\mu\Delta_h$ and the equation in Theorem 8.1.3 becomes

$$\Bigl[ \mu\Delta_h - \sum_{i=1}^d \sum_{j=1}^d x_ix_j\, \partial_i\partial_j - (2\gamma_\kappa + 2\mu + d)\langle x, \nabla\rangle \Bigr] P(x/\sqrt{\mu}) = -n(n + d + 2\gamma_\kappa + 2\mu - 1)\, P(x/\sqrt{\mu}).$$

Taking $P = P^n_{\nu,j}(W^B_{\kappa,\mu})$ in the above equation, dividing the equation by $\mu$ and letting $\mu \to \infty$, we conclude that $P^n_{\nu,j}(W^H_\kappa)$ satisfies the stated equation.


Several special cases are of interest. First, if h

(x) = 1 then
h
reduces to the
ordinary Laplacian and equation (8.3.2) becomes the partial differential equation
(5.1.7) satised by the multiple Hermite polynomials. Two other interesting cases
are the following.
The symmetric group The weight function is
W
H

(x) =

1i<jd
[x
i
x
j
[
2
e
|x|
2
. (8.3.3)
The hyperoctahedral group The weight function is

$$W^H_\kappa(x) = \prod_{i=1}^d |x_i|^{2\kappa'} \prod_{1\le i<j\le d} |x_i^2 - x_j^2|^{2\kappa}\, e^{-\|x\|^2}. \tag{8.3.4}$$

These weight functions should be compared with those in (8.1.7) and (8.1.9). In the case of the hyperoctahedral group, equation (8.3.2) is related to the Calogero–Sutherland model in mathematical physics; see the discussion in Section 11.6.
The limit relation also allows us to derive other properties of the generalized Hermite polynomials, for example the following Mehler-type formula.
Theorem 8.3.7 Let $\{P^n_\alpha(W^H_\kappa)\}$, $|\alpha| = n$ and $n \in \mathbb{N}_0$, denote an orthonormal basis with respect to $W^H_\kappa$ on $\mathbb{R}^d$. Then, for $0 < z < 1$ and $x, y \in \mathbb{R}^d$,

$$\sum_{n=0}^\infty \sum_{|\alpha|=n} P^n_\alpha(W^H_\kappa; x)\, P^n_\alpha(W^H_\kappa; y)\, z^n = \frac{1}{(1-z^2)^{\gamma_\kappa + d/2}} \exp\Bigl( -\frac{z^2(\|x\|^2 + \|y\|^2)}{1-z^2} \Bigr)\, V_\kappa\Bigl[ \exp\Bigl( \frac{2z\langle x, \cdot\rangle}{1-z^2} \Bigr) \Bigr](y).$$
Proof The inner sum over $\alpha$ in the stated formula is the reproducing kernel of $\mathcal{V}^d_n(W^H_\kappa)$, which is independent of the particular basis. Hence we can work with the basis $P^n_{\nu,j}(W^H_\kappa)$ in Proposition 8.3.2. Recall that $\lambda = \lambda_{\kappa,\mu} = \gamma_\kappa + \mu + \frac{d-1}{2}$. Then the explicit formula for the reproducing kernel $P_n(W^B_{\kappa,\mu})$ in Theorem 8.1.10 can be written as

$$P_n(W^B_{\kappa,\mu}; x, y) = \frac{n+\lambda}{\lambda}\, c_\mu \int_{-1}^1 C^\lambda_n(z)\, V_\kappa[E_\mu(x, y, \cdot, z)](y)\, (1-z^2)^{\lambda-1/2}\,dz,$$

where $z = \langle x, \cdot\rangle + t\sqrt{1-\|x\|^2}\sqrt{1-\|y\|^2}$ and $E_\mu$ is defined by

$$E_\mu(x, y, u, v) = \Bigl[ 1 - \frac{(v - \langle x, u\rangle)^2}{(1-\|x\|^2)(1-\|y\|^2)} \Bigr]^{\mu-1}\, \frac{(1-v^2)^{1/2-\lambda}}{\sqrt{1-\|x\|^2}\sqrt{1-\|y\|^2}}$$

if $v$ satisfies $(v - \langle x, u\rangle)^2 \le (1-\|x\|^2)(1-\|y\|^2)$, and $E_\mu(x, y, u, v) = 0$ otherwise. In particular, expanding $V_\kappa[E_\mu(x, y, \cdot, z)]$ as a Gegenbauer series in $C^\lambda_n(z)$, the above formula gives the coefficients of the expansion. Taking into consideration the normalization constants,

$$\frac{c_{\lambda+1/2}}{c_\mu} \sum_{n=0}^\infty \frac{\Gamma(2\lambda)\,\Gamma(n+1)}{\Gamma(n+2\lambda)}\, P_n(W^B_{\kappa,\mu}; x, y)\, C^\lambda_n(z) = V_\kappa[E_\mu(x, y, \cdot, z)](y).$$

Replace $x$ and $y$ by $x/\sqrt{\mu}$ and $y/\sqrt{\mu}$, respectively, in the above equation. From the fact that $c_{\lambda+1/2}/c_\mu \to 1$ as $\mu \to \infty$ and that $\mu^{-n} C^\lambda_n(z) \to 2^nz^n/n!$, we can use the limit relation in Theorem 8.3.4 to conclude that the left-hand side of the formula displayed above converges to the left-hand side of the stated equation. Moreover, from the definition of $E_\mu$ we can write

$$E_\mu\Bigl( \frac{x}{\sqrt{\mu}}, \frac{y}{\sqrt{\mu}}, u, z \Bigr) = (1-z^2)^{\mu-\lambda-1/2} \Bigl( 1 - \frac{\|x\|^2}{\mu} \Bigr)^{-\mu+1/2} \Bigl( 1 - \frac{\|y\|^2}{\mu} \Bigr)^{-\mu+1/2} \Bigl[ 1 - \frac{\|x\|^2 + \|y\|^2 - 2z\langle x, u\rangle}{\mu(1-z^2)} + O(\mu^{-2}) \Bigr]^{\mu-1},$$

which implies, upon taking the limit $\mu \to \infty$, that

$$\lim_{\mu\to\infty} E_\mu\Bigl( \frac{x}{\sqrt{\mu}}, \frac{y}{\sqrt{\mu}}, u, z \Bigr) = \frac{1}{(1-z^2)^{\gamma_\kappa + d/2}} \exp\Bigl( -\frac{z^2(\|x\|^2 + \|y\|^2)}{1-z^2} \Bigr) \exp\Bigl( \frac{2z\langle x, u\rangle}{1-z^2} \Bigr).$$

Since $V_\kappa$ is a bounded linear operator, we can take the limit inside $V_\kappa$. Hence, using

$$V_\kappa\Bigl[ E_\mu\Bigl( \tfrac{x}{\sqrt{\mu}}, \tfrac{y}{\sqrt{\mu}}, \cdot, z \Bigr) \Bigr]\Bigl( \frac{y}{\sqrt{\mu}} \Bigr) = V_\kappa\Bigl[ E_\mu\Bigl( \tfrac{x}{\sqrt{\mu}}, \tfrac{y}{\sqrt{\mu}}, \tfrac{\cdot}{\sqrt{\mu}}, z \Bigr) \Bigr](y),$$

it follows that $V_\kappa[E_\mu]$ converges to the right-hand side of the stated formula, which concludes the proof.
In particular, if $h_\kappa(x) = 1$ then $V_\kappa = \mathrm{id}$ and the formula reduces to

$$\sum_{n=0}^\infty \sum_{|\alpha|=n} \widetilde H_\alpha(x)\, \widetilde H_\alpha(y)\, z^n = \prod_{i=1}^d \frac{1}{(1-z^2)^{1/2}} \exp\Bigl( \frac{2x_iy_iz - z^2(x_i^2 + y_i^2)}{1-z^2} \Bigr),$$

which is a product of the classical Mehler formula.
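The one-dimensional factor is the classical Mehler formula, easy to confirm numerically: with the physicists' Hermite polynomials, $\widetilde H_n = H_n/\sqrt{2^n n!}$, and the series converges geometrically for $0 < z < 1$. The sketch below assumes only the standard Hermite recurrence.

```python
from math import factorial, sqrt, exp

def hermite(n, x):
    # physicists' Hermite polynomials: H_{k+1} = 2 x H_k - 2 k H_{k-1}
    h0, h1 = 1.0, 2.0 * x
    if n == 0: return h0
    for k in range(1, n):
        h0, h1 = h1, 2.0 * x * h1 - 2.0 * k * h0
    return h1

x, y, z = 0.7, -0.4, 0.35
lhs = sum(hermite(n, x) * hermite(n, y) * z**n / (2**n * factorial(n))
          for n in range(60))
rhs = exp((2*x*y*z - z**2 * (x**2 + y**2)) / (1 - z**2)) / sqrt(1 - z**2)
print(lhs, rhs)
```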
In the case of the Cartesian Hermite polynomials $\widetilde H_\alpha$, we can derive an explicit generating function. Let us define $b_\alpha = b^{\kappa_1}_{\alpha_1} \cdots b^{\kappa_d}_{\alpha_d}$, where $b^\mu_n$ is the normalization constant of $H^\mu_n$, defined in Subsection 1.5.1.
Theorem 8.3.8 For all $x \in B^d$ and $y \in \mathbb{R}^d$,

$$\sum_{|\alpha|=n} \frac{H_\alpha(y)}{b_\alpha}\, x^\alpha = \frac{1}{\sqrt{\pi}\, 2^n n!} \int_{\mathbb{R}} \int_{[-1,1]^d} H_n\bigl(z(x,y,t,s)\bigr) \prod_{i=1}^d c_{\kappa_i}(1+t_i)(1-t_i^2)^{\kappa_i-1}\,dt\; e^{-s^2}\,ds,$$

where $z(x,y,t,s) = t_1x_1y_1 + \dots + t_dx_dy_d + s\sqrt{1-\|x\|^2}$.
Proof We use the limit relation in Theorem 8.3.5 and the relation

$$\lim_{\mu\to\infty} \mu^{-n/2}\, \widetilde C^{(\mu,\kappa)}_n(x) = 2^n\, [b^\kappa_n]^{-1/2}\, x^n,$$

which can be verified by using Definition 1.5.5 of $C^{(\lambda,\mu)}_n(x)$, the series expansion in terms of the ${}_2F_1$ formula for the Jacobi polynomial, and the formula for the structure constants. Using the formula for $P^n_\alpha(W^B_{\kappa,\mu})$ in Theorem 8.1.11 and the fact that $h^B_{\alpha,n} \to 1$ as $\mu \to \infty$, we obtain

$$\lim_{\mu\to\infty} \mu^{-n/2}\, P^n_\alpha(W^B_{\kappa,\mu}; x) = 2^n\, [b_\alpha]^{-1/2}\, x^\alpha,$$

which implies, together with Theorem 8.3.5, the limit relation

$$\lim_{\mu\to\infty} \mu^{-n/2} \sum_{|\alpha|=n} P^n_\alpha(W^B_{\kappa,\mu}; x)\, P^n_\alpha(W^B_{\kappa,\mu}; y/\sqrt{\mu}) = 2^n \sum_{|\alpha|=n} \frac{H_\alpha(y)}{b_\alpha}\, x^\alpha.$$

The sum on the left-hand side is $P_n(W^B_{\kappa,\mu}; x, y/\sqrt{\mu})$, which satisfies the explicit formula in Theorem 8.1.16. Now multiply the right-hand side of that explicit formula by $\mu^{-n/2}$, replace $y$ by $y/\sqrt{\mu}$ and then make the change of variable $s \mapsto s/\sqrt{\mu}$; its limit as $\mu \to \infty$ becomes, using Proposition 1.5.7, the right-hand side of the stated formula.
In the case $\kappa = 0$, Theorem 8.3.8 yields the following formula for the classical Hermite polynomials:

$$\sum_{|\alpha|=n} \frac{H_\alpha(y)}{\alpha!}\, x^\alpha = \frac{1}{n!\sqrt{\pi}} \int_{\mathbb{R}} H_n\bigl( x_1y_1 + \dots + x_dy_d + s\sqrt{1-\|x\|^2} \bigr)\, e^{-s^2}\,ds$$

for all $x \in B^d$ and $y \in \mathbb{R}^d$. Furthermore, it is an extension of the following well-known formula for the classical Hermite polynomials (Erdélyi et al. [1953, Vol. II, p. 288]):

$$\sum_{|\alpha|=n} \frac{H_\alpha(y)}{\alpha!}\, x^\alpha = \frac{1}{n!}\, H_n(x_1y_1 + \dots + x_dy_d), \qquad \|x\| = 1.$$
In general, in the case $\|x\| = 1$ we have the following corollary.

Corollary 8.3.9 For $\|x\| = 1$ and $y \in \mathbb{R}^d$,

$$\sum_{|\alpha|=n} \frac{H_\alpha(y)}{b_\alpha}\, x^\alpha = \frac{1}{2^n n!} \int_{[-1,1]^d} H_n(t_1x_1y_1 + \dots + t_dx_dy_d) \prod_{i=1}^d c_{\kappa_i}(1+t_i)(1-t_i^2)^{\kappa_i-1}\,dt.$$
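The $\kappa = 0$ case displayed just above the corollary can be checked directly for $d = 2$: for any unit vector $x$, the sum of products of Hermite polynomials collapses to a single Hermite polynomial of $\langle x, y\rangle$. The sketch assumes only the Hermite recurrence.

```python
import numpy as np
from math import factorial

def hermite(n, x):
    # physicists' Hermite polynomials via recurrence
    h0, h1 = 1.0, 2.0 * x
    if n == 0: return h0
    for k in range(1, n):
        h0, h1 = h1, 2.0 * x * h1 - 2.0 * k * h0
    return h1

n = 5
x = np.array([0.6, 0.8])           # ||x|| = 1
y = np.array([1.3, -0.25])
lhs = sum(hermite(a, y[0]) * hermite(n - a, y[1])
          / (factorial(a) * factorial(n - a))
          * x[0]**a * x[1]**(n - a) for a in range(n + 1))
rhs = hermite(n, float(x @ y)) / factorial(n)
print(lhs, rhs)
```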
8.4 Generalized Laguerre Polynomials

Much as for the relation between the orthogonal polynomials on $T^d$ and those on $B^d$ (see Section 4.4), there is a relation between the orthogonal polynomials on $\mathbb{R}^d_+$ and those on $\mathbb{R}^d$. In fact, since

$$\int_{\mathbb{R}^d} f(y_1^2,\dots,y_d^2)\,dy = \int_{\mathbb{R}^d_+} f(x_1,\dots,x_d)\, \frac{dx}{\sqrt{x_1\cdots x_d}}$$
H
(x) =
W(x
2
1
, . . . , x
2
d
) and W
L
(y) = W(y
1
, . . . , y
d
)/

y
1
y
d
. Let V
2n
(W
H
, Z
d
2
) be the
collection of all elements in V
2n
(W
H
) that are invariant under Z
d
2
.
Theorem 8.4.1 Let W
H
and W
L
be dened as above. Then the relation
(4.4.1) denes a one-to-one correspondence between an orthonormal basis of
V
2n
(W
H
, Z
d
2
) and an orthonormal basis of V
d
n
(W
L
).
Hence, we can study the Laguerre polynomials via the Hermite polynomials
that are invariant under Z
d
2
. We are mainly interested in the case where H is a
reection-invariant weight function.
284 Generalized Classical Orthogonal Polynomials
Definition 8.4.2 Let $h_\kappa$ be a reflection-invariant weight function and assume that $h_\kappa$ is also invariant under $\mathbb{Z}_2^d$. Define a weight function on $\mathbb{R}^d_+$ by
$$
W^L_\kappa(x) = h_\kappa^2\bigl(\sqrt{x_1},\ldots,\sqrt{x_d}\bigr)\,e^{-|x|}\big/\sqrt{x_1\cdots x_d},\qquad x\in\mathbb{R}^d_+.
$$
The polynomials orthogonal with respect to $W^L_\kappa$ are called generalized Laguerre polynomials.

An orthonormal basis can be given in terms of $\mathbb{Z}_2^d$-invariant generalized Hermite polynomials; see Proposition 8.3.2.

Proposition 8.4.3 For $0\le j\le n$ let $\{Y^h_{\beta,2n-2j}\}$ denote an orthonormal basis of $\mathcal{H}^d_{2n-2j}(h_\kappa^2)$; then the polynomials
$$
P^n_{\beta,j}(W^L_\kappa;x) = [c^L_{j,n}]^{-1}\,H^{2n-2j+\lambda_\kappa}_{2j}\bigl(\sqrt{|x|}\bigr)\,Y^h_{\beta,2n-2j}\bigl(\sqrt{x_1},\ldots,\sqrt{x_d}\bigr),
$$
where $c^L_{j,n}=c^H_{j,n}$ from Proposition 8.3.2, form an orthonormal basis of $\mathcal{V}^d_n(W^L_\kappa)$.
Just as in the case of the orthogonal polynomials on the simplex, in the indecomposable case the requirement that $h_\kappa$ is also invariant under $\mathbb{Z}_2^d$ limits us to essentially two possibilities.

The abelian group $\mathbb{Z}_2^d$  The weight function is
$$
W^L_\kappa(x;\mathbb{Z}_2^d) = x_1^{\kappa_1}\cdots x_d^{\kappa_d}\,e^{-|x|}. \qquad (8.4.1)
$$
The orthogonal polynomials associated with $W^L_\kappa$ are the classical orthogonal polynomials from Subsection 5.1.4.

The hyperoctahedral group  The weight function is
$$
W^L_\kappa(x) = \prod_{i=1}^d x_i^{\kappa'}\prod_{1\le i<j\le d}|x_i-x_j|^{2\kappa}\,e^{-|x|}. \qquad (8.4.2)
$$
Note that the parameters in the two weight functions above are chosen so that the notation is in line with the usual multiple Laguerre polynomials. They are related to $W^H_{\kappa+1/2}(x)=\prod_{i=1}^d |x_i|^{2\kappa_i+1}e^{-|x|^2}$ and to $W^H_{\kappa+1/2}(x)$ in (8.3.4), respectively, where we define $\kappa+\frac12=(\kappa_1+\frac12,\ldots,\kappa_d+\frac12)$.
Another approach to the generalized Laguerre polynomials is to treat them as the limit of the orthogonal polynomials on the simplex. Indeed, from the explicit formulae for orthonormal bases in Proposition 8.2.6, the limit relation (8.3.1) leads to the following.

Theorem 8.4.4 For the polynomials $P^n_{\beta,j}(W^T_{\kappa,\mu})$ defined in Proposition 8.2.5 and the polynomials $P^n_{\beta,j}(W^L_\kappa)$ defined in Proposition 8.4.3,
$$
\lim_{\mu\to\infty} P^n_{\beta,j}\bigl(W^T_{\kappa+1/2,\mu};x/\mu\bigr) = P^n_{\beta,j}(W^L_\kappa;x).
$$

In the case where $h_\kappa$ is invariant under $\mathbb{Z}_2^d$, the multiple Laguerre polynomials are also the limit of the classical orthogonal polynomials on $T^d$. The verification of the following limit is left to the reader.

Theorem 8.4.5 For the classical orthogonal polynomials $P^n_\alpha(W^T_{\kappa+1/2,\mu})$ defined in Proposition 5.3.1 with $\mu=\kappa_{d+1}$,
$$
\lim_{\mu\to\infty} P^n_\alpha\bigl(W^T_{\kappa+1/2,\mu};x/\mu\bigr) = \widetilde{L}^{\kappa_1}_{\alpha_1}(x_1)\cdots\widetilde{L}^{\kappa_d}_{\alpha_d}(x_d).
$$
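Limits of this kind rest on the classical one-variable relation $\lim_{\mu\to\infty}P_n^{(\alpha,\mu)}(1-2x/\mu)=L_n^{\alpha}(x)$ between Jacobi and Laguerre polynomials. The sketch below (an added illustration, not from the text) checks this limit numerically using the standard three-term recurrences.

```python
from math import isclose

def jacobi(n, a, b, x):
    """Jacobi polynomial P_n^{(a,b)}(x) by its three-term recurrence."""
    p0, p1 = 1.0, 0.5 * (a - b + (a + b + 2.0) * x)
    if n == 0:
        return p0
    for k in range(1, n):
        c1 = 2.0 * (k + 1) * (k + a + b + 1) * (2*k + a + b)
        c2 = (2*k + a + b + 1) * (a*a - b*b)
        c3 = (2*k + a + b) * (2*k + a + b + 1) * (2*k + a + b + 2)
        c4 = 2.0 * (k + a) * (k + b) * (2*k + a + b + 2)
        p0, p1 = p1, ((c2 + c3 * x) * p1 - c4 * p0) / c1
    return p1

def laguerre(n, a, x):
    """Laguerre polynomial L_n^{(a)}(x) by its three-term recurrence."""
    l0, l1 = 1.0, 1.0 + a - x
    if n == 0:
        return l0
    for k in range(1, n):
        l0, l1 = l1, ((2*k + 1 + a - x) * l1 - (k + a) * l0) / (k + 1)
    return l1

# the Jacobi polynomial at 1 - 2x/mu approaches the Laguerre polynomial as mu grows
n, a, x = 4, 0.5, 1.2
assert isclose(jacobi(n, a, 1.0e4, 1.0 - 2.0 * x / 1.0e4), laguerre(n, a, x), rel_tol=5e-2)
assert isclose(jacobi(n, a, 1.0e6, 1.0 - 2.0 * x / 1.0e6), laguerre(n, a, x), rel_tol=5e-4)
```

The discrepancy shrinks like $O(1/\mu)$, consistent with the first-order correction terms in the hypergeometric expansion.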
From the relation in Theorem 8.4.1 we can derive the differential–difference equations satisfied by the generalized Laguerre polynomials from those satisfied by the generalized Hermite polynomials, in the same way that the equation satisfied by the orthogonal polynomials on $T^d$ is derived. We can also derive these equations from the limit relation in Theorem 8.4.4, in a similar way to the equation satisfied by the generalized Hermite polynomials. Using either method, we obtain a proof of the fact that the multiple Laguerre polynomials satisfy the partial differential equation (5.1.9). Moreover, we have:

Theorem 8.4.6 The orthogonal polynomials of $\mathcal{V}^d_n(W^L_\kappa)$ associated with the hyperoctahedral group satisfy the differential–difference equation
$$
\Bigl\{\sum_{i=1}^d\Bigl[x_i\frac{\partial^2}{\partial x_i^2}+(\kappa'+1-x_i)\frac{\partial}{\partial x_i}\Bigr] + \kappa\sum_{1\le i<j\le d}\frac{1}{x_i-x_j}\Bigl[2\Bigl(x_i\frac{\partial}{\partial x_i}-x_j\frac{\partial}{\partial x_j}\Bigr)-\frac{x_i+x_j}{x_i-x_j}\bigl[1-(i\,j)\bigr]\Bigr]\Bigr\}P = -nP.
$$
As in the Hermite case, this equation is an example of the Calogero–Sutherland model in physics; see the discussion in Section 11.6.
There is another way of obtaining an orthonormal basis for the generalized Laguerre polynomials, using orthogonal polynomials on the simplex. In fact, as in Lemma 8.2.4, the formula
$$
\int_{\mathbb{R}^d_+} f(x)\,dx = \int_0^\infty s^{d-1}\int_{|u|=1} f(su)\,du\,ds \qquad (8.4.3)
$$
holds for an integrable function $f$ on $\mathbb{R}^d_+$. Denote by $\{R^m_\beta\}_{|\beta|=m}$ a sequence of orthonormal homogeneous polynomials of degree $m$ in the homogeneous coordinates from Proposition 4.4.8, associated with the weight function $W(u)=h_\kappa^2(\sqrt{u_1},\ldots,\sqrt{u_d})/\sqrt{u_1\cdots u_d}$ on the simplex.

Proposition 8.4.7 Let the $R^m_\beta$ be defined as above. Then an orthonormal basis of $\mathcal{V}^d_n(W^L_\kappa)$ consists of
$$
P^n_{\beta,m}(x) = b_{m,n}\,L^{2m+\lambda_\kappa}_{n-m}(s)\,R^m_\beta(x),\qquad 0\le m\le n,\quad |\beta|=m,
$$
where $x=su$, $|u|=1$ and $[b_{m,n}]^2 = \Gamma\bigl(\lambda_\kappa+2m+\tfrac12\bigr)\big/\Gamma\bigl(\lambda_\kappa+\tfrac12\bigr)$.

The verification of this basis is left to the reader.
From the limit relation in Theorem 8.4.4 we can also derive a Mehler-type formula for the Laguerre polynomials.

Theorem 8.4.8 Let $\{P^n_\alpha(W^L_\kappa): |\alpha|=n,\ n\in\mathbb{N}_0\}$ denote an orthonormal basis with respect to $W^L_\kappa$ on $\mathbb{R}^d_+$. Then, for $0<z<1$ and $x,y\in\mathbb{R}^d_+$,
$$
\sum_{n=0}^\infty\sum_{|\alpha|=n} P^n_\alpha(W^L_\kappa;x)\,P^n_\alpha(W^L_\kappa;y)\,z^n = \frac{1}{(1-z)^{\gamma_\kappa+d/2}}\exp\Bigl(-\frac{z(|x|+|y|)}{1-z}\Bigr) V\Bigl[\exp\Bigl(\frac{2\sqrt{z}\,\langle x^{1/2},\cdot\rangle}{1-z}\Bigr)\Bigr]\bigl(y^{1/2}\bigr).
$$

Proof The proof follows the same lines as that of Theorem 8.3.7, using the explicit formula for the reproducing kernel of $\mathcal{V}^d_n(W^T_{\kappa,\mu})$ in Theorem 8.2.7. We will merely point out the differences. Let $K_\mu$ be defined from the formula for the reproducing kernel in Theorem 8.2.7, in a similar way to the kernel $V[E_\mu(x,y,\cdot,z)](y)$ in the proof of Theorem 8.3.7. Since we have $C^\lambda_{2n}$ in the formula of Theorem 8.2.7, we need to write
$$
\widetilde{K}_\mu(x,y,u,z) = \tfrac12\bigl[K_\mu(x,y,u,z)+K_\mu(x,y,u,-z)\bigr]
$$
in terms of the Gegenbauer series. Since $\widetilde{K}_\mu$ is even in $z$, the coefficients of $C^{(\lambda)}_{2n+1}$ in the expansion vanish. In this way $\widetilde{K}_\mu$ can be written as an infinite sum in $z^{2n}$ with coefficients given by $P_n(W^T_{\kappa,\mu};x,y)$. The rest of the proof follows the proof of Theorem 8.3.7. We leave the details to the reader.
In the case of the classical multiple Laguerre polynomials the Mehler formula can be written out explicitly, using the explicit formula for the intertwining operator in Theorem 7.5.4. Moreover, it can be expressed in terms of the modified Bessel function $I_\nu$ of order $\nu$,
$$
I_\nu(x) = \frac{1}{\sqrt{\pi}\,\Gamma(\nu+\frac12)}\Bigl(\frac{x}{2}\Bigr)^{\nu}\int_{-1}^1 e^{xt}(1-t^2)^{\nu-1/2}\,dt.
$$
This can be written as $I_\nu(x)=e^{-\nu\pi i/2}J_\nu(xe^{i\pi/2})$, where $J_\nu$ is the standard Bessel function of order $\nu$. The formula then becomes
$$
\sum_{n=0}^\infty\sum_{|\alpha|=n}\widetilde{L}^\kappa_\alpha(x)\,\widetilde{L}^\kappa_\alpha(y)\,z^n = \prod_{i=1}^d \frac{\Gamma(\kappa_i+1)}{1-z}\exp\Bigl(-\frac{z(x_i+y_i)}{1-z}\Bigr)(x_iy_iz)^{-\kappa_i/2}\, I_{\kappa_i}\Bigl(\frac{2\sqrt{zx_iy_i}}{1-z}\Bigr),
$$
which is the product over $i$ of the classical Mehler formula for the Laguerre polynomials of one variable (see, for example, Erdélyi et al. [1953, Vol. II, p. 189]).
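Each factor on the right-hand side is the one-variable Mehler (Hardy–Hille) formula, which can be checked numerically. The sketch below (an added illustration, not part of the text) takes $d=1$, with Laguerre polynomials orthonormalized with respect to the normalized weight $x^\kappa e^{-x}/\Gamma(\kappa+1)$, and evaluates $I_\kappa$ by its power series.

```python
from math import exp, gamma, isclose, sqrt

def laguerre_list(nmax, a, x):
    """L_0^{(a)}(x), ..., L_nmax^{(a)}(x) via the three-term recurrence."""
    vals = [1.0, 1.0 + a - x]
    for k in range(1, nmax):
        vals.append(((2*k + 1 + a - x) * vals[k] - (k + a) * vals[k - 1]) / (k + 1))
    return vals[:nmax + 1]

def bessel_i(a, x, terms=60):
    """Modified Bessel function I_a(x) summed from its power series."""
    return sum((x / 2.0)**(2*k + a) / (gamma(k + 1.0) * gamma(k + a + 1.0))
               for k in range(terms))

def mehler_lhs(a, x, y, z, nterms=120):
    """sum_n [n! Gamma(a+1)/Gamma(n+a+1)] L_n^a(x) L_n^a(y) z^n (orthonormal Laguerre)."""
    lx, ly = laguerre_list(nterms, a, x), laguerre_list(nterms, a, y)
    s, coef = 0.0, 1.0          # coef = n! Gamma(a+1) / Gamma(n+a+1), updated iteratively
    for n in range(nterms + 1):
        s += coef * lx[n] * ly[n] * z**n
        coef *= (n + 1.0) / (n + 1.0 + a)
    return s

def mehler_rhs(a, x, y, z):
    return (gamma(a + 1.0) / (1.0 - z) * exp(-z * (x + y) / (1.0 - z))
            * (x * y * z)**(-a / 2.0) * bessel_i(a, 2.0 * sqrt(x * y * z) / (1.0 - z)))

assert isclose(mehler_lhs(0.5, 0.3, 0.7, 0.4), mehler_rhs(0.5, 0.3, 0.7, 0.4), rel_tol=1e-8)
```

The normalization ratio $n!\,\Gamma(\kappa+1)/\Gamma(n+\kappa+1)$ is updated multiplicatively to avoid overflowing the gamma function at large $n$.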
8.5 Notes

The partial differential equations satisfied by the classical orthogonal polynomials were derived using the explicit formula for the biorthonormal basis in Sections 5.2 and 5.3; see Chapter XII of Erdélyi et al. [1953]. The derivation of the differential–difference equation satisfied by the generalized classical orthogonal polynomials was given in Xu [2001b].

The $U_\alpha$ basis defined by the Rodrigues-type formula is not mutually orthogonal. There is a biorthogonal basis defined by the projection operator $\operatorname{proj}_n(W^B_{\kappa,\mu})$ onto $\mathcal{V}_n(W^B_{\kappa,\mu})$: for $\alpha\in\mathbb{N}^d_0$, define
$$
R_\alpha(x) := \operatorname{proj}_n\bigl(W^B_{\kappa,\mu}; Vx^\alpha, x\bigr),
$$
where $V$ is the intertwining operator associated with $h_\kappa$. Then $\{U_\alpha: |\alpha|=n\}$ and $\{R_\alpha: |\alpha|=n\}$ are biorthogonal with respect to $W^B_{\kappa,\mu}$ on $B^d$. These bases were defined and studied in Xu [2005d], which contains further properties of $U_\alpha$. The monic orthogonal polynomials in the case of $\mathbb{Z}_2^d$ were studied in Xu [2005c].

The explicit formulae for the reproducing kernel in Theorems 8.1.10 and 8.2.8 were useful for a number of problems. The special case of the formula in Theorem 5.2.8 was first derived by summing up the product of orthogonal polynomials in Proposition 5.2.1 and making use of the product formula of Gegenbauer polynomials, in Xu [1999a]; it appeared first in Xu [1996a]. The connection to h-harmonics was essential in the derivation of Theorem 8.2.8 in Xu [1998d]. These formulae have been used for studying the summability of orthogonal expansions, see the discussion in the next chapter, and also in constructing cubature formulae in Xu [2000c].

The orthonormal basis in Proposition 8.1.21 was first discovered by Logan and Shepp [1975] in connection with the Radon transform. The derivation from the Funk–Hecke formula and further discussions were given in Xu [2000b]. This basis was used in Bojanov and Petrova [1998] in connection with a numerical integration scheme. More recently it was used in Xu [2006a] to derive a reconstruction algorithm for computerized tomography. In this connection, orthogonal polynomials on a cylinder were studied in Wade [2010].

The relation between h-harmonics and orthogonal polynomials on the simplex was used in Dunkl [1984b] in the case d = 2 with a product weight function. The general classical orthogonal polynomials were studied in Xu [2001b, 2005a].

In the case of symmetric groups, the differential–difference equation for the generalized Hermite polynomials is essentially the Schrödinger equation for a type of Calogero–Sutherland model; see the discussion in Section 11.6. The equation for a general reflection group was derived in Rösler [1998] using a different method. The derivation as the limit of the h-Laplacian is in Xu [2001b].

Hermite introduced biorthonormal polynomials for the classical Hermite weight function, which were subsequently studied by many authors; see Appell and de Fériet [1926] and Erdélyi et al. [1953, Vol. II, Chapter XII]. Making use of the intertwining operator $V$, such a basis can be defined for the generalized Hermite polynomials associated with a reflection group, from which follows another proof of the Mehler-type formula for the generalized Hermite polynomials; see Rösler [1998]. Our proof follows Xu [2001b]. In the case where $h_\kappa$ is associated with the symmetric group, the Mehler-type formula was proved in Baker and Forrester [1997a]. For the classical Mehler formula see Erdélyi et al. [1953, Vol. II, p. 194], for example.
9

Summability of Orthogonal Expansions

The basics of Fourier orthogonal expansion were discussed in Section 3.6. In the present chapter various convergence results for orthogonal expansions are discussed. We start with a result on a fairly general system of orthogonal polynomials, then consider the convergence of partial-sum operators and Cesàro means, and move on to the summability of the orthogonal expansions for h-harmonics and the generalized classical orthogonal polynomials.
9.1 General Results on Orthogonal Expansions

We start with a general result about the convergence of the partial sum of an orthogonal expansion, under a condition on the behavior of the Christoffel function defined in (3.6.8). In the second subsection we discuss the basics of Cesàro means.

9.1.1 Uniform convergence of partial sums

Recall that $\mathcal{M}$ was defined in Subsection 3.2.3 as the set of nonnegative Borel measures on $\mathbb{R}^d$ with moments of all orders. Let $\mu\in\mathcal{M}$ satisfy the condition (3.2.17) in Theorem 3.2.17 and let it have support set $\Omega\subset\mathbb{R}^d$. For $f\in L^2(d\mu)$, let $E_n(d\mu;f)_2$ be the error of the best $L^2(d\mu)$ approximation from $\Pi^d_n$; that is,
$$
E_n(d\mu;f)_2 = \inf_{P\in\Pi^d_n}\Bigl[\int_\Omega |f(x)-P(x)|^2\,d\mu(x)\Bigr]^{1/2}.
$$
Note that $E_n(d\mu;f)_2\to 0$ as $n\to\infty$ according to Theorem 3.2.18. Let $S_n(d\mu;f)$ denote the partial sum of the Fourier orthogonal expansion with respect to the measure $d\mu$, as defined in (3.6.3), and let $\Lambda_n(d\mu;x)$ denote the Christoffel function as defined in (3.6.8).
Lemma 9.1.1 Let $\mu\in\mathcal{M}$ satisfy (3.2.17). If, for $x\in\mathbb{R}^d$,
$$
\sum_{m=0}^\infty \frac{E_{2^m}(d\mu;f)_2}{\sqrt{\Lambda_{2^{m+1}}(d\mu,x)}} < \infty, \qquad (9.1.1)
$$
then $S_n(d\mu;f)$ converges at the point $x$. If (9.1.1) holds uniformly on a set $E\subset\Omega$ then $S_n(d\mu;f)$ converges uniformly on $E$.

Proof Since $S_n(d\mu;f)$ is the best $L^2(d\mu)$ approximation of $f\in L^2(d\mu)$ from $\Pi^d_n$, using (3.6.3) and the Parseval identity in Theorem 3.6.8 as well as the orthogonality,
$$
E_n(d\mu;f)_2^2 = \int_{\mathbb{R}^d}\bigl[f-S_n(d\mu;f)\bigr]^2\,d\mu = \int_{\mathbb{R}^d}\Bigl[\sum_{k=n+1}^\infty \mathbf{a}^{\mathsf{T}}_k(f)\,\mathbb{P}_k(x)\Bigr]^2 d\mu = \sum_{k=n+1}^\infty \|\mathbf{a}_k(f)\|^2_2.
$$
Therefore, using the Cauchy–Schwarz inequality, we obtain
$$
\Bigl[\sum_{k=2^m+1}^{2^{m+1}} \bigl|\mathbf{a}^{\mathsf{T}}_k(f)\,\mathbb{P}_k(x)\bigr|\Bigr]^2 \le \sum_{k=2^m+1}^{2^{m+1}} \|\mathbf{a}^{\mathsf{T}}_k(f)\|^2_2 \sum_{k=2^m+1}^{2^{m+1}} \|\mathbb{P}_k(x)\|^2_2 \le \sum_{k=2^m+1}^\infty \|\mathbf{a}^{\mathsf{T}}_k(f)\|^2_2 \sum_{k=0}^{2^{m+1}} \|\mathbb{P}_k(x)\|^2_2 = \bigl[E_{2^m}(d\mu;f)_2\bigr]^2\bigl[\Lambda_{2^{m+1}}(d\mu;x)\bigr]^{-1}.
$$
Taking the square root and summing over $m$ proves the stated result.
Proposition 9.1.2 Let $\mu\in\mathcal{M}$ satisfy (3.2.17). If
$$
\bigl[n^d\Lambda_n(d\mu;x)\bigr]^{-1} \le M^2 \qquad (9.1.2)
$$
holds uniformly on a set $E$ with some constant $M>0$ then $S_n(d\mu;f)$ is uniformly and absolutely convergent on $E$ for every $f\in L^2(d\mu)$ such that
$$
\sum_{k=1}^\infty E_k(d\mu;f)_2\,k^{(d-2)/2} < \infty. \qquad (9.1.3)
$$

Proof Since $E_n(d\mu;f)_2$ is nonincreasing,
$$
\frac{E_{2^m}(d\mu;f)_2}{\sqrt{\Lambda_{2^{m+1}}(d\mu,x)}} \le M2^{(m+1)d/2}\,\frac{1}{2^{m-1}}\sum_{k=2^{m-1}+1}^{2^m} E_k(d\mu;f)_2 = M2^{d/2+1}\,2^{m(d-2)/2}\sum_{k=2^{m-1}+1}^{2^m} E_k(d\mu;f)_2 \le c\sum_{k=2^{m-1}+1}^{2^m} E_k(d\mu;f)_2\,k^{(d-2)/2}.
$$
Summing over $m$, the stated result follows from Lemma 9.1.1.
Corollary 9.1.3 Let $\mu\in\mathcal{M}$ and suppose that $\mu$ has a compact support set $\Omega$ in $\mathbb{R}^d$. Suppose that (9.1.2) holds uniformly on a subset $E\subset\Omega$. If $f\in C^{[d/2]}(\Omega)$ and each of its $[\frac{d}{2}]$th derivatives satisfies
$$
\bigl|D^{[d/2]}f(x)-D^{[d/2]}f(y)\bigr| \le c\|x-y\|^\beta_2,
$$
where, for odd $d$, $\beta>\frac12$ and, for even $d$, $\beta>0$, then $S_n(d\mu;f)$ converges uniformly and absolutely to $f$ on $E$.

Proof For $f$ satisfying the above assumptions, a standard result in approximation theory (see, for example, Lorentz [1986, p. 90]) shows that there exists $P\in\Pi^d_n$ such that
$$
E_n(d\mu,f)_2 \le \|f-P\|_\infty \le cn^{-([d/2]+\beta)}.
$$
Using this estimate, our assumption on $\beta$ implies that (9.1.3) holds.
Since there are $\binom{n+d}{d}=O(n^d)$ terms in the sum of $[\Lambda_n(d\mu;x)]^{-1}=K_n(x,x)$, the condition (9.1.2) appears to be a reasonable assumption. We will show that this condition often holds. In the following, if $d\mu$ is given by a weight function, $d\mu(x)=W(x)dx$, then we write $\Lambda_n(W;x)$. The notation $A\asymp B$ means that there are two constants $c_1$ and $c_2$ such that $c_1\le A/B\le c_2$.
Proposition 9.1.4 Let $W$, $W_0$ and $W_1$ be weight functions defined on $\mathbb{R}^d$. If $c_1W_0(x)\le W(x)\le c_2W_1(x)$ for some constants $c_2>c_1>0$ then
$$
c_1\Lambda_n(W_0;x) \le \Lambda_n(W;x) \le c_2\Lambda_n(W_1;x).
$$

Proof By Theorem 3.6.6, $\Lambda_n$ satisfies the property
$$
\Lambda_n(W;x) = \min_{P(x)=1,\,P\in\Pi^d_n}\int_{\mathbb{R}^d}|P(y)|^2 W(y)\,dy;
$$
the stated inequality follows as an easy consequence.

We will establish (9.1.2) for the classical-type weight functions. The proposition allows us an extension to more general classes of weight functions.
Proposition 9.1.5 Let $W(x)=\prod_{i=1}^d w_i(x_i)$, where the $w_i$ are weight functions defined on $[-1,1]$. Then $\Lambda_n(W;x)\asymp n^{-d}$ for all $x$ in the interior of $[-1,1]^d$.

Proof Let $p_m(w_i)$ denote the orthonormal polynomials with respect to $w_i$. The orthonormal polynomials with respect to $W$ are $P^n_\alpha(x)=p_{\alpha_1}(w_1;x_1)\cdots p_{\alpha_d}(w_d;x_d)$. Let $\lambda_n(w_i;t)$ be the Christoffel function associated with the weight function $w_i$ on $[-1,1]$. It is known that $\lambda_n(w_i;t)\asymp n^{-1}$ for $t\in(-1,1)$; see, for example, Nevai [1986]. Hence, on the one hand,
$$
\Lambda_n(W;x) = \Bigl[\sum_{m=0}^n\sum_{|\alpha|=m}\bigl[P^n_\alpha(x)\bigr]^2\Bigr]^{-1} \ge \Bigl[\sum_{k_1=0}^n\cdots\sum_{k_d=0}^n p^2_{k_1}(w_1;x_1)\cdots p^2_{k_d}(w_d;x_d)\Bigr]^{-1} = \prod_{i=1}^d \lambda_n(w_i;x_i) \asymp n^{-d}.
$$
On the other hand, since $\{\alpha: |\alpha|\le n\}$ contains $\{\alpha: 0\le\alpha_i\le[n/d]\}$, the inequality can be reversed to conclude that $\Lambda_n(W;x)\le\prod_{i=1}^d\lambda_{[n/d]}(w_i;x_i)\asymp n^{-d}$.
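The one-variable estimate $\lambda_n(w_i;t)\asymp n^{-1}$ quoted in this proof can be seen concretely for the normalized Chebyshev weight $w(t)=\pi^{-1}(1-t^2)^{-1/2}$, whose orthonormal polynomials are $p_0=1$ and $p_k=\sqrt{2}\,T_k$. The sketch below (an added illustration, not from the text) evaluates $1/\lambda_n(w;t)=\sum_{k=0}^n p_k(t)^2$ directly.

```python
from math import acos, cos

def christoffel_inv(n, t):
    """1/lambda_n(w; t) for the normalized Chebyshev weight w(t) = 1/(pi sqrt(1-t^2)):
    sum_{k=0}^n p_k(t)^2 with p_0 = 1, p_k = sqrt(2) T_k, and T_k(cos theta) = cos(k theta)."""
    theta = acos(t)
    return 1.0 + 2.0 * sum(cos(k * theta)**2 for k in range(1, n + 1))

t = 0.3
for n in (50, 100, 200, 400):
    ratio = christoffel_inv(n, t) / n   # 1/(n lambda_n) should stay bounded above and below
    assert 0.5 < ratio < 2.0
```

Indeed $\sum_{k=1}^n\cos^2(k\theta)=n/2+O(1/|\sin\theta|)$, so the ratio tends to 1 at every fixed interior point.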
Proposition 9.1.6 For the weight function $W=W^B_{\kappa,\mu}$ in (8.1.5) on $B^d$ or the weight function $W=W^T_{\kappa,\mu}$ in (8.2.1) on $T^d$, $[n^d\Lambda_n(W;x)]^{-1}\le c$ for all $x$ in the interior of $B^d$ or $T^d$, respectively.

Proof We prove only the case of $T^d$; the case of $B^d$ is similar. In the formula for the reproducing kernel in Theorem 8.2.8, let $\kappa_i\to 0$ and $\mu\to 0$ and use the formula (1.5.1) repeatedly to obtain
$$
P_n\bigl(W^T_{0,0};x,x\bigr) = \frac{2n+\frac{d-1}{2}}{2^d(d-1)}\sum_{\varepsilon\in\mathbb{Z}^{d+1}_2} C^{(d-1)/2}_{2n}\bigl(x_1\varepsilon_1+\cdots+\varepsilon_{d+1}x_{d+1}\bigr),
$$
where $x_{d+1}=1-x_1-\cdots-x_d$. Hence, using the value $C^{(d-1)/2}_n(1)=(d-1)_n/n!\asymp n^{d-2}$ and the fact that $C^{(d-1)/2}_n(x)=O(n^{(d-3)/2})$ for $x\in(-1,1)$ (see, for example, Szegő [1975, p. 196]), we have $P_n(W^T_{0,0};x,x)\asymp n^{d-1}$. Consequently, we obtain $\Lambda_n(W^T_{0,0};x)\asymp n^{-d}$, since $\Lambda_n(W^T_{0,0};x)=[K_n(W^T_{0,0};x,x)]^{-1}$. Now, if all the parameters $\kappa_i$ and $\mu$ are even integers then the function $q$ defined by $q^2(x)=W^T_{\kappa,\mu}(x)/W^T_{0,0}(x)$ is a polynomial of degree $m=\frac12(|\kappa|+\mu)$. Using the property of $\Lambda_n$ in Theorem 3.6.6, we obtain
$$
\Lambda_n(W^T_{\kappa,\mu};x) = \min_{P(x)=1,\,P\in\Pi^d_n}\int_{T^d}[P(y)]^2\,W^T_{\kappa,\mu}(y)\,dy = q^2(x)\min_{P(x)=1,\,P\in\Pi^d_n}\int_{T^d}\bigl[P(y)q(y)/q(x)\bigr]^2\,W^T_{0,0}(y)\,dy \ge q^2(x)\,\Lambda_{n+m}(W^T_{0,0};x).
$$
This shows that $[n^d\Lambda_n(W^T_{\kappa,\mu};x)]^{-1}=O(1)$ holds for every $x$ in the interior of $T^d$ when all the parameters are even integers. For general parameters, Proposition 9.1.4 can be used to finish the proof.

In fact, the estimate $\Lambda_n(W;x)\asymp n^{-d}$ holds for $W^B_{\kappa,\mu}$ and $W^T_{\kappa,\mu}$. Furthermore, the asymptotics of $\Lambda_n(W;x)$ are known in some cases; see the notes at the end of the chapter.
9.1.2 Cesàro means of the orthogonal expansion

As in classical Fourier analysis, the convergence of the partial-sum operator for the Fourier orthogonal expansion requires that the function is smooth. For functions that are merely continuous it is necessary to consider summability methods. A common method is the Cesàro (C, δ)-mean.

Definition 9.1.7 Let $\{c_n\}_{n=0}^\infty$ be a given sequence. For $\delta>0$, the Cesàro (C, δ)-means are defined by
$$
s^\delta_n = \sum_{k=0}^n \frac{(-n)_k}{(-n-\delta)_k}\,c_k.
$$
The sequence $\{c_n\}$ is (C, δ)-summable, by Cesàro's method of order $\delta$, to $s$ if $s^\delta_n$ converges to $s$ as $n\to\infty$.

It is a well-known fact that if $s_n$ converges to $s$ then $s^\delta_n$ converges to $s$. The reverse, however, is not true. For a sequence that does not converge, $s^\delta_n$ may still converge for some $\delta$. We note the following properties of $s^\delta_n$ (see, for example, Chapter III of Zygmund [1959]).

1. If $s_n$ is the $n$th partial sum of the series $\sum_{k=0}^\infty c_k$ then
$$
s^\delta_n = \frac{\delta}{\delta+n}\sum_{k=0}^n \frac{(-n)_k}{(1-\delta-n)_k}\,s_k.
$$
2. If $s_k=1$ for all $k$ then $s^\delta_n=1$ for all $n$.
3. If $s^\delta_n$ converges to $s$ then $s^{\delta'}_n$ converges to $s$ for all $\delta'>\delta$.
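A small numerical sketch of Definition 9.1.7 (added here; not part of the text), with the weights $(-n)_k/(-n-\delta)_k$ maintained as a running Pochhammer ratio: the divergent series $\sum(-1)^k$ is (C, 1)-summable to $\frac12$, while a convergent series keeps its sum.

```python
def cesaro_mean(c, delta):
    """(C, delta)-mean  s_n^delta = sum_k [(-n)_k / (-n - delta)_k] c_k  of Definition 9.1.7."""
    n = len(c) - 1
    s, w = 0.0, 1.0                       # w = (-n)_k / (-n - delta)_k at k = 0
    for k in range(n + 1):
        s += w * c[k]
        w *= (-n + k) / (-n - delta + k)  # Pochhammer ratio update, k -> k + 1
    return s

# the divergent series 1 - 1 + 1 - ... is (C,1)-summable to 1/2
terms = [(-1.0)**k for k in range(2001)]
assert abs(cesaro_mean(terms, 1.0) - 0.5) < 1e-3

# a convergent series keeps its sum: sum 2^{-k} = 2
geom = [0.5**k for k in range(60)]
assert abs(cesaro_mean(geom, 1.0) - 2.0) < 0.05
```

For (C, 1) these weights reduce to the familiar triangular averages $(n-k+1)/(n+1)$ of the terms.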
For a weight function $W$ and a function $f$, denote the $n$th partial sum of the Fourier orthogonal expansion of $f$ by $S_n(W;f)$, which is defined as in (3.6.3). Its Cesàro (C, δ)-means are denoted by $S^\delta_n(W;f)$ and are defined as the (C, δ)-means of the orthogonal expansion $\sum_{n=0}^\infty \mathbf{a}^{\mathsf{T}}_n(f)\mathbb{P}_n$. Since $S_n(W;1)=1$, property 2 shows that $S^\delta_n(W;1)=1$.

One important property of the partial-sum operator $S_n(W;f)$ is that it is a projection operator onto $\Pi^d_n$; that is, $S_n(W;p)=p$ for any polynomial $p$ of degree at most $n$. This property does not hold for the Cesàro means. However, even if $S_n(W;f)$ does not converge, the operator $\eta_n(W;f)=2S^1_{2n-1}(W;f)-S^1_{n-1}(W;f)$ may converge and, moreover, $\eta_n(p)=p$ holds for all $p\in\Pi^d_{n-1}$. In classical Fourier analysis the $\eta_n(f)$ are called the de la Vallée Poussin means. The following definition gives an analogue that uses Cesàro means of higher order.
Definition 9.1.8 For an integer $m\ge 1$, define
$$
\eta^m_n(W;f) = \frac{1}{n^m}\sum_{j=0}^m (2^jn)_m \prod_{i=0,\,i\ne j}^m \frac{1}{2^j-2^i}\; S^m_{2^jn-1}(W;f).
$$

Note that when $m=1$, $\eta^m_n$ is precisely the de la Vallée Poussin mean $\eta_n$. The main properties of the means $\eta^m_n$ are as follows.

Theorem 9.1.9 The means $\eta^m_n(W;f)$ satisfy the following properties: $\eta^m_n(W;p)=p$ for all $p\in\Pi^d_{n-1}$, and the $\eta^m_n(W;\cdot)$ are uniformly bounded in $n$ whenever the $S^m_n(W;\cdot)$ are uniformly bounded.
Proof The proof will show how the formula for $\eta^m_n(W;f)$ is determined. Consider
$$
\sum_{j=0}^m a_j S^m_{2^jn-1}(W;f) = \sum_{j=0}^m a_j\binom{2^jn+m-1}{m}^{-1}\sum_{k=0}^{2^jn-1}\binom{2^jn-k+m-2}{2^jn-1-k}\,S_k(W;f).
$$
We will choose the $a_j$ so that the terms involving $S_k(W;f)$ become 0 for $0\le k\le n-1$ and
$$
\eta^m_n(W;f) = \sum_{j=1}^m a'_j\sum_{k=n}^{2^jn-1}\binom{2^jn-k+m-2}{2^jn-1-k}\,S_k(W;f),
$$
where $a'_j=a_j\big/\binom{2^jn+m-1}{m}$; this will show that $\eta^m_n(W;p)=p$ for all $p\in\Pi^d_{n-1}$. That is, together with $\eta^m_n(W;1)=1$, we choose the $a_j$ such that
$$
\sum_{j=0}^m a'_j\binom{2^jn-k+m-2}{2^jn-1-k} = 0,\quad k=0,1,\ldots,n-1,\qquad\text{and}\qquad \sum_{j=0}^m a_j = 1.
$$
Since $m$ is an integer, it follows that
$$
\binom{2^jn-k+m-2}{2^jn-1-k} = \frac{(2^jn-k+m-2)\cdots(2^jn-k)}{(m-1)!} = \sum_{s=0}^{m-1} p_s(2^jn)\,k^s,
$$
where the $p_s(t)$ are polynomials of degree $m-1-s$. The leading term of $p_s(t)$ is a constant multiple of $t^{m-1-s}$. Similarly,
$$
\binom{2^jn+m-1}{m} = \frac{(2^jn)^m}{m!} + \text{lower-degree terms}.
$$
Hence it suffices to choose the $a'_j$ such that
$$
\sum_{j=0}^m a'_j\,2^{js} = 0,\quad s=0,1,\ldots,m-1,\qquad\text{and}\qquad \sum_{j=0}^m 2^{jm}a'_j = \frac{m!}{n^m}.
$$
To find such $a'_j$, consider the Lagrange interpolation polynomial $L_m$ of degree $m-1$ defined by
$$
L_m(f;t) = \sum_{j=0}^{m-1} f(2^j)\,\ell_j(t)\qquad\text{with}\qquad \ell_j(t) = \prod_{i=0,\,i\ne j}^{m-1}\frac{t-2^i}{2^j-2^i},
$$
which satisfies $L_m(f;2^j)=f(2^j)$ for $j=0,1,\ldots,m-1$, and
$$
f(t)-L_m(f;t) = \frac{f^{(m)}(\xi)}{m!}\prod_{i=0}^{m-1}(t-2^i),\qquad \xi\in(1,2^{m-1}).
$$
By the uniqueness of the interpolating polynomial, $L_m(f)=f$ whenever $f$ is a polynomial of degree at most $m-1$. Set $f(t)=t^s$ with $s=m,m-1,\ldots,0$ in the above formulae and let $t=2^m$; then
$$
2^{m^2} - \sum_{j=0}^{m-1} 2^{jm}\ell_j(2^m) = \prod_{i=0}^{m-1}\bigl(2^m-2^i\bigr)\qquad\text{and}\qquad 2^{sm}-\sum_{j=0}^{m-1}2^{sj}\ell_j(2^m) = 0.
$$
Hence the choice $a'_m = m!\big/\bigl[n^m\prod_{j=0}^{m-1}(2^m-2^j)\bigr]$ and $a'_j = -\ell_j(2^m)\,a'_m$ for $j=0,1,\ldots,m-1$ gives the desired result. The definition of $\ell_j$ leads to the equation $a'_j = m!\big/\bigl[n^m\prod_{i=0,\,i\ne j}^m(2^j-2^i)\bigr]$, which agrees with the definition of $\eta^m_n(W;f)$.

Since
$$
\frac{m!}{n^m}\binom{2^jn+m}{m} = \prod_{i=1}^m\Bigl(2^j+\frac{i}{n}\Bigr) \le (2^j+1)^m,
$$
it follows that
$$
\sum_{j=0}^m |a_j| \le \sum_{j=0}^m (2^j+1)^m\Big/\prod_{i=0,\,i\ne j}^m \bigl|2^j-2^i\bigr| = A_m,
$$
which is independent of $n$. Consequently, $|\eta^m_n(W;f,x)| \le A_m\sup_n |S^m_n(W;f,x)|$.
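The two linear conditions on the $a'_j$ derived in this proof can be confirmed in exact rational arithmetic (an added sketch, not from the text): with $a'_j = m!/[n^m\prod_{i\ne j}(2^j-2^i)]$, the sums $\sum_j a'_j 2^{js}$ vanish for $s=0,\ldots,m-1$ and equal $m!/n^m$ for $s=m$, which is the divided-difference identity behind the Lagrange construction.

```python
from fractions import Fraction
from math import factorial

def a_prime(m, n):
    """a'_j = m! / [n^m * prod_{i != j} (2^j - 2^i)],  j = 0, ..., m."""
    coeffs = []
    for j in range(m + 1):
        denom = Fraction(n)**m
        for i in range(m + 1):
            if i != j:
                denom *= (2**j - 2**i)
        coeffs.append(Fraction(factorial(m)) / denom)
    return coeffs

m, n = 3, 7
a = a_prime(m, n)
for s in range(m):                       # sum_j a'_j 2^{js} = 0 for s < m
    assert sum(aj * 2**(j * s) for j, aj in enumerate(a)) == 0
# the top power sum equals m!/n^m
assert sum(aj * 2**(j * m) for j, aj in enumerate(a)) == Fraction(factorial(m), n**m)
```

Using `Fraction` keeps the check exact, so the cancellations are verified without floating-point tolerance.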
Let the weight function $W$ be defined on the domain $\Omega$. For a continuous function $f$ on the domain $\Omega$, define the best approximation of $f$ from the space $\Pi^d_n$ of polynomials of degree at most $n$ by
$$
E_n(f)_\infty = \min_{P\in\Pi^d_n}\|f-P\|_\infty = \min_{P\in\Pi^d_n}\max_{x\in\Omega}|f(x)-P(x)|.
$$

Corollary 9.1.10 Suppose that the Cesàro means $S^m_n(W;f)$ converge uniformly to a continuous function $f$. Then
$$
\bigl|\eta^m_n(W;f,x)-f(x)\bigr| \le B_m E_n(f)_\infty,
$$
where $B_m$ is a constant independent of $n$.
Proof Since $S^m_n(W;f)$ converges uniformly, there is a constant $c_m$ independent of $n$ such that $\|S^m_n(W;f)\|_\infty \le c_m\|f\|_\infty$ for all $n$. Let $P$ be a polynomial of degree $n$ such that $E_n(f)_\infty = \|f-P\|_\infty$. Then, since $\eta^m_n(W;P)=P$, the stated result follows from the triangle inequality
$$
\|\eta^m_n(W;f)-f\|_\infty \le A_m\sup_n\|S^m_n(W;f-P)\|_\infty + \|f-P\|_\infty \le (A_mc_m+1)E_n(f)_\infty,
$$
where $A_m = \sum_{j=0}^m (2^j+1)^m\big/\prod_{i=0,\,i\ne j}^m |2^j-2^i|$ as in the proof of Theorem 9.1.9.
9.2 Orthogonal Expansion on the Sphere

Any $f\in L^2(h^2_\kappa\,d\omega)$ can be expanded into an h-harmonic series. Let $\{S_{\beta,n}\}$ be an orthonormal basis of $\mathcal{H}^d_n(h^2_\kappa)$. For $f$ in $L^2(h^2_\kappa\,d\omega)$, its Fourier expansion in terms of $\{S_{\beta,n}\}$ is given by
$$
f \sim \sum_{n=0}^\infty \sum_\beta a^n_\beta(f)\,S_{\beta,n},\qquad\text{where}\qquad a^n_\beta(f) = c'_h\int_{S^{d-1}} f\,S_{\beta,n}\,h^2_\kappa\,d\omega.
$$
For the ordinary harmonics this is usually called the Laplace series. The $n$th component of this expansion can be written, see (7.3.1), as an integral with the reproducing kernel $P_n(h^2_\kappa;x,y)$:
$$
S_n(h^2_\kappa;f,x) = c'_h\int_{S^{d-1}} f(y)\,P_n(h^2_\kappa;x,y)\,h^2_\kappa(y)\,d\omega.
$$
It should be emphasized that the $n$th component is independent of the particular choice of orthonormal basis. Many results on orthogonal expansions for weight functions whose support set has a nonempty interior, such as those in Section 9.1, hold also for h-harmonic expansions. For example, a standard $L^2$ argument shows that:

Proposition 9.2.1 If $f\in L^2(h^2_\kappa\,d\omega)$ then $S_n(h^2_\kappa;f)$ converges to $f$ in $L^2(h^2_\kappa\,d\omega)$.

For uniform convergence or $L^p$-convergence for $p\ne 2$ it is necessary to consider a summability method. It turns out that the formula for $P_n(h^2_\kappa;x,y)$ in Corollary 7.3.2 allows us to show that the summability of h-harmonics can be reduced to that of Gegenbauer polynomials.
In the following we denote by $L^p(h^2_\kappa\,d\omega)$ the space of Lebesgue-measurable functions $f$ defined on $S^{d-1}$ for which the norm
$$
\|f\|_{p,h} = \Bigl[c'_h\int_{S^{d-1}}|f(x)|^p\,h^2_\kappa\,d\omega\Bigr]^{1/p}
$$
is finite. When $p=\infty$, we take the space as $C(S^{d-1})$, the space of continuous functions with uniform norm $\|f\|_\infty$. The essential ingredient is the explicit formula for the reproducing kernel in Corollary 7.3.2. Denote by $w_\lambda$ the normalized weight function
$$
w_\lambda(t) = B\bigl(\lambda+\tfrac12,\tfrac12\bigr)^{-1}(1-t^2)^{\lambda-1/2},\qquad t\in[-1,1],
$$
whose orthogonal polynomials are the Gegenbauer polynomials (Subsection 1.4.3). A function $f\in L^2(w_\lambda;[-1,1])$ can be expressed as a Gegenbauer expansion:
$$
f \sim \sum_{k=0}^\infty b_k(f)\,\widetilde{C}^\lambda_k\qquad\text{with}\qquad b_k(f) := \int_{-1}^1 f(t)\,\widetilde{C}^\lambda_k(t)\,w_\lambda(t)\,dt,
$$
where $\widetilde{C}^\lambda_k$ denotes the orthonormal Gegenbauer polynomial. A summability method for the Gegenbauer expansion is a sum of the form $\sigma_n(f,w_\lambda;t)=\sum_{k=0}^n c_{k,n}b_k(f)\widetilde{C}^\lambda_k(t)$, where the $c_{k,n}$ are given constants such that $c_{k,n}\to 1$ as $n\to\infty$. Evidently the sum can be written as an integral:
$$
\sigma_n(f,w_\lambda;t) = \int_{-1}^1 p^\sigma_n(w_\lambda;s,t)\,f(s)\,w_\lambda(s)\,ds,\qquad\text{with}\qquad p^\sigma_n(w_\lambda;s,t) = \sum_{k=0}^n c_{k,n}\,\widetilde{C}^\lambda_k(s)\,\widetilde{C}^\lambda_k(t).
$$
Proposition 9.2.2 For $p=1$ and $p=\infty$, the operator norm is given by
$$
\sup_{\|f\|_p\le 1}\|\sigma_n(f,w_\lambda)\|_p = \max_{t\in[-1,1]}\int_{-1}^1 \bigl|p^\sigma_n(w_\lambda;s,t)\bigr|\,w_\lambda(s)\,ds,
$$
where $\|g\|$ is the $L^1(w_\lambda;[-1,1])$ norm of $g$.

Proof An elementary inequality shows that the left-hand side is no greater than the right-hand side. For $p=\infty$ and fixed $t$, the choice $f(s)=\operatorname{sign}[p^\sigma_n(s,t)]$ is then used to show that equality holds. For $p=1$, consider a sequence $s_n(f)$ defined by $s_n(f,t)=\int_{-1}^1 f(u)\,q_n(u,t)\,w_\lambda(u)\,du$ which converges uniformly to $f$ whenever $f$ is continuous on $[-1,1]$ (for example, the Cesàro means of sufficiently high order). Then choose $f_N(s)=q_N(s,t^*)$, where $t^*$ is a point for which the maximum on the right-hand side is attained, such that
$$
\sigma_n(f_N,w_\lambda;t) = \int_{-1}^1 q_N(s,t^*)\,p^\sigma_n(w_\lambda;s,t)\,w_\lambda(s)\,ds = s_N\bigl(p^\sigma_n(w_\lambda;\cdot,t);t^*\bigr),
$$
which converges to $p^\sigma_n(w_\lambda;t^*,t)$ as $N\to\infty$. The dominated convergence theorem shows that $\|\sigma_n(f_N,w_\lambda)\| \to \int_{-1}^1 |p^\sigma_n(w_\lambda;t,t^*)|\,w_\lambda(t)\,dt$ as $N\to\infty$, which gives the stated identity for $p=1$.
Theorem 9.2.3 Let $\lambda=\gamma_\kappa+\frac{d-2}{2}$. If $\sigma_n(\cdot,w_\lambda;t)=\sum_{k=0}^n c_{k,n}b_k(\cdot)\widetilde{C}^\lambda_k(t)$ defines a bounded operator on $L^p(w_\lambda;[-1,1])$ for $1\le p\le\infty$ then $\sigma^h_n(\cdot)=\sum_{k=0}^n c_{k,n}S_k(\cdot,h^2_\kappa)$ defines a bounded operator on $L^p(h^2_\kappa\,d\omega)$. More precisely, if
$$
\Bigl[\int_{-1}^1\bigl|\sigma_n(f,w_\lambda;t)\bigr|^p\,w_\lambda(t)\,dt\Bigr]^{1/p} \le C\Bigl[\int_{-1}^1|f(t)|^p\,w_\lambda(t)\,dt\Bigr]^{1/p}
$$
for $f\in L^p(w_\lambda;[-1,1])$, where $C$ is a constant independent of $f$ and $n$, then
$$
\Bigl[\int_{S^{d-1}}\bigl|\sigma^h_n(g;x)\bigr|^p\,h^2_\kappa(x)\,d\omega\Bigr]^{1/p} \le C\Bigl[\int_{S^{d-1}}|g(x)|^p\,h^2_\kappa(x)\,d\omega\Bigr]^{1/p}
$$
for $g\in L^p(h^2_\kappa\,d\omega)$ with the same constant. In particular, if $\sigma_n(f,w_\lambda)$ converges to $f$ in $L^p(w_\lambda;[-1,1])$ then the means $\sigma^h_n(g)$ converge to $g$ in $L^p(h^2_\kappa\,d\omega)$.
Proof Like $\sigma_n$, the means $\sigma^h_n$ can be written as integrals:
$$
\sigma^h_n(g;x) = c'_h\int_{S^{d-1}} P^\sigma_n(x,y)\,g(y)\,h^2_\kappa(y)\,d\omega
$$
with $P^\sigma_n(x,y)=\sum_{k=0}^n c_{k,n}P_k(h^2_\kappa;x,y)$. Since $\widetilde{C}^\lambda_k(t)\,\widetilde{C}^\lambda_k(1)=[(k+\lambda)/\lambda]\,C^\lambda_k(t)$, it follows from the explicit formula for $P_n(h^2_\kappa;x,y)$ given by Corollary 7.3.2 and (7.3.1) that
$$
P^\sigma_n(x,y) = V\bigl[p^\sigma_n(w_\lambda;\langle x,\cdot\rangle,1)\bigr](y).
$$
Since $V$ is positive,
$$
\bigl|V\bigl[p^\sigma_n(w_\lambda;\langle x,\cdot\rangle,1)\bigr](y)\bigr| \le V\bigl[\bigl|p^\sigma_n(w_\lambda;\langle x,\cdot\rangle,1)\bigr|\bigr](y).
$$
Consequently, for $p=\infty$ and $p=1$, applying Corollary 7.4.5 gives
$$
\|\sigma^h_n(g)\|_{p,h} \le \|g\|_{p,h}\,c'_h\int_{S^{d-1}} V\bigl[\bigl|p^\sigma_n(w_\lambda;\langle x,\cdot\rangle,1)\bigr|\bigr](y)\,h^2_\kappa(y)\,d\omega(y) = \|g\|_{p,h}\int_{-1}^1 \bigl|p^\sigma_n(w_\lambda;1,t)\bigr|\,w_\lambda(t)\,dt \le C\|g\|_{p,h}
$$
by Proposition 9.2.2. The case $1<p<\infty$ follows from the Riesz interpolation theorem; see Rudin [1991].

Note that the proof in fact shows that the convergence of the $\sigma^h_n$ means depends only on the convergence of $\sigma_n$ at the point $t=1$.
Let $S^\delta_n(h^2_\kappa;f)$ denote the $n$th Cesàro (C, δ) mean of the h-harmonic expansion. It can be written as an integral,
$$
S^\delta_n(h^2_\kappa;f,x) = c'_h\int_{S^{d-1}} f(y)\,P^\delta_n(h_\kappa;x,y)\,h^2_\kappa(y)\,d\omega,
$$
where $P^\delta_n(h_\kappa;x,y)$ is the Cesàro (C, δ)-mean of the sequence of reproducing kernels, as in Definition 9.1.7. As an immediate corollary of Theorem 9.2.3 we have:

Corollary 9.2.4 Let $f\in L^p(h^2_\kappa\,d\omega)$, $1\le p<\infty$, or $f\in C(S^{d-1})$. Then the h-harmonic expansion of $f$ with respect to $h^2_\kappa$ is (C, δ)-summable in $L^p(h^2_\kappa\,d\omega)$ or $C(S^{d-1})$ provided that $\delta>\gamma_\kappa+\frac{d-2}{2}$.

Proof The (C, δ)-means of the Gegenbauer expansion with respect to $w_\lambda$ converge if and only if $\delta>\lambda$; see Szegő [1975, p. 246, Theorem 9.1.3].

Corollary 9.2.5 The (C, δ)-means of the h-harmonic expansion with respect to $h^2_\kappa$ define a positive linear operator provided that $\delta\ge 2\gamma_\kappa+d-1$.

Proof According to an inequality due to Kogbetliantz [1924], see also Askey [1975, p. 71], the (C, δ)-kernel of the Gegenbauer expansion with respect to $w_\lambda$ is positive if $\delta\ge 2\lambda+1$.

Note that for $\kappa=0$ these results reduce to the classical results for ordinary harmonics. In that case the condition $\delta>\frac{d-2}{2}$ is the so-called critical index (see, for example, Stein and Weiss [1971]), and both corollaries are sharp.
We conclude this section with a result on the expansion of the function $V[f(\langle x,\cdot\rangle)]$ in h-harmonics. It comes as an application of the Funk–Hecke formula in Theorem 7.3.4. Let us denote by $s^\delta_n(w_\lambda)$ the Cesàro (C, δ)-means of the Gegenbauer expansion with respect to $w_\lambda$.

Proposition 9.2.6 Let $f$ be a continuous function on $[-1,1]$. Then
$$
S^\delta_n\bigl(h^2_\kappa;V[f(\langle x,\cdot\rangle)],y\bigr) = V\bigl[s^\delta_n\bigl(w_{\gamma_\kappa+(d-2)/2};f,\langle x,\cdot\rangle\bigr)\bigr](y).
$$

Proof Let $\lambda=\gamma_\kappa+\frac{d-2}{2}$. For each fixed $x\in S^{d-1}$, $P_n(h^2_\kappa;x,y)$ is an element of $\mathcal{H}^d_n(h^2_\kappa)$ and it follows from the Funk–Hecke formula that
$$
S_n\bigl(h^2_\kappa;V[f(\langle x,\cdot\rangle)],y\bigr) = c'_h\int_{S^{d-1}} V[f(\langle x,\cdot\rangle)](u)\,P_n(h^2_\kappa;y,u)\,h^2_\kappa(u)\,d\omega(u) = \lambda_n(f)\,\frac{n+\lambda}{\lambda}\,V\bigl[C^\lambda_n(\langle x,\cdot\rangle)\bigr](y)
$$
by Corollary 7.3.2. Hence, by the formula for $\lambda_n(f)$ in Theorem 7.3.4,
$$
S_n\bigl(h^2_\kappa;V[f(\langle x,\cdot\rangle)],y\bigr) = \int_{-1}^1 f(t)\,\widetilde{C}^\lambda_n(t)\,w_\lambda(t)\,dt\; V\bigl[\widetilde{C}^\lambda_n(\langle x,\cdot\rangle)\bigr](y).
$$
Taking the Cesàro means of this identity gives the stated formula.
9.3 Orthogonal Expansion on the Ball

For a weight function $W$ defined on a subset $\Omega$ of $\mathbb{R}^d$, denote by $L^p(W,\Omega)$ the weighted $L^p$ space with norm defined by
$$
\|f\|_{W,p} = \Bigl[\int_\Omega |f(x)|^p\,W(x)\,dx\Bigr]^{1/p}
$$
for $1\le p<\infty$. When $p=\infty$, take the space as $C(\Omega)$, the space of continuous functions on $\Omega$ with uniform norm.

Recall that $\mathcal{V}^d_n(W)$ denotes the space of orthogonal polynomials of degree $n$ with respect to $W$ and $P_n(W;x,y)$ denotes the reproducing kernel of $\mathcal{V}^d_n(W)$. Denote by $P_n(f;x)$ the $n$th component of the orthogonal expansion, which has $P_n(W;x,y)$ as its integral kernel. Let $H$ be an admissible function defined on $\mathbb{R}^{d+m+1}$, as in Definition 4.3.1. Denote the weighted $L^p$ space by $L^p(H,S^{d+m})$ to emphasize the dependence on $S^{d+m}$. The orthogonal polynomials with respect to the weight function $W^m_H$ defined in (4.3.1) are related to the orthogonal polynomials with respect to $H\,d\omega_{d+m}$ on $S^{d+m}$, as discussed in Section 4.3.

Theorem 9.3.1 Let $H$ be an admissible weight function and $W^m_H$ be defined as in (4.3.1). Let $p$ be fixed, $1\le p\le\infty$. If the means $\sigma_n(\cdot)=\sum_{k=0}^n c_{k,n}S_k(\cdot)$ define bounded operators on $L^p(H,S^{d+m})$ then the means $\sigma^B_n(\cdot)=\sum_{k=0}^n c_{k,n}P_k(\cdot)$ define bounded operators on $L^p(W^m_H,B^d)$. More precisely, if
$$
\Bigl[\int_{S^{d+m}}\bigl|\sigma_n(F;x)\bigr|^p\,H(x)\,d\omega\Bigr]^{1/p} \le C\Bigl[\int_{S^{d+m}}|F(x)|^p\,H(x)\,d\omega\Bigr]^{1/p}
$$
for $F\in L^p(H,S^{d+m})$, where $C$ is a constant independent of $F$ and $n$, then
$$
\Bigl[\int_{B^d}\bigl|\sigma^B_n(f;x)\bigr|^p\,W^m_H(x)\,dx\Bigr]^{1/p} \le C\Bigl[\int_{B^d}|f(x)|^p\,W^m_H(x)\,dx\Bigr]^{1/p}
$$
for $f\in L^p(W^m_H,B^d)$. In particular, if the means $\sigma_n(F)$ converge to $F$ in $L^p(H,S^{d+m})$ then the means $\sigma^B_n(f)$ converge to $f$ in $L^p(W^m_H,B^d)$.
Proof  For $f \in L^p(W_H^m,B^d)$, define a function $F$ on $S^{d+m}$ by $F(x) = f(x_1)$, where $x = (x_1,x_2) \in S^{d+m}$, $x_1 \in B^d$. By Lemma 4.3.2 it follows that
$$\int_{S^{d+m}} |F(x)|^p H(x)\,d\omega = C \int_{B^d} |f(x)|^p W_H^m(x)\,dx;$$
hence, $F \in L^p(H,S^{d+m})$. Using Theorem 4.3.5 and the notation $y = (y_1,y_2) \in S^{d+m}$ and $y_2 = |y_2|\,\xi$, $\xi \in S^m$, the sums $P_n(F)$ and $P_n(f)$ are seen to be related as follows:
$$\begin{aligned}
P_n(f;x_1) &= \int_{B^d} P_n(W_H^m;x_1,y_1)\, f(y_1)\, W_H^m(y_1)\,dy_1 \\
&= \int_{B^d} \Bigl[\int_{S^m} P_n\bigl(H;x,(y_1,\sqrt{1-|y_1|^2}\,\xi)\bigr)\, H\bigl(y_1,\sqrt{1-|y_1|^2}\,\xi\bigr)\,d\omega(\xi)\Bigr] f(y_1)\, W_H^m(y_1)\,dy_1 \\
&= \int_{S^{d+m}} P_n(H;x,y)\, F(y)\, H(y)\,d\omega(y) = P_n(F;x),
\end{aligned}$$
where we have used Lemma 4.3.2. Using the same lemma again,
$$\int_{B^d} |\sigma_n(f;x_1)|^p W_H^m(x_1)\,dx_1 = \int_{S^{d+m}} |\sigma_n(f;x_1)|^p H(x)\,d\omega(x) = \int_{S^{d+m}} |\sigma_n(F;x)|^p H(x)\,d\omega(x).$$
Hence, the boundedness of the last integral can be used to conclude that
$$\int_{B^d} |\sigma_n(f;x_1)|^p W_H^m(x_1)\,dx_1 \le C^p \int_{S^{d+m}} |F(x)|^p H(x)\,d\omega = C^p \int_{B^d} |f(x)|^p W_H^m(x)\,dx.$$
This completes the proof for $1 \le p < \infty$. In the case $p = \infty$ the norm becomes the uniform norm and the result follows readily from the fact that $P_n(f;x_1) = P_n(F;x)$. This completes the proof for $m > 0$. In the case $m = 0$, we use Theorem 4.2.8 instead of Theorem 4.3.5.
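The norm identity that drives the proof, lifting a function on the ball to the sphere, can be illustrated numerically in the simplest admissible case (our choice for illustration, not taken from the text): $H \equiv 1$ and $d = m = 1$, where the transfer formula of Lemma 4.3.2 reduces to the classical Archimedes projection $\int_{S^2} f(x_1)\,d\omega = 2\pi \int_{-1}^{1} f(t)\,dt$.

```python
import math

def sphere_integral(g, n=300):
    # integrate g(x_1) over S^2 with the parametrization
    # x_1 = cos(theta), d_omega = sin(theta) dtheta dphi (midpoint rule)
    total = 0.0
    for i in range(n):
        theta = (i + 0.5) * math.pi / n
        total += g(math.cos(theta)) * math.sin(theta)
    return total * (math.pi / n) * 2 * math.pi   # the phi integral gives 2*pi

def interval_integral(f, n=4000):
    # midpoint rule on [-1, 1]
    h = 2.0 / n
    return sum(f(-1.0 + (i + 0.5) * h) for i in range(n)) * h

f = lambda t: t * t
lhs = sphere_integral(f)                   # integral of F(x) = f(x_1) over the sphere
rhs = 2 * math.pi * interval_integral(f)   # C * integral of f over B^1, here C = 2*pi
print(lhs, rhs)                            # both are approximately 4*pi/3
```

The same computation with any other continuous $f$ produces matching values, which is exactly the equality of $L^p$ norms used in the proof (for $p = 1$ and nonnegative $f$).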
Theorem 9.3.1 shows that, in order to study the summability of the Fourier orthogonal expansion on the unit ball, we need only study the summability on the unit sphere. In particular, in the limiting case $m = 0$ we can use Theorem 9.2.3 with $d$ replaced by $d+1$ to obtain the following result for the reflection-invariant weight function $W^B_{\kappa,\mu}$ defined in Definition 8.1.1.
Theorem 9.3.2  Let $\lambda = \gamma_\kappa + \mu + \frac{d-1}{2}$. If the means $\sigma_n(\cdot) = \sum_{k=0}^n c_{k,n}\, b_k(\lambda)\, C_k^{\lambda}(\cdot)$ define bounded operators on $L^p(w_\lambda,[-1,1])$, $1 \le p \le \infty$, then the means $\sigma_n(\cdot) = \sum_{k=0}^n c_{k,n} P_k(\cdot)$ define bounded operators on $L^p(W^B_{\kappa,\mu},B^d)$. More precisely, if
$$\Bigl(\int_{-1}^1 |\sigma_n(f;t)|^p w_\lambda(t)\,dt\Bigr)^{1/p} \le C \Bigl(\int_{-1}^1 |f(t)|^p w_\lambda(t)\,dt\Bigr)^{1/p}$$
for $f \in L^p(w_\lambda,[-1,1])$, where $C$ is a constant independent of $f$ and $n$, then
$$\Bigl(\int_{B^d} |\sigma_n(g;x)|^p W^B_{\kappa,\mu}(x)\,dx\Bigr)^{1/p} \le C \Bigl(\int_{B^d} |g(x)|^p W^B_{\kappa,\mu}(x)\,dx\Bigr)^{1/p}$$
for $g \in L^p(W^B_{\kappa,\mu},B^d)$ with the same constant. In particular, if the means $\sigma_n(f)$ converge to $f$ in $L^p(w_\lambda,[-1,1])$, then the means $\sigma_n(g)$ converge to $g$ in $L^p(W^B_{\kappa,\mu},B^d)$.
For $\delta > 0$, let $S_n^\delta(f; W^B_{\kappa,\mu})$ denote the Cesàro $(C,\delta)$ means of the orthogonal expansion with respect to $W^B_{\kappa,\mu}$. As a corollary of the above theorem and the results for Gegenbauer expansions, we can state analogues of Corollaries 9.2.4 and 9.2.5.
Corollary 9.3.3  Let $W^B_{\kappa,\mu}$ be as in Definition 8.1.1 with $\mu \ge 0$ and let $\lambda_{\kappa,\mu} := \gamma_\kappa + \mu + \frac{d-1}{2}$. Let $f \in L^p(W^B_{\kappa,\mu},B^d)$. Then the orthogonal expansion of $f$ with respect to $W^B_{\kappa,\mu}$ is $(C,\delta)$-summable in $L^p(W^B_{\kappa,\mu},B^d)$, $1 \le p \le \infty$, provided that $\delta > \lambda_{\kappa,\mu}$. Moreover, the $(C,\delta)$-means define a positive linear operator if $\delta \ge 2\lambda_{\kappa,\mu} + 1$.
For the classical weight function $W^B_\mu(x) = (1-\|x\|^2)^{\mu-1/2}$ ($\kappa = 0$), the condition for $(C,\delta)$-summability in Corollary 9.3.3 is sharp.
Theorem 9.3.4  The orthogonal expansion of every continuous function $f$ with respect to $W^B_\mu$, with $\mu \ge 0$, is uniformly $(C,\delta)$-summable on $B^d$ if and only if $\delta > \mu + \frac{d-1}{2}$.
Proof  We only need to prove the necessary condition. Let $\|x\| = 1$. Let $K_n^\delta(W^B_\mu)$ denote the $(C,\delta)$-means of the reproducing kernel of the orthogonal expansion. We need to show that
$$I_n^\delta(x) = \int_{B^d} \bigl|K_n^\delta(W^B_\mu;x,y)\bigr| (1-\|y\|^2)^{\mu-1/2}\,dy$$
is unbounded when $\delta = \mu + \frac{d-1}{2}$. By Theorem 5.2.8, $P_n(W^B_\mu;x,y) = \widetilde C_n^{\mu+(d-1)/2}(1)\, \widetilde C_n^{\mu+(d-1)/2}(\langle x,y\rangle)$, where we use the normalized Gegenbauer polynomials. Therefore $K_n^\delta(W^B_\mu;x,y) = K_n^\delta(w_{\mu+(d-1)/2};1,\langle x,y\rangle)$, where $K_n^\delta(w_\lambda)$ denotes the $(C,\delta)$ means of the Gegenbauer expansion with respect to $w_\lambda$. Using polar coordinates and Lemma 7.4.4,
$$I_n^\delta = \sigma_{d-2} \int_0^1 r^{d-1} \int_{-1}^1 \bigl|K_n^\delta(w_{\mu+(d-1)/2};rs,1)\bigr| (1-s^2)^{(d-3)/2}\,ds\,(1-r^2)^{\mu-1/2}\,dr.$$
Making the change of variables $s \mapsto t/r$ and exchanging the order of integration, use of a beta integral shows that
$$\begin{aligned}
I_n^\delta &= \sigma_{d-2} \int_{-1}^1 \bigl|K_n^\delta(w_{\mu+(d-1)/2};t,1)\bigr| \int_{|t|}^1 (r^2-t^2)^{(d-3)/2}\, r\, (1-r^2)^{\mu-1/2}\,dr\,dt \\
&= \sigma_{d-2}\, A_\mu \int_{-1}^1 \bigl|K_n^\delta(w_{\mu+(d-1)/2};t,1)\bigr| (1-t^2)^{\mu+(d-2)/2}\,dt,
\end{aligned}$$
where $A_\mu$ is a constant. Therefore, the $(C,\delta)$-summability of the orthogonal expansion with respect to $W^B_\mu$ on the boundary of $B^d$ is equivalent to the $(C,\delta)$-summability of the Gegenbauer expansion with index $\mu + \frac{d-1}{2}$ at the point $x = 1$. The desired result follows from Szegő [1975, Theorem 9.1.3, p. 246], where the result is stated for the Jacobi expansion; a shift of $\frac12$ on the index is necessary for the Gegenbauer expansion.
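The beta-integral step above can be checked numerically. The closed form $A_\mu = \frac12 B\bigl(\frac{d-1}{2},\mu+\frac12\bigr)$ used below is our own evaluation of the constant (an assumption for this illustration), obtained from the substitution $r^2 = t^2 + (1-t^2)v$; the point of the check is that the inner integral divided by $(1-t^2)^{\mu+(d-2)/2}$ is the same constant for every $t$.

```python
import math

def beta_fn(a, b):
    return math.gamma(a) * math.gamma(b) / math.gamma(a + b)

def inner_integral(t, d, mu, n=50000):
    # midpoint rule for the inner integral
    #   \int_{|t|}^1 (r^2 - t^2)^{(d-3)/2} r (1 - r^2)^{mu - 1/2} dr
    a = abs(t)
    h = (1.0 - a) / n
    s = 0.0
    for i in range(n):
        r = a + (i + 0.5) * h
        s += (r * r - t * t) ** ((d - 3) / 2) * r * (1 - r * r) ** (mu - 0.5)
    return s * h

d, mu = 4, 1.5
A = 0.5 * beta_fn((d - 1) / 2, mu + 0.5)   # our evaluation of A_mu
ratios = [inner_integral(t, d, mu) / (1 - t * t) ** (mu + (d - 2) / 2)
          for t in (0.2, 0.5, 0.8)]
print(ratios, A)   # every ratio equals the same constant A_mu
```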
The proof in fact shows that the maximum of the $(C,\delta)$-means with respect to $W^B_\mu$ is attained on the boundary of the unit ball.

Since the weight function on the sphere requires all parameters to be nonnegative, we need to assume $\mu \ge 0$ in the above results. The necessary part of the last theorem, however, holds for all $\mu > -\frac12$.
For the weight function $W^B_{\kappa,\mu}$ defined in (8.1.5), further results about orthogonal expansions can be derived. Let $S_n^\delta(W^B_{\kappa,\mu};f)$ denote the Cesàro $(C,\delta)$-means of the orthogonal expansion with respect to $W^B_{\kappa,\mu}$ and let $P_n(W^B_{\kappa,\mu};f)$ denote the $n$th component of the expansion as in (3.6.2). Recall that $s_n^\delta(w;f)$ denotes the $n$th $(C,\delta)$-mean of the orthogonal expansion with respect to $w$. Let $w_{a,b}(t) = |t|^{2a}(1-t^2)^{b-1/2}$ denote the generalized Gegenbauer weight function.
Proposition 9.3.5  Let $W^B_{\kappa,\mu}(x) = \prod_{i=1}^d |x_i|^{2\kappa_i}\, (1-\|x\|^2)^{\mu-1/2}$. Let $f_0$ be an even function defined on $[-1,1]$ and $f(x) = f_0(\|x\|)$; then
$$S_n^\delta(W^B_{\kappa,\mu};f,x) = s_n^\delta\bigl(w_{|\kappa|+(d-1)/2,\,\mu};\, f_0, \|x\|\bigr).$$
Proof  The formula for the intertwining operator $V$ in Theorem 7.5.4 shows that, for $y = sy'$ with $|y'| = 1$, $V[g(\langle x,\cdot\rangle)](y) = V[g(s\langle x,\cdot\rangle)](y')$. Hence, using polar coordinates, the formulae for the reproducing kernel in Theorem 8.1.10 and Corollary 7.4.5, we obtain, with $\lambda = |\kappa| + \frac{d-1}{2}$,
$$\begin{aligned}
P_n(W^B_{\kappa,\mu};f,x) ={}& w^B_{\kappa,\mu}\, c_\mu\, \frac{n+\lambda+\mu}{\lambda+\mu} \int_0^1 f_0(s)\, s^{d-1} (1-s^2)^{\mu-1/2} \\
&\times \int_{S^{d-1}} V\Bigl[\int_{-1}^1 C_n^{\lambda+\mu}\bigl(s\langle x,\cdot\rangle + \sqrt{1-\|x\|^2}\sqrt{1-s^2}\,t\bigr)(1-t^2)^{\mu-1}\,dt\Bigr](y')\, h_\kappa^2(y')\,d\omega(y')\,ds \\
={}& c_\mu\, w^B_{\kappa,\mu}\, \frac{n+\lambda+\mu}{\lambda+\mu}\, B_\kappa \int_0^1 f_0(s)\, s^{d-1} (1-s^2)^{\mu-1/2} \\
&\times \int_{-1}^1 \int_{-1}^1 C_n^{\lambda+\mu}\bigl(s\|x\|u + \sqrt{1-s^2}\sqrt{1-\|x\|^2}\,t\bigr)(1-t^2)^{\mu-1}\,dt\,(1-u^2)^{\lambda-1}\,du\,ds.
\end{aligned}$$
Upon using Corollary 7.5.6 we conclude that
$$\begin{aligned}
P_n(W^B_{\kappa,\mu};f,x) &= c\, \widetilde C_n^{(\lambda,\mu)}(\|x\|) \int_0^1 \bigl[\widetilde C_n^{(\lambda,\mu)}(s) + \widetilde C_n^{(\lambda,\mu)}(-s)\bigr] f_0(|s|)\, s^{d-1} (1-s^2)^{\mu-1/2}\,ds \\
&= c\, \widetilde C_n^{(\lambda,\mu)}(\|x\|) \int_{-1}^1 \widetilde C_n^{(\lambda,\mu)}(s)\, f_0(s)\, |s|^{d-1} (1-s^2)^{\mu-1/2}\,ds,
\end{aligned}$$
where the $\widetilde C_n^{(\lambda,\mu)}(t)$ are the normalized generalized Gegenbauer polynomials and $c$ is the normalization constant of $w_{\lambda,\mu}$, as can be seen by setting $n = 0$. Taking Cesàro $(C,\delta)$-means proves the stated result.
Proposition 9.3.5 states that the partial sum of a radial function is also a radial function. It holds, in particular, for the classical weight function $W^B_\mu$. For $W^B_\mu$ there is another class of functions that are preserved in such a manner. A function $f$ defined on $\mathbb R^d$ is called a ridge function if $f(x) = f_0(\langle x,y\rangle)$ for some $f_0 \colon \mathbb R \to \mathbb R$ and $y \in \mathbb R^d$. The next proposition states that the partial sum of a ridge function is also a ridge function.
Proposition 9.3.6  Let $f \colon \mathbb R \to \mathbb R$. For $\|y\| = 1$ and $n \ge 0$,
$$S_n^\delta\bigl(W^B_\mu;\, f(\langle x,\cdot\rangle),\, y\bigr) = s_n^\delta\bigl(w_{\mu+(d-1)/2};\, f,\, \langle x,y\rangle\bigr).$$
In fact, using the Funk-Hecke formula in Theorem 8.1.17 and following the proof of Proposition 9.2.6, such a result can be established, for functions of the form $V^B f(\langle x,\cdot\rangle)$, for the weight function $W^B_{\kappa,\mu}$ in Proposition 9.3.5. The most interesting case is the result for the classical weight function. The proof is left to the reader.
9.4 Orthogonal Expansion on the Simplex

Although the structures of the orthogonal polynomials and the reproducing kernel on $T^d$ follow from those on $B^d$, the summability of orthogonal expansions on $T^d$ does not follow in general as a consequence of the summability on $B^d$. To state the result, let us denote by $w^{(a,b)}(t) = (1-t)^a(1+t)^b$ the Jacobi weight function and by $p_n^{(a,b)}$ the orthonormal Jacobi polynomials with respect to $w^{(a,b)}$. In the following theorem, let $P_n(f)$ denote the $n$th component of the orthogonal expansion with respect to $W^T_{\kappa,\mu}$.
Theorem 9.4.1  Let $\lambda = |\kappa| + \mu + \frac{d-2}{2}$. If $\sigma_n(\cdot) = \sum_{k=0}^n c_{k,n}\, b_k(\lambda)\, p_k^{(\lambda,-1/2)}$ defines bounded operators on $L^p(w^{(\lambda,-1/2)},[-1,1])$, $1 \le p \le \infty$, then the means $\sigma_n(\cdot) = \sum_{k=0}^n c_{k,n} P_k(\cdot)$ define bounded operators on $L^p(W^T_{\kappa,\mu},T^d)$. More precisely, if
$$\Bigl(\int_{-1}^1 |\sigma_n(f;t)|^p w^{(\lambda,-1/2)}(t)\,dt\Bigr)^{1/p} \le C \Bigl(\int_{-1}^1 |f(t)|^p w^{(\lambda,-1/2)}(t)\,dt\Bigr)^{1/p}$$
for $f \in L^p(w^{(\lambda,-1/2)},[-1,1])$, where $C$ is a constant independent of $f$ and $n$, then
$$\Bigl(\int_{T^d} |\sigma_n(g;x)|^p W^T_{\kappa,\mu}(x)\,dx\Bigr)^{1/p} \le C \Bigl(\int_{T^d} |g(x)|^p W^T_{\kappa,\mu}(x)\,dx\Bigr)^{1/p}$$
for $g \in L^p(W^T_{\kappa,\mu},T^d)$ with the same constant. In particular, if the means $\sigma_n(f)$ converge to $f$ in $L^p(w^{(\lambda,-1/2)},[-1,1])$ then the means $\sigma_n(g)$ converge to $g$ in $L^p(W^T_{\kappa,\mu},T^d)$.
Proof  We follow the proof of Theorem 9.2.3. The means $\sigma_n$ can be written as integrals,
$$\sigma_n(g;x) = w^T_{\kappa,\mu} \int_{T^d} P_n^\delta(x,y)\, g(y)\, W^T_{\kappa,\mu}(y)\,dy,$$
with $P_n^\delta(x,y) = \sum_{k=0}^n c_{k,n} P_k(W^T_{\kappa,\mu};x,y)$. Likewise, write $\sigma_n(f)$ as an integral with kernel $p_n^\delta(w^{(\lambda,-1/2)};s,t) = \sum_{k=0}^n c_{k,n}\, p_k^{(\lambda,-1/2)}(s)\, p_k^{(\lambda,-1/2)}(t)$. Thus, following the proof of Theorem 9.2.3, the essential part is to show that
$$I_n(\delta;x) = \int_{T^d} \bigl|P_n^\delta(x,y)\bigr|\, W^T_{\kappa,\mu}(y)\,dy = \int_{S^d} \bigl|P_n^\delta(x,y^2)\bigr|\, h^2_{\kappa,\mu}(y')\,d\omega(y')$$
with $y' = (y,y_{d+1})$ is uniformly bounded for $x \in T^d$, where the second equality follows from Lemmas 4.4.1 and 4.2.3. Let $V$ be the intertwining operator associated with $h_{\kappa,\mu}$ and the reflection group $W = W_0 \times \mathbb Z_2$. From the proof of Theorem 8.1.10, we see that the formula for the reproducing kernel $P_n(W^T_{\kappa,\mu})$ in Theorem 8.2.7 can be written as
$$P_n(W^T_{\kappa,\mu};x,y) = \frac{2n+\lambda+\frac12}{\lambda+\frac12}\, V\bigl[C_{2n}^{\lambda+1/2}\bigl(\langle x'^{1/2},\cdot\rangle\bigr)\bigr]\bigl(y'^{1/2}\bigr),$$
where $x' = (x,x_{d+1})$ and $|x'| = 1$. Using the relation between the Jacobi and Gegenbauer polynomials (Definition 1.5.5 with $\mu = 0$), we have
$$\frac{2n+\lambda+\frac12}{\lambda+\frac12}\, C_{2n}^{\lambda+1/2}(t) = p_n^{(\lambda,-1/2)}(1)\, p_n^{(\lambda,-1/2)}(2t^2-1).$$
Hence the kernel can be written in terms of Jacobi polynomials as
$$P_n^\delta(x,y^2) = 2^{-d} \sum_{\varepsilon\in\mathbb Z_2^d} V\bigl[p_n^\delta\bigl(w^{(\lambda,-1/2)};\, 1,\, 2\langle\cdot, z(\varepsilon)\rangle^2 - 1\bigr)\bigr](y'),$$
where $z(\varepsilon) = x'^{1/2}\varepsilon$. We observe that $|z(\varepsilon)|^2 = |x'| = 1$. Hence, since $V$ is a positive operator, we can use Corollary 7.4.5 to conclude that
$$\begin{aligned}
I_n(\delta;x) &\le 2^{-d} \sum_{\varepsilon\in\mathbb Z_2^d} \int_{S^d} V\bigl[\bigl|p_n^\delta\bigl(w^{(\lambda,-1/2)};1,2\langle\cdot,z(\varepsilon)\rangle^2-1\bigr)\bigr|\bigr](y')\, h_{\kappa,\mu}^2(y')\,d\omega \\
&= B_\lambda \int_{-1}^1 \bigl|p_n^\delta\bigl(w^{(\lambda,-1/2)};1,2t^2-1\bigr)\bigr|\, (1-t^2)^{\lambda}\,dt \\
&= \int_{-1}^1 \bigl|p_n^\delta\bigl(w^{(\lambda,-1/2)};1,u\bigr)\bigr|\, w^{(\lambda,-1/2)}(u)\,du,
\end{aligned}$$
where the last step follows from a change of variables, and the constant in front of the integral becomes 1 (on setting $n = 0$). This completes the proof.
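The quadratic change of variables $u = 2t^2-1$ in the last step can be confirmed numerically for the unnormalized weights; the constant $2^{-\lambda-1/2}$ below is our computed normalization for this illustration (the text's normalized kernels make the constant 1), and the second integral is evaluated after the regularizing substitution $u = v^2-1$, which removes the $(1+u)^{-1/2}$ singularity.

```python
import math

def midpoint(f, a, b, n=20000):
    h = (b - a) / n
    return sum(f(a + (i + 0.5) * h) for i in range(n)) * h

lam = 1.0
g = lambda u: u * u + 0.5 * u   # an arbitrary test function

# left-hand side: \int_{-1}^1 g(2t^2-1) (1-t^2)^lam dt
lhs = midpoint(lambda t: g(2 * t * t - 1) * (1 - t * t) ** lam, -1.0, 1.0)

# right-hand side: 2^{-lam-1/2} \int_{-1}^1 g(u)(1-u)^lam (1+u)^{-1/2} du,
# computed via u = v^2 - 1, so that (1+u)^{-1/2} du = 2 dv
rhs = 2 ** (-lam - 0.5) * midpoint(
    lambda v: g(v * v - 1) * (2 - v * v) ** lam * 2, 0.0, math.sqrt(2))

print(lhs, rhs)   # the two values agree
```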
As a corollary of the above theorem and the results for the Jacobi expansion,
we can state an analogue of Corollary 9.3.3.
Corollary 9.4.2  Let $W^T_{\kappa,\mu}$ be as in Definition 8.2.1 with $\kappa_i \ge 0$, $\mu \ge 0$ and $f \in L^p(W^T_{\kappa,\mu},T^d)$. Then the orthogonal expansion of $f$ with respect to $W^T_{\kappa,\mu}$ is $(C,\delta)$-summable in $L^p(W^T_{\kappa,\mu},T^d)$, $1 \le p \le \infty$, provided that $\delta > |\kappa| + \mu + \frac{d-1}{2}$. Moreover, the $(C,\delta)$-means are positive linear operators if $\delta \ge 2|\kappa| + 2\mu + d$.
In particular, these results apply to the classical orthogonal polynomials on $T^d$. In this case, it should be pointed out that the condition $\delta > |\kappa| + \mu + \frac{d-1}{2}$ is sharp only when at least one $\kappa_i = 0$. For sharp results and further discussion, see the notes at the end of the chapter.
For $\delta > 0$, let $S_n^\delta(W^T_{\kappa,\mu};f)$ denote the Cesàro $(C,\delta)$-means of the orthogonal expansion with respect to $W^T_{\kappa,\mu}$. Let us also point out the following analogue of Proposition 9.3.5.
Proposition 9.4.3  Let $w^{(a,b)}(t) = (1-t)^a(1+t)^b$ be the Jacobi weight function. Let $f(x) = f_0(2|x|-1)$. Then
$$S_n^\delta(W^T_{\kappa,\mu};f,x) = s_n^\delta\bigl(w^{(|\kappa|+(d-2)/2,\,\mu-1/2)};\, f_0,\, 2|x|-1\bigr).$$
Proof  Let $P_\alpha^{n,m}$ denote the orthonormal basis in Proposition 8.2.5. By definition,
$$P_n(W^T_{\kappa,\mu};x,y) = \sum_{m=0}^n \sum_{|\alpha|=m} P_\alpha^{n,m}(x)\, P_\alpha^{n,m}(y).$$
Since $P_\alpha^{n,m}(x) = b_{m,n}\, p_{n-m}^{(|\kappa|+2m+(d-2)/2,\,\mu-1/2)}(2s-1)\, R_\alpha^m(x)$ and $R_\alpha^m$ is homogeneous in the homogeneous coordinates of $T^d$, using the $\ell^1$-radial coordinates for the simplex, $x = su$, we obtain
$$\begin{aligned}
& w^T_{\kappa,\mu} \int_{T^d} f(y)\, P_n(W^T_{\kappa,\mu};x,y)\, W^T_{\kappa,\mu}(y)\,dy \\
&\quad= w^T_{\kappa,\mu} \int_0^1 f_0(2s-1)\, s^{|\kappa|+(d-2)/2} (1-s)^{\mu-1/2} \int_{|u|=1} P_n(W^T_{\kappa,\mu};x,su)\, h_\kappa^2\bigl(\sqrt{u_1},\ldots,\sqrt{u_d}\,\bigr)\, \frac{du}{\sqrt{u_1\cdots u_d}}\,ds \\
&\quad= c \int_0^1 f_0(2s-1)\, p_n^{(|\kappa|+(d-2)/2,\,\mu-1/2)}(2t-1)\, p_n^{(|\kappa|+(d-2)/2,\,\mu-1/2)}(2s-1)\, s^{|\kappa|+(d-2)/2}(1-s)^{\mu-1/2}\,ds,
\end{aligned}$$
where $t = |x|$ and $c = B\bigl(|\kappa|+\frac d2,\, \mu+\frac12\bigr)^{-1}$.
9.5 Orthogonal Expansion of Laguerre and Hermite Polynomials

For the multiple Laguerre polynomials with respect to the weight function $W^L_\kappa(x) = x^\kappa e^{-|x|}$, with $x^\kappa = \prod_{i=1}^d |x_i|^{\kappa_i}$, we start with
Theorem 9.5.1  If $f$ is continuous at the origin and satisfies
$$\int_{x\in\mathbb R_+^d,\,|x|\ge 1} |f(x)|\, x^\kappa\, |x|^{-1/2}\, e^{-|x|/2}\,dx < \infty$$
then the Cesàro $(C,\delta)$-means of the multiple Laguerre expansion of $f$ converge at the origin if and only if $\delta > |\kappa| + d - \frac12$.
Proof  The generating function of the Laguerre polynomials (Subsection 1.4.2) can be written as
$$\sum_{n=0}^\infty \widetilde L_n^{\kappa_i}(0)\, \widetilde L_n^{\kappa_i}(x)\, r^n = (1-r)^{-\kappa_i-1}\, e^{-xr/(1-r)}, \qquad |r| < 1.$$
Multiplying these formulae together gives a generating function for $P_n(W^L_\kappa;x,0)$:
$$\sum_{n=0}^\infty P_n(W^L_\kappa;x,0)\, r^n = \sum_{n=0}^\infty \sum_{|k|=n} \widetilde L_k^\kappa(0)\, \widetilde L_k^\kappa(x)\, r^n = (1-r)^{-|\kappa|-d}\, e^{-|x|r/(1-r)},$$
which implies that $P_n(W^L_\kappa;x,0) = L_n^{|\kappa|+d-1}(|x|)$. Multiplying the last expression by the power series of $(1-r)^{-\delta-1}$ shows that
$$P_n^\delta(W^L_\kappa;x,0) = \frac{n!}{(\delta+1)_n}\, L_n^{|\kappa|+\delta+d}(|x|).$$
Therefore, we conclude that
$$S_n^\delta(W^L_\kappa;f,0) = \frac{n!}{(\delta+1)_n}\, w^L_\kappa \int_{\mathbb R_+^d} f(x)\, L_n^{|\kappa|+d+\delta}(|x|)\, W^L_\kappa(x)\,dx.$$
Using $\ell^1$-radial coordinates as in (8.4.3), it follows that
$$S_n^\delta(W^L_\kappa;f,0) = w^L_\kappa\, \frac{n!}{(\delta+1)_n} \int_0^\infty \Bigl(\int_{|y|=1} f(ry)\, y^\kappa\,dy\Bigr) L_n^{|\kappa|+d+\delta}(r)\, r^{|\kappa|+d-1}\, e^{-r}\,dr.$$
The right-hand side is the $(C,\delta)$-mean of the Laguerre expansion of a one-variable function. Indeed, define $F(r) = c_d \int_{|y|=1} f(ry)\, y^\kappa\,dy$, where the constant $c_d = \Gamma(|\kappa|+d)\big/\prod_{i=1}^d \Gamma(\kappa_i+1)$; then the above equation can be written as
$$S_n^\delta(W^L_\kappa;f,0) = s_n^\delta\bigl(w_{|\kappa|+d-1};\, F,\, 0\bigr),$$
where $w_a(t) = t^a e^{-t}$ is the usual Laguerre weight function for one variable. Note that $c_d = 1\big/\int_{|y|=1} y^\kappa\,dy$, so that $F$ is just the average of $f$ over the simplex $\{y : |y| = 1\}$. This allows us to use the summability theorems for the Laguerre expansion of one variable. The desired result follows from Szegő [1975, p. 247, Theorem 9.1.7]. The condition of that theorem is verified as follows:
$$\begin{aligned}
\int_1^\infty |F(r)|\, r^{|\kappa|+d-3/2}\, e^{-r/2}\,dr &\le c_d \int_1^\infty \int_{|y|=1} |f(ry)\, y^\kappa|\,dy\; r^{|\kappa|+d-3/2}\, e^{-r/2}\,dr \\
&\le c_d \int_{x\in\mathbb R_+^d,\,|x|\ge 1} |f(x)\, x^\kappa|\, |x|^{-1/2}\, e^{-|x|/2}\,dx,
\end{aligned}$$
which is bounded under the given condition; it is evident that $F$ is continuous at $r = 0$ if $f$ is continuous at the origin.
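The one-variable generating function used at the start of the proof is easy to test numerically via the three-term recurrence $(n+1)L_{n+1}^\alpha = (2n+1+\alpha-x)L_n^\alpha - (n+\alpha)L_{n-1}^\alpha$ (a standard recurrence, quoted here as an assumption rather than from the text):

```python
import math

def laguerre_list(nmax, alpha, x):
    # L_0^alpha(x), ..., L_nmax^alpha(x) via the three-term recurrence
    L = [1.0, 1.0 + alpha - x]
    for n in range(1, nmax):
        L.append(((2 * n + 1 + alpha - x) * L[n] - (n + alpha) * L[n - 1]) / (n + 1))
    return L

alpha, x, r = 1.5, 0.7, 0.3
L = laguerre_list(100, alpha, x)
series = sum(L[n] * r ** n for n in range(101))       # partial sum of the series
closed = (1 - r) ** (-alpha - 1) * math.exp(-x * r / (1 - r))
print(series, closed)   # the partial sum matches the closed form
```

The multivariable statement in the proof is just the product of $d$ copies of this identity, one for each $\kappa_i$, which is why the exponent becomes $-|\kappa|-d$.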
The result can be extended to $\mathbb R_+^d$ by using a convolution structure of the Laguerre expansion on $\mathbb R_+$. The convolution is motivated by the product formula of the Laguerre polynomials,
$$L_n^\alpha(x)\, L_n^\alpha(y) = \frac{\Gamma(n+\alpha+1)\, 2^\alpha}{\Gamma(n+1)\sqrt{2\pi}} \int_0^\pi L_n^\alpha\bigl(x+y+2\sqrt{xy}\cos\theta\bigr)\, e^{-\sqrt{xy}\cos\theta}\, j_{\alpha-1/2}\bigl(\sqrt{xy}\sin\theta\bigr)\, \sin^{2\alpha}\theta\,d\theta,$$
where $j_\alpha$ is the Bessel function of fractional order (see, for example, Abramowitz and Stegun [1970]). For a function $f$ on $\mathbb R_+$ define the Laguerre translation operator $T_x^\alpha f$ by
$$T_x^\alpha f(y) = \frac{\Gamma(\alpha+1)\, 2^\alpha}{\sqrt{2\pi}} \int_0^\pi f\bigl(x+y+2\sqrt{xy}\cos\theta\bigr)\, e^{-\sqrt{xy}\cos\theta}\, j_{\alpha-1/2}\bigl(\sqrt{xy}\sin\theta\bigr)\, \sin^{2\alpha}\theta\,d\theta.$$
The product formula implies that $L_n^\alpha(x)\, L_n^\alpha(y) = L_n^\alpha(0)\, T_x^\alpha L_n^\alpha(y)$. The convolution of $f$ and $g$ is defined by
$$(f * g)(x) = \int_0^\infty f(y)\, T_x^\alpha g(y)\, y^\alpha e^{-y}\,dy.$$
It satisfies the following property, due to Górlich and Markett [1982].
Lemma 9.5.2  Let $\alpha \ge 0$ and $1 \le p \le \infty$. Then, for $f \in L^p(w_\alpha,\mathbb R_+)$ and $g \in L^1(w_\alpha,\mathbb R_+)$,
$$\|f * g\|_{p,w_\alpha} \le \|f\|_{p,w_\alpha}\, \|g\|_{1,w_\alpha}.$$
For the proof we refer to the paper of Górlich and Markett or to the monograph by Thangavelu [1993, p. 139]. The convolution structure can be extended to multiple Laguerre expansions and used to prove the following result.
Theorem 9.5.3  Let $\kappa_i \ge 0$, $1 \le i \le d$, and $1 \le p \le \infty$. The Cesàro $(C,\delta)$-means of the multiple Laguerre expansion are uniformly bounded as operators on $L^p(W_\kappa,\mathbb R_+^d)$ if $\delta > |\kappa| + d - \frac12$. Moreover, for $p = 1$ and $\infty$, the $(C,\delta)$-means converge in the $L^p(W_\kappa,\mathbb R_+^d)$ norm if and only if $\delta > |\kappa| + d - \frac12$.
Proof  The product formula of the Laguerre polynomials and the definition of the reproducing kernel give
$$P_n^\delta(W_\kappa;x,y) = T_{x_1}^{\kappa_1} \cdots T_{x_d}^{\kappa_d}\, P_n^\delta(W_\kappa;0,y),$$
where $T_{x_i}^{\kappa_i}$ acts on the variable $y_i$. Therefore, it follows that
$$S_n^\delta(W_\kappa;f,x) = \int_{\mathbb R_+^d} f(y)\, T_{x_1}^{\kappa_1} \cdots T_{x_d}^{\kappa_d}\, P_n^\delta(W_\kappa;0,y)\, W_\kappa(y)\,dy,$$
which can be written as a $d$-fold convolution in an obvious way. Therefore, applying the inequality in Lemma 9.5.2 $d$ times gives
$$\bigl\|S_n^\delta(W_\kappa;f)\bigr\|_{p,W_\kappa} \le \bigl\|P_n^\delta(W_\kappa;0,\cdot)\bigr\|_{1,W_\kappa}\, \|f\|_{p,W_\kappa}.$$
The norm of $P_n^\delta(W_\kappa;0,\cdot)$ is bounded if and only if $\delta > |\kappa| + d - \frac12$ by Theorem 9.5.1. The convergence follows from the standard density argument.
The following result states that the expansion of an $\ell^1$-radial function is also an $\ell^1$-radial function.
Proposition 9.5.4  Let $w_a(t) = t^a e^{-t}$ denote the Laguerre weight function. Let $f_0$ be a function defined on $\mathbb R_+$ and $f(x) = f_0(|x|)$. Then, for the multiple Laguerre expansion,
$$S_n^\delta(W^L_\kappa;f,x) = s_n^\delta\bigl(w_{|\kappa|+d-1};\, f_0,\, |x|\bigr).$$
Proof  The proof is based on Proposition 8.4.7. Using (8.4.3) and the orthogonality of $R_\alpha^m$, we obtain
$$\begin{aligned}
\int_{\mathbb R_+^d} f_0(|y|)\, P_n(W^L_\kappa;x,y)\, y^\kappa e^{-|y|}\,dy &= \int_0^\infty f_0(r)\, r^{|\kappa|+d-1} e^{-r} \int_{|u|=1} P_n(W_\kappa;x,ru)\, u^\kappa\,du\,dr \\
&= c \int_0^\infty f_0(r)\, r^{|\kappa|+d-1} e^{-r}\, \widetilde L_n^{|\kappa|+d-1}(r)\,dr\; \widetilde L_n^{|\kappa|+d-1}(|x|),
\end{aligned}$$
where the constant $c$ can be determined by setting $n = 0$. Taking the Cesàro means gives the desired result.
For the summability of the multiple Hermite expansion, there is no special boundary point for $\mathbb R^d$ or convolution structure. In this case the summability often involves techniques from classical Fourier analysis.
Proposition 9.5.5  Let $W^H_\kappa$ be as in Definition 8.3.1. Let $f_\mu(x) = f(\sqrt\mu\, x)$. Then, for each $x \in \mathbb R^d$,
$$\lim_{\mu\to\infty} S_n^\delta\bigl(W^B_{\kappa,\mu};\, f_\mu,\, x/\sqrt\mu\bigr) = S_n^\delta\bigl(W^H_\kappa;\, f,\, x\bigr).$$
Proof  We look at the Fourier coefficients of $f_\mu$. Making the change of variables $x \mapsto y/\sqrt\mu$ leads to
$$\begin{aligned}
a_\nu^n(W^B_{\kappa,\mu};\, f_\mu) &= w^B_{\kappa,\mu} \int_{B^d} f(\sqrt\mu\, x)\, P_\nu^n(W^B_{\kappa,\mu};x)\, W^B_{\kappa,\mu}(x)\,dx \\
&= w^B_{\kappa,\mu}\, \mu^{-(\gamma+d)/2} \int_{\mathbb R^d} f(y)\, \chi_\mu(y)\, P_\nu^n\bigl(W^B_{\kappa,\mu};\, y/\sqrt\mu\bigr)\, h_\kappa^2(y)\, (1-\|y\|^2/\mu)^{\mu-1/2}\,dy,
\end{aligned}$$
where $\chi_\mu$ is the characteristic function of the set $\{y : \|y\| \le \sqrt\mu\}$. Using the fact that $w^B_{\kappa,\mu}\, \mu^{-(\gamma+d)/2} \to w^H_\kappa$ as $\mu \to \infty$ and Theorem 8.3.4, the dominated convergence theorem shows that
$$\lim_{\mu\to\infty} a_\nu^n(W^B_{\kappa,\mu};\, f_\mu) = w^H_\kappa \int_{\mathbb R^d} f(y)\, P_\nu^n(W^H_\kappa;y)\, h_\kappa^2(y)\, e^{-\|y\|^2}\,dy,$$
which is the Fourier coefficient $a_\nu^n(W^H_\kappa;\, f)$; the stated equation follows from Theorem 8.3.4.
A similar result can be seen to hold for the Laguerre expansion by using the
limit relation in Theorem 8.4.4. We leave it to the reader. As an application of this
limit relation we obtain
Proposition 9.5.6  Let $w_a(t) = |t|^{2a} e^{-t^2}$. Let $f_0$ be an even function on $\mathbb R$ and $f(x) = f_0(\|x\|)$. Then, for the orthogonal expansion with respect to $W^H_\kappa(x) = \prod_{i=1}^d x_i^{2\kappa_i}\, e^{-\|x\|^2}$,
$$S_n^\delta(W^H_\kappa;f,x) = s_n^\delta\bigl(w_{|\kappa|+(d-1)/2};\, f_0,\, \|x\|\bigr).$$
Proof  In the identity of Proposition 9.3.5 set $f_\mu(x) = f(\sqrt\mu\, x)$ and replace $x$ by $x/\sqrt\mu$; then take the limit $\mu \to \infty$. The left-hand side is the limit in Proposition 9.5.5, while the right-hand side is the same limit as that for $d = 1$.
Furthermore, for the classical multiple Hermite polynomials there is an
analogue of Proposition 9.3.6 for ridge functions.
Proposition 9.5.7  Let $W^H(x) = e^{-\|x\|^2}$ and $w(t) = e^{-t^2}$. Let $f \colon \mathbb R \to \mathbb R$. For $\|y\| = 1$ and $n \ge 0$,
$$S_n^\delta\bigl(W^H;\, f(\langle x,\cdot\rangle),\, y\bigr) = s_n^\delta\bigl(w;\, f,\, \langle x,y\rangle\bigr).$$
The proof follows by taking the limit in Proposition 9.3.6.
Corollary 9.5.8  Let $f(x) = f_0(\|x\|)$ be as in Proposition 9.5.6. Let $f \in L^p(W^H;\mathbb R^d)$, $1 \le p < \infty$. Then $S_n^\delta(W^H;f)$ converges to $f$ in the $L^p(W^H;\mathbb R^d)$ norm if $\delta > \frac{d-1}{2}$.
Proof  Using polar coordinates, we obtain
$$\pi^{-d/2} \int_{\mathbb R^d} \bigl|S_n^\delta(W^H;f,x)\bigr|^p e^{-\|x\|^2}\,dx = \pi^{-d/2}\, \sigma_{d-1} \int_0^\infty \bigl|s_n^\delta\bigl(w_{(d-1)/2};\, f_0,\, r\bigr)\bigr|^p\, r^{d-1} e^{-r^2}\,dr.$$
Hence, the stated result follows from the result for the Hermite expansion on the real line; see Thangavelu [1993].
For general results on the summability of multiple Hermite expansion, we
again refer to Thangavelu [1993].
9.6 Multiple Jacobi Expansion

Multiple Jacobi polynomials are orthogonal with respect to the weight function $W_{\alpha,\beta}(x) = \prod_{i=1}^d (1-x_i)^{\alpha_i}(1+x_i)^{\beta_i}$; see Subsection 5.1.1. Recall that $p_n^{(a,b)}(t)$ denotes an orthonormal Jacobi polynomial with respect to $(1-t)^a(1+t)^b$. There is a convolution structure for the Jacobi polynomials, discovered by Gasper [1972].
Lemma 9.6.1  Let $\alpha, \beta > -1$. There is an integral representation of the form
$$p_n^{(\alpha,\beta)}(x)\, p_n^{(\alpha,\beta)}(y) = p_n^{(\alpha,\beta)}(1) \int_{-1}^1 p_n^{(\alpha,\beta)}(t)\,d\mu_{x,y}^{(\alpha,\beta)}(t), \qquad n \ge 0,$$
where the real Borel measure $d\mu_{x,y}^{(\alpha,\beta)}$ on $[-1,1]$ satisfies
$$\int_{-1}^1 \bigl|d\mu_{x,y}^{(\alpha,\beta)}(t)\bigr| \le M, \qquad -1 < x, y < 1,$$
for some constant $M$ independent of $x$, $y$ if and only if $\alpha \ge \beta$ and $\alpha + \beta \ge -1$. Moreover, the measures are nonnegative, that is, $d\mu_{x,y}^{(\alpha,\beta)}(t) \ge 0$, if and only if $\beta \ge -\frac12$ or $\alpha + \beta \ge 0$.
The product in Lemma 9.6.1 gives rise to a convolution structure of multiple Jacobi polynomials, which allows us to reduce the summability on $[-1,1]^d$ to the summability at the point $\mathbf 1 = (1,\ldots,1)$, just as in the Laguerre case. The Jacobi polynomials have a generating function (Bailey [1935, p. 102, Ex. 19])
$$G^{(\alpha,\beta)}(r;x) = \sum_{k=0}^\infty p_k^{(\alpha,\beta)}(1)\, p_k^{(\alpha,\beta)}(x)\, r^k = \frac{1-r}{(1+r)^{\alpha+\beta+2}}\; {}_2F_1\Bigl(\frac{\alpha+\beta+2}{2},\, \frac{\alpha+\beta+3}{2};\, \beta+1;\, \frac{2r(1+x)}{(1+r)^2}\Bigr), \qquad 0 \le r < 1.$$
Multiplying this formula with different variables gives a generating function for the multiple Jacobi polynomials,
$$\sum_{n=0}^\infty r^n \sum_{|k|=n} P_k^{(\alpha,\beta)}(x)\, P_k^{(\alpha,\beta)}(\mathbf 1) = \prod_{i=1}^d G^{(\alpha_i,\beta_i)}(r;x_i) := G_d^{(\alpha,\beta)}(r;x),$$
where $\mathbf 1 = (1,1,\ldots,1)$. Further multiplication, by
$$(1-r)^{-\delta-1} = \sum_{n=0}^\infty \frac{(\delta+1)_n\, r^n}{n!},$$
gives
$$\sum_{n=0}^\infty \frac{(\delta+1)_n}{n!}\, K_{n,d}^\delta(W_{\alpha,\beta};x,\mathbf 1)\, r^n = (1-r)^{-\delta-1}\, G_d^{(\alpha,\beta)}(r;x) \qquad (9.6.1)$$
for the Cesàro $(C,\delta)$-means of the multiple Jacobi polynomials.
Theorem 9.6.2  Let $\alpha_j \ge -\frac12$ and $\beta_j \ge -\frac12$ for $1 \le j \le d$. Then the Cesàro $(C,\delta)$-means $S_{n,d}^\delta(W_{\alpha,\beta};f)$ of the product Jacobi expansion define a positive approximate identity on $C([-1,1]^d)$ if $\delta \ge \sum_{i=1}^d (\alpha_i+\beta_i) + 3d - 1$; moreover, the order of summability is best possible in the sense that the $(C,\delta)$-means are not positive for $0 < \delta < \sum_{i=1}^d (\alpha_i+\beta_i) + 3d - 1$.
Proof  Let $\sigma(\alpha) = \sum_{i=1}^d \alpha_i$; if all $\alpha_i \ge 0$ then $\sigma(\alpha) = |\alpha|$. Lemma 9.6.1 shows that it is sufficient to prove that, for $\alpha_j \ge \beta_j$, the kernel $K_{n,d}^\delta(W_{\alpha,\beta};x,\mathbf 1) \ge 0$ if and only if $\delta \ge \sigma(\alpha)+\sigma(\beta)+3d-1$. For $d = 1$, the $(C,\alpha+\beta+2)$-means $K_{n,1}^{\alpha+\beta+2}(w_{\alpha,\beta};x,1)$ are nonnegative for $-1 \le x \le 1$, as proved in Gasper [1977]. Hence, by (9.6.1) with $d = 1$, the function $(1-r)^{-\alpha-\beta-3}\, G^{(\alpha,\beta)}(r;x)$ is a completely monotone function of $r$, that is, a function whose power series has all nonnegative coefficients. Since multiplication is closed in the space of completely monotone functions, it follows that
$$(1-r)^{-\sigma(\alpha)-\sigma(\beta)-3d}\, G_d^{(\alpha,\beta)}(r;x) = \prod_{j=1}^d (1-r)^{-\alpha_j-\beta_j-3}\, G^{(\alpha_j,\beta_j)}(r;x_j)$$
is a completely monotone function. Consequently, by (9.6.1) we conclude that the means $K_{n,d}^{\sigma(\alpha)+\sigma(\beta)+3d-1}(W_{\alpha,\beta};x,\mathbf 1) \ge 0$. We now prove that the order of summation cannot be improved. If the $(C,\delta_0)$-means are positive then the $(C,\delta)$-means are positive for $\delta \ge \delta_0$. Hence, it suffices to show that the $(C,\sigma(\alpha)+\sigma(\beta)+3d-1-\varepsilon)$-means of the kernel are not positive for $0 < \varepsilon < 1$. From the generating function and the fact that ${}_2F_1(a,b;c;0) = 1$, we conclude that, for $\delta = \sigma(\alpha)+\sigma(\beta)+3d-2$,
$$(1-r)^{-\delta-1}\, G_d^{(\alpha,\beta)}(r;-\mathbf 1) = (1-r)(1-r^2)^{-\sigma(\alpha)-\sigma(\beta)-2d} = \sum_{k=0}^\infty \frac{(\sigma(\alpha)+\sigma(\beta)+2d)_k}{k!}\, \bigl(r^{2k} - r^{2k+1}\bigr).$$
Hence, setting
$$A_k = \frac{(\sigma(\alpha)+\sigma(\beta)+3d-1)_k}{k!}\, K_{k,d}^{\sigma(\alpha)+\sigma(\beta)+3d-2}(W_{\alpha,\beta};-\mathbf 1,\mathbf 1)$$
and comparing with (9.6.1), we conclude that
$$A_{2k} = -A_{2k+1} = \frac{(\sigma(\alpha)+\sigma(\beta)+2d)_k}{k!} \ge 0.$$
Therefore, it follows that
$$\frac{(\sigma(\alpha)+\sigma(\beta)+3d-\varepsilon)_{2n+1}}{(2n+1)!}\, K_{2n+1,d}^{\sigma(\alpha)+\sigma(\beta)+3d-1-\varepsilon}(W_{\alpha,\beta};-\mathbf 1,\mathbf 1) = \sum_{k=0}^{2n+1} \frac{(-\varepsilon)_{2n+1-k}}{(2n+1-k)!}\, A_k = -\sum_{k=0}^{n} \frac{1+\varepsilon}{2n-2k+1}\, \frac{(-\varepsilon)_{2n-2k}}{(2n-2k)!}\, A_{2k}.$$
Since $0 < \varepsilon < 1$, we conclude that the $(C,\sigma(\alpha)+\sigma(\beta)+3d-1-\varepsilon)$-means are not positive.
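In the simplest instance of the theorem, $d = 1$ and $\alpha = \beta = -\frac12$, the critical index is $\sigma(\alpha)+\sigma(\beta)+3d-1 = 1$, and the $(C,1)$ kernel at $(x,1)$ reduces to the classical Fejér kernel of Fourier series, whose nonnegativity can be observed directly (a small sanity check under these assumptions, with an arbitrary grid of angles):

```python
import math

def A(m, delta):
    # Cesaro coefficient A_m^delta = (delta + 1)_m / m!
    out = 1.0
    for j in range(1, m + 1):
        out *= (delta + j) / j
    return out

def cesaro_kernel(n, delta, theta):
    # (C, delta) mean of the Chebyshev kernel at (cos theta, 1):
    # the k = 0 term is 1, the k >= 1 terms are 2 cos(k theta)
    s = 1.0
    for k in range(1, n + 1):
        s += A(n - k, delta) / A(n, delta) * 2 * math.cos(k * theta)
    return s

delta = 1.0   # critical index for alpha = beta = -1/2, d = 1
vals = [cesaro_kernel(20, delta, 0.05 * j) for j in range(126)]
print(min(vals))   # nonnegative: the (C,1) means give the Fejer kernel
```

Here $A_{n-k}^1/A_n^1 = 1 - k/(n+1)$, so the kernel coincides with the Fejér kernel $\frac{1}{n+1}\bigl(\sin\frac{(n+1)\theta}{2}/\sin\frac\theta2\bigr)^2 \ge 0$.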
The positivity of these Cesàro means implies their convergence, since it shows that the uniform norm of $S_n^\delta(W_{\alpha,\beta};f)$ is 1. The explicit formula for the reproducing kernel is known only in one special case, namely, the case in which $\alpha_i = \beta_i = -\frac12$ for all $i$; see Xu [1995] or Berens and Xu [1996]. This formula, given below, is nonetheless of interest and shows a character different from the explicit formulae for the reproducing kernel on the ball and on the simplex. Denote the weight function by $W_0(x)$, that is,
$$W_0(x) = (1-x_1^2)^{-1/2} \cdots (1-x_d^2)^{-1/2}, \qquad x \in [-1,1]^d.$$
We also need the notion of divided difference. The divided difference of a function $f \colon \mathbb R \to \mathbb R$ at the pairwise distinct points $x_0, x_1, \ldots, x_n$ in $\mathbb R$ is defined inductively by
$$[x_0]f = f(x_0) \qquad\text{and}\qquad [x_0,\ldots,x_n]f = \frac{[x_0,\ldots,x_{n-1}]f - [x_1,\ldots,x_n]f}{x_0 - x_n}.$$
The divided difference is a symmetric function of the coordinates of the points.
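The recursive definition translates directly into code, and the symmetry claim can be observed by permuting the points (a small illustration; the test function and points are arbitrary choices of ours):

```python
from itertools import permutations

def divided_difference(f, xs):
    # [x_0, ..., x_n] f via the recursive definition above
    if len(xs) == 1:
        return f(xs[0])
    return (divided_difference(f, xs[:-1]) - divided_difference(f, xs[1:])) \
        / (xs[0] - xs[-1])

f = lambda t: t ** 4 - 2 * t
pts = (0.3, -0.7, 1.1, 0.2)
vals = [divided_difference(f, p) for p in permutations(pts)]
print(max(vals) - min(vals))   # ~0: symmetric in the points
# for f(t) = t^4 the third-order divided difference is x0 + x1 + x2 + x3,
# and the degree-1 part of f contributes nothing
print(vals[0])                 # = 0.3 - 0.7 + 1.1 + 0.2 = 0.9
```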
Theorem 9.6.3  Let $\mathbf 1 = (1,1,\ldots,1)$. Then $P_n(W_0;x,\mathbf 1) = [x_1,\ldots,x_d]\, G_n$ with
$$G_n(t) = (-1)^{\lfloor d/2\rfloor}\, 2\, (1-t^2)^{(d-1)/2} \begin{cases} \sqrt{1-t^2}\; U_{n-1}(t) & \text{for } d \text{ even}, \\ T_n(t) & \text{for } d \text{ odd}. \end{cases}$$
Proof  The generating function of the Chebyshev polynomials $T_n$ and $U_n$, given in Proposition 1.4.12, implies that
$$\frac{(1-r^2)^d}{\prod_{i=1}^d (1-2rx_i+r^2)} = \sum_{n=0}^\infty P_n(W_0;x,\mathbf 1)\, r^n.$$
The left-hand side can be expanded as a power series using the formula
$$[x_1,\ldots,x_d]\, \frac{1}{a-b(\cdot)} = \frac{b^{d-1}}{\prod_{i=1}^d (a-bx_i)},$$
which can be proved by induction on the number of variables; the result is
$$\begin{aligned}
\frac{(1-r^2)^d}{\prod_{i=1}^d (1-2rx_i+r^2)} &= \frac{(1-r^2)^d}{(2r)^{d-1}}\, [x_1,\ldots,x_d]\, \frac{1}{1-2r(\cdot)+r^2} = \frac{(1-r^2)^d}{(2r)^{d-1}}\, [x_1,\ldots,x_d] \sum_{n=d-1}^\infty U_n(\cdot)\, r^n \\
&= 2^{1-d}\, [x_1,\ldots,x_d] \sum_{n=0}^\infty r^n \sum_{k=0}^d \binom{d}{k} (-1)^k U_{n-2k+d-1}(\cdot),
\end{aligned}$$
where the second equation uses the generating function of the Chebyshev polynomials of the second kind, given in Definition 1.4.10, and the third equation uses the fact that $[x_1,\ldots,x_d]\,p = 0$ whenever $p$ is a polynomial of degree at most $d-2$. Using $U_{m-1}(t) = \sin m\theta/\sin\theta$ and $\sin m\theta = (e^{im\theta} - e^{-im\theta})/(2i)$, with $t = \cos\theta$, as well as the binomial theorem, we obtain
$$\begin{aligned}
\sum_{k=0}^d \binom{d}{k} (-1)^k U_{n-2k+d-1}(t) &= \frac{1}{\sin\theta} \sum_{k=0}^d \binom{d}{k} (-1)^k \sin\bigl((n-2k+d)\theta\bigr) \\
&= \frac{1}{2i\sin\theta} \Bigl[e^{i(n+d)\theta}\,(1-e^{-2i\theta})^d - e^{-i(n+d)\theta}\,(1-e^{2i\theta})^d\Bigr] \\
&= (2i)^d (\sin\theta)^{d-1}\, \frac{e^{in\theta} - (-1)^d e^{-in\theta}}{2i} = 2^{d-1} G_n(t),
\end{aligned}$$
from which the stated formula follows on comparing the coefficients of $r^n$ in the two power expansions.
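For $d = 2$ the theorem asserts $P_n(W_0;x,\mathbf 1) = [x_1,x_2]G_n$ with $G_n(t) = -2(1-t^2)U_{n-1}(t)$, which can be compared against the direct sum over $|\alpha| = n$ of products of orthonormal Chebyshev polynomials (a numerical check of the statement above; the evaluation point $(0.3,-0.6)$ is an arbitrary choice):

```python
def cheb_T(n, t):
    a, b = 1.0, t
    if n == 0:
        return a
    for _ in range(n - 1):
        a, b = b, 2 * t * b - a
    return b

def cheb_U(n, t):
    if n < 0:
        return 0.0
    a, b = 1.0, 2 * t
    if n == 0:
        return a
    for _ in range(n - 1):
        a, b = b, 2 * t * b - a
    return b

def kernel_direct(n, x1, x2):
    # P_n(W_0; x, 1): sum over |alpha| = n; tilde-T_k(1) tilde-T_k(x) = 2 T_k(x)
    s = 0.0
    for a1 in range(n + 1):
        a2 = n - a1
        c1 = 1.0 if a1 == 0 else 2.0 * cheb_T(a1, x1)
        c2 = 1.0 if a2 == 0 else 2.0 * cheb_T(a2, x2)
        s += c1 * c2
    return s

def kernel_divided_diff(n, x1, x2):
    # [x1, x2] G_n with G_n(t) = -2 (1 - t^2) U_{n-1}(t)   (the case d = 2)
    G = lambda t: -2.0 * (1 - t * t) * cheb_U(n - 1, t)
    return (G(x1) - G(x2)) / (x1 - x2)

for n in (1, 2, 5, 8):
    print(kernel_direct(n, 0.3, -0.6), kernel_divided_diff(n, 0.3, -0.6))
```

For $n = 1$, for instance, $G_1(t) = -2 + 2t^2$ and $[x_1,x_2]G_1 = 2(x_1+x_2)$, which is exactly the direct sum $2T_1(x_1) + 2T_1(x_2)$.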
Proposition 9.6.4  Define $x = \cos\theta = (\cos\theta_1,\ldots,\cos\theta_d)$ and $y = \cos\phi = (\cos\phi_1,\ldots,\cos\phi_d)$. For $\varepsilon \in \mathbb Z_2^d$, denote by $\theta+\varepsilon\phi$ the vector that has components $\theta_i + \varepsilon_i\phi_i$. Then
$$P_n(W_0;x,y) = \sum_{\varepsilon\in\mathbb Z_2^d} P_n\bigl(W_0;\, \cos(\theta+\varepsilon\phi),\, \mathbf 1\bigr).$$
Proof  With respect to the normalized weight function $\pi^{-1}(1-t^2)^{-1/2}$, the orthonormal Chebyshev polynomials are $\widetilde T_0(t) = 1$ and $\widetilde T_n(t) = \sqrt2 \cos n\theta$. Hence
$$P_n(W_0;x,y) = 2^d\, {\sum_{|\alpha|=n}}' \cos\alpha_1\theta_1 \cos\alpha_1\phi_1 \cdots \cos\alpha_d\theta_d \cos\alpha_d\phi_d,$$
where $\alpha \in \mathbb N_0^d$ and the notation $\sum'$ means that whenever $\alpha_i = 0$ the term containing $\cos\alpha_i\theta_i$ is halved. Then the stated formula is seen to be merely a consequence of the addition formula for the cosine function.
The multiple Chebyshev expansion is related to the multiple Fourier series on the torus $\mathbb T^d$. In fact, $P_n(W_0;x,\mathbf 1)$ is the Dirichlet kernel in the following sense:
$$P_n(W_0;x,\mathbf 1) = \sum_{\alpha\in\mathbb Z^d,\ |\alpha|=n} e^{i\langle\alpha,\theta\rangle}, \qquad x = \cos\theta.$$
The corresponding summability of the multiple Fourier series is called $\ell$-$1$ summability; it has a completely different character from the usual spherical summability (in which the summation of the kernel is taken over multi-indices with $\|\alpha\| = n$); see, for example, Stein and Weiss [1971], Podkorytov [1981] or Berens and Xu [1997].
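The identification of $P_n(W_0;x,\mathbf 1)$ with the $\ell^1$ Dirichlet kernel amounts to the cosine product formula, and the two sides can be compared numerically for $d = 2$ (the sample points below are arbitrary choices of ours):

```python
import cmath, math

def dirichlet_l1(n, th1, th2):
    # sum over alpha in Z^2 with |alpha_1| + |alpha_2| = n of e^{i<alpha, theta>}
    s = 0 + 0j
    for a1 in range(-n, n + 1):
        m = n - abs(a1)
        for a2 in ((m, -m) if m > 0 else (0,)):
            s += cmath.exp(1j * (a1 * th1 + a2 * th2))
    return s.real

def chebyshev_kernel(n, th1, th2):
    # P_n(W_0; x, 1) with x = (cos th1, cos th2): the primed sum of products
    s = 0.0
    for a1 in range(n + 1):
        a2 = n - a1
        c1 = 1.0 if a1 == 0 else 2.0 * math.cos(a1 * th1)
        c2 = 1.0 if a2 == 0 else 2.0 * math.cos(a2 * th2)
        s += c1 * c2
    return s

print(dirichlet_l1(4, 0.7, 1.9), chebyshev_kernel(4, 0.7, 1.9))  # equal
```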
9.7 Notes

Section 9.1  Proposition 9.1.2 and its proof are extensions of the one-variable results; see Freud [1966, p. 139]. The properties of best polynomial approximations can be found in books on approximation theory, for example, Lorentz [1986], Cheney [1998] or DeVore and Lorentz [1993].

For $d = 1$ the asymptotics of the Christoffel function are known for general weight functions. It was proved in Máté, Nevai and Totik [1991] that
$$\lim_{n\to\infty} n\, \lambda_n(d\alpha, x) = \pi\, \alpha'(x)\, \sqrt{1-x^2} \qquad\text{for a.e. } x \in [-1,1]$$
for measures belonging to the Szegő class (that is, $\log \alpha'(\cos t) \in L^1[0,2\pi]$). For orthogonal polynomials of several variables, we may conjecture that an analogous result holds:
$$\lim_{n\to\infty} \binom{n+d}{d}\, \lambda_n(W;x) = \frac{W(x)}{W_0(x)},$$
where $W_0$ is the analogue of the normalized Chebyshev weight function for the given domain. This limit relation was established for several classical-type weight functions: on the cube $[-1,1]^d$ with $W_0(x) = \pi^{-d}(1-x_1^2)^{-1/2}\cdots(1-x_d^2)^{-1/2}$, on the ball $B^d$ with $W_0(x) = w_B(1-\|x\|^2)^{-1/2}$ and on the simplex $T^d$ with $W_0(x) = w_T\, x_1^{-1/2}\cdots x_d^{-1/2}(1-|x|)^{-1/2}$, where $w_B$ and $w_T$ are normalization constants, so that $W_0$ has unit integral on the corresponding domain; see Xu [1995], Bos [1994] and Xu [1996a, b]. The above limit was also studied for a centrally symmetric weight function in Bos, Della Vecchia and Mastroianni [1998]. More recently, fairly general results on the asymptotics of the Christoffel function were established in Kroó and Lubinsky [2013a, b]; their results were motivated by the universality limit, a concept originating in random matrix theory.
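The Máté-Nevai-Totik limit can be watched converging in the Chebyshev case, where the orthonormal polynomials and the Christoffel function are explicit and the limit is $\pi\,\alpha'(x)\sqrt{1-x^2} = \pi$ (a worked special case of ours, for illustration only):

```python
import math

def christoffel(n, x):
    # lambda_n(x) = 1 / sum_{k < n} p_k(x)^2 for the Chebyshev measure
    # d_alpha = (1 - x^2)^{-1/2} dx; p_0 = 1/sqrt(pi), p_k = sqrt(2/pi) T_k,
    # with T_k(cos theta) = cos(k theta)
    theta = math.acos(x)
    K = 1.0 / math.pi + sum(2.0 / math.pi * math.cos(k * theta) ** 2
                            for k in range(1, n))
    return 1.0 / K

x = 0.4
for n in (10, 100, 1000):
    print(n * christoffel(n, x))   # tends to pi = pi * alpha'(x) * sqrt(1 - x^2)
```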
Section 9.2  Kogbetliantz's inequality on the positivity of the $(C,\delta)$-means of Gegenbauer polynomials is a special case of a much more general positivity result on Jacobi polynomials due to Askey and to Gasper (see, for example, Askey [1975] and Gasper [1977]). These inequalities also imply other positivity results for $h$-harmonics.

For the weight function $h_\kappa(x) = \prod_{i=1}^d |x_i|^{\kappa_i}$, which is invariant under $\mathbb Z_2^d$, the result in Corollary 9.2.4 can be improved as follows.

Theorem 9.7.1  The $(C,\delta)$-means of the $h$-harmonic expansion of every continuous function $f$ for $h_\kappa(x) = \prod_{i=1}^d |x_i|^{\kappa_i}$ converge uniformly to $f$ if and only if
$$\delta > \frac{d-2}{2} + |\kappa| - \min_{1\le i\le d} \kappa_i.$$
A further result shows that the great circles defined by the intersection of $S^{d-1}$ and the coordinate planes form boundaries on $S^{d-1}$. Define
$$S^{d-1}_{\mathrm{int}} = S^{d-1} \setminus \bigcup_{i=1}^d \{x \in S^{d-1} : x_i = 0\},$$
which is the interior region bounded by these boundaries on $S^{d-1}$. Then, for continuous functions, $S_n^\delta(h_\kappa^2, f; x)$ converges to $f(x)$ for every $x \in S^{d-1}_{\mathrm{int}}$ provided that $\delta > \frac{d-2}{2}$, independently of $\kappa$. The proof of these results is in Li and Xu [2003], which uses rather involved sharp estimates of the Cesàro means of the reproducing kernels given in Theorem 7.5.5. For $L^p$ convergence with $1 < p < \infty$, a sharp critical index for the convergence of $S_n^\delta(h_\kappa^2, f)$ was established in Dai and Xu [2009b].
Section 9.3  The summability of the expansion in classical orthogonal polynomials on the ball can be traced back to the work of Chen, Koschmieder and several others, where the assumption that $2\mu$ is an integer was imposed; see Section 12.7 of Erdélyi et al. [1953]. Theorem 9.3.4 was proved in Xu [1999a]. The general relation between summability of orthogonal expansions on the sphere and on the ball is studied in Xu [2001a]. For the weight function $W^B_{\kappa,\mu}(x) = \prod_{i=1}^d |x_i|^{2\kappa_i}(1-\|x\|^2)^{\mu-1/2}$, which is invariant under $\mathbb Z_2^d$, Corollary 9.3.3 can be improved to a sharp result: $S_n^\delta(W^B_{\kappa,\mu}, f)$ converges to $f$ uniformly for continuous functions $f$ if and only if $\delta > \frac{d-1}{2} + |\kappa| - \min_{1\le i\le d+1}\kappa_i$, with $\kappa_{d+1} = \mu$ as in Theorem 9.7.1; further, pointwise convergence holds for $x$ inside $B^d$ and $x$ not on one of the coordinate hyperplanes, provided that $\delta > \frac{d-1}{2}$. The proof uses Theorem 9.3.1.
Section 9.4  A result similar to that in Theorem 9.7.1 also holds for the classical orthogonal polynomials on the simplex. The proof requires a sharp estimate for the Cesàro means of the reproducing kernels given in Theorem 8.2.7. The estimate is similar to that for the kernel in the case of $\mathbb Z_2^d$-invariant $h_\kappa^2$, but there are additional difficulties. A partial result appeared in Li and Xu [2003]; a complete analogue of the result on the ball was proved in Dai and Xu [2009a], namely that $S_n^\delta(W^T_{\kappa,\mu}, f)$ converges to $f$ uniformly for continuous functions $f$ if and only if $\delta > \frac{d-1}{2} + |\kappa| - \min_{1\le i\le d+1}\kappa_i$ with $\kappa_{d+1} = \mu$. The pointwise convergence in the interior of $T^d$ holds if $\delta > \frac{d-1}{2}$. Furthermore, a sharp result for convergence in $L^p$, $1 < p < \infty$, was established in Dai and Xu [2009b].
Section 9.5  For Laguerre expansions there are several different forms of summability, depending on the $L^2$ spaces under consideration. For example, considering the Laguerre functions $\mathcal L_n^\alpha(x) = L_n^\alpha(x)\, e^{-x/2} x^{\alpha/2}$ and their other varieties, $L_n^\alpha(\frac12 x^2)\, x^\alpha$ or $L_n^\alpha(x^2)(2x)^{1/2}$, one can form orthogonal systems in $L^2(dx,\mathbb R_+)$ and $L^2(x^{2\alpha+1}dx,\mathbb R_+)$. Consequently, there have been at least four types of Laguerre expansion on $\mathbb R_+$ studied in the literature and each has its extension in the multiple setting; see Thangavelu [1992] for the definitions. We considered only orthogonal polynomials in $L^2(x^\alpha e^{-x}dx;\mathbb R_+)$, following Xu [2000d]. For results on the multiple Hermite and Laguerre expansions, see the monograph by Thangavelu [1993].
9.7 Notes 317
Section 9.6 For multiple Jacobi expansions the order of can be reduced if
we consider only convergence without positivity. However, although their product
nature leads to a convolution structure, which allows us to reduce the summability
to the point x = 1, convergence at the point 1 no longer follows from the product
of the results for one variable.
Theorem 9.7.2 Let
j
,
j

1
2
. The Ces` aro (C, ) means of the multiple
Jacobi expansion with respect to W
,
are uniformly convergent in the norm of
C([1, 1]
d
) provided that >

d
j=1
max
j
,
j
+
d
2
.
Similar results also hold for the case where
j
> 1,
j
> 1 and
j
+
j

1, 1 j d, with a properly modied condition on . In particular, if


j
=
j
=

1
2
then convergence holds for > 0. This is the order of the
1
summability
of the multiple Fourier series. The proof of these results was given in Li and
Xu [2000], which uses elaborate estimates of the kernel function that are written
in terms of the integral of the Poisson kernel of the Jacobi polynomials.
Further results The concise formulae for the reproducing kernels with
respect to the h
2

associated with Z
d
2
on the sphere, W
B
,
on the unit ball and
W
T
,
on the simplex open the way for in-depth study of various topics in approx-
imation theory and harmonic analysis. Many recent advances in this direction
are summed up in the book Dai and Xu [2013]. These kernels are also useful
in the construction of highly localized bases, called needlets, a name coined
in Narcowich, Petrushev, and Ward [2006], where such bases were constructed
on the unit sphere. These bases were constructed and studied by Petrushev and
Xu [2008a] for W
B

on the ball; by Ivanov, Petrushev and Xu [2010], [2012] for


W
T
,
on the simplex and on product domains, respectively; by Kerkyacharian,
Petrushev, Picard and Xu [2009] for the Laguerre weight on R
d
+
; and by Petrushev
and Xu [2008b] for the Hermite weight on R
d
.
The Ces` aro-summability of orthogonal expansions on a cylindrical domain was
studied by Wade [2011].
10
Orthogonal Polynomials Associated with Symmetric Groups

In this chapter we consider analysis associated with symmetric groups. The differential-difference operators for these groups, called type $A$ in Weyl group nomenclature, are crucial in this theory. The techniques tend to be algebraic, relying on methods from combinatorics and linear algebra. Nevertheless the chapter culminates in explicit evaluations of norm formulae and integrals of the Macdonald-Mehta-Selberg type. These integrals involve the weight function $\prod_{1\le i<j\le d}|x_i - x_j|^{2\kappa}$ on the torus and the weight function on $\mathbb{R}^d$ equipped with the Gaussian measure. The fundamental objects are a commuting set of self-adjoint operators and the associated eigenfunction decomposition. The simultaneous eigenfunctions are certain homogeneous polynomials, called nonsymmetric Jack polynomials. The Jack polynomials are a family of parameterized symmetric polynomials, which have been studied mostly in combinatorial settings.

The fact that the symmetric group is generated by transpositions of adjacent entries will frequently be used in proofs; for example, it suffices to prove invariance under adjacent transpositions to show group invariance. Two bases of polynomials will be used, not only the usual monomial basis but also the $p$-basis; these are polynomials, defined by a generating function, which have convenient transformation formulae for the differential-difference operators. Also, they provide expressions for the nonsymmetric Jack polynomials which are independent of the number of trailing zeros of the label $\alpha \in \mathbb{N}_0^d$. The chapter concludes with expressions for the type-$A$ exponential-type kernel and the intertwining operators and an algorithm for the nonsymmetric Jack polynomials labeled by partitions. The algorithm is easily implementable in a symbolic computation system. There is a brief discussion of the associated Hermite-type polynomials.

10.1 Partitions, Compositions and Orderings

In this chapter we will be concerned with the action of the symmetric group $S_d$ on $\mathbb{R}^d$ and on polynomials. Some orderings of $\mathbb{N}_0^d$ are fundamental and are especially relevant to the idea of expressing a permutation as a product of adjacent transpositions.

Consider the elements of $S_d$ as functions on $\{1, 2, \ldots, d\}$. For $x \in \mathbb{R}^d$ and $w \in S_d$ let $(xw)_i = x_{w(i)}$ for $1 \le i \le d$; extend this action to polynomials by writing $wf(x) = f(xw)$. This has the effect that monomials transform to monomials, $w(x^\alpha) = x^{w\alpha}$, where $(w\alpha)_i = \alpha_{w^{-1}(i)}$ for $\alpha \in \mathbb{N}_0^d$. (Consider $x$ as a row vector, $\alpha$ as a column vector and $w$ as a permutation matrix, with 1s at the $(w(j), j)$ entries.) The reflections in $S_d$ are the transpositions interchanging $x_i$ and $x_j$, denoted by $(i,j)$ for $i \ne j$. The term composition is a synonym for multi-index; it is commonly used in algebraic combinatorics. The basis polynomials (some orthogonal) that will be considered have composition labels and are contained in subspaces invariant under $S_d$. In such a subspace the standard composition label will be taken to be the associated partition. We recall the notation $\varepsilon_i$ for the unit label: $(\varepsilon_i)_j = \delta_{ij}$. This will be used in calculations of quantities such as $\alpha - \varepsilon_i$ (when $\alpha \in \mathbb{N}_0^d$ and $\alpha_i \ge 1$).
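The compatibility $wf(x) = f(xw)$ for monomials, that is $w(x^\alpha) = x^{w\alpha}$ with $(w\alpha)_i = \alpha_{w^{-1}(i)}$, is easy to experiment with. The following sketch (ours, not part of the text; all names are illustrative) checks it over a whole group orbit:

```python
from itertools import permutations

def act_point(x, w):
    # (xw)_i = x_{w(i)}; w is a tuple with w[i] = w(i), 0-based indices
    return tuple(x[w[i]] for i in range(len(w)))

def act_composition(alpha, w):
    # (w alpha)_i = alpha_{w^{-1}(i)}
    d = len(w)
    winv = [0] * d
    for i in range(d):
        winv[w[i]] = i
    return tuple(alpha[winv[i]] for i in range(d))

def monomial(x, alpha):
    # x^alpha = prod_i x_i^{alpha_i}
    out = 1
    for xi, ai in zip(x, alpha):
        out *= xi ** ai
    return out

# w x^alpha = x^{w alpha} amounts to: monomial(xw, alpha) == monomial(x, w*alpha)
```

Evaluating both sides at a fixed point for every $w \in S_3$ confirms the row-vector/column-vector convention stated above.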
Denition 10.1.1 A partition (with no more than d parts) is a composition
N
d
0
such that
1

2

d
. The set of all such partitions is denoted by N
d,P
0
.
For any N
d
0
, let
+
be the unique partition such that
+
= w for some
w S
d
. The length of a composition is () := maxi :
i
> 0.
There is a total order on compositions, namely, lexicographic order (lex order
for short) (see also Section 3.1), dened as follows: ~
L
means that
i
=
i
for 1 i < m and
m
>
m
for some m d (, N
d
0
); but there is a partial
order better suited for our purposes, the dominance order.
Denition 10.1.2 For , N
d
0
, say, dominates , that is, _ , when

j
i=1

j
i=1

i
for 1 j d. Also, ~ means _ with ,=. Further,
means that [[ =[[ and either
+
~
+
or
+
=
+
and ~.
Clearly the relations _and are partial orderings ( means or =
). Dominance has the nice property of reverse invariance: for N
d
0
let
R
=
(
d
,
d1
, . . . ,
1
); then, for , N
d
0
, [[ = [[ and _ implies that
R
_

R
. In our applications dominance order will be used to compare compositions
, having
+
=
+
and partitions , having [[ =[[. The following lemma
shows the effect of two basic operations.
Lemma 10.1.3 Suppose that N
d
0
and N
d,P
0
; then
(i) if
i
>
j
and i < j then ~(i, j);
(ii) if
i
>
j
+1 (implying i < j) then ~
+
, where
i
=
i
1,
j
=
j
+1
and
k
=
k
for k ,= i, j;
320 Orthogonal Polynomials Associated with Symmetric Groups
(iii)
+
_;
(iv) if
i
>
j
+1 and 1 s <
i

j
then ~
_

(s)
_
+
where
(s)
i
=
i
s,

(s)
j
=
j
+s and
(s)
k
=
k
for k ,= i, j (that is,
(s)
= s(
i

j
)).
Proof For part (i) let = (i, j); then
m
k=1

k
=
m
k=1

k
+(
i

j
) for i m<
j, otherwise
m
k=1

j
=
m
k=1

j
. For part (ii) assume (by relabeling if necessary)
that
i
>
i+1

j1
>
j

j+1
(possibly j =i +1) and that now
k
=
k
for all k except i, for which
i
=
i
1 and j, for which
j
=
j
+1. By construc-
tion
i

i+1
and
j1

j
, so =
+
(this is the effect of the relabeling) and
clearly ~
+
. For part (iii) note that
+
can be obtained by applying a nite
sequence of adjacent transpositions (apply (i, i +1) when
i
<
i+1
).
For part (iv), note that
_

(
i

j
s)
_
+
=
_

(s)
_
+
. Part (ii) shows that ~
_

(1)
_
+
~
_

(2)
_
+
~ ~
_

(t)
_
+
for t = [
1
2
(
i

j
)], and this proves
part (iv).
10.2 Commuting Self-Adjoint Operators
The differentialdifference operators of Chapter 4 for the group S
d
allow one
parameter, denoted by , and have the formula, for f
d
and 1 i d,
D
i
f (x) =
f (x)
x
i
+
d

j=1, j,=i
f (x) f (x(i, j))
x
i
x
j
,
often abbreviated as
i
+
j,=i
_
1(i, j)

/(x
i
x
j
).
Lemma 10.2.1 The following commutants hold for 1 i, j d:
(i) x
i
D
i
f (x) D
i
x
i
f (x) =f (x)

j,=i
(i, j) f (x);
(ii) x
j
D
i
f (x) D
i
x
j
f (x) = (i, j) f (x) for i ,= j.
Proof In Proposition 6.4.10 set u =
i
, t =
j
; then
x
j
D
i
f (x) =D
i
x
j
f (x)

j
,
i
_
f (x)

k,=i

j
,
i

k
_
(i, k) f (x).
Only the positive roots v =(
i

k
) appear in the sum (the inner product is the
usual one on R
d
).
Lemma 10.2.2 For m, n N
0
,
x
m
1
x
n
2
x
n
1
x
m
2
x
1
x
2
= sign(mn)
maxm,n1

i=minm,n
x
m+n1i
1
x
i
2
.
10.2 Commuting Self-Adjoint Operators 321
We will describe several inner products , ) on
d
with the following
properties; such inner products are called permissible.
1. if p P
d
n
and q P
d
m
then n ,= m implies that p, q) = 0;
2. for p, q
d
and w S
d
, wp, wq) =p, q);
3. for each i, the operator D
i
x
i
is self-adjoint, that is,
D
i
(x
i
p(x)), q(x)) =p(x), D
i
(x
i
q(x))).
One such inner product has already been dened, in Denition 7.2.2, namely
p, q) = p(D
1
, D
2
, . . . , D
d
)q(x)[
x=0
. Here D

i
= x
i
(a multiplication factor) and
so D
i
x
i
is self-adjoint for each i. These operators do not commute, but a sim-
ple modication provides sufcient commuting self-adjoint operators that the
decomposition of
d
into joint eigenfunctions provides an orthogonal basis.
Recall that for two linear operators A, B, operating on the same linear space,
the commutant is [A, B] = ABBA.
Lemma 10.2.3 For i ,= j, [D
i
x
i
, D
j
x
j
] = (D
i
x
i
D
j
x
j
)(i, j); moreover,
[D
i
x
i
, D
j
x
j
(i, j)] = 0.
Proof By part (ii) of Lemma 10.2.1, x
j
D
i
=D
i
x
j
+ (i, j). Since D
i
D
j
=D
j
D
i
,
we have
D
i
x
i
D
j
x
j
D
j
x
j
D
i
x
i
=D
i
[D
j
x
i
+ (i, j)] x
j
D
j
(D
i
x
j
+ (i, j))x
i
= (D
i
x
i
D
j
x
j
)(i, j).
Also, (D
i
x
i
D
j
x
j
)(i, j) = [D
i
x
i
(i, j) (i, j)D
i
x
i
].
Denition 10.2.4 For 1 i d dene the self-adjoint operators U
i
on
d
by
U
i
=D
i
x
i
+
i1

j=1
( j, i).
Note that each U
i
preserves the degree of homogeneity; that is, it maps P
d
n
to P
d
n
for each n. The transpositions are self-adjoint because (i, j)p, q) =
p, (i, j)
1
q) =p, (i, j)q) by the hypothesis on the inner product.
Theorem 10.2.5 For 1 i < j d, U
i
U
j
=U
j
U
i
.
Proof Write U
i
= D
i
x
i
+ A and U
j
= D
j
x
j
+ (i, j) B, where
A =

k<i
(k, i) and B =

k<j,k,=i
(k, j). Then [U
i
, U
j
] = [D
i
x
i
, D
j
x
j
(i, j)]
[D
i
x
i
, B] [A, D
j
x
j
] +
2
[A, B+(i, j)]. The rst term is zero by Lemma 10.2.3,
and the middle two are zero because the transpositions in A, B individually
322 Orthogonal Polynomials Associated with Symmetric Groups
commute with the other terms; for example D
i
x
i
(k, j) = (k, j)D
i
x
i
for k ,= i.
Further,
[A, B+(i, j)] =
i1

k=1
j1

s=1
[(k, i), (s, j)]
=
i1

k=1
([(k, i)(k, j) (i, j)(k, i)] [(k, j)(k, i) (k, i)(i, j)]).
Each bracketed term is zero since (k, i)(k, j) = (i, j)(k, i) in the sense of
multiplication in the group S
d
.
We consider the action of U
i
on a monomial x

. For any i, by Lemma 10.2.2


we have
D
i
x
i
x

= (
i
+1)x

+

A
x


B
x

where
A =
_
j,=i
+k(
j

i
) : (
i

j
), 0 k
i

j
,
B =
_
j,=i
+k(
i

j
) : (
i
<
j
1), 1 k
j

i
1.
The combined coefcient of x

is (
i
+1) +#j :
j

i
, j ,= i. Monomials
with coefcient are of the form x

, where
= (
1
, . . . ,
i
k, . . . ,
j
+k, . . .) for 1 k
i

j
;
in each case
+
~
+
, by part (iv) of Lemma 10.1.3, except for = (i, j)
(for k =
i

j
). Monomials with coefcient have the form x

, where =
(
1
, . . . ,
i
+k, . . . ,
j
k, . . .) for 1 k
j

i
1. Again by Lemma 10.1.3,

+
~
+
. When the remaining terms of U
i
are added, the effect is to remove the
terms ( j, i)x

with j < i and


j
<
i
, for which ( j, i) ~. Thus
U
i
x

= ((d #j :
j
>
i
# j : j < i and
j
=
i
) +
i
+1)x

+q
,i
,
where q
,i
is a sum of terms x

with . This shows that U


i
is repre-
sented by a triangular matrix on the monomial basis (triangularity with respect to
the partial order ). It remains to study the simultaneous eigenfunctions, called
nonsymmetric Jack polynomials, in detail.
10.3 The Dual Polynomial Basis
There is a basis of homogeneous polynomials which has relatively tractable
behavior under the actions of D
i
and U
i
and also provides an interesting inner
product. These polynomials are products of analogues of x
n
i
.
10.3 The Dual Polynomial Basis 323
Denition 10.3.1 For 1 i d dene the homogeneous polynomials p
n
(x
i
; x),
n N
0
, by logarithmic differentiation applied to the generating function:

n=0
p
n
(x
i
; x)y
n
= (1x
i
y)
1
d

j=1
(1x
j
y)

,
valid for [y[ < min
j
_
1/

x
j

_
.
We will show that D
i
p
n
(x
i
; x) = (d +n) p
n1
(x
i
; x) and D
j
p
n
(x
i
; x) = 0 for
j ,= i, using an adaptation of logarithmic differentiation. Let f
0
= (1x
i
y)
1
and
f
1
=
d
j=1
(1x
j
y)

(the latter is S
d
-invariant); then
D
i
( f
0
f
1
)
f
0
f
1

y
2
( f
0
f
1
)/y
f
0
f
1
=
(1+)y
1x
i
y
+

j,=i
1
x
i
x
j
_
1
1x
i
y
1x
j
y
_

y
2
x
i
1x
i
y

j=1
y
2
x
j
1x
j
y
= (1+)y +

j,=i
y y
2
x
j
1x
j
y
= (d +1)y.
In the generating function,

n=0
D
i
p
n
(x
i
; x)y
n
=

n=0
p
n
(x
i
; x)(n+d +1)y
n+1
,
as claimed. For j ,= i,
D
j
( f
0
f
1
)
f
0
f
1
=
y
1x
j
y
+
1
x
j
x
i
_
1
1x
i
y
1x
j
y
_
= 0.
These polynomials are multiplied together (over i) to form a basis.
Denition 10.3.2 For N
d
0
let p

(x) =

d
i=1
p

i
(x
i
; x); alternatively, set

N
d
0
p

(x)y

=
d

i=1
_
(1x
i
y
i
)
1
d

j=1
(1x
j
y
i
)

_
,
valid for max
i
[y
i
[ < min
i
(1/[x
i
[).
The polynomials p

(x) transform like monomials under the action of S


d
;
indeed, if w S
d
then wp

(x) = p

(xw) = p
w
(x); this can be proved using
the generating function.
Denote the generating function by F(x, y). The value of D
i
p

will be derived
from the equation
D
i
F
F
=
y
2
i
F
F
y
i
+(d +1)y
i
+

j,=i
y
i
y
j
_
F (y
j
, y
i
)F

(y
i
y
j
)F
,
324 Orthogonal Polynomials Associated with Symmetric Groups
where (y
i
, y
j
) denotes the interchange of the variables y
i
, y
j
in F. A typical term
in the difference part has the form
1
x
i
x
j
_
1
(1x
i
y
i
)(1x
j
y
j
)
(1x
i
y
j
)(1x
j
y
i
)
_
=
y
i
y
j
(1x
i
y
j
)(1x
j
y
i
)
.
Using similar methods as for p
n
(x
i
; x) we obtain
1
F
_
D
i
F y
2
i
F
y
i
_
=
y
i
y
2
i
x
i
1x
i
y
i
+
d

j=1
_
y
j
1x
i
y
j

y
2
i
x
j
1x
j
y
i
_
+

j,=i
y
i
y
j
(1x
i
y
j
)(1x
j
y
i
)
= (d +1)y
i
+

j,=i
_
y
i
y
j
(1x
i
y
j
)(1x
j
y
i
)
+
y
j
1x
i
y
j

y
i
1x
j
y
i
_
= (d +1)y
i
+

j,=i
y
i
y
j
(x
i
x
j
)
(1x
i
y
j
)(1x
j
y
i
)
;
the sum in the last line equals

j,=i
y
i
y
j
(F (y
j
, y
i
)F)
(y
i
y
j
)F
.
As before, if the superscript (x) refers to the variable being acted upon then
D
(x)
i
x
i
F(x, y) =D
(y)
i
y
i
F(x, y) for each i. Indeed,
1
F
D
(x)
i
x
i
F =1+
x
i
y
i
1x
i
y
i
+
d

j=1
x
i
y
j
1x
i
y
j
+

j,=i
1
x
i
x
j
_
x
i

x
j
(1x
i
y
i
)(1x
j
y
j
)
(1x
i
y
j
)(1x
j
y
i
)
_
=1+
(1+)x
i
y
i
1x
i
y
i
+

j,=i
1x
j
y
j
(1x
i
y
j
)(1x
j
y
i
)
,
which is symmetric in x, y (as is F, of course).
Proposition 10.3.3 For N
d
0
and 1 i d, D
i
p

= 0 if
i
= 0, else
D
i
p

= [ (1+#j :
j
<
i
) +
i
] p

i
+

A
p


B
p

,
where
A =
_
j,=i
+k
i
(k +1)
j
: max
_
0,
j

i
_
k
j
1,
B =
_
j,=i
(k +1)
i
+k
j
: max
_
1,
i

j
_
k
i
1.
10.3 The Dual Polynomial Basis 325
Proof We have that D
i
p

is the coefcient of y

in
_
y
2
i

y
i
+(d +1)y
i
_
F +

j,=i
y
i
y
j
(y
i
y
j
)
[F (y
j
, y
i
)F] .
The rst termcontributes (d +
i
) p

i
. For any j ,=i let N
d
0
satisfy
k
=
k
for k ,=i, j. Then p

appears in the sum with coefcient when there is a solution


to
i
s =
i
,
j
+1 +s =
j
, 0 s
i

j
1 (this follows from Lemma
10.2.2), that is, for 0
j
min
_

i
1,
j
1
_
and
i
+
j
=
i
+
j
1, and
p

appears in the sum with coefcient when there is a solution to the same
equations but with
i
and
j
interchanged. In the case
j

i
the values
i
=
i

1,
j
=
j
appear (so that =). These are deleted from the set B and combined
to give the coefcient of p

; the total is (d #j : j ,= i and


j

i
).
A raising operator for the polynomials p

is useful, as follows.
Denition 10.3.4 For N
d
0
and 1 i d let
i
p

= p
+
i
, and extend by
linearity to spanp

: N
d
0
.
Proposition 10.3.5 For 1 i d the operator D
i

i
satises
D
i

i
p

= [ (d #j :
j
>
i
) +
i
+1] p

+

A
p


B
p

,
where N
d
0
and
A =
_
j,=i
+k(
i

j
) : max
_
1,
j

i
_
k
j
,
B =
_
j,=i
+k(
j

i
) : max
_
1,
i

j
+1
_
k
i
.
Proof This is a consequence of Proposition 10.3.3 with replaced by +
i
.
The sets A, B used before are reformulated, setting
j
=
j
k in A and
i
=
i
k
in B.
Corollary 10.3.6 For N
d
0
and 1 i d,
D
i

i
p

= [ (d # j :
j
>
i
) +
i
+1] p

j
>
i
p
(i, j)
+q
,i
,
where q
,i
is a sum of terms of the form p

with [[ = [[ and
+
~
+
and

j
= 0 implies that
j
= 0.
Proof Suppose that , N
d
0
with [[ = [[ and
k
=
k
for all k ,= i, j (for any
i, j with i ,= j); then part (iv) of Lemma 10.1.3 implies that
+
~
+
if and only
if min
_

i
,
j
_
< min
_

i
,
j
_
max
_

i
,
j
_
< max
_

i
,
j
_
; of course the latter
326 Orthogonal Polynomials Associated with Symmetric Groups
inequality is redundant since
i
+
j
=
i
+
j
. Then any AB (the sets in
Proposition 10.3.5) satises the condition min
_

i
,
j
_
< min
_

i
,
j
_
for some
j; the exceptions = (i, j) occur in A when
i

j
1.
It remains to show that p

: N
d
0
is actually a basis for
d
for all but
a discrete set of negative values of ; further, D
i

i
= D
i
x
i
+, as one might
already suspect from the coefcient of p

in D
i

i
p

. The following is valid for


spanp


d
, which in fact is an apparent restriction, only.
Proposition 10.3.7 For 1 i d, D
i

i
=D
i
x
i
+.
Proof The generating function for p
n
(x
i
; x) is (1 x
i
y)
1
d

j=1
(1x
j
y)

. Let

d
j=1
(1x
j
y)

n=0

n
(x)y
n
, where
n
is a homogeneous S
d
-invariant
polynomial; thus
p
n+1
(x
i
; x) =
n+1

j=0
x
j
i

n+1j
(x) =
n+1
(x) +x
i
p
n
(x
i
; x).
Also,
n+1
(x)/x
i
=p
n
(x
i
; x) because

n=0

x
i

n
(x)y
n
=
y
1x
i
y
d

j=1
(1x
j
y)

.
Let N
d
0
and write p

(x) = p

i
(x
i
; x)g(x), where g(x) =
j,=i
p

j
(x
j
; x). Then
D
i

i
p

D
i
x
i
p

=D
i
[(p
1+
i
(x
i
; x) x
i
p

i
(x
i
; x)) g(x)]
=D
i
[
1+
i
(x)g(x)]
=
_

x
i

1+
i
(x)
_
g(x) +
1+
i
(x)D
i
g(x)
=p

i
(x
i
; x)g(x) =p

(x).
This uses the product rule for D
i
with one factor invariant and the fact that
D
i
g(x) = 0 (Proposition 10.3.3).
Proposition 10.3.8 For 1 i d and 0, the linear operator D
i

i
is one to
one on spanp

: N
d
0
.
Proof Because D
i

i
= (1, i)D
1

1
(1, i) it sufces to show that D
1

1
is one to
one. The effect of D
1

1
on p

: N
d
0
, [[ = n respects the partial order
because D
1

1
p

= [ (d # j :
j
>
1
) +
1
+1] p

j
>
1
p
(1, j)
+
q
,1
(Corollary 10.3.6), where each term p

in q
,1
satises
+
~
+
and
(1, j) ~ when
j
>
1
(part (i) of Lemma 10.1.3).
10.3 The Dual Polynomial Basis 327
To show that p

: N
d
0
is a basis we let
V
m
n
= spanp

: N
d
0
, [[ = n,
i
= 0 for i > m
and demonstrate that dimV
m
n
= dimP
m
n
, where P
m
n
are the homogeneous
polynomials in m variables, by a double induction argument for 1 m d.
Proposition 10.3.9 For n N
0
, 1 m d and 0,
dimV
m
n
=
_
m+n1
n
_
with V
m
n
=p P
d
n
: D
i
p = 0, for i > m.
Proof The boundary values are dimV
m
0
= 1 and
dimV
1
n
= dim[cp
n
(x
1
; x) : c R] = 1;
the latter holds because p
n
(1; (1, 0, . . .)) = ( +1)
n
/n! ,= 0. Suppose that
dimV
m
n
=
_
m+n1
n
_
for all pairs (m, n) with m+n = s and 1 md; this implies
that the associated p

are linearly independent. For m < d and m+n = s we


have
V
m+1
n
= spanp

: [[ = n,
i
= 0 for i > m+1,
m+1
1+V
m
n
=
m+1
V
m+1
n1
+V
m
n
.
This is a direct sum because D
m+1

m+1
is one to one on V
m+1
n1
and D
m+1
V
m
n
=
0, which shows that dimV
m+1
n
=dimV
m+1
n1
+dimV
m
n
=
_
m+n1
n1
_
+
_
m+n1
n
_
=
_
m+n
n
_
by the inductive hypothesis. Further, V
m
n
= p V
m+1
n
: D
m+1
p = 0;
together with V
1
n
V
2
n
V
d
n
this proves the remaining statement.
We now discuss the matrices of U
i
with respect to the basis
_
p

: N
d
0
_
;
their triangularity is opposite to that of the basis x

: N
d
0
. The eigenvalues
appear in so many calculations that they deserve a formal denition. There is an
associated rank function r.
Denition 10.3.10 For N
d
0
and 1 i d let
r

(i) = #
_
j :
j
>
i
_
+#
_
j : 1 j i,
j
=
i
_
,

i
() = (d r

(i) +1) +
i
+1.
For a partition N
d,P
0
, one has r

(i) = i and
i
() = (d i +1) +
i
+1.
Thus r

is a permutation of 1, . . . , N for any N


d
0
.
Proposition 10.3.11 For N
d
0
and 1 i d, U
i
p

=
i
(a) p

+q
,i
, where
q
,i
is a sum of p

with ; also, if
j
= 0 for each j m then
j
= 0 for
j m, with 2 m < d.
328 Orthogonal Polynomials Associated with Symmetric Groups
Proof By Corollary 10.3.6 and Proposition 10.3.7,
U
i
p

=
_
(d # j :
j
>
i
) +
i
+1

j
>
i
p
(i, j)

j<i
p
(i, j)
+q
,i
=
i
(a)p

p
(i, j)
: j > i;
j
>
i

p
(i, j)
: j < i;
j
<
i
+q
,i
,
where q
,i
is a sum of p

with
+
~
+
and [[ = [[. Also, (i, j) ~ in
the two cases j > i,
j
>
i
and j < i,
j
<
i
; that is, (i, j) .
Corollary 10.3.12 For N
d
0
and 2 m < d, if
j
= 0 for all j m then
U
i
p

= [ (d i +1) +1] p

=
i
()p

for all i m.
Proof Suppose that satises the hypothesis and i m; then, by Proposi-
tion 10.3.5,
D
i

i
p

= [ (d # j :
j
> 0) +1] p

j
>0
(i, j) p

,
and

j<i
(i, j) p

j
>0
(i, j) p

+ [#j : j < i;
i
= 0] p

. Thus U
i
p

=
[(d # j :
j
> 0# j : j < i;
j
= 0) +1]p

= [(d i +1) +1]p

.
The set of values
i
() determines
+
, provided that > 0: arrange the
values in decreasing order, so that if
i
>
j
then
i
() >
j
() and if
i
=
j
and i < j then
i
()
j
() = m for some m = 1, 2, . . . It is now easy to see
that the list
1
(), . . . ,
d
() determines and thus a uniquely dened set of
joint eigenvectors of U
i
. The following are the nonsymmetric Jack polynomials
normalized to be monic in the p-basis.
Theorem 10.3.13 For N
d
0
let

be the unique polynomial of the form

= p

B(, )p

:
which satises U
i

=
i
()

for 1 i d. The coefcients B(, ) are in


Q() and do not depend on the number d of variables provided that
j
= 0 for
all j >mfor some mand d m. For any permissible inner product, ,= implies
that

_
= 0.
Proof The existence of the polynomials in the stated form follows from the self-
adjointness of the operators U
i
and their triangular matrix action on the p-basis
ordered by . Suppose that
j
= 0 for all j > m, with some xed m. Because
each U
i
with i m has spanp

:
j
= 0 for all j > m as an invariant subspace,
this space contains the joint eigenvectors of U
i
: 1 i m. In the action of
U
i
the value of d appears only on the diagonal, so that U
i
(d)1 has the same
10.4 S
d
-Invariant Subspaces 329
eigenvectors as U
i
and the eigenvectors have coefcients that are rational in Q()
with respect to the p-basis. By Corollary 10.3.12 any polynomial in spanp

j
= 0 for all j > m is an eigenfunction of U
i
with eigenvalue (d i +1) +1
when i > m (as was shown, this agrees with
i
() when
j
= 0 for all j > m).
If , N
d
0
and ,= then
i
() ,=
i
() for some i; thus

_
= 0 for
any inner product for which U
i
is self-adjoint.
10.4 S
d
-Invariant Subspaces
Using only the properties of permissible inner products (that they are S
d
-invariant
and that each D
i

i
or D
i
x
i
is self-adjoint), many useful relations can be estab-
lished on the subspaces span

:
+
= for any partition N
d,P
0
. We will
show that

) is a multiple of

) which is independent of the choice


of inner product. Of course,

_
= 0 if ,= because
i
() ,=
i
() for
some i. It is convenient to have a basis which transforms like x

or p

(where
(xw)

= x
w
for w S
d
), and this is provided by the orbit of

. In fact the
action of S
d
on the eigenfunctions

is not conveniently expressible, but adja-


cent transpositions (i, i +1) not only have elegant formulations but are crucial in
the development.
Lemma 10.4.1 For 1 i d the commutant [U
i
, ( j, j +1)] = 0 for j > i or
j < i 1 and (i, i +1)U
i
=U
i+1
(i, i +1) +.
Proof First, [D
i

i
, ( j, j +1)] = 0 if j < i 1 or j > i. Also, ( j, k)(m, j)( j, k) =
(m, k) provided that j, k, m are all distinct. This shows that [
k<i
(k, i), ( j, j +1)] =
0 under the same conditions on j, when j < i 1 the key fact is ( j, j +1)( j, i) +
( j + 1, i) = ( j + 1, i) + ( j, i)( j, j + 1). For i = j, we have (i, i +1)U
i
=
D
i+1

i+1
(i, i + 1)

m<i
(m, i + 1)(i, i + 1) = U
i+1
(i, i + 1) + (i, i + 1)
(i, i +1).
Proposition 10.4.2 For N
d
0
, if
i
=
i+1
for some i < d then (i, i +1)

; if N
d,P
0
and w = then w

with w S
d
.
Proof Let g = (i, i +1)

. By the lemma, U
j
g =
j
()g when j < i
or j > i + 1, while U
i+1
g =
i
()(i, i +1)

(
i+1
() +)

=
i
()g
and similarly U
i
g =
i+1
()(i, i + 1)

(
i
() )

=
i+1
()g. Conse-
quently, either g = 0 or it is a joint eigenfunction of the U
j
with eigenvalues
(
1
(), . . . ,
i+1
(),
i
(), . . .). This is impossible by the denition of
j
.
For a partition , the permutations which x are generated by adjacent
transpositions (for example, suppose that
j1
>
j
= =
k
>
k+1
for some
0 < j < k < d; then (i, i +1) = for j i < k). Thus w = implies
w

.
330 Orthogonal Polynomials Associated with Symmetric Groups
With this, one can dene a monomial-type basis for span

:
+
= for
each N
d,P
0
(the term monomial refers to the actions wx

=x
w
or wp

= p
w
for w S
d
and N
d
0
).
Denition 10.4.3 For N
d,P
0
let

and for N
d
0
with
+
= let

= w

, where w =.
Proposition 10.4.2 shows that

is well dened, for if w


1
= = w
2

then w
1
2
w
1
= and w
2

= w
2
_
w
1
2
w
1

_
= w
1

. We now collect some


properties of

.
Proposition 10.4.4 For N
d
0
and w S
d
the following hold:
(i) w

=
w
;
(ii)

= p

+B(, ) p

:
+
~
+
,
i
= 0 implies
i
= 0, where the
coefcients B(, ) Q() and are independent of d;
(iii) D
i

= [ (d # j :
j
>
i
) +
i
+1]

j
>
i

(i, j)
for 1
i d.
Proof Let =
+
. Part (i) follows from the basic property that w = implies
w

. Theorem 10.3.13 shows that

satises part (ii) because is maxi-


mal for the dominance ordering _ on :
+
= , that is, _ and
+
=
implies that =. The equation in (ii) transforms under w S
d
with w = to
the claimed expression.
The fact that

is an eigenfunction of U
i
implies that
D
i

= [ (d i +1) +
i
+1]

j<i
(i, j)

= [ (d # j :
j
>
i
) +
i
+1]

j
>
i

(i, j)
,
where the terms in the sum in the rst line which correspond to values of j sat-
isfying j < i and
j
=
i
have been added into the rst part of the second line.
Again apply w to this formula with w = (the index i is changed also, but this
does not matter because i is arbitrary).
A few more details will establish that span

:
+
= = span

:
+
=
. The important thing is to give an explicit method for calculating

from

,
for
+
=. Adjacent transpositions are the key.
Proposition 10.4.5 Suppose that N
d
0
and
i
>
i+1
; then, for c =[
i
()

i+1
()]
1
and = (i, i +1),

= (1c
2
)

and span

is invariant under .
10.4 S
d
-Invariant Subspaces 331
Proof Note that
i
() =
i+1
(),
i+1
() =
i
() and
i
()
i+1
()

i+1
+; thus 0 <c <1. Let g =

; then U
j
g =
j
()g =
j
()g
for j < i or j > i +1 (by Lemma 10.4.1). Further, U
i
g =
i+1
()g and U
i+1
g =

i
()g, because U
i
= U
i+1
+. This shows that g is a scalar multiple of

, having the same eigenvalues. The coefcient of p

in g is 1 (coming from

because p

does not appear in the expansion of

since ~).
Thus

= c

. To nd

, note that

=
_
1c
2
_

.
Now, for N
d,P
0
let
E

= span

:
+
=.
Since E

is closed under the action of S


d
, the proposition shows that span

+
= = E

. The set

forms an orthogonal basis and E

is invariant
under each D
i

i
, 1 i d. The next task is to determine the structural constants

) and

_
1
d
_
in terms of

) and

_
1
d
_
, respectively.
Proposition 10.4.6 Suppose that N
d
0
and
i
>
i+1
, = (i, i +1), and c =
[
i
()
i+1
()]
1
. Then:
(i)

) =
_
1c
2
_

);
(ii)

_
1
d
_
= (1c)

_
1
d
_
;
(iii) f
0
= (1+c)

satises f
0
= f
0
;
(iv) f
1
= (1c)

satises f
1
=f
1
.
Proof Because

) = 0 and is an isometry we have

) =

) = c
2

) +

);
thus

) =
_
1c
2
_

). The map of evaluation at 1


d
is invariant
under ; thus

_
1
d
_
=

_
1
d
_
c

_
1
d
_
= (1c)

_
1
d
_
. Parts (iii) and
(iv) are trivial calculations.
Aiming to use induction on adjacent transpositions, we introduce a function
which measures (heuristically speaking) how much is out of order.
Denition 10.4.7 Functions on N
d
0
for 1 i, j d and = are given by:
(i)

i, j
() = 1+

j
()
i
()
if i < j and
i
<
j
, else

i, j
() = 1,
(ii) E

() =
i<j

i, j
() (E

() = 1 for N
d,P
0
).
Theorem 10.4.8 For N
d
0
and =
+
,
(i)

) =E
+
()E

()

),
(ii)

_
1
d
_
=E

()

_
1
d
_
.
332 Orthogonal Polynomials Associated with Symmetric Groups
Proof First, suppose that
+
=,
i
>
i+1
and = (i, i +1); then
E

()/E

() = 1+[
i
()
i+1
()]
1
.
Indeed,

k, j
() =

k, j
() for all values of j, k ,= i, i + 1. Also,

i, j
() =

i+1, j
() and

i+1, j
() =

i, j
() for j > i + 1;

j,i
() =

j,i+1
() and

j,i+1
() =

j,i
() for j < i. The claim is now proven from the special values

i,i+1
() = 1 and

i,i+1
() = 1 +[
i
()
i+1
()]
1
(recall that
i
() =

i+1
() and
i+1
() =
i
()).
By part (i) of Proposition 10.4.6,

_ = 1c
2
=
E
+
()E

()
E
+
()E

()
(recall that c =[
i
()
i+1
()]
1
. Similarly,

_
1
d
_

(1
d
)
= 1c =
E

()
E

()
.
Since is obtained from by a sequence of adjacent transpositions satisfying
this hypothesis, the proof is complete.
The last results of this section concern the unique symmetric and skew-
symmetric polynomials in E

(which exist if
1
> >
d
). Recall that
R
=
(
d
,
d1
, . . . ,
1
); the set of values
i
_

R
_
is the same as
i
() but not
necessarily in reverse order. For example, suppose that
j1
>
j
= =
k
>

k+1
for some 0 < j < k < d; then the list
i
() : i = j, . . . , k agrees with

dki+1
_

R
_
: i = j, . . . , k (in that order). Thus
E

(
R
) =

_
1+

i
()
j
()
:
i
>
j
_
.
Denition 10.4.9 For N
d,P
0
let
j

=E
+
_

R
_

+
=
1
E
+
()

and, if
1
> >
d
, let
a

=E

R
_

wS
d
sign(w)
E

(w)

w
.
Recall that sign(w) = (1)
m
, where w can be expressed as a product of m
transpositions. For N
d,P
0
we use the notation #S
d
() = # N
d
0
:
+
=
for the cardinality of the S
d
-orbit of .
Theorem10.4.10 For N
d,P
0
, the polynomial j

has the following properties:


(i) j

+
=

;
(ii) wj

= j

for each w S
d
;
10.4 S
d
-Invariant Subspaces 333
(iii) j

_
1
d
_
= #S
d
()

_
1
d
_
;
(iv) j

, j

) = #S
d
()E
+
_

R
_

).
Proof It sufces to show that (i, i +1) j

= j

for any i < d. Indeed, with =


(i, i +1),
j

=E
+
(
R
)
_

E
+
()
:
+
=,
i
=
i+1
_
+

E
+
()
+

E
+
()
:
+
=,
i
>
i+1
__
.
Each term in the rst sum is invariant under . By part (iii) of Proposition 10.4.6,
each term in the second sum is invariant because
E
+
()/E
+
() = 1+[
i
()
i+1
()]
1
.
Thus j

is invariant under S
d
and is a scalar multiple of

+
=

. Since
R
is
minimal for _in :
+
= and, by the triangularity for

in the p-basis, the


coefcient of p

R in j

is 1, part (i) holds. Part (iii) is an immediate consequence


of (i).
By the group invariance of the inner product,
j

, j

) =

+
=
j

) = # :
+
= j

)
= #S
d
()E
+
_

R
_

).
Note that

and E
+
() = 1.
Theorem 10.4.11 For N
d,P
0
with
1
> >
d
, the polynomial a

has the
following properties:
(i) a

=
wS
d
sign(w)
w
;
(ii) wa

= sign(w)a

for each w S
d
;
(iii) a

, a

) = d!E

R
_

).
Proof The proof is similar to the previous one. To show part (ii) it sufces to
show that a

=a

for = (i, i +1) and for each i < d. As before, each term
in the sum is skew under :
a

=E

(
R
)

_
sign(w)
_

w
E

(w)


w
E

(w)
_
: w S
d
, (w)
i
> (w)
i+1
_
.
Both a

and
wS
d
sign(w)
w
have the skew property and coefcient sign(w)
for p

R, where w =
R
. Thus part (i) holds. Finally,
a

, a

) =

w
1
S
d

w
2
S
d
sign(w
1
)sign(w
2
)

w
1

,
w
2

_
= d!

wS
d
sign(w)

,
w
) = d!

, a

)
= d!E

R
_

),
334 Orthogonal Polynomials Associated with Symmetric Groups
and this proves part (iii) (in the rst line we change the variable of summation,
letting w
2
= w
1
w).
The polynomial j

is a scalar multiple of the Jack polynomial J

(x; 1/). The


actual multiplying factor will be discussed later.
10.5 Degree-Changing Recurrences
In this section the emphasis will be on the transition from

m
to

, where
N
d,P
0
and
m
>
m+1
=0 for some md. By use of the dening properties we will
determine the ratios

)/

m
,

m
_
for each of three permissible inner
products. This leads to norm and 1
d
evaluation formulae in terms of hook-length
products. A certain cyclic shift is a key tool.
Denition 10.5.1 For 1 < m d, let
m
= (1, 2)(2, 3) (m1, m) S
d
, so
that
m
= (
m
,
1
,
2
, . . . ,
m1
,
m+1
, . . .) for N
d,P
0
.
Suppose that N
d,P
0
satises m = (), and let

= (
m
1,
1
,
2
, . . . ,
m1
, 0, . . .) =
m
(
m
).
We will showthat D
m

= [(d m+1) +
m
]
1
m

, which leads to the desired


recurrences. The idea is to identify D
m

as a joint eigenfunction of the operators

1
m
U
i

m
and match up the coefcients of the p

.
Lemma 10.5.2 Suppose that N
d,P
0
and m = (); then the following hold:
(i) x
m
D
m

= [ (d m) +
m
]


j>m
(m, j)

;
(ii) x
m
D
m

_
1
d
_
=
m

_
1
d
_
;
(iii) x
m
D
m

) =

m
+
m
[ (d m+1) +
m
]

);
(iv) x
m
D
m

, x
m
D
m

) =

m
+
m
[ (d m+1) +
m
]
[ (d m) +
m
]

).
Proof By part (i) of Lemma 10.2.1,
x
m
D
m

=
_
D
m
x
m
1

j,=m
(m, j)
_

= (U
m
1)


j>m
(m, j)

= [
m
() 1]


j>m
(m, j)

= [(d m) +
m
]


j>m
(m, j)

.
10.5 Degree-Changing Recurrences 335
This proves parts (i) and (ii). Next, observe that

, (m, j)

) =

, (m, m+
1)

) for j > m; by Proposition 10.4.5,

+ c

where =
(m, m+1) and c = [
m
()
m+1
()]
1
= ( +
m
)
1
(recall that
i
() =
(d i +1) +
i
+1 for each N
d,P
0
). Thus
x
m
D
m

) = [ (d m) +
m
]

) (d m)

, (m, j)

)
=
_
[ (d m) +
m
]

2
(d m)
+
m
_

)
=

m
+
m
[ (d m+1) +
m
]

).
For part (iv), let a = (d m) +
m
; the group invariance properties of the inner
product are now used repeatedly:
x
m
D
m

, x
m
D
m

)
= a
2

) 2a

j>m

, (m, j)

)
+
2
_

j>m
(m, j)

, (m, j)

) +2

m<j<k
(m, j)

, (m, k)

)
_
=
_
a
2
2ac (d m) +
2
(d m)(1+c(d m1))
_

)
=

m
+
m
[ (d m) +
m
] [ (d m+1) +
m
]

).
Next, we consider the behavior of D
m

under D
i

i
, with the aim of identify-
ing it as a transform (
1
m
) of a joint eigenfunction. The cyclic shift
m
satises

1
m
(1) = m and
1
m
(i) = i 1 for 2 i m. Recall the commutation relations
for D
i

i
, (i, j) and w S
d
:
1. wD
i

i
=D
w(i)

w(i)
w;
2. w
_
w
1
(i), w
1
( j)
_
= (i, j)w.
Lemma 10.5.3 Suppose that N
d,P
0
and m = (); then
(i) D
i

i
D
m

=
_

i
() +[

j<i
( j, i) +(i, m)]
_
D
m

for i < m;
(ii) D
m

m
D
m

= [
m
() 1] D
m

;
(iii) U
i

m
D
m

= [ (d i +1) +1]
m
D
m

for i > m;
(iv) U
i

m
D
m

=
i1
()
m
D
m

for 1 < i m;
(v) U
1

m
D
m

= [
m
() 1]
m
D
m

.
Proof By Proposition 10.3.7 and Lemma 10.2.1, D
i

i
D
m
= D
m
+D
i
[D
m
x
i
+
(i, m)] for i < m; thus
D
i

i
D
m

=D
m
D
i

+ (i, m)D
m

=D
m
_

i
() +

j<i
( j, i)
_

+ (i, m)D
m

,
336 Orthogonal Polynomials Associated with Symmetric Groups
which proves part (i), since D
i
(i, m) = (i, m)D
m
and D
m
( j, i) = ( j, i)D
m
for j <
i < m. Next, D
m

m
D
m
=D
m
+D
m
[D
m
x
m
1
j,=m
( j, m)], so that
D
m

m
D
m

=D
m
_
U
m
1

j>m
( j, m)
_

= [
m
() 1] D
m


j>m
( j, m)D
j

.
However, D
j

= 0 for j > m because the expansion of

in the p-basis (see


Denition 10.3.2 and Proposition 10.3.9) contains no p

with
j
>0. Also, D
m

has the latter property, which shows that U


i
D
m

= [ (d i +1) +1]D
m

for
each i > m (Corollary 10.3.12). Thus part (iii) follows from
m
U
i
= U
i

m
for
i > m.
Apply
m
to both sides of the equation in (i):
_
D
i

j<i
( j, i) +(i, m)
__
D
m

=
i
()D
m

,
and obtain equation (iv) with index i + 1 (so that 2 i + 1 m). A similar
operation on equation (ii) proves equation (v).
Consider the multi-index

= (
m
1,
1
,
2
, . . . ,
m1
, 0, . . .); clearly the
eigenvalues are given by
1
(

) = [(d m) +
m
] =
m
() 1 and
i
(

) =

i1
() for 2 i m, because
m
1 <
m1
. The remaining eigenvalues are
trivially
i
(

) =
i
() = (d i +1) +1 for i > m. Thus
m
D
m

is a multiple
of

; the value of the multiple can be determined from the coefcient of p

in
m
D
m

, that is, the coefcient of p

m
in D
m

. As usual, this calculation


depends on the dominance ordering.
Lemma 10.5.4 Suppose that N
d,P
0
, m = () and p

m
appears with a
nonzero coefcient in D
m
p

; then one of the following holds:


(i) =, with coefcient (d m+1) +
m
;
(ii) = +k(
j

m
), where j > m, 1 k <
m
, with coefcient ;
(iii) = +k(
m

j
), where j < m, 1 k
j

m
, with coefcient .
Proof By Proposition 10.3.3, when = the coefcient of p

i
in D
m
p

is
(1+# j :
j
<
m
) +
m
= (d m+1) +
m
. The two other possibilities
correspond to sets A, B with differing from in two entries. In the case of set
A, for some j ,= m we have
j
=
j
+k,
m
=
m
k and
j

m
k 1. Since

m
is the smallest nonzero part of this implies that j >m and
j
=0. In the case
of set B, for some j ,= m we have
j
=
j
k,
m
=
m
+k and
m

j
k <
j
(and
j
>
m
implies j < m).
10.6 Norm Formulae 337
Theorem 10.5.5 Suppose that N
d,P
0
and m = (); then
D
m

= [ (d m+1) +
m
]
1
m

.
Proof In Lemma 10.5.3 it was established that D
m

is a scalar multiple of

1
m

. The multiple equals the coefcient of p

m
in D
m

(because the coef-


cient of p

in

is 1). But p

m
only arises from D
m
p

in the expansion of
D
m

, by Lemma 10.5.4: case (i) is the desired coefcient; case (ii) cannot occur
because if p

appears in the expansion of

then
j
= 0 for all j > m; case (iii)
cannot occur because ~ +k(
m

j
) for j < m, 1 k
j

m
.
Because
m
is an isometry we can now assert that
D
m

, D
m

) = [ (d m+1) +
m
]
2
E
+
(

)E

m
,

m
_
and
D
m

(1
d
) = [ (d m+1) +
m
] E

m
(1
d
).
10.6 Norm Formulae
10.6.1 Hook-length products and the pairing norm
The quantities (functions on N
d,P
0
) which have the required recurrence behavior
(see Lemma 10.5.2 and Theorem 10.5.5) are of two kinds: the easy one is the
generalized Pochhammer symbol, and the harder one comes from hook length
products.
Denition 10.6.1 For a partition N
d,P
0
and implicit parameter , the
generalized Pochhammer symbol is dened by
(t)

=
d

i=1
(t (i 1))

i
.
For example, in the situation described above,
(d +1)

(d +1)

m
=
( (d m+1) +1)

m
( (d m+1) +1)

m
1
= (d m+1) +
m
.
Next, we calculate E

): indeed,
E

) =
m

j=2
_
1+

j
(

)
1
(

)
_
=
m

j=2
_
1+

j1
() [
m
() 1]
_
=
m1

j=1
_
(m j +1) +
j

m
+1
(m j) +
j

m
+1
_
.
338 Orthogonal Polynomials Associated with Symmetric Groups
Recall that
j
() = (d j +1)+
j
+1. It turns out that the appropriate device
to handle this factor is the hook-length product of a tableau or of a composition.
Here we note that a tableau is a function whose domain is a Ferrers diagram and
whose values are numbers or algebraic expressions. The Ferrers diagram of
N
d
0
is the set (i, j) : 1 i d, 1 j
i
: for each node (i, j) with 1 j
i
there are two special subsets, the arm
(i, l) : j < l
i

and the leg


(l, j) : l > i, j
l

i
(l, j) : l < i, j
l
+1
i

(when j =
l
+1 the point is outside the diagram). The node itself, the arm
and the leg make up the hook. The denition of hooks for compositions is from
[160, p. 15]. The cardinality of the leg is called the leg length, formalized by the
following:
Denition 10.6.2 For N
d
0
, 1 i d and 1 j
i
the leg length is
L(; i, j) := #l : l > i, j
l

i
+#l : l < i, j
l
+1
i
.
For t Q() the hook length and the hook-length product for are given by
h(, t; i, j) =
i
j +t +L(; i, j)
h(, t) =
()

i=1

j=1
h(, t; i, j).
For the special case = 1 and N
d,P
0
, this goes back to the beginnings of
the representation theory for the symmetric group S
d
(due to Schur and to Young
in the early twentieth century); for arbitrary parameter values the concept is due
to Stanley (1989), with a more recent modication by Knop and Sahi (1997).
Here is an example. Below, we give the tableau (Ferrers diagram) for =
(5, 3, 2, 2) with the factors of the hook-length product h(, t) entered in each cell:
4+t +3 3+t +3 2+t + 1+t t
2+t +2 1+t +2 t
1+t + t +
1+t t
Compare this with the analogous diagram for = (2, 3, 2, 5):
1+t + t +
2+t +2 1+t +2 1+t
1+t t
4+t +3 3+t +3 2+t +3 1+t + t
There is an important relation between h(, t), for the values t = 1, +1, and
E

(). We will use the relation


i
()
j
() = [r

( j) r

(i)] +
i

j
:
10.6 Norm Formulae 339
Lemma 10.6.3 For N
d
0
, we have
h(, +1) = h
_

+
, +1
_
E
+
()
and
h(, 1) =
h(
+
, 1)
E

()
.
Proof We will use induction on adjacent transpositions. The statements are true
for =
+
. Fix
+
and suppose that
i
>
i+1
for some i. Let = (i, i +1).
Consider the ratio h(, t)/h(, t). The only node whose hook length changes
(in the sense of an interchange of rows i and i + 1 of the Ferrers diagram)
is (i,
i+1
+1). Explicitly, h(, t; s, j) = h(, t; s, j) for s ,= i, i +1 and 1
j
s
, h(, t; i, j) = h(, t; i +1, j) for 1 j
i+1
and h(, t; i +1, j)
= h(, t; i, j) for 1 j
i
except for j =
i+1
+1. Thus h(, t)/h(, t) =
h(, t; i +1,
i+1
+1)/h(, t; i,
i+1
+1). Note that L(; i +1,
i+1
+1)
= L(; i,
i+1
+1) +1 (since the node (i,
i+1
+1) is adjoined to the leg). Let
E
1
=s : s i,
s

i
s : s > i,
s
>
i
,
E
2
=s : s i +1,
s

i+1
s : s > i +1,
s
>
i+1
;
thus, by denition, r

(i) = #E
1
and r

(i +1) = #E
2
. Now, E
1
E
2
so that
r

(i +1) r

(i) = #(E
2
E
1
) and
E
2
E
1
=s : s < i,
i
>
s

i+1
is : s > i +1,
i

s
>
i+1
.
This shows that #(E
2
E
1
) = 1+L(; i,
i+1
+1) and
h(, t; i,
i+1
+1) = [r

(i +1) r

(i) 1] +t +
i

i+1
1,
h(, t; i +1,
i+1
+1) = [r

(i +1) r

(i)] +t +
i

i+1
1.
Thus
h(, +1; i +1,
i+1
+1)
h(, +1; i,
i+1
+1)
=
[r

(i +1) r

(i) +1] +
i

i+1
(r

(i +1) r

(i)) +
i

i+1
= 1+

[r

(i +1) r

(i)] +
i

i+1
=
E
+
()
E
+
()
(the last equation is proven in Theorem 10.4.8) and
h(, 1; i +1,
i+1
+1)
h(, 1; i,
i+1
+1)
=
[r

(i +1) r

(i)] +
i

i+1
[r

(i +1) r

(i) 1] +
i

i+1
=
_
1

[r

(i +1) r

(i)] +
i

i+1
_
1
=
E

()
E

()
.
340 Orthogonal Polynomials Associated with Symmetric Groups
Thus h(, +1) and h(
+
, +1)E
+
() have the same transformation prop-
erties under adjacent transpositions and hence are equal. Similarly, h(, 1) =
h(
+
, 1)/E

().
Next we consider the ratio h(, t)/h(
m
, t). Each cell above (m,
m
)
changes (the leg length decreases by 1) and each entry in row m changes. Thus
h(, t)
h(
m
, t)
=
m1

i=1

m
+t + (mi)

m
+t + (mi 1)
(
m
1+t).
The following relates this to E

).
Lemma 10.6.4 Suppose that N
d,P
0
and

= (
m
1,
1
, . . . ,
m1
, 0, . . .)
with m = (); then
( +
m
)E
+
(

) =
h(, +1)
h(
m
, +1)
and
1

m
E

) =
h(
m
, 1)
h(, 1)
.
Proof For the rst equation,
E
+
(

) =
m1

i=1
_
(mi +1) +
i

m
+1
(mi) +
i

m
+1
_
=
h(, +1)
h(
m
, +1)( +
m
)
by use of the above formula for h(, t)/h(
m
, t) with t = +1. For the second
equation,
E

) =
m1

i=1
_
(mi 1) +
i

m
+1
(mi) +
i

m
+1
_
=
m
h(
m
, 1)
h(, 1)
by the same formula with t = 1.
The easiest consequence is the formula for

(1
d
); the proof of Lemma 10.6.4
illustrates the ideas that will be used for the norm calculations.
Proposition 10.6.5 For N
d
0
and =
+
N
d,P
0
,

(1
d
) =E

()
(d +1)

h(, 1)
=
(d +1)

h(, 1)
.
Proof From Theorem 10.4.8 we have

(1
d
) = E

()

_
1
d
_
. Suppose that
m = (); then, by part (ii) of Lemma 10.5.2 and Theorem 10.5.5,

_
1
d
_
=

1
m
D
m

_
1
d
_
=
1
m
[ (d m+1) +
m
] E

m
(1
d
); thus

_
1
d
_

m
(1
d
)
= [ (d m+1) +
m
]
h(
m
, 1)
h(, 1)
.
10.6 Norm Formulae 341
Clearly
0
_
1
d
_
= 1 (the trivial 0 = (0, . . . , 0) N
d
0
) and
(d +1)

(d +1)

m
= (d m+1) +
m
.
An obvious inductive argument nishes the proof.
We now specialize to the permissible inner product f , g)
h
= f (D)g(x)[
x=0
.
In this case D
m
is the adjoint of multiplication by x
m
, and we use part (iii) of
Lemma 10.5.2.
Theorem 10.6.6 For N
d
0
and =
+
N
d,P
0
,

)
h
=E
+
()E

()(d +1)

h(, +1)
h(, 1)
.
Proof From Theorem 10.4.8,

)
h
=E
+
()E

()

)
h
. From the def-
inition of the inner product, x
m
D
m

)
h
= D
m

, D
m

)
h
. Suppose that
m = (); then, again by part (iii) of Lemma 10.5.2 and Theorem 10.5.5,

)
h
=
+
m

m
[ (d m+1) +
m
]
D
m

, D
m

)
h
=
+
m

m
[ (d m+1) +
m
]
_

_
h
=
+
m

m
[ (d m+1) +
m
] E
+
(

)E

m
,

m
_
h
.
Thus

)
h

m
,

m
_
h
=
(d +1)

h(, +1)h(
m
, 1)
(d +1)

m
h(
m
, +1)h(, 1)
.
Induction nishes the proof.
Corollary 10.6.7 For N
d
0
,

)
h
= (d +1)

+
h(, +1)
h(, 1)
.
10.6.2 The biorthogonal-type norm
Recall the generating function
F (x, y) =

N
d
0
p

(x)y

=
d

i=1
(1x
i
y
i
)
1
d

j,k=1
(1x
j
y
k
)

.
Because F is symmetric in x, y, the matrix A given by

N
d
0
p

(x)y

n=0

[[=[[=n
A

,
342 Orthogonal Polynomials Associated with Symmetric Groups
expressing p

(x) in terms of x

, is symmetric (consider the matrix A, which is


locally nite, as a direct sum of ordinary matrices, one for each degree n =[[ =
[[ N). The matrix is invertible and thus we can dene a symmetric bilinear
formon
d
(in the following sumthe coefcients have only nitely many nonzero
values):
_

N
d
0
a

,

N
d
0
b

_
p
=

,
a

_
A
1
_

,
that is,
_
p

, x

_
p
=
,
.
From the transformation properties of p

it follows that wf , wg)


p
= f , g)
p
for f , g
d
and w S
d
. We showed that D
(x)
i
x
i
F(x, y) = D
(y)
i
y
i
F(x, y), which
implies that D
(x)
i
x
i
is self-adjoint with respect to the form dened above (1 i
d). To see this, let D
(x)
i
x
i
x

; then BA = AB
T
(T denotes the trans-
pose). Thus , )
p
is a permissible inner product. To compute

)
p
we use
the fact that x
i
f ,
i
g)
p
= f , g)
p
for f , g
d
, 1 i d.
Lemma 10.6.8 Suppose that N
d,P
0
and m = (); then

m
D
m

= [ (d m+1) +
m
]

+ f
0
,
where D
m
f
0
= 0 and w

, f
0
) = 0 for any w S
d
.
Proof Let f
0
=
m
D
m

[ (d m+1) +
m
]

; by part (ii) of Lemma 10.5.3,


D
m
f
0
=0. Further, the basis elements p

which appear in the expansions of D


m

and

satisfy
j
=0 for all j >m. By Proposition 10.3.9, if p

appears in f
0
then

j
= 0 for all j > m. The expansion of f
0
in the orthogonal basis

involves
only compositions with the same property,
j
=0 for all j >m. Thus f
0
, g) =0
for any g E

(note that
m
> 0). Since E

is closed under the action of S


d
, this
shows that f
0
, w

) = 0 for all w S
d
.
The statement in Lemma 10.6.8 applies to any permissible inner product.
Theorem 10.6.9 For N
d
,

)
p
=
h(, +1)
h(, 1)
, =
+
N
d,P
0
.
Proof Let =
+
N
d,P
0
. From Theorem 10.4.8 we have

)
p
=E
+
()E

()

)
p
.
10.6 Norm Formulae 343
From the denition of the inner product,
x
m
D
m

,
m
D
m

)
p
=D
m

, D
m

)
p
.
Suppose that m = (); then, by parts (i) and (iii) of Lemma 10.5.2 and Lemma
10.6.8,
D
m

, D
m

)
p
=x
m
D
m

, [ (d m+1) +
m
]

+ f
m
)
p
= [ (d m+1) +
m
] x
m
D
m

)
p
+[(d m) +
m
]

, f
m
)
p


j>m
(m, j)

, f
m
)
p
=

m
+
m
[ (d m+1) +
m
]
2

)
p
.
But, as before (by Theorem 10.5.5),
D
m

, D
m

)
p
= [ (d m+1) +
m
]
2
_

_
p
= [ (d m+1) +
m
]
2
E
+
(

)E

m
,

m
)
p
and so

)
p

m
,

m
_
p
=
+
m

m
E
+
(

)E

) =
h(, +1)h(
m
, 1)
h(
m
, +1)h(, 1)
.
As before, induction on n =[[ nishes the proof (1, 1)
p
= 1).
Considering the generating function in Denition 10.3.2 as a reproducing
kernel for , )
p
establishes the following.
Corollary 10.6.10 For > 0, max
i
[x
i
[ < 1, max
i
[y
i
[ < 1, the following expan-
sion holds:
d

i=1
(1x
i
y
i
)
1
d

j,k=1
(1x
j
y
k
)

=

N
d
0
h(, 1)
h(, +1)

(x)

(y).
Later we will consider the effect of this corollary on the determination of the
intertwining operator.
10.6.3 The torus inner product
By considering the variables x
i
as coordinates on the complex d-torus, one can
dene an L
2
structure which provides a permissible inner product. Algebraically
this leads to constant-term formulae: the problem is to determine the constant
term in a Laurent polynomial (in x
1
, x
1
1
, x
2
, x
1
2
, . . .) when N.
344 Orthogonal Polynomials Associated with Symmetric Groups
Denition 10.6.11 The d-torus is given by
T
d
=x : x
j
= exp(i
j
); <
j
; 1 j d.
The standard measure m on T
d
is
dm(x) = (2)
d
d
1
d
d
and the conjugation operator on polynomials (with real coefcients) is
g

(x) = g
_
x
1
1
, x
1
2
, . . . , x
1
d
_
, g
d
.
The weight function is a power of
k(x) =

1i<jd
(x
i
x
j
)(x
1
i
x
1
j
);
thus k(x) 0 since (x
i
x
j
)(x
1
i
x
1
j
) =
_
2 sin
1
2
(
i

j
)

2
for x T
d
. For now
we do not give the explicit value of the normalizing constant c

, where c
1

=
_
T
d k

dm. The associated inner product is


f , g)
T
= c

_
T
d
f g

dm, f , g
d
.
For real polynomials f , g)
T
= g, f )
T
, since k

dm is invariant under inversion,


that is, x
j
x
1
j
for all j. Also, wf , wg)
T
= f , g)
T
for each w S
d
by the
invariance of k.
The fact that k(x) is homogeneous of degree 0 implies that homogeneous poly-
nomials of different degrees are orthogonal for , )
T
, when > 0. The next
theorem completes the proof that , )
T
is a permissible inner product. Write

i
:=

x
i
and
i
:=

j,=i
1
x
i
x
j
[1(i, j)]
in the next proof.
Theorem 10.6.12 For > 0, polynomials f , g
d
and 1 i d,
D
i
x
i
f , g)
T
= f , D
i
x
i
g)
T
;
if f , g are homogeneous of different degrees then f , g)
T
= 0.
Proof Since (/
i
) f (x) = i exp(i
i
)
i
f (x) = ix
i

i
f (x), integration by parts
shows that
_
T
d
x
i

i
h(x)dm(x) = 0 for any periodic differentiable h. Also, note
that x
i

i
g

(x) =x
1
i
(
i
g)

=(x
i

i
g)

. Thus the Euler operator =


d
i=1
x
i

i
is self-adjoint for , )
T
(observe that (k

) = k
1
k = 0 for any > 0).
Suppose that f , g are homogeneous of degrees m, n, respectively; then m f , g)
T
=
f , g)
T
= f , g)
T
= n f , g)
T
; so m ,= n implies f , g)
T
= 0. Next,
10.6 Norm Formulae 345
_
T
d
_
D
i
x
i
f (x)g

(x) f (x)[D
i
x
i
g(x)]

_
k(x)

dm(x)
=
_
T
d
_
f g

+x
i
(
i
f )g

f g

+x
i
f
i
g

+x
i
f g

1
k

i
k
_
k

dm(x)
+
_
T
d
_
x
i
f g

1
k

i
k +(
i
x
i
f )g

f (
i
x
i
g)

_
k

dm(x).
This equation results from integration by parts, and the rst part is clearly zero.
The integrand of the second part equals

j,=i
_
f g

x
i
_
1
x
i
x
j
+
x
2
i
x
1
i
x
1
j
_
+
x
i
f x
j
(i, j) f
x
i
x
j
g

f
x
1
i
g

x
1
j
(i, j)g

x
1
i
x
1
j
_
=

j,=i
1
x
i
x
j
_
f g

(x
i
+x
j
) +x
i
f g

x
j
g

(i, j) f
+x
j
f g

x
i
f (i, j)g

.
For each j ,= i the corresponding term is skew under the transposition (i, j) and
the measure k

dm is invariant, and so the second part of the integral is also zero,


proving the theorem.
Theorem 10.6.13 For N
d
0
,

)
T
=
(d +1)

h(, +1)
((d 1) +1)

h(, 1)
.
Proof From Theorem 10.4.8 we have

)
T
= E
+
()E

()

)
T
. By
the denition of the inner product x
m
D
m

, x
m
D
m

)
T
=D
m

, D
m

)
T
(that
is, multiplication by x
i
is an isometry). Suppose that m = (); then by part (iv)
of Lemma 10.5.2 and Theorem 10.5.5

)
T
=
+
m

m
[ (d m+1) +
m
][ (d m) +
m
]
D
m

, D
m

)
T
=
( +
m
)[ (d m+1) +
m
]

m
[ (d m) +
m
]
_

_
T
=
( +
m
)[ (d m+1) +
m
]

m
[ (d m) +
m
]
E
+
(

)E

m
,

m
_
T
.
Thus

)
T

m
,

m
_
T
=
(d +1)

((d 1) +1)

m
h(, +1)h(
m
, 1)
(d +1)

m
((d 1) +1)

h(
m
, +1)h(, 1)
.
Induction on n =[[ nishes the proof.
346 Orthogonal Polynomials Associated with Symmetric Groups
10.6.4 Monic polynomials
Another normalization of nonsymmetric Jack polynomials is such that they are
monic in the x-basis; the polynomials are then denoted by
x

. For N
d
0
,

(x) = x

B(, )x

: ,
with coefcients in Q(). Note that the triangularity in the x-basis is in the
opposite direction to that in the p-basis. Since this polynomial has the same
eigenvalues for U
i
as

, it must be a scalar multiple of the latter. The value


of this scalar multiple can be obtained from the fact that

,
x

)
p
= 1 (since
p

, x

)
p
= 1 and the other terms appearing in

,
x

are pairwise orthogonal,


owing to the -ordering). Let
x

= c

; then 1 = c

)
p
and thus
c

=
h(, 1)
h(, +1)
.
The formulae for norms and value at 1
d
imply the following (where =
+
):
1.
x

_
1
d
_
=
(d +1)

h(, +1)
;
2.
x

,
x

)
h
= (d +1)

h(, 1)
h(, +1)
;
3.
x

,
x

)
T
=
(d +1)

((d 1) +1)

h(, 1)
h(, +1)
;
4.
x

,
x

)
p
=
h(, 1)
h(, +1)
.
10.6.5 Normalizing constants
The alternating polynomial for S
d
, namely
a(x) =

1i<jd
(x
i
x
j
),
has several important properties relevant to these calculations. For an inner prod-
uct related to an L
2
structure, the value of a, a) shows the effect of changing to
+1. Also, a is h-harmonic because
h
a is a skew polynomial of degree
_
d
2
_
2,
hence 0. Let
= (d 1, d 2, . . . , 1, 0) N
d,P
0
;
then a is the unique skew element of E

(and a = a

, cf. Denition 10.4.9).


The previous results will thus give the values of a, a) in the permissible inner
products.
Theorem 10.6.14 The alternating polynomial satises
a(x) =

wS
d
sign(w)x
w
=

wS
d
sign(w)p
w
=

wS
d
sign(w)
w
.
10.6 Norm Formulae 347
Proof The rst equation for a is nothing other than the Vandermonde deter-
minant. By Proposition 10.4.4,

= p

+B(, ) p

:
+
~ ; [[ = [[.
But
+
~ and [[ = [[ =
_
d
2
_
implies that
j
=
k
for some j ,= k;
in turn this implies that

wS
d
sign(w)p
w
= 0. Thus

wS
d
sign(w)
w
=

wS
d
sign(w)p
w
. To compute this we use a determinant formula of Cauchy:
for 1 i, j d let A
i j
= (1x
i
y
j
)
1
; then
det A =
a(x)a(y)

d
i, j=1
(1x
i
y
j
)
.
To prove this, use row operations to put zeros in the rst column (below the rst
row): let B
1 j
= A
1 j
and B
i j
= A
i j
A
i1
A
1 j
/A
11
for 2 i d and 1 j d; thus
B
i1
= 0 and
B
i j
=
(x
1
x
i
)(y
1
y
j
)
(1x
1
y
j
)(1x
i
y
1
)(1x
i
y
j
)
.
Extracting the common factors from each row and column shows that
det A = det B =

d
i=2
(x
1
x
i
)
d
j=2
(y
1
y
j
)
(1x
1
y
1
)
d
i=2
(1x
i
y
1
)
d
j=2
(1x
1
y
j
)
det (A
i j
)
d
i, j=2
,
which proves the formula inductively.
Now apply the skew operator to the generating function for p

wS
d
sign(w)

N
d
0
p
w
(x)y

=

wS
d
sign(w)
d

i=1
[1(wx)
i
y
i
]
1
d

j,k=1
(1x
j
y
k
)

= a(x)a(y)
d

j,k=1
(1x
j
y
k
)
1
.
This holds by the Cauchy formula, since the second part of the product is S
d
-
invariant. The coefcient of y

in a(y) is 1 since the rest of the product does not


contribute to this exponent, and thus

wS
d
sign(w)p
w
(x) = a(x).
Proposition 10.6.15 In the pairing norm, a, a)
p
= d!; also,
E

R
_
h(, +1)/h(, 1) = 1.
Proof By the norm formula in Theorem 10.4.11,
a, a)
p
= d!E

(
R
)

)
p
= d!E

(
R
)h(, +1)/h(, 1);
but a, a)
p
=

w
1
,w
2
S
d
sign(w
1
w
2
)

p
w
1

(x), x
w
2

_
p
= d!, by denition.
348 Orthogonal Polynomials Associated with Symmetric Groups
The second part of the proposition can be proved by a direct calculation
showing that
E

R
_
=

1i<jd
( j i 1) +( j i)
( j i)( +1)
=
h(, 1)
h(, +1)
.
Theorem 10.6.16 The torus norm for a is a, a)
T
=d!
(d +1)

((d 1) +1)

, and for
> 0 the normalizing constant is
c

=
_
_
T
d
k

dm
_
1
=
( +1)
d
(d +1)
for N, c

= (!)
d
/(d)!.
Proof By the norm formula and Proposition 10.6.15,
a, a)
T
= d!E

R
_
(d +1)

((d 1) +1)

h(, +1)
h(, 1)
= d!
(d +1)

((d 1) +1)

.
Further, c
1
+1
=
_
T
d aa

dm = c
1

a, a)
T
and so
c

c
+1
= d!
(d +1)

((d 1) +1)

=
(d +1)
d
( +1)
d
.
Analytic function theory and the asymptotics for the gamma function are used
to complete the proof. Since k 0, the function
() =
( +1)
d
(d +1)
_
T
d
k

dm
is analytic on : Re > 0; additionally, the formula for c

/c
+1
shows that
( +1) = (). Analytic continuation shows that is the restriction of an
entire function that is periodic and of period 1. We will show that () =
O
_
[[
(d1)/2
_
, implying that is a polynomial and periodic, hence constant; of
course, (0) = 1. Restrict to a strip, say : 1 Re 2. On this strip
[
_
T
d k

dm[
_
T
d k
Re
dm M
1
for some M
1
> 0. By the Gauss multiplication
formula, we have
( +1)
d
(d +1)
= (2)
(d1)/2
d
d1/2
d1

j=1
( +1)
( + j/d)
.
The asymptotic formula ( +a)/( +b) =
ab
_
1+O(
1
)
_
, valid on
Re 1, applied to each factor yields

( +1)
d
(d +1)

= M
2

d
d

[[
(d1)/2
_
1+O
_
1

__
.
Extend this bound by periodicity. Hence = 1.
10.6 Norm Formulae 349
Recall that, for f , g
d
(Theorem 7.2.7),
f , g)
h
= (2)
d/2
b

_
R
d
_
e

h
/2
f
__
e

h
/2
g
_
[a(x)[
2
e
|x|
2
/2
dx,
where the normalizing constant b

satises 1, 1)
h
= 1. Since e

h
/2
a = a, the
following holds.
Theorem 10.6.17 For > 0,
a, a)
h
= d! (d +1)

and b

=
d

j=2
( +1)
( j +1)
.
Proof Using the method of the previous proof,
a, a)
h
= d!E

(
R
)
(d +1)

h(, +1)
h(, 1)
= d! (d +1)

.
Further,
b
1
+1
= (2)
d/2
_
R
d

i<j

x
i
x
j

2+2
e
[[x[[
2
/2
dx = b
1

a, a)
h
.
Also, d! (d +1)

= d!
d
j=2
( j +1)
j1
= ( +1)
1d

d
j=2
( j +1)
j
; thus
d

j=2
( j +1)
( +1)
d! (d +1)

=
d

j=2
( j + j +1)
( +2)
.
We will use the same technique as for the torus, rst converting the integral to
one over the sphere S
d1
, a compact set. Let =
1
2
d(d 1), the degree of h. In
spherical polar coordinates,
(2)
d/2
_
R
d

i<j

x
i
x
j

2
e
[[x[[
2
/2
dx
= 2

_
d
2
+
_

_
d
2
_
_
S
d1

i<j

x
i
x
j

2
d(x);
the normalizing constant (see (4.1.3)) satises
d1
=
_
S
d1 d = 1. Dene the
analytic function
() =
_
d
2
+
d(d1)
2

_ d

j=2
_
( +1)
( j +1)
2
d(d1)/2
(
d
2
)
d1

_
S
d1

i<j

x
i
x
j

2
d(x)
_
.
The formula for b

/b
+1
shows that ( +1) =(). By analytic continuation,
is entire and periodic. On the strip : 1 Re 2 the factors of excluding
350 Orthogonal Polynomials Associated with Symmetric Groups
the gamma functions in are uniformly bounded. From the asymptotic formulae
(valid in < argz < and a, b > 0),
log(z) =
_
z
1
2
_
logz z +
1
2
log(2) +O
_
1
[z[
_
,
log(az +b) =
_
az +b
1
2
_
(logz +loga) az +
1
2
log(2) +O
_
1
[z[
_
;
the second formula is obtained from the rst by the use of log(az +b) = logaz +
b/(az) +O
_
[z[
2
_
(see Lebedev [1972], p. 10). Apply this to the rst part of
log:
log
_
d
2
+
d(d1)
2

_
+(d 1)log( +1)
d

j=2
log( j +1)
=
d1
2
(1+d)log
d(d1)
2

d

j=2
_
j +
1
2
_
log j +
1
2
log(2)
+
_
d1
2
+
d(d1)
2
+(d 1)
_
+
1
2
_

j=2
_
j +
1
2
__
log
+
_

d(d1)
2
(d 1) +
d

j=2
j
_
+O
_
[[
1
_
=C
1
+C
2
+
d1
2
log +O
_
[[
1
_
,
where C
1
and C
2
are real constants depending only on d. Consequently, () =
O
_
[[
(d1)/2
_
in the strip : 1 Re 2. By periodicity the same bound
applies for all , implying that is a periodic polynomial and hence constant.
This proves the formula for b

.
10.7 Symmetric Functions and Jack Polynomials
In the present context, a symmetric function is an element of the abstract ring
of polynomials in innitely many indeterminates e
0
= 1, e
1
, e
2
, . . . with coef-
cients in some eld (an extension of Q). Specialized to R
d
, the generators e
i
become the elementary symmetric functions of degree i (and e
j
= 0 for j > d).
For xed N
d
0
a polynomial p

can be expressed as a polynomial in x


1
, . . . , x
d
and the indeterminates e
i
(involving an arbitrary unbounded number of variables)
as follows. Let n max
i

i
; then p

is the coefcient of y

in
d

i=1
(1x
i
y
i
)
1
d

j=1
_
1e
1
y
j
+e
2
y
2
j
+ +(1)
n
e
n
y
n
j

.
When e
i
is evaluated at x R
d
the original formula is recovered. As remarked
before, the polynomials

have expressions in the p-basis with coefcients


10.7 Symmetric Functions and Jack Polynomials 351
independent of the number d of underlying variables provided that d m, where

j
=0 for any j >m. Thus the mixed x
j
, e
i
expression for p

provides a similar
one for

which is now meaningful for any number (m) of variables.


The Jack polynomials are parametrized by 1/ and the original formulae
(Stanley [1989]) use two hook-length products h

, h

, called upper and lower,


respectively. In terms of our notation:
Denition 10.7.1 For a partition N
d,P
0
, with m = () and parameter , the
upper and lower hook-length products are
h

() =
[[
h(, 1) =
m

i=1

j=1
_

1
(
i
j +1) +#k : k > i; j
k

,
h

() =
[[
h(, ) =
m

i=1

j=1
_

1
(
i
j) +1+#k : k > i; j
k

.
There is a relationship between h
_

R
, +1
_
and h(, ), which is needed
in the application of our formulae. The quantity #S
d
() = # :
+
= =
dim E

appears because of the symmetrization.


Let m
j
() = #i : 1 i d,
i
= j (zero for j >
1
) for j 0. Then
#S
d
() = d!/

1
j=0
m
j
()!.
Lemma 10.7.2 For a partition N
d,P
0
,
h
_

R
, +1
_
= #S
d
()
(d +1)

(d)

h(, ).
Proof We will evaluate the ratios (d +1)

/(d)

and h
_

R
, +1
_
/h(, ).
Clearly,
(d +1)

(d)

=
1

d
d!

d
i=1
[(d +1i) +
i
]
=
[d ()]!

(d)
d!

()
i=1
[(d +1i) +
i
] .
If the multiplicity of a particular
i
is 1 (in ) then
L
_

R
; d +1i, j +1
_
= L(; i, j)
for 1 j <
i
. To account for multiplicities, let w be the inverse of r

R, that is,
r

R (w(i)) =i for 1 i d. Then


R
w(i)
=
i
(for example, suppose that
1
>
2
=

3
>
4
; then w(1) =d, w(2) =d2, w(3) =d1). As in the multiplicity-1 case
we have
L
_

R
; w(i), j +1
_
= L(; i, j), 1 j <
i
.
352 Orthogonal Polynomials Associated with Symmetric Groups
Then h
_

R
, +1; w(i), j +1
_
= [
i
j + (L(; i, j) +1)], which is the same
as h(, ; i, j) for 1 j <
i
. Thus
h
_

R
, +1
_
h(, )
=
()

i=1
h
_

R
, +1; w(i), 1
_
h(, ; i,
i
)
.
By the construction of w, h
_

R
, +1; w(i), 1
_
= (
i
+ (d +1i)). Suppose
that
i
has multiplicity m (that is,
i1
>
i
= =
i+m1
>
i+m
); it then
follows that L(; i +l 1,
i
) = ml for 1 l m and
m

l=1
h(, ; i +l 1,
i
) =
m

l=1
[(ml +1)] =
m
m!.
Thus
h
_

R
, +1
_
h(, )
=

()
i=1
[
i
+ (d +1i)]

()

1
j=1
m

( j)!
.
Combining the ratios, we nd that
h
_

R
, +1
_
=
(d +1)

(d)

d!
[d ()]!

1
j=1
m

( j)!
h(, ).
Since m

(0) = d (), this completes the proof.


There is another way to express the commuting operators, formulated by
Cherednik [1991] for f
d
:
X
i
f =
1

x
i
f
x
i
+x
i
j<i
f (i, j) f
x
i
x
j
+

j>i
x
j
[ f (i, j) f ]
x
i
x
j
+(1i) f ;
it is easy to check that X
i
f D
i
x
i
f =
j<i
(i, j) f [1+(d 1)] f , so that
X
i
f =U
i
f (1+d) f . Another denition of nonsymmetric Jack polynomials
is as the simultaneous eigenfunctions of X
i
, normalized to be monic in the x-
basis. Another notation for these in the literature is E

_
x;
1
_
; this is the same
polynomial as
x

.
Previously, we dened symmetric (that is, S
d
-invariant) elements j

of the
spaces E

for each N
d,P
0
. An alternative approach to j

is to regard them as
the simultaneous eigenfunctions of d algebraically independent symmetric differ-
ential operators. These eigenfunctions can be taken as restrictions of elementary
symmetric functions of U
i
; subsequently to the denition we will prove their
S
d
-invariance.
Denition 10.7.3 For 0 i d, let T
i
be the linear operator on
d
dened by
(for indeterminate t)
d

i=0
t
i
T
i
=
d

j=1
(1+tU
j
).
The denition is unambiguous since U
i
is a commutative set.
10.7 Symmetric Functions and Jack Polynomials 353
Theorem 10.7.4 For 1 i d and w S
d
, wT
i
= T
i
w. Further, T
i
coincides
with a differential operator when restricted to
W
(the symmetric polynomials).
Proof It sufces to show that ( j, j +1)T
i
=T
i
( j, j +1) for 1 j < d. Let =
( j, j +1). By Lemma 10.4.1 U
k
= U
k
for each k ,= j, j +1, so it remains to
prove that (1+tU
j
)
_
1+tU
j+1
_
= (1+tU
j
)
_
1+tU
j+1
_
. Again by Lemma
10.4.1,
_
U
j
+U
j+1
_
=
_
U
j+1
+
_
+(U
j
), implying that U
j
+U
j+1
commutes with . Also, U
j
U
j+1
=
_
U
j+1
+
_
U
j+1
= U
j+1
(U
j
) +
U
j+1
=U
j
U
j+1
. Thus T
i
=T
i
.
This shows that q
W
implies T
i
q
W
. The same proof as that used for
Theorem 6.7.10 (which depended on the Leibniz rule) can be used to show that
T
i
is consistent with a differential operator of degree i on
W
, provided that
ad(q)
2
U
k
= 0 for each q
W
and 1 k d. But, for any f
d
, we have
qU
k
f = U
k
(qf )
_
x
k
q/x
k
_
f , so that ad(q)U
k
= x
k
q/x
k
(a multiplier)
and [ad (q)]
2
U
k
= 0.
The eigenvalue of T
i
acting on

is the coefcient of t
i
in

d
j=1
[1+t
j
()];
dene
d

i=0

i
()t
i
=
d

j=1
(1+t [ (d i +1) +
i
+1]).
The values
i
() : 1 i d determine N
d,P
0
uniquely. For any wS
d
, w

is an eigenfunction of T
i
with eigenvalue
i
(). The following is now obvious.
Theorem 10.7.5 For each N
d,P
0
there is a unique (up to scalar multiplica-
tion) symmetric polynomial q such that T
i
q =
i
()q, for each i with 1 i d
and q = bj

for some b Q().


The Jack polynomials J

_
x;
1
_
(in d variables, thus labeled by partitions
with no more than d nonzero parts) were dened to be an orthogonal fam-
ily for an algebraically dened inner product on symmetric functions (see
Jack [1970/71], Stanley [1989] and Macdonald [1995]) and for the torus
(Beerends and Opdam [1993]). The polynomials with =
1
2
are called zonal
spherical functions in statistics (James and Constantine [1974]). Bergeron and
Garsia [1992] showed that these are spherical functions on a certain nite homo-
geneous space. Beginning with a known coefcient, on the one hand we can
express J

in terms of j

, indeed the coefcient of x

in J

_
x;
1
_
is h

() =

[[
h(, ). On the other hand,
j

=E
+
_

R
_

+
=
1
E
+
()

,
and the expansion of

in the x-basis shows that x

only occurs in

.
But E

_
x;
1
_
is monic in x

and thus the coefcient of x

in j

is
h
_

R
, +1
_
/h(, 1).
354 Orthogonal Polynomials Associated with Symmetric Groups
Proposition 10.7.6 For N
d,P
0
,
J

_
;
1
_
=
(d)

h(, 1)
(d +1)


[[
#S
d
()
j

.
Proof Let J

_
;
1
_
= bj

and match up the coefcients of x

. This implies
that
b =
h(, )h(, 1)
k
[[
h(
R
, +1)
,
and Lemma 10.7.2 nishes the calculation.
Corollary 10.7.7 For N
d,P
0
the following hold:
(i) J

_
1
d
;
1
_
=
[[
(d)

;
(ii)

J

_
;
1
_
, J

_
;
1
__
h
= (d)

()h

();
(iii)

J

_
;
1
_
, J

_
;
1
__
p
=
(d)

(d +1)

()h

();
(iv)

J

_
;
1
_
, J

_
;
1
__
T
=
(d)

((d 1) +1)

()h

().
Proof This involves straightforward calculations using the norm formulae from
the previous section, Theorem 10.4.10 and Lemma 10.7.2.
For their intrinsic interest as well as for applications to CalogeroSutherland
models we derive the explicit forms of the two operators,
d
i=1
U
i
and
d
i=1
U
2
i
(actually, we will use a convenient shift of the second). Recall the notation
i
=
/x
i
for 1 i d.
Proposition 10.7.8 We have
d
i=1
U
i
=
d
i=1
x
i

i
+d +
1
2
d(d +1).
Proof For 1 i d, by Lemma 10.2.1
U
i
= x
i
D
i
+ +1+

j,=i
(i, j)

j<i
(i, j)
= x
i
D
i
+ +1+

j>i
(i, j)
and
d

i=1
U
i
=
d

i=1
x
i

i
+

1i<jd
[1(i, j)] +

1i<jd
(i, j) +d(1+)
=
d

i=1
x
i

i
+d(1+) +
1
2
d(d 1).
We used here the general formula for

d
i=1
x
i
D
i
(see Proposition 6.5.3).
10.7 Symmetric Functions and Jack Polynomials 355
Theorem 10.7.9 An invariant operator commuting with each U
j
is given by
d

i=1
(U
i
1d)
2
=
d

i=1
(x
i

i
)
2
+2

1i<jd
x
i
x
j
_

j
x
i
x
j

1(i, j)
(x
i
x
j
)
2
_
+
1
6

2
d(d 1)(2d 1).
Proof For i ,= j, let
i j
= [1 (i, j)]/(x
i
x
j
), an operator on polynomials.
From the proof of the preceding proposition, U
i
1d = x
i

i
+
j,=i
x
i

i j

(d 1) +

j>i
(i, j), and we write
d

i=1
(U
i
1d)
2
=
d

i=1
(x
i

i
)
2
+(E
1
+E
2
) +
2
(E
3
+E
4
)
2(d 1)
d

i=1
(U
i
1) +
2
d(d 1)
2
,
where
E
1
=

i,=j
(x
i

i
x
i

i j
+x
i

i j
x
i

i
),
E
2
=

i<j
[x
i

i
(i, j) +(i, j)x
i

i
] =

i,=j
(i, j)x
i

i
,
E
3
=

i,=j
_

k,=i
x
i

i j
x
i

ik
+

k>i
[x
i

i j
(i, k) +(i, k)x
i

i j
]
_
,
E
4
=
d1

i=1

j>i

k>i
(i, j)(i, k)
=
d1

i=1
_
d i +

i<j<k
[(i, j)(i, k) +(i, k)(i, j)]
_
.
The argument depends on the careful grouping of terms and permuting the
indices of summation. Fix a pair (i, j) with i < j and combine the corresponding
terms in E
1
+E
2
to obtain
x
i
1(i, j)
x
i
x
j
+x
2
i

i
(i, j)
x
i
x
j
x
2
i
1(i, j)
(x
i
x
j
)
2
+x
i
x
i

i
(i, j)x
i

i
x
i
x
j
+(i, j)x
i

i
+x
j
1(i, j)
x
j
x
i
+x
2
j

j
(i, j)
x
j
x
i
x
2
j
1(i, j)
(x
i
x
j
)
2
+x
j
x
j

j
(i, j)x
j

j
x
j
x
i
+(i, j)x
j

j
;
the second line comes from interchanging i and j. The coefcient of (i, j)
i
is
x
i
x
j
+x
2
j
/(x
i
x
j
) +x
j
= 0, and similarly the coefcient of (i, j)
j
is zero. The
nonvanishing terms are
356 Orthogonal Polynomials Associated with Symmetric Groups
2
x
2
i

i
x
2
j

j
x
i
x
j
2
x
i
x
j
[1(i, j)]
(x
i
x
j
)
2
= 2x
i
x
j
_

j
x
i
x
j

1(i, j)
(x
i
x
j
)
2
_
+2(x
i

i
+x
j

j
).
Summing the last term over all i < j results in 2(d 1)
d
i=1
x
i

i
.
There are terms in E
3
with only two distinct indices: for any pair i, j with
i < j one has
x
i

i j
x
i

i j
+x
j

ji
x
j

ji
+x
i

i j
(i, j) +(i, j)x
i

i j
= [1(i, j)] [1(i, j)] = 0,
because (i, j)
i j
=
i j
so that
x
i

i j
x
i

i j
= x
i
/x
i
x
j
(x
i

i j
x
j

i j
) = x
i

i j
,
which combines with x
j

ji
x
j

ji
to produce 1(i, j). Also,
x
i

i j
(i, j) +(i, j)x
i

i j
= x
i
(i, j) 1/x
i
x
j
+x
j
(i, j) 1/x
j
x
i
=[1(i, j)].
For any triple (i, j, k) (all distinct, and the order matters), the rst part of E
3
(in
the second term interchange j and k) contributes
x
2
i
[1(i, k)]
(x
i
x
j
)(x
i
x
k
)

x
i
x
k
[(i, k) (i, k)(i, j)]
(x
i
x
k
)(x
k
x
j
)
and the second part of E
3
contributes (but only for k > i)
x
i
[ p(i, k) (i, j)(i, k)]
x
i
x
j
+
x
k
[(i, k) (i, k)(i, j)]
x
k
x
j
;
however, this is symmetric in i, k since (k, j)(k, i) = (i, k)(i, j) (interchanging i
and k in the rst term). Fix a triple (p, q, s) with p < q, and then the transposition
(p, q) appears in the triples (p, s, q), (q, s, p) with coefcient
x
2
p
(x
p
x
s
)(x
p
x
q
)
+
x
2
q
(x
q
x
s
)(x
q
x
p
)

x
p
x
q
(x
p
x
q
)(x
q
x
s
)

x
p
x
q
(x
q
x
p
)(x
p
x
s
)
+
x
p
x
p
x
s
+
x
q
x
q
x
s
= 0.
Finally we consider the coefcient of a given 3-cycle (of permutations), say,
(p, q)(p, s) = (q, s)(q, p) = (s, p)(s, q), which arises as follows:
x
p
x
q
(x
p
x
q
)(x
q
x
s
)

x
q
x
q
x
s
+
x
q
x
s
(x
q
x
s
)(x
s
x
p
)

x
s
x
s
x
p
+
x
s
x
p
(x
s
x
p
)(x
p
x
q
)

x
p
x
p
x
q
=1.
This is canceled by the corresponding 3-cycle in E
4
. It remains to nd the coef-
cient of 1 (the identity operator) in E
3
and E
4
. The six permutations of any
triple p < q < s correspond to the six permutations of x
2
p
/(x
p
x
q
)(x
p
x
s
),
10.8 Miscellaneous Topics 357
and these terms add to 2. Thus E
3
and E
4
contribute 2
_
d
3
_
+

d1
i=1
(d i) =
1
6
d(d 1)(2d 1). Further,
2(d 1)
d

i=1
[x
i

i
(U
i
1)] +
2
d(d 1)
2
= 0
and this nishes the proof.
Corollary 10.7.10 For N
d,P
0
and
+
=,
_
d

i=1
(x
i

i
)
2
+2

i<j
x
i
x
j
_

j
x
i
x
j

1(i, j)
(x
i
x
j
)
2
_
_

=
d

i=1

i
[
i
2(i 1)]

.
Proof The operator is
d
i=1
(U
i
1d)
2

1
6

2
d(d 1)(2d 1). Because of its
invariance, the eigenvalue for

is the same as that for

, namely,

d
i=1
[(d
i +1) +
i
d]
2

d
i=1
(i 1)
2
.
The difference part of the operator vanishes for symmetric polynomials.
Corollary 10.7.11 For N
d,P
0
,
_
d

i=1
(x
i

i
)
2
+2

i<j
x
i
x
j

j
x
i
x
j
_
j

=
d

i=1

i
[
i
2(i 1)] j

.
This is the second-order differential operator whose eigenfunctions are Jack
polynomials.
10.8 Miscellaneous Topics
Recall the exponential-type kernel K(x, y), dened for all x, y R
d
, with the prop-
erty that D
(x)
i
K(x, y) = y
i
K(x, y) for 1 i d. The component K
n
(x, y) that is
homogeneous of degree n in x (and in y) has the reproducing property for the
inner product , )
h
on P
d
n
, that is, K
n
_
x, D
(y)
_
q(y) = q(x) for q P
d
n
. So, the
orthogonal basis can be used to obtain the following:
K
n
(x, y) =

[[=n
h(, 1)
h(, +1)(d +1)

(x)

(y).
The intertwining map V can also be expressed in these terms. Recall its dening
properties: VP
d
n
P
d
n
,V1 =1 and D
i
Vq(x) =Vq(x)/x
i
for every q
d
, 1
i d. The homogeneous component of degree n of V
1
is given by
V
1
q(x) =
1
n!

[[=n
_
n

_
x

_
D
(y)
_

q(y).
358 Orthogonal Polynomials Associated with Symmetric Groups
Dene a linear map on polynomials by p

= x

/!; then
1
V
1
q(x)
=
[[=n
p

(x)(D
(y)
)

q(y) = F
n
(x, ), q)
h
, the pairing of the degree-n homoge-
neous part of F (x, y) (the generating function for p

and the reproducing kernel


for , )
p
) with q in the h inner product (see Denition 7.2.2).
Write q =

[[=n
b

(expanding in the -basis); then, with [[ = n =[[,


F
n
(x, ), q)
h
=

,
1

)
p

(x)b

)
h
=

(d +1)

+b

(x).
Thus
1
V
1
acts as the scalar (d +1)

1 on the space E

; hence V =
(d +1)
1

1 on E

. From the expansion

= p

: ,
the matrix B
,
can be dened for all , N
d
0
with [[ = n = [[, for xed n,
such that B
,
= 1 and B

= 0 unless . The inverse of the matrix has the


same triangularity property, so that
p

__
B
1
_

:
_
and thus
Vx

=!V p

=
!
(d +1)

_
!
(d +1)

+
(B
1
)

:
_
.
Next, we consider algorithms for p

and

. Since p

is a product of poly-
nomials p
n
(x
i
; x) only these need to be computed. Further, we showed in the
proof of Proposition 10.3.7 that p
n
(x
i
; x) =
n
(x) +x
i
p
n1
(x
i
; x) (and also that

n+1
(x)/x
i
=p
n
(x
i
; x)), where
n
is generated by

n=0

n
(x)t
n
=
d

i=1
(1x
i
t)

=
_
d

j=0
(1)
j
e
j
t
j
_

,
in terms of the elementary symmetric polynomials in x (e
0
=1, e
1
=x
1
+x
2
+ ,
e
2
= x
1
x
2
+ , . . . , e
d
= x
1
x
2
x
d
).
Proposition 10.8.1 For > 0, the polynomials
n
are given by the recurrence
(with
j
= 0 for j < 0)

n
=
1
n
d

i=1
(1)
i1
(ni +i)e
i

ni
.
Proof Let g =

d
j=0
(1)
j
e
j
t
j
and differentiate both sides of the generating
equation:

n=1
n
n
(x)t
n
=t

t
g

=
_
d

j=1
(1)
j
je
j
t
j
_
g
1
;
10.8 Miscellaneous Topics 359
thus

n=1
n
n
(x)t
n
d

j=0
(1)
j
e
j
t
j
=
d

j=1
(1)
j
je
j
t
j

n=0

n
(x)t
n
.
The coefcient of t
m
in this equation is
d

j=0
(1)
j
(m j)e
j

mj
=
d

j=1
(1)
j
je
j

mj
,
which yields m
m
=
d
j=1
(1)
j
(m j + j)e
j

mj
.
It is an easy exercise to show that
n
is the sum over distinct N
d
0
, with

1
+2
2
+ +d
d
= n, of ()
[[
(1)
n[[

d
j=1
_
e

j
j
/
j
!
_
.
The monic polynomials
x

can be computed algorithmically by means of a


kind of YangBaxter graph. This is a directed graph, and each node of the graph
consists of a triple (, (),
x

) where N
d
0
and () denotes the vector
(
i
())
d
i=1
. The latter is called the spectral vector because it contains the U
i
-
eigenvalues. The node can be labeled by for brevity. The adjacent transpositions
are fundamental in this application and are denoted by s
i
= (i, i +1), 1 i < d.
One type of edge in the graph comes from the following:
Proposition 10.8.2 Suppose that N
d
0
and
i
<
i+1
; then
_
1c
2
_

= s
i

x
s
i

c
x
s
i

,
where
c =

i+1
()
i
()

x
s
i

= s
i

+c
x

and the spectral vector (s


i
) = s
i
().
Proof The proof is essentially the same as that for Proposition 10.4.5. In the
monic case the leading term of s
i

+c
x

is x
s
i

, because every monomial x

in

satises s
i
.
The other type of edge is called an afne step and corresponds to the map
= (
2
,
3
, . . . ,
d
,
1
+1) for N
d
0
.
Lemma 10.8.3 For N
d
0
the spectral vector () =().
Proof For 2 j d, let
j
= 1 if
1

j
(equivalently,
1
+1 >
j
) and
j
=0
otherwise. Then, for 1 i < d,
r

(i) =
i+1
+#l : 2 l i +1,
l

i+1

+#l : i +1 < l d,
l
>
i+1

= r

(i +1);
360 Orthogonal Polynomials Associated with Symmetric Groups
see Denition 10.3.10. Thus

(i) = [d r

(i +1) +1] +
i+1
+ 1 =

(i +1). Similarly, r

(d) = r

(1) and

(d) =

(1) +1.
From the commutation relations in Lemma 10.2.1 we obtain
U
i
x
d
f = x
d
[U
i
(i, d)] f , 1 i < d,
U
d
x
d
f = x
d
(1+D
d
x
d
) f .
Let
d
= s
1
s
2
. . . s
d1
; thus
d
(d) = 1 and
d
(i) = i +1 for 1 i < d (a cyclic
shift). Then, by the use of U
i+1
= s
i
U
i
s
i
s
i
and s
j
U
i
s
j
= s
j
U
i
s
j
for j < i 1
or j > i,
U
i
x
d
f = x
d
_

1
d
U
i+1

d
_
f , 1 i < d,
U
d
x
d
f = x
d
_
1+
1
d
U
1

d
_
f .
If f satises U
i
f =
i
f for 1 i d then U
i
_
x
d

1
d
f
_
=
i+1
_
x
d

1
d
f
_
for 1
i < d and U
d
_
x
d

1
d
f
_
= (
1
+1)
_
x
d

1
d
f
_
. Note that
1
d
f (x) = f
_
x
1
d
_
=
f (x
d
, x
1
, . . . , x
d1
).
Proposition 10.8.4 Suppose that N
d
0
; then

(x) = x
d

(x
d
, x
1
, . . . , x
d1
).
Proof The above argument shows that x
d

_
x
1
d
_
has spectral vector ().
The coefcient of x

in x
d

_
x
1
d
_
is the same as the coefcient of
x

in
x

.
The YangBaxter graph is interpreted as an algorithm as follows:
1. the base point is
_
0
d
, ((d i +1) +1)
d
i=1
, 1
_
,
2. the node (, (), f ) is joined to (, (), x
d
f (x
d
, x
1
, . . . , x
d1
)) by
a directed edge labeled S;
3. if
i
<
i+1
then the node (, (), f ) is joined to
_
s
i
, s
i
(),
_
s
i
+

i+1
()
i
()
_
f
_
by a directed edge labeled T
i
.
A path in the graph is a sequence of connected edges (respecting the directions)
joining the base point to another node. By Propositions 10.8.4 and 10.8.2 the end
result of the path is a triple (, (),
x

).
For example, here is a path to the node labeled (0, 1, 0, 2): (0, 0, 0, 0)
S

(0, 0, 0, 1)
T
3
(0, 0, 1, 0)
T
2
(0, 1, 0, 0)
S
(1, 0, 0, 1)
T
3
(1, 0, 1, 0)
S
(0, 1, 0, 2).
10.8 Miscellaneous Topics 361
All paths with the same end point have the same length. To show, this intro-
duce the function (t) =
1
2
([t[ +[t +1[ 1) for t Z; then (t) = t for t 0
and = t 1 for t 1, also (t 1) = (t). For N
d
0
, dene
/
() =

1i<jd
(
i

j
).
Proposition 10.8.5 For N
d
0
, the number of edges in a path joining the base
point to the node (, (),
x

) is
/
() +[[.
Proof Argue by induction on the length. The induction starts with
/
_
0
d
_
+

0
d

= 0. Suppose that the claim is true for some . Consider the step to .
It sufces to show that
/
() =
/
(), since [[ =[[ +1. Indeed,

/
()
/
() =
d1

i=1
(
i+1
(
1
+1))
d

i=2
(
1

i
)
=
d

i=2
[ ((
1

i
) 1) (
1

i
)] = 0.
Now, suppose that
i
<
i+1
(that is,
i

i+1
1); then [s
i
[ =[[ and

/
(s
i
)
/
() = (
i+1

i
) (
i

i+1
)
= (
i+1

i
) (
i+1

i
1)
= 1.
This completes the induction.
In the above example

/
((0, 1, 0, 2)) = (1) + (2) + (1) + (1) + (2) = 3.
Recall that e

h
/2
is an isometric map on polynomials from the inner prod-
uct , )
h
to H = L
2
_
R
d
; b

i<j

x
i
x
j

2
e
[[x[[
2
/2
dx
_
. The images of

:
N
d
0
under e

h
/2
form the orthogonal basis of nonsymmetric JackHermite
polynomials. By determining e

h
/2
U
i
e

h
/2
, we will exhibit commuting self-
adjoint operators on H whose simultaneous eigenfunctions are exactly these
polynomials.
Proposition 10.8.6 For 1 i d, the operator
U
H
i
= e

h
/2
U
i
e

h
/2
=D
i
x
i
+ D
2
i

j<i
( j, i)
is self-adjoint on H .
362 Orthogonal Polynomials Associated with Symmetric Groups
Proof The self-adjoint property is a consequence of the isometric transformation.
Since
h
commutes with each transposition ( j, i), it sufces to show that
e

h
/2
D
i
x
i
=
_
D
i
x
i
D
2
i
_
e

h
/2
.
Use the relation (Lemma 7.1.10)
h
x
i
= x
i

h
+2D
i
to show inductively that

n
h
D
i
x
i
D
i
x
i

n
h
= 2nD
2
i

n1
h
;
now apply
h
to both sides of the equation to obtain

n+1
h
D
i
x
i

h
D
i
x
i

n
h
= 2nD
2
i

n
h
=
n+1
h
D
i
x
i
D
i
x
i

n+1
h
2D
2
i

n
h
.
Multiply the equation for n by
_

1
2
_
n
/n! and sum over n = 0, 1, 2, . . .; the result
is e

h
/2
D
i
x
i
D
i
x
i
e

h
/2
=D
2
i
e

h
/2
.
The nonsymmetric JackHermite polynomials e

h
/2

are not homoge-


neous; the norms in H are the same as the , )
h
norms for

.
10.9 Notes
The forerunners of the present study of orthogonal polynomials with nite
group symmetry were the Jack polynomials and also the Jacobi polynomials of
Heckman [1987], Heckman and Opdam [1987] and Opdam [1988]. These are
trigonometric polynomials periodic on the root lattice of a Weyl group and are
eigenfunctions of invariant (under the corresponding Weyl group) differential
operators (see, for example, Section 5.5 for an example in the type-A category).
Beerends and Opdam [1993] found an explicit relationship between Jack poly-
nomials and the HeckmanOpdam Jacobi polynomials of type A. Okounkov
and Olshanski [1998] studied the asymptotics of Jack polynomials as d .
It seems clear now that the theory of orthogonal polynomials with respect
to invariant weight functions requires the use of differentialdifference opera-
tors. Differential operators do not sufce. After Dunkls [1989a] construction of
rational differentialdifference operators, Heckman 1991a] dened trigonomet-
ric ones; the connection between the two is related to the change of variables
x
j
= exp(i
j
), at least for S
d
. Heckmans operators are self-adjoint but not com-
muting; later, Cherednik [1991] found a modication providing commutativity.
Lapointe and Vinet [1996] introduced the explicit form of U
i
used in this chap-
ter to generate Jack polynomials by a Rodrigues-type formula. Opdam [1995]
constructed orthogonal decompositions for polynomials associated with Weyl
groups in terms of commuting self-adjoint differentialdifference operators. Sev-
eral details and formulae in this chapter are special cases of Opdams results;
because we considered only symmetric groups, the formulae are a good deal more
accessible.
10.9 Notes 363
Dunkl [1998b] introduced the p-basis, partly for the purpose of analyzing the
intertwining operator. The algorithm for

in terms of

m
(when m = ())
was taken from this paper. Sahi [1996] computed the norms of the nonsymmetric
Jack polynomials for the inner product , )
p
, that is, the formal biorthogonal-
ity

p

, x

_
p
=

. Knop and Sahi [1997] developed a hook-length product


associated with tableaux of compositions (the row lengths were not in mono-
tone order). Baker and Forrester, in a series of papers ([1997a,b, 1998]) studied
nonsymmetric Jack polynomials for their application to CalogeroMoser sys-
tems (see Chapter 11) and also exponential-type kernels. Baker and Forrester
and also Lassalle [1991a] studied the Hermite polynomials of type A. Some tech-
niques involving the p-basis and the symmetric and skew-symmetric polynomials
appeared in Dunkl [1998a] and in Baker, Dunkl and Forrester [2000]. The evalu-
ation of the pairing
i<j
(D
i
D
j
)
i<j
(x
i
x
j
), used here in norm calculations,
was used previously in Dunkl and Hanlon [1998].
Proposition 10.8.5 was proved in Dunkl and Luque [2011, Proposition 2.13].
Etingof [2010] gave a proof for the explicit evaluation integrals of Macdonald
MehtaSelberg type for all the irreducible groups except type B
N
.
11
Orthogonal Polynomials Associated with
Octahedral Groups, and Applications
11.1 Introduction
The adjoining of sign changes to the symmetric group produces the hyperoctahe-
dral group. Many techniques and results from the previous chapter can be adapted
to this group by considering only functions that are even in each variable. A sec-
ond parameter
/
is associated with the conjugacy class of sign changes. The
main part of the chapter begins with a description of the differentialdifference
operators for these groups and their effect on polynomials of arbitrary parity (odd
in some variables, even in the others). As in the type-A case there is a fundamen-
tal set of rst-order commuting self-adjoint operators, and their eigenfunctions
are expressed in terms of nonsymmetric Jack polynomials. The normalizing
constant for the Hermite polynomials, that is, the MacdonaldMehtaSelberg
integral, is computed by the use of a recurrence relation and analytic-function
techniques. There is a generalization of binomial coefcients for the nonsym-
metric Jack polynomials which can be used for the calculation of the Hermite
polynomials. Although no closed form is as yet available for these coefcients,
we present an algorithmic scheme for obtaining specic desired values (by sym-
bolic computation). Calogero and Sutherland were the rst to study nontrivial
examples of many-body quantum models and to show their complete integra-
bility. These systems concern identical particles in a one-dimensional space, the
line or the circle. The corresponding models have been extended to include non-
symmetric wave functions by allowing the exchange of spin values between two
particles. We give a concise description of the Schr odinger equations for the
models and of the construction of wave functions and commuting operators,
using type A and type B operators. Type A operators belong to the symmetric
groups; see Section 6.2 for a classication of reection groups. The chapter con-
cludes with notes on the research literature in current mathematics and physics
journals.
11.2 Operators of Type B 365
11.2 Operators of Type B
The Coxeter groups of type B are the symmetry groups of the hypercubes
(1, 1, . . . , 1) R
d
and of the hyperoctahedra
i
: 1 i d. For given
d, the corresponding group is denoted W
d
and is of order 2
d
d!. For compatibility
with the previous chapter we use ,
/
for the values of the multiplicity function,
with assigned to the class of roots
i

j
: 1 i < j d and
/
assigned to
the class of roots
i
: 1 i d. For i ,= j the roots
i

j
,
i
+
j
correspond
to reections
i j
,
i j
respectively, where, for x R
d
,
x
i j
= (x
1
, . . . ,
i
x
j
, . . . ,
j
x
i
, . . . , x
d
),
x
i j
= (x
1
, . . . ,
i
x
j
, . . . ,
j
x
i
, . . . , x
d
).
As before, the superscripts above a symbol refer to its position in the list of
elements. For 1 i d the root
i
corresponds to the reection (sign change)
x
i
= (x
1
, . . . ,
i
x
i
, . . . , x
d
).
Note that
i

i j

i
=
j

i j

j
=
i j
. The associated Dunkl operators are as follows.
Denition 11.2.1 For 1 i d and f
d
the type-B operator D
i
is given by
D
i
f (x) =

x
i
f (x) +
/
f (x) f (x
i
)
x
i
+

j,=i
_
f (x) f (x
i j
)
x
i
x
j
+
f (x) f (x
i j
)
x
i
+x
j
_
.
Most of the work in constructing orthogonal polynomials was done in the
previous chapter. The results transfer, with these conventions: the superscript A
indicates the symmetric-group operators and, for x R
d
, let
y = (y
1
, . . . , y
d
) = (x
2
1
, . . . , x
2
d
),
D
A
i
g(y) =

y
i
g(y) +

j,=i
g(y) (i, j)g(y)
y
i
y
j
,
U
A
i
g(y) = D
A
i

i
g(y)

j<i
(i, j)g(y);
note that both
i j
and
i j
induce the transposition (i, j) on y and recall that D
A
i

i
=
D
A
i
y
i
+. To keep the numerous inner products distinct, we use , )
B
for the h
inner product, that is, for f , g
d
let
f , g)
B
= f (D
1
, . . . , D
d
)g(x
1
, . . . , x
d
)[
x=0
.
As in the type A case, we construct commuting self-adjoint operators whose
simultaneous eigenfunctions can be expressed in terms of the nonsymmetric Jack
366 Orthogonal Polynomials Associated with Octahedral Groups
polynomials. If two polynomials have opposite parities in the same variables then,
by a simple argument, they are orthogonal.
Denition 11.2.2 For any subset E i : 1 i d, let
x
E
=

iE
x
i
.
Suppose that g
1
, g
2
are polynomials in y and E, F are sets with E ,= F; then,
for any suitable inner product (specically, one that is invariant under the sign-
change group Z
d
2
), one has x
E
g
1
(y), x
F
g
2
(y)) = 0. Applying D
i
to a polynomial
x
E
g(y) gives two qualitatively different formulae depending on whether i E.
Proposition 11.2.3 Let g(y) be a polynomial and E j : 1 j d. Then,
for 1 i d, if i / E,
D
i
x
E
g(y) = 2x
i
x
E
D
A
i
g(y);
if i E,
D
i
x
E
g(y) = 2
x
E
x
i
_
(
/

1
2
+D
A
i
y
i
)g(y)

jE, j,=i
(i, j)g(y)
_
.
Proof We use the product rule, Proposition 6.4.12, specialized as follows:
D
i
x
E
g(y) = x
E
D
i
g(y) +g(y)
x
E
x
i
+
/
g(y)
x
E

i
x
E
x
i
+

j,=i
(i, j)g(y)
_
x
E

i j
x
E
x
i
x
j
+
x
E

i j
x
E
x
i
+x
j
_
.
Considering the rst term, we obtain
D
i
g(y) = 2x
i

y
i
g(y) +

j,=i
[g(y) (i, j)g(y)]
_
1
x
i
x
j
+
1
x
i
+x
j
_
= 2x
i
_

y
i
g(y) +

j,=i
g(y) (i, j)g(y)
y
i
y
j
_
= 2x
i
D
A
i
g(y).
Next, if i / E then x
E

i
x
E
= 0 and x
E

i j
x
E
= 0 = x
E

i j
x
E
for j / E. If
i / E and j E then
x
E

i j
x
E
x
i
x
j
+
x
E

i j
x
E
x
i
+x
j
=
x
E
x
j
_
x
j
x
i
x
i
x
j
+
x
j
+x
i
x
i
+x
j
_
= 0.
This proves the rst formula.
If i E then x
E

i
x
E
= 2x
E
and x
E

i j
x
E
=0 = x
E

i j
x
E
for j E, while
for j / E we have
x
E

i j
x
E
x
i
x
j
+
x
E

i j
x
E
x
i
+x
j
=
x
E
x
i
_
x
i
x
j
x
i
x
j
+
x
i
+x
j
x
i
+x
j
_
= 2
x
E
x
i
.
11.2 Operators of Type B 367
Thus, for i E,
D
i
x
E
g(y) = 2
x
E
x
i
_
y
i
D
A
i
g(y) +(
1
2
+
/
)g(y) +

(i, j)g(y) : j / E
_
.
The commutation y
i
D
A
i
g(y) = D
A
i
y
i
g(y) g(y)

j,=i
(i, j)g(y) established in
Lemma 10.2.1 nishes the proof.
Lemma 11.2.4 For i ,= j, the commutant
[D
i
x
i
, D
j
x
j
] =(D
i
x
i
D
j
x
j
)(
i j
+
i j
) =(D
i
x
i
,
i j
+
i j
).
Proof By Proposition 6.4.10, D
j
x
i
x
i
D
j
=(
i j

i j
) (as operators on
d
);
thus
D
i
x
i
D
j
x
j
D
j
x
j
D
i
x
i
=D
i
[D
j
x
i
+(
i j

i j
)]x
j
D
j
[D
i
x
j
+(
i j

i j
)]x
i
=D
i
(
i j

i j
)x
j
D
j
(
i j

i j
)x
i
=(D
i
x
i
D
j
x
j
)(
i j
+
i j
),
because
i j
x
j
= x
i

i j
and
i j
x
j
= x
i

i j
. The remaining equation follows from
(
i j
+
i j
)D
i
x
i
=D
j
x
j
(
i j
+
i j
).
Denition 11.2.5 For 1 i d, the self-adjoint (in , )
B
) operators U
i
are
given by
U
i
=D
i
x
i

j<i
(
i j
+
i j
).
Theorem 11.2.6 For 1 i, j d, U
i
U
j
=U
j
U
i
.
Proof Assume that i < j and let
rs
=
rs
+
rs
for r ,=s. Further, let U
i
=D
i
x
i

A, U
j
=D
j
x
j

i j
B, where A =
k<i

ki
and B =
k<j,k,=i

k j
. Then
[U
i
, U
j
] = [D
i
x
i
, D
j
x
j

i j
] [D
i
x
i
, B] [A, D
j
x
j
] +
2
[A,
i j
+B].
The rst term is zero by Lemma 11.2.4 and the next two are zero by the
transformation properties of D
i
x
i
and D
j
x
j
. The last term reduces to

2
i1

k=1
([
ki
,
k j
] +[
ki
,
i j
]) =
2
i1

k=1
_
(
ki

k j

i j

ki
) +(
ki

i j

k j

ki
)
_
.
Each bracketed term is zero; to see this, start with the known (from S
d
) relation

ki

k j

i j

ki
= 0 and conjugate it by
j
,
i
,
k
to obtain
ki

k j

i j

ki
= 0,

ki

k j

i j

ki
= 0,
ki

k j

i j

ki
= 0 respectively. (Replacing i, k by k, i shows
that the second term is zero.)
368 Orthogonal Polynomials Associated with Octahedral Groups
11.3 Polynomial Eigenfunctions of Type B
The nonsymmetric Jack polynomials immediately provide simultaneous eigen-
functions which are even in each variable.
Proposition 11.3.1 For N
d
0
and 1 i d,
U
i

(y) =
_

i
() +
/

1
2

(y).
Proof By Proposition 11.2.3,
D
i
x
i

(y)

j<i
(
i j
+
i j
)

(y)
= 2(
/

1
2
+D
A
i
y
i
)

(y) 2

j<i
(i, j)

(y)
= 2(
/

1
2
+U
A
i
)

(y)
= 2(
i
() +
/

1
2
)

(y).
Recall that D
A
i
y
i
=D
A
i

i
.
For mixed parity, the simplest structure involves polynomials of the form
x
1
x
2
x
k
g(y); this is a corollary, given below, of the following.
Lemma 11.3.2 Let E j : 1 j d; then, for i / E,
U
i
x
E
g(y) = 2x
E
_
(U
A
i
+
/

1
2
)

j>i, jE
(i, j)
_
g(y),
and, for i E,
U
i
x
E
g(y) = 2x
E
_
D
A
i
y
i


j<i, jE
(i, j)
_
g(y).
Proof For any j ,=i note that (
i j
+
i j
)x
E
g(y) =0 if #(i, jE) =1, otherwise
(
i j
+
i j
)x
E
g(y) = 2x
E
(i, j)g(y). For i / E, by the second part of Proposition
11.2.3,
U
i
x
E
g(y) = 2x
E
_
(D
A
i
y
i
+
/

1
2
)

jE
(i, j)
_
g(y) 2x
E
j<i, j / E
(i, j)g(y),
which proves the rst part. For i E, note that x
i
x
E
= x
E
y
i
/x
i
and, by the rst
part of Proposition 11.2.3,
U
i
x
E
g(y) = D
i
x
i
x
E
g(y) 2x
E
j<i, jE
(i, j)g(y)
= 2x
E
_
D
A
i
y
i
g(y)

j<i, jE
(i, j)g(y)
_
.
This completes the proof.
11.3 Polynomial Eigenfunctions of Type B 369
Corollary 11.3.3 For 1 k d and N
d
0
, let E = j : 1 j k; then, for
1 i k,
U
i
x
E

(y) = 2[
i
() ]x
E

(y)
and, for k < i d,
U
i
x
E

(y) = 2[
i
() +
/

1
2
]x
E

(y).
It is clear from S
d
theory that x
E

(y) : N
d
0
is a complete set of simulta-
neous eigenfunctions for U
i
of this parity type. To handle arbitrary subsets
E one uses a permutation which preserves the relative order of the indices
corresponding to even and odd parities, respectively. For given k, we consider per-
mutations w S
d
with certain properties; the set w(E) = w(1), w(2), . . . , w(k)
will be the set of indices having odd parities, that is, wx
E
=
k
i=1
x
w(i)
.
Proposition 11.3.4 For 1 k < d, let E = j : 1 j k, N
d
0
, and let w
S
d
with the property that w(i) <w( j) whenever 1 i < j k or k+1 i < j d;
then, for 1 i k,
U
w(i)
wx
E

(y) = 2[
i
() ]wx
E

(y)
and, for k < i d,
U
w(i)
wx
E

(y) = 2[
i
() +
/

1
2
]wx
E

(y).
Proof The transformation properties wD
A
i
y
i
=D
A
w(i)
y
w(i)
w and
(r, s)w = w(w
1
(r), w
1
(s)), for r ,= s
will be used. When 1 i k, by Lemma 11.3.2 we have
U
w(i)
wx
E

(y) = 2x
w(E)
w
_
D
A
i
y
i


j<w(i), jw(E)
(w
1
( j), i)
_

(y)
but the set w
1
( j) : j <w(i), j w(E) equals r : 1 r <i by the construction
of w. Thus U
w(i)
wx
E

(y) = 2x
w(E)
w(U
A
i
)

(y) = 2[
i
() ]wx
E

(y).
When k < i d, we express the rst part of Lemma 11.3.2 as
U
w(i)
wx
E

(y) = 2x
w(E)
w
_
D
A
i
y
i
+
/

1
2


jw(E)
(w
1
( j), i)

j<w(i), j / w(E)
(w
1
( j), i)
_

(y)
= 2x
w(E)
w(U
A
i
+
/

1
2
)

(y).
The set r : 1 r < i equals E r : k < r < i and, again by the construction
of w, r : k < r < i =w
1
( j) : j < w(i), j / w(E).
370 Orthogonal Polynomials Associated with Octahedral Groups
To restate the conclusion: the space of polynomials which are odd in the
variables x
w(i)
, 1 i k, is the span of the U
i
simultaneous eigenfunctions
wx
E

(y).
To conclude this section, we compute the , )
B
norms of x
E

(y) for E =r :
1 r k. By group invariance the norm of wx
E

(y) has the same value. First


the S
d
-type results will be used for the all-even case

(y); this depends on the


use of permissible (in the sense of Section 10.3) inner products.
Proposition 11.3.5 The inner product , )
B
restricted to polynomials in y =
(x
2
1
, . . . , x
2
d
) is permissible.
Proof Indeed, for f , g
d
the value of this inner product is given by
f , g)
B
= f (D
2
1
, . . . , D
2
d
)g(x
2
1
, . . . , x
2
d
)[
x=0
.
The invariance wf , wg)
B
= f , g)
B
for w S
d
is obvious. By the second part
of Proposition 11.2.3, D
A
i
y
i
f (y) = (
1
2
D
i
x
i
+
1
2

/
) f (y); the right-hand side is
clearly a self-adjoint operator in , )
B
.
By Theorem 10.4.8 the following holds.
Corollary 11.3.6 For N
d
0
and =
+
,

(y),

(y))
B
=E
+
()E

()

(y),

(y))
B
.
As in Section 10.6, the calculation of

(y),

(y))
B
for N
d,P
0
depends
on a recurrence relation involving D
A
m

, where m = (). We recall some key


details from Section 10.6. For 1 < m d let
m
= (1, 2)(2, 3) (m1, m) S
d
.
When m = (), let

= (
m
1,
1
,
2
, . . . ,
m1
, 0. . .); then D
A
m

= [(d
m+1) +
m
]
1
m

. This fact will again be used for norm calculations. For such
, for conciseness in calculations set
a

= (d m) +
/
+
m

1
2
,
b

= (d m+1) +
m
.
Lemma 11.3.7 Suppose that N
d,P
0
and m = (). Then
D
2
m

(y) = 4a

D
A
m

(y) = 4a

1
m

(y).
Proof By Proposition 11.2.3,
D
2
m

(y) =D
m
[2x
m
D
A
m

(y)] = 4[(
/

1
2
)D
A
m

(y) +D
A
m
y
m
D
A
m

(y)].
By Lemma 10.5.3, D
A
m
y
m
D
A
m

(y) = [
m
() 1]D
A
m

(y) because D
A
m
y
m
=
D
A
m

m
. Since
m
() = (d m+1) +
m
+1, we obtain
D
2
m

(y) = 4[
/

1
2
+
m
() 1]D
A
m

(y) = 4a

D
A
m

(y).
Finally, D
A
m

(y) = b

1
m

(y).
11.3 Polynomial Eigenfunctions of Type B 371
Theorem 11.3.8 Suppose that N
d,P
0
. Then

(y),

(y))
B
= 2
2[[
(d +1)

((d 1) +
/
+
1
2
)

h(, +1)
h(, 1)
.
Proof Suppose that m = (). By the dening property of , )
B
we have on the
one hand

D
2
m

(y), D
2
m

(y)
_
B
=

x
2
m
D
2
m

(y),

(y)
_
B
= 4a

x
2
m
D
A
m

(y),

(y)
_
B
= 4a

y
m
D
A
m

(y),

(y)
_
B
= 4a

m
+
m

(y),

(y))
B
,
because

y
m
D
A
m

(y),

(y)
_
B
= [
m
/( +
m
)]b

(y),

(y))
B
by part (iii) of
Lemma 10.5.2. On the other hand,

D
2
m

(y), D
2
m

(y)
_
B
= (4a

)
2
_

1
m

(y),
1
m

(y)
_
B
= (4a

)
2
_

(y),

(y)
_
B
= (4a

)
2
E
+
(

)E

m
(y),

m
(y)
_
B
.
Combining the two displayed equations shows that

(y),

(y))
B
= 4a

+
m

m
E
+
(

)E

m
(y),

m
(y)
_
B
.
From Lemma 10.6.4,
+
m

m
E
+
(

)E

) =
h(, +1)h(
m
, 1)
h(
m
, +1)h(, 1)
.
Moreover,
(d +1)

(d +1)

m
= (d m+1) +
m
= b

and
((d 1) +
/
+
1
2
)

((d 1) +
/
+
1
2
)

m
= (d m) +
/
+
m

1
2
= a

.
The proof is completed by induction on [[.
When = 0 =
/
these formulae reduce to the trivial identity

x

, x

_
=

d
i=1

i
!, because 2
2n
(1)
n
(
1
2
)
n
= (2n)! for n N
0
. It remains to compute the quan-
tity x
E

(y), x
E

(y))
B
for E = j : 1 j k and N
d
0
. The following is
needed to express the result. Let r| denote the largest integer r.
372 Orthogonal Polynomials Associated with Octahedral Groups
Denition 11.3.9 For N
d
0
let e(), o() N
d
0
with e()
i
=
i
/2| and
o()
i
=
i
e()
i
for 1 i d.
(Roughly, e() and o() denote the even and odd components of .) We start
with N
d
0
such that
i
is odd exactly when 1 i k and consider the U
i

simultaneous eigenfunctions indexed by , which are given by x


E

(y) where
= e().
Theorem 11.3.10 Suppose that 1 k d and N
d
0
satises the condition
that
i
is odd for 1 i k and
i
is even for k < i, and let E = j : 1 j k.
Then

x
E

e()
(y), x
E

e()
(y)
_
B
= 2
[[
(d +1)
e()
+((d 1) +
/
+
1
2
)
o()
+
h(e(), +1)
h(e(), 1)
.
Proof For any N
d
0
and 1 m k, by Proposition 11.2.3 we have
D
m
x
1
x
2
x
m

(y) = 2x
1
x
2
x
m1
_

1
2
+D
A
m
y
m


j<m
( j, m)
_

(y)
= 2x
1
x
2
x
m1
(
/

1
2
+U
A
m
)

(y)
= 2[
m
() +
/

1
2
]x
1
x
2
x
m1

(y).
Let = e(); then, using this formula inductively, we obtain

x
E

e()
(y), x
E

e()
(y)
_
B
=

D
1
D
k
x
E

e()
(y),
e()
(y)
_
B
= 2
k
k

i=1
[
i
() +
/

1
2
]

e()
(y),
e()
(y)
_
B
.
The last inner product has already been evaluated (and 2
2[e()[
2
k
= 2
[[
), so it
remains to show that
k

i=1
[
i
() +
/

1
2
]((d 1) +
/
+
1
2
)
e()
+ = ((d 1) +
/
+
1
2
)
o()
+.
Let =e() and =
+
and let wS
d
be such that
i
=
w(i)
and if
i
=
j
for
some i < j then w(i) < w( j). This implies that =o()
+
satises o()
i
=
w(i)
,
specically that
w(i)
=
i
+1 for 1 i k and
w(i)
=
i
for k < i. This shows
that N
d,P
0
; indeed by construction w(i) < w( j) implies that
w(i)

w( j)
and

w(i)

w( j)
. The latter could only be negated if
i
=
w(i)
=
w(i)
and
w( j)
=

j
+1 (so that j k < i), but
j
+1 >
i

j
implies that
i
=
j
and j < i
implies that w( j) < w(i), a contradiction.
11.3 Polynomial Eigenfunctions of Type B 373
For 1 i k, we have
i
() =[d # j :
j
>
i
# j : j < i;
j
=
i
] +

i
+1 = [d w(i) +1] +
w(i)
+1 = [d w(i) +1] +
w(i)
. Thus
k

i=1
[
i
() +
/

1
2
]((d 1) +
/
+
1
2
)

=
k

i=1
_
[d w(i)] +
/
+
w(i)

1
2
_
d

i=1
_
[d w(i)] +
/
+
1
2
_

w(i)
=
k

i=1
_
[d w(i)] +
/
+
1
2
_

w(i)
d

i=k+1
_
[d w(i)] +
/
+
1
2
_

w(i)
= ((d 1) +
/
+
1
2
)

.
This completes the proof.
Example 11.3.11 To illustrate the construction of w in the above proof, let =
(7, 5, 7, 2, 6, 8); then = e() = (3, 2, 3, 1, 3, 4), =
+
= (4, 3, 3, 3, 2, 1), =
o()
+
= (4, 4, 4, 3, 3, 1) and
w =
_
1 2 3 4 5 6
2 5 3 6 4 1
_
,
using the standard notation for permutations.
We turn now to the Selberg-Macdonald integral and use the same techniques as in the type-A case. The alternating polynomial for $S_d$, namely
$$a_B(x)=\prod_{1\le i<j\le d}\big(x_i^2-x_j^2\big)=a(y),$$
plays a key role in these calculations. For the type-B inner product the value of $\langle a_B,a_B\rangle_B$ shows the effect of changing $\kappa$ to $\kappa+1$. Also, $a_B$ is $h$-harmonic because $\Delta_h a_B$ is (under the action of $\sigma_{ij}$ or $\tau_{ij}$) a skew polynomial of degree $2\binom d2-2$, hence $0$ (note that $\Delta_h=\sum_{i=1}^d\mathcal{D}_i^2$). As before, let
$$\delta=(d-1,d-2,\ldots,1,0)\in\mathbb{N}_0^{d,P};$$
then $a_B$ is the unique skew element of $\operatorname{span}\{w\zeta_\delta:w\in W_d\}$ (and $a=a_\delta$ in Definition 10.4.9). Thus the previous results will give the values of $\langle a_B,a_B\rangle_B$.
Recall that, for $f,g\in\Pi^d$ (Theorem 7.2.7),
$$\langle f,g\rangle_B=(2\pi)^{-d/2}\,b(\kappa,\kappa')\int_{\mathbb{R}^d}\big(e^{-\Delta_h/2}f\big)\big(e^{-\Delta_h/2}g\big)\prod_{i=1}^d|x_i|^{2\kappa'}\,|a_B(x)|^{2\kappa}\,e^{-|x|^2/2}\,dx,$$
where the normalizing constant $b(\kappa,\kappa')$ satisfies $\langle1,1\rangle_B=1$. Since $e^{-\Delta_h/2}a_B=a_B$, the following holds (note that $2|\delta|=d(d-1)$).
Theorem 11.3.12 We have
$$\langle a_B,a_B\rangle_B=2^{d(d-1)}\,d!\,(d\kappa+1)_\delta\,\big((d-1)\kappa+\kappa'+\tfrac12\big)_\delta$$
and, for $\kappa,\kappa'\ge0$,
$$b(\kappa,\kappa')^{-1}=(2\pi)^{-d/2}\int_{\mathbb{R}^d}\prod_{i=1}^d|x_i|^{2\kappa'}\prod_{1\le i<j\le d}\big|x_i^2-x_j^2\big|^{2\kappa}\,e^{-|x|^2/2}\,dx
=2^{d[(d-1)\kappa+\kappa']}\,\frac{\Gamma\big(\kappa'+\frac12\big)}{\pi^{d/2}\,\Gamma(\kappa+1)^{d-1}}\prod_{j=2}^d\Gamma(j\kappa+1)\,\Gamma\big((j-1)\kappa+\kappa'+\tfrac12\big).$$
Proof Using the same method as in the proof of Theorem 10.6.17, and by Theorem 11.3.8,
$$\langle a_B,a_B\rangle_B=2^{d(d-1)}\,d!\,\mathcal{E}_\delta(\kappa)\,(d\kappa+1)_\delta\big((d-1)\kappa+\kappa'+\tfrac12\big)_\delta\,\frac{h(\delta,\kappa+1)}{h(\delta,1)}
=2^{d(d-1)}\,d!\,(d\kappa+1)_\delta\big((d-1)\kappa+\kappa'+\tfrac12\big)_\delta.$$
Further,
$$b(\kappa+1,\kappa')^{-1}=(2\pi)^{-d/2}\int_{\mathbb{R}^d}\prod_{i=1}^d|x_i|^{2\kappa'}\prod_{i<j}\big|x_i^2-x_j^2\big|^{2\kappa+2}e^{-|x|^2/2}\,dx=b(\kappa,\kappa')^{-1}\,\langle a_B,a_B\rangle_B.$$
From the proof of Theorem 10.6.17 we have
$$\prod_{j=2}^d\frac{\Gamma(j\kappa+1)}{\Gamma(\kappa+1)}\;d!\,(d\kappa+1)_\delta=\prod_{j=2}^d\frac{\Gamma(j\kappa+j+1)}{\Gamma(\kappa+2)}.$$
Also,
$$\prod_{j=2}^d\Gamma\big((j-1)\kappa+\kappa'+\tfrac12\big)\,\big((d-1)\kappa+\kappa'+\tfrac12\big)_\delta=\prod_{j=2}^d\Gamma\big((j-1)(\kappa+1)+\kappa'+\tfrac12\big)$$
and
$$b(0,\kappa')^{-1}=(2\pi)^{-d/2}\int_{\mathbb{R}^d}\prod_{i=1}^d|x_i|^{2\kappa'}e^{-|x|^2/2}\,dx=\Bigg[\frac{2^{\kappa'}\,\Gamma\big(\kappa'+\frac12\big)}{\Gamma\big(\frac12\big)}\Bigg]^d,$$
by an elementary calculation. We will use the same technique as in the type-A case, first converting the integral to one over the sphere $S^{d-1}$, a compact set. Let $\mu=d(d-1)\kappa+\kappa'd$. Using spherical polar coordinates,
$$(2\pi)^{-d/2}\int_{\mathbb{R}^d}\prod_{i=1}^d|x_i|^{2\kappa'}\prod_{i<j}\big|x_i^2-x_j^2\big|^{2\kappa}e^{-|x|^2/2}\,dx
=2^{\mu}\,\sigma_{d-1}^{-1}\,\frac{\Gamma\big(\frac d2+\mu\big)}{\Gamma\big(\frac d2\big)}\int_{S^{d-1}}\prod_{i=1}^d|x_i|^{2\kappa'}\prod_{i<j}\big|x_i^2-x_j^2\big|^{2\kappa}\,d\omega(x);$$
the normalizing constant (see (4.1.3)) satisfies $\sigma_{d-1}^{-1}\int_{S^{d-1}}d\omega=1$. Fix $\kappa'\ge0$ and define the analytic function
$$\Phi(\kappa)=\Bigg\{2^{-d[(d-1)\kappa+\kappa']}\prod_{j=1}^d\frac{\Gamma(\kappa+1)\,\Gamma(\frac12)}{\Gamma(j\kappa+1)\,\Gamma\big((j-1)\kappa+\kappa'+\frac12\big)}\Bigg\}
\Bigg\{2^{\mu}\,\sigma_{d-1}^{-1}\,\frac{\Gamma\big(\frac d2+\mu\big)}{\Gamma\big(\frac d2\big)}\int_{S^{d-1}}\prod_{i=1}^d|x_i|^{2\kappa'}\prod_{i<j}\big|x_i^2-x_j^2\big|^{2\kappa}\,d\omega(x)\Bigg\}.$$
The formula for $b(\kappa,\kappa')/b(\kappa+1,\kappa')$ shows that $\Phi(\kappa+1)=\Phi(\kappa)$ and, also, that $\Phi(0)=1$. By analytic continuation $\Phi$ is entire and periodic. On the strip $\{\kappa:1\le\operatorname{Re}\kappa\le2\}$ the factors of $\Phi$ in the second line are uniformly bounded. Apply the asymptotic formula (from the proof of Theorem 10.6.17), valid in $-\pi<\arg z<\pi$ and $a,b>0$,
$$\log\Gamma(az+b)=az\,(\log z+\log a-1)+\big(b-\tfrac12\big)(\log z+\log a)+\tfrac12\log(2\pi)+O\big(|z|^{-1}\big)$$
to the first line of the expression for $\Phi(\kappa)$:
$$\log\Gamma\big(d[(d-1)\kappa+\kappa'+\tfrac12]\big)+(d-1)\log\Gamma(\kappa+1)-\sum_{j=2}^d\log\Gamma(j\kappa+1)+d\log\Gamma\big(\tfrac12\big)-\sum_{j=0}^{d-1}\log\Gamma\big(j\kappa+\kappa'+\tfrac12\big)$$
$$=\big[d(d-1)\kappa+d\kappa'+\tfrac{d-1}2\big]\log[d(d-1)\kappa]-\sum_{j=2}^d\big(j\kappa+\tfrac12\big)\log j-\sum_{j=1}^{d-1}\big(j\kappa+\kappa'\big)\log j-\tfrac{d-2}2\log(2\pi)-\log\Gamma\big(\kappa'+\tfrac12\big)+\tfrac d2\log\kappa$$
$$\quad+\Big[d(d-1)+(d-1)-\sum_{j=2}^dj-\sum_{j=1}^{d-1}j\Big]\kappa\,(\log\kappa-1)+\Big[d\kappa'+\tfrac{d-1}2+\tfrac{d-1}2-\tfrac{d-1}2-(d-1)\kappa'\Big]\log\kappa+O\big(|\kappa|^{-1}\big)$$
$$=C_1(d,\kappa')+C_2(d)\,\kappa+\big(\kappa'+\tfrac{d-1}2\big)\log\kappa+O\big(|\kappa|^{-1}\big),$$
where $C_1,C_2$ are real constants depending on $d,\kappa'$. Therefore, in the strip $\{\kappa:1\le\operatorname{Re}\kappa\le2\}$, we have $\Phi(\kappa)=O\big(|\kappa|^{-\kappa'+(d-1)/2}\big)$. By periodicity the same bound applies for all $\kappa$, implying that $\Phi$ is a periodic polynomial and hence constant. This proves the formula for $b(\kappa,\kappa')$.
Note that the most important part of the above asymptotic calculation was to show that the coefficient of $\kappa\log\kappa$ is zero. The value of the integral in Theorem 11.3.12 is the type-B case of a general result for root systems of Macdonald [1982]; it was conjectured by Mehta [1991] yet turned out to be a limiting case of Selberg's integral (an older result, Selberg [1944]). The proof given here could be said to be almost purely algebraic but, of course, it does use asymptotics for the gamma function and some elementary entire-function theory.
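As a quick plausibility check (ours, not from the text), the closed form for $b(\kappa,\kappa')^{-1}$ can be tested at $d=2$ with integer multiplicities, where the integrand is a polynomial times a Gaussian and the integral reduces to moments of independent standard normal variables.

```python
from math import gamma, pi

def dfact(n):                     # (2n-1)!! = E[X^(2n)] for X ~ N(0,1)
    out = 1
    for j in range(1, 2 * n, 2):
        out *= j
    return out

# d = 2, kappa = kappa' = 1: the integral is
# E[X^2 Y^2 (X^2 - Y^2)^2] = E[X^6 Y^2] - 2 E[X^4 Y^4] + E[X^2 Y^6] = 12.
lhs = dfact(3) * dfact(1) - 2 * dfact(2) * dfact(2) + dfact(1) * dfact(3)

d, k, kp = 2, 1, 1                # the closed form of Theorem 11.3.12
rhs = (2 ** (d * ((d - 1) * k + kp))
       * gamma(kp + 0.5) / (pi ** (d / 2) * gamma(k + 1) ** (d - 1))
       * gamma(2 * k + 1) * gamma(k + kp + 0.5))
assert lhs == 12 and abs(rhs - lhs) < 1e-9
```

Both sides evaluate to $12$, as predicted by the theorem.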
11.4 Generalized Binomial Coefficients
The use of generalized binomial coefficients exploits the facts that the vector $\mathbf1_d=(1,1,\ldots,1)\in\mathbb{R}^d$ is invariant under $S_d$, or equivalently, orthogonal to all the roots $e_i-e_j$, $i<j$, and that $\sum_{i=1}^d\mathcal{D}_i^A=\sum_{i=1}^d\partial/\partial y_i$. For consistency, we will continue to use $y$ as the variable involved in type-A actions. Baker and Forrester [1998] introduced the concept, motivated by the binomial coefficients previously defined in connection with the Jack polynomials.

Definition 11.4.1 For $\alpha,\beta\in\mathbb{N}_0^d$ and parameter $\kappa\ge0$, the binomial coefficient $\binom\alpha\beta$ is defined by the expansion
$$\frac{\zeta_\alpha(y+\mathbf1_d)}{\zeta_\alpha(\mathbf1_d)}=\sum_\beta\binom\alpha\beta\frac{\zeta_\beta(y)}{\zeta_\beta(\mathbf1_d)}.$$
We will show that $\binom\alpha\beta=0$ unless $\beta^+\subseteq\alpha^+$ (this ordering on partitions means the inclusion of Ferrers diagrams, that is, $\beta^+_i\le\alpha^+_i$ for $1\le i\le d$). Also, there is a recurrence relation which needs only the values of $\binom\alpha\beta$ at $|\beta|=|\alpha|-1$ at the start (however, even these are complicated to obtain; no general formula is as yet available). Ordinary calculus and homogeneity properties provide some elementary identities. Of course, when $\kappa=0$ the polynomials $\zeta_\alpha$ reduce to $y^\alpha$ and $\binom\alpha\beta_0=\prod_{i=1}^d\binom{\alpha_i}{\beta_i}$, a product of ordinary binomial coefficients.
Proposition 11.4.2 For $\alpha\in\mathbb{N}_0^d$ and $\kappa\ge0$, $s\in\mathbb{R}$, $j\in\mathbb{N}_0$,
$$\frac{\zeta_\alpha(y+s\mathbf1_d)}{\zeta_\alpha(\mathbf1_d)}=\sum_\beta s^{|\alpha|-|\beta|}\binom\alpha\beta\frac{\zeta_\beta(y)}{\zeta_\beta(\mathbf1_d)}$$
and
$$\Big(\sum_{i=1}^d\mathcal{D}_i^A\Big)^j\zeta_\alpha(y)=j!\sum_{|\beta|=|\alpha|-j}\binom\alpha\beta\frac{\zeta_\alpha(\mathbf1_d)}{\zeta_\beta(\mathbf1_d)}\,\zeta_\beta(y).$$
Proof For the first part, suppose $s\ne0$; then $s^{-|\alpha|}\zeta_\alpha(y+s\mathbf1_d)=\zeta_\alpha(s^{-1}y+\mathbf1_d)$. In Definition 11.4.1 replace $y$ by $s^{-1}y$ and observe that $\zeta_\beta$ is homogeneous of degree $|\beta|$. Consider the first equation as a Taylor series in $s$ at $s=0$; then use
$$\frac\partial{\partial s}=\sum_{i=1}^d\frac\partial{\partial y_i}=\sum_{i=1}^d\mathcal{D}_i^A.$$
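As a sanity check (ours, not from the text), at $\kappa=0$ the defining expansion is just the multinomial expansion of $\prod_i(y_i+1)^{\alpha_i}$, so the coefficients must be products of ordinary binomial coefficients. The sketch below expands $(y+1)^a$ by Pascal's rule, independently of `math.comb`, and compares.

```python
from math import comb, prod
from itertools import product

def binomial_row(a):
    # coefficients of (y + 1)^a, built by Pascal's rule (no comb())
    row = [1]
    for _ in range(a):
        row = [s + t for s, t in zip(row + [0], [0] + row)]
    return row                       # row[b] = coefficient of y^b

alpha = (3, 1, 2)
rows = [binomial_row(a) for a in alpha]
for beta in product(*(range(a + 1) for a in alpha)):
    expanded = prod(rows[i][b] for i, b in enumerate(beta))
    # kappa = 0 generalized binomial coefficient = product of ordinary ones
    assert expanded == prod(comb(a, b) for a, b in zip(alpha, beta))
```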
Proposition 11.4.3 For $\alpha,\beta\in\mathbb{N}_0^d$, $\kappa\ge0$ and $|\beta|<m<|\alpha|$,
$$\binom\alpha\beta\binom{|\alpha|-|\beta|}{m-|\beta|}=\sum_{|\gamma|=m}\binom\alpha\gamma\binom\gamma\beta.$$
Proof For arbitrary $s,t\in\mathbb{R}$,
$$\sum_\beta(s+t)^{|\alpha|-|\beta|}\binom\alpha\beta\frac{\zeta_\beta(y)}{\zeta_\beta(\mathbf1_d)}=\frac{\zeta_\alpha\big(y+(s+t)\mathbf1_d\big)}{\zeta_\alpha(\mathbf1_d)}=\sum_\gamma t^{|\alpha|-|\gamma|}\binom\alpha\gamma\frac{\zeta_\gamma(y+s\mathbf1_d)}{\zeta_\gamma(\mathbf1_d)}=\sum_\gamma t^{|\alpha|-|\gamma|}\binom\alpha\gamma\sum_\beta s^{|\gamma|-|\beta|}\binom\gamma\beta\frac{\zeta_\beta(y)}{\zeta_\beta(\mathbf1_d)}.$$
The coefficient of $s^{m-|\beta|}\,t^{|\alpha|-m}$ in the equations gives the claimed formula.
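The identity can be exercised numerically at $\kappa=0$, where the coefficients factor into products of ordinary binomials (a sketch, ours):

```python
from math import comb, prod
from itertools import product

def gb(a, b):
    # generalized binomial coefficient at kappa = 0
    if any(x < y for x, y in zip(a, b)):
        return 0
    return prod(comb(x, y) for x, y in zip(a, b))

alpha, beta = (3, 2, 1), (1, 0, 0)
m = 3                                # |beta| = 1 < m < |alpha| = 6
lhs = gb(alpha, beta) * comb(sum(alpha) - sum(beta), m - sum(beta))
rhs = sum(gb(alpha, g) * gb(g, beta)
          for g in product(range(4), repeat=3) if sum(g) == m)
assert lhs == rhs == 30
```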
This shows that the investigation of $\binom\alpha\beta$ begins by setting $|\beta|=|\alpha|-1$, that is, by using the expansion of $\sum_{i=1}^d\mathcal{D}_i^A\zeta_\alpha$. The relevance of the binomial coefficients lies in the computation of the type-B Hermite polynomials $e^{-\Delta_h/2}x_E\,\zeta_\alpha(y)$ (where $E=\{j:1\le j\le k\}$ for some $k$).

Proposition 11.4.4 For $\alpha\in\mathbb{N}_0^d$ and $\kappa\ge0$, $j\in\mathbb{N}_0$,
$$\Big(\sum_{i=1}^dy_i\Big)^j\zeta_\alpha(y)=j!\sum_{|\beta|=|\alpha|+j}\binom\beta\alpha\frac{\mathcal{E}^+(\beta)\,h(\beta^+,\kappa+1)}{\mathcal{E}^+(\alpha)\,h(\alpha^+,\kappa+1)}\,\zeta_\beta(y).$$
Proof The adjoint of multiplication by $\big(\sum_{i=1}^dy_i\big)^j$ in the $h$ inner product is $\big(\sum_{i=1}^d\mathcal{D}_i^A\big)^j$. In an abstract sense, if $T$ is a linear operator with matrix $M_{\beta\alpha}$, where $T\zeta_\alpha=\sum_\beta M_{\beta\alpha}\zeta_\beta$, then the adjoint $T^*$ has matrix
$$M^*_{\alpha\beta}=M_{\beta\alpha}\,\frac{\langle\zeta_\beta,\zeta_\beta\rangle_h}{\langle\zeta_\alpha,\zeta_\alpha\rangle_h}.$$
By Proposition 10.6.5 and Theorem 10.6.6, $\zeta_\alpha(\mathbf1_d)=\mathcal{E}^-(\alpha)\,(d\kappa+1)_{\alpha^+}\big/h(\alpha^+,1)$ and $\langle\zeta_\alpha,\zeta_\alpha\rangle_h=\mathcal{E}^+(\alpha)\,\mathcal{E}^-(\alpha)\,(d\kappa+1)_{\alpha^+}\,h(\alpha^+,\kappa+1)\big/h(\alpha^+,1)$ for any $\alpha\in\mathbb{N}_0^d$.
In the $B$ inner product the adjoint of multiplication by $\big(\sum_{i=1}^dy_i\big)^j=\big(\sum_{i=1}^dx_i^2\big)^j$ is $\Delta_h^j$. For $\alpha\in\mathbb{N}_0^d$ and $0\le k\le d$, let
$$E_k=\{j:1\le j\le k\},\qquad\delta_k=\sum_{i=1}^k\varepsilon_i=(1,1,\ldots,1,0,\ldots,0).$$
Proposition 11.4.5 For N
d
0
and 0, j N
0
, 0 k d,
(4
j
j!)
1

j
h
x
E
k

(y)
=

[[=[[j
__

(d +1)

+((d 1) +
/
+
1
2
)
(+
k
)
+ h(, 1)
(d +1)

+((d 1) +
/
+
1
2
)
(+
k
)
+ h(, 1)
x
E
k

(y)
_
.
Proof We use the adjoint formula from Proposition 11.4.4. From Theorem
11.3.10, for any N
d
0
,

x
E
k

(y), x
E
k

(y)
_
B
= 2
2[[+k
(d +1)

+((d 1) +
/
+
1
2
)
(+
k
)
+
h(, +1)
h(, 1)
.
This proves the formula.
Corollary 11.4.6 For N
d
0
and [[ = m, the lowest-degree term in the
expression e

h
/2
x
E
k

(y) is
(
1
2

h
)
m
m!
x
E
k

(y)
= (2)
m
(d +1)

+
((d 1) +
/
+
1
2
)
(+
k
)
+
((d 1) +
/
+
1
2
)

k
E

()
h(
+
, 1)
x
E
k
= (2)
m
((d 1) +
/
+
1
2
)
(+
k
)
+

k
i=1
[(d i) +
/
+
1
2
]
x
E
k

(1
d
).
Proof It is clear from the denition that
_

0
_

= 1, where 0 N
d
0
. Further,

(1
d
) =E

()(d +1)

+/h(
+
, 1).
We turn to the problem posed by

d
i=1
D
A
i

. Recall from Chapter 8 the def-


inition of the invariant subspace E

associated with N
d,P
0
, namely, E

=
spanw

: w S
d
= span

:
+
= . This space is invariant under S
d
and
the operators D
A
i
y
i
. Also recall, from Section 10.8, the intertwining operator V,
with D
A
i
V =V/y
i
, and the utility operator , where p

= y

/!.
Theorem 11.4.7 For N
d,P
0
, if q(y) E

and 1 i d then
D
A
i
q =

j
>
j+1
((d j +1) +
j
)q
j
,
where q
j
E

j
and D
A
i

i
q
j
= (
j
() 1)q
j
, for j such that
j
>
j+1
.
Proof Expand q in the p-basis; then q =
i
g +q
0
, where D
A
i
q
0
= 0. This is
obtained by setting q
0
as the part of the expansion in which all appearing p

11.4 Generalized Binomial Coefcients 379


have
i
= 0 (and in
i
g each term p

has
i
1, by the denition of
i
). Expand
g as a sum

of projections onto E

with [[ =[[ 1, that is,


q =
i

+q
0
, D
A
i
q =D
A
i

i

.
Apply V to both sides of the rst equation. It was shown in Section 10.8 that
V is scalar on each E

with eigenvalue 1/(d +1)

. Also, (/y
i
) p

(y) = 0
if
i
= 0 and (/y
i
) p

(y) = p

i
if
i
1. Thus [1/(d +1)

]D
A
i
q =
D
A
i
Vq =V(/y
i
)(
i

+q
0
) =V

[1/(d +1)

]g

, and so
D
A
i
q =

(d +1)

(d +1)

D
A
i

i
g

.
Since each E

is invariant under D
A
i

i
, this shows that g

is an eigenfunc-
tion of D
A
i

i
with eigenvalue (d +1)

/(d +1)

. The eigenvalues of D
A
i

i
=
(1, i)D
A
1

1
(1, i) are the same as those of U
A
1
=D
A
1

1
, namely,
1
() for N
d
0
;
additionally, each eigenvalue is a linear polynomial in d (and for xed

,
that is,
+
= , the coefcients of

in the p-basis are independent of d, where

s
= 0 for all s > k and d k), so we conclude that (d +1)

is a factor of
(d +1)

as a polynomial in d. This implies that


j

j
for each j. Combine
this with the restriction that N
d,P
0
and [[ = [[ 1 to show that =
j
for some j with
j
>
j+1
; this includes the possibility that
d
> 0. Also,
(d +1)

(d +1)

j
= (d j +1) +
j
=
j
() 1.
Corollary 11.4.8 For , N
d
0
and 0,
_

,= 0 implies that
+

+
.
Proof Theorem 11.4.7 shows that
_

,= 0 and [[ =[[ 1 implies that


+
=

j
for some j with
+
j
>
+
j+1
, that is,
+

+
. Propositions 11.4.2 and
11.4.3 and an inductive argument nish the proof.
Next we consider a stronger statement that can be made about the expression

d
i=1
D
A
i

for N
d,P
0
. The following is a slight modication of Lemma 10.5.3.
Lemma 11.4.9 For N
d
0
and 1 i, k d the following hold:
if i < k then
_
D
A
i

i

j<i
( j, i) (i, k)
_
D
A
k

=
i
()D
A
k

;
if i > k then
U
A
i
D
A
k

=
i
()D
A
k

+(k, i)D
A
i

.
380 Orthogonal Polynomials Associated with Octahedral Groups
Further,
D
A
k

k
D
A
k

= [
k
() 1]D
A
k

j>k
(k, j)D
A
j

.
Proof The case i < k is part (i) of Lemma 10.5.3. For i > k, we have
D
A
i

i
D
A
k

=D
A
k
D
A
i

i

+(i, k)D
A
k

and

j<i
( j, i)D
A
k

=D
A
k
j<i, j,=k
( j, i)

(k, i)D
A
k

.
Add the two equations to obtain
U
A
i
D
A
k

=D
A
k
_
D
A
i

i


j<i, j,=k
( j, i)
_

=D
A
k
U
A
i

+D
A
k
(k, i)

.
This proves the second part.
Finally, D
A
k

k
D
A
k

= D
A
k
[D
A
k

k

j,=k
( j, k) 1]

= D
A
k
(U
A
k
1)

D
A
k

j>k
( j, k)

and D
A
k
( j, k) = ( j, k)D
A
j
.
It is convenient in the following to have a notation for projections. For
N
d,P
0
, denote the orthogonal projection of f (y)
d
onto E

by () f , so that
f =

() f and () f E

.
The formulae in Lemma 11.4.9 can be applied to the projections (
j
) of
D
A
k

, for =
+
and
j
>
j+1
, because E

j
is invariant under D
A
i

i
and
elements of S
d
.
Proposition 11.4.10 For N
d,P
0
, if 1 j < k and
j
>
j+1
then we have
(
j
)D
A
k

= 0.
Proof Let g = (
j
)D
A
k

. By Lemma 11.4.9, [D
A
i

i

j<i
( j, i)
(i, k)]g =
i
()g for all i < k. The operators on the left-hand side can be
expressed as w
1
U
A
i+1
w for the cyclic permutation w = (1, 2, . . . , k) S
d
, with
eigenvalues drawn from
s
(
j
) : 1 s d on the space E

j
. These
eigenvalues are
1
(),
2
(), . . . ,
j
() 1, . . . ,
k
(), . . .;
j
() does not appear
in this list, hence g = 0.
Note that this argument uses the fact that is a partition in a crucial way.
Lemma 11.4.11 For N
d,P
0
,
j
>
j+1
and k j, let g
k
= (
j
)D
A
k

.
Then
11.4 Generalized Binomial Coefcients 381
_
D
A
i

i

s<i
(s, i)(i, k)
_
g
k
=
i
()g
k
for i < k;
D
A
k

k
g
k
= [
k
() 1]g
k

i=k+1
(k, i)g
i
;
U
A
i
g
k
=
i
()g
k
+(k, i)g
i
for k < i j;
U
A
i
g
k
=
i
()g
k
for i > j.
Proof This proceeds by an application of the projection (
j
) to each for-
mula in Lemma 11.4.9 in combination with the fact that (
j
)D
A
i

= 0 for
j < i (Proposition 11.4.10).
As in Denition 10.5.1, let
j
= (1, 2)(2, 3) ( j 1, j) S
d
. The following
has almost the same proof as Theorem 10.5.5.
Proposition 11.4.12 For N
d,P
0
and
j
>
j+1
, set

= (
j
1,
1
, . . . ,
j1
,
j+1
, . . .);
then
(
j
)D
A
j

= [(d j +1) +
j
]
1
j

.
Proof Let g =(
j
)D
A
j

. By Lemma 11.4.11, g satises the following:


_
D
A
i

i

s<i
(s, i) (i, k)
_
g =
i
()g, for i < j;
D
A
j

j
g = [
j
() 1]g,
U
A
i
g =
i
()g for i > j.
Just as in Lemma 10.5.3, these equations show that
j
g is a scalar multiple of

,
where the multiple is the coefcient of p

j
in D
A
j

. Use Lemma 10.5.4 (the


hypothesis m = () can be replaced by
j
>
j+1
), which implies that p

j
appears with a nonzero coefcient in D
A
j
p

if one of the following holds: (1)


=, with coefcient (d j +1) +
j
; (2) = +k(
j

i
) with coefcient
where 1 k
i

j
(this implies that i > j and ~ +k(
j

i
) or, more
precisely, for k <
i

j
, ~[ +k(
j

i
)]
+
and for k =
i

j
, ~(i, j) =
+k(
j

i
), so this case cannot occur); (3) = +k(
i

j
) with coefcient
and 1 k
j

i
1, thus
i

j
2 and i > j, but this case cannot occur
either, because ~ +k(
i

j
) when i > j.
The part of Lemma 11.4.11 dealing with D
A
k

k
g
k
actually contains more
information than appears at rst glance. It was shown in Theorem 11.4.7 that
(
j
)D
A
k

is an eigenfunction of D
A
k

k
with eigenvalue
j
() 1. Thus
382 Orthogonal Polynomials Associated with Octahedral Groups
the lemma implies that [
k
()
j
()]g
k
=

j
i=k+1
(k, i)g
i
; but this provides
an algorithm for computing g
j1
, g
j2
, . . . , g
1
from g
j
, which was obtained in
Proposition 11.4.12.
Algorithm 11.4.13 For N
d,P
0
and
j
>
j+1
,
(
j
)
d

i=1
D
A
i

= [(d j +1) +
j
]
j1
where

1j
=

j
,

1i
=
_
(i, i +1)

i
()
j
() +1
_

i
for i = j 1, j 2, . . . , 1,

i
=
_
(i, i +1) +

i
()
j
()
_

i1
for i = 1, 2, . . . , j 1.
Proof Let g
i
= [(d j +1) +
j
]
1
(
j
)D
A
i

; then g
i
= 0 for i > j (by
Proposition 11.4.10), so we require to show that
j1
=
j
i=1
g
i
. Note that these
polynomials have been rescaled from those in Lemma 11.4.11, but the same equa-
tions hold. For convenience let b
i
=
i
()
j
() =
i

j
+( j i), 1 i < j.
Then the lemma shows that
b
i
g
i
=
j

s=i+1
(i, s)g
s
.
By Proposition 11.4.12,
g
j
=
_
1
(1, j)
b
1
+1
__
1
(2, j)
b
2
+1
_

_
1
( j 1, j)
b
j1
+1
_

j
.
Next we show that
g
i
=

b
i
_
1+

b
j1
( j 1, j)
_

_
1+

b
i+1
(i +1, j)
_
(i, j)g
j
,
j

s=i
g
s
=
_
1+

b
j1
( j 1, j)
__
1+

b
j2
( j 2, j)
_

_
1+

b
i
(i, j)
_
g
j
,
arguing inductively for i = j 1, j 2, . . . , 1. The rst step, i = j 1, fol-
lows immediately from Lemma 11.4.11. Suppose that the formulae are valid for
g
i+1
, . . . , g
j
; then
g
i
=

b
i
j

s=i+1
(i, s)g
s
=

b
i
j

s=i+1
_
1+

b
j1
( j 1, j)
_

_
1+

b
s+1
(s +1, j)
_

b
s
(i, s)(s, j)g
j
11.5 Hermite Polynomials of Type B 383
=

b
i
j

s=i+1
_
1+

b
j1
( j 1, j)
_

_
1+

b
s+1
(s +1, j)
_

b
s
(s, j)(i, j)g
j
=

b
i
_
1+

b
j1
( j 1, j)
_

_
1+

b
i+1
(i +1, j)
_
(i, j)g
j
.
This uses (s, i)(s, j) = (s, j)(i, j); the last step follows using simple algebra (for
example, (1+t
1
)(1+t
2
)(1+t
3
) = 1 +t
1
+ (1 +t
1
)t
2
+ (1 +t
1
)(1 +t
2
)t
3
). The
formula for

j
s=i
g
s
follows easily from that for g
i
.
To obtain the stated formulae, replace g
j
by
1
j
(
j
g
j
). Then

j
g
j
=
_
(1, 2)

b
1
+1
__
(2, 3)

b
2
+1
_

_
( j 1, j)

b
j1
+1
_

j
,
and the remaining factors are transformed as follows:
_
1+

b
j1
( j 1, j)
__
1+

b
j2
( j 2, j)
_

_
1+

b
1
(1, j)
_
( j 1, j). . . (2, 3)(1, 2)
=
_
( j 1, j) +

b
j1
__
( j 2, j 1) +

b
j2
_

_
(1, 2) +

b
1
_
.
This completes the proof.
To compute the expansion of (
j
)
d
i=1
D
A
i

in terms of

with
+
=

j
, the algorithm can be considered to have the starting point
0
=

. Each
step involves an adjacent transposition which has a relatively simple action on
each

(recall Proposition 10.4.5). Indeed, let = (i, i +1). Then:

if

i
=
i+1
;

+c

if
i
>
i+1
;

= (1c
2
)

+c

if
i
<
i+1
,
where c = [
i
()
i+1
()]
1
. So, in general (with distinct parts of ), the
expansion consists of 2
j1
distinct

.
11.5 Hermite Polynomials of Type B
In this section we examine the application of the operator exp(u
h
) to produce
orthogonal polynomials for weights of Gaussian type, namely,
w(x)
2
e
v|x|
2
, x R
d
where
w(x) =
d

i=1
[x
i
[

j<k

x
2
j
x
2
k

and uv =
1
4
, v > 0. In the following let = degw(x) = d
/
+d(d 1).
Lemma 11.5.1 Suppose that f P
d
m
, g P
d
n
for some m, n N
0
, v > 0 and
u = 1/(4v); then
384 Orthogonal Polynomials Associated with Octahedral Groups
f , g)
B
= (2v)
+(m+n)/2
_
v

_
d/2
b
_
,
/
_

_
R
d
_
e
u
h
f
__
e
u
h
g
_
w(x)
2
e
v|x|
2
dx.
Proof In the formula to be proved, with v =
1
2
, make the change of variables
x = (2v)
1/2
z, that is, set x
i
= (2v)
1/2
z
i
for each i. Then dx = (2v)
d/2
dz, and the
weight function contributes the factor (2v)

. For j
n
2
let f
j
(x) =
j
h
f (x); then
e

h
/2
f (x) =
j
1
j!
(
1
2
)
j
f
j
(x), which transforms to

j
1
j!
_
1
2
_
j
(2v)
n/2j
f
j
(z) = (2v)
n/2

j
1
j!
_
1
4v
_
j
f
j
(z) = (2v)
n/2
e
u
h
f .
A similar calculation for g nishes the proof.
Of course, it is known that f , g)
B
= 0 for m ,= n.
As in Section 10.9, the commuting self-adjoint operators U
i
can be transformed
to commuting self-adjoint operators on L
2
(R
d
; w
2
e
v|x|
2
dx). For v > 0 and u =
1/(4v), 1 i d, let
U
v
i
= e
u
h
U
i
e
u
h
.
Proposition 11.5.2 For 1 i d,
U
v
i
=D
i
x
i
2uD
2
i

j<i
(
ji
+
ji
).
Proof Because
h
commutes with each w W
d
it is clear that e
u
h
(
ji
+

ji
)e
u
h
=
ji
+
ji
for j < i. From the commutation
h
x
i
x
i

h
= 2D
i
, we argue
inductively that
n
h
x
i
x
i

n
h
= 2nD
i

n1
h
for n N
0
. By summing this equation
(multiplying it by (u)
n
/n!) we obtain
e
u
h
x
i
= (x
i
2uD
i
)e
u
h
.
Finally, apply D
i
to the left-hand side of this equation to prove the stated formula.
The construction of an orthogonal basis labeled by N
d
0
in terms of the
nonsymmetric Jack polynomials proceeds as follows. For N
d
0
,
1. let E =i :
i
is odd;
2. if E is empty let w = 1, otherwise let k = #(E), E
k
= i : 1 i k and let
w be the unique permutation ( S
d
) such that w(E
k
) = E and w(i) < w( j)
whenever 1 i < j k or k +1 i < j d;
3. let = w
1
(), that is,
i
=
w(i)
for each i;
4. let H

(x; ,
/
, v) = e
u
h
wx
E
k

e()
(y) (recall that e()
i
=
_
1
2

i
_
).
11.6 CalogeroSutherland Systems 385
By Propositions 11.3.4 and 11.5.2, H

(x) is a U
v
i
simultaneous eigenfunc-
tion such that
U
v
w(i)
H

(x) = 2[
i
() ]H

(x) for 1 i k,
U
v
w(i)
H

(x) = 2[
i
() +
/

1
2
]H

(x) for k < i d.


Since w acts isometrically, we have
(2v)

_
v

_
d/2
b
_
,
/
_
_
R
d
H

(x; ,
/
, v)
2
w
2
(x)e
v|x|
2
dx
= (2v)
[[

x
E
k

e()
(y), x
E
k

e()
(y)
_
B
= v
[[
(d +1)
e()
+((d 1) +
/
+
1
2
)
o()
+
E
+
(e())E

(e())
h(e()
+
, +1)
h(e()
+
, 1)
.
The substitutions e()
+
=e()
+
, o()
+
= o()
+
are valid since =w(). The
most common specializations of v are v =
1
2
and v = 1 (when the polynomials
H

(x) reduce to the products of ordinary Hermite polynomials for = 0 =


/
).
11.6 CalogeroSutherland Systems
We begin with the time-independent Schr odinger wave equation for d particles
in a one-dimensional space: particle i has mass m
i
, coordinate x
i
and is subject to
an external potential V
i
(x
i
) and the interaction between particles i, j is given by
V
i j
(x
i
x
j
), i < j. The force acting on i due to j is F
i j
= (/x
i
)V
i j
(x
i
x
j
),
which equals F
ji
=(/x
j
)V
i j
(x
i
x
j
) according to Newtons third law. The
Hamiltonian of the system is
H =
h
2
2
d

i=1
1
m
i
_

x
i
_
2
+
d

i=1
V
i
(x
i
) +

1i<jd
V
i j
(x
i
x
j
),
and the Schr odinger equation is
H (x) = E(x),
where the eigenvalue E is the energy level associated with the wave function .
The quantum mechanical interpretation of is that [(x)[
2
is the probability
density function for the location of the particles (which is independent of time by
hypothesis). This interpretation requires that the integral of [(x)[
2
over the state
space must be 1 (this is one reason for the importance of computing L
2
-norms
of orthogonal polynomials!). Plancks constant is used in the form h = h/(2).
We will consider only systems for which the Hamiltonian is invariant under the
action of the symmetric group S
d
and the interactions are multiples of

x
i
x
j

2
(called 1/r
2
interactions, where r denotes the distance between two particles). For
386 Orthogonal Polynomials Associated with Octahedral Groups
systems on the line R the external potential which we will consider is that corre-
sponding to harmonic connement (producing a simple harmonic oscillator). In
the mathematical analysis of H it is customary to change the scales and units
in such a way that the equations have as few parameters as possible. The term
complete integrability refers to the existence of d independent and commuting
differential operators, one of which is H . The relevance to orthogonal poly-
nomials is that CalogeroSutherland systems with 1/r
2
interactions have wave
functions of the form p(x)
0
(x), where p is a polynomial and
0
(x) is the base
state (the state of lowest energy). This situation can be illustrated by the harmonic
oscillator.
11.6.1 The simple harmonic oscillator
Consider a particle of mass m moving on a line subject to the restoring force
F(x) =kx or, equivalently, with potential V(x) =kx
2
/2, x R. The Schr odinger
wave equation is

h
2
2m
d
2
dx
2
(x) +
kx
2
2
(x) = E(x).
Write the equation as
H
1
(x) =
d
2
dx
2
(x) +v
2
x
2
(x) =(x),
where v is a positive constant and denotes the rescaled energy level. Ignoring
the question of normalization for now, let the base state be
0
(x) =exp(vx
2
/2);
then by an easy calculation it can be shown that
H
1
(p(x)
0
(x)) =
0
(x)[(d/dx)
2
+2xvd/dx +v]p(x).
For each n N
0
let p(x) = H
n
(

vx); then by using the corresponding differen-


tial equation for the Hermite polynomials, Subsection 1.4.1, namely H
//
n
(x) +
2xH
/
n
(x) = 2nH
n
(x), we obtain
H
1
[H
n
(

vx)
0
(x)] = (2n+1)vH
n
(

vx)
0
(x).
This shows that the energy levels are evenly spaced and
(2
n
n!
_
/v)
1/2
H
n
(

vx)exp(vx
2
/2) : n N
0

is a complete set of wave functions, an orthonormal basis for L


2
(R).
The machinery of Section 11.5 for the case d = 1,
/
= 0 asserts that the
appropriately scaled Hermite polynomials are obtained fromexp[u(d/dx)
2
]x
n

(with u = 1/(4v), n N
0
); indeed
exp
_

1
4v
_
d
dx
_
2
x
n
_
= 2
n
v
n/2
H
n
(

vx),
using the formula in Proposition 1.4.3.
11.6 CalogeroSutherland Systems 387
11.6.2 Root systems and the Laplacian
The following calculation is useful for any root system related to a Calogero
model. Recall the notation of Section 4.4, with a multiplicity function
v
dened
on a root system R.
Lemma 11.6.1 Let (x) =
vR
+
[x, v)[

v
and let f be a smooth function on
R
d
; then
(x)
1
[(x) f (x)]
=f (x) +2

vR
+

v
f (x), v)
x, v)
+

vR
+

v
(
v
1)|v|
2
x, v)
2
f (x).
Proof From the product rule for we have

1
=+2
d

i=1
1

x
i

x
i
+
1

.
The middle term follows from
1

x
i
=

vR
+

v
v
i
x, v)
.
It remains to compute /. Indeed,
1

_

x
i
_
2
=
_

vR
+

v
v
i
x, v)
_
2


vR
+

v
v
2
i
x, v)
2
and
1

=

vR
+

v
v, v)
x, v)
2
+

u,vR
+

v
u, v)
x, u)x, v)
.
In the second sum the subset of terms for which
u

v
= w for a xed rotation
,= 1 adds to zero, by Lemma 6.4.6. This leaves the part with u = v and thus
/ =

vR
+
_

v
(
v
1)v, v)/x, v)
2

. The proof is complete.


Lemma 11.6.1 will be used here just for operator types A and B.
11.6.3 Type A models on the line
Suppose that there are d identical particles on the line R, each subject to an
external potential v
2
x
2
(the simple harmonic oscillator potential), with 1/r
2
interactions and with parameter . The Hamiltonian is
H =+v
2
|x|
2
+2

1i<jd
1
(x
i
x
j
)
2
.
The base state is
0
(x) = exp(v|x|
2
/2)

i<j
[x
i
x
j
[

for x R
d
. Let a(x) =

i<j
(x
i
x
j
). To facilitate the computations we will introduce two commutation
388 Orthogonal Polynomials Associated with Octahedral Groups
rules (the second is the type A case of Lemma 11.6.1; f (x) is a smooth function
on x R
d
). The commutation rules are as follows:
exp(v|x|
2
/2)(exp(v|x|
2
/2) f (x))
=
_
2v
d

i=1
x
i

x
i
+v(v|x|
2
d)
_
f (x), (11.6.1)
[a(x)[

[[a(x)[

f (x)]
=
_
+2

i<j
1
x
i
x
j
_

x
i


x
j
_
+2( 1)

i<j
1
(x
i
x
j
)
2
_
f (x).
Combining the two equation (and noting that
d
i=1
x
i
(/x
i
)[a(x)[

=
[
1
2
d(d 1)[a(x)[

], we obtain

0
(x)
1
H [
0
(x) f (x)] =
_
2

1i<jd
1
x
i
x
j
_

x
i


x
j
_
+2v
d

i=1
x
i

x
i
+dv[1+(d 1)]
_
f (x).
We recall the type-A Laplacian

h
=
d

i=1
(D
A
i
)
2
=+2

1i<jd
_
1
x
i
x
j
_

x
i


x
j
_

1(i, j)
(x
i
x
j
)
2
_
,
and observe its close relationship to
1
0
H
0
; for symmetric polynomials the
terms 1(i, j) vanish. Further (Proposition 6.5.3),
d

i=1
x
i
D
A
i
f (x) =
d

i=1
x
i

x
i
f (x) +

i<j
[1(i, j)] f (x).
It was shown (Section 10.8) that e
u
h
U
A
i
e
u
h
= U
A
i
2u(D
A
i
)
2
, so that, with
uv =
1
4
,
2ve
u
h
d

i=1
U
A
i
e
u
h
=
h
+2v
d

i=1
U
A
i
=
h
+2v
_
d

i=1
x
i

x
i
+d +

2
d(d +1)
_
.
Thus, when restricted to symmetric polynomials the conjugate of the Hamiltonian
can be expressed as

1
0
H
0
= 2ve
u
h
d

i=1
U
A
i
e
u
h
dv(1+2).
For any partition N
d,P
0
there is an invariant eigenfunction j

of

d
i=1
U
A
i
with eigenvalue

d
i=1

i
() = d +

2
d(d +1) +[[ (where [[ =

i

i
). Further,
11.6 CalogeroSutherland Systems 389
in Denition 10.7.3 we constructed invariant differential operators commuting
with
d
i=1
U
A
i
by restricting T
j
to symmetric polynomials, where
d
j=1
t
j
T
j
=

d
i=1
(1 +t U
A
i
) (with a formal variable t). Thus T
/
j
= e
u
h
T
j
e
u
h
is the ele-
mentary symmetric polynomial of degree j in e
u
h
U
A
i
e
u
h
: 1 i d; each
e
u
h
U
A
i
e
u
h
is a second-order differentialdifference operator and T
/
j
maps
symmetric polynomials onto themselves. By the same proof as that of Theo-
rem 10.7.4, T
/
j
when restricted to symmetric polynomials acts as a differential
operator of degree 2j. This shows the complete integrability of H ; each operator

0
T
/
j

1
0
commutes with it.
Given N
d,P
0
, let

= (e
u
h
j

)
0
; then H

= v[2[[ +d +d(d
1)]

. This provides a complete orthogonal decomposition of the symmetric


wave functions for indistinguishable particles.
Next we consider a simple modication to the Hamiltonian to allow non-
symmetric wave functions. The physical interpretation is that each particle has
a two-valued spin, which can be exchanged with another particle. Refer to the
expression for
h
, and add
2

i<j
1(i, j)
(x
i
x
j
)
2
to the potential. The modied Hamiltonian is
H
/
=+v
2
|x|
2
+2

1i<jd
(i, j)
(x
i
x
j
)
2
.
This operator has the same symmetric eigenfunctions as H and in addition has
eigenfunctions (e
u
h

)
0
, where

is the nonsymmetric Jack polynomial for


N
d
0
; the eigenvalues are given by v[2[[ +d +d(d 1)]. The transpositions
(i, j) are called exchange operators in the physics context, since they exchange
the spin values of two particles.
This establishes the complete integrability of the linear CalogeroMoser sys-
tem with harmonic connement, 1/r
2
interactions and exchange terms. The wave
functions involve Hermite polynomials of type A. The normalizations can be
determined from the results in Section 10.7.
11.6.4 Type A models on the circle
Consider d identical particles located on the unit circle at polar coordinates
(
1
,
2
, . . . ,
d
), with no external potential and inverse-square law (1/r
2
) inter-
actions. The chordal distance between particles j and k is 2

sin[
1
2
(
j

k
)]

. The
unique nature of the 1/r
2
interaction is illustrated by the identity
1
4sin
2 1
2

=

nZ
1
( 2n)
2
,
390 Orthogonal Polynomials Associated with Octahedral Groups
which shows that the particles can be considered to be on a line. They can be con-
sidered as the countable union of copies of one particle; these copies are located
in 2-periodically repeating positions on the line and they all interact according
to the 1/r
2
law. The coupling constant in the potential is written as 2( 1),
and > 1. The Hamiltonian is
H

=
d

i=1
_

i
_
2
+

2

1i<jd
1
sin
2
[
1
2
(
i

j
)]
.
Now change the coordinate system to x
j
=exp(i
j
) for 1 j d; this transforms
trigonometric polynomials to Laurent polynomials in x (polynomials in x
1
i
), and
we have (/
i
)
2
= (x
i

i
)
2
. Further, sin
2
[
1
2
(
i

j
] =
1
4
(x
i
x
j
)(x
1
i
x
1
j
)
for i ,= j. In the x-coordinate system the Hamiltonian is
H =
d

i=1
(x
i

i
)
2
2( 1)

1i<jd
x
i
x
j
(x
i
x
j
)
2
.
The (non-normalized) base state is

0
(x) =

1i<jd

(x
i
x
j
)(x
1
i
x
1
j
)

/2
=

1i<jd

x
i
x
j

i=1
[x
i
[
(d1)/2
.
Even though [x
i
[ = 1 on the unit torus, the function
0
is to be interpreted as a
positively homogeneous function of degree 0 on C
d
.
Proposition 11.6.2 For any smooth function f on C
d
,

0
(x)
1
d

i=1
(x
i

i
)
2
[
0
(x) f (x)]
=
d

i=1
(x
i

i
)
2
f (x) +2

1i<jd
x
i
x
j
_

j
x
i
x
j

1
(x
i
x
j
)
2
_
f (x)
+(d 1)
d

i=1
x
i

i
f (x) +

2
12
d(d
2
1) f (x).
Proof Dene the operator
i
=
1
0
x
i

0
; then, by logarithmic differentiation,

i
= x
i

i
+

2

j,=i
x
i
+x
j
x
i
x
j
and so

2
i
= (x
i

i
)
2
+2

j,=i
x
i
x
j

i
x
i
x
j
+(d 1)x
i

j,=i
x
i
x
j
(x
i
x
j
)
2
+

2
4
_

j,=i
x
i
+x
j
x
i
x
j
_
2
.
11.6 CalogeroSutherland Systems 391
Nowsumthis expression over 1 i d. Expand the squared-sumtermas follows:
d

i=1

j,k,=i
_
x
2
i
(x
i
x
j
)(x
i
x
k
)
+
x
i
x
j
+x
i
x
k
+x
j
x
k
(x
i
x
j
)(x
i
x
k
)
_
.
When j =k, the sum contributes d(d 1)+8
i<j
x
i
x
j
/(x
i
x
j
)
2
(each pair i, j
with i < j appears twice in the triple sum, and (x
i
+x
j
)
2
= (x
i
x
j
)
2
+4x
i
x
j
).
When i, j, k are all distinct, the rst terms each contributes 2
_
d
3
_
and the second
terms cancel out (the calculations are similar to those in the proof of Theorem
10.7.9). Finally,

2
4
(d(d 1) +
1
3
d(d 1)(d 2)) =
1
12

2
d(d
2
1).
Using Proposition 10.7.8 and Theorem 10.7.9 the transformed Hamiltonian can
be expressed in terms of U
A
i
. Indeed,
d

i=1
_
U
A
i
1
1
2
(d +1)
_
2
=
d

i=1
(x
i

i
)
2
+2

1i<jd
x
i
x
j
_

j
x
i
x
j

1(i, j)
(x
i
x
j
)
2
_
+(d 1)
d

i=1
x
i

i
+
1
12

2
d(d
2
1).
To show this, write
d

i=1
[U
A
i
1
1
2
(d +1)]
2
=
d

i=1
(U
A
i
1d)
2
+(d 1)
d

i=1
(U
A
i
1d) +d[
1
2
(d 1)]
2
and use Proposition 10.7.8. As for the line model, the potential can be modi-
ed by exchange terms (interchanging spins of two particles). The resulting spin
Hamiltonian is
H
/
=
d

i=1
(x
i

i
)
2
2

1i<jd
x
i
x
j
(x
i
x
j
)
2
[ (i, j)]
and, for a smooth function f (x),

0
(x)
1
H
/
[
0
(x) f (x)] =
d

i=1
_
U
A
i
1
1
2
(d +1)
_
2
f (x).
This immediately leads to a complete eigenfunction decomposition for H
/
in terms of nonsymmetric Jack polynomials. Let e
d
=

d
i=1
x
i
(where e
d
is an
392 Orthogonal Polynomials Associated with Octahedral Groups
elementary symmetric polynomial); then U
A
i
[e
m
d
f (x)] =e
m
d
(U
A
i
+m) f (x) for any
m Z (see Proposition 6.4.12) and the eigenvalue is shifted by m.
Lemma 11.6.3 Let N
d
0
and m N
0
; then e
m
d

is a scalar multiple of

,
where = +m1
d
.
Proof Both e
m
d

and

are eigenfunctions of U
A
i
with eigenvalue
i
() +m,
for 1 i d.
Corollary 11.6.4 The Laurent polynomials e
m
d

with m Z and N
d
0
with
min
i

i
= 0 form a basis for the set of all Laurent polynomials.
Proof For any Laurent polynomial g(x) there exists m Z such that e
m
d
g(x)
is a polynomial in x
1
, . . . , x
d
. Lemma 11.6.3 shows that there are no linear
dependences in the set.
This leads to the determination of the energy levels.
Theorem 11.6.5 Let N
d,P
0
,
d
= 0, m Z and
+
=; then
(x) =
0
(x)e
m
d

(x)
is a wave function for H
/
, and
H
/
=
d

i=1
_

i
+m+
1
2
(d +12i)
_
2
.
The normalization constant for $\psi$ was found in Theorems 10.6.13 and 10.6.16.
The eigenvalues can also be expressed as
$$\sum_{i=1}^{d}(\lambda_i+m)\bigl[\lambda_i+m+\kappa(d+1-2i)\bigr]+\tfrac{1}{12}\kappa^2 d(d^2-1).$$
The complete integrability of $H$ (as an invariant differential
operator) is a consequence of Theorem 10.7.4.
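The two forms of the eigenvalue can be compared in exact rational arithmetic; the identity behind the rewriting is $\sum_{i=1}^{d}(d+1-2i)^2=\tfrac{1}{3}d(d^2-1)$. The values of $d$, $\lambda$, $m$ and $\kappa$ below are arbitrary test choices, not data from the text:

```python
from fractions import Fraction

# Check that the squared form and the expanded form of the H' eigenvalue agree.
d = 4
lam = [5, 3, 2, 0]          # a partition lambda with lambda_d = 0
m = 2
kappa = Fraction(1, 3)

# sum_i [lambda_i + m + (kappa/2)(d+1-2i)]^2
lhs = sum((lam[i] + m + kappa * Fraction(d + 1 - 2 * (i + 1), 2)) ** 2
          for i in range(d))
# sum_i (lambda_i+m)[lambda_i+m+kappa(d+1-2i)] + kappa^2 d(d^2-1)/12
rhs = (sum((lam[i] + m) * ((lam[i] + m) + kappa * (d + 1 - 2 * (i + 1)))
           for i in range(d))
       + Fraction(1, 12) * kappa ** 2 * d * (d ** 2 - 1))
assert lhs == rhs
```

Repeating the check with other partitions and rational $\kappa$ confirms the identity is purely algebraic in $\lambda$, $m$ and $\kappa$.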
11.6.5 Type B models on the line
Suppose that there are $d$ identical particles on the line at coordinates $x_1,x_2,\ldots,x_d$,
subject to an external harmonic confinement potential, with $1/r^2$ interactions
between both the particles and their mirror images; as well, there is a barrier
at the origin, and spin can be exchanged between particles. The Hamiltonian with
exchange terms for this spin type-B Calogero model is
$$H=-\Delta+v^2|x|^2+\sum_{i=1}^{d}\frac{\kappa'(\kappa'-\sigma_i)}{x_i^2}
+2\sum_{1\le i<j\le d}\Bigl[\frac{\kappa(\kappa-\sigma_{ij})}{(x_i-x_j)^2}
+\frac{\kappa(\kappa-\tau_{ij})}{(x_i+x_j)^2}\Bigr],$$
where $v$, $\kappa$, $\kappa'$ are positive parameters. The base state is
$$\psi_0(x)=\mathrm{e}^{-v|x|^2/2}\prod_{1\le i<j\le d}\bigl|x_i^2-x_j^2\bigr|^{\kappa}\prod_{i=1}^{d}|x_i|^{\kappa'}.$$
By Lemma 11.6.1 specialized to type-B models, and the commutation for
$\mathrm{e}^{-v|x|^2/2}$ from equation (11.6.1), we have
$$\begin{aligned}
\psi_0(x)^{-1}H\psi_0(x)&=-\Delta-\kappa'\sum_{i=1}^{d}\Bigl[\frac{2}{x_i}\,\partial_i-\frac{1-\sigma_i}{x_i^2}\Bigr]
-2\kappa\sum_{1\le i<j\le d}\Bigl[\frac{\partial_i-\partial_j}{x_i-x_j}-\frac{1-\sigma_{ij}}{(x_i-x_j)^2}
+\frac{\partial_i+\partial_j}{x_i+x_j}-\frac{1-\tau_{ij}}{(x_i+x_j)^2}\Bigr]\\
&\quad+2v\sum_{i=1}^{d}x_i\partial_i+v\bigl[d+2\kappa d(d-1)+2\kappa'd\bigr]\\
&=-\Delta_h+2v\sum_{i=1}^{d}x_i\partial_i+vd\bigl[1+2\kappa(d-1)+2\kappa'\bigr].
\end{aligned}$$
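The first equality can be sanity-checked numerically in the simplest case $d=1$, where the $i<j$ sums are empty and, on even functions, the reflection $\sigma_1$ acts as the identity. The following sketch uses arbitrarily chosen parameter values and an arbitrary even test polynomial (these choices are mine, not the book's):

```python
import math

# d = 1, x > 0: check  psi0^{-1} H (psi0 f)
#   = -f'' - kappa'(2/x) f' + 2 v x f' + v(1 + 2 kappa') f
# for even f (so the sigma terms act as the identity / vanish).
v, kp = 0.7, 0.4                 # v and kappa', arbitrary test values
x, h = 1.3, 1e-4                 # evaluation point and finite-difference step

f   = lambda t: t**4 + 3*t**2 + 2          # even polynomial, sigma f = f
fp  = lambda t: 4*t**3 + 6*t               # f'
fpp = lambda t: 12*t**2 + 6                # f''
psi0 = lambda t: math.exp(-v*t**2/2) * t**kp

g = lambda t: psi0(t) * f(t)
# H(psi0 f) = -(psi0 f)'' + v^2 x^2 psi0 f + kp(kp-1)/x^2 psi0 f on even f
g2 = (g(x + h) - 2*g(x) + g(x - h)) / h**2          # numerical (psi0 f)''
lhs = (-g2 + v**2 * x**2 * g(x) + kp*(kp - 1)/x**2 * g(x)) / psi0(x)
rhs = -fpp(x) - kp*(2/x)*fp(x) + 2*v*x*fp(x) + v*(1 + 2*kp)*f(x)
assert abs(lhs - rhs) < 1e-4
```

The same conjugation computed symbolically, $\psi_0'/\psi_0=-vx+\kappa'/x$, reproduces the first-order and constant terms of the display.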
Using Propositions 6.4.10 and 6.5.3 we obtain
$$\sum_{i=1}^{d}\mathcal{D}_ix_i-\kappa\sum_{1\le i<j\le d}(\sigma_{ij}+\tau_{ij})
=\sum_{i=1}^{d}x_i\partial_i+d\bigl[1+\kappa'+\kappa(d-1)\bigr]+\kappa'\sum_{i=1}^{d}\sigma_i.$$
Let $u=1/(4v)$. Then, by Proposition 11.5.2,
$$2v\,\mathrm{e}^{-u\Delta_h}\sum_{i=1}^{d}U_i\,\mathrm{e}^{u\Delta_h}
=-\Delta_h+2v\sum_{i=1}^{d}x_i\partial_i+2vd\bigl[1+\kappa'+\kappa(d-1)\bigr]+2v\kappa'\sum_{i=1}^{d}\sigma_i.$$
Thus
$$\psi_0(x)^{-1}H\psi_0(x)=v\,\mathrm{e}^{-u\Delta_h}\Bigl(2\sum_{i=1}^{d}U_i-d-2\kappa'\sum_{i=1}^{d}\sigma_i\Bigr)\mathrm{e}^{u\Delta_h}.$$
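The constants in this factorization can be checked against the earlier expression for $\psi_0^{-1}H\psi_0$ (a routine bookkeeping step, made explicit here). Using the preceding display for the conjugated sum of the $U_i$, and the fact that each $\sigma_i$ commutes with the $W_d$-invariant operator $\Delta_h$,

```latex
v\,\mathrm{e}^{-u\Delta_h}\Bigl(2\sum_{i=1}^{d}U_i-d-2\kappa'\sum_{i=1}^{d}\sigma_i\Bigr)\mathrm{e}^{u\Delta_h}
=-\Delta_h+2v\sum_{i=1}^{d}x_i\partial_i
 +2vd\bigl[1+\kappa'+\kappa(d-1)\bigr]-vd,
```

and $2vd[1+\kappa'+\kappa(d-1)]-vd = vd[1+2\kappa'+2\kappa(d-1)]$, which is exactly the constant appearing in $\psi_0(x)^{-1}H\psi_0(x)$; the two $\kappa'\sum_i\sigma_i$ contributions cancel.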
Apply this operator to the Hermite polynomial $H_\mu(x;\kappa,\kappa',v)$, $\mu\in\mathbb{N}_0^d$, constructed in Section 11.5; thus
$$H(\psi_0H_\mu)=2v\bigl(|\mu|+d\bigl[\tfrac{1}{2}+\kappa(d-1)+\kappa'\bigr]\bigr)\psi_0H_\mu.$$
The normalization constants were obtained in Section 11.5. The type-B version of
Theorem 10.7.4 is needed to establish the complete integrability of this quantum
model. As in Definition 10.7.3, define the operators $T_j$ by
$$\sum_{i=0}^{d}t^iT_i=\prod_{j=1}^{d}(1+tU_j).$$
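For example (spelling out the definition for $d=2$; this expansion is not in the original text):

```latex
(1+tU_1)(1+tU_2)=1+t(U_1+U_2)+t^2\,U_1U_2,
```

so $T_1=U_1+U_2$ and $T_2=U_1U_2$: the $T_i$ are the elementary symmetric functions of the commuting operators $U_j$. This is why it suffices, in the commutation argument that follows, to check that a reflection commutes with the sum and the product of the affected pair $U_j,U_{j+1}$.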
Theorem 11.6.6 For $1\le i\le d$, the operators $T_i$ commute with each $w\in W_d$.
Proof We need only consider the reflections in $W_d$. Clearly $\sigma_jU_i=U_i\sigma_j$ for each
$j$. It now suffices to consider adjacent transpositions $\sigma_{j,j+1}$, since $\sigma_i\sigma_{ij}\sigma_i=\tau_{ij}$.
Fix $j$ and let $\sigma=\sigma_{j,j+1}$; then $U_i\sigma=\sigma U_i$ whenever $i\ne j,j+1$. Also, $\sigma U_j\sigma=U_{j+1}+\kappa(\sigma+\tau_{j,j+1})$. It is now clear that $\sigma(U_j+U_{j+1})\sigma=U_j+U_{j+1}$ and
$\sigma(U_jU_{j+1})\sigma=U_jU_{j+1}$ (note that $\sigma\sigma_{ij}\sigma=\sigma_{i,j+1}$ for $i\ne j,j+1$).
The operators $\psi_0\,\mathrm{e}^{-u\Delta_h}T_i\,\mathrm{e}^{u\Delta_h}\psi_0^{-1}$ commute with the Hamiltonian $H$. When
restricted to $W_d$-invariant functions they all coincide with invariant differential
operators.
11.7 Notes
Symmetries of the octahedral type have appeared in analysis for decades, if not
centuries. Notably, the quantum mechanical descriptions of atoms in certain crystals are given as spherical harmonics with octahedral symmetry (once called
cubical harmonics). Dunkl [1984c, 1988] studied the type-B weight functions
but only for invariant polynomials, using what is now known as the restriction
of the h-Laplacian. Later, in Dunkl [1999c] most of the details for Sections 9.2
and 9.3 were worked out. The derivation of the Macdonald–Mehta–Selberg integral in Section 9.3 is essentially based on the technique in Opdam [1991], where
Opdam proved Macdonald's conjectures about integrals related to the crystallographic root systems and situated on the torus (more precisely, Euclidean space
modulo the root lattice). The original proof for the type-B integral used a limiting argument on Selberg's integral (see Askey [1980]). For the importance of
Selberg's integral and its numerous applications, see the recent survey by Forrester and Warnaar [2008]. Generalized binomial coefficients for nonsymmetric
Jack polynomials were introduced by Baker and Forrester [1998], but Algorithm
11.4.13 is new. There is a connection between these coefficients and the theory of shifted, or nonhomogeneous, Jack polynomials. The Hermite polynomials,
which are called Laguerre polynomials by some authors in cases where the parities are the same for all variables, have appeared in several articles. The earlier
articles, by Lassalle [1991a, b, c], Yan [1992] and Baker and Forrester [1997a],
dealt with the invariant cases only. Later, when the nonsymmetric Jack polynomials became more widely known, the more general Hermite polynomials were
studied, again by Baker and Forrester [1998], Rösler and Voit [1998] and Ujino
and Wadati [1995, 1997], who called them Hi-Jack polynomials. The idea of
using $\mathrm{e}^{u\Delta_h}$ to generate bases for Gaussian-type weight functions appeared in
Dunkl [1991].
The study of Calogero–Sutherland systems has been carried on by many
authors. We refer to the book Calogero–Moser–Sutherland Models, edited by van
Diejen and Vinet [2000], for a more comprehensive and up-to-date bibliography.
We limited our discussion to the role of the type-A and type-B polynomials in the wave functions and did not attempt to explain the physics of the
theory or physical objects which can be usefully modeled by these systems.
Calogero [1971] and Sutherland [1971, 1972] proved that inverse square potentials for identical particles in a one-dimensional space produce exactly solvable
models. Before their work, only the Dirac-delta interaction (collisions) model
had been solved. Around the same time, the Jack polynomials were constructed
(Jack [1970/71], Stanley [1989]). Eventually they found their way into the Sutherland systems, and Lapointe and Vinet [1996] used physics concepts such as
annihilation and creation operators to prove new results on the Jack polynomials
and also applied them to the wave function problem. Other authors whose papers
have a significant component dealing with the use of orthogonal polynomials are
Kakei [1998], Ujino and Wadati [1999], Nishino, Ujino and Wadati [1999] and
Uglov [2000]. The type-B model on the line with spin terms was first studied by
Yamamoto [1995]. Kato and Yamamoto [1998] used group-invariance methods
to evaluate correlation integrals for these models.
Genest, Ismail, Vinet and Zhedanov [2013] analyzed in great detail the
Hamiltonians associated with the differential–difference operators for $\mathbb{Z}_2^2$ with
weight function $|x_1|^{\kappa_1}|x_2|^{\kappa_2}$. This work was continued by Genest, Vinet and
Zhedanov [2013]. The generalized Hermite polynomials appear in their analyses
of the wave functions.
The theory of nonsymmetric Jack polynomials has been extended to the family
of complex reflection groups named $G(n,1,d)$, that is, the group of $d\times d$ permutation matrices whose nonzero entries are $n$th roots of unity (so $G(1,1,d)$ is
the same as the symmetric group $S_d$ and $G(2,1,d)$ is the same as $W_d$) in Dunkl
and Opdam [2003].
References
Abramowitz, M. and Stegun, I. (1970). Handbook of Mathematical Functions, Dover, New York.
Agahanov, C. A. (1965). A method of constructing orthogonal polynomials of two variables for a certain class of weight functions (in Russian), Vestnik Leningrad Univ. 20, no. 19, 5–10.
Akhiezer, N. I. (1965). The Classical Moment Problem and Some Related Questions in Analysis, Hafner, New York.
Aktas, R. and Xu, Y. (2013). Sobolev orthogonal polynomials on a simplex, Int. Math. Res. Not. 13, 3087–3131.
Álvarez de Morales, M., Fernández, L., Pérez, T. E. and Piñar, M. A. (2009). A matrix Rodrigues formula for classical orthogonal polynomials in two variables, J. Approx. Theory 157, 32–52.
Andrews, G. E., Askey, R. and Roy, R. (1999). Special Functions, Encyclopedia of Mathematics and its Applications 71, Cambridge University Press, Cambridge.
Aomoto, K. (1987). Jacobi polynomials associated with Selberg integrals, SIAM J. Math. Anal. 18, no. 2, 545–549.
Appell, P. and Kampé de Fériet, J. (1926). Fonctions hypergéométriques et hypersphériques, polynomes d'Hermite, Gauthier-Villars, Paris.
Area, I., Godoy, E., Ronveaux, A. and Zarzo, A. (2012). Bivariate second-order linear partial differential equations and orthogonal polynomial solutions, J. Math. Anal. Appl. 387, 1188–1208.
Askey, R. (1975). Orthogonal Polynomials and Special Functions, Regional Conference Series in Applied Mathematics 21, SIAM, Philadelphia.
Askey, R. (1980). Some basic hypergeometric extensions of integrals of Selberg and Andrews, SIAM J. Math. Anal. 11, 938–951.
Atkinson, K. and Han, W. (2012). Spherical Harmonics and Approximations on the Unit Sphere: An Introduction, Lecture Notes in Mathematics 2044, Springer, Heidelberg.
Atkinson, K. and Hansen, O. (2005). Solving the nonlinear Poisson equation on the unit disk, J. Integral Eq. Appl. 17, 223–241.
Axler, S., Bourdon, P. and Ramey, W. (1992). Harmonic Function Theory, Springer, New York.
Bacry, H. (1984). Generalized Chebyshev polynomials and characters of GL(N, C) and SL(N, C), in Group Theoretical Methods in Physics, pp. 483–485, Lecture Notes in Physics 201, Springer-Verlag, Berlin.
Badkov, V. (1974). Convergence in the mean and almost everywhere of Fourier series in polynomials orthogonal on an interval, Math. USSR-Sb. 24, 223–256.
Bailey, W. N. (1935). Generalized Hypergeometric Series, Cambridge University Press, Cambridge.
Baker, T. H., Dunkl, C. F. and Forrester, P. J. (2000). Polynomial eigenfunctions of the Calogero–Sutherland–Moser models with exchange terms, in Calogero–Sutherland–Moser Models, pp. 37–51, CRM Series in Mathematical Physics, Springer, New York.
Baker, T. H. and Forrester, P. J. (1997a). The Calogero–Sutherland model and generalized classical polynomials, Comm. Math. Phys. 188, 175–216.
Baker, T. H. and Forrester, P. J. (1997b). The Calogero–Sutherland model and polynomials with prescribed symmetry, Nuclear Phys. B 492, 682–716.
Baker, T. H. and Forrester, P. J. (1998). Nonsymmetric Jack polynomials and integral kernels, Duke Math. J. 95, 1–50.
Barrio, R., Peña, J. M. and Sauer, T. (2010). Three term recurrence for the evaluation of multivariate orthogonal polynomials, J. Approx. Theory 162, 407–420.
Beerends, R. J. (1991). Chebyshev polynomials in several variables and the radial part of the Laplace–Beltrami operator, Trans. Amer. Math. Soc. 328, 779–814.
Beerends, R. J. and Opdam, E. M. (1993). Certain hypergeometric series related to the root system BC, Trans. Amer. Math. Soc. 339, 581–609.
Berens, H., Schmid, H. and Xu, Y. (1995a). On two-dimensional definite orthogonal systems and on a lower bound for the number of associated cubature formulas, SIAM J. Math. Anal. 26, 468–487.
Berens, H., Schmid, H. and Xu, Y. (1995b). Multivariate Gaussian cubature formula, Arch. Math. 64, 26–32.
Berens, H. and Xu, Y. (1996). Fejér means for multivariate Fourier series, Math. Z. 221, 449–465.
Berens, H. and Xu, Y. (1997). ℓ-1 summability for multivariate Fourier integrals and positivity, Math. Proc. Cambridge Phil. Soc. 122, 149–172.
Berg, C. (1987). The multidimensional moment problem and semigroups, in Moments in Mathematics, pp. 110–124, Proceedings of Symposia in Applied Mathematics 37, American Mathematical Society, Providence, RI.
Berg, C., Christensen, J. P. R. and Ressel, P. (1984). Harmonic Analysis on Semigroups: Theory of Positive Definite and Related Functions, Graduate Texts in Mathematics 100, Springer, New York–Berlin.
Berg, C. and Thill, M. (1991). Rotation invariant moment problems, Acta Math. 167, 207–227.
Bergeron, N. and Garsia, A. M. (1992). Zonal polynomials and domino tableaux, Discrete Math. 99, 3–15.
Bertran, M. (1975). Note on orthogonal polynomials in v-variables, SIAM J. Math. Anal. 6, 250–257.
Bojanov, B. and Petrova, G. (1998). Numerical integration over a disc, a new Gaussian quadrature formula, Numer. Math. 80, 39–50.
Bos, L. (1994). Asymptotics for the Christoffel function for Jacobi like weights on a ball in $\mathbb{R}^m$, New Zealand J. Math. 23, 99–109.
Bos, L., Della Vecchia, B. and Mastroianni, G. (1998). On the asymptotics of Christoffel functions for centrally symmetric weight functions on the ball in $\mathbb{R}^n$, Rend. del Circolo Mat. di Palermo 52, 277–290.
Bracciali, C. F., Delgado, A. M., Fernández, L., Pérez, T. E. and Piñar, M. A. (2010). New steps on Sobolev orthogonality in two variables, J. Comput. Appl. Math. 235, 916–926.
Braaksma, B. L. J. and Meulenbeld, B. (1968). Jacobi polynomials as spherical harmonics, Indag. Math. 30, 384–389.
Buchstaber, V., Felder, G. and Veselov, A. (1994). Elliptic Dunkl operators, root systems, and functional equations, Duke Math. J. 76, 885–891.
Calogero, F. (1971). Solution of the one-dimensional N-body problems with quadratic and/or inversely quadratic pair potentials, J. Math. Phys. 12, 419–436.
zu Castell, W., Filbir, F. and Xu, Y. (2009). Cesàro means of Jacobi expansions on the parabolic biangle, J. Approx. Theory 159, 167–179.
Cheney, E. W. (1998). Introduction to Approximation Theory, reprint of the 2nd edn (1982), AMS Chelsea, Providence, RI.
Cherednik, I. (1991). A unification of Knizhnik–Zamolodchikov and Dunkl operators via affine Hecke algebras, Invent. Math. 106, 411–431.
Chevalley, C. (1955). Invariants of finite groups generated by reflections, Amer. J. Math. 77, 777–782.
Chihara, T. S. (1978). An Introduction to Orthogonal Polynomials, Mathematics and its Applications 13, Gordon and Breach, New York.
Cichoń, D., Stochel, J. and Szafraniec, F. H. (2005). Three term recurrence relation modulo ideal and orthogonality of polynomials of several variables, J. Approx. Theory 134, 11–64.
Connett, W. C. and Schwartz, A. L. (1995). Continuous 2-variable polynomial hypergroups, in Applications of Hypergroups and Related Measure Algebras (Seattle, WA, 1993), pp. 89–109, Contemporary Mathematics 183, American Mathematical Society, Providence, RI.
van der Corput, J. G. and Schaake, G. (1935). Ungleichungen für Polynome und trigonometrische Polynome, Compositio Math. 2, 321–361.
Coxeter, H. S. M. (1935). The complete enumeration of finite groups of the form $R_i^2 = (R_iR_j)^{k_{ij}} = 1$, J. London Math. Soc. 10, 21–25.
Coxeter, H. S. M. (1973). Regular Polytopes, 3rd edn, Dover, New York.
Coxeter, H. S. M. and Moser, W. O. J. (1965). Generators and Relations for Discrete Groups, 2nd edn, Springer, Berlin–New York.
Dai, F. and Wang, H. (2010). A transference theorem for the Dunkl transform and its applications, J. Funct. Anal. 258, 4052–4074.
Dai, F. and Xu, Y. (2009a). Cesàro means of orthogonal expansions in several variables, Constr. Approx. 29, 129–155.
Dai, F. and Xu, Y. (2009b). Boundedness of projection operators and Cesàro means in weighted $L^p$ space on the unit sphere, Trans. Amer. Math. Soc. 361, 3189–3221.
Dai, F. and Xu, Y. (2013). Approximation Theory and Harmonic Analysis on Spheres and Balls, Springer, Berlin–New York.
Debiard, A. and Gaveau, B. (1987). Analysis on root systems, Canad. J. Math. 39, 1281–1404.
Delgado, A. M., Fernández, L., Pérez, T. E., Piñar, M. A. and Xu, Y. (2010). Orthogonal polynomials in several variables for measures with mass points, Numer. Algorithms 55, 245–264.
Delgado, A., Geronimo, J. S., Iliev, P. and Marcellán, F. (2006). Two variable orthogonal polynomials and structured matrices, SIAM J. Matrix Anal. Appl. 28, 118–147.
Delgado, A. M., Geronimo, J. S., Iliev, P. and Xu, Y. (2009). On a two variable class of Bernstein–Szegő measures, Constr. Approx. 30, 71–91.
DeVore, R. A. and Lorentz, G. G. (1993). Constructive Approximation, Grundlehren der Mathematischen Wissenschaften 303, Springer, Berlin.
van Diejen, J. F. (1999). Properties of some families of hypergeometric orthogonal polynomials in several variables, Trans. Amer. Math. Soc. 351, no. 1, 233–270.
van Diejen, J. F. and Vinet, L. (2000). Calogero–Sutherland–Moser Models, CRM Series in Mathematical Physics, Springer, New York.
Dieudonné, J. (1980). Special Functions and Linear Representations of Lie Groups, CBMS Regional Conference Series in Mathematics 42, American Mathematical Society, Providence, RI.
Dijksma, A. and Koornwinder, T. H. (1971). Spherical harmonics and the product of two Jacobi polynomials, Indag. Math. 33, 191–196.
Dubiner, M. (1991). Spectral methods on triangles and other domains, J. Sci. Comput. 6, 345–390.
Dunkl, C. F. (1981). Cube group invariant spherical harmonics and Krawtchouk polynomials, Math. Z. 92, 57–71.
Dunkl, C. F. (1982). An addition theorem for Heisenberg harmonics, in Proc. Conf. on Harmonic Analysis in Honor of Antoni Zygmund, pp. 688–705, Wadsworth International, Belmont, CA.
Dunkl, C. F. (1984a). The Poisson kernel for Heisenberg polynomials on the disk, Math. Z. 187, 527–547.
Dunkl, C. F. (1984b). Orthogonal polynomials with symmetry of order three, Canad. J. Math. 36, 685–717.
Dunkl, C. F. (1984c). Orthogonal polynomials on the sphere with octahedral symmetry, Trans. Amer. Math. Soc. 282, 555–575.
Dunkl, C. F. (1985). Orthogonal polynomials and a Dirichlet problem related to the Hilbert transform, Indag. Math. 47, 147–171.
Dunkl, C. F. (1986). Boundary value problems for harmonic functions on the Heisenberg group, Canad. J. Math. 38, 478–512.
Dunkl, C. F. (1987). Orthogonal polynomials on the hexagon, SIAM J. Appl. Math. 47, 343–351.
Dunkl, C. F. (1988). Reflection groups and orthogonal polynomials on the sphere, Math. Z. 197, 33–60.
Dunkl, C. F. (1989a). Differential–difference operators associated to reflection groups, Trans. Amer. Math. Soc. 311, 167–183.
Dunkl, C. F. (1989b). Poisson and Cauchy kernels for orthogonal polynomials with dihedral symmetry, J. Math. Anal. Appl. 143, 459–470.
Dunkl, C. F. (1990). Operators commuting with Coxeter group actions on polynomials, in Invariant Theory and Tableaux (Minneapolis, MN, 1988), pp. 107–117, IMA Volumes in Mathematics and its Applications 19, Springer, New York.
Dunkl, C. F. (1991). Integral kernels with reflection group invariance, Canad. J. Math. 43, 1213–1227.
Dunkl, C. F. (1992). Hankel transforms associated to finite reflection groups, in Hypergeometric Functions on Domains of Positivity, Jack Polynomials, and Applications (Tampa, FL, 1991), pp. 123–138, Contemporary Mathematics 138, American Mathematical Society, Providence, RI.
Dunkl, C. F. (1995). Intertwining operators associated to the group $S_3$, Trans. Amer. Math. Soc. 347, 3347–3374.
Dunkl, C. F. (1998a). Orthogonal polynomials of types A and B and related Calogero models, Comm. Math. Phys. 197, 451–487.
Dunkl, C. F. (1998b). Intertwining operators and polynomials associated with the symmetric group, Monatsh. Math. 126, 181–209.
Dunkl, C. F. (1999a). Computing with differential–difference operators, J. Symbolic Comput. 28, 819–826.
Dunkl, C. F. (1999b). Planar harmonic polynomials of type B, J. Phys. A: Math. Gen. 32, 8095–8110.
Dunkl, C. F. (1999c). Intertwining operators of type $B_N$, in Algebraic Methods and q-Special Functions (Montréal, QC, 1996), pp. 119–134, CRM Proceedings Lecture Notes 22, American Mathematical Society, Providence, RI.
Dunkl, C. F. (2007). An intertwining operator for the group $B_2$, Glasgow Math. J. 49, 291–319.
Dunkl, C. F., de Jeu, M. F. E. and Opdam, E. M. (1994). Singular polynomials for finite reflection groups, Trans. Amer. Math. Soc. 346, 237–256.
Dunkl, C. F. and Hanlon, P. (1998). Integrals of polynomials associated with tableaux and the Garsia–Haiman conjecture, Math. Z. 228, 537–567.
Dunkl, C. F. and Luque, J.-G. (2011). Vector-valued Jack polynomials from scratch, SIGMA 7, 026, 48 pp.
Dunkl, C. F. and Opdam, E. M. (2003). Dunkl operators for complex reflection groups, Proc. London Math. Soc. 86, 70–108.
Dunkl, C. F. and Ramirez, D. E. (1971). Topics in Harmonic Analysis, Appleton-Century Mathematics Series, Appleton-Century-Crofts (Meredith Corporation), New York.
Dunn, K. B. and Lidl, R. (1982). Generalizations of the classical Chebyshev polynomials to polynomials in two variables, Czechoslovak Math. J. 32, 516–528.
Eier, R. and Lidl, R. (1974). Tschebyscheffpolynome in einer und zwei Variablen, Abh. Math. Sem. Univ. Hamburg 41, 17–27.
Eier, R. and Lidl, R. (1982). A class of orthogonal polynomials in k variables, Math. Ann. 260, 93–99.
Eier, R., Lidl, R. and Dunn, K. B. (1981). Differential equations for generalized Chebyshev polynomials, Rend. Mat. 1, no. 7, 633–646.
Engelis, G. K. (1974). Certain two-dimensional analogues of the classical orthogonal polynomials (in Russian), Latvian Mathematical Yearbook 15, 169–202, 235, Izdat. Zinatne, Riga.
Engels, H. (1980). Numerical Quadrature and Cubature, Academic Press, New York.
Erdélyi, A. (1965). Axially symmetric potentials and fractional integration, SIAM J. 13, 216–228.
Erdélyi, A., Magnus, W., Oberhettinger, F. and Tricomi, F. G. (1953). Higher Transcendental Functions, McGraw-Hill, New York.
Etingof, P. (2010). A uniform proof of the Macdonald–Mehta–Opdam identity for finite Coxeter groups, Math. Res. Lett. 17, 277–284.
Exton, H. (1976). Multiple Hypergeometric Functions and Applications, Halsted, New York.
Fackerell, E. D. and Littler, R. A. (1974). Polynomials biorthogonal to Appell's polynomials, Bull. Austral. Math. Soc. 11, 181–195.
Farouki, R. T., Goodman, T. N. T. and Sauer, T. (2003). Construction of orthogonal bases for polynomials in Bernstein form on triangular and simplex domains, Comput. Aided Geom. Design 20, 209–230.
Fernández, L., Pérez, T. E. and Piñar, M. A. (2005). Classical orthogonal polynomials in two variables: a matrix approach, Numer. Algorithms 39, 131–142.
Fernández, L., Pérez, T. E. and Piñar, M. A. (2011). Orthogonal polynomials in two variables as solutions of higher order partial differential equations, J. Approx. Theory 163, 84–97.
Folland, G. B. (1975). Spherical harmonic expansion of the Poisson–Szegő kernel for the ball, Proc. Amer. Math. Soc. 47, 401–408.
Forrester, P. J. and Warnaar, S. O. (2008). The importance of the Selberg integral, Bull. Amer. Math. Soc. 45, 489–534.
Freud, G. (1966). Orthogonal Polynomials, Pergamon, New York.
Fuglede, B. (1983). The multidimensional moment problem, Expos. Math. 1, 47–65.
Gasper, G. (1972). Banach algebras for Jacobi series and positivity of a kernel, Ann. Math. 95, 261–280.
Gasper, G. (1977). Positive sums of the classical orthogonal polynomials, SIAM J. Math. Anal. 8, 423–447.
Gasper, G. (1981). Orthogonality of certain functions with respect to complex valued weights, Canad. J. Math. 33, 1261–1270.
Gasper, G. and Rahman, M. (1990). Basic Hypergeometric Series, Encyclopedia of Mathematics and its Applications 35, Cambridge University Press, Cambridge.
Gekhtman, M. I. and Kalyuzhny, A. A. (1994). On the orthogonal polynomials in several variables, Integr. Eq. Oper. Theory 19, 404–418.
Genest, V., Ismail, M., Vinet, L. and Zhedanov, A. (2013). The Dunkl oscillator in the plane I: superintegrability, separated wavefunctions and overlap coefficients, J. Phys. A: Math. Theor. 46, 145201.
Genest, V., Vinet, L. and Zhedanov, A. (2013). The singular and the 2:1 anisotropic Dunkl oscillators in the plane, J. Phys. A: Math. Theor. 46, 325201.
Ghanmi, A. (2008). A class of generalized complex Hermite polynomials, J. Math. Anal. Appl. 340, 1395–1406.
Ghanmi, A. (2013). Operational formulae for the complex Hermite polynomials $H_{p,q}(z,\bar z)$, Integral Transf. Special Func. 24, 884–895.
Görlich, E. and Markett, C. (1982). A convolution structure for Laguerre series, Indag. Math. 14, 161–171.
Griffiths, R. (1979). A transition density expansion for a multi-allele diffusion model, Adv. Appl. Prob. 11, 310–325.
Griffiths, R. and Spanò, D. (2011). Multivariate Jacobi and Laguerre polynomials, infinite-dimensional extensions, and their probabilistic connections with multivariate Hahn and Meixner polynomials, Bernoulli 17, 1095–1125.
Griffiths, R. and Spanò, D. (2013). Orthogonal polynomial kernels and canonical correlations for Dirichlet measures, Bernoulli 19, 548–598.
Groemer, H. (1996). Geometric Applications of Fourier Series and Spherical Harmonics, Cambridge University Press, New York.
Grove, L. C. (1974). The characters of the hecatonicosahedroidal group, J. Reine Angew. Math. 265, 160–169.
Grove, L. C. and Benson, C. T. (1985). Finite Reflection Groups, 2nd edn, Graduate Texts in Mathematics 99, Springer, Berlin–New York.
Grundmann, A. and Möller, H. M. (1978). Invariant integration formulas for the n-simplex by combinatorial methods, SIAM J. Numer. Anal. 15, 282–290.
Haviland, E. K. (1935). On the momentum problem for distributions in more than one dimension, I, II, Amer. J. Math. 57, 562–568; 58, 164–168.
Heckman, G. J. (1987). Root systems and hypergeometric functions II, Compositio Math. 64, 353–373.
Heckman, G. J. (1991a). A remark on the Dunkl differential–difference operators, in Harmonic Analysis on Reductive Groups (Brunswick, ME, 1989), pp. 181–191, Progress in Mathematics 101, Birkhäuser, Boston, MA.
Heckman, G. J. (1991b). An elementary approach to the hypergeometric shift operators of Opdam, Invent. Math. 103, 341–350.
Heckman, G. J. and Opdam, E. M. (1987). Root systems and hypergeometric functions, I, Compositio Math. 64, 329–352.
Helgason, S. (1984). Groups and Geometric Analysis, Academic Press, New York.
Higgins, J. R. (1977). Completeness and Basis Properties of Sets of Special Functions, Cambridge University Press, Cambridge.
Hoffman, M. E. and Withers, W. D. (1988). Generalized Chebyshev polynomials associated with affine Weyl groups, Trans. Amer. Math. Soc. 308, 91–104.
Horn, R. A. and Johnson, C. R. (1985). Matrix Analysis, Cambridge University Press, Cambridge.
Hua, L. K. (1963). Harmonic Analysis of Functions of Several Complex Variables in the Classical Domains, Translations of Mathematical Monographs 6, American Mathematical Society, Providence, RI.
Humphreys, J. E. (1990). Reflection Groups and Coxeter Groups, Cambridge Studies in Advanced Mathematics 29, Cambridge University Press, Cambridge.
Ignatenko, V. F. (1984). On invariants of finite groups generated by reflections, Math. USSR Sb. 48, 551–563.
Ikeda, M. (1967). On spherical functions for the unitary group, I, II, III, Mem. Fac. Engrg Hiroshima Univ. 3, 17–75.
Iliev, P. (2011). Krall–Jacobi commutative algebras of partial differential operators, J. Math. Pures Appl. 96, no. 9, 446–461.
Intissar, A. and Intissar, A. (2006). Spectral properties of the Cauchy transform on $L^2(\mathbb{C}; e^{-|z|^2}\,d\lambda)$, J. Math. Anal. Appl. 313, 400–418.
Ismail, M. (2005). Classical and Quantum Orthogonal Polynomials in One Variable, Encyclopedia of Mathematics and its Applications 98, Cambridge University Press, Cambridge.
Ismail, M. (2013). Analytic properties of complex Hermite polynomials, Trans. Amer. Math. Soc., to appear.
Ismail, M. and Simeonov, P. (2013). Complex Hermite polynomials: their combinatorics and integral operators, Proc. Amer. Math. Soc., to appear.
Itô, K. (1952). Complex multiple Wiener integral, Japan J. Math. 22, 63–86.
Ivanov, K., Petrushev, P. and Xu, Y. (2010). Sub-exponentially localized kernels and frames induced by orthogonal expansions, Math. Z. 264, 361–397.
Ivanov, K., Petrushev, P. and Xu, Y. (2012). Decomposition of spaces of distributions induced by tensor product bases, J. Funct. Anal. 263, 1147–1197.
Jack, H. (1970/71). A class of symmetric polynomials with a parameter, Proc. Roy. Soc. Edinburgh Sect. A 69, 1–18.
Jackson, D. (1936). Formal properties of orthogonal polynomials in two variables, Duke Math. J. 2, 423–434.
James, A. T. and Constantine, A. G. (1974). Generalized Jacobi polynomials as spherical functions of the Grassmann manifold, Proc. London Math. Soc. 29, 174–192.
de Jeu, M. F. E. (1993). The Dunkl transform, Invent. Math. 113, 147–162.
Kakei, S. (1998). Intertwining operators for a degenerate double affine Hecke algebra and multivariable orthogonal polynomials, J. Math. Phys. 39, 4993–5006.
Kalnins, E. G., Miller, W., Jr and Tratnik, M. V. (1991). Families of orthogonal and biorthogonal polynomials on the n-sphere, SIAM J. Math. Anal. 22, 272–294.
Kanjin, Y. (1985). Banach algebra related to disk polynomials, Tôhoku Math. J. 37, 395–404.
Karlin, S. and McGregor, J. (1962). Determinants of orthogonal polynomials, Bull. Amer. Math. Soc. 68, 204–209.
Karlin, S. and McGregor, J. (1975). Some properties of determinants of orthogonal polynomials, in Theory and Application of Special Functions, ed. R. A. Askey, pp. 521–550, Academic Press, New York.
Kato, Y. and Yamamoto, T. (1998). Jack polynomials with prescribed symmetry and hole propagator of spin Calogero–Sutherland model, J. Phys. A 31, 9171–9184.
Kerkyacharian, G., Petrushev, P., Picard, D. and Xu, Y. (2009). Decomposition of Triebel–Lizorkin and Besov spaces in the context of Laguerre expansions, J. Funct. Anal. 256, 1137–1188.
Kim, Y. J., Kwon, K. H. and Lee, J. K. (1998). Partial differential equations having orthogonal polynomial solutions, J. Comput. Appl. Math. 99, 239–253.
Knop, F. and Sahi, S. (1997). A recursion and a combinatorial formula for Jack polynomials, Invent. Math. 128, 9–22.
404 References
Koelink, E. (1996). Eight lectures on quantum groups and q-special functions, Rev.
Colombiana Mat. 30, 93180.
Kogbetliantz, E. (1924). Recherches sur la sommabilit e des s eries ultrasph eriques par la
m ethode des moyennes arithm etiques, J. Math. Pures Appl. 3, no. 9, 107187.
Koornwinder, T. H. (1974a). Orthogonal polynomials in two variables which are eigen-
functions of two algebraically independent partial differential operators, I, II, Proc.
Kon. Akad. v. Wet., Amsterdam 36, 4866.
Koornwinder, T. H. (1974b). Orthogonal polynomials in two variables which are eigen-
functions of two algebraically independent partial differential operators, III, IV, Proc.
Kon. Akad. v. Wet., Amsterdam 36, 357381.
Koornwinder, T. H. (1975). Two-variable analogues of the classical orthogonal polynomi-
als, in Theory and Applications of Special Functions, ed. R. A. Askey, pp. 435495,
Academic Press, New York.
Koornwinder, T. H. (1977). Yet another proof of the addition formula for Jacobi polyno-
mials, J. Math. Anal. Appl. 61, 136141.
Koornwinder, T. H. (1992). AskeyWilson polynomials for root systems of type BC, in
Hypergeometric Functions on Domains of Positivity, Jack Polynomials, and Applica-
tions, pp. 189204, Contemporary Mathematics 138, American Mathematical Society,
Providence, RI.
Koornwinder T. H. and Schwartz, A. L. (1997). Product formulas and associated hyper-
groups for orthogonal polynomials on the simplex and on a parabolic biangle, Constr.
Approx. 13, 537567.
Koornwinder, T. H. and Sprinkhuizen-Kuyper, I. (1978). Generalized power series expan-
sions for a class of orthogonal polynomials in two variables, SIAM J. Math. Anal. 9,
457483.
Kowalski, M. A. (1982a). The recursion formulas for orthogonal polynomials in n
variables, SIAM J. Math. Anal. 13, 309315.
Kowalski, M. A. (1982b). Orthogonality and recursion formulas for polynomials in n
variables, SIAM J. Math. Anal. 13, 316323.
Krall, H. L. and Sheffer, I. M. (1967). Orthogonal polynomials in two variables, Ann. Mat.
Pura Appl. 76, no. 4, 325376.
Kro o, A. and Lubinsky, D. (2013a). Christoffel functions and universality on the boundary
of the ball, Acta Math. Hungarica 140, 117133.
Kro o, A. and Lubinsky, D. (2013b). Christoffel functions and universality in the bulk for
multivariate orthogonal polynomials, Can. J. Math. 65, 600620.
Kwon, K. H., Lee, J. K. and Littlejohn, L. L. (2001). Orthogonal polynomial eigenfunc-
tions of second-order partial differential equations, Trans. Amer. Math. Soc. 353,
36293647.
Lapointe, L. and Vinet, L. (1996). Exact operator solution of the CalogeroSutherland
model, Comm. Math. Phys. 178, 425452.
Laporte, O. (1948). Polyhedral harmonics, Z. Naturforschung 3a, 447456.
Larcher, H. (1959). Notes on orthogonal polynomials in two variables. Proc. Amer. Math.
Soc. 10, 417423.
Lassalle, M. (1991a). Polyn omes de Hermite g en eralis e (in French) C. R. Acad. Sci. Paris
S er. I Math. 313, 579582.
References 405
Lassalle, M. (1991b). Polyn omes de Jacobi g en eralis e (in French) C. R. Acad. Sci. Paris
S er. I Math. 312, 425428.
Lassalle, M. (1991c). Polyn omes de Laguerre g en eralis e (in French) C. R. Acad. Sci. Paris
S er. I Math. 312, 725728.
Lasserre, J. (2012). The existence of Gaussian cubature formulas, J. Approx. Theory, 164,
572585.
Lebedev, N. N. (1972). Special Functions and Their Applications, revised edn, Dover,
New York.
Lee, J. K. and Littlejohn, L. L. (2006). Sobolev orthogonal polynomials in two variables and second order partial differential equations, J. Math. Anal. Appl. 322, 1001–1017.
Li, H., Sun, J. and Xu, Y. (2008). Discrete Fourier analysis, cubature and interpolation on a hexagon and a triangle, SIAM J. Numer. Anal. 46, 1653–1681.
Li, H. and Xu, Y. (2010). Discrete Fourier analysis on fundamental domain and simplex of A_d lattice in d-variables, J. Fourier Anal. Appl. 16, 383–433.
Li, Zh.-K. and Xu, Y. (2000). Summability of product Jacobi expansions, J. Approx. Theory 104, 287–301.
Li, Zh.-K. and Xu, Y. (2003). Summability of orthogonal expansions, J. Approx. Theory 122, 267–333.
Lidl, R. (1975). Tschebyscheffpolynome in mehreren Variablen, J. Reine Angew. Math. 273, 178–198.
Littlejohn, L. L. (1988). Orthogonal polynomial solutions to ordinary and partial differential equations, in Orthogonal Polynomials and Their Applications (Segovia, 1986), Lecture Notes in Mathematics 1329, 98–124, Springer, Berlin.
Logan, B. and Shepp, L. (1975). Optimal reconstruction of a function from its projections, Duke Math. J. 42, 649–659.
Lorentz, G. G. (1986). Approximation of Functions, 2nd edn, Chelsea, New York.
Lyskova, A. S. (1991). Orthogonal polynomials in several variables, Dokl. Akad. Nauk SSSR 316, 1301–1306; translation in Soviet Math. Dokl. 43 (1991), 264–268.
Macdonald, I. G. (1982). Some conjectures for root systems, SIAM J. Math. Anal. 13, 988–1007.
Macdonald, I. G. (1995). Symmetric Functions and Hall Polynomials, 2nd edn, Oxford
University Press, New York.
Macdonald, I. G. (1998). Symmetric Functions and Orthogonal Polynomials, University
Lecture Series 12, American Mathematical Society, Providence, RI.
Maiorov, V. E. (1999). On best approximation by ridge functions, J. Approx. Theory 99, 68–94.
Markett, C. (1982). Mean Cesàro summability of Laguerre expansions and norm estimates with shift parameter, Anal. Math. 8, 19–37.
Marr, R. (1974). On the reconstruction of a function on a circular domain from a sampling of its line integrals, J. Math. Anal. Appl. 45, 357–374.
Máté, A., Nevai, P. and Totik, V. (1991). Szegő's extremum problem on the unit circle, Ann. Math. 134, 433–453.
Mehta, M. L. (1991). Random Matrices, 2nd edn, Academic Press, Boston, MA.
Möller, H. M. (1973). Polynomideale und Kubaturformeln, Thesis, University of Dortmund.
Möller, H. M. (1976). Kubaturformeln mit minimaler Knotenzahl, Numer. Math. 25, 185–200.
Moody, R. and Patera, J. (2011). Cubature formulae for orthogonal polynomials in terms of elements of finite order of compact simple Lie groups, Adv. Appl. Math. 47, 509–535.
Morrow, C. R. and Patterson, T. N. L. (1978). Construction of algebraic cubature rules using polynomial ideal theory, SIAM J. Numer. Anal. 15, 953–976.
Müller, C. (1997). Analysis of Spherical Symmetries in Euclidean Spaces, Springer, New York.
Mysovskikh, I. P. (1970). A multidimensional analog of quadrature formula of Gaussian type and the generalized problem of Radon (in Russian), Vopr. Vychisl. i Prikl. Mat., Tashkent 38, 55–69.
Mysovskikh, I. P. (1976). Numerical characteristics of orthogonal polynomials in two variables, Vestnik Leningrad Univ. Math. 3, 323–332.
Mysovskikh, I. P. (1981). Interpolatory Cubature Formulas, Nauka, Moscow.
Narcowich, F. J., Petrushev, P. and Ward, J. D. (2006). Localized tight frames on spheres, SIAM J. Math. Anal. 38, 574–594.
Nelson, E. (1959). Analytic vectors, Ann. Math. 70, no. 2, 572–615.
Nevai, P. (1979). Orthogonal polynomials, Mem. Amer. Math. Soc. 18, no. 213.
Nevai, P. (1986). Géza Freud, orthogonal polynomials and Christoffel functions. A case study, J. Approx. Theory 48, 3–167.
Nesterenko, M., Patera, J., Szajewska, M. and Tereszkiewicz, A. (2010). Orthogonal polynomials of compact simple Lie groups: branching rules for polynomials, J. Phys. A 43, no. 49, 495207, 27 pp.
Nishino, A., Ujino, H. and Wadati, M. (1999). Rodrigues formula for the nonsymmetric multivariable Laguerre polynomial, J. Phys. Soc. Japan 68, 797–802.
Noumi, M. (1996). Macdonald's symmetric polynomials as zonal spherical functions on some quantum homogeneous spaces, Adv. Math. 123, 16–77.
Nussbaum, A. E. (1966). Quasi-analytic vectors, Ark. Mat. 6, 179–191.
Okounkov, A. and Olshanski, G. (1998). Asymptotics of Jack polynomials as the number of variables goes to infinity, Int. Math. Res. Not. 13, 641–682.
Opdam, E. M. (1988). Root systems and hypergeometric functions, III, IV, Compositio Math. 67, 21–49, 191–209.
Opdam, E. M. (1991). Some applications of hypergeometric shift operators, Invent. Math. 98, 1–18.
Opdam, E. M. (1995). Harmonic analysis for certain representations of graded Hecke algebras, Acta Math. 175, 75–121.
Pérez, T. E., Piñar, M. A. and Xu, Y. (2013). Weighted Sobolev orthogonal polynomials on the unit ball, J. Approx. Theory 171, 84–104.
Petrushev, P. (1999). Approximation by ridge functions and neural networks, SIAM J. Math. Anal. 30, 155–189.
Petrushev, P. and Xu, Y. (2008a). Localized polynomial frames on the ball, Constr. Approx. 27, 121–148.
Petrushev, P. and Xu, Y. (2008b). Decomposition of spaces of distributions induced by Hermite expansions, J. Fourier Anal. Appl. 14, 372–414.
Piñar, M. and Xu, Y. (2009). Orthogonal polynomials and partial differential equations on the unit ball, Proc. Amer. Math. Soc. 137, 2979–2987.
Podkorytov, A. M. (1981). Summation of multiple Fourier series over polyhedra, Vestnik Leningrad Univ. Math. 13, 69–77.
Proriol, J. (1957). Sur une famille de polynômes à deux variables orthogonaux dans un triangle, C. R. Acad. Sci. Paris 245, 2459–2461.
Putinar, M. (1997). A dilation theory approach to cubature formulas, Expos. Math. 15, 183–192.
Putinar, M. (2000). A dilation theory approach to cubature formulas II, Math. Nachr. 211, 159–175.
Putinar, M. and Vasilescu, F. (1999). Solving moment problems by dimensional extension, Ann. Math. 149, 1087–1107.
Radon, J. (1948). Zur mechanischen Kubatur, Monatsh. Math. 52, 286–300.
Reznick, B. (2000). Some concrete aspects of Hilbert's 17th problem, in Real Algebraic Geometry and Ordered Structures, pp. 251–272, Contemporary Mathematics 253, American Mathematical Society, Providence, RI.
Ricci, P. E. (1978). Chebyshev polynomials in several variables (in Italian), Rend. Mat. 11, no. 6, 295–327.
Riesz, F. and Sz. Nagy, B. (1955). Functional Analysis, Ungar, New York.
Rosengren, H. (1998). Multilinear Hankel forms of higher order and orthogonal polynomials, Math. Scand. 82, 53–88.
Rosengren, H. (1999). Multivariable orthogonal polynomials and coupling coefficients for discrete series representations, SIAM J. Math. Anal. 30, 232–272.
Rosier, M. (1995). Biorthogonal decompositions of the Radon transform, Numer. Math. 72, 263–283.
Rösler, M. (1998). Generalized Hermite polynomials and the heat equation for Dunkl operators, Comm. Math. Phys. 192, 519–542.
Rösler, M. (1999). Positivity of Dunkl's intertwining operator, Duke Math. J. 98, 445–463.
Rösler, M. and Voit, M. (1998). Markov processes related with Dunkl operators, Adv. Appl. Math. 21, 575–643.
Rozenblyum, A. V. and Rozenblyum, L. V. (1982). A class of orthogonal polynomials of several variables, Differential Eq. 18, 1288–1293.
Rozenblyum, A. V. and Rozenblyum, L. V. (1986). Orthogonal polynomials of several variables, which are connected with representations of groups of Euclidean motions, Differential Eq. 22, 1366–1375.
Rudin, W. (1991). Functional Analysis, 2nd edn., International Series in Pure and Applied
Mathematics, McGraw-Hill, New York.
Sahi, S. (1996). A new scalar product for nonsymmetric Jack polynomials, Int. Math. Res. Not. 20, 997–1004.
Schmid, H. J. (1978). On cubature formulae with a minimum number of knots, Numer. Math. 31, 281–297.
Schmid, H. J. (1995). Two-dimensional minimal cubature formulas and matrix equations, SIAM J. Matrix Anal. Appl. 16, 898–921.
Schmid, H. J. and Xu, Y. (1994). On bivariate Gaussian cubature formula, Proc. Amer. Math. Soc. 122, 833–842.
Schmüdgen, K. (1990). Unbounded Operator Algebras and Representation Theory, Birkhäuser, Boston, MA.
Selberg, A. (1944). Remarks on a multiple integral (in Norwegian), Norsk Mat. Tidsskr. 26, 71–78.
Shephard, G. C. and Todd, J. A. (1954). Finite unitary reflection groups, Canad. J. Math. 6, 274–304.
Shishkin, A. D. (1997). Some properties of special classes of orthogonal polynomials in two variables, Integr. Transform. Spec. Funct. 5, 261–272.
Shohat, J. and Tamarkin, J. (1943). The Problem of Moments, Mathematics Surveys 1,
American Mathematical Society, Providence, RI.
Sprinkhuizen-Kuyper, I. (1976). Orthogonal polynomials in two variables. A further analysis of the polynomials orthogonal over a region bounded by two lines and a parabola, SIAM J. Math. Anal. 7, 501–518.
Stanley, R. P. (1989). Some combinatorial properties of Jack symmetric functions, Adv. Math. 77, 76–115.
Stein, E. M. (1970). Singular Integrals and Differentiability Properties of Functions,
Princeton University Press, Princeton, NJ.
Stein, E. M. and Weiss, G. (1971). Introduction to Fourier Analysis on Euclidean Spaces,
Princeton University Press, Princeton, NJ.
Stokman, J. V. (1997). Multivariable big and little q-Jacobi polynomials, SIAM J. Math. Anal. 28, 452–480.
Stokman, J. V. and Koornwinder, T. H. (1997). Limit transitions for BC type multivariable orthogonal polynomials, Canad. J. Math. 49, 373–404.
Stroud, A. (1971). Approximate Calculation of Multiple Integrals, Prentice-Hall, Englewood Cliffs, NJ.
Suetin, P. K. (1999). Orthogonal Polynomials in Two Variables, translated from the 1988
Russian original by E. V. Pankratiev, Gordon and Breach, Amsterdam.
Sutherland, B. (1971). Exact results for a quantum many-body problem in one dimension, Phys. Rev. A 4, 2019–2021.
Sutherland, B. (1972). Exact results for a quantum many-body problem in one dimension, II, Phys. Rev. A 5, 1372–1376.
Szegő, G. (1975). Orthogonal Polynomials, 4th edn, American Mathematical Society Colloquium Publication 23, American Mathematical Society, Providence, RI.
Thangavelu, S. (1992). Transplantation, summability and multipliers for multiple Laguerre expansions, Tôhoku Math. J. 44, 279–298.
Thangavelu, S. (1993). Lectures on Hermite and Laguerre Expansions, Princeton Univer-
sity Press, Princeton, NJ.
Thangavelu, S. and Xu, Y. (2005). Convolution operator and maximal function for the Dunkl transform, J. Anal. Math. 97, 25–55.
Tratnik, M. V. (1991a). Some multivariable orthogonal polynomials of the Askey tableau – continuous families, J. Math. Phys. 32, 2065–2073.
Tratnik, M. V. (1991b). Some multivariable orthogonal polynomials of the Askey tableau – discrete families, J. Math. Phys. 32, 2337–2342.
Uglov, D. (2000). Yangian Gelfand–Zetlin bases, gl_N-Jack polynomials, and computation of dynamical correlation functions in the spin Calogero–Sutherland model, in Calogero–Sutherland–Moser Models, pp. 485–495, CRM Series in Mathematical Physics, Springer, New York.
Ujino, H. and Wadati, M. (1995). Orthogonal symmetric polynomials associated with the quantum Calogero model, J. Phys. Soc. Japan 64, 2703–2706.
Ujino, H. and Wadati, M. (1997). Orthogonality of the Hi-Jack polynomials associated with the Calogero model, J. Phys. Soc. Japan 66, 345–350.
Ujino, H. and Wadati, M. (1999). Rodrigues formula for the nonsymmetric multivariable Hermite polynomial, J. Phys. Soc. Japan 68, 391–395.
Verlinden, P. and Cools, R. (1992). On cubature formulae of degree 4k + 1 attaining Möller's lower bound for integrals with circular symmetry, Numer. Math. 61, 395–407.
Vilenkin, N. J. (1968). Special Functions and the Theory of Group Representations, American Mathematical Society Translation of Mathematics Monographs 22, American Mathematical Society, Providence, RI.
Vilenkin, N. J. and Klimyk, A. U. (1991a). Representation of Lie Groups and Special
Functions. Vol. 1. Simplest Lie Groups, Special Functions and Integral Transforms,
Mathematics and its Applications (Soviet Series) 72, Kluwer, Dordrecht.
Vilenkin, N. J. and Klimyk, A. U. (1991b). Representation of Lie Groups and Special Functions. Vol. 2. Class I Representations, Special Functions, and Integral Transforms, Mathematics and its Applications (Soviet Series) 72, Kluwer, Dordrecht, Netherlands.
Vilenkin, N. J. and Klimyk, A. U. (1991c). Representation of Lie Groups and Special Functions. Vol. 3. Classical and Quantum Groups and Special Functions, Mathematics and its Applications (Soviet Series) 72, Kluwer, Dordrecht, Netherlands.
Vilenkin, N. J. and Klimyk, A. U. (1995). Representation of Lie Groups and Special Functions. Recent Advances, Mathematics and its Applications 316, Kluwer, Dordrecht, Netherlands.
Volkmer, H. (1999). Expansions in products of Heine–Stieltjes polynomials, Constr. Approx. 15, 467–480.
Vretare, L. (1984). Formulas for elementary spherical functions and generalized Jacobi polynomials, SIAM J. Math. Anal. 15, 805–833.
Wade, J. (2010). A discretized Fourier orthogonal expansion in orthogonal polynomials on a cylinder, J. Approx. Theory 162, 1545–1576.
Wade, J. (2011). Cesàro summability of Fourier orthogonal expansions on the cylinder, J. Math. Anal. Appl. 402 (2013), 446–452.
Waldron, S. (2006). On the Bernstein–Bézier form of Jacobi polynomials on a simplex, J. Approx. Theory 140, 86–99.
Waldron, S. (2008). Orthogonal polynomials on the disc, J. Approx. Theory 150, 117–131.
Waldron, S. (2009). Continuous and discrete tight frames of orthogonal polynomials for a radially symmetric weight, Constr. Approx. 30, 33–52.
Wünsche, A. (2005). Generalized Zernike or disc polynomials, J. Comp. Appl. Math. 174, 135–163.
Xu, Y. (1992). Gaussian cubature and bivariable polynomial interpolation, Math. Comput. 59, 547–555.
Xu, Y. (1993a). On multivariate orthogonal polynomials, SIAM J. Math. Anal. 24, 783–794.
Xu, Y. (1993b). Unbounded commuting operators and multivariate orthogonal polynomials, Proc. Amer. Math. Soc. 119, 1223–1231.
Xu, Y. (1994a). Multivariate orthogonal polynomials and operator theory, Trans. Amer. Math. Soc. 343, 193–202.
Xu, Y. (1994b). Block Jacobi matrices and zeros of multivariate orthogonal polynomials, Trans. Amer. Math. Soc. 342, 855–866.
Xu, Y. (1994c). Recurrence formulas for multivariate orthogonal polynomials, Math. Comput. 62, 687–702.
Xu, Y. (1994d). On zeros of multivariate quasi-orthogonal polynomials and Gaussian cubature formulae, SIAM J. Math. Anal. 25, 991–1001.
Xu, Y. (1994e). Solutions of three-term relations in several variables, Proc. Amer. Math. Soc. 122, 151–155.
Xu, Y. (1994f). Common Zeros of Polynomials in Several Variables and Higher Dimen-
sional Quadrature, Pitman Research Notes in Mathematics Series 312, Longman,
Harlow.
Xu, Y. (1995). Christoffel functions and Fourier series for multivariate orthogonal polynomials, J. Approx. Theory 82, 205–239.
Xu, Y. (1996a). Asymptotics for orthogonal polynomials and Christoffel functions on a ball, Meth. Anal. Appl. 3, 257–272.
Xu, Y. (1996b). Lagrange interpolation on Chebyshev points of two variables, J. Approx. Theory 87, 220–238.
Xu, Y. (1997a). On orthogonal polynomials in several variables, in Special Functions, q-Series and Related Topics, pp. 247–270, The Fields Institute for Research in Mathematical Sciences, Communications Series 14, American Mathematical Society, Providence, RI.
Xu, Y. (1997b). Orthogonal polynomials for a family of product weight functions on the spheres, Canad. J. Math. 49, 175–192.
Xu, Y. (1997c). Integration of the intertwining operator for h-harmonic polynomials associated to reflection groups, Proc. Amer. Math. Soc. 125, 2963–2973.
Xu, Y. (1998a). Intertwining operator and h-harmonics associated with reflection groups, Canad. J. Math. 50, 193–209.
Xu, Y. (1998b). Orthogonal polynomials and cubature formulae on spheres and on balls, SIAM J. Math. Anal. 29, 779–793.
Xu, Y. (1998c). Orthogonal polynomials and cubature formulae on spheres and on simplices, Meth. Anal. Appl. 5, 169–184.
Xu, Y. (1998d). Summability of Fourier orthogonal series for Jacobi weight functions on the simplex in R^d, Proc. Amer. Math. Soc. 126, 3027–3036.
Xu, Y. (1999a). Summability of Fourier orthogonal series for Jacobi weight on a ball in R^d, Trans. Amer. Math. Soc. 351, 2439–2458.
Xu, Y. (1999b). Asymptotics of the Christoffel functions on a simplex in R^d, J. Approx. Theory 99, 122–133.
Xu, Y. (1999c). Cubature formulae and polynomial ideals, Adv. Appl. Math. 23, 211–233.
Xu, Y. (2000a). Harmonic polynomials associated with reflection groups, Canad. Math. Bull. 43, 496–507.
Xu, Y. (2000b). Funk–Hecke formula for orthogonal polynomials on spheres and on balls, Bull. London Math. Soc. 32, 447–457.
Xu, Y. (2000c). Constructing cubature formulae by the method of reproducing kernel, Numer. Math. 85, 155–173.
Xu, Y. (2000d). A note on summability of Laguerre expansions, Proc. Amer. Math. Soc. 128, 3571–3578.
Xu, Y. (2000e). A product formula for Jacobi polynomials, in Special Functions (Hong Kong, 1999), pp. 423–430, World Scientific, River Edge, NJ.
Xu, Y. (2001a). Orthogonal polynomials and summability in Fourier orthogonal series on spheres and on balls, Math. Proc. Cambridge Phil. Soc. 131, 139–155.
Xu, Y. (2001b). Orthogonal polynomials on the ball and the simplex for weight functions with reflection symmetries, Constr. Approx. 17, 383–412.
Xu, Y. (2004). On discrete orthogonal polynomials of several variables, Adv. Appl. Math. 33, 615–632.
Xu, Y. (2005a). Weighted approximation of functions on the unit sphere, Constr. Approx. 21, 1–28.
Xu, Y. (2005b). Second order difference equations and discrete orthogonal polynomials of two variables, Int. Math. Res. Not. 8, 449–475.
Xu, Y. (2005c). Monomial orthogonal polynomials of several variables, J. Approx. Theory 133, 1–37.
Xu, Y. (2005d). Rodrigues type formula for orthogonal polynomials on the unit ball, Proc. Amer. Math. Soc. 133, 1965–1976.
Xu, Y. (2006a). A direct approach to the reconstruction of images from Radon projections, Adv. Applied Math. 36, 388–420.
Xu, Y. (2006b). A family of Sobolev orthogonal polynomials on the unit ball, J. Approx. Theory 138, 232–241.
Xu, Y. (2006c). Analysis on the unit ball and on the simplex, Electron. Trans. Numer. Anal. 25, 284–301.
Xu, Y. (2007). Reconstruction from Radon projections and orthogonal expansion on a ball, J. Phys. A: Math. Theor. 40, 7239–7253.
Xu, Y. (2008). Sobolev orthogonal polynomials defined via gradient on the unit ball, J. Approx. Theory 152, 52–65.
Xu, Y. (2010). Fourier series and approximation on hexagonal and triangular domains, Constr. Approx. 31, 115–138.
Xu, Y. (2012). Orthogonal polynomials and expansions for a family of weight functions in two variables, Constr. Approx. 36, 161–190.
Xu, Y. (2013). Complex vs. real orthogonal polynomials of two variables, Integral Transforms Spec. Funct., to appear. arXiv:1307.7819.
Yamamoto, T. (1995). Multicomponent Calogero model of B_N-type confined in a harmonic potential, Phys. Lett. A 208, 293–302.
Yan, Z. M. (1992). Generalized hypergeometric functions and Laguerre polynomials in two variables, in Hypergeometric Functions on Domains of Positivity, Jack Polynomials, and Applications, pp. 239–259, Contemporary Mathematics 138, American Mathematical Society, Providence, RI.
Zernike, F. and Brinkman, H. C. (1935). Hypersphärische Funktionen und die in sphärischen Bereichen orthogonalen Polynome, Proc. Kon. Akad. v. Wet., Amsterdam 38, 161–170.
Zygmund, A. (1959). Trigonometric Series, Cambridge University Press, Cambridge.
Author Index
Abramowitz, M., 169, 308, 396
Agahanov, C. A., 55, 396
Akhiezer, N. I., 87, 112, 396
Aktas, R., 171–173, 396
de Álvarez, M., 55, 396
Andrews, G. E., xvi, 27, 396
Aomoto, K., 396
Appell, P., xv, 27, 54, 55, 136, 143, 148, 153,
171, 288, 396
Area, I., 112, 396
Askey, R., xvi, 27, 234, 299, 315, 394, 396
Atkinson, K., 136, 172, 396
Axler, S., 136, 397
Bacry, H., 172, 397
Badkov, V., 257, 397
Bailey, W. N., xv, 27, 250, 311, 397
Baker, T. H., 288, 363, 376, 397
Barrio, R., 112, 397
Beerends, R. J., 172, 353, 362, 397
Benson, C. T., 174, 207, 402
Berens, H., 55, 113, 172, 313, 314, 397
Berg, C., 68, 69, 112, 397
Bergeron, N., 353, 397
Bertran, M., 112, 397
Bojanov, B., 287, 398
Bos, L., 315, 398
Bourdon. B., 136, 397
Braaksma, B. L. J., 136, 398
Bracciali, C., 172, 398
Brinkman, H. C., 55, 412
Buchstaber, V., 207, 398
Calogero, F., 395, 398
Chen, K. K., 316
Cheney, E. W., 315, 398
Cherednik, I., 362, 398
Chevalley, C., 207, 398
Chihara, T. S., 27, 398
Christensen, J. P. R., 68, 397
Cichoń, D., 112, 398
Connett, W. C., 55, 398
Constantine, A. G., 353, 403
Cools, R., 113, 409
van der Corput, J. G., 133, 398
Coxeter, H. S. M., 174, 178, 207, 398
Dai, F., 136, 257, 316, 317, 398, 399
Debiard, A., 399
Delgado, A. M., 56, 112, 172, 398, 399
Della Vecchia, B., 315, 398
DeVore, R. A., 315, 399
Didon, F., 136, 143
van Diejen, J. F., 172, 395, 399
Dieudonné, J., xvi, 399
Dijksma, A., 136, 399
Dubiner, M., 55, 399
Dunkl, C. F., 55, 56, 136, 193, 207, 256, 257,
287, 362, 363, 394, 395, 397, 399, 400
Dunn, K. B., 172, 400, 401
Eier, R., 172, 400, 401
Engelis, G. K., 55, 401
Engels, H., 113, 401
Erdélyi, A., xv, xvi, 27, 54, 57, 102, 136, 143, 148, 153, 171, 247, 283, 286, 288, 316, 401
Etingof, P., 363, 401
Exton, H., 5, 27, 401
Fackerell, E. D., 55, 401
Farouki, R. T., 171, 401
Felder, G., 207, 398
Fernández, L., 55, 172, 396, 398, 399, 401
de Fériet, J. K., xv, 27, 54, 55, 136, 143, 148, 153, 171, 288, 396
Filbir, F., 55, 398
Folland, G. B., 55, 401
Forrester, P. J., 288, 363, 376, 394, 397, 401
Freud, G., 27, 69, 315, 401
Fuglede, B., 68, 112, 401
Görlich, E., 308, 402
Garsia, A. M., 353, 397
Gasper, G., xvi, 311, 312, 315, 401
Gaveau, B., 399
Gekhtman, M. I., 112, 401
Genest, V., 395, 401
Geronimo, J. S., 56, 112, 399
Ghanmi, A., 55, 402
Godoy, E., 112, 396
Goodman, T. N. T., 171, 401
Griffiths, R., 171, 402
Groemer, H., 136, 402
Grove, L. C., 174, 207, 402
Grundmann, A., 171, 402
Han, W., 136, 396
Hanlon, P., 363, 400
Hansen, O., 172, 396
Haviland, E. K., 68, 402
Heckman, G. J., 203, 207, 362, 402
Helgason, S., 136, 257, 402
Hermite, C., 136, 143
Higgins, J. R., 70, 402
Hoffman, M. E., 402
Horn, R. A., 66, 74, 402
Hua, L. K., xvi, 402
Humphreys, J. E., 174, 207
Ignatenko, V. F., 187, 402
Ikeda, M., 56, 403
Iliev, P., 56, 112, 172, 399, 403
Intissar, A., 55, 403
Ismail, M., 55, 395, 401, 403
Itô, K., 55, 403
Ivanov, K., 317, 403
Jack, H., 353, 395, 403
Jackson, D., 57, 65, 112, 403
James, A. T., 353, 403
de Jeu, M. F. E., 193, 400, 403
Johnson, C. R., 66, 74, 402
Kakei, S., 395, 403
Kalnins, E. G., 257, 403
Kalyuzhny, A. A., 112, 401
Kanjin, Y., 55, 403
Karlin, S., 172, 403
Kato, Y., 395, 403
Kerkyacharian, G., 317, 403
Kim, Y. J., 55, 403
Klimyk, A. U., xvi, 56, 409
Knop, K., 363, 403
Koelink, E., xvii, 404
Kogbetliantz, E., 299, 404
Koornwinder, T. H., xv, xvi, 54–56, 136, 171, 399, 404, 408
Koschmieder, L., 316
Kowalski, M. A., 112, 404
Krall, H. L., 37, 55, 57, 112, 404
Kroó, A., 315, 404
Kwon, K. H., 55, 403
Lapointe, L., 362, 395, 404
Laporte, O., 207, 404
Larcher, H., 55, 404
Lassalle, M., 363, 394, 404, 405
Lasserre, J., 113, 405
Lebedev, N. N., xv, 350, 405
Lee, J. K., 55, 172, 403, 405
Li, H., 56, 172, 405
Li, Zh.-K., 316, 317, 405
Lidl, R., 56, 172, 400, 401, 405
Littlejohn, L. L., 55, 172, 405
Littler, R. A., 55, 401
Logan, B., 54, 287, 405
Lorentz, G. G., 291, 315, 399, 405
Lubinsky, D. S., 315, 404
Luque, J-G., 363, 400
Lyskova, A. S., 55, 405
Möller, H. M., 113, 171, 402, 406
Müller, C., 136, 256, 406
Máté, A., 315, 405
Macdonald, I. G., 161, 256, 353, 375, 405
Maiorov, V. E., 171, 405
Marcellán, F., 112, 399
Markett, C., 308, 402, 405
Marr, R., 54, 405
Mastroianni, G., 315, 398
McGregor, J., 172, 403
Mehta, M. L., 405
Meulenbeld, B., 136, 398
Miller, W. Jr., 257, 403
Moody, R., 172, 406
Morrow, C. R., 113, 406
Moser, W. O. J., 207
Mysovskikh, I. P., 108, 109, 113, 406
Nagy, B. Sz., 83, 407
Narcowich, F., 317, 406
Nelson, E., 87, 406
Nesterenko, M., 172, 406
Nevai, P., 257, 291, 315, 405, 406
Nishino, A., 395, 406
Noumi, M., xvi, 406
Nussbaum, A. E., 68, 406
Okounkov, A., 362, 406
Olshanski, G., 362, 406
Opdam, E. M., 172, 193, 207, 353, 362, 394,
395, 397, 400, 402, 406
Pérez, T. E., 55, 172, 396, 398, 399, 401, 406
Patera, J., 172, 406
Patterson, T. N. L., 113, 406
Peña, J. M., 112, 397
Petrova, G., 287, 398
Petrushev, P., 171, 317, 403, 406, 407
Picard, D., 317, 403
Piñar, M. A., 55, 172, 396, 398, 399, 401, 406, 407
Podkorytov, A. M., 314, 407
Proriol, J., 55, 407
Putinar, M., 112, 113, 407
Radon, J., 113, 407
Rahman, M., xvi, 401
Ramey, W., 136, 397
Ramirez, D. E., 136, 400
Ressel, P., 68, 397
Reznick, B., 112, 407
Ricci, P. E., 172, 407
Riesz, F., 83, 407
Ronveaux, A., 112, 396
Rosengren, H., 171, 407
Rosier, M., 171, 407
Rösler, M., 200, 207, 222, 287, 288, 394, 407
Roy, R., xvi, 27, 396
Rozenblyum, A. V., 407
Rozenblyum, L. V., 407
Rudin, W., 83, 298, 407
Sahi, S., 363, 403, 407
Sauer, T., 112, 171, 397, 401
Schaake, G., 133, 398
Schmüdgen, K., 68, 408
Schmid, H., 55, 56, 113, 172, 397
Schwartz, A. L., 55, 171, 398, 404
Selberg, A., 376, 408
Sheffer, I. M., 37, 55, 57, 112, 404
Shephard, G. C., 184, 408
Shepp, L., 54, 287, 405
Shishkin, A. D., 56
Shohat, J., 68, 112, 408
Simeonov, P., 55, 403
Spanò, D., 171, 402
Sprinkhuizen-Kuyper, I., 56, 404, 408
Stanley, R. P., 351, 353, 395, 408
Stegun, I., 169, 308, 396
Stein, E. M., 136, 257, 299, 314, 408
Stochel, J., 112, 398
Stokman, J. V., xvii, 408
Stroud, A., 113, 408
Suetin, P. K., 55, 56, 65, 112, 408
Sun, J., 56, 405
Sutherland, B., 395, 408
Szafraniec, F. H., 112, 398
Szajewska, M., 172, 406
Szegő, G., xv, 27, 292, 298, 307, 408
Tamarkin, J., 68, 112, 408
Tereszkiewicz, A., 172, 406
Thangavelu, S., 257, 308, 310, 316, 408
Thill, M., 69, 397
Totik, V., 315, 405
Tratnik, M. V., 257, 403, 408, 409
Uglov, D., 395, 409
Ujino, H., 394, 395, 406, 409
Vasilescu, F., 112, 407
Verlinden, P., 113, 409
Veselov, A., 207, 398
Vilenkin, N. J., xvi, 56, 409
Vinet, L., 362, 395, 399, 401, 404
Voit, M., 394, 407
Volkmer, H., 257, 409
Vretare, L., 172, 409
Wünsche, A., 54, 56, 410
Wadati, M., 394, 395, 406, 409
Wade, J., 287, 317, 409
Waldron, S., 54, 171, 409
Wang, H., 257, 398
Ward, J., 317, 406
Warnaar, S. O., 394, 401
Weiss, G., 136, 257, 299, 314, 408
Withers, W. D., 402
Xu, Y., 54–56, 87, 112, 113, 136, 171–173, 256, 257, 287, 313–317, 396–399, 403, 405–408, 410, 411
Yamamoto, T., 395, 403, 412
Yan, Z. M., 394, 412
Zarzo, A., 112, 396
Zernike, F., 55, 412
Zhedanov, A., 395, 401
zu Castell, W., 55, 398
Zygmund, A., 293, 412
Symbol Index
a_R(x), 177
a_α, 332
ad(S), 204
c_h, 216
c'_h, 216
‖f‖_A, 199
h_*(λ), 351
h^*(λ), 351
h_κ, 208
j_α, 332
p(D), 203
p_λ(x), 323
proj_{n,h} P, 213
⟨p, q⟩_∂, 217
⟨p, q⟩_h, 218
⟨p, q⟩_p, 342
⟨p, q⟩_B, 365
q_α(w; s), 195
q̃_i(w; s), 194
r_n^d, 59
(t)_λ, 337
|x|, 150
x^α, 58
(x)_n, 2
w_κ^T, 150
A(B^d), 199
B(x, y), 1
B^d, 119, 141
C_n^λ(x), 17
C_n^{(λ,μ)}(x), 25
D_i, 188
D_i^*, 215
D̃_i, 216
D_u, 188
Δ_{h,0}, 192
E_α, 331
Ẽ_α, 331
F_A, 6
F_B, 6
F_C, 6
F_D, 6
₂F₁, 3
G_n, 62
H_n(x), 13
H_n^μ(x), 24
ℋ_n^d, 115
ℋ_n^d(h_κ²), 210
ℋ_n^{d+1}(H), 122
T_i, 352
J_A(t), 253
J_λ(x; 1/α), 334, 354
J_i, 82
J_{n,i}, 104
K_W(x, y), 202
K_n(x, y), 97
L_n^α(x), 14
L_n, 71
L_{n,i}, 71
L(Π^d), 203
L_{κ,n}(Π^d), 203
L_κ(Π^d), 203
L_s, 60
M, 67
ℕ_0, 58
ℕ_0^{d,P}, 319
O(d), 174
P_α(W_μ; x), 143
P_λ^{n,-1/2}(x), 155
P_λ^{n,1/2}(x), 155
P_n(H; x, y), 124
P_n^λ(x), 16
P_n^{(α,β)}(x), 20
P_{j,ν}(W_μ^B; x), 142
ℙ_n, 61
𝒫_n^d, 58
P_n(f; x), 97
P_n(x, y), 97
R, 176
R(w), 175
R_+, 176
SO(d), 174
S^{d-1}, 114
S_d, 179
#S_d(α), 332
S_n(f), 97
T^d, 129, 150
T_hom^d, 131
T_n(x), 19
𝕋^d, 343
U_α(x), 145, 152
U_n(x), 20
U_i, 321
V, 198
V^{(x)}, 201
V_α(x), 144, 151
𝒱_n^d, 60
𝒱_n^d(W), 121
W_{a,b}(x), 138
W_d, 180
W_H(x), 139
W_H^B(x), 120
W_H^m(x), 125
W_κ^L(x), 141
W_κ^T(x), 150
W_μ^B, 141
|α|, 58
∂_j, 143
α⁺, 319
α^R, 319
\binom{λ}{μ}, 376
γ_κ, 208
Γ(x), 1
Δ, 115
Δ_h, 191
λ_κ, 208
Λ_n(x), 100
ξ_i, 327
μ_C, 88
Π^d, 58
Π_n^d, 58
π_W, 184
σ_{d-1}, 116
σ_u, 175
dω, 116
Subject Index
adjoint map, 204
adjoint operator
of D_i on R^d, 216
of D_i on S^{d-1}, 215
admissible
operator, 37
weight function, 125
alternating polynomial, 177
Appell polynomials, 152
Bessel function, 253
κ-Bessel function, 202
beta function, 1
biorthogonal bases on the ball, 147
biorthogonal polynomials, 102, 137
on B^d, 145
on T^d, 152
block Jacobi matrices, 82
truncated, 104
Calogero–Sutherland systems, 385
Cauchy kernel, 248
Cesàro means, 293
chambers, 178
fundamental, 178
Chebyshev polynomials, 19
Christoffel–Darboux formula
one-variable, 10
several-variable, 98
Christoffel function, 100
Chu–Vandermonde sum, 4
collection, 58
common zeros, 103
complete integrability, 386
completely monotone function, 312
composition (multi-index), 319
Coxeter group, 176, 178
cubature formula, 107, 113
Gaussian, 108
positive, 108
dihedral group, 180, 240
Dirichlet integral, 150
dominance order, 319
Dunkl operator, 174, 188, 272
for abelian group Z_2^d, 228
for dihedral groups, 241
Dunkl transform, 250
Favard's theorem, 73, 86
Ferrers diagram, 338
finite reflection group, 178
irreducible, 178
1-form
-closed, 196
-exact, 196
Fourier orthogonal series, 96, 101
Funk–Hecke formula
for h-harmonics, 223
for ordinary harmonics, 119
for polynomials on B^d, 269
gamma function, 1
Gaussian cubature, 157
Gaussian quadrature, 12
Gegenbauer polynomials, 16
generalized
binomial coefficients, 376
Gegenbauer polynomials, 25
Hermite polynomials, 24
Pochhammer symbol, 337
harmonic oscillator, 386
harmonic polynomials, 114
h-harmonics, 209
projection operator, 213
space of, 210
Hermite polynomials, 13
Cartesian, 279
spherical polar, 278
homogeneous polynomials, 58
hook-length product
lower, 351
upper, 351
hypergeometric function, 3
Gegenbauer polynomials, 16, 18
Jacobi polynomials, 20
hyperoctahedral group, 180
hypersurface, 108
inner product
biorthogonal type, 341
h inner product, 218
permissible, 321
torus, 343
intertwining operator, 198
invariant polynomials, 184
Jack polynomials, 334, 354
Jacobi polynomials, 20, 138
on the disk, 39, 81
on the square, 39, 79
on the triangle, 40, 80
joint matrix, 71
Koornwinder polynomials, 156
Laguerre polynomials, 14
Laplace–Beltrami operator, 118
Laplace operator, 114
Laplace series, 123, 296
h-Laplacian, 191
Lauricella functions, 5
Lauricella series, 145
leading-coefcient matrix, 62
Legendre polynomials, 19
length function, 183
Mehler formula, 280
moment, 67
Hamburger's theorem, 68
matrix, 61
moment functional, 60
positive definite, 63
quasi-definite, 73
moment problem, 67
determinate, 67
moment sequences, 67
monic orthogonal basis, 137
monomial, 58
multi-index notation, 58
multiple Jacobi polynomials, 138
multiplicity function, 188
nonsymmetric Jack–Hermite polynomials, 361
nonsymmetric Jack polynomials, 328
order
dominance, 319
graded lexicographic, 59
lexicographic, 59
total, 59
orthogonal group, 174
orthogonal polynomials
Appells, 143
complex Hermite, 43
disk, 44
for radial weight, 40
generalized Chebyshev, 52, 162
generalized Hermite, 278
generalized Laguerre, 283
in complex variables, 41
Koornwinder, first, 45
Koornwinder, second, 50
monic, 65
monic on ball, 144, 267
monic on simplex, 151, 276
multiple Hermite, 139
multiple Laguerre, 141
one-variable, 6
on parabolic domain, 40
on the ball, 141, 258
on the simplex, 150, 271
on the triangle, 35
on the unit disk, 30
orthonormal, 64
product Hermite, 30
product Jacobi, 30
product Laguerre, 30
product type, 29
product weight, 138
radial weight, 138
Sobolev type, 165
two-variable, 28
via symmetric functions, 154
parabolic subgroup, 182
partition, 319
Pochhammer symbol (shifted factorial), 2
Poincaré series, 184
Poisson kernel, 222
positive roots, 176
raising operator, 325
reproducing kernel, 97
associated with Z_2^d, 233
on the ball, 124, 148, 265, 268
on the simplex, 153, 274, 275
on the sphere, 124
h-spherical harmonics, 221
Rodrigues formula
on the ball, 145, 263
on the disk, 32
on the simplex, 152
on the triangle, 36
root system, 176
indecomposable, 178
irreducible, 178
rank of, 178
reduced, 176
rotation-invariant weight function, 138
Saalschütz formula, 5
Selberg–Macdonald integral, 373
simple roots, 178
special orthogonal group, 174
spherical harmonics, 114, 139
spherical polar coordinates, 116
structural constant, 9
summability of orthogonal expansion
for Hermite polynomials, 306
for Jacobi polynomials, 311
for Laguerre polynomials, 306
on the ball, 299
on the simplex, 304
on the sphere, 296
surface area, 116
symmetric group, 179
three-term relation, 70
commutativity conditions, 82
for orthogonal polynomials on the disk, 81
for orthogonal polynomials on the square, 79
for orthogonal polynomials on the triangle, 80
rank conditions, 72
torus, 343
total degree, 58
type B operators, 365
Van der Corput–Schaake inequality, 133
weight function
admissible, 125
centrally symmetric, 76
dihedral-invariant, 240
Z_2^d-invariant, 228
quasi-centrally-symmetric, 78
reection-invariant, 208
S-symmetric, 119
zonal harmonic, 119