
Convolution Kernels for Natural Language

Michael Collins
AT&T Labs–Research
180 Park Avenue, Florham Park, NJ 07932
mcollins@research.att.com

Nigel Duffy
Department of Computer Science
University of California at Santa Cruz
nigeduff@cse.ucsc.edu

Abstract

We describe the application of kernel methods to Natural Language Processing
(NLP) problems. In many NLP tasks the objects being modeled are strings, trees,
graphs or other discrete structures which require some mechanism to convert them
into feature vectors. We describe kernels for various natural language structures,
allowing rich, high dimensional representations of these structures. We show how
a kernel over trees can be applied to parsing using the voted perceptron algorithm,
and we give experimental results on the ATIS corpus of parse trees.

1 Introduction

Kernel methods have been widely used to extend the applicability of many well-known al-
gorithms, such as the Perceptron [1], Support Vector Machines [6], or Principal Component
Analysis [15]. A key property of these algorithms is that the only operation they require
is the evaluation of dot products between pairs of examples. One may therefore replace
the dot product with a Mercer kernel, implicitly mapping feature vectors in $\mathbb{R}^d$ into a new
space $\mathbb{R}^N$, and applying the original algorithm in this new feature space. Kernels provide
an efficient way to carry out these calculations when $N$ is large or even infinite.
This paper describes the application of kernel methods to Natural Language Processing

(NLP) problems. In many NLP tasks the input domain cannot be neatly formulated as a sub-
set of $\mathbb{R}^d$. Instead, the objects being modeled are strings, trees or other discrete structures
which require some mechanism to convert them into feature vectors. We describe kernels
for various NLP structures, and show that they allow computationally feasible representa-
tions in very high dimensional feature spaces, for example a parse tree representation that
tracks all subtrees. We show how a tree kernel can be applied to parsing using the percep-
tron algorithm, giving experimental results on the ATIS corpus of parses. The kernels we
describe are instances of “Convolution Kernels”, which were introduced by Haussler [10]
and Watkins [16], and which involve a recursive calculation over the “parts” of a discrete
structure. Although we concentrate on NLP tasks in this paper, the kernels should also be
useful in computational biology, which shares similar problems and structures.

1.1 Natural Language Tasks

Figure 1 shows some typical structures from NLP tasks. Each structure involves an “ob-
served” string (a sentence), and some hidden structure (an underlying state sequence or
tree). We assume that there is some training set of structures, and that the task is to learn
a) Lou Gerstner is chairman of IBM
   [S [NP Lou Gerstner ] [VP is [NP chairman [PP of [NP IBM ] ] ] ] ]
b) Lou Gerstner is chairman of IBM
   Lou/SP Gerstner/CP is/N chairman/N of/N IBM/SC
c) Lou/N Gerstner/N is/V chairman/N of/P IBM/N

Figure 1: Three NLP tasks where a function is learned from a string to some hidden struc-
ture. In (a), the hidden structure is a parse tree. In (b), the hidden structure is an under-
lying sequence of states representing named entity boundaries (SP = Start person, CP =
Continue person, SC = Start company, N = No entity). In (c), the hidden states represent
part-of-speech tags (N = noun, V = verb, P = preposition, ...).

the mapping from an input string to its hidden structure. We refer to tasks that involve trees
as parsing problems, and tasks that involve hidden state sequences as tagging problems.
In many of these problems ambiguity is the key issue: although only one analysis is plau-
sible, there may be very many possible analyses. A common way to deal with ambiguity
is to use a stochastic grammar, for example a Probabilistic Context Free Grammar (PCFG)
for parsing, or a Hidden Markov Model (HMM) for tagging. Probabilities are attached to
rules in the grammar – context-free rules in the case of PCFGs, state transition probabili-
ties and state emission probabilities for HMMs. Rule probabilities are typically estimated
using maximum likelihood estimation, which gives simple relative frequency estimates.
Competing analyses for the same sentence are ranked using these probabilities. See [3] for
an introduction to these methods.
This paper proposes an alternative to generative models such as PCFGs and HMMs. Instead
of identifying parameters with rules of the grammar, we show how kernels can be used to
form representations that are sensitive to larger sub-structures of trees or state sequences.
The parameter estimation methods we describe are discriminative, optimizing a criterion
that is directly related to error rate.
While we use the parsing problem as a running example in this paper, kernels over NLP
structures could be used in many ways: for example, in PCA over discrete structures, or
in classification and regression problems. Structured objects such as parse trees are so
prevalent in NLP that convolution kernels should have many applications.

2 A Tree Kernel
The previous section introduced PCFGs as a parsing method. This approach essentially
counts the relative number of occurrences of a given rule in the training data and uses these
counts to represent its learned knowledge. PCFGs make some fairly strong independence
assumptions, disregarding substantial amounts of structural information. In particular, it
does not appear reasonable to assume that the rules applied at level $i$ in the parse tree are
unrelated to those applied at level $i+1$.
As an alternative we attempt to capture considerably more structural information by con-
sidering all tree fragments that occur in a parse tree. This allows us to capture higher order
dependencies between grammar rules. See figure 2 for an example. As in a PCFG the new
representation tracks the counts of single rules, but it is also sensitive to larger sub-trees.

Conceptually we begin by enumerating all tree fragments that occur in the training data
$1, 2, \ldots, n$. Note that this is done only implicitly. Each tree is represented by an $n$-dimen-
sional vector where the $i$'th component counts the number of occurrences of the $i$'th tree
fragment. Let us define the function $h_i(T)$ to be the number of occurrences of the $i$'th tree
fragment in tree $T$, so that $T$ is now represented as $h(T) = (h_1(T), h_2(T), \ldots, h_n(T))$.
a) [S [NP [N Jeff]] [VP [V ate] [NP [D the] [N apple]]]]

b) [NP D N]   [NP [D the] [N apple]]   [D the]   [N apple]   [NP [D the] N]   [NP D [N apple]]
Figure 2: a) An example tree. b) The sub-trees of the NP covering the apple. The tree in
(a) contains all of these sub-trees, and many others. We define a sub-tree to be any sub-
graph which includes more than one node, with the restriction that entire (not partial) rule
productions must be included. For example, the fragment [NP [D the ]] is excluded
because it contains only part of the production NP → D N.

Note that $n$ will be huge (a given tree will have a number of subtrees that is exponential in
its size). Because of this we would like to design algorithms whose computational complexity
does not depend on $n$.
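To make the representation concrete, the following sketch enumerates the tree fragments of a small tree (the tree of figure 2a) explicitly and counts them. The tree encoding, the helper names, and the brute-force enumeration are our own illustrative choices rather than part of the method proposed here; this is exactly the exponential computation that the kernel introduced below avoids.

from dataclasses import dataclass
from itertools import product
from collections import Counter
from typing import Tuple

@dataclass(frozen=True)
class Node:
    label: str
    children: Tuple["Node", ...] = ()

def fragments_rooted_at(n):
    # All fragments rooted at n: the entire production at n, with each child
    # either stopped at its label or expanded by one of its own fragments.
    if not n.children:                       # a word: no production below it
        return []
    options = [[Node(c.label)] + fragments_rooted_at(c) for c in n.children]
    return [Node(n.label, combo) for combo in product(*options)]

def all_fragments(t):
    # Multiset of fragments occurring anywhere in t: the explicit version of h(T).
    counts = Counter()
    stack = [t]
    while stack:
        n = stack.pop()
        counts.update(fragments_rooted_at(n))
        stack.extend(n.children)
    return counts

# The tree of figure 2a: [S [NP [N Jeff]] [VP [V ate] [NP [D the] [N apple]]]]
leaf = lambda tag, word: Node(tag, (Node(word),))
tree = Node("S", (Node("NP", (leaf("N", "Jeff"),)),
                  Node("VP", (leaf("V", "ate"),
                              Node("NP", (leaf("D", "the"), leaf("N", "apple")))))))
h = all_fragments(tree)
print(len(h), sum(h.values()))   # distinct fragments and total occurrences

Even for this small tree the number of fragments grows quickly with tree size, which is why the kernel described next never builds the vector $h(T)$ explicitly.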
Representations of this kind have been studied extensively by Bod [2]. However, the work
in [2] involves training and decoding algorithms that depend computationally on the number
of subtrees involved. (In training, a parameter is explicitly estimated for each sub-tree; in
searching for the best parse, calculating the score for a parse in principle requires summing
over an exponential number of derivations underlying a tree, and in practice is approximated
using Monte-Carlo techniques.) The parameter estimation techniques described in [2] do not
correspond to maximum-likelihood estimation or a discriminative criterion: see [11] for
discussion. The methods we propose show that the score for a parse can be calculated in
polynomial time in spite of an exponentially large number of subtrees, and that efficient pa-
rameter estimation techniques exist which optimize discriminative criteria that have been
well-studied theoretically.
Goodman [9] gives an ingenious conversion of the model in [2] to an equivalent PCFG
whose number of rules is linear in the size of the training data, thus solving many of the
computational issues. An exact implementation of Bod’s parsing method is still infeasible,
but Goodman gives an approximation that can be implemented efficiently. However, the
method still suffers from the lack of justification of the parameter estimation techniques.
The key to our efficient use of this high dimensional representation is the definition of an
appropriate kernel. We begin by examining the inner product between two trees $T_1$ and
$T_2$ under this representation, $K(T_1, T_2) = h(T_1) \cdot h(T_2)$. To compute $K$ we first define
the set of nodes in trees $T_1$ and $T_2$ as $N_1$ and $N_2$ respectively. We define the indicator
function $I_i(n)$ to be $1$ if sub-tree $i$ is seen rooted at node $n$ and $0$ otherwise. It follows
that $h_i(T_1) = \sum_{n_1 \in N_1} I_i(n_1)$ and $h_i(T_2) = \sum_{n_2 \in N_2} I_i(n_2)$. The first step to efficient
computation of the inner product is the following property (which can be proved with some
simple algebra):

$$K(T_1, T_2) = \sum_i h_i(T_1)\, h_i(T_2) = \sum_i \sum_{n_1 \in N_1} \sum_{n_2 \in N_2} I_i(n_1)\, I_i(n_2) = \sum_{n_1 \in N_1} \sum_{n_2 \in N_2} \Delta(n_1, n_2)$$

where we define $\Delta(n_1, n_2) = \sum_i I_i(n_1)\, I_i(n_2)$. Next, we note that $\Delta(n_1, n_2)$ can be
computed in polynomial time, due to the following recursive definition:

- If the productions at $n_1$ and $n_2$ are different, $\Delta(n_1, n_2) = 0$.
- If the productions at $n_1$ and $n_2$ are the same, and $n_1$ and $n_2$ are pre-terminals, then
  $\Delta(n_1, n_2) = 1$. (Pre-terminals are nodes directly above words in the surface string, for
  example the N, V, and D symbols in figure 2.)
- Else if the productions at $n_1$ and $n_2$ are the same and $n_1$ and $n_2$ are not pre-terminals,

$$\Delta(n_1, n_2) = \prod_{j=1}^{nc(n_1)} \big( 1 + \Delta(ch(n_1, j), ch(n_2, j)) \big)$$

where $nc(n_1)$ is the number of children of $n_1$ in the tree; because the productions at $n_1$ and $n_2$
are the same, we have $nc(n_1) = nc(n_2)$. The $j$'th child-node of $n_1$ is $ch(n_1, j)$.

To see that this recursive definition is correct, note that $\Delta(n_1, n_2)$ simply counts the number
of common subtrees that are found rooted at both $n_1$ and $n_2$. The first two cases are trivially
correct. The last, recursive, definition follows because a common subtree for $n_1$ and $n_2$ can
be formed by taking the production at $n_1$ and $n_2$, together with a choice at each child of simply
taking the non-terminal at that child, or any one of the common sub-trees at that child.
Thus there are $\big( 1 + \Delta(ch(n_1, j), ch(n_2, j)) \big)$ possible choices at the $j$'th child. (Note
that a similar recursion is described by Goodman [9], Goodman's application being the
conversion of Bod's model [2] to an equivalent PCFG.)

It is clear from the identity $K(T_1, T_2) = \sum_{n_1 \in N_1} \sum_{n_2 \in N_2} \Delta(n_1, n_2)$, and the recursive definition
of $\Delta(n_1, n_2)$, that $K(T_1, T_2)$ can be calculated in $O(|N_1|\,|N_2|)$ time: the matrix of
$\Delta(n_1, n_2)$ values can be filled in, then summed. This can be a pessimistic estimate of
the runtime. A more useful characterization is that it runs in time linear in the number of
members $(n_1, n_2) \in N_1 \times N_2$ such that the productions at $n_1$ and $n_2$ are the same. In our
data we have found a typically linear number of nodes with identical productions, so that
most values of $\Delta$ are 0, and the running time is close to linear in the size of the trees.
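As a concrete illustration, here is a minimal Python sketch of the computation just described. The Node encoding is the same as in the earlier sketch (restated so the snippet is self-contained), and the helper names are our own; the memo dictionary plays the role of the matrix of $\Delta$ values.

from dataclasses import dataclass
from typing import Tuple

@dataclass(frozen=True)
class Node:
    label: str
    children: Tuple["Node", ...] = ()

def production(n):
    # The rule applied at n, e.g. ('NP', 'D', 'N') or ('D', 'the').
    return (n.label,) + tuple(c.label for c in n.children)

def is_preterminal(n):
    # A node directly above a word in the surface string.
    return len(n.children) == 1 and not n.children[0].children

def internal_nodes(t):
    # All nodes of the tree that have a production (i.e. excluding the words).
    if t.children:
        yield t
        for c in t.children:
            yield from internal_nodes(c)

def tree_kernel(t1, t2):
    # K(T1, T2) = sum over all node pairs of Delta(n1, n2).
    memo = {}

    def delta(n1, n2):
        key = (id(n1), id(n2))
        if key in memo:
            return memo[key]
        if production(n1) != production(n2):
            value = 0.0
        elif is_preterminal(n1):
            value = 1.0
        else:
            value = 1.0
            for c1, c2 in zip(n1.children, n2.children):
                value *= 1.0 + delta(c1, c2)
        memo[key] = value
        return value

    return sum(delta(n1, n2)
               for n1 in internal_nodes(t1)
               for n2 in internal_nodes(t2))

The memo dictionary holds at most $|N_1|\,|N_2|$ entries, one per node pair, which corresponds to the quadratic bound above; node pairs with different productions contribute 0 immediately, which is what makes the running time close to linear in practice.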
This recursive kernel structure, where a kernel between two objects is defined in terms
of kernels between their parts, is quite a general idea. Haussler [10] goes into some detail
describing which construction operations are valid in this context, i.e. which operations
maintain the essential Mercer conditions. This paper and previous work by Lodhi et al. [12]
examining the application of convolution kernels to strings provide some evidence that
convolution kernels may provide an extremely useful tool for applying modern machine
learning techniques to highly structured objects. The key idea here is that one may take
a structured object and split it up into parts. If one can construct kernels over the parts
then one can combine these into a kernel over the whole object. Clearly, this idea can be
extended recursively so that one only needs to construct kernels over the “atomic” parts of
a structured object. The recursive combination of the kernels over parts of an object retains
information regarding the structure of that object.
Several issues remain with the kernel we describe over trees and convolution kernels in
general. First, the value of $K(T_1, T_2)$ will depend greatly on the size of the trees $T_1, T_2$.
One may normalize the kernel by using

$$K'(T_1, T_2) = \frac{K(T_1, T_2)}{\sqrt{K(T_1, T_1)\, K(T_2, T_2)}}$$

which also satisfies the essential Mercer conditions. Second, the value of the kernel when
applied to two copies of the same tree can be extremely large (in our experiments on the
order of $10^6$) while the value of the kernel between two different trees is typically much
smaller (in our experiments the typical pairwise comparison is of order 100). By analogy
with a Gaussian kernel we say that the kernel is very peaked. If one constructs a model
which is a linear combination of trees, as one would with an SVM [6] or the perceptron,
the output will be dominated by the most similar tree and so the model will behave like
a nearest neighbor rule. There are several possible solutions to this problem. Following
Haussler [10] we may radialize the kernel; however, it is not always clear that the result is
still a valid kernel. Radializing did not appear to help in our experiments.
These problems motivate two simple modifications to the tree kernel. Since there will
be many more tree fragments of larger size – say depth four versus depth three – and
consequently less training data, it makes sense to downweight the contribution of larger
tree fragments to the kernel. The first method for doing this is to simply restrict the depth
of the tree fragments we consider. (This can be achieved using a modified dynamic
programming table in which $\Delta(n_1, n_2, d)$ stores the number of common subtrees at nodes
$n_1, n_2$ of depth $d$ or less; the recursive definition of $\Delta$ can be modified appropriately.)
The second method is to scale the relative importance of tree fragments with their size.
This can be achieved by introducing a parameter $0 < \lambda \le 1$, and modifying the base case
and recursive case of the definitions of $\Delta$ to be, respectively,

$$\Delta(n_1, n_2) = \lambda \quad \text{and} \quad \Delta(n_1, n_2) = \lambda \prod_{j=1}^{nc(n_1)} \big( 1 + \Delta(ch(n_1, j), ch(n_2, j)) \big).$$

This corresponds to a modified kernel $K(T_1, T_2) = \sum_i \lambda^{\mathrm{size}_i}\, h_i(T_1)\, h_i(T_2)$, where
$\mathrm{size}_i$ is the number of rules in the $i$'th fragment. This kernel downweights the contribution
of tree fragments exponentially with their size.
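Continuing the sketch above (and reusing its Node, production, is_preterminal and internal_nodes definitions), the scaled and normalized variants might look as follows; the parameter name lam and the function names are our own.

import math

def scaled_tree_kernel(t1, t2, lam=0.5):
    # Tree kernel with the lambda-downweighted base case and recursive case.
    memo = {}

    def delta(n1, n2):
        key = (id(n1), id(n2))
        if key in memo:
            return memo[key]
        if production(n1) != production(n2):
            value = 0.0
        elif is_preterminal(n1):
            value = lam                      # base case: Delta = lambda
        else:
            value = lam                      # recursive case: lambda * product(1 + Delta)
            for c1, c2 in zip(n1.children, n2.children):
                value *= 1.0 + delta(c1, c2)
        memo[key] = value
        return value

    return sum(delta(n1, n2)
               for n1 in internal_nodes(t1)
               for n2 in internal_nodes(t2))

def normalized_kernel(t1, t2, kernel=scaled_tree_kernel):
    # K'(T1, T2) = K(T1, T2) / sqrt(K(T1, T1) * K(T2, T2))
    return kernel(t1, t2) / math.sqrt(kernel(t1, t1) * kernel(t2, t2))

Setting lam = 1 recovers the unscaled kernel; smaller values shrink the contribution of large shared fragments and make the kernel less peaked.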
It is straightforward to design similar kernels for tagging problems (see figure 1) and for
another common structure found in NLP, dependency structures. See [5] for details. In the
tagging kernel, the implicit feature representation tracks all features consisting of a subse-
quence of state labels, each with or without an underlying word. For example, the paired se-
quence Lou/SP Gerstner/CP is/N chairman/N of/N IBM/SC would include features such as
(SP CP), (SP Gerstner/CP N), (SP CP is/N N of/N),
and so on.
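For illustration only, the following sketch enumerates this feature space explicitly for a short paired sequence, under the assumption that the subsequences are contiguous spans of the state sequence (consistent with the examples above) of length at least two; the exact feature definition and the efficient recursive computation of the kernel are given in [5], and the function name and the length restriction here are our own choices.

from collections import Counter
from itertools import combinations

def tagging_features(words, states, min_len=2):
    # Every contiguous span of states, with each position either bare (state
    # only) or lexicalised (word/state).
    feats = Counter()
    n = len(states)
    for i in range(n):
        for j in range(i + min_len, n + 1):          # span of positions [i, j)
            span = range(i, j)
            for r in range(len(span) + 1):
                for lex in combinations(span, r):    # positions keeping their word
                    feats[tuple(f"{words[k]}/{states[k]}" if k in lex else states[k]
                                for k in span)] += 1
    return feats

words  = "Lou Gerstner is chairman of IBM".split()
states = ["SP", "CP", "N", "N", "N", "SC"]
fs = tagging_features(words, states)
print(("SP", "CP") in fs)                        # True
print(("SP", "Gerstner/CP", "N") in fs)          # True
print(("SP", "CP", "is/N", "N", "of/N") in fs)   # True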

3 Linear Models for Parsing and Tagging

This section formalizes the use of kernels for parsing and tagging problems. The method
is derived by the transformation from ranking problems to a margin-based classification
problem in [8]. It is also related to the Markov Random Field methods for parsing suggested
in [13], and the boosting methods for parsing in [4]. We consider the following set-up:
- Training data is a set of example input/output pairs. In parsing we would have training
examples $(s_i, t_i)$ where each $s_i$ is a sentence and each $t_i$ is the correct tree for that sentence.

- We assume some way of enumerating a set of candidates for a particular sentence. We
use $x_{ij}$ to denote the $j$'th candidate for the $i$'th sentence in training data, and
$\mathcal{C}(s_i) = \{x_{i1}, x_{i2}, \ldots\}$ to denote the set of candidates for $s_i$. (A context-free grammar – perhaps
taken straight from the training examples – is one way of enumerating candidates. Another
choice is to use a hand-crafted grammar, such as the LFG grammar in [13], or to take the
most probable parses from an existing probabilistic parser, as in [4].)

- Without loss of generality we take $x_{i1}$ to be the correct parse for $s_i$ (i.e., $x_{i1} = t_i$).

- Each candidate $x_{ij}$ is represented by a feature vector $h(x_{ij})$ in the space $\mathbb{R}^n$. The param-
eters of the model are also a vector $w \in \mathbb{R}^n$. We then define the “ranking score” of each
example as $w \cdot h(x_{ij})$. This score is interpreted as an indication of the plausibility of the
candidate. The output of the model on a training or test example $s$ is $\arg\max_{x \in \mathcal{C}(s)} w \cdot h(x)$.
When considering approaches to training the parameter vector $w$, note that a ranking func-
tion that correctly ranked the correct parse above all competing candidates would satisfy
the conditions

$$w \cdot \big( h(x_{i1}) - h(x_{ij}) \big) > 0 \quad \text{for all } i, \text{ for } j = 2, \ldots, n_i.$$

It is simple to modify the Perceptron and Support Vector Machine algorithms to treat this
problem.
Define: $F(x) = \sum_{i, j \ge 2} \alpha_{i,j} \big( K(x_{i1}, x) - K(x_{ij}, x) \big)$
Initialization: Set dual parameters $\alpha_{i,j} = 0$
For $i = 1, \ldots, n$, $j = 2, \ldots, n_i$:
    If $F(x_{i1}) > F(x_{ij})$ do nothing, Else $\alpha_{i,j} = \alpha_{i,j} + 1$

Figure 3: The perceptron algorithm for ranking problems.

Depth          1       2       3       4       5       6
Score          -       -       -       -       -       -
Improvement    -       -       -       -       -       -

Table 1: Score shows how the parse score varies with the maximum depth of sub-tree
considered by the perceptron. Improvement is the relative reduction in error in comparison
to the PCFG, which scored 74%. The numbers reported are the mean and standard deviation
over the 10 development sets.

For example, the SVM optimization problem (hard margin version) is to find the $w^*$ which
minimizes $\|w\|^2$ subject to the constraints $w \cdot \big( h(x_{i1}) - h(x_{ij}) \big) \ge 1$ for all $i$, for
$j = 2, \ldots, n_i$. Rather than explicitly calculating $w^*$, the perceptron algorithm and Support
Vector Machines can be formulated as a search for “dual parameters” $\alpha_{i,j}$ which determine
the optimal weights

$$w^* = \sum_{i, j \ge 2} \alpha_{i,j} \big( h(x_{i1}) - h(x_{ij}) \big) \qquad (1)$$

(we use $j \ge 2$ as shorthand for $j = 2, \ldots, n_i$). It follows that the score of a parse can be
calculated using the dual parameters, and inner products between feature vectors, without
having to explicitly deal with feature or parameter vectors in the space $\mathbb{R}^n$:

$$w^* \cdot h(x) = \sum_{i, j \ge 2} \alpha_{i,j} \big( h(x_{i1}) \cdot h(x) - h(x_{ij}) \cdot h(x) \big)$$

where each inner product can be computed through the kernel $K$.
For example, see figure 3 for the perceptron algorithm applied to this problem.
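A direct transcription of figure 3 into code might look as follows. This is a sketch of the plain (unvoted) perceptron only; it assumes candidate lists with the correct parse stored first, and a kernel function such as the tree kernel of section 2.

def perceptron_rank(candidates, kernel, epochs=1):
    # candidates[i] is the list of candidates for the i'th sentence,
    # with candidates[i][0] playing the role of the correct parse x_{i1}.
    alpha = {}                                    # dual parameters alpha_{i,j}

    def score(x):
        # F(x) = sum_{i,j} alpha_{i,j} (K(x_{i1}, x) - K(x_{ij}, x))
        return sum(a * (kernel(candidates[i][0], x) - kernel(candidates[i][j], x))
                   for (i, j), a in alpha.items())

    for _ in range(epochs):
        for i, cands in enumerate(candidates):
            for j in range(1, len(cands)):
                # If the correct parse is not ranked above candidate j, update.
                if score(cands[0]) <= score(cands[j]):
                    alpha[(i, j)] = alpha.get((i, j), 0) + 1
    return alpha, score

The voted perceptron used in the experiments below additionally stores the intermediate parameter settings and lets them vote on test examples; see [7] for details.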

4 Experimental Results
To demonstrate the utility of convolution kernels for natural language we applied our tree
kernel to the problem of parsing the Penn treebank ATIS corpus [14]. We split the treebank
randomly into a training set of size 800, a development set of size 200 and a test set of size
336. This was done 10 different ways to obtain statistically significant results. A PCFG
was trained on the training set, and a beam search was used to give a set of parses, with
PCFG probabilities, for each of the sentences. We applied a variant of the voted perceptron
algorithm [7], which is a more robust version of the original perceptron algorithm with
performance similar to that of SVMs. The voted perceptron can be kernelized in the same
way that SVMs can but it can be considerably more computationally efficient.
We generated a ranking problem by having the PCFG generate its top 100 candidate parse
trees for each sentence. The voted perceptron was applied, using the tree kernel described
previously, to this re-ranking problem. It was trained on 20 trees selected randomly from
the top 100 for each sentence and had to choose the best candidate from the top 100 on the
test set. We tested the sensitivity to two parameter settings: first, the maximum depth of
sub-tree examined, and second, the scaling factor used to down-weight deeper trees. For
each value of the parameters we trained on the training set and tested on the development
set. We report the results averaged over the development sets in Tables 1 and 2.
Scale    0.1     0.2     0.3     0.4     0.5     0.6     0.7     0.8     0.9
Score     -       -       -       -       -       -       -       -       -
Imp.      -       -       -       -       -       -       -       -       -

Table 2: Score shows how the parse score varies as the scaling factor for deeper sub-trees
is varied. Imp. is the relative reduction in error in comparison to the PCFG, which scored
74%. The numbers reported are the mean and standard deviation over the 10 development
sets.

We report a parse score which combines precision and recall. Define $c_i$ to be the number
of correctly placed constituents in the $i$'th test tree, $p_i$ to be the number of constituents
proposed, and $g_i$ to be the number of constituents in the true parse tree. A constituent is
defined by a non-terminal label and its span. The score is then

$$100\% \times \frac{2 \sum_i c_i}{\sum_i p_i + \sum_i g_i}.$$

The precision and recall on the $i$'th parse are $c_i / p_i$ and $c_i / g_i$ respectively. The score is thus
an average of precision and recall, weighted by the size of the trees $g_i$. We also give relative
improvements over the PCFG scores. If the PCFG score is $x$ and the perceptron score is $y$,
the relative improvement is

$$100 \times \frac{y - x}{100 - x},$$

i.e., the relative reduction in error.
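In code, the evaluation measure and the relative improvement are just the following arithmetic (a sketch; the function names are ours).

def parse_score(c, p, g):
    # c[i], p[i], g[i]: correct, proposed and gold constituent counts for the
    # i'th test tree; returns the combined precision/recall score in percent.
    return 100.0 * 2 * sum(c) / (sum(p) + sum(g))

def relative_improvement(pcfg_score, model_score):
    # Relative reduction in error with respect to the PCFG baseline.
    return 100.0 * (model_score - pcfg_score) / (100.0 - pcfg_score)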
We finally used the development set for cross-validation to choose the best parameter set-
tings for each split. We used the best parameter settings (on the development sets) for each
split to train on both the training and development sets, then tested on the test set. This gave
a relative goodness score of ... with the best choice of maximum depth and a score of ...
with the best choice of scaling factor. The PCFG scored ... on the test data.
All of these results were obtained by running the perceptron through the training data only
once. As has been noted previously by Freund and Schapire [7], the voted perceptron often
obtains better results when run multiple times through the training data. Running through
the data twice with a maximum depth of 3 yielded a relative goodness score of ...,
while using a larger number of iterations did not improve the results significantly.
In summary we observe that in these simple experiments the voted perceptron and an appro-
priate convolution kernel can obtain promising results. However, there are other methods
which perform considerably better than a PCFG for NLP parsing (see [3] for an overview);
future work will investigate whether the kernels in this paper give performance gains over
these methods.

5 A Compressed Representation
When used with algorithms such as the perceptron, convolution kernels may be even more
computationally attractive than the traditional radial basis or polynomial kernels. The linear
combination of parse trees constructed by the perceptron algorithm can be viewed as a
weighted forest. One may then search for subtrees in this weighted forest that occur more
than once. Given a linear combination of two trees $a_1 T_1 + a_2 T_2$ which contain a common
subtree, we may construct a smaller weighted acyclic graph, in which the common subtree
occurs only once and has weight $a_1 + a_2$. This process may be repeated until an arbitrary linear
combination of trees is collapsed into a weighted acyclic graph in which no subtree occurs
more than once. The perceptron may now be evaluated on a new tree by a straightforward
generalization of the tree kernel to weighted acyclic graphs of the form produced by this
procedure.
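As a rough sketch of the collapsing step only (the Node encoding is the one used in section 2, the structural key is our own construction, and evaluating the perceptron over the resulting graph would additionally require generalizing the $\Delta$ recursion, which we do not show):

def collapse(weighted_trees):
    # weighted_trees: list of (tree, weight) pairs, e.g. the support trees of a
    # trained perceptron. Each distinct subtree is stored once, with the weights
    # of all the trees it occurs in accumulated on the shared entry.
    shared = {}      # structural key -> representative node
    weight = {}      # structural key -> accumulated weight

    def key_of(n):
        return (n.label, tuple(key_of(c) for c in n.children))

    def visit(n, w):
        k = key_of(n)
        shared.setdefault(k, n)
        weight[k] = weight.get(k, 0.0) + w
        for c in n.children:
            visit(c, w)

    for tree, w in weighted_trees:
        visit(tree, w)
    return shared, weight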
Given the nature of our data – the parse trees have a high branching factor, the words are
chosen from a dictionary that is relatively small in comparison to the size of the training
data, and are drawn from a very skewed distribution, and the ancestors of leaves are part
of speech tags – there are a relatively small number of subtrees in the lower levels of the
parse trees that occur frequently and make up the majority of the data. It appears that the
approach we have described above should save a considerable amount of computation. This
is something we intend to explore further in future work.

6 Conclusions
In this paper we described how convolution kernels can be used to apply standard kernel
based algorithms to problems in natural language. Tree structures are ubiquitous in natu-
ral language problems and we illustrated the approach by constructing a convolution kernel
over tree structures. The problem of parsing English sentences provides an appealing exam-
ple domain and our experiments demonstrate the effectiveness of kernel-based approaches
to these problems. Convolution kernels combined with such techniques as kernel PCA and
spectral clustering may provide a computationally attractive approach to many other prob-
lems in natural language processing. Unfortunately, we are unable to expand on the many
potential applications in this short note; however, many of these issues are spelled out in a
longer Technical Report [5].

References
[1] Aizerman, M., Braverman, E., and Rozonoer, L. (1964). Theoretical Foundations of the Potential
Function Method in Pattern Recognition Learning. Automation and Remote Control, 25:821–837.
[2] Bod, R. (1998). Beyond Grammar: An Experience-Based Theory of Language. CSLI Publica-
tions/Cambridge University Press.
[3] Charniak, E. (1997). Statistical techniques for natural language parsing. In AI Magazine, Vol. 18,
No. 4.
[4] Collins, M. (2000). Discriminative Reranking for Natural Language Parsing. Proceedings of the
Seventeenth International Conference on Machine Learning. San Francisco: Morgan Kaufmann.
[5] Collins, M. and Duffy, N. (2001). Parsing with a Single Neuron: Convolution Kernels for Natural
Language Problems. Technical report UCSC-CRL-01-01, University of California at Santa Cruz.
[6] Cortes, C. and Vapnik, V. (1995). Support–Vector Networks. Machine Learning, 20(3):273–297.
[7] Freund, Y. and Schapire, R. (1999). Large Margin Classification using the Perceptron Algorithm.
In Machine Learning, 37(3):277–296.
[8] Freund, Y., Iyer, R., Schapire, R. E., & Singer, Y. (1998). An efficient boosting algorithm for com-
bining preferences. In Machine Learning: Proceedings of the Fifteenth International Conference.
San Francisco: Morgan Kaufmann.
[9] Goodman, J. (1996). Efficient algorithms for parsing the DOP model. In Proceedings of the
Conference on Empirical Methods in Natural Language Processing (EMNLP 96), pages 143-152.
[10] Haussler, D. (1999). Convolution Kernels on Discrete Structures. Technical report, University
of California at Santa Cruz.
[11] Johnson, M. The DOP estimation method is biased and inconsistent. To appear in Computa-
tional Linguistics.
[12] Lodhi, H., Cristianini, N., Shawe-Taylor, J., and Watkins, C. (2001). Text Classification using
String Kernels. To appear in Advances in Neural Information Processing Systems 13, MIT Press.
[13] Johnson, M., Geman, S., Canon, S., Chi, S., & Riezler, S. (1999). Estimators for stochastic
“unification-based” grammars. In Proceedings of the 37th Annual Meeting of the Association for
Computational Linguistics. San Francisco: Morgan Kaufmann.
[14] Marcus, M., Santorini, B., & Marcinkiewicz, M. (1993). Building a large annotated corpus of
English: the Penn Treebank. Computational Linguistics, 19, 313-330.
[15] Schölkopf, B., Smola, A., and Müller, K.-R. (1999). Kernel principal component analysis. In B.
Schölkopf, C. J. C. Burges, and A. J. Smola, editors, Advances in Kernel Methods – SV Learning,
pages 327-352. MIT Press, Cambridge, MA.
[16] Watkins, C. (2000). Dynamic alignment kernels. In A.J. Smola, P.L. Bartlett, B. Schölkopf, and
D. Schuurmans, editors, Advances in Large Margin Classifiers, pages 39-50, MIT Press.
