
Development of a Probabilistic Methodology for Damage Stability regarding Bottom Damages

Felix-Ingo Kehren, Stefan Krüger


Hamburg University of Technology, Institute of Ship Design and Ship Safety Hamburg, Germany

Abstract
Today, damage stability calculations are carried out in three major steps: first a damage generation, second a system response and third an evaluation of the system response; the generation of damages in particular is highly time consuming. This paper suggests a methodology to automate the damage generation by using Monte Carlo simulations. The methodology is exemplified with bottom damages and is based on the probability density functions given in the IMO/MARPOL oil-outflow regulations. The paper shows on the one hand the statistical inconsistency of these functions, but demonstrates on the other hand that damage generation with Monte Carlo simulations is applicable with very short calculation times. That opens the opportunity to quickly compare different subdivision arrangements even in an early design stage of a vessel. Furthermore, this methodology can be extended to side damages, because the automation of the damage generation is the focus.

The procedure, or rather the structure, of each damage calculation depends closely on the chosen subdivision. Consequently even minor, or at most medium, changes in the subdivision may require a completely new and different approach due to the distribution of damage triangles and their shadow areas. Comparing alternative subdivision concepts is therefore laborious. Damage calculations are carried out in three major steps, as shown in Fig. 1. First of all, the damages are generated. Several single damages are integrated into damage (leak) groups. These damages are then transferred to further calculations to be checked against the user's own safety criteria or, in most cases, against the current rules. Both have in common that the behavior of the subdivision, which can consist of compartments or tanks for instance, can be regarded as the response of the system.


Keywords
Monte Carlo simulation; damage stability; damage safety; probability; probabilistic damage calculation; bottom damage; MARPOL

Introduction
The motivation of this paper is the aim of automated damage generation for use in damage stability or oil outflow calculations. Despite the new harmonized damage stability rules, which enter into force in 2009, damage calculations, including the oil outflow calculations, are and will remain very time- and labor-intensive. Various reasons cause this complexity: the model of rooms, compartments and the whole subdivision has to be highly detailed, which is a major disadvantage, especially in the early design stage of a vessel.


Fig. 1: The three major steps in damage calculation: damage generation, system response and evaluation

That automation of the damage generation is possible shall be demonstrated with bottom damages, for two reasons. First, under the harmonized stability rules, side damages are calculated probabilistically in six cases: at three different drafts, each on starboard and on port side. For bottom damage, however, only a minimum height of the double bottom has to be maintained, and bottom damages affecting the double bottom are not taken into account, which is still a deterministic concept. On the way towards a fully probabilistic approach to the damage safety of merchant vessels, it may be useful to have alternative methods for bottom damages. The long-term goal is comparable safety levels and, in a second step, safety standards; today this is not given. The second reason is the statistical data. Within the Interim guidelines for the approval of alternative methods of design and construction of oil tankers under regulation 13F(5) of Annex I of MARPOL 73/78 (IMO; MARPOL 73/78; Regulation 13F(5); 2002), probability density functions (pdf) are provided. These functions are the basis of the Monte Carlo simulations applied in this methodology. But obtaining sufficient and reliable statistical data on the frequency, the location and the extent of bottom damages is not easy, because in a statistical sense the absolute numbers of bottom damages are small. The statistics given in (IMO; MARPOL 73/78; Regulation 13F(5); 2002) are based on only 52 collisions and 63 groundings (IMO; MARPOL 73/78; Explanatory Notes Regulation I/21). Especially minor accidents are not documented well enough in terms of the exact position and the extent of the damage, which is the major difficulty. Nevertheless, the basis of a statistical approach is normally the probability density function prepared from reviewed incidents.

Fig. 2: Longitudinal Location

Fig. 3: Transversal Location

Methodology
Statistical Data of Bottom Damages

To describe bottom damages, several options are available. One is to separate the calculation of the location of a damage from the damage extent. The projected area of a vessel in the top view is planar, and every point in this plane can be described by a longitudinal and a transversal coordinate. In (IMO; MARPOL 73/78; Regulation 13F(5); 2002) these coordinates are called Longitudinal Location and Transversal Location. Analogously, the extent of the damage cuboid is described spatially by three dimensions: Longitudinal Extent, Transversal Extent and, since bottom damages are regarded, Vertical Penetration. Integrating a probability density function yields the corresponding probability distribution function; note that the area below each probability density function has to equal 1. Fig. 2 to Fig. 6 show these five probability density functions and their corresponding probability distribution functions.

Fig. 4: Longitudinal Extent

Fig. 5: Transversal Extent

From these five values of location and extent, drawn independently and weighted by their probability distribution functions, a bounding cuboid of the damage results.

Fig. 6: Vertical Penetration



Monte Carlo Simulation

The actual methodology of this simulation starts with integrating the probability density functions to obtain the probability distribution functions. Normally, damage calculations end up with the attained subdivision index and finally with the required GM or, the other way round, with a limiting KGmax. If the attained index is insufficient to satisfy the required index, it is very difficult to analyze the impact of small changes in all six required damage cases of the new harmonized damage calculations: three drafts, each on starboard and port side. This generates the desire for an automated, fast and highly reliable damage stability calculation procedure. Strictly speaking, the way damage calculations are carried out in the early design stage has room for improvement. For a given subdivision, the probability of damaging a compartment is calculated, and further on the worst case of a damage group is determined. Unfortunately, the Explanatory Notes on this are not as clear as they should be. In other words, one is seeking damages which damage a certain compartment. Actually it is much more sensible to generate a damage and then examine which compartments were damaged by this particular damage. This is the same way a vessel becomes damaged at sea: by outer circumstances a damage occurs and opens one or more compartments. One should follow the pattern of real accidents to achieve a process closer to the actual damages. The methodology for damage stability regarding bottom damages presented in this paper follows this approach. That opens the possibility to evaluate different designs in an early design stage regarding damage stability with short calculation times, using an automated Monte Carlo simulation to quantify the attained survivability index as a benchmark.
For each probability distribution function, one pseudo random number, equally distributed on the closed interval [0;1], is generated and entered into the function on the ordinate; the corresponding value on the abscissa results. These values are subsequently scaled back to the correct dimensions: the length between perpendiculars, the moulded breadth and, for the penetration, the ship's depth.
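The inverse-transform step described above can be sketched as follows. This is a minimal illustration, not the paper's E4 implementation; the triangular density used here is only a stand-in, since the actual MARPOL density functions are given in Fig. 2 to Fig. 6.

```python
import bisect
import random

def make_inverse_cdf(pdf, xs):
    """Numerically integrate a density over the grid xs (trapezoidal rule)
    and return a function mapping u in [0, 1] to the corresponding x."""
    cdf = [0.0]
    for a, b in zip(xs, xs[1:]):
        cdf.append(cdf[-1] + 0.5 * (pdf(a) + pdf(b)) * (b - a))
    total = cdf[-1]                      # normalize so the area equals 1
    cdf = [c / total for c in cdf]

    def inverse(u):
        i = bisect.bisect_left(cdf, u)   # first grid point with cdf >= u
        if i == 0:
            return xs[0]
        # linear interpolation between the bracketing grid points
        frac = (u - cdf[i - 1]) / (cdf[i] - cdf[i - 1])
        return xs[i - 1] + frac * (xs[i] - xs[i - 1])

    return inverse

# Placeholder density (NOT the MARPOL one): a triangle peaking at x = 0.
tri_pdf = lambda x: 2.0 * (1.0 - x)
grid = [i / 1000 for i in range(1001)]
sample = make_inverse_cdf(tri_pdf, grid)

rng = random.Random(19937)
values = [sample(rng.random()) for _ in range(20000)]
```

The same sampler would be fed, in turn, each of the five dimensionless MARPOL distributions; the results are then scaled by length, breadth or depth as described above.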

Fig. 7: Flow-Chart of the Methodology of bottom damages with Monte Carlo simulations

In the case of (IMO; MARPOL 73/78; Regulation 13F(5); 2002), the location of a centre point in longitudinal and transversal direction is determined. This centre point is the middle of the cuboid in the top view, i.e. the crossing of its diagonals. The next step is generating the extent of the cuboid. The longitudinal extent is given by the random number and the corresponding probability distribution function, subsequently scaled back to the correct dimension, the length between perpendiculars: one half of the extent forward and one half aft of the centre point, analogous to the transversal extent of the cuboid. For the vertical penetration, a random number of the interval [0;1] is generated, which corresponds to at most 30% of the vessel's depth. This is sensible because, from a physical point of view, a deeper penetration will not be exceeded in terms of ship safety. By this technique a damage cuboid is created. The location and the extent of this cuboid are exported to a CAD model of the vessel under consideration in the ship design system E4, which provides the hull geometry, subdivision and hydrostatic calculation tools necessary to calculate the ri-factor. Now it is checked which compartments are damaged by the damage cuboid. This combination of damaged compartments is transferred to a database, which is checked in order to achieve a level of statistical confidence, set by the user or possibly fixed for all cases. If this level of confidence is not reached, another damage cuboid is generated. In this way a probability of damaging a certain damage combination, the pi-factor, can be calculated. Independently from calculating the pi-factor, an ri-factor is calculated for each damage combination that appears. The ri-factor may be the survivability or the oil outflow, seen as the response of the system, i.e. the ship's performance. The user therefore has to choose the criterion he wants to apply, such as a certain minimum floating condition of the vessel. To process the damage cuboid data and determine which compartments are damaged, the following data is obtained in E4 for further calculation:

The direct follow-up question is how many loops of this procedure are necessary to reach a certain level of reliability and meaningfulness of the survivability index. Obviously, the number of generated cuboids needed depends on the level of confidence.
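The counting loop behind the pi-factor might be sketched as follows. The three longitudinal zones are purely hypothetical stand-ins for a real E4 subdivision model, and the uniform location/extent draws replace the actual MARPOL distributions; only the bookkeeping of damaged-compartment combinations is the point here.

```python
import random

# Hypothetical longitudinal subdivision: three zones on the dimensionless
# length [0, 1]. The real method intersects a 3-D cuboid with the E4 model.
ZONES = {"fore_peak": (0.85, 1.00), "cargo": (0.15, 0.85), "aft_peak": (0.00, 0.15)}

def damaged_zones(x_min, x_max):
    """Return the (sorted) names of all zones the damage interval overlaps."""
    return tuple(sorted(n for n, (a, b) in ZONES.items() if x_min < b and x_max > a))

rng = random.Random(0)
counts = {}
n_samples = 50000
for _ in range(n_samples):
    loc = rng.random()                         # uniform stand-in for the MARPOL pdf
    extent = rng.random() * min(loc, 1 - loc)  # dependent extent, cf. eq. (17)
    combo = damaged_zones(loc - 0.5 * extent, loc + 0.5 * extent)
    counts[combo] = counts.get(combo, 0) + 1

# pi-factor estimate: relative frequency of each damaged-compartment combination
p_factors = {combo: c / n_samples for combo, c in counts.items()}
```

Each combination's relative frequency converges to its pi-factor as the number of generated cuboids grows, which is exactly the question of the required sample size addressed next.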

Level of confidence and number of generated damage cuboids

To answer the question of how many damage cuboids need to be generated and what level of confidence is necessary for this methodology, it has to be examined which type of distribution function underlies this statistical process. The following key aspects have to be regarded:

There is a fixed number of experiments subjected to the random process; let n be this number. The event X has the probability p:

$P(X) = p_i$ with $i = 1, \dots, n$   (1)
Hullform Subdivision

The result of each experiment can be divided into two groups, success and failure. In terms of this methodology, failure means the compartment is not damaged and success means the compartment is damaged:
- Intact floating condition

After the level of confidence is satisfied, the $(p_i r_i)$-factor can be calculated for each damage combination. Finally, the evaluation and interpretation of the calculation results can begin.
$X_i = \begin{cases} 0 & \text{failure} \\ 1 & \text{success} \end{cases}$   (2)

The probability of success is equal in every experiment carried out; p is the probability of success and 1 − p its complement:

$P(X_i = 1) = p \quad \text{and} \quad P(X_i = 0) = 1 - p$   (3/4)


Random Number Generator

The backbone of a reliable random process is a sufficient, independent generator of random numbers. This methodology needs equally distributed random numbers from the closed interval [0;1]. True random numbers can only be generated by separate devices, but this process shall work on a computer without further equipment, so a mathematical algorithm is run on the computer, delivering so-called pseudo random numbers. It is not simple to generate sufficiently independent random numbers by an algorithm, but a well-known and widely proven random number generator is the Mersenne Twister 19937 (Matsumoto; Nishimura; 1998), developed by Matsumoto and Nishimura. It is named after the French monk Mersenne of the 17th century, who worked mainly on number theory; the generator's period, 2^19937 − 1, is a Mersenne prime. The random number generator can be used either with always new random numbers or with a default value, a seed, to start the random generation so that it always yields the same random numbers. This can be necessary when a certain calculation is to be reproduced; both options are possible.
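For illustration, Python's standard random module implements exactly this MT19937 generator, so the seeded-versus-unseeded behaviour described above can be demonstrated directly:

```python
import random

# Two generators seeded identically reproduce the same damage-sampling stream,
# which is what the "default value (seed)" option described above provides.
run_a = random.Random(19937)   # random.Random uses the MT19937 algorithm
run_b = random.Random(19937)
stream_a = [run_a.random() for _ in range(5)]
stream_b = [run_b.random() for _ in range(5)]
assert stream_a == stream_b    # identical seed -> identical pseudo random numbers

# An unseeded generator draws a fresh, effectively unrepeatable stream instead.
fresh = random.Random()
```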

The $X_i$ are independent of each other.

X is exactly the sum of the $X_i$:

$X = \sum_{i=1}^{n} X_i$   (5)

These statements point to a binomial distribution, whose probability function is:

$P(X = x) = f(x; n, p) = \binom{n}{x}\, p^x (1 - p)^{n - x}$   (6)
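Equation (6) can be evaluated directly; as a quick consistency check, the probabilities over all possible outcomes must cumulate to 1, analogous to the area requirement stated above for the density functions:

```python
from math import comb

def binom_pmf(x, n, p):
    """Probability of exactly x successes in n independent trials, eq. (6)."""
    return comb(n, x) * p**x * (1 - p)**(n - x)

# The pmf over all outcomes x = 0..n must cumulate to 1.
total = sum(binom_pmf(x, 20, 0.05) for x in range(21))
```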

The simplest way of constructing a confidence interval is the Tschebyscheff confidence interval, based on an inequality (Ostvik; Tellkamp; 2002):

$P\left(p \in [\hat{p} - \varepsilon,\; \hat{p} + \varepsilon]\right) = 1 - \alpha$   (7)

Here $1 - \alpha$ is the level of confidence. The half-length $\varepsilon$ of the interval can be calculated to

$\varepsilon = z_{1-\alpha/2} \sqrt{\dfrac{p(1 - p)}{n}}$   (8)

where $z_{1-\alpha/2}$ is the $(1 - \alpha/2)$-quantile of the standard normal distribution, and by this, n becomes

$n \ge \dfrac{z_{1-\alpha/2}^2\, p(1 - p)}{\varepsilon^2}$   (9)

With no a-priori information about p, the size of the random sample has to be determined with (Cramer; Tellkamp; 2002) to

$n \ge \dfrac{z_{1-\alpha/2}^2}{4\, \varepsilon^2}$   (10)
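The worst-case sample-size bound (assuming the normal-quantile form given above) is a one-liner; with z = 2.576 for 99% confidence and a half-width of 0.005 it yields roughly 66,000 required damage cuboids:

```python
from math import ceil

def required_samples(z, eps):
    """Worst-case (p = 0.5) sample size, n >= z^2 / (4 * eps^2)."""
    return ceil(z * z / (4.0 * eps * eps))

n_99 = required_samples(2.576, 0.005)   # 99% confidence, half-width 0.005
```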

To validate the Monte Carlo simulation, test samples of up to 10^6 draws were carried out, and three exemplary results are visualized in Fig. 8, Fig. 9 and Fig. 10. These graphs show 25, 250 and 10000 samples; the 10000 samples obviously fit well. The confidence intervals are one of the key quantities of this methodology. To explain how they work, some example figures concerning the Wilson interval are given. If the level of confidence is chosen as 99%, the z-value of the normal distribution is 2.576. The underlying distribution is a binomial distribution, but the error made in the estimation is normally distributed. At 10000 samples, the interval for an estimated probability of 0.05 becomes [0.048152; 0.052511], while at 100,000 samples it narrows to [0.049344; 0.050722].

An alternative is the Wilson interval (Wilson; 1927), which performs especially well for small numbers of samples and for probabilities close to zero and close to one:

$p \in \left[ \dfrac{\hat{p} + \dfrac{z^2}{2n} \pm z \sqrt{\dfrac{\hat{p}(1 - \hat{p})}{n} + \dfrac{z^2}{4 n^2}}}{1 + \dfrac{z^2}{n}} \right]$   (11)

Results of calculations First of all, it is to be stated that the direct approach and the simplified approach utilize exactly the same function for the vertical penetration. Hence, for validating this methodology, the order of the set of functions is arbitrary.

Fig. 8: Vertical Penetration, 25 Samples

Fig. 9: Vertical Penetration, 250 Samples

Fig. 10: Vertical Penetration, 10000 Samples

As the level of confidence is increased to 99.9%, the interval for P widens again: at 10000 samples the interval becomes [0.043322; 0.057652], but it shrinks to [0.047781; 0.052310] at 100,000 samples. It is in fact useful to first determine the interval width to be achieved and, from that, the level of confidence. For the direct calculation approach the Monte Carlo simulation works very well, as shown exemplarily with the function of the vertical penetration.
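The Wilson interval of Eq. (11) is easy to evaluate. The sketch below uses the standard form of the formula, so the exact bounds it produces depend on the chosen z-value and need not match the rounded figures quoted in the text:

```python
from math import sqrt

def wilson_interval(p_hat, n, z):
    """Wilson score interval (Wilson, 1927) for an observed proportion p_hat."""
    denom = 1.0 + z * z / n
    centre = (p_hat + z * z / (2.0 * n)) / denom
    half = (z / denom) * sqrt(p_hat * (1.0 - p_hat) / n + z * z / (4.0 * n * n))
    return centre - half, centre + half

lo_10k, hi_10k = wilson_interval(0.05, 10_000, 2.576)     # 99% confidence
lo_100k, hi_100k = wilson_interval(0.05, 100_000, 2.576)  # ten times the samples
```

As in the comparison above, the interval around an estimated probability of 0.05 narrows as the sample count grows from 10,000 to 100,000.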

Damage generation and evaluation of MEPC.141(54)

From the probability distributions generated from the probability density functions, the following boundaries of a damage cuboid can be determined:


$X_{min} = x_{loc} - 0.5\, x_{extent}$   (12)

$X_{max} = x_{loc} + 0.5\, x_{extent}$   (13)

$Y_{min} = y_{loc} - 0.5\, y_{extent}$   (14)

$Y_{max} = y_{loc} + 0.5\, y_{extent}$   (15)

$Z_{max} = z_{penetration}$   (16)
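Expressed in code, Eqs. (12) to (16) reduce to a small helper (a sketch; the variable names are ours, not E4's):

```python
def damage_cuboid(x_loc, y_loc, x_extent, y_extent, z_penetration):
    """Bounding cuboid of a bottom damage from sampled location and extent,
    eqs. (12)-(16); all values are dimensionless fractions of L, B and D."""
    return {
        "x_min": x_loc - 0.5 * x_extent,
        "x_max": x_loc + 0.5 * x_extent,
        "y_min": y_loc - 0.5 * y_extent,
        "y_max": y_loc + 0.5 * y_extent,
        "z_max": z_penetration,   # measured upward from the baseline
    }

cub = damage_cuboid(0.5, 0.0, 0.2, 0.3, 0.1)
```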

It should be noted that if, e.g., the location of a damage is selected randomly from the related probability distribution, the damage extent follows a dependent probability distribution, which is limited as follows:

Fig. 11: Comparison of the Monte Carlo simulated longitudinal probabilities with the tabulated simplified functions of MARPOL

$Extent = \min(loc,\; 1 - loc)$   (17)

This assumes that no damage can span over the ends of the ship, where the ends have originally been defined as A.P. and F.P., since the basic probability density functions given in MARPOL 73/78, Annex I have been made dimensionless with these values. So the boundaries of a damage cuboid can be generated by randomly generating a location and then, based on that location, generating a new dependent probability distribution in which all physically possible damage extents cumulate to the total probability of 1. As the general concept of generating damages by Monte Carlo simulation has proven successful in the previous section, it will now be checked against the simplified approach given in MARPOL I/21. There, the basic damage assumptions given by the probability density functions have been converted into probability tables. According to the simplified approach, the probability that a compartment bounded by Xmin and Xmax is damaged is calculated as

$P = 1 - P_{Ba}(X_{min}) - P_{Bf}(X_{max})$   (18)

where PBa(Xmin) denotes the probability that the damage is entirely located aft of Xmin and PBf(Xmax) denotes the probability that the damage lies entirely in front of Xmax. As both concepts are derived from the same basic damage information, describing a damage either by location and extent or by aft/front plane should come to the same results. Therefore, we have generated 10000 samples of locations and dependent extents, and from this sample the functions PBa and PBf can be obtained by simply counting how many damages are located in front of or aft of a given x-value. The comparison of the probabilities as given in MARPOL and as derived from our Monte Carlo simulations is shown in Fig. 11.

Although the general trend of the curves is quite similar, the results show a significant difference, especially towards the forward end. Further, it is to be noted that according to the simplified MARPOL functions, the probability PBa takes a value of 0.761 at x = 1 instead of extending to PBa(1) = 1, whereas the probability PBf(1) equals zero. This practically means that for 23.9% of the sample the damage lies entirely in front of x = 1, as 1 − PBa(1) = 0.239, but at the same time no damage is located in front of x = 1, as PBf(1) = 0. This is of course inconsistent, and some modifications of the original probability density functions must have been introduced into the simplified approach. Moreover, the simplified formulae cannot serve as a basis for a Monte Carlo simulation concept: they do not extend to 1 on the one hand, and they refer to different samples on the other, since PBa excludes 23.9% of the samples and PBf 3.1% of the samples. Further, to simulate a damage it is more straightforward to define where the damage actually is than to formulate where it is not. This is demonstrated by considering the longitudinal probability of having a damage with extremely small or even zero extent. The related probability amounts to

$P(x, ext = 0) = 1 - P_{Ba}(x) - P_{Bf}(x)$   (19)

The graphs of this probability as determined from the simplified figures and from the direct Monte Carlo simulations are plotted in Fig. 12. The comparison shows that according to the simplified probability tables a significant portion of the damages is not covered by the sample, as the probability of having a zero length damage should equal 0 as we approach the ends. The main problem which may have led to this inconsistency is the fact that in the original damage assumptions the probability density functions have been referenced to A.P. and F.P., whereas a significant amount of damages may be located outside this interval.
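The counting estimate of PBa and PBf can be sketched as follows, again with a uniform stand-in for the actual location density. With the extent limited per Eq. (17), every damage stays inside [0,1], so PBa(1) = 1 and PBf(0) = 1 by construction, and Eq. (19) stays non-negative everywhere:

```python
import random

rng = random.Random(4)
n = 20000
damages = []
for _ in range(n):
    loc = rng.random()                          # uniform stand-in location pdf
    extent = rng.random() * min(loc, 1 - loc)   # dependent extent, eq. (17)
    damages.append((loc - 0.5 * extent, loc + 0.5 * extent))

def p_ba(x):
    """Fraction of damages lying entirely aft of x (counting estimate)."""
    return sum(1 for x_min, x_max in damages if x_max <= x) / n

def p_bf(x):
    """Fraction of damages lying entirely forward of x (counting estimate)."""
    return sum(1 for x_min, x_max in damages if x_min >= x) / n

# Eq. (18)/(19): estimated probability that a damage spans the position x
p_hit = lambda x: 1.0 - p_ba(x) - p_bf(x)
```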


Fig. 12: Longitudinal probability of having a zero length damage according to the Monte Carlo simulation and according to MARPOL I/21

Fig. 13: Comparison of the Monte Carlo simulated longitudinal probabilities, with 19.1% fore and 1.5% aft overhang, with the tabulated simplified functions of MARPOL


This is exactly what Fig. 12 suggests, especially towards the forward end. The reason given in the MARPOL explanations for the probabilities not cumulating to 1 is that damage zones towards the ends of the vessel may span beyond the vessel. However, it should then be possible to obtain exactly the probability functions PBa and PBf as tabulated in MARPOL if our Monte Carlo approach allows for larger damages towards the ends. A damage spanning over the dimensions of the vessel can be achieved in our Monte Carlo simulation by allowing for larger damage lengths, which results in the following modified equation for the extent:
$Extent = \min(loc + ao,\; 1 + fo - loc)$   (20)
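Equation (20) as a one-line helper, using the best-fit overhangs quoted in the text (abt. 1.5% aft, abt. 19.1% fore) as defaults; with both overhangs set to zero it reduces to Eq. (17):

```python
def max_extent(loc, ao=0.015, fo=0.191):
    """Largest admissible damage extent at dimensionless location loc,
    eq. (20); ao and fo are the aft and fore overhangs (best-fit values
    from the text, abt. 1.5% and abt. 19.1%)."""
    return min(loc + ao, 1.0 + fo - loc)

# With ao = fo = 0 the original constraint of eq. (17) is recovered.
```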

Fig. 14 : Longitudinal Probability of zero length damages

where ao and fo denote the lengths of the aft and fore overhang, respectively. ao and fo can now be treated as independent variables, and they can be determined such that the least-squares difference between the MARPOL functions and the directly computed functions takes a minimum. Care must be taken that the least squares are summed over both functions. This procedure results in a best fit with a fore overhang of abt. 19.1% and an aft overhang of abt. 1.5%. The results are plotted in Fig. 13, and it can clearly be seen that the simplified functions of MARPOL are now represented much better. Fig. 13 is a clear proof of the fact that the MARPOL simplified approach must have modified the basic probability density functions in a similar way. For the application of our Monte Carlo simulation concept it is of course useful that the simulation should lead to the same basic probabilities as the simplified tabled functions; this requires, of course, that the simplified tables are consistent with the basic probability density functions, which is at least doubtful.

The same numerical experiment as before can now be carried out with the modified procedure of simulating damages, to determine the longitudinal probability of a zero length damage. The results are plotted in Fig. 14, and they clearly show that the differences between the simplified MARPOL functions and the probability-density-function-based simulation have now become quite small and, most importantly, small enough for practical applications. The maximum deviation, found at about x = 0.7, now amounts to 0.2762 vs. 0.298, a difference of 0.0218. Possibly a better representation of the simplified data could be achieved if the damage location were also allowed to lie outside the interval [0,1], but this would mean a modification of the basic probability density function. That the simplified approach has perhaps simplified too much is illustrated in the next example, where the probabilities for the transverse coordinates of the damage extent are compared. To convert the probability-density-function representation with location and extent to the simplified table values, we have applied the same procedure as for the longitudinal probabilities. The results are plotted in Fig. 15.


Fig. 15: Comparison of the Monte Carlo simulated transversal probabilities with the tabulated simplified functions of MARPOL

The comparison shows the same problem as before for the longitudinal probabilities: the simplified values do not cumulate to 1. Additionally, the slope of the curves is different, too. In the original damage assumptions represented by the probability density functions, the distribution of the transversal location is uniform. This means that specific characteristics of the damage extent probability density function should also be visible in the simplified functions Pps and Pstb. For example, the extent probability density function (see Fig. 15) has two distinct knuckles, one at 0.9B and another at 0.3B; beyond 0.9B and below 0.3B the gradient is significantly steeper. This specific trend can be noted in principle in the Monte Carlo simulated probabilities, but not in the simplified curves. This shows that the original damage assumptions must have been modified in a certain way to obtain the tabulated values. For an alternative approach such as the one proposed, namely generating damages from the probability distributions obtained by integrating the original damage probability density functions, this inconsistency causes problems, because the tabulated simplified functions cannot for principal reasons be used as input for a Monte Carlo simulation, and the simulations obtained from the original probability density functions do not lead to the same probabilities as stated. On the other hand, it could clearly be shown that the Monte Carlo simulation results are fully in line with the tabulated values for the vertical penetration, as both concepts made use of the same basic damage probability density functions. With respect to the longitudinal and the transversal distribution, it seems not to be a straightforward approach to determine the damage probability by 1 − Pa − Pf resp. 1 − Pps − Pstb, as this procedure results in problems towards the ends of the ship.

Conclusions
The results of this paper can be seen from two perspectives. First, the methodology functions very well and can be applied to automate the generation of damage cuboids with very short calculation times. It is thereby suitable for the comparison of alternative subdivision concepts, especially in the early design stage of a vessel. Second, there is an obvious difference between the probability density functions presented at the beginning of the paper and the simplified functions of MARPOL. A consistent set of probability distributions, regardless of whether expressed by location plus extent or by fore plus aft terminal, is essential for the application of alternative methods.

References

Cramer, H.; Tellkamp, J.: A Methodology for Design Evaluation of Damage Stability; 6th Workshop on Ship Stability; Webb Institute; New York, 2002

IMO, MARPOL 73/78, Annex II, Resolution MEPC.141(54): Amendments to the annex of the protocol of 1978 relating to the international convention for the prevention of the pollution from ships, 1973 (Amendments to regulation 1, addition to regulation 12A, consequential amendments to the IOPP Certificate and amendments to regulation 21 of the revised Annex I of MARPOL 73/78), adopted on 24 March 2006; London, 2006

IMO, MARPOL 73/78, Annex 4: Explanatory Notes on Matters Related to the Accidental Oil Outflow Performance for MARPOL Regulation I/21

IMO, MARPOL 73/78, Consolidated Edition 2002, Unified Interpretations of Annex I, Appendix 7: Interim guidelines for the approval of alternative methods of design and construction of oil tankers under regulation 13F(5) of Annex I of MARPOL 73/78, adopted by MEPC.66(37); London, 2002

Matsumoto, M.; Nishimura, T.: Mersenne twister: A 623-dimensionally equidistributed uniform pseudorandom number generator; ACM Transactions on Modeling and Computer Simulation; 1998

Ostvik, I.; Tellkamp, J.: Application of a Monte Carlo Simulation for Damage Stability Calculations; HIPER 2002; Bergen, 2002

Wilson, E.B.: Probable inference, the law of succession, and statistical inference; Journal of the American Statistical Association 22; pp. 209-212; 1927
