
1. About the Maximum Retry Period

The Integration Server uses the same server thread for the initial service execution and the subsequent retry attempts. The Integration Server returns the thread to the server thread pool only when the service executes successfully or the retry attempts are exhausted. To prevent the execution and re-execution of a single service from monopolizing a server thread for a long time, the Integration Server enforces a maximum retry period when you configure service retry properties. The maximum retry period indicates the total amount of time that can elapse if the Integration Server makes the maximum retry attempts. By default, the maximum retry period is 15,000 milliseconds (15 seconds).

Note: If service auditing is also configured for the service, the Integration Server adds an entry to the service log for each failed retry attempt. Each of these entries will have a status of Retried and an error message of Null. However, if the Integration Server makes the maximum retry attempts and the service still fails, the final service log entry for the service will have a status of Failed and will display the actual error message.

When you configure service retry, the Integration Server verifies that the retry period for that service will not exceed the maximum retry period. The Integration Server determines the retry period for the service by multiplying the maximum retry attempts by the retry interval (for example, 5 retry attempts with a 4,000-millisecond interval give a retry period of 20,000 milliseconds, which exceeds the default maximum). If this value exceeds the maximum retry period, Developer displays an error indicating that either the maximum attempts or the retry interval needs to be modified.

Note: The watt.server.invoke.maxRetryPeriod server parameter specifies the maximum retry period. To change the maximum retry period, change the value of this parameter.

2. Example of Service Level Retry

Setting Service Retry Properties

When configuring service retry, keep the following points in mind:

- You can configure retry attempts for flow services, Java services, and C services only.
- Only top-level services can be retried. That is, a service can be retried only when it is invoked directly by a client request. The service cannot be retried when it is invoked by another service (that is, when it is a nested service).
- If a service is invoked by a trigger (that is, the service is functioning as a trigger service), the Integration Server uses the trigger retry properties instead of the service retry properties.
- Unlike triggers, you cannot configure a service to retry until successful.
- To catch a transient error and re-throw it as an ISRuntimeException, the service must do one of the following (see the sketch after this list): if the service is a flow service, it must invoke pub.flow:throwExceptionForRetry; if the service is written in Java, it can throw com.wm.app.b2b.server.ISRuntimeException().
- The service retry period must be less than the maximum retry period.
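As an illustration of the Java option above, here is a minimal sketch of a service that re-throws a transient backend failure as an ISRuntimeException so that the configured retry settings apply. The service name, pipeline fields, and the lookupCustomer helper are hypothetical; only IData/IDataUtil and ISRuntimeException come from the Integration Server API.

import com.wm.data.IData;
import com.wm.data.IDataCursor;
import com.wm.data.IDataUtil;
import com.wm.app.b2b.server.ServiceException;
import com.wm.app.b2b.server.ISRuntimeException;

public final class RetryExample {

    // Hypothetical backend call; assume it throws IOException when the backend is temporarily down.
    static String lookupCustomer(String customerId) throws java.io.IOException {
        return "record-for-" + customerId;
    }

    // Body of a hypothetical top-level Java service (getCustomer) with retry configured.
    public static final void getCustomer(IData pipeline) throws ServiceException {
        IDataCursor cursor = pipeline.getCursor();
        String customerId = IDataUtil.getString(cursor, "customerId");
        cursor.destroy();
        try {
            String record = lookupCustomer(customerId);
            IDataCursor out = pipeline.getCursor();
            IDataUtil.put(out, "record", record);
            out.destroy();
        } catch (java.io.IOException transientFailure) {
            // Signal a transient error: Integration Server retries the service
            // according to its Max attempts and Retry interval properties.
            throw new ISRuntimeException();
        }
    }
}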

3. Example of Retry through REPEAT Step

4. REPEAT

The REPEAT step allows you to conditionally repeat a sequence of child steps based on the success or failure of those steps. You can use REPEAT to:

- Re-execute (retry) a set of steps if any step within the set fails. This option is useful for accommodating transient failures that might occur when accessing an external system (for example, databases, ERP systems, Web servers, or Web services) or device. A conceptual sketch of this retry behaviour follows the list.
- Re-execute a set of steps until one of the steps within the set fails. This option is useful for repeating a process as long as a particular set of circumstances exists (for example, data items exist in a data set).
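The following is a conceptual sketch in plain Java, not Integration Server code: it approximates how a REPEAT with Repeat on = FAILURE uses the Count and Repeat interval properties described below. The executeChildSteps method is a hypothetical stand-in for the child steps.

// Conceptual sketch only (plain Java, not Integration Server internals).
// The child steps always run once, then re-execute up to Count more times,
// waiting Repeat interval seconds between iterations.
public final class RepeatOnFailureSketch {

    // Hypothetical stand-in for the REPEAT step's child steps.
    static void executeChildSteps() throws Exception {
        // ... child steps would run here and throw on failure ...
    }

    public static void repeatOnFailure(int count, int repeatIntervalSeconds) throws Exception {
        int attempt = 0;
        while (true) {
            try {
                executeChildSteps();          // children always execute at least once
                return;                       // no failure: stop repeating
            } catch (Exception failure) {
                if (count >= 0 && attempt >= count) {
                    throw failure;            // Count reached: the failure propagates
                }
                attempt++;                    // a Count of -1 repeats as long as the steps keep failing
                Thread.sleep(repeatIntervalSeconds * 1000L);  // Repeat interval delay
            }
        }
    }
}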

Specifying the REPEAT Condition

When you build a REPEAT step, you set the Repeat on property to specify the condition (success or failure) that will cause its children to re-execute at run time.

Repeat on = FAILURE - Re-executes the set of child steps if any step in the set fails.
Repeat on = SUCCESS - Re-executes the set of child steps if all steps in the set complete successfully.

Setting the REPEAT Counter

The REPEAT step's Count property specifies the maximum number of times the server re-executes the child steps in the REPEAT step:

Count = 0 - Does not re-execute the children.
Count > 0 - Re-executes the children up to this number of times.
Count = -1 or blank - Re-executes the children as long as the specified Repeat on condition is true.

Important! Note that the children of a REPEAT always execute at least once. The Count property specifies the maximum number of times the children can be re-executed. At the end of an iteration, the server checks whether the condition (that is, failure or success) for repeating is satisfied. If the condition is true and the Count is not met, the children are executed again. This process continues until the repeat condition is false or Count is met, whichever occurs first. (In other words, the maximum number of times that the children of a REPEAT will execute when Count is > -1 is Count+1.)

REPEAT Properties

Scope - Optional. Specifies the name of a document (IData object) in the pipeline to which you want to restrict this step. If you want this step to have access to the entire pipeline, leave this property blank.

Timeout - Optional. Specifies the maximum number of seconds that this step should run. If this time elapses before the step completes, Integration Server issues a FlowTimeoutException and execution continues with the next step in the service. If you want to use the value of a pipeline variable for this property, type the variable name between % symbols, for example, %expiration%. The variable you specify must be a String. If you do not need to specify a time-out period, leave Timeout blank. The Timeout property for the REPEAT step specifies the amount of time in which the entire REPEAT step, including all of its possible iterations, must complete. A Timeout value of 0 means there is no time-out limit.

Label - Optional. (Required if you are using this step as a target for a BRANCH or EXIT step.) Specifies a name for this specific step, or a null, unmatched, or empty string ($null, $default, blank).

Count - Required. Specifies the maximum number of times the server re-executes the child steps in the REPEAT step. Set Count to 0 (zero) to instruct the server that the child steps should not be re-executed. Set Count to a value greater than zero to instruct the server to re-execute the child steps up to the specified number of times. Set Count to -1 to instruct the server to re-execute the child steps as long as the specified Repeat on condition is true. If you want to use the value of a pipeline variable for this property, type the variable name between % symbols, for example, %servicecount%. The variable you specify must be a String.

Repeat interval - Optional. Specifies the number of seconds the server waits before re-executing the child steps. Specify 0 (zero) to re-execute the child steps without a delay. If you want to use the value of a pipeline variable for this property, type the variable name between % symbols, for example, %waittime%. The variable you specify must be a String.

Repeat on - Required. Specifies when the server re-executes the REPEAT child steps. Select SUCCESS to re-execute the child steps when all the child steps complete successfully. Select FAILURE to re-execute the child steps when any one of the child steps fails.

5. Using REPEAT as a target of a BRANCH step

6. Difference between LOOP and REPEAT

1. LOOP - A LOOP takes a document (array) as input. You perform a loop operation on an array of items (String list, Document list, or Object list), and the LOOP processes each and every element in that array. The input for a LOOP is an array value.
2. REPEAT - A REPEAT takes a number as input. REPEAT is a condition-based flow step; whether the child steps are re-executed depends on the Repeat on value (SUCCESS or FAILURE in the property panel).

7. Pipeline

When a Flow service is called, it receives a copy of the entire state of the pipeline from the calling service - not just the variables you explicitly pass in (this is why using scope can increase performance!). The Flow service modifies the pipeline, typically by adding new variables to it (the results), and then returns the modified pipeline to the calling service. The returned pipeline is then merged with the calling service's pipeline - any existing variables with the same name and type are overwritten (a plain-Java sketch of this merge behaviour follows below).

webMethods pipeline explained

Thought I'd describe a bit of how the pipeline concept works inside the Integration Server, after an interesting (from an academic view) thread on the webMethods Advantage forum about what happens when you invoke a service and what happens in the pipeline.
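To make the merge behaviour concrete, here is a plain-Java sketch that uses a HashMap as a stand-in for the pipeline. This is an analogy only, not Integration Server code, and the variable names are invented.

import java.util.HashMap;
import java.util.Map;

public final class PipelineMergeSketch {
    public static void main(String[] args) {
        Map<String, Object> callerPipeline = new HashMap<>();
        callerPipeline.put("customerId", "C-100");
        callerPipeline.put("status", "NEW");

        // The called service works on a copy of the caller's pipeline state.
        Map<String, Object> calleePipeline = new HashMap<>(callerPipeline);
        calleePipeline.put("result", "OK");         // a new output variable
        calleePipeline.put("status", "PROCESSED");  // same name as a caller variable

        // Merging back: existing variables with the same name are overwritten.
        callerPipeline.putAll(calleePipeline);
        System.out.println(callerPipeline.get("status")); // PROCESSED (caller's value overwritten)
        System.out.println(callerPipeline.get("result")); // OK (new result visible to the caller)
    }
}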

What goes on from the outside world

So for the background: webMethods Integration Server uses a pipeline structure to store, pass and return any variables. When you invoke a service from the outside world the following happens (simplified of course):

1. The transport protocol is verified and stripped away to work out which service is to be invoked (in the case of HTTP the URL is examined; if it comes in via an FTP port, the directory that the file is uploaded to in the "virtual ftp server" identifies the service, etc.).
2. Authentication/permissions and all of that stuff is worked out.
3. The Integration Server creates an empty pipeline structure (that is, some object implementing the com.wm.data.IData interface, for anyone interested in the java specifics).
4. If there were any input parameters supplied, they are put into the pipeline structure (e.g. if you did a form submit via HTTP and passed in myVariable, it will end up in the pipeline - actually it'll assume for safety that it may have been a list also, but that's another story and shall be told another time).
5. The service is invoked with the brand new pipeline object passed in as the parameter.
6. When the service is finished, the resulting pipeline is queried, the results are formatted (depending on the protocol this may be an HTML or XML page sent back, or a file created) and the results are returned to the caller in the outside world.

What happens internally

So that's what goes on at a high level from the outside world. On the inside, when a service calls another service, the way it passes in input parameters is by placing values in the pipeline with a given name, which are then assumed to be there by the service being called. This is an important distinction from, say, java methods, where the inputs for a method must be given. In services it is more of a "pull" notion. That is: if I declare that a service will "take in" variable X, then it is really saying that in my service I will try to extract variable X from the pipeline and use it for whatever purpose I want.

The pipeline

It's best to think of the pipeline as a hash table wrapped up in a structure that allows access to it through a cursor. In flow it appears as a magical graphical set of keys which you can map, drop and set as you see fit. In java it is an IData, from which you grab an IDataCursor and then play with it via IDataUtil (which simplifies pulling and pushing values into and out of it). The pipeline can contain documents (known as records prior to version 6), which are the same object/concept of a structure that contains key/value pairs.

How to think of Service Inputs/Outputs

The following gives you a quick idea of how to think of service inputs and outputs:

Inputs: "will be assumed to have been placed in the pipeline and may be extracted for use"
Outputs: "will be placed in the pipeline after execution"

Mopping up needed

So be careful to clean up anything that is not an input or an output. I suggest you leave the inputs undropped, because the caller will have to clean the inputs up anyhow, so there is no advantage whatsoever to dropping them from the pipeline in the service itself. So take the standard "it's not my problem" approach with inputs and outputs and you'll be fine. Anything else, however, is your responsibility to clean up. A simple solution is to put a MAP step at the end of each service with a comment "cleanup" and drop anything not in the inputs and outputs of the service.

Sneaky pipeline practices

It is also possible, if you wanted a maintenance nightmare (or perhaps just like self-inflicted pain), to never declare inputs formally, but to just "expect" certain variables to be in the pipeline. There is no compile-time checking to ensure that you only use declared inputs, and this can be quite handy in some cases, but on average it is a bad idea. It is also possible (and quite common) to return "hidden" values by not declaring output variables, yet leaving values in the pipeline. The trouble with returning things that you haven't declared is that you may, by accident, return a value on the pipeline that has the same name as one that was in the pipeline of the parent. If the service had declared that it returned a variable of that name, then when writing a service that called it you would have noticed immediately, while writing the flow code, that it was going to be overwritten and would need to be renamed or handled in some other way. But in the case of the hidden output, there is no visual cue without working it out by looking at the code of the inner service (rather than treating it as a black box).

Mutating documents

There is a possible theoretical problem that could occur (although I've never actually witnessed it in a commercial project): if you have a document passed in, either as a direct input or as a hidden input, and you add fields and play with its structure, but then clean up by dropping the document, the caller will still end up with their document modified, because the caller had that document in the pipeline (and what happens after a service invoke is that the caller's and the callee's pipelines get merged), despite the callee service being "clean" and dropping any undeclared inputs/outputs. There are a number of solutions to stopping a service 100% from affecting the calling service's variables, but generally there is no need if things are coded correctly. This problem is not isolated to webMethods, as pretty much any OO language that passes in structures may have that structure modified (e.g. a service is the equivalent of a java method that takes in a hash table: unless you create a completely new hash table, clone the objects in it and then copy them into the hash table, there'll be the chance of some modification of those objects).

Local variable holder pattern

So a simple solution, if there is a need for any local variables in a service and you are worried that you may be using the same document names as a caller (not usually a problem, as you will generally be overwriting the whole document, and so long as you clean up it's not a problem), is to put any temporary variables inside a document that has a name related to the service itself (and thus not going to conflict with any other service's temporary document), as sketched below. This also simplifies the process for the aforementioned cleanup map step: you just blow away one thing, which is the document that holds all the others. (See http://www.customware.net/repository/display/WMPATTERNS/Local+Variable+Holder for a bit more information.)
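A minimal Java sketch of the local variable holder pattern described above. The service name and field names are invented for illustration; the IData, IDataFactory, IDataCursor and IDataUtil calls are the standard pipeline API also used in the Java service sample later in this document.

import com.wm.data.IData;
import com.wm.data.IDataCursor;
import com.wm.data.IDataFactory;
import com.wm.data.IDataUtil;
import com.wm.app.b2b.server.ServiceException;

public final class LocalVariableHolderSketch {

    // Body of a hypothetical Java service that keeps its temporary variables
    // inside a single, service-specific holder document.
    public static final void processOrder(IData pipeline) throws ServiceException {
        IDataCursor cursor = pipeline.getCursor();

        // Build the holder document for this service's temporary variables.
        IData temp = IDataFactory.create();
        IDataCursor tempCursor = temp.getCursor();
        IDataUtil.put(tempCursor, "intermediateResult", "working value");
        IDataUtil.put(tempCursor, "loopCounter", "0");
        tempCursor.destroy();

        // Publish it under a service-specific name so it cannot collide with
        // documents already in the caller's pipeline.
        IDataUtil.put(cursor, "processOrder_temp", temp);

        // ... the service logic would read and update processOrder_temp here ...

        // Cleanup: dropping the single holder removes every temporary variable at once,
        // mirroring the "cleanup" MAP step suggested above.
        IDataUtil.remove(cursor, "processOrder_temp");
        cursor.destroy();
    }
}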

8. Flow step Timeout

For more information about how Integration Server handles flow step timeouts, refer to the description of the watt.server.threadKill.timeout.enabled configuration parameter in webMethods Administering Integration Server.

9. webMethods File Structure

1. node.idf - Every package, folder and service has its own folder, and each folder has a node.idf file - I would guess something like "interface definition file". This file seems to define that the element is another branch in the namespace. You might sometimes see the node namespace entries there, but it is still a mystery where that is needed. The file contains shared information about a java service.

Sample node.idf content

<?xml version="1.0" encoding="UTF-8"?>
<Values version="2.0">
  <value name="node_type">interface</value>
  <value name="node_nsName">OUK_ENDUSER_IVR.Util</value>
  <value name="encodeutf8">true</value>
  <value name="shared"></value>
  <value name="extends"></value>
  <array name="implements" type="value" depth="1">
  </array>
  <array name="imports" type="value" depth="1">
    <value>java.util.*</value>
    <value>java.text.*</value>
  </array>
</Values>

2. node.ndf - For each flow service you will find the flow.xml file along with node.ndf. For my own reference I read ndf as "Node Definition File". This is really used to define the node in your namespace. It contains the following information:

- Input and output signature of the flow and java services (if you are not using a specification reference for your input and output).
- Information about what kind of artefact the element is - java service, flow service, document, trigger, folder, etc.
- Service access information.
- Audit information.
- Service level properties (Runtime (Stateless, cache, prefetch, etc.), Retry, Universal Name, template, etc.).

A node.ndf file is present for the following webMethods elements:

- Flow service (along with a flow.xml file)
- Java service (along with a java.frag file as well as source and class files)
- Adapter Connections (only a node.ndf file)
- Adapter Listeners (only a node.ndf file)
- Adapter Notifications (only a node.ndf file)
- Document (only a node.ndf file)
- Schema (only a node.ndf file)
- Specification (only a node.ndf file)
- Trigger (only a node.ndf file)

Sample node.ndf content of a Java service

<?xml version="1.0" encoding="UTF-8"?>
<Values version="2.0">
  <value name="svc_type">java</value>
  <value name="svc_subtype">unknown</value>
  <value name="svc_sigtype">java 3.5</value>
  <record name="svc_sig" javaclass="com.wm.util.Values">
    <record name="sig_in" javaclass="com.wm.util.Values">
      <value name="node_type">record</value>
      <value name="field_type">record</value>
      <value name="field_dim">0</value>
      <value name="nillable">true</value>
      <value name="form_qualified">false</value>
      <value name="is_global">false</value>
      <array name="rec_fields" type="record" depth="1">
        <record javaclass="com.wm.util.Values">
          <value name="node_type">record</value>
          <value name="node_comment"></value>
          <record name="node_hints" javaclass="com.wm.util.Values">
            <null name="field_usereditable"/>
            <value name="field_largerEditor">false</value>
            <value name="field_password">false</value>
          </record>
          <value name="field_name">key</value>
          <value name="field_type">string</value>
          <value name="field_dim">0</value>
          <value name="nillable">true</value>
          <value name="form_qualified">false</value>
          <value name="is_global">false</value>
        </record>
      </array>
    </record>
    <record name="sig_out" javaclass="com.wm.util.Values">
      <value name="node_type">record</value>
      <value name="field_type">record</value>
      <value name="field_dim">0</value>
      <value name="nillable">true</value>
      <value name="form_qualified">false</value>
      <value name="is_global">false</value>
      <array name="rec_fields" type="record" depth="1">
        <record javaclass="com.wm.util.Values">
          <value name="node_type">record</value>
          <value name="node_comment"></value>
          <record name="node_hints" javaclass="com.wm.util.Values">
            <null name="field_usereditable"/>
            <value name="field_largerEditor">false</value>
            <value name="field_password">false</value>
          </record>
          <value name="field_name">value</value>
          <value name="field_type">string</value>
          <value name="field_dim">0</value>
          <value name="nillable">true</value>
          <value name="form_qualified">false</value>
          <value name="is_global">false</value>
        </record>
      </array>
    </record>
  </record>
  <value name="node_comment">This is a utility service to read business exception codes</value>
  <value name="stateless">no</value>
  <value name="caching">no</value>
  <value name="prefetch">no</value>
  <value name="cache_ttl">15</value>
  <value name="prefetch_level">1</value>
  <value name="template">OUK_CF_PPS_EAI_Util_getBusinessException</value>
  <value name="template_type">html</value>
  <value name="audit_level">off</value>
  <value name="check_internal_acls">no</value>
  <value name="icontext_policy">$null</value>
  <value name="system_service">no</value>
  <value name="retry_max">0</value>
  <value name="retry_interval">0</value>
  <value name="svc_in_validator_options">none</value>
  <value name="svc_out_validator_options">none</value>
  <value name="auditoption">0</value>
  <record name="auditsettings" javaclass="com.wm.util.Values">
    <value name="document_data">0</value>
    <value name="startExecution">false</value>
    <value name="stopExecution">false</value>
    <value name="onError">true</value>
  </record>
</Values>

3. flow.xml - This contains the steps for the flow service. All the steps you write in a flow service are embedded as nodes/elements in the flow.xml. You can see that it has MAP, COMMENT, SEQUENCE and BRANCH elements, among others. All of these XML nodes are translated into executable code by webMethods in the form of static classes. I believe the classes are loaded as static classes, which helps performance.

4. flow.xml.bak - flow.xml.bak contains a backup of the flow service.

5. java.frag - Every Java service has its own java.frag file. This file is the compiled version of the Java service, which controls whether your java source code is made available/viewable to others or not (a kind of security). As we all know, every java service in wM is merely a method in a Java class which is named after the folder in which the service resides. This java.frag file (my guess) is only a pointer/redirection to the compiled class under the \code\classes folder. If you do not have this file or the .class file, try using jcode.bat under \IntegrationServer\bin; this will update/compile any java services in the package. We really do not need to modify any of these files, as Developer provides the best way to look at and edit them. They preserve the format that is understood by both IS and Developer, which means that if we try to edit them outside the standard wM editors, we might lose consistency.

Sample webMethods Java service code in the classes folder

package OUK_ENDUSER_IVR;

// -----( IS Java Code Template v1.2
// -----( CREATED: 2009-10-14 18:07:42 EDT
// -----( ON-HOST: VMinsash.softwareag.com

import com.wm.data.*;
import com.wm.util.Values;
import com.wm.app.b2b.server.Service;
import com.wm.app.b2b.server.ServiceException;
// --- <<IS-START-IMPORTS>> ---
import java.util.*;
import java.text.*;
// --- <<IS-END-IMPORTS>> ---

public final class Util {

    // ---( internal utility methods )---

    final static Util _instance = new Util();

    static Util _newInstance() { return new Util(); }

    static Util _cast(Object o) { return (Util)o; }

    // ---( server methods )---

    public static final void DateDifferenceInDays (IData pipeline) throws ServiceException {
        // --- <<IS-START(DateDifferenceInDays)>> ---
        // @subtype unknown
        // @sigtype java 3.5
        // [i] field:0:required date1
        // [i] field:0:required date2
        // [i] field:0:required pattern
        // [o] field:0:required days

        // pipeline
        IDataCursor pipelineCursor = pipeline.getCursor();
        String date1 = IDataUtil.getString( pipelineCursor, "date1" );
        String date2 = IDataUtil.getString( pipelineCursor, "date2" );
        String pattern = IDataUtil.getString( pipelineCursor, "pattern" );
        String temp = null;
        pipelineCursor.destroy();

        try {
            SimpleDateFormat sdf = new SimpleDateFormat(pattern);
            Date dt1 = sdf.parse(date1);
            Date dt2 = sdf.parse(date2);
            Calendar cal1 = Calendar.getInstance();
            Calendar cal2 = Calendar.getInstance();
            cal1.setTime(dt1);
            cal2.setTime(dt2);
            long diff = cal1.getTimeInMillis() - cal2.getTimeInMillis();
            long days = diff/(24*60*60*1000);
            temp = ""+days;
        } catch (Exception e) {
            e.printStackTrace();
        }

        // pipeline
        IDataCursor pipelineCursor_1 = pipeline.getCursor();
        IDataUtil.put( pipelineCursor_1, "days", temp);
        pipelineCursor_1.destroy();
        // --- <<IS-END>> ---
    }
}

All Java services in webMethods are declared as a public static final method under a public final class. The class name is the name of the folder in which the java services reside. A static method is one which cannot be overridden but can be hidden. A static final method is one which can be neither overridden nor hidden.

Advantages of making a class final:

1. Nobody can extend final classes and change their behaviour. All methods in a final class are implicitly final, which means the methods cannot be overridden.
2. You might have a class that deals with certain security issues. By subclassing it and feeding your system the subclassed version of it, an attacker could circumvent security restrictions.

3. To avoid an object becoming mutable. E.g. since Strings are immutable, your code can safely keep references to them (String blah = someOtherString;) instead of copying the string first. However, if you could subclass String, you could add methods to it that allow the string value to be modified; then no code could rely anymore on the string staying the same if it just copies it as above - instead it would have to duplicate the string.
4. If you mark classes and methods as final, you may notice a small performance gain, since the runtime doesn't have to look up the right class method to invoke for a given object. Virtual (overridden) methods are generally implemented via some sort of table (vtable) that is ultimately a function pointer, and each method call has the overhead of going through that pointer. When a class is marked final, none of its methods can be overridden and the table is no longer needed - thus it is faster.
5. A final class can be safely shared between multiple threads without any synchronization overhead.

6. .cnf files - These are mainly configuration files. Most of the cnf files are stored in the IntegrationServer\config folder. There can also be cnf files in the config folder of each webMethods product; this also means that TN Console or the workflow server will have its own specific cnf files. Each Wm* package under Integration Server can also have its own configuration files. These files are loaded into memory when the server, packages or products are loaded, and flushed back to the file system when they are shut down. All changes to the product's settings are made in memory and flushed only at shutdown, so do not attempt to make changes in those cnf files while the product is running (I would say do not make changes in the files directly at all). There are some more cnf files in the config\jdbc folder; they are for the JDBC pools of the server. Note: Every time the Integration Server comes up, it backs up all the cnf files in the backup folder.

7. ServiceCache - The IntegrationServer\datastore\ServiceCache file is used to store cached service output. This is an encrypted file, as the data is not visible in clear text.

8. IntegrationServer\web\conf\web.xml - Most of the web interfaces in IS are managed by the integrated Tomcat server. This is needed even for running WmMonitor. Explicitly created web applications are deployed under the pub folder. At the time of loading that web context, the IS creates the compiled classes under web\work\\. This web.xml under the conf folder is very similar to the one under a Jakarta Tomcat server. You can modify it for servlet mapping, logging configuration, mime-mappings and security constraints of web applications. .access files are also used to define security constraints under each \pub\... folder; they define the ACLs that are allowed to access the files or subfolders at the same level as the .access file.

10. Integration Server Folder Structure

Comparison of the Integration Server folder structure in wM712 and wM82.

Folders: audit, bin, common, config, datastore, db, DocumentStore, lib, logs, packages, pipeline, replicate, sdk, support, updateReadmes, updates, userFtpRoot, web, WmRepository4, XAStore.

Files noted in the comparison: AuditStore.data0000000, AuditStore.log0000000, some library files, ISTransStorelog0000000, ISTransStoredata0000000, ISResubmitStorelog0000000, ISResubmitStoredata0000000, TriggerStorelog0000000, TriggerStoredata0000000.
