
*MISSING: Invalid java custom function*

Summary: this error in a custom Java function is caused by a difference between the Java version used to compile the function and the Java version running it.

tcp://localhost:7222> create Factory QueueConnectionFactory queue
QueueConnectionFactory 'QueueConnectionFactory' has been created
tcp://localhost:7222> create Factory TopicConnectionFactory topic
TopicConnectionFactory 'TopicConnectionFactory' has been created

In EMS 8.1 the EMS URL must also be provided while creating factories:
create factory myFactory generic URL=tcp://server1:7344


show connections lists all active connections.


show consumers queue=sample lists the consumers on the queue named sample:
tcp://localhost:7222> show consumers
Msgs Sent
Id Conn User T Queue SAS Sent Size Uptime
4 5 admin Q $TMP$.E4EMS-SERVER.1B20530EF2B06.3 +N- 0 0 0:10:49
12 11 anonymous Q sample +A- 0 0 0:04:18

tcp://localhost:7222> show consumers connection=11


Msgs Sent
Id Conn User T Queue SAS Sent Size Uptime
12 11 anonymous Q sample +A- 0 0 0:05:16

tcp://localhost:7222> set server statistics=true


Server parameters have been changed
tcp://localhost:7222> show stat queue sample
Inbound statistics for queue 'sample':
Total Count Rate/Second
Queue Name Msgs Size Msgs Size
sample 0 0.0 Kb 0 0.0 Kb
Outbound statistics for queue 'sample':
Total Count Rate/Second
Queue Name Msgs Size Msgs Size
sample 0 0.0 Kb 0 0.0 Kb

tcp://localhost:7222> set server detailed_statistics=producers


Server parameters have been changed
tcp://localhost:7222> set server detailed_statistics=consumers
Server parameters have been changed

tcp://localhost:7222> set server track_message_ids=true


Server parameters have been changed

tcp://localhost:7222> set server track_correlation_ids=true


Server parameters have been changed

You need to configure the EMS server to track messages by message IDs or correlation IDs before messages can be listed:
tcp://localhost:7222> show messages
MessageID = ID:E4EMS-SERVER.1B20530EF2B018:4
MessageID = ID:E4EMS-SERVER.1B20530EF2B019:5
tcp://localhost:7222> show message ID:E4EMS-SERVER.1B20530EF2B019:5
TextMessage={Header={ JMSDestination={QUEUE:'sample'}
JMSDeliveryMode={PERSISTENT} JMSPriority={4} JMSMessageID={ID:E4EMS-
SERVER.1B20530EF2B019:5} JMSCorrelationID={hi}(message content)

(if a selector is defined, the message properties are also shown:)


JMSMessageID={ID:EMS-SERVER1.C085518F65860:6} JMSTimestamp={Tue Apr 07 14:34:34
2015}} Properties={"criticality"={string:'med'}} Body={string:'med critical
message'}

tcp://localhost:7222> create queue test exclusive

If a queue is created as exclusive, its messages can be read by only one consumer. Consider a situation where we have
more than one consumer for an exclusive queue: only one of them receives the messages and the rest remain on standby.
Once the active consumer goes down, one of the standby consumers starts picking up the messages.

EMS supports persistence by two means:

1. File: by default uses asynchronous mode to store data in a file, i.e. control
returns to the producer regardless of whether the message has been written
to disk or not. If the EMS server crashes, the message can be lost without
being written to the file. To prevent this scenario, we can configure
synchronous writes to the file. The stores.conf file defines the message stores.
2. Database: the EMS server can also store messages in a database, configured as described below.
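A minimal file-store entry in stores.conf might look like the following sketch (the store name and file name here are illustrative assumptions, not values from these notes):

```
[$sys.failsafe]          # store name (illustrative)
type=file
file=sync-msgs.db        # data file name (illustrative)
mode=sync                # synchronous writes, so a server crash does not lose acknowledged messages
```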

The below steps need to be done to allow EMS to use a database for its store:

1. Update the stores.conf file:

[$sys.orcl]                                            # orcl: your database name
type=dbstore
dbstore_driver_url=jdbc:oracle://localhost:1522/orcl   # your DB URL
dbstore_driver_username=scott                          # username
dbstore_driver_password=tiger                          # password

2. Run the schema export tool against the configuration to create the database tables:

C:\TIBCOSoc\ems\5.1\bin>java -jar tibemsd_util.jar -tibemsdconf tibemsd.conf -createall

TIBCO Enterprise Message Service Schema Export Tool.


Copyright 2003-2010 by TIBCO Software Inc.
All rights reserved.

Version 5.1.5 V3 3/29/2010

### Processing store '$sys.orcl' .. ###

m_storeName = $sys.orcl, m_storeType = dbstore, m_storeDriverName =


dbstore_driver_name=oracle.jdbc.driver.OracleDriver, m_storeDriverDialect =
org.hibernate.dialect.Orac

drop table ACK_RECS cascade constraints


drop table CONNECTION_RECS cascade constraints
drop table CONSUMER_RECS cascade constraints
drop table CONSUMER_RECS_IMPSELMAP cascade constraints
drop table EMS_HBLOCK_TABLE cascade constraints
drop table EMS_MESSAGES cascade constraints
drop table EMS_SYS_RECORDS_TABLE cascade constraints
drop table PRODUCER_RECS cascade constraints
drop table PURGE_RECS cascade constraints
drop table SESSION_RECS cascade constraints
drop table TXN_RECS cascade constraints
drop table VALID_MSGS_RECORD cascade constraints
drop table VALID_MSGS_RECORD_HELDMSGS cascade constraints
drop table VALID_MSGS_RECORD_HOLDMSGS cascade constraints
drop table VALID_MSGS_RECORD_NOHOLDMSGS cascade constraints
drop table ZONE_RECS cascade constraints
drop sequence hibernate_sequence
create table ACK_RECS (ACK_RECORD_ID number (19,0) not null, CONSUMER_ID
number(19,0), SEQNO number(19,0), DISCARD_ACK_FLAG number(1,0), TXN_ID
number(19,0), SESS_ID numbe
create table CONNECTION_RECS (STORE_ID number(19,0) not null, CONNECTION_ID
number(19,0), CONNECTION_TYPE number(10,0), CLIENT_ID varchar2(256),
CONNECTION_USER varchar2(
create table CONSUMER_RECS (STORE_ID number(19,0) not null, CONSUMER_ID
number(19,0), SESSION_ID number(19,0), CONNECTION_ID number(19,0), DURABLE_NAME
varchar2(255), SUB
create table CONSUMER_RECS_IMPSELMAP (StoreId number(19,0) not null,
HASHMAP_VALUE varchar2(255), HASHMAP_KEY varchar2(255) not null, primary key
(StoreId, HASHMAP_KEY))
create table EMS_HBLOCK_TABLE (id number(19,0) not null, last_update date not
null, server_id varchar2(255) not null, JDBCURL varchar2(4000) not null, primary
key (id))
create table EMS_MESSAGES (STORE_ID number(19,0) not null, MESSAGE_SEQNO
number(19,0), TYPE number(3,0), PRIORITY number(3,0), DELIVERYMODE number(3,0),
REDELIVERED numbe
er(1,0) not null, MESSAGE_SIZE number(19,0), SMALL_MESSAGE_BODY raw(255),
LARGE_MESSAGE_BODY blob, primary key (STORE_ID))
create table EMS_SYS_RECORDS_TABLE (SYSTEM_REC_ID number(19,0) not null, VERSION
varchar2(255), EMS_START_TIME varchar2(255), EMS_UPDATE_TIME varchar2(255),
primary key (
create table PRODUCER_RECS (STORE_ID number(19,0) not null, PRODUCER_ID
number(19,0), SESSION_ID number(19,0), CONNECTION_ID number(19,0), DESTINATION
varchar2(255), DEST
create table PURGE_RECS (PURGE_RECORD_ID number(19,0) not null, DEST_NAME
varchar2(255), DEST_TYPE number(3,0), SEQ_NO number(19,0), primary key
(PURGE_RECORD_ID))
create table SESSION_RECS (STORE_ID number(19,0) not null, SESSION_ID
number(19,0), CONNECTION_ID number(19,0), ACKNOWLEDGE_MODE number(10,0),
TRANSACTED number(1,0), IS_
create table TXN_RECS (TXNREC_STORE_ID number(19,0) not null, SESS_ID
number(19,0), TXN_ID number(19,0), RECORD_ID number(19,0), TXN_STATE
number(10,0), XID raw(1024), pr
create table VALID_MSGS_RECORD (VALID_MSG_RECORD_ID number(19,0) not null,
ACTIVE_HI_SEQ number(19,0), SMALL_MESSAGE_BODY raw(255), LARGE_MESSAGE_BODY blob,
primary key (
create table VALID_MSGS_RECORD_HELDMSGS (id number(19,0) not null, HELD_MSGS
number(19,0), listIndex number(10,0) not null, primary key (id, listIndex))
create table VALID_MSGS_RECORD_HOLDMSGS (id number(19,0) not null, HOLD_MSGS
number(19,0), listIndex number(10,0) not null, primary key (id, listIndex))
create table VALID_MSGS_RECORD_NOHOLDMSGS (id number(19,0) not null, NO_HOLD_MSGS
number(19,0), listIndex number(10,0) not null, primary key (id, listIndex))
create table ZONE_RECS (STORE_ID number(19,0) not null, ZONE_ID number(10,0),
ZONE_NAME varchar2(255), ZONE_TYPE number(5,0), primary key (STORE_ID))
alter table CONSUMER_RECS_IMPSELMAP add constraint FKD1B84D7B3B9C4484 foreign key
(StoreId) references CONSUMER_RECS
create index DEL_IDX on EMS_MESSAGES (DELETED)
alter table VALID_MSGS_RECORD_HELDMSGS add constraint FK5908794BC44ABBE5 foreign
key (id) references VALID_MSGS_RECORD
alter table VALID_MSGS_RECORD_HOLDMSGS add constraint FK6A06C9D5C44ABBE5 foreign
key (id) references VALID_MSGS_RECORD
alter table VALID_MSGS_RECORD_NOHOLDMSGS add constraint FK610EB196C44ABBE5
foreign key (id) references VALID_MSGS_RECORD
create sequence hibernate_sequence

Character Encoding
By default, message strings use UTF-8; ISO 8859-1 (Latin-1) can be configured instead. For performance improvement,
you can use Latin-1 as your encoding, as it always uses a single byte per character, whereas UTF-8 uses one to four
bytes depending on the character.
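The size difference can be seen directly in Python (an illustrative demonstration of the encodings, not EMS-specific behaviour):

```python
# Latin-1 always encodes one byte per character;
# UTF-8 uses one byte for ASCII but more for other characters.
text = "café"
print(len(text.encode("latin-1")))  # 4 bytes
print(len(text.encode("utf-8")))    # 5 bytes: 'é' takes two bytes in UTF-8
```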

Message compression ensures that a message consumes less space in the EMS server, which results in faster
processing. Compression is applied only to the message body; the header and properties are never compressed.
Compression is most useful when messages are stored, since server storage comes into play.

A selector uses only the message header and properties to define its condition, never the
body.

Examples: in the EndPoint Configuration panel, specify the selector in the Message Selector field. For
example, type (Branch='Boston' OR Branch='East Coast' OR Branch='ALL') AND ((SalesUpper>=62
AND SalesLower<=62) OR SalesVolume='ALL').

Repeat step a and step b to specify a JMS selector for another Subscription Service. For example, type (Branch='New
York' OR Branch='East Coast' OR Branch='ALL') AND ((SalesUpper>=90 AND SalesLower<=90) OR
SalesVolume='ALL') in the Message Selector field for Subscription Service named mysub2.

TIBCO EMS message selectors

Posted on March 3, 2016 by Lijo Ouseph

A message selector is a string that lets a client program specify a set of messages, based on the
values of message headers and properties. A selector matches a message if, after substituting
header and property values from the message into the selector string, the string evaluates to true.
Consumers can request that the server deliver only those messages that match a selector.

The syntax of selectors is based on a subset of the SQL92 conditional expression syntax.

Identifiers
Identifiers can refer to the values of message headers and properties, but not to the message body.
Identifiers are case-sensitive.

Basic Syntax: An identifier is a sequence of letters and digits, of any length, that begins with a letter.
As in Java, the set of letters includes _ (underscore) and $ (dollar).
Certain names are exceptions, which cannot be used as identifiers. In particular, NULL, TRUE,
FALSE, NOT, AND, OR, BETWEEN, LIKE, IN, IS, and ESCAPE are defined to have special
meaning in message selector syntax. Identifiers refer either to message header names or property
names. The type of an identifier in a message selector corresponds to the type of the header or
property value. If an identifier refers to a header or property that does not exist in a message, its
value is NULL.

Literals
String Literals: A string literal is enclosed in single quotes. To represent a single quote within a literal,
use two single quotes; for example, 'literal''s'. String literals use the Unicode character encoding and
are case sensitive. An exact numeric literal is a numeric value without a decimal point,
such as 57, -957, and +62; numbers in the range of long are supported.

An approximate numeric literal is a numeric value with a decimal point (such as 7., -95.7, and +6.2),
or a numeric value in scientific notation (such as 7E3 and -57.9E2); numbers in the range of double
are supported. Approximate literals use the floating-point literal syntax of the Java programming
language.

Expressions
Every selector is a conditional expression. A selector that evaluates to true matches the message; a
selector that evaluates to false or unknown does not match. Arithmetic expressions are composed of
numeric literals, identifiers (that evaluate to numeric literals), arithmetic operations, and smaller
arithmetic expressions.

Conditional expressions are composed of comparison operations, logical operations, and smaller
conditional expressions.Order of evaluation is left-to-right, within precedence levels. Parentheses
override this order.
Operators
Operator names are case-insensitive. Logical operators in precedence order: NOT, AND, OR.

Comparison operators: =, >, >=, <, <=, <> (not equal).
These operators can compare only values of comparable types. (Exact numeric values and
approximate numeric values are comparable types.) Attempting to compare incomparable types
yields false. If either value in a comparison evaluates to NULL, then the result is unknown (in SQL
three-valued logic). Comparison of string values is restricted to = and <>. Two strings are equal if and only
if they contain the same sequence of characters. Comparison of boolean values is restricted to = and
<>.
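The NULL/unknown behaviour can be sketched in Python (a simplified model of how selector evaluation treats missing properties, not the EMS implementation; the property names and values are illustrative):

```python
def sel_eq(a, b):
    """Three-valued '=': returns None (unknown) if either side is NULL."""
    if a is None or b is None:
        return None
    return a == b

def matches(result):
    """A message matches only when the selector evaluates to true;
    false and unknown both fail to match."""
    return result is True

msg = {"criticality": "high"}                             # message properties (illustrative)
print(matches(sel_eq(msg.get("criticality"), "high")))    # True
print(matches(sel_eq(msg.get("priority"), 4)))            # False: missing property -> NULL -> unknown
```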

A client cannot use JNDI lookup for dynamic queues

Destination properties:

1. channel: used only with topics; determines the multicast channel over
which the message will be broadcast.
Example: foll.bar channel=mycast
Note: this property cannot be set before the multicast channel is
configured in channel.conf. Also, tibemsd.conf should be updated to
allow the multicast property.

2. exclusive: used only with queues; when set, messages can be retrieved by one and
only one consumer. Other consumers remain in standby mode. Once the primary
consumer goes down, the server picks one of the standby consumers and starts
sending it messages. The queue should not be a global queue.
3. expiration: for both queues and topics; specifies how long
(msec, sec, min, hour, day) messages will be stored in the server. When messages
expire they either get destroyed, or stored in the $sys.undelivered queue if the
JMS_TIBCO_PRESERVE_UNDELIVERED property is set to true.
4. export: allows exporting the messages of a topic to external systems
defined in transports.conf. Ex: export=RV1,RV2
5. flowControl: specifies the maximum size of pending messages the destination
can store for a producer. Once the maximum size is reached, the server
asks the producer to slow down its sending rate. Useful when the message
producer produces messages more quickly than consumers accept them.
flowControl never discards messages or sends errors back to the producer,
unlike overflowPolicy. The size can be specified in KB, MB, or GB; the default
is 256KB. Ex: flowControl=1000KB
6. global: used to make the destination global for routing.
7. import: allows messages to be imported into the EMS server from external
systems if specified in transports.conf.
8. maxbytes: specifies the maximum bytes for a destination. If reached,
new messages will be discarded and an error will be sent to the producer.
9. maxmsgs: specifies the maximum number of messages a destination can hold waiting.
Once exceeded, an error is sent to the message producer and the message is
discarded.
NOTE: with both the maxmsgs and maxbytes limits, the destination can be seen
storing slightly more messages, depending on the pipelining concept of EMS.
10. maxRedelivery: specifies the number of attempts the EMS server should
make to redeliver a message. Values from 2 to 255 can be specified. If
used, set the prefetch property to none. Ex: maxRedelivery=<count>
This can be used when the consumer is not using auto-acknowledgement mode,
e.g. when the Confirm activity is missing after a JMS Queue Receiver. If
maxRedelivery is not set, the message is redelivered to the queue practically
indefinitely (up to around 256/s), resulting in weird behavior of services.
MAXDELIVERYCOUNT specifies the number of times the EMS server has
redelivered the message in the queue.

However, on the other hand, when you use the Wait for JMS activity,
it does not matter what acknowledgement mode you have used, and messages
are not stored once fetched by the Wait for JMS activity.

11. overflowPolicy: describes what happens when the maxmsgs or maxbytes
limit is reached. There are two explicit options, rejectIncoming and discardOld,
plus the default behavior; which applies depends on the type of destination. For
topics, if multicasting is set, the overflow policy depends on the multicast
daemon. The three values for overflowPolicy are:
a) default: for topics, if maxbytes or maxmsgs is reached for a
subscriber, the server stops sending messages to that subscriber and
no error is returned to the producer.
For queues, if maxbytes or maxmsgs is reached, an error is sent back
to the message producer and incoming messages are rejected.
b) discardOld: for topics, if maxbytes or maxmsgs is reached, the oldest
messages are discarded to make room for the new messages.
For queues, the oldest messages are discarded and an error is sent back
to the producer.
c) rejectIncoming: for both topics and queues, if the maximum limit is reached,
new messages will be rejected and an error will be sent to the message
producer.
Example:
setprop queue myQueue maxmsgs=1000,overflowPolicy=discardOld
setprop topic myTopic maxbytes=100KB,overflowPolicy=rejectIncoming

Example: when a queue's limit is exceeded, the producer receives an error like:

<StackTrace>Job-7000 Error in [Process Definition (1).process/Group/JMS Queue Sender]
There was an unexpected error while sending a message.
at com.tibco.plugin.share.jms.impl.JMSSender.send(Unknown Source)
at com.tibco.plugin.share.jms.impl.JMSSender.send(Unknown Source)
at com.tibco.plugin.jms.JMSAbstractTransmitActivity.eval(Unknown Source)
at com.tibco.pe.plugin.Activity.eval(Unknown Source)
at com.tibco.pe.core.TaskImpl.eval(Unknown Source)
at com.tibco.pe.core.Job.a(Unknown Source)
at com.tibco.pe.core.Job.k(Unknown Source)
at com.tibco.pe.core.JobDispatcher$JobCourier.a(Unknown Source)
at com.tibco.pe.core.JobDispatcher$JobCourier.run(Unknown Source)
Caused by: com.tibco.plugin.share.jms.impl.JMSExceptionWrapper:
javax.jms.ResourceAllocationException: Queue limit exceeded

12. prefetch: (the consumer fetches messages from the EMS server before
signaling that it is ready to accept them) specifies the number of messages
a consumer should fetch ahead when there is a large number of pending items
on the destination.
Background: delivering messages from a server destination to a consumer
requires 2 phases, fetch and accept:
fetch is a 2-step process between consumer and server:
the consumer initiates the fetch phase by signaling the server that it is ready
to take messages;
the server responds by transferring one or more messages.
accept phase: client code takes a message from the consumer.
The receive call embraces both phases.
Automatic fetch enabled: used when prefetch is set to a positive integer
value. Can be useful for performance improvement, as a consumer does not have
to wait for the messages.
Automatic fetch disabled: set prefetch=none. Automatic fetch cannot be
disabled for global queues or topics.
Inheritance: the prefetch property can be inherited from parents based on
the below scenarios:
a) when all parents are set to prefetch none, the child will hold the same none
value for prefetch;
b) when parents have integer values set for prefetch, the highest
value is set for the child;
c) when none of the parents has a value set for prefetch, the child will
have the default value of 5.

13. secure: when enabled, instructs the server to check users' permissions
whenever they perform any action on the destination. The authorization
property of the server must be enabled to use this property. secure is
independent of SSL and relates to EMS server authorization.
14. sender_name: if enabled, the message will contain the sender's name.
The sender name is captured by the EMS server when the producer connects,
and the same is passed as sender_name. Note: in some business
scenarios the client may not want to expose the username to
consumers, yet EMS has the property enabled for the destination. To resolve
this situation, the EMS server operator can provide the producer with two
destinations, one where the property is set and one where it is not, so the
producer can use whichever destination it wants.
15. store: determines whether messages sent to this destination are
stored in a file or in a database. Before using setprop or addprop to change
the store property, you need to stop the message flow.
16. trace: specifies that tracing should be enabled for this destination.
Example: trace=body
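Putting a few of the properties above together in admin-tool commands (the queue name and the values here are illustrative assumptions):

```
tcp://localhost:7222> setprop queue orders expiration=30sec,maxmsgs=5000
tcp://localhost:7222> setprop queue orders maxRedelivery=5,prefetch=none
tcp://localhost:7222> setprop queue orders secure,sender_name
```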
create user rishi rishiT
tcp://localhost:7222> set password rishi tanu
Password of user 'rishi' has been modified
revoke queue test.queue rishi receive

TIBCO Enterprise Message Service.


Copyright 2003-2010 by TIBCO Software Inc.
All rights reserved.

Version 5.1.5 V3 3/29/2010

2014-02-27 19:13:51.468 Process started from 'C:\TIBCO\ems\5.1\bin\tibemsd.exe'.

2014-02-27 19:13:51.468 Process Id: 7832
2014-02-27 19:13:51.468 Hostname: PNEITSH54304D
2014-02-27 19:13:51.468 Hostname IP addresse: 10.50.203.138
2014-02-27 19:13:51.468 Detected IP interface: 127.0.0.1 (loopback)
2014-02-27 19:13:51.468 Reading configuration from 'tibemsd.conf'.
2014-02-27 19:13:51.470 Server name: 'E4EMS-SERVER'.
2014-02-27 19:13:51.470 Storage Locations: '.'.
2014-02-27 19:13:51.471 Routing is disabled.
2014-02-27 19:13:51.471 Authorization is disabled.
2014-02-27 19:13:51.480 Accepting connections on tcp://PNEITSH54304D:7222.
2014-02-27 19:13:51.480 recovering state, please wait.
2014-02-27 19:13:51.483 Recovered 3 messages.
2014-02-27 19:13:51.507 Server is active.
2014-02-28 12:08:17.548 ERROR: Slow clock tick 55170, delayed messaging and timeouts may occur.
2014-02-28 12:22:43.297 WARNING: [anonymous@PNEITSH54304D]: create receiver failed: not allowed to create dynamic queue [test1].
2014-02-28 14:50:32.578 ERROR: Slow clock tick 898, delayed messaging and timeouts may occur.
2014-03-02 15:01:36.562 ERROR: Slow clock tick 161638, delayed messaging and timeouts may occur.
2014-03-03 12:12:07.578 ERROR: Slow clock tick 73788, delayed messaging and timeouts may occur.
2014-03-03 14:40:45.481 [admin@PNEITSH54304D]: Authorization has been enabled
2014-03-03 14:41:10.191 [admin@PNEITSH54304D]: created queue 'myqueue': secure
2014-03-03 14:46:06.464 [admin@PNEITSH54304D]: created user 'user_name'
2014-03-03 14:46:59.240 [anonymous@PNEITSH54304D]: connect failed: not authorized to connect
2014-03-03 14:46:59.256 [anonymous@PNEITSH54304D]: connect failed: not authorized to connect
2014-03-03 14:47:20.595 [user_name@PNEITSH54304D]: connect failed: invalid password
2014-03-03 14:47:20.610 [user_name@PNEITSH54304D]: connect failed: invalid password
2014-03-03 14:48:43.561 [admin@PNEITSH54304D]: deleted user 'user_name'
2014-03-03 14:48:46.931 [user_name@PNEITSH54304D]: connect failed: not authorized to connect
2014-03-03 14:48:46.931 [user_name@PNEITSH54304D]: connect failed: not authorized to connect
2014-03-03 14:49:00.426 [user_name@PNEITSH54304D]: connect failed: not authorized to connect
2014-03-03 14:49:00.426 [user_name@PNEITSH54304D]: connect failed: not authorized to connect
2014-03-03 14:49:14.809 [anonymous@PNEITSH54304D]: connect failed: not authorized to connect
2014-03-03 14:49:14.824 [anonymous@PNEITSH54304D]: connect failed: not authorized to connect
2014-03-03 14:51:48.533 [admin@PNEITSH54304D]: created user 'rishi'
2014-03-03 14:52:28.423 [admin@PNEITSH54304D]: deleted user 'rishi'
2014-03-03 14:52:56.144 [admin@PNEITSH54304D]: created user 'rishi'
2014-03-03 14:53:58.481 [admin@PNEITSH54304D]: updated user 'rishi'
2014-03-03 14:54:18.964 [rishi@PNEITSH54304D]: connect failed: invalid password
2014-03-03 14:54:18.964 [rishi@PNEITSH54304D]: connect failed: invalid password
2014-03-03 15:08:24.017 [admin@PNEITSH54304D]: granted user 'rishi' administrative permissions: all

Inheritance of properties and permissions: child destinations inherit all the properties present in the
parent destination.

Bridges: used to send messages across destinations within an EMS server.
Bridges can be created queue-to-queue, queue-to-topic, or vice versa. If queue a.b
is bridged to b.c, both queues receive the messages regardless of whether a.b has
consumed them or not. Bridges are not transitive: if a.b is bridged to
b.c and b.c is bridged to c.d, a message produced on a.b will be
available only in b.c and not in c.d.
Bridges in transactions: if a message is produced for a bridge within a
transaction, then all the consumers become part of the transaction; if any
consumer fails the whole transaction fails, and it succeeds otherwise.
create bridge source=topic:bridtopic target=queue:bridtest
To create a bridge containing selectors, the syntax below can be used. The
selector is configured at the EMS server level, and only messages matching
the selector are transferred to the bridged queue or topic. For example, if I
send 2 messages on tstbrdgsel with selector value high, they are only
bridged to tstbrdselhighcrt, not to the other queues or topics in the bridge.
Purging works independently: if I purge messages on the
tstbrdselhighcrt queue it does not affect tstbrdgsel, or vice versa.
Note: it is always preferable to set up message selectors for bridges in the EMS server so that only selected
messages flow between bridged destinations, avoiding the unnecessary overhead of accepting every message.
Note: in the below configuration, if 2 messages are sent to queue tstbrdgsel with selector value high, you will
see that both queues tstbrdgsel and tstbrdselhighcrt have 2 pending messages. That is because both queues
are persistent, so each of them stores the message. Now assume I create a receiver for the tstbrdselhighcrt
queue with selector value high: both messages will be consumed from the tstbrdselhighcrt queue, whereas the
tstbrdgsel queue will still show 2 pending messages.

tcp://localhost:7222> create bridge source=queue:tstbrdgsel target=queue:tstbrdselhighcrt selector="criticality IN ('high')"
Bridge has been created
tcp://localhost:7222> create bridge source=queue:tstbrdgsel target=queue:tstbrdselmedcrt selector="criticality IN ('med')"
Bridge has been created

Note: to set a particular selector property in the JMS Queue Sender activity, we need to
configure the JMS Application Properties palette so that the property becomes
part of the message header. For example, if I have defined criticality as a property, my
request would look like the below:
<ns0:ActivityInput xmlns:ns0 =
"http://www.tibco.com/namespaces/tnt/plugins/jms">
<JMSExpiration>0</JMSExpiration>
<OtherProperties>
<criticality>med</criticality>
</OtherProperties>
<Body>med critical message</Body>
</ns0:ActivityInput>

The removeprop queue test maxmsgs command is used to remove properties from
a destination.

A destination name cannot exceed 249 characters.
A username cannot exceed 127 characters.
A group name cannot exceed 127 characters.
A client ID cannot exceed 255 characters.
A connection URL cannot exceed 1000 characters.
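These limits can be captured in a small validation helper (a sketch; the limit values come from the list above, but the function and its names are illustrative, not part of EMS):

```python
# EMS naming limits (in characters) from the notes above
LIMITS = {
    "destination": 249,
    "username": 127,
    "groupname": 127,
    "client_id": 255,
    "connection_url": 1000,
}

def check_name(kind, value):
    """Return True if the value fits within the EMS limit for its kind."""
    return len(value) <= LIMITS[kind]

print(check_name("destination", "orders.inbound"))  # True
print(check_name("username", "u" * 128))            # False: one over the 127 limit
```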
autocommit: autocommit [on|off]. When autocommit is on, changes are
written to the configuration files on disk immediately. Regardless of this
property, the EMS server eventually commits all changes to disk by default;
autocommit only specifies when the commit of the configuration files
happens.

Flow control: specifies the maximum size allotted to store pending messages
in the EMS server. The EMS server blocks new messages once the flow
control limit is reached.

SNFGXIBCT: prints information about destination properties, in order:
(S)ecure, se(N)der_name or sender_name_enforced, (F)ailsafe, (G)lobal, e(X)clusive, (I)mport, (B)ridge, flow(C)ontrol, (T)race
- : property not present
+ : property is present and was set on the destination itself
* : property is present and inherited from another destination

Note: the EMS server reads the configuration file only once, when the server starts, so if
we change anything in the config file we need to restart the EMS server for the changes
to take effect.

Fault-tolerant EMS servers:

We can set up EMS servers in fault tolerance so that if one EMS server goes
down, the other server continues to process the messages (EMS does not support
fault tolerance for more than 2 servers).

Step 1: create two EMS servers (e.g. copy the bin directory twice on the same machine).
Step 2: update the properties in tibemsd.conf:
a) server=test (a valid server name)
b) ft_active=tcp://localhost:7222 (the other EMS server's URL)
Note: make sure the EMS server names are the same, otherwise you will see the below
error while restarting the primary server:
2015-01-15 12:22:05.011 WARNING: Unable to initialize fault tolerant connect
Remote server returned 'the primary EMS server name is EMS-SERVER while the
Standby EMS server name is E4EMS-SERVER. The names must be the same'

NOTE: you must have the same connection factories and usernames/passwords
in both EMS servers to create a fault-tolerant system.
Step 3: restart the EMS servers once the above settings are done.
Step 4: after restarting the EMS servers, make sure you see the below messages
in the window:
2015-01-15 13:07:57.660 Accepting connections on tcp://PNEITSH54304D:7222.
2015-01-15 13:07:57.660 Server is in standby mode for
'tcp://pneitsh54406d:7222'

Step 5: create an EMS connection in BW using
tcp://pneitsh54406d:7222,tcp://pneitsh54304d:7222

Step 6: send 2-3 messages to the primary EMS server, then shut it down before
the receiver receives them. Check the secondary EMS server: your queue will
be created automatically and will have those 2-3 messages pending. Once
you connect the receiver, it will pick up the messages from the secondary server.
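A minimal pair of tibemsd.conf fragments for such a setup might look like the sketch below (the server name, hostnames, and ports are illustrative assumptions):

```
# tibemsd.conf on server A (listens on 7222, watches B)
server    = EMS-SERVER
listen    = tcp://7222
ft_active = tcp://hostB:7222

# tibemsd.conf on server B (listens on 7222, watches A)
server    = EMS-SERVER        # must match the primary's server name
listen    = tcp://7222
ft_active = tcp://hostA:7222
```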

Routes: used when the same messages need to be sent across
different EMS servers.
All topics participating in routes must be defined as global.
Zone: a zone is a set of routes; a zone restricts the behavior of its routes.
There are 2 types of zones: 1. one-hop (1hop) and 2. multi-hop (mhop).
1hop: messages are sent only to immediate routes.
mhop: messages are sent to all connected routes that have receivers.

Steps to create routes:

1) Update routes.conf on the servers you wish to connect with routes.
2) E.g.:
[EMS_SERVER_1]
url = tcp://server1:7222
[EMS_SERVER_2]
url = ssl://server2:7243
ssl_verify_host = disabled
3) EMS_SERVER_1 and EMS_SERVER_2 must be the same EMS server names as
mentioned in routes.conf.
4) Create global queues or topics in both EMS servers.
5) Create a JMS connection in BW using all the EMS servers connected by routes.
6) Send a message to one of the EMS servers, and all configured EMS servers
will accept the messages.
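Routes can also be created at runtime from the admin tool; a sketch (the route name, URL, and zone settings below are illustrative assumptions):

```
tcp://server1:7222> create route EMS_SERVER_2 url=tcp://server2:7222 zone_name=myzone zone_type=1hop
tcp://server1:7222> show routes
```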

Palette References: HTTP

HTTP Connection: there can be only one HTTP Receiver or Wait for HTTP
Message on a single port. However, there can be multiple SOAP Event Sources
listening on the same HTTP port. A SOAP Event Source and an HTTP Receiver can
share the same HTTP connection (HTTP port); the HTTP server will take care
of distributing the messages properly.
There are two server types for HTTP:
1. Tomcat: used when the number of requests is high, to maintain the throughput,
and in the request-response paradigm for synchronous messages.
2. HTTPComponent: used when handling thousands of requests in a
resource-efficient way matters more than throughput.
Configuration tab:
Name, Description, Host (machine host name for HTTP), Port, SSL (checkbox),
Server Type (Tomcat or HTTPComponent).
Configure SSL button:
Requires Client Authentication: when checked, the client must present
digital certificates before making a connection to the server.
Trusted Certificates Folder: used when the Requires Client Authentication
checkbox is checked.
Identity, Strong Cipher Suites Only.

Advanced tab:
Enable DNS Lookups: enables DNS lookup for a client so the IP address is
resolved to a DNS name.
Max Post Size: set when the server type is Tomcat. Specifies the maximum size of
POST data that the container's form URL parameter parsing can handle. The default
value is 2MB; 0 disables the limit.
maxSavePostSize: when Tomcat is selected, specifies the maximum data that
can be buffered before login or certificate authentication is done. The default size is
4KB; -1 disables the limit.
URI Encoding: used to set a specific encoding mechanism.
acceptCount: specifies the maximum number of connections that can be accepted
for HTTP while it is in use. The default value is 100.
compressableMimeType: specifies the list of MIME types for which HTTP
compression can be used, e.g. text/html, text/xml, text/plain.
compression: specifies whether the output of the HTTP connection should be
compressed. Values can be ON or OFF.
connectionTimeout: specifies how long a connection waits to be
successful. The default is 60000 msec, i.e. 60 sec.
Custom Properties of HTTP
1. bw.plugin.http.server.minProcessors: - minimum number of threads available
for incoming HTTP requests. Default is 10.
2. bw.plugin.http.server.maxProcessors: - maximum number of threads for
incoming HTTP messages. Default is 75.
3. bw.plugin.http.server.maxSpareProcessors: - maximum number of unused
threads that can exist before the thread pool stops the unnecessary
threads. Default is 50.
4. bw.plugin.http.server.acceptCount: - maximum queue size for incoming
requests. If the queue is full, new incoming requests are refused with an
error. Default value is 100.
5. bw.plugin.http.server.allowIPAddresses: - a comma-separated list of IP
addresses from which connections from remote clients are allowed.
6. bw.plugin.http.server.restrictIPAddresses: - a comma-separated list of
regular expressions used to restrict connections. If an IP address is
listed in both the allow and restrict fields, restrict overrides allow.
7. bw.plugin.http.server.serverType: - can be Tomcat or HTTPComponent.
Default is Tomcat.
8. bw.plugin.http.server.httpComponents.workerThread: - maximum number of web
server threads available to handle HTTP requests for the HTTPComponent
server type. Default is 50.
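As a hedged illustration, these custom properties are typically placed in the deployed application's .tra file; the values shown are examples, not recommendations:

```properties
# Illustrative application .tra fragment (example values only)
bw.plugin.http.server.minProcessors=10
bw.plugin.http.server.maxProcessors=75
bw.plugin.http.server.acceptCount=100
bw.plugin.http.server.serverType=Tomcat
```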

If you set a flow limit for an HTTP receiver process starter, maxProcessors
is set to <flowLimitValue> - 1 and minProcessors is set to <maxProcessorsValue>/2.
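The rule above can be sketched in Python (a hedged illustration; the function name is mine, not a TIBCO API, and integer division is assumed):

```python
def processors_from_flow_limit(flow_limit):
    # Per the rule above: maxProcessors = flowLimit - 1,
    # minProcessors = maxProcessors / 2 (integer division assumed).
    max_processors = flow_limit - 1
    min_processors = max_processors // 2
    return max_processors, min_processors

print(processors_from_flow_limit(100))  # (99, 49)
```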

If domain data is stored in file storage, the Hawk agents take the data from
the Administrator server; this option supports both file and database and uses
RV as transport, hence processing is a little slower.

If domain data is stored in a database, the Hawk agents take the data directly
from the database rather than going through the Administrator. This improves
performance and speeds up login and deployment in TIBCO Administrator.

If application data is stored in a database, then the admin domain data must
also be stored in a database, but the reverse is not true.
TIBCO Domain Utility

Used for below activities: -


1. Machine Management: - a) Add a machine to a domain
b) Join a logical machine to a domain: - used if you are configuring a
domain to work within a cluster
2. Domain Configuration: - a) create new administration domain for existing
Administrator installation
b) Add Secondary server: - to an administrative domain that uses TIBCO RV
for domain communication
c) Delete an administrative domain: - the administrator server and hawk
cluster agent must be shut down before deleting the domain
d) Enable and configure HTTPS for selected domain on the machine domain
utility is running.
3. Server Settings: - a) Change Transport parameters: - on the server where
domain utility is running.
b) Update Domain certificates: -
c) Change Tomcat Web-server Ports
d) LDAP Configuration
e) Database Configuration
f) Miscellaneous
4. Migration: - a) Upgrade domain to 5.7
5. TIBCO EMS Plugin: - a) Add TIBCO EMS Server b) Delete TIBCO EMS Server c)
Update TIBCO EMS Server
6. Servlet Engine Plugin: - a) Add Servlet Engine b) Remove Servlet Engine
c) Update Servlet Engine

Script deployment using AppManage and buildear

AppManage is used for: -


1. Deploying an application with a config file
2. Undeploying
3. Exporting the configuration file (GV file) from an application
4. Starting and stopping applications
5. Deleting an application
6. Creating a deployment config file from an EAR
7. Uploading an EAR only, uploading an EAR with the default config file, or
uploading an EAR with a modified config file
8. Batch export, deploy and undeploy

C:\TIBCO\tra\5.7\bin>appmanage -undeploy -app MyApp/service -user test -pw test -domain mydomain
Checking if master server is responding ...
Finished checking
Initializing ...
Finished initialization
Undeploying application ...
Finished undeploying application
Finished successfully in 29 seconds

C:\TIBCO\tra\5.7\bin>appmanage -deploy -app MyApp/service -user test -pw test -domain mydomain
Checking if master server is responding ...
Finished checking
Initializing ...
Finished initialization
Deploying application ...
Instance service created successfully
Finished deploying application
Finished successfully in 19 seconds
2014-03-25 12:31:03 RV: TIB/Rendezvous Error Not Handled by Process:
{ADV_CLASS="ERROR" ADV_SOURCE="SYSTEM" ADV_NAME="DATALOSS.INBOUND.BCAST" ADV_DES
C="dataloss: remote daemon already timed out the data" host="10.50.203.193" lost
=2600 scid=7474}
2014-03-25 12:31:03 RV: TIB/Rendezvous Error Not Handled by Process:
{ADV_CLASS="ERROR" ADV_SOURCE="SYSTEM" ADV_NAME="DATALOSS.INBOUND.BCAST" ADV_DES
C="dataloss: remote daemon already timed out the data" host="10.50.203.193" lost
=2600 scid=7474}
2014-03-25 12:31:04 RV: TIB/Rendezvous Error Not Handled by Process:
{ADV_CLASS="ERROR" ADV_SOURCE="SYSTEM" ADV_NAME="DATALOSS.INBOUND.BCAST" ADV_DES
C="dataloss: remote daemon already timed out the data" host="10.50.203.193" lost
=11 scid=7474}
2014-03-25 12:31:04 RV: TIB/Rendezvous Error Not Handled by Process:
{ADV_CLASS="ERROR" ADV_SOURCE="SYSTEM" ADV_NAME="DATALOSS.INBOUND.BCAST" ADV_DES
C="dataloss: remote daemon already timed out the data" host="10.50.203.193" lost
=11 scid=7474}

C:\TIBCO\tra\5.7\bin>appmanage -undeploy -app MyApp/service -cred "C:\Rishi-docs\Sprints KDDI\login.txt" -domain mydomain
Checking if master server is responding...
Finished checking
Initializing ...
Finished initialization
Undeploying application ...
Finished undeploying application
Finished successfully in 27 seconds

C:\TIBCO\tra\5.7\bin>buildear -s -ear /service1.archive -o c:/Rishi/abc.ear -p "C:\New folder\BWprojects\service"
Starting up...
Enterprise Archive File has built correctly.
Ear created in: C:\Rishi\abc.ear

C:\TIBCO\tra\5.7\bin>appmanage -export -ear C:/Rishi/abc.ear -out C:/Rishi/conf.xml
Initializing...
Finished initialization
Exporting application configuration...
Finished exporting application
Finished successfully in 0 seconds

TIBCO Administrator user documentation

There are 2 parts of TIBCO Administrator: 1. the Administrator GUI, used to
manage users and applications, and 2. the administration server, used to
manage resources.

Max Jobs and Flow Limit

Consider these settings for a process with a starter activity, say a JMS
receiver:
Flow limit =100
Max jobs =25

There are 200 messages on the receiving queue before the process starts.
Once the process starts, what will the statistics be?
1)
Jobs in memory: 25
Jobs paged to disk: 100
Messages waiting on the queue: 75

or

2)
Jobs in memory: 25
Jobs paged to disk: 75
Messages waiting on the queue: 100

Thread Count is the number of BW worker threads, each of which is entitled to run
a job that is scheduled for execution, i.e. in the run queue. Having a thread does
not mean it runs: that depends on OS scheduling, which in turn depends on the
number of available hardware threads (cores plus hyper-threading) and on the
state of the process (it could be blocked on IO, for example). So strictly
speaking even the BW worker threads only run virtually concurrently; how many run
physically concurrently is a matter of hardware and OS scheduling.

Max Jobs is the number of jobs that can be in memory and that are also entitled
to run. They may not all run physically at the same time, since not all of them
can have a BW worker thread in this example, but they run logically concurrently:
BW has its own scheduling, which makes a job give up its BW worker thread when it
reaches an asynchronous activity or has executed too many steps in sequence. In
that respect it is not much different from the OS scheduling threads on
processors.

For the JMS receiver, the acknowledgement mode also comes into play. For CLIENT
ACK the number of sessions determines how many jobs can be created concurrently
(measured from start to the Confirm activity, which is usually at the end
anyway). This means you do not have to set maxJobs or flowLimit. Now suppose you
use EXPLICIT ACK: then maxJobs and flowLimit again matter as the only way to
bound concurrency, and thus memory footprint and other resource consumption. In
this case flowLimit=100 will suspend the BW process starter once 100 jobs have
been created but not yet terminated. It resumes job creation only once half of
them (=50) have run to completion. maxJobs=25 results in 75 of those 100 jobs
being paged to disk, which is not a good idea (page-out and page-in cost) unless
the jobs are large and so long-running that they all must be started. That is
with activationLimit set to false.

With Activation Limit set to true, BW would stop creating jobs once it reached
maxJobs = 25, so only 25 would be in memory at any time and the higher
flowLimit would be meaningless.
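The worked example above (200 queued messages, flowLimit=100, maxJobs=25, activationLimit=false) can be checked with a small sketch; the function and field names are mine, a simplified model rather than BW internals:

```python
def job_statistics(queued_msgs, flow_limit, max_jobs):
    # flowLimit caps the number of jobs created but not yet finished.
    jobs_created = min(queued_msgs, flow_limit)
    # maxJobs caps the number of jobs held in memory; the rest are paged.
    in_memory = min(jobs_created, max_jobs)
    paged_to_disk = jobs_created - in_memory
    waiting_on_queue = queued_msgs - jobs_created
    return {"in_memory": in_memory,
            "paged_to_disk": paged_to_disk,
            "waiting_on_queue": waiting_on_queue}

print(job_statistics(200, 100, 25))
# {'in_memory': 25, 'paged_to_disk': 75, 'waiting_on_queue': 100}
```

This reproduces option 2 from the question: 25 jobs in memory, 75 paged to disk, 100 messages still waiting on the queue.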

Web Services Protocol Choices

Consider using SOAP over HTTP for:

Externally facing web services (e.g. customers or suppliers)


For simple point-to-point and stateless services
Where you need a thin client with no MOM installations

Consider using SOAP over JMS for:

High-volume distributed messaging


Asynchronous messaging
Where a transaction boundary is needed in the middleware
Where the message consumers are slower than the producers
Guaranteed delivery and/or once-only delivery of messages
Publish/subscribe
Distributed peer systems that might at times be disconnected

Note also that you may not need the SOAP envelope inside the firewall if there are no WS policies to communicate.
For example, you could switch to content-based routing (message routing based on the content of the message)
inside the firewall.

Checkpoint activity: - used to save the state of a BW process. All the activities up to the checkpoint can be
recovered. Recovery is done with the Engine Command activity: GetRecoverableProcesses lists the recoverable
jobs, and RestartRecoverableProcess restarts a failed process from its checkpoint. When a process containing a
checkpoint fails, a jobid.xjob file is created on the system where the code is deployed, under
C:\TIBCO\tra\domain\mydomain\application\checkpnt\working\checkpnt-Process_Archive\jdb\checkpnt-
Process_Archive\jobid.xjob. If the checkpoint is not created, check the bw.engine.enableJobRecovery=true
property in the application .tra file, as it is set to false by default. One can also specify a custom checkpoint
recovery path by setting the Datamanager.Directory=<> property in the application TRA.
By default the checkpoint information is saved in a file, but you can also store it in a database. The option is
available at the PAR level on the Advanced tab of your application. If you use a JDBC connection in your BW
project, the same connection can also be used for storing checkpoint data.
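A hedged sketch of the relevant .tra entries described above (the directory path is a made-up example, not a default):

```properties
# Enable job recovery so .xjob checkpoint files are written (default is false)
bw.engine.enableJobRecovery=true
# Optional custom checkpoint recovery directory (hypothetical path)
Datamanager.Directory=C:/tibco_checkpoints
```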

We use checkpoints in scenarios where duplicate processing is expected and we don't want the activities before
the checkpoint to execute more than once. In scenarios where re-executing the activities before the checkpoint is
acceptable, we should prefer using EMS acknowledgement modes to redeliver the messages.

You can set below properties for logging in detail level for TIBCO admin.
Trace.Startup=true

Trace.Task.*=true

Trace.JC.*=true

Trace.Engine=true

Trace.Debug.*=true

in the deployed .tra file, then restart the application and run it.
Then check the detailed log file located in the <install-path>\TIBCO\tra\domain\application\logs folder.

Please keep in mind that all manual settings are cleared after a redeploy. To make them permanent, set them in
the bwengine.tra file in the <install-path>\TIBCO\bw\<version>\bin folder.


Sequencing in TIBCO bw:-


Sequencing in BW is used for the sequential execution of process instances. Sequencing applies to all process
starter activities and is available on the Misc tab.
The sequencing key is available in most process starter activities. It is used to control the concurrent execution
of process instances of a process definition.
E.g. assume there is a process that updates the balance of an account. In this process definition, the sequencing
key can be the "Account Id". Now, if there are multiple concurrent requests to update the balance of the same
account id, those requests will be processed one by one sequentially instead of concurrently creating many
process instances for each of them.

An example of the same is shown below in a File Poller that polls files with the .txt extension sequentially. It
also processes files with other extensions, but only once all the .txt files are processed.
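The effect of a sequencing key can be sketched as follows: events sharing a key are processed in their original order, while different keys may interleave. This is an illustrative model with made-up names, not BW internals:

```python
from collections import defaultdict, deque

def run_with_sequencing_key(events, key_of):
    # Queue events per sequencing key; within a key, order is preserved
    # and only one "job" per key is notionally active at a time.
    lanes = defaultdict(deque)
    for ev in events:
        lanes[key_of(ev)].append(ev)
    processed = []
    while any(lanes.values()):
        for key in list(lanes):  # round-robin across distinct keys
            if lanes[key]:
                processed.append(lanes[key].popleft())
    return processed

events = [("acct1", 10), ("acct2", 5), ("acct1", -3)]
order = run_with_sequencing_key(events, key_of=lambda e: e[0])
# Updates for acct1 keep their original relative order.
```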
A BW project can contain multiple EARs, and one EAR can contain multiple PARs.
We can modify GVs in Designer and while deploying in the Admin UI. We can also modify a GV directly in
<<app>>.tra. Syntax: tibco.clientVar.<variablePathAndName> <value>.
After changing the GV in the .tra we only have to restart the application. This is a workaround to avoid
redeployment while debugging an issue.
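A sketch of overriding a GV in the deployed application's .tra file; the variable path and value here are hypothetical examples:

```properties
# TRA property format: tibco.clientVar.<variablePathAndName> <value>
tibco.clientVar.Connections/JMS/ProviderURL tcp://emshost:7222
```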

Each time you run Domain Utility on the same machine it increments the port number by 10. We can have
multiple domains on one machine.
TIBCO EMA (Enterprise Management Advisor): - helps minimize resource availability issues and resource
exception handling.
An EAR contains local project resources, Library Builder resources and alias library resources.
designer.ear.watermark.size=16 is the default EAR size watermark specified in the designer.tra file. If the size
is exceeded, a warning message is shown. We can customize this value though.
A shared archive automatically contains all the resources used by process definitions, e.g.
schemas, JMS connections, JDBC connections etc.
BW engines can be configured for fault tolerance in a peer (master) and secondary relationship. Once the
master engine fails, the secondary engine takes over.

Wait, Notify, Receive Notification (starter): - the Notify activity sends a notification message to either a Wait or a
Receive Notification activity having the same key. If no activity with the Notify activity's key is active, the message
and the data sent by the Notify activity are stored on the server until an activity becomes active to access them. A
timeout value can be defined in the Notify and Wait activities to control how long the notification data is stored.

JMS Queue Receiver: -


Acknowledgement Mode: -
1) Auto: - messages are acknowledged automatically once they are received.
2) Client: - messages are acknowledged at a later point in time by the Confirm activity. When a message
is not confirmed, a new job is created for the process and the message is redelivered. An extra Max
Sessions option is available with client acknowledgement.
3) EMS Explicit Client: - messages are acknowledged at a later point in time using the Confirm activity. The
sessions are not blocked, and one session handles the incoming message for each process instance. When
a message is not confirmed, a new job is created for the process and the message is redelivered.
4) Dups OK: - messages are acknowledged automatically once received. JMS provides this for lazy
acknowledgement.
5) Transactional: - used when JMS messaging is used inside a transaction. Acknowledgement is sent once the
transaction is committed.
6) Local Transactional: - used when only JMS messages are involved in the transaction.

Message Delivery Modes: -


1. Persistent: - messages are stored and forwarded. An acknowledgement is sent to the message producer
confirming that the EMS server received the message.
2. Non-persistent: - messages are not stored and are lost in case of server failure. Whether an
acknowledgement is sent to the producer depends on the EMS server configuration.

3. Reliable-delivery: - the producer just sends messages without receiving any notification from the server
about messages being received or failed.

Service Activity: -

A service implements one or more interfaces over one or more transports. A service provides an abstraction so that
you can build a more generic SOA. It also allows you to decouple a service from its underlying transport and
implementation.

Context resource: - allows defining a schema for the context data of an operation. A service can contain multiple
operations, each with its own context resource. You can use the Get Context and Set Context activities to get and
set context values inside a process.

Observation 1: - while creating a service using a JMS connection, the Use JNDI checkbox must always be selected,
or else Designer throws an error.
2. When creating multiple parts in a WSDL message, Designer does not allow two parts defined by type, or a
part defined by type mixed with parts defined by element. A message can have multiple parts only if all of
them are defined by element; a part defined by type must be the only part.
3. We can have multiple document parts if we change the style from the default (Document) to RPC.
4. Note: - if we configure a service activity with SOAP over JMS and select the Client Explicit acknowledgement
mode, all messages sent to the service are still acknowledged automatically at the end of the service
implementation, regardless of the Client Explicit acknowledgement mode.

Difference between the Wait for JMS Message and Get JMS Queue Message activities
1. Wait for JMS Message starts listening on the EMS server the moment the BW engine starts, whereas Get JMS
Queue Message browses only once the process instance is instantiated.
2. Get JMS Queue Message retrieves one message at a time from the EMS server (if no message selector is
defined), but Wait for JMS Message can retrieve more than one message (depending on the prefetch value).
3. Wait for JMS Message contains an extra tab for the message event.
4. Get JMS Queue Message performs a receive operation on the queue, as opposed to Wait for JMS Message,
which works on delivered messages.
Message type. This can be one of the following:
Simple: - a message with no body portion.
Bytes: - a stream of bytes.
Map: - a set of name/value pairs. The names are strings, and the values are simple data types (Java
primitives), an array of bytes (use the Binary datatype when mapping this data), or a string. Each item
can be accessed sequentially or by its name.
Object: - a serializable Java object.
Object Ref: - an object reference to a Java object.
Stream: - a stream of Java primitives, strings, or arrays of bytes. Each value must be read sequentially.
Text: - the message is a java.lang.String.
XML Text: - the message is XML text.

Adapters: -
Adapters enable communication between different systems in an organization that cannot communicate with
each other directly and would otherwise require coding effort to integrate.

Adapters have 4 major functionalities: -


Publication, subscription, request-response and request-response invocation.
ADB adapters: - ADB adapters are used to connect to databases that support the JDBC and ODBC standards, like
Oracle and MS SQL.

While configuring ADB adapters we need 2 major tables: one from which publication should start (the source
table) and one where publication happens (the publishing table).
The publishing table contains all the columns present in the source table plus a few extra columns that carry the
adapter's bookkeeping information, for example: -
ADB_L_DELIVERY_STATUS: - contains the value N once a record from the source table is put into the publishing
table.
ADB_OPCODE: - contains the operation code for the record (e.g. insert, update or delete).

Invoke Partner Activity: -


Used to invoke external services over SOAP. In other words, it provides an abstraction of the actual service
implementation from web service invokers. The service implementing invoke partners can be considered a proxy
service as well.

Configuring Invoke Partner: -


1. Configure a service and save the concrete WSDL out of it.
2. Drag the Partner Link Configuration activity and select the WSDL on it. Once you browse for the WSDL you
will see two port type hierarchies containing the same operation, one under AMX and one under SOAP. Select
the port type under the SOAP hierarchy only.

Ex:-

3. Make sure the configuration has the above value and EndpointType is SOAP. You can optionally edit the name
and endpoint URL.
4. Drag a process definition and, on the Partners tab, select the partner link configuration (the partner link
configuration is only available here if configured properly).

Ex:-
5. Drag a Service activity and configure it with another abstract WSDL. For the implementation process of an
operation, select the process definition defined in step 4. You will notice that the Invoke Partner tab already
contains the required partner information.

Run your service (step 5) by invoking the operation, and the partner will be invoked.

Cluster (in TRA)


A cluster is a group of machines that work together as a single system to ensure that applications and resources
are available all the time. The machines are managed as a single system, thus providing fault tolerance, high
availability, scalability and so on. The machines in a cluster use a common shared drive so that one of the
machines is always up and serving requests.

There are mainly 2 types of applications for clusters:

1. Cluster-aware: - the application knows a cluster is running on the machine.

2. Cluster-unaware: - the application doesn't know that a cluster is running on the machine. The cluster software
takes care of failover scenarios in this case. All TIBCO applications fall under cluster-unaware applications:
they run seamlessly in cluster environments but do not use any cluster features.

High availability modes: -

When TIBCO applications are installed in a cluster, high availability is best characterized as warm-standby. There
are 3 major categories of high availability: -

1. Hot-standby: - applications on the backup server come up with very minimal downtime, even approaching zero
downtime. With hot-standby, two processors use hardware checkpoints to verify synchronization after each CPU
instruction.

2. Warm-standby: - applications on the backup server come up with a little downtime.

3. Cold-standby: - applications on the backup server take a little more time than in warm-standby mode. The
backup server's applications need to be started manually, since they don't have a way of knowing about failures.

Basic cluster terminologies: -

Node: - server member of a cluster

Resource: - hardware/software in cluster such as IP, disk etc

Group: - combination of resources that are managed as a unit of failover. Groups are also known as resource groups,
cluster packages or cluster groups.

Dependency: - an alliance between two or more resources in cluster architecture.

Failover/failback: - process of moving resources from one server to another.

Active/active: - applications exists in 2 or more servers and actively serving clients

Active/passive: - applications exist in 2 or more servers but only one of them is serving the client and others waiting on
standby.

Rolling upgrade: - software on the cluster nodes is upgraded one node at a time, so one of the nodes is always
processing client requests.

Shared storage: - refers to external SCSI or fibre-attached storage, used in multi-node cluster environments.
Although the nodes use shared storage, only one node accesses the external storage at a given point in time.

About Java Palettes

If you refer to any JAR file in Designer, make sure that the BW JRE version and the Java version the JAR was
compiled with are the same. Otherwise classes and methods will not be available when you browse the JAR
from BW Designer.

Variables in BW:-

Global variables: - used to set constant values in the project; values cannot be changed at runtime. Used mainly
for connection parameters and values that remain the same during project execution. There are 2 checkboxes
available while creating GVs: Service and Deployment. If Deployment is checked, the GV can be changed in the
Admin UI after deployment; otherwise the GV is not visible. The same goes for the Service checkbox: if checked,
the GV is visible at the service level in the Admin UI, otherwise not.
Process variable: - a user-defined variable whose scope is within the process only. The value is modified by the
Assign activity and is available to all activities within the process.

Job shared variable: - the scope is each job. The Get Shared Variable and Set Shared Variable activities are used
to get and set its value. Consider a process A calling a process B: if B is called inline (not spawned), the updated
value of the JSV is reflected in process A; the value is not reflected if B is spawned.

Shared variable: - the scope is across the project and processes, even across engines. While creating a shared
variable we can configure it to support multiple engines. All processes within the project can modify the value of
the shared variable, and each one accessing it gets the updated value. The Get Shared Variable and Set Shared
Variable activities are used to get and set its value.

Document Vs RPC encoding in service activity


I did a POC for this and clarified it. The main difference is the structure of the SOAP body.
Here is a sample snippet.
RPC style:
<soapenv:Body>
<ns:concat> <!-- this indicates operation name being invoked -->
<part1>1</part1>
<part2>2</part2>
</ns:concat>
</soapenv:Body>
In RPC style, input/output/fault messages can have multiple parts.
The SOAP body will have a child element named after the operation (e.g. "concat").
All the parts are bundled under <concat> in the SOAP body.
Document style:
In Document style, input/output/fault messages cannot have multiple parts, otherwise validation by the service
agent will fail.
They should have only one part. In the example below, <Request> under the SOAP body is the part defined in the
input message.
But this does not indicate which operation is invoked, unlike RPC style.
<soapenv:Body>
<imas:Request>
<imas:Part1>1</imas:Part1>
<imas:Part2>1</imas:Part2>
<imas:Part3>1</imas:Part3>
<imas:Part4>1</imas:Part4>
</imas:Request>
</soapenv:Body>
1. In the document style, the SOAP message is sent as a single document, whereas in the
RPC style the SOAP body may contain several elements.
2. The document style is loosely coupled whereas RPC is tightly coupled.
3. In the document style, the client sends the service parameters in plain XML format,
whereas in the RPC style the parameters are sent as discrete values.
4. The document/literal style loses the operation name in the SOAP message, whereas
the RPC/literal style keeps the operation name in the SOAP message.
5. In the document/literal style, messages can always be validated using any XML
validator, whereas in the RPC/literal style the transferred data is difficult to validate from
the SOAP message.

Read more: Difference Between RPC and Document | Difference Between | RPC vs Document
http://www.differencebetween.net/technology/protocols-formats/difference-between-rpc-and-
document/#ixzz3WVoVnbqh

BusinessWorks Certificate Path Validation Process


The underlying security library for BusinessWorks (BW) performs path validation on the sequence of
certificates that is presented by the server. These can be classified into the following set of checks:
Name chaining
Key chaining
Duplicate certificates
Syntax check
Integrity check
Validity period check
Criticality check
Key usage check
Below is a complete and ordered set of sample certificates that an SSL-enabled server would ideally present to
BW. In real life, however, one must realize that some SSL-enabled servers may send unordered and/or
incomplete certificate chains.
Certificate 0 [Server Certificate]
subject:
/C=US/ST=california/L=palo alto/O=xyz/OU=xyz/OU=Terms of use at www.veri
sign.com/rpa (c)00/OU=For Intranet Use Only/CN=xyz.com
issuer:
/C=US/O=VeriSign, Inc./OU=VeriSign Trust Network/OU=Terms of use at https:/
/www.verisign.com/rpa (c)03/CN=VeriSign Class 3 Secure Intranet Server CA
Certificate 1 [Intermediate Signer Certificate]
subject:
/C=US/O=VeriSign, Inc./OU=VeriSign Trust Network/OU=Terms of use at https:/
/www.verisign.com/rpa (c)03/CN=VeriSign Class 3 Secure Intranet Server CA
issuer:
/C=US/O=VeriSign, Inc./OU=Class 3 Public Primary Certification Authority -
G2/OU=(c) 1998 VeriSign, Inc. - For authorized use only/OU=VeriSign Trust Network
Certificate 2 [Root Signer/Trust Anchor]
subject:
/C=US/O=VeriSign, Inc./OU=Class 3 Public Primary Certification Authority -
G2/OU=(c) 1998 VeriSign, Inc. - For authorized use only/OU=VeriSign Trust Network
issuer:
/C=US/O=VeriSign, Inc./OU=Class 3 Public Primary Certification Authority -
G2/OU=(c) 1998 VeriSign, Inc. - For authorized use only/OU=VeriSign Trust Network
Schematically, the hierarchy above can be expressed with the following example:

Name Chaining
The process of name chaining involves comparing the value of the subject field in one certificate to the issuer
field in the subsequent certificate. In this case, the SSL-enabled server would present an ordered list of
certificates (i.e. leaf certificate > intermediate certificate > root certificate). In cases where the certificates
received are not in order, BW would internally rearrange the order such that the 0th certificate will be the leaf
certificate, the 1st certificate will be the intermediate signer, and will traverse the certificate hierarchy all the
way up to a trust anchor (the self-signed, root CA certificate). The comparison between the subject and
issuer fields is made between adjacent certificates in order to make sure there is no broken link until a self-
signed root certificate is reached.
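Name chaining can be sketched as follows, representing each certificate as a dict of subject and issuer names; this is a hedged illustration of the check, not the BW security library's actual code:

```python
def name_chain_ok(chain):
    """chain[0] is the leaf; chain[-1] should be the self-signed root."""
    for cert, signer in zip(chain, chain[1:]):
        # Each certificate's issuer must match the next certificate's subject.
        if cert["issuer"] != signer["subject"]:
            return False
    # The trust anchor must be self-signed (subject == issuer).
    root = chain[-1]
    return root["subject"] == root["issuer"]

chain = [
    {"subject": "CN=xyz.com",      "issuer": "CN=Intranet CA"},
    {"subject": "CN=Intranet CA",  "issuer": "CN=Class 3 Root"},
    {"subject": "CN=Class 3 Root", "issuer": "CN=Class 3 Root"},
]
print(name_chain_ok(chain))  # True
```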

Key Chaining
Key chaining involves checking that the key certified in each certificate verifies the digital signature on the
subsequent certificate. This check establishes cryptographic trust from the relying party's CA certificate (i.e.
the trust anchor) to the public key contained in the server certificate. In our sample certificate chain, the key
chaining would resemble the following:

Duplicate Certificates
A misconfigured SSL-enabled server could possibly send duplicate certificates in the certificate chain. If the
path includes two certificates that have matching issuer names and identical serial numbers, then the certificates
would be considered duplicates. This check ensures that a duplicate certificate does not adversely alter the
outcome of proper path validation, e.g. when checking the "pathlen" (path length) constraint.
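The duplicate test above (matching issuer names plus identical serial numbers) can be sketched as follows; the data structures are illustrative:

```python
def find_duplicates(chain):
    # Certificates with the same (issuer, serial) pair count as duplicates.
    seen, dups = set(), []
    for cert in chain:
        key = (cert["issuer"], cert["serial"])
        if key in seen:
            dups.append(cert)
        seen.add(key)
    return dups

chain = [
    {"issuer": "CN=CA", "serial": 100},
    {"issuer": "CN=CA", "serial": 100},  # duplicate of the first entry
    {"issuer": "CN=CA", "serial": 101},
]
print(len(find_duplicates(chain)))  # 1
```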

Syntax Check
The certificate syntax has been defined in the X.509 profile. It includes a set of base certificate fields and
extensions that allows additional information to be transmitted. Each certificate extension has its own syntax. A
standard set of extensions is specified in the X.509 and PKIX profiles. The syntax of all base certificate fields as
well as all known extensions must be checked to ensure that it conforms to the standards. If unknown
extensions are present, their syntax need not be checked.

Integrity Check
It is necessary to verify the integrity of each certificate, as it is possible for the certificate content to be altered
and replayed back to the receiving party. This check involves verifying the digital signature on the certificate. If
the signature on any certificate in the path fails to verify, then the certificate must be corrupted and therefore
should not be trusted.

Validity Period Check


Each certificate has a validity period, i.e. the time period during which the certified public key contained within
the certificate is considered valid for use. Generally, the certificate validity period is compared with the current
system time of the host machine. The validity period is represented as a SEQUENCE of two dates: the date
on which the certificate validity period begins (notBefore) and the date on which the certificate validity period
expires (notAfter). For example:
Validity:
Not Before: May 13 00:00:00 2004 GMT
Not After: May 12 23:59:59 2014 GMT
The Windows OS certificate viewer does not present the time in notBefore and notAfter format; it shows
"Valid from" and "Valid to" dates instead.
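A sketch of the validity period check using the sample dates above; a simplified illustration of the comparison, not a full X.509 implementation:

```python
from datetime import datetime, timezone

def validity_ok(not_before, not_after, now=None):
    # The current time must fall inside [notBefore, notAfter].
    now = now or datetime.now(timezone.utc)
    return not_before <= now <= not_after

not_before = datetime(2004, 5, 13, 0, 0, 0, tzinfo=timezone.utc)
not_after = datetime(2014, 5, 12, 23, 59, 59, tzinfo=timezone.utc)
print(validity_ok(not_before, not_after,
                  now=datetime(2010, 1, 1, tzinfo=timezone.utc)))  # True
```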

Criticality Check
All certificate extensions have an indication as to whether their processing is critical to the acceptance of a
given certificate. The criticality flag provides a backward/forward compatibility mechanism which enables CAs
to state what should happen when a relying party that does not yet support the new extension encounters it. If an
extension is flagged critical but the path validation process of the relying party does not support the extension,
then the certificate cannot be used. Unknown, non-critical extensions can safely be ignored and other processing
continued. This process prevents relying parties from trusting a certificate under conditions for which its issuer
did not intend. Often the policy under which a certificate was issued will free its issuer from liability if the
certificate is not used per the issuer's specific policy, including the processing of all critical extensions in the
certificate. For instance, if the Basic Constraints extension is marked critical, then the relying party must
process it and will know whether this certificate is a CA certificate that may have subordinate certificates
signed under it. Moreover, if the pathlen attribute is present, it indicates the number of chain levels that may
appear below that CA certificate.

Key Usage Check


The keyUsage extension is a mechanism used by the issuing CA to indicate the general use for the public key.
For the server certificate this extension may be set to any one of several values in accordance with RFC 3280,
including "digitalSignature", "nonRepudiation", "keyEncipherment", "dataEncipherment", "keyAgreement",
"keyCertSign", "cRLSign", "encipherOnly", and "decipherOnly". For example, if the key can be used for client
authentication, the keyUsage extension would have the digitalSignature indicator set. If this is the only
indicator set, such a certificate would not be appropriate for encrypting data for that subscriber. All other
certificates in the path must have the "keyCertSign" indicator set in their "keyUsage" extension, indicating that
these certificates can be used to verify the subject CA's signature on subsequent certificates.
Once all checks are performed, one can assume that the certificate chain has been cryptographically verified. If
the chain verification is successfully completed, then we must decide if the chain of certificates in hand should
be trusted. Making such a decision requires that a complete list of the CA certificates, from the trust anchor to
the signer of the end entity certificate, be uploaded into BW's trusted certificates folder. BW, then, will load
the certificates as a list in its memory during runtime and determine if the certificates presented in the chain are
trustworthy. If the presented chain is incomplete, BW will try to complete the chain based on the certificates
that have been uploaded to its trusted certificates folder.
[Note: Refer to ARC KB107454 -- Common_Errors_in_BW_related_to_SSL.pdf -- for descriptions of all
SSL-related errors thrown by BW.]

Catch, Rethrow, $_error and Generate Error (Exception handling)


It is a suggested practice to handle errors and exceptions as close as possible to where they occur (in the same
BW process) where the greatest amount of information about the exception is available. It is also a good
practice to implement a "Catch" activity rather than trying to explicitly handle every possible exception in each
activity. In this way, you can be confident of being able to handle any exception regardless of where it occurs.
Consider a scenario where BW process A calls BW process B. Assuming you have properly configured a
"Catch" activity in your BW code in process B, you will be able to catch any exceptions and handle them
appropriately. At the termination of your exception handling you will have a transition to the "End" activity in
process B. Depending on the interface implementation between process A and process B, as well as the
business logic that has been implemented, once an exception is caught and handled in process B, the calling
process may not be aware that an exception was incurred within the called process. There may however be
times when you wish to propagate the exception up from process B to process A after handling the exception.
In this scenario, the Rethrow activity is helpful.
We can also propagate the error to the parent process using the 'Generate Error' activity instead of Rethrow.
The only difference between Rethrow and Generate Error is that in Generate Error we must map the 'msg' and
'msgcode' fields, whereas Rethrow takes no input.
TIBCO creates an $_error element for the exception that is caught; this element gives details about the
exception such as msg, msgcode, stack trace, process stack, etc.
Generate Error Activity: It is mainly used to implement user-defined exceptions. Let's discuss one simple
scenario to understand the usage of this activity. Assume we must implement a rule where we only allow
a person of age <= 60 (Age is an Integer-type variable). Now we must generate an error when we receive an
age field above 60.
I pass 61 in the input, and the system should not allow this value to be processed. But in normal execution the
system accepts this value because 61 is a valid Integer. Here comes the concept of a user-defined exception.
We will add a condition that checks whether age > 60 and, if so, raises an error/exception. To achieve
this, we can use the Generate Error activity. Here we can add suitable messages for the source, i.e. "Invalid age"
or "age should be less than or equal to 60".
On the other hand, the Catch activity is used to handle errors/exceptions at runtime. The error in the above
example can be handled by using a Catch activity. Here we can add a suitable process flow, which informs the
sender/source about the violation of the rule (Invalid Data: Age > 60).
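The Generate Error / Catch pattern above maps directly onto ordinary exception handling. As a plain-Java analogy (the InvalidAgeException class and the messages are from the scenario above, not a BW API):

```java
public class AgeValidation {
    static class InvalidAgeException extends Exception {
        InvalidAgeException(String msg) { super(msg); }
    }

    // Analogous to Generate Error: raise a user-defined error when the rule is violated.
    static void validateAge(int age) throws InvalidAgeException {
        if (age > 60) {
            throw new InvalidAgeException("Invalid Data: Age > 60, got " + age);
        }
    }

    public static void main(String[] args) {
        try {
            validateAge(61);          // 61 is a valid Integer, but violates the business rule
        } catch (InvalidAgeException e) {
            // Analogous to the Catch activity: inform the sender/source about the violation.
            System.out.println(e.getMessage());
        }
    }
}
```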
XPATH shortcuts
Line feed (New Line) = "&lf;"
Tab = "&tab;"
Carriage return = "&cr;"
Carriage return + line feed = "&crlf;"
Non-breaking space = " "
You need to log in to the EMS administration command-line tool, then do the following:
1. show config (shows the settings from the config files; you can see the max_msg_memory value there)
2. The command to change the memory value is:
==> set server max_msg_memory=2048MB
or
set server max_msg_memory=2GB
Hope this helps you.
Env: Linux RedHat 4 Enterprise 64-bit / BW 5.8 / BE 3.0.2 / TRA 5.6.2
TIBCO support and I solved my issue.
When you deploy an application into a BE container, you must set these variables:
- TIBCO.env.CUSTOM_EXT_PREPEND_CP
- TIBCO.env.CUSTOM_EXT_APPEND_CP
Please RTM if you want more information.
With only those modifications, the BE engine will throw this kind of error:
- The binding type [ soap] is not valid
- The implementation type [ bw ] is not valid

There is an issue when BE loads palettes, and this is due to the property:

java.property.palettePath=%BE_HOME%/lib/palettes

You must modify this property and tell BE to load the BW and TRA palettes before the BE ones:

java.property.palettePath=%BW_HOME%/lib/palettes%PSP%%TRA_HOME%/lib/palettes%PSP%%BE_HOME%/lib/palettes

If there is a public communication about that issue, I will send you the link. But for now, just remember there is
an issue with the class loader, and the workaround is to modify the palettePath java property.

netstat -atnp | grep 6090: to check whether a port (here 6090) is already in use.

Import and include a schema:

To import a parent schema into a child schema, first create the parent schema and then the child schema. On
the child schema, select the Overview tab; you will see both an Import and an Include button there.
With the above settings we have successfully imported the schema. Now, to use the elements or nodes of the
parent schema, go to the Elements/Types tab and, in the Extends section of the schema, click and select extends.
Start typing tns: (or the namespace alias of the imported schema) and the node name you would like to include
in the child schema. There is a restriction that we can only extend one node with each extends part, or else the
schema tool throws an error.
Include: the schema namespaces of parent and child must be the same; only then can include happen. With
include, the complete description along with its elements is included, whereas with import it just gets
referenced from the parent schema definition.

WSDL: We can create WSDL messages and port types in different WSDL resources and use them together in
SOAP Event Source activities. You need to select the WSDL defining the operation while configuring the
SOAP Event Source activity, and in the General tab you will notice that the Embed Interface and Embed Types
check boxes are automatically checked. If you uncheck a checkbox, you will notice that the concrete WSDL
does not contain the input and output elements and will just contain the schema import. In the schema-import
scenario, we will have to manually import the schemas to have the input and output elements.

1-Way and 2-Way SSL (Mutual or Client Authentication)

In two-way SSL authentication, the SSL client application verifies the identity of the SSL server application, and then the
SSL server application verifies the identity of the SSL client application.

Two-way SSL authentication is also referred to as client or mutual authentication because the application acting as an
SSL client presents its certificate to the SSL server after the SSL server authenticates itself to the SSL client.

Establishing the encrypted channel using certificate-based 2-Way SSL involves:

1. A client requests access to a protected resource.


2. The server presents its certificate to the client.
3. The client verifies the server's certificate.
4. If successful, the client sends its certificate to the server.
5. The server verifies the client's credentials.
6. If successful, the server grants access to the protected resource requested by the client.

1-Way SSL
In this mode, the SSL client application is not verified by the SSL server application. Only the server is
verified.

Limitations of using JMS Queue Requestor:

1. There is no guarantee that we will receive back the same messages that we sent in the request. The
reason is that the message ID generated by EMS for the JMS Queue Requestor may not be unique when
requests arrive too frequently, within fractions of a second.
2. The message ID generated for the JMS Queue Requestor cannot be customized.
EnableMemorySavingMode Property at PAR level
When using memory saving mode, as each activity is executed, the list of process variables is evaluated to
determine whether subsequent activities in the process refer to each variable. If no activities refer to a variable,
the memory used by it is released. Set the EnableMemorySavingMode.<processname> property to true to
enable this for a specific process, or EnableMemorySavingMode=true for all processes.
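As a sketch, the property can be set in the engine's .tra file either globally or per process (the project and process names below are placeholders):

```properties
# Enable memory saving mode for every process
EnableMemorySavingMode=true

# ...or only for one process (placeholder process name)
EnableMemorySavingMode.MyProject/Processes/LargeFileProcess=true
```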
SOA (service-oriented architecture) enables the following:
1. Code reusability
2. Loosely coupled applications
3. High availability of data between different systems
4. Easy changing and assembling


Soap Vs Rest

Sr. no  SOAP                                          REST
1       SOAP header overhead                          No such headers
2       Schema conformance (XSD, WSDL)                WSDL 2.0 has only slight support for contracts
3       Heavyweight                                   Lightweight
4       WS-* standards (transaction mgmt, security)   Not supported, or custom support
5       XML                                           XML, JSON

EMS vs RV
1. RV is used for faster messaging, whereas EMS is used for reliable messaging
2. RV does not support persistent messaging
3. RV uses a bus architecture, whereas EMS uses hub-and-spoke
4. Both RV and EMS are developed by TIBCO; the EMS server itself is written in C

What is REST?
Representational State Transfer (REST) is a platform- and language-independent architectural style used in
building services for distributed systems and networked applications. REST ignores the details of component
implementation. The key features of the REST architectural style are:
- Client-server architecture: provides a separation of implementation details between clients and servers.
- Stateless communication: during client-server communication, the server does not store any session
information, making the communication stateless. Every request is complete in itself and must include
all the information required to complete it.
- Cacheability: provides an option for the client to cache response data and reuse it later for equivalent
requests, thus eliminating some client-server interactions. This results in improved scalability and
performance.
What is a Resource?
REST APIs operate on resources that are defined within a REST interface file such as a Swagger
specification. A resource is a representation of a thing (a noun) on which the REST APIs (verbs) operate.
Some examples of a resource are a user, a book, or a pet. The REST operations, POST, GET, PUT, and
DELETE operate on a resource. Individual instances of a resource are identified by a unique identifier
within the resource such as an ID or name.

"include" Component - This component brings all declarations and definitions of an external schema
document into the current schema. The external schema document must have the same target
namespace as the current schema. "include" components are usually used to build a new schema by
extending existing schema documents.
"import" Component - This component offers the same functions as the "include" component except that
the included schema document has a different target namespace. "import" components are usually used
to build a new schema by borrowing element declarations from existing schema documents from other
namespaces.

Use xsd:include brings all declarations and definitions of an external schema document into the current
schema.

Use xsd:import to bring in an XSD from a different namespace and used to build a new schema by
extending existing schema documents.

The difference between the include element and the import element is that the import element allows
references to schema components from schema documents with different target namespaces and the
include element adds the schema components from other schema documents that have the same target
namespace (or no specified target namespace) to the containing schema. In short, the import element
allows you to use schema components from any schema; the include element allows you to add all the
components of an included schema to the containing schema.
The fundamental difference between include and import is that you must
use import to refer to declarations or definitions that are in a
*different* target namespace, and you must use include to refer to
declarations or definitions that are (or will be) in the *same* target
namespace.

You are absolutely right to use xs:import in your schema because the
target namespace of your schema is
http://www.cisco.com/elearning/xsd/ciscomd_v001 but the target
namespace of the schema that you're importing is
http://www.imsglobal.org/xsd/imsmd_rootv1p2p2.
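As a minimal sketch of the two components (namespaces and file names below are placeholders): include pulls in a document with the same target namespace, import one with a different namespace referenced via its prefix:

```xml
<!-- child.xsd: common.xsd shares this targetNamespace, so include works -->
<xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema"
           targetNamespace="http://example.com/app"
           xmlns:ext="http://example.com/other">
  <xs:include schemaLocation="common.xsd"/>
  <!-- other.xsd has a different targetNamespace: it must be imported -->
  <xs:import namespace="http://example.com/other" schemaLocation="other.xsd"/>
  <xs:element name="Order" type="ext:OrderType"/>
</xs:schema>
```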
XML Schema Sequence Element

The sequence element specifies that the child elements must appear in a sequence. Each child element can
occur from 0 to any number of times, depending on its minOccurs and maxOccurs values (both default to 1).
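For example (element names are illustrative), with minOccurs/maxOccurs controlling how often each child may appear:

```xml
<xs:element name="person">
  <xs:complexType>
    <xs:sequence>
      <xs:element name="firstname" type="xs:string"/>
      <xs:element name="lastname"  type="xs:string"/>
      <!-- optional, repeatable child -->
      <xs:element name="nickname"  type="xs:string" minOccurs="0" maxOccurs="unbounded"/>
    </xs:sequence>
  </xs:complexType>
</xs:element>
```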

XML Schema Restriction Element

A restriction allows only a set of values for an element. There are around 12 facets used to define restrictions:
minInclusive, maxInclusive, minExclusive, maxExclusive, totalDigits, fractionDigits, pattern, whiteSpace,
enumeration, length, maxLength, minLength.
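A couple of those facets in use (type names and values are illustrative):

```xml
<!-- age limited to 0..120 via minInclusive/maxInclusive -->
<xs:simpleType name="AgeType">
  <xs:restriction base="xs:integer">
    <xs:minInclusive value="0"/>
    <xs:maxInclusive value="120"/>
  </xs:restriction>
</xs:simpleType>

<!-- only three values allowed via enumeration -->
<xs:simpleType name="CarType">
  <xs:restriction base="xs:string">
    <xs:enumeration value="Audi"/>
    <xs:enumeration value="BMW"/>
    <xs:enumeration value="Golf"/>
  </xs:restriction>
</xs:simpleType>
```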

https://TIBCO4all.com/2016/01/20/how-to-configure-TIBCO-ems-fault-tolerance-and-load-balancing/
Endpoint URL: the URL at which the service is hosted on the server and can be accessed by a third party
SOAP action: indicates which operation needs to be invoked while calling a web service
Generate Error: primarily used for user-defined errors
Catch: used to handle exceptions

XPath function "tib:render-xml" vs "Render XML" activity

We can use the tib:render-xml function instead of the Render XML activity if we are not modifying the XML structure.

We can't set the encoding with the function, whereas with the activity we can.

By default, the Render XML activity does XML (well-formedness) validation, not schema validation.

Catch activity vs Error path

In TIBCO, DB adapters are C-based, while BusinessWorks is a Java-based application.

Hence we can't catch all the exceptions from an adapter in BW using the error path.

So we need to use either "Catch" or "Write to Log" to know the error in BW.

Scenario on EMS Server start up

If duplicate queue/topic names are present in a conf file, can the EMS server start with that conf file?

Yes. When starting the EMS server, if duplicate queue/topic names are present in the respective conf files, the
server displays the duplicate names during startup and will still start.

If duplicate entries are present in bridges.conf, what happens while starting up the EMS server?

If duplicate entries are present in bridges.conf, the server displays the line number where it found the duplicate
entry and the server will not start.
How To Handle / Parse / Split Large XML Files?

We understood that we cannot fit a 10 GB file into only 1 GB or 4 GB of memory, i.e. the RAM size.

Even for a 77 MB file it might not be enough, depending on your implementation.

Memory pressure is a common problem for applications that need to handle large XML files.

The XML parser of BW works roughly like a DOM parser.

Meaning, it loads the whole XML file into memory. This is not really designed for working with huge XML.

For reading a large XML file you will need to use a different XML parser that does not load the whole
document, such as StAX or SAX. For this you need to use Java code.

Design steps:

Step 1: Get the StAX jar file and Java code from the open-source StAX XML parser links.

Step 2: Copy the jar file into the TIBCO library folders.

Step 3: Design one subprocess with a Java Code activity and configure the parameters as required for the jar file.

Step 4: Design the main process: File Poller, Call Process and Parse XML.

Step 5: Select the subprocess (which we designed in Step 3) in the main process's Call Process.

A solution example is posted in this blog under the title "BW solution for Large XML Files".
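The StAX approach described above can be sketched in plain Java (on modern JVMs no extra jar is needed, since StAX ships in the JDK as javax.xml.stream; in an older BW project you would add the jar via an AliasLibrary as described elsewhere in these notes). The example streams over an XML string and counts elements without building a DOM; in the real design, the Reader would wrap the large file instead:

```java
import java.io.StringReader;
import javax.xml.stream.XMLInputFactory;
import javax.xml.stream.XMLStreamConstants;
import javax.xml.stream.XMLStreamException;
import javax.xml.stream.XMLStreamReader;

public class StaxCount {
    // Streams through the XML one event at a time; memory use stays flat
    // no matter how large the input is, unlike a DOM-style parse.
    static int countElements(String xml, String name) {
        try {
            XMLStreamReader r = XMLInputFactory.newInstance()
                    .createXMLStreamReader(new StringReader(xml));
            int count = 0;
            while (r.hasNext()) {
                if (r.next() == XMLStreamConstants.START_ELEMENT
                        && r.getLocalName().equals(name)) {
                    count++;
                }
            }
            r.close();
            return count;
        } catch (XMLStreamException e) {
            throw new RuntimeException(e);
        }
    }

    public static void main(String[] args) {
        String xml = "<orders><order id=\"1\"/><order id=\"2\"/><order id=\"3\"/></orders>";
        System.out.println(countElements(xml, "order")); // 3
    }
}
```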

How to handle / read a large file in TIBCO BW?

1) The best practice is to use the File adapter if we need to process a large file of around 25 MB. The
TIBCO File adapter parses the file and sends it over to the TIBCO BW engine. Whenever the file size is
larger than that, the TIBCO BW process may hang or give java.lang.OutOfMemoryError.

2) If the file is of a delimiter-separated type (fixed format or comma separated), then we can use the "Parse
Data" activity to read the file in subsets. Check the option "Manually Specify Start Record". We can make a
loop around this activity, where we pass the 'startRecord' value as 100, 200, 300 in each iteration, so that
each iteration reads a specific chunk of 100 lines. Then we can process these 100 lines within the loop.

3) We may also need to increase the heap size at run time; we can see the setting in TIBCO
Administrator while deploying the project.

For heap size, go to the process service instance of that project -> Monitoring -> Server Settings -> Maximum
Heap Size.

For threads (max jobs), go to the process archive (.par) -> Advanced; below, we can see TIBCO BW Process
Configuration, where we can set the max jobs for this read-file process to 8.

We can also try setting EnableMemorySavingMode=true.

Error: out-of-memory errors received by a Wait For JMS Topic Message

A shorter event timeout should be configured in the Message Event tab of the activity.

Event Timeout in the Message Event tab specifies the amount of time (in milliseconds) a message waits if it is
received before this activity is executed.
If the event timeout expires, an error is logged and the event is discarded.
If no value is specified in this field, the message waits indefinitely.
If we specify a shorter timeout, then we can meet the requirement given in the question.
SSL Tracing Information - Enable

For troubleshooting any problem related to SSL configuration in BW, it helps to enable the following
tracing properties:
Trace.Task.*=true (client-side SSL tracing information is made available)
Trace.Startup=true
Trace.JC.*=true
Trace.Engine=true
Trace.Debug.*=true
bw.plugin.http.server.debug: true (server-side SSL tracing information is made available)
We can specify the tracing properties in a custom properties file anywhere on the file system (e.g. in
C:\test\props.cfg), then reference the file in your C:\TIBCO\designer\5.3\bin\designer.tra file using the
property
java.property.testEngine.User.Args=-p c:/test/props.cfg
After updating your designer.tra file, you must restart Designer for the updates to take effect.

TIBCO ADB Adapter Error - Failed to load shared library

When starting an Adapter Service from the TIBCO Designer Adapter Tester, you might encounter this error:
Failed to load shared library, library name: adb55.dll.
Fix this error with the following steps:
1. Open C:\WINDOWS\system32.
2. Open the TIBCO RV home directory (it might be C:\TIBCO\tibrv\8.1\bin).
3. Copy the libeay32.dll and the ssleay32.dll from the TIBCO RV home directory to the system32 directory.
4. Say yes if you are prompted to replace the files.
Restart your TIBCO Designer and try again.

Difference between the exception table and opaque tables in TIBCO ADB adapter configuration
Posted on March 2, 2016 by Lijo Ouseph

The subscription service uses two logical layers when processing a message. The first layer decodes
data from the message and the second layer provides the database transaction. If an exception
occurs in the first layer, the adapter logs the message to the opaque exception table. In the second
layer, if any DML command fails at any level, the adapter rolls back this transaction and starts another
transaction, inserting into exception tables. If the insert into exception table transaction fails, the
adapter then logs the message to the opaque exception table.

In other words, if some error occurred when the subscription service inserts/updates/deletes data in the target
table, the error information will be inserted into the exception table. The exception table contains the columns
of the target table. But sometimes an error occurs on inserting data into the exception table, for example
inserting a character value into a number field; this error will be inserted into the opaque table. The opaque
table doesn't contain the columns of the target table.

What is JNDI?
The Java Naming and Directory Interface (JNDI) is an API for directory service that allows clients to discover and lookup
data and objects via a name. Like all Java APIs that interface with host systems, JNDI is independent of the underlying
implementation. Additionally, it specifies a service provider interface (SPI) that allows directory service
implementations to be plugged into the framework. The implementations may make use of a server, a flat file, or a
database; the choice is up to the vendor.

JNDI is also used as an abstraction of the underlying implementation: JNDI URLs need not change even if
the EMS server URL changes.

EMS uses JNDI to provide lookup for administered objects like connection factories, queues and topics.

The factories.conf syntax (one entry per factory):

[factory-name]    # mandatory; square brackets included
type = generic|xageneric|topic|queue|xatopic|xaqueue
url = url-string
metric = connections | byte_rate
clientID = client-id
[connect_attempt_count|connect_attempt_delay|connect_attempt_timeout = value]
[reconnect_attempt_count|reconnect_attempt_delay|reconnect_attempt_timeout = value]
[ssl-prop = value]*

Difference between the StepCount and ThreadCount properties of the TIBCO BW engine

The ThreadCount property defines the number of threads (Java threads) which execute all your processes.
So, with the default value of 8 threads you can run 8 jobs simultaneously.

The StepCount, on the other hand, defines the number of activities executed before a thread can context-switch
into another job.

Sample scenario:

- a process with 5 activities
- ThreadCount is 2
- StepCount is 4
If there are 3 incoming requests, the first two requests spawn 1 job each. The third job is spawned, but
gets paused due to insufficient threads.

After the first job completes the fourth activity, the thread is freed and can be assigned to another paused
job. So the first job will be paused and the third job starts to execute.

When the second job reaches the fourth activity, this thread will be freed and is available for
reassignment. So the second job pauses and the first resumes.

After the third job reaches its fourth activity, the thread is freed again and resumes job number one (and
completes it). Afterwards job number 3 gets completed.

All of this is a theoretical scenario. What you usually need is to set the number of concurrent jobs (so
ThreadCount). The StepCount is close to irrelevant, because the engine will take care of the pooling and
mapping of physical threads to virtual BW jobs.
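The slicing behaviour of StepCount can be imitated with a toy cooperative scheduler (a sketch only; the real engine's thread pooling is more involved, and this simulation uses a single thread rather than modelling ThreadCount). Each job runs in slices of stepCount activities and is put back at the end of the ready queue when it is preempted:

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.Deque;
import java.util.List;

public class StepCountDemo {
    // Round-robin over jobs: each turn executes up to stepCount activities,
    // then the job yields, mimicking the engine's context switch.
    static List<Integer> completionOrder(int jobs, int activitiesPerJob, int stepCount) {
        Deque<int[]> ready = new ArrayDeque<>(); // each entry: {jobId, remainingActivities}
        for (int j = 1; j <= jobs; j++) ready.add(new int[]{j, activitiesPerJob});
        List<Integer> done = new ArrayList<>();
        while (!ready.isEmpty()) {
            int[] job = ready.poll();
            job[1] -= Math.min(stepCount, job[1]); // run one slice of activities
            if (job[1] == 0) done.add(job[0]);     // job finished
            else ready.add(job);                   // preempted: requeue at the back
        }
        return done;
    }

    public static void main(String[] args) {
        // 3 jobs, 5 activities each, StepCount=4: every job needs two slices,
        // so each is preempted once before finishing in arrival order.
        System.out.println(completionOrder(3, 5, 4)); // [1, 2, 3]
    }
}
```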

Difference between plug-ins and adapters


The primary difference is ease of use and deployment. The TIBCO MQ Series adapter works by using the
MQ binding file, and it is invoked inside BW by using the adapter palette activities. MQ adapters are
deployed as a separate entity (an .aar archive).

ActiveMatrix MQ plug-ins, on the other hand, enable developers to interact directly with MQ destinations,
and no additional deployment of an adapter service is required.

TIBCO BW 5.x and BW 6.x are different product branches. TIBCO BW 6.x is a successor of TIBCO BW
Express, which was directed at smaller companies, while BW 5.x was (and still is) directed at enterprises.
Basically they're different products with similar names.

As a note, I can point out that last time I've checked official support end dates for both products, BW 5.x
had later end dates than BW 6.x. I suppose BW 5.x has a bigger customer base.

With BW 6.x, the REST palette is standard and not a separate plugin you need to purchase. The IDE for
development is based on Eclipse, deployment is possible directly from BW Studio, and plugins are available
to integrate with Maven etc., so CI/CD is possible. As the IDE is Eclipse-based, Design/Debug/Java etc.
perspectives are available.

ProjLib Vs GVs
You have to create one properties file for GVs, whose reference you give in a file named the same as your
project; this file is created whenever you first run any process in your project.

The path will be: C:/Users/[username]/.TIBCO/BW Debug/[file for your project].

You have to open the above file and enter the line below:

usrargs=-p [property file full path]

Suppose your property file is saved at D:/property/ProjectName.prop:

usrargs=-p D:/property/ProjectName.prop

Now, in the property file you have to mention all the GV values you want to change at runtime, like this:

tibco.clientVar.TestProject/Connection/JMS/Username=user1
tibco.clientVar.TestProject/Connection/DB/Timeout=60

So, mention however many variables you want to change at runtime in this property file.

Using an external jar as an AliasLibrary, and Java Global Instance

1. In your project, add an AliasLibrary task from the General palette. Add the jar file to the AliasLibrary
containing the Class you want to access.
2. Within a BusinessWorks process activity, drag a "Java Method" task onto the canvas. Use the
configuration tab to specify the AliasLibrary and then use the finder to locate the Class and method
you wish to invoke. The "Advanced" tab gives you some options for managing the java instance
lifecycle associated with this method call.
Optionally, if you want to instantiate a global java instance which is shared among multiple
jobs/processes, then use the "Java Global Instance" task from the Java palette. In the configuration tab,
point to the AliasLibrary and use the finder to locate the Class and static method you want to execute. The
"Java Method" task can be used to invoke a method on this global instance.
The "Java Global Instance" may also be necessary if you don't have a default constructor on your java
class.

Same Server failover

It looks from our testing like there are settings on both the server and the client that enable this feature.
On the client side, SetReconnAttemptCount, Delay and Timeout govern the attempts the client makes to
reconnect once it is aware of a server failover / connection failover.

In our testing, we used a single-server environment, listed the server twice in the connection string (using
the trick you outlined above), and when that server was taken offline, we received a client notification of
the failover process taking effect (we enabled Tibems.SetExceptionOnFTSwitch(true)); when the server
was brought back online, our client seamlessly reconnected without missing a beat. We didn't need to
code anything; the internal reconnect logic worked its magic.

On the server side, fault tolerance needs to be enabled, and I believe server-client and client-server
heartbeats need to be enabled (though this has not yet been verified).

Hope this helps.

EMS Best Practices

- Naming conventions for ports, connection factories, destination names, security group names
- Security model:
  - groups (synced over LDAP, group membership managed out in the directory, use the existing
    company security process)
  - users (authenticated over LDAP, managed out in the directory, only assigned to groups in the
    directory, authorized based on group)
  - service accounts (special users created and managed in TIBCO, for non-human use)
  - permissions are assigned to groups (the group effectively becomes a role)
  - wildcards created using the destination naming convention, which eases provisioning once the
    security model is established
  - ACL entries created for wildcards for each group created for each security realm (a collection of
    groups sharing a wildcard)
  - users use admin tools and do few destination tasks; service accounts do the bulk of destination
    work and ZERO admin tools
- Standardize EMS admin provisioning tasks via scripts, and preferably a system that can automate the
  scheduling/execution of these scripts (e.g. Jenkins); determine how the right EMS server instances are
  chosen; establish a review/approval process
- High availability should be implemented where possible and should support flexibility for rolling
  maintenance
- Load balancing designed up front: when and when not to use it
- Implement JNDI, such that applications do not have direct knowledge of the EMS server instances that
  are used for their destinations (this also gives admins greater flexibility for adding EMS servers and
  moving connection factories without disruption, if combined with FT)
- Create resiliency through:
  - high availability of EMS using fault tolerance
  - location transparency via JNDI
  - appropriate restrictions on destinations
  - reusable connection management libraries for development
- Bridging usage should be well defined and standardized (e.g. publish to topics, bridge to queues). Use of
  selectors should be part of the message header design, if EMS selectors will be used
- Routing should be well thought out, with protections created for both sides
- Scaling designs should be designed and published up front; test out the upper limits of naming
  conventions and domains
- End-client usage directly against EMS (from PCs, laptops, tablets, smartphones) should be isolated and
  well thought out
- Establish templatized Hawk rulebases
- Implement an appropriate monitoring/alerting/notification framework to ensure admin and developer
  awareness of the health of EMS resources
- A maintenance plan needs to be established and followed
- Installation best practices created to support scaling and/or rebuilding of servers
- Establish messaging standards for approved patterns; link these to provisioning procedures
- Work with OS and HW vendors to review and certify designs before implementing, especially the
  companies that support your OS (e.g. RedHat) and hardware (e.g. IBM, HP, etc.)
- Create a center of competency with at least a knowledge base for admins, developers, etc. to capture and
  share knowledge (how-tos, FAQs, support articles, provisioning procedures, policies, documentation, etc.)

Read the $sys.monitor topic messages.

JDBC:
The JDBC transaction allows multiple JDBC activities that access the same
database connection to participate in a transaction. Only JDBC activities that use
the same JDBC Connection participate in this transaction type, but other activities
can be part of the transaction group. If the transaction commits, all JDBC activities
using the same JDBC connection in the transaction group commit. If the
transaction rolls back, all JDBC activities using the same JDBC connection in the
transaction group roll back.
The transaction group commits automatically if all activities in the group
complete and a non-error transition is taken out of the transaction group. If any
errors occur while processing the activities in the group, even errors in non-JDBC
activities, the transaction is rolled back and an error is returned (you should have
an error transition out of the group to handle this situation).
Individual JDBC activities can override the default transaction behavior and
commit separately.
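The group semantics described above can be sketched with a toy in-memory model (this is a conceptual illustration, not the BusinessWorks API or real JDBC): activities sharing one connection commit together, and an error in any activity — even a non-JDBC one — rolls the whole group back.

```java
import java.util.ArrayList;
import java.util.List;

// Toy model (NOT the BusinessWorks API) of JDBC transaction-group
// semantics: activities on a shared connection commit together; an error
// in ANY activity, even a non-JDBC one, rolls all of them back.
public class JdbcGroupModel {

    /** Work recorded against a shared connection, pending until commit. */
    static class Connection {
        final List<String> pending = new ArrayList<>();
        final List<String> committed = new ArrayList<>();
        void commit()   { committed.addAll(pending); pending.clear(); }
        void rollback() { pending.clear(); }
    }

    /** One activity in the group; it may fail with any error. */
    interface Activity { void run(Connection conn) throws Exception; }

    /**
     * Runs the group: commit on success, rollback on any error. In
     * BusinessWorks you would take an error transition out of the group
     * instead of catching the exception in code.
     */
    static void runGroup(Connection conn, Activity... activities) throws Exception {
        try {
            for (Activity a : activities) a.run(conn);
            conn.commit();
        } catch (Exception e) {
            conn.rollback();
            throw e;
        }
    }

    public static void main(String[] args) throws Exception {
        Connection conn = new Connection();
        // Both JDBC updates commit together.
        runGroup(conn,
                 c -> c.pending.add("INSERT order"),
                 c -> c.pending.add("UPDATE stock"));
        // An error in a non-JDBC activity rolls the JDBC work back too.
        try {
            runGroup(conn,
                     c -> c.pending.add("INSERT audit"),
                     c -> { throw new Exception("mapper failed"); });
        } catch (Exception e) {
            System.out.println("rolled back: " + e.getMessage());
        }
        System.out.println("committed = " + conn.committed);
    }
}
```

With real JDBC the equivalent pattern is `conn.setAutoCommit(false)` followed by `conn.commit()` or `conn.rollback()` on one shared `java.sql.Connection`.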
JTA:
The Java Transaction API (JTA) UserTransaction type allows JDBC, JMS,
ActiveEnterprise Adapter (using JMS transports), and EJB activities to participate
in transactions. JTA specifies standard Java interfaces between a transaction
manager and the parties involved in a distributed transaction system: the
resource manager, the application server, and the application. Sun Microsystems
developed and maintains the API.

For activities that use the JMS transport, request/reply operations cannot
participate in a JTA transaction.
Not all application servers permit JMS and JDBC operations to participate in the
JTA transaction. Refer to your application server documentation for more
information about supported operations. If the application server does not permit
an operation, TIBCO ActiveMatrix BusinessWorks still allows you to configure
the operations in the transaction. However, no exception is raised and the
operations that are not supported by the application server are performed
independent of the transaction.

If the transaction commits, all eligible activities in the transaction group commit.
If the transaction rolls back, all eligible activities in the transaction group roll
back. The transaction group commits automatically if all activities in the group
complete and a non-error transition is taken out of the transaction group. If any
errors occur while processing the activities in the group, even errors in activities
that do not participate in the transaction, the transaction is rolled back and an
error is returned. You should create an error transition out of the group to handle
this situation.

XA:

The XA Transaction type allows you to specify an XA-compliant transaction
manager provided by a third party that supports the interfaces
javax.transaction.TransactionManager and javax.transaction.xa.XAResource.
For the TIBCO BusinessWorks XA Transaction Manager,
configuration occurs during product installation.
If the transaction commits, all eligible activities in the transaction group commit.
If the transaction rolls back, all eligible activities in the transaction group roll
back. The transaction group commits automatically if all activities in the group
complete and a non-error transition is taken out of the transaction group. If any
errors occur while processing the activities in the group, even errors in activities
that do not participate in the transaction, the transaction is rolled back and an
error is returned. You should have an error transition out of the group to handle
this situation.
Please see the BW Process Design Guide for more information.

JMS Local Transaction


The JMS Local transaction allows JMS activities to participate in a transaction.
The JMS specification defines the concept of a transacted JMS Session that can be used to
transact sends and receives, all or none of which are executed when the session is
committed.
A session, when specified as transacted, supports a single series of transactions. Each
transaction groups a set of produced messages and a set of consumed messages into an
atomic unit of work. Multiple transactions organize the session's input message stream
and output message stream into a series of atomic units. When a transaction commits, its
atomic unit of input is acknowledged and the associated unit of output is sent. When a
transaction rollback is done, all the produced messages (output stream) are destroyed and
the consumed messages (input stream) are automatically recovered.
Activities supported:
JMS Queue Sender
JMS Topic Publisher
Get JMS Queue Message
JMS Event Sources (JMS Queue Receiver and JMS Topic Subscriber)
Reply To JMS Message
Publish to Adapter
Adapter Subscriber
Adapter Request-Response Server
Note: The JMS Queue/Topic Requestor and Wait for JMS Queue/Topic Message activities will
not participate in a local transaction even if they are included in a JMS Local Transaction group.
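The commit/rollback behavior described above can be illustrated with a toy in-memory model (a conceptual sketch, not the javax.jms API — with the real API you would create the session as connection.createSession(true, Session.SESSION_TRANSACTED) and call session.commit() or session.rollback()):

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.Deque;
import java.util.List;

// Toy model (NOT javax.jms) of transacted-session semantics: sends are
// buffered and receives are provisional until commit(); rollback()
// destroys the buffered sends and recovers the provisionally consumed
// messages back onto the queue.
public class TransactedSessionModel {
    final Deque<String> queue = new ArrayDeque<>();         // the destination
    final List<String> pendingSends = new ArrayList<>();    // output stream
    final List<String> pendingReceives = new ArrayList<>(); // input stream

    void send(String msg) { pendingSends.add(msg); }        // not visible yet

    String receive() {                                      // provisional consume
        String m = queue.poll();
        if (m != null) pendingReceives.add(m);
        return m;
    }

    void commit() {              // acknowledge the input, release the output
        queue.addAll(pendingSends);
        pendingSends.clear();
        pendingReceives.clear();
    }

    void rollback() {            // destroy the output, recover the input
        pendingSends.clear();
        for (int i = pendingReceives.size() - 1; i >= 0; i--)
            queue.addFirst(pendingReceives.get(i));         // redeliver in order
        pendingReceives.clear();
    }
}
```

A sent message only becomes visible on the destination after `commit()`, and a received message reappears on the destination after `rollback()`, matching the "atomic unit of work" wording above.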

Factors affecting performance of TIBCO BW


The factors below determine the runtime performance of a TIBCO ActiveMatrix BusinessWorks process, so these
resources must be chosen carefully:

Hardware - CPU, Memory and Disk resources


Java - JVM and JVM configuration
Engine - Number of engines, number of threads, job creators, flow control, job pool, etc.
Job - Job, Message size
Process Design - User scripts, sub-processes, inline processes, checkpoints, logging activities
External software - e.g. relational databases, other TIBCO software products

In addition to the components above, the performance of the BusinessWorks engine is also affected by external factors
such as
rate of incoming messages,
network latency,
performance of external applications with whom BW processes communicate, and
other OS processes that may be running on the system

AppManage command syntax


AppManage is located in C:\tibco\tra\5.4\bin

Creating the configuration XML file from a deployed application:

AppManage -export -out C:\AUTODEPLOY\HOToStore.xml -app HOToStore/HOToStore -domain TIBADMIN -user tibco -pw tibco

Creating the configuration XML file from an EAR file:

AppManage -export -ear C:\AUTODEPLOY\HOToStore.ear -out C:\AUTODEPLOY\HOToStore.xml

Deploy command using the EAR and configuration XML:

AppManage -deploy -ear C:\temp\HOToStore.ear -deployconfig C:\temp\HOToStore.xml -domain TIBADMIN -user tibco -pw tibco

Undeploy command:

AppManage -undeploy -app HOToStore/HOToStore -domain TIBADMIN -user tibco -pw tibco

Deploy command for an application that is in the undeployed state:

AppManage -deploy -app HOToStore/HOToStore -domain TIBADMIN -user tibco -pw tibco

Delete command:

AppManage -delete -app HOToStore/HOToStore -user tibco -pw tibco -domain TIBADMIN -force

Exporting an EAR file from Administrator

We can export the EAR file and deployment configuration file of a deployed application from
Administrator with the AppManage command:

AppManage -export -out c:\temp\ExampleApp.xml -genEar -ear c:\temp\ExampleApp.ear -app root/folder1/ExampleApp -user <username> -pw <password> -domain <domain name>

The deployment configuration file and EAR file are created in the c:\temp folder. The application
resides in root/folder1/, which is relative to the Application Management root in the TIBCO
Administrator GUI.

We can export all application EARs in an administration domain using the AppManage
-batchExport option.

Example,

AppManage -batchExport -user <user name> -pw <password> -domain <domain name> -dir
c:\temp\test
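When scripting deployments around AppManage (for example from a build tool), it helps to assemble the command line as an argument list instead of concatenating strings. The sketch below is a hypothetical helper, not part of TIBCO Runtime Agent; the domain, user, and password values are just the sample values used in the examples above.

```java
import java.util.List;

// Hypothetical helper that assembles an AppManage export command line
// like the examples above. The resulting list could be handed to
// new ProcessBuilder(cmd).inheritIO().start() on a machine where
// AppManage is on the PATH.
public class AppManageCmd {
    static List<String> export(String app, String outXml,
                               String domain, String user, String pw) {
        return List.of(
            "AppManage", "-export",
            "-out", outXml,
            "-app", app,
            "-domain", domain, "-user", user, "-pw", pw);
    }

    public static void main(String[] args) {
        List<String> cmd = export("HOToStore/HOToStore",
                "C:\\AUTODEPLOY\\HOToStore.xml", "TIBADMIN", "tibco", "tibco");
        System.out.println(String.join(" ", cmd));
    }
}
```

Passing the arguments as a list avoids quoting problems when paths or passwords contain spaces.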
