
INTERNATIONAL JOURNAL FOR APPLICATION DEVELOPERS

JANUARY - MAY 2002

SUBSCRIPTIONS ARE FREE

VOL. 9 / ISSUE 1

WHAT ARE YOU DOING FOR YOURSELF?


by Rolf André Klaedtke

About the author


Rolf André Klaedtke is an independent consultant and software developer with over 15 years in the IT industry, mainly working with IBM midrange systems. He is the publisher of PowerTimes and has been the president of the Sybase and PowerBuilder User Group Switzerland. In 1996 and 1997 he was the main organizer of the Swiss PB Conference. Currently he works mainly on AS/400 and web development projects using Macromedia's tools. In his spare time, Rolf André likes jogging and biking and is interested in music, eXtreme Programming, Use Case Analysis and more. You may reach him at rak@powertimes.com.
Hi, my name is Rolf André and I used to be a workaholic!

That's it, I've publicly admitted it. But believe me, it has taken a long time to get to this point!

Let's jump back a few years: in 1995, when I decided to go independent, I had two main reasons for doing it.

Firstly, I always wanted to do it because it was challenging. I figured that I would learn a lot, and working for many clients, sometimes on short projects, would allow me to see various work environments and get to know lots of people. I found this a very exciting idea.

The second reason is probably not a good one for going independent: the past two or three years in the companies I worked for were more or less frustrating. I had to work with people who didn't understand their job or were too chaotic. I experienced mobbing before the term was widely known and used, both against me and against others. I was interested in programming and the technology, but there was a lot of politics behind the scenes. So, finally, I quit.
The second reason has a certain importance for what happened: because of my experiences and accumulated frustration, I decided that I would do (almost) anything just to avoid having to work as a permanent employee again. That means I started many activities in order to have many possible sources of income: if one did not work out, I would have an alternative.
The problem was that almost everything worked out. As a steering committee member of the Swiss PowerBuilder Group I was the editor of the group's newsletter. In 1995, when it started, it was already called PowerTimes...

In 1996, based on what I had seen and liked at PB user group conferences in Atlanta and Chicago in 1995, I organised the Swiss PowerBuilder Conference with the great help of Mark Lansing, Arthur Hefti and many others. It was an incredible experience and we repeated it in 1997.

Besides these activities I worked for two clients: one engagement was software development, the other a consulting-type assignment. I was also a reseller for some US software companies like MetaSolv, Greenbrier & Russell, Financial Dynamics, etc.

Oh yes, I lived in the same house as my girlfriend (there's a reason why I don't write that I lived WITH her!) and played soccer, where I was the coach of my team for one year!
A normal day didn't finish when I got back from my client's site. After dinner and the news on TV I started working at home, either for PowerTimes, the user group or whatever else was urgent. There was always something urgent! When I decided, sometime in the morning, that I had to go to bed, I told myself that I had to take a book and read something before falling asleep, since I needed to learn something...

I learned how to spell burn-out then!

It took me some years and a lot of pain to come to the conclusion that I was sick. My girlfriend moved out and I got several stress-related health problems. My doctor once told me that I might have a tumor because I had very strange headaches, tooth pain and sight troubles. It was actually not that serious, but an inflammation of a nerve called the trigeminus. It made me suffer. And finally think...

Page 1

Continued on page 2

PowerTimes
I won't go into more detail here. As I wrote earlier, it took me a long time to recognize my problem and even more to overcome it.

How did I do it? After the third soccer-related surgery I decided to quit playing soccer. I left the SPBUG steering committee and started to keep a somewhat normal schedule, which essentially means that I tried to go to bed at the latest around midnight or 1 a.m. The key point is getting focused.

I stopped drinking soft drinks like Coke and iced tea and started drinking water. Lots of water, actually. Just changing my drinking habits accounted for a loss of about 5 kg in weight! And, most importantly, I started jogging. The first time I went, I couldn't finish a 2 km run without walking. In September 2001 I finished my first half-marathon, that's 21.195 km! For this year, I have joined a Gigathlon team in Switzerland, where I'll cover the running distance (more information on this event is available through our website).

I don't know whether I'm healed today, but I definitely enjoy life much more and I am in better shape than ever before. I'm doing a better job at work, and problems that arise don't seem to take up as much of my resources. It sounds paradoxical, but I seem to have more and better time for my professional career too.

Now, what are you going to do for yourself? How about putting away that soft drink, drinking some water and going out for some physical exercise? That's what I'm going to do right now...

Take care,
Rolf André Klaedtke, editor

What's Inside

Editorial: What are you doing for yourself?
by Rolf André Klaedtke

PowerOpinions: PowerBuilder vs. Java vs. Visual Basic vs. C#
on Page 3

Communication Middleware for Mobile Applications: A Comparison
by Dr. Silvano Maffeis on Page 9

Heat Usage Metering with MobileBuilder and ASA UltraLite
by Daniel Wenger and Arthur Hefti on Page 12

The Architectural IDE and OMG's MDA
by Richard Hubert on Page 14

WebWorker: Creating a Search Form that is using a Stored Procedure
by Rolf André Klaedtke on Page 19

Testing your Web Application: A quick 10-Step Guide
by Krishen Kota on Page 22

ASP.NET Performance
by Alan Walsh on Page 25

The Impressum can be found on Page 2.

PowerTimes is an international developers' journal published several times a year. It is targeted towards client/server and web application developers.

Editorial Board, in alphabetical order:

Don Draper
Boris Gasin
Rolf André Klaedtke
Mark Lansing
Alan Walsh

Contact addresses:

Editor:
Rolf André Klaedtke
Bächlistrasse 21
CH-8280 Kreuzlingen / Switzerland
Fax: ++41 - (0)71 - 670 01 71
e-mail: rak@powertimes.com

Co-Editor:
Mark A. Lansing
Eichmatt 17
CH-6343 Rotkreuz / Switzerland
Fax: ++41 - (0)41 - 790 74 79
e-mail: mlansing@powertimes.com

On the web: http://www.powertimes.com


Back issues
Back issues are available on our website at
http://www.powertimes.com as Adobe Acrobat files.

Subscriptions
Subscriptions are available for free: just go to our website at http://www.powertimes.com and register on our mailing list. Each time a new issue is available, mailing list members will be notified and will receive the password for the issue.

Disclaimer
Articles or opinions published in PowerTimes do not necessarily reflect the opinion of the editorial board; they reflect the opinion of the submitter. We assume that the copyright of all submitted articles is owned by or granted to the submitter. We must decline all responsibility regarding the technical correctness or truth of submitted articles and opinions.

POWEROPINIONS:
POWERBUILDER VS. JAVA VS. VISUAL BASIC VS. C#
Introduction

This is not an ordinary article. In fact, it's not really an article at all. It's actually a discussion thread that took place between some very qualified people who have frequently written proper articles for PowerTimes over the years.

The following thread took place at the end of August 2001. We took out the irrelevant comments and submitted the document to all participants again this year for a final review. As the Editor of PowerTimes and the one who submitted the questions to the participants, I take full DIScredit for not having published this much earlier. I apologize for this.

I hope you'll enjoy reading the very interesting comments as much as I did. And if you'd like to add your opinions or remarks, please feel free to send us an email: opinions@powertimes.com.

The Question
The thread starts off with a question from Tom Brackney to Bruce Eckel (author of various books, including the famous Thinking in Java), who then forwarded it to the Editor here at PowerTimes.

Right now I am learning Java through your book (i.e. Thinking in Java) and CD. My question is: what is your opinion of PowerBuilder? Have you had enough exposure to it, or do you know someone who has, with the same credentials as you, to give an opinion on PB vs. Java? I guess I am seeking an opinion on PB from an experienced OO programmer who has used other tools besides PB.

In a second mail, Tom added the following:

One more question, if it's OK. In what way are you seeing Java implemented mostly? I am not seeing any Java except in connection with the web, as servlets or programs running on the web server to accommodate a client request. Is this true in your experience, and is that what you have seen Java mainly used for? Personally, I don't think PowerBuilder has outlived its usefulness, especially with what I have seen in the new version. But marketing and packaging have a great deal to do with which products are ultimately used in any IS shop. What is considered to be the leading-edge or hottest development language is not necessarily the best tool to use for a particular project. But when your managers say this is the standard tool we will be using in this corporation, then that's it.

The Answers
So, here comes what others had to say. Please refer to the end of the article or our website (www.powertimes.com) for a short bio of the participating people. Thanks to Tom for the question, and to Bruce for forwarding it. And of course a big thanks to all those who have replied.
Bill Heys, well-known for his paper on NVOs (Non-Visual
Objects in PowerBuilder) and seminars, now with Microsoft,
replied with the following:
I would like to respond to this message, but first, I would
like to broaden the discussion to include PB vs. VB vs.
Java, and perhaps even C#.
I think I am in somewhat of a unique position to give one person's point of view, having worked with and taught both Java and PB and, since I recently joined Microsoft, to a lesser extent VB and C#.
I would like to address issues such as: learning curve, OO
features, ease of use, performance, market strength, vendor
viability, and other pros and cons.
It will take me some time to fully develop all my thoughts,
but I thought I would at least let you know I am interested in
participating in the debate.
Overall, I fear that Sybase is becoming more and more irrelevant in the marketplace. I don't think they are market leaders in any category any longer. It is very sad. At Microsoft,
we view Oracle and IBM as our primary DBMS competitors,
and Sun/Java (along with IBM Websphere, and BEA
WebLogic) as our primary developer tools competition.
Finally we see AOL as a big threat for content on the Web.
Mark Pfeifer added the following:

This has become an extremely popular question in the past few months. I think we are seeing companies that have gone to Java possibly moving back to PB, or simply starting up their IT programs again. I'm not sure, but it is interesting to see companies stepping back and taking a look at their options. I for one like Java, but people can develop bad applications in any language :-)
Steve Benfield, a well-known PB and OO evangelist, now with SilverStream, wrote:

Java is primarily used for server-related programming: servlets and JSPs mostly, plus objects that encapsulate business logic and EJBs. (EJBs are used by about 10% of the people who say they do J2EE.) If you are doing web programming, Java works really well.

For client-side work, I'd use PB or VB. Java on the client can be used for a stand-alone app, but we all know PB/VB give a much better visual/GUI experience to the user.
Because of the advent of Web Services, using PB/VB on the client and Java on the server is much easier, because data can be passed via XML over HTTP. I'm not sure if PB
is getting into Web Services (Ed. note: the answer is yes, in
PB 9) with a stub generator for WSDL or not. There is no
need for developers to spend time parsing XML - a waste of
their time; let a code generator do it. It would be pretty easy
to write a WSDL -> PB NVO compiler I think. VB is a little
easier if you use .NET because it is already Web Services
savvy.
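Steve's point about passing data as XML over HTTP can be sketched in a few lines of Java. This is purely an illustration under invented names (the class, the order payload and its elements are not from any product mentioned here): one side serializes data as XML text, the other parses it back, and neither side needs to know what language the other is written in.

```java
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;
import java.io.ByteArrayInputStream;
import java.nio.charset.StandardCharsets;

public class XmlOverHttpSketch {
    // What a PB/VB client might put in an HTTP POST body (hypothetical payload).
    static String buildOrderXml(String customer, int quantity) {
        return "<order><customer>" + customer + "</customer>"
             + "<quantity>" + quantity + "</quantity></order>";
    }

    // What a Java servlet on the server side might do with that request body.
    static int parseQuantity(String xml) throws Exception {
        Document doc = DocumentBuilderFactory.newInstance().newDocumentBuilder()
            .parse(new ByteArrayInputStream(xml.getBytes(StandardCharsets.UTF_8)));
        return Integer.parseInt(
            doc.getElementsByTagName("quantity").item(0).getTextContent());
    }

    public static void main(String[] args) throws Exception {
        String body = buildOrderXml("ACME", 3);
        System.out.println("server parsed quantity: " + parseQuantity(body));
    }
}
```

The stub generators discussed above would generate exactly this kind of serialize/parse plumbing from a WSDL description, so that developers never write it by hand.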
Peter Horwood, aka Madman Pierre, covered some other tools and languages in his opinion:

So I'll throw in some really quick comments from my perspective and then get back to hard work making $$ from my opinions :-)
I have worked with, or been team lead on, all four (PB, VB, C++ and Java) in the past six months.

When we were in the building process of the VB project I was thinking: hmm, VB isn't as bad as I remember; obviously it has improved drastically in the past few years. However, when we got into testing, I remembered - oh yeah... terrible debugging, stupid errors that would have been caught at compile time by any decent language. By the time the project was over, I came to loathe VB - purely because it doesn't do compile-time error checking like Java and, to a large degree, PB. I even came away thinking that, compared to VB, even C++ does a lot of compile-time checking.
Years ago I read an article on debugging in Dr. Dobb's. I noted in an article I subsequently wrote that, if the Dr. Dobb's article is correct, a fully debugged C++ program is equivalent to a Java program that simply compiles. That should give you an idea how far down the line VB is, in my opinion. The BIGGEST advantage of VB is cheap DEVELOPERS. If I am doing a project that is small enough that the runtime compiling and debugging won't frustrate me to no end, then VB is great, because I can hire reasonably competent developers and pay them in small amounts of cheap Canadian dollars. This is a huge advantage.
I still like C++ for non-DB stuff where you want a small, tight program. (Personally I don't touch C++ - I hire the staff.) And my companies have used C++ in the past couple of years when writing tiny utilities and operating-system-type drivers. But since Sybase dropped Power++ I do not like C++ for anything else.

This brings it back to PB & Java. My answer there is very simple: for web presentation I like to use exclusively Java. (I specifically like Java to create the HTML etc.; I don't use Java applets on the client anymore.) And yes, I still have a SilverStream bent. But with the latest versions of SilverStream I look at SilverStream as the high-end scalable solution and look at other servers as the lower side of the scale. Macromedia Dreamweaver (+ accessories) is my development environment. For a client/server environment I like and use exclusively PowerBuilder.

If I'm actually PERSONALLY doing the work, Java and PB are the only solutions I am willing to consider these days. I enjoy life far too much to use VB, and I don't get my hands dirty with code enough to be competent in C++. I can leave PB for a year, never touching the code, and become competent again in a few hours; it is easy to get back in the saddle. Java is harder, C++ is much harder. I have no interest in working with C++ long enough and often enough to stay competent in it. I am much happier hiring people in such a case.

So, that's my 2 bits' worth. But remember, I'm Canadian, so our 2 bits are only worth 16.5 cents US :-)

Peter's lines stimulated Bill Heys' second, much longer, mail in this thread:

I haven't had the time yet to prepare my complete comparison. But I wanted to comment on a few of the thoughts submitted by Madman Pierre.

First, I was actually seeking to include the new MS language C# for .Net (rather than C++). I have no problem with adding C++ to the list, however. Second, I want to compare the latest version of VB (VB.Net), which is now in beta testing. I realize that there will be a lot of people complaining that I want to compare two beta-version languages, VB.Net and C#, to tried-and-true languages such as C++, PB, and Java.

My reasons are that if I compare PB version 7 with VB version 6, there would be many reasons why I would prefer PB over VB. Peter's comments are on target with respect to earlier versions of VB. But VB.Net is a strongly typed language now, and I believe that Peter would not only like the compile-time error checking in Visual Studio .Net (regardless of language choice), but that he would also like the IntelliSense feature of the script editors, which understand the properties and methods of classes and either type the code for you or let you pick from a drop-down list of methods, properties, etc.
If I compare PB 7 to PB 6, PB 6 would win hands down. I actually hated using PB 7. I felt the new interface of PB 7 was a disaster. I found it much more time-consuming and difficult to create menus in the PB 7 menu painter, or DataWindows in the DataWindow painter. There were too many postage-stamp windows crammed into the workspace. I was always resizing, maximizing, minimizing, pinning, unpinning, closing and opening panes. It was a pain.

Give me the old menu painter and the old DataWindow
painter any day.
As someone who has been trying to evangelize the object-oriented analysis, design, and implementation approach, I found PowerBuilder always a superior language compared to VB... until now. What I especially like about Java, in contrast to PB or C++, is that it is a pure object-oriented language. With PB or C++, you can create classes and develop fully object-oriented code, or you can choose to use the language's non-object-oriented features instead. With Java, everything is a class, and you must create fully object-oriented code, for better or for worse.
I agree with the suggestion that PB and VB are better than Java for traditional fat-client/server applications; Java's client-side user interface capabilities have always left a lot to be desired. I also agree with the observation that few Java developers are fully taking advantage of J2EE features such as EJBs, and with the opinion that most people are using Java for server-side processing such as generating HTML, servlets, JSP, etc. A few people are using Java for transaction-based processing.
As we compare Java with PB, VB, C#, and C++, I think it is
helpful to separate issues about the language from the
development environment and the execution environment.
In the case of PB, the language, the IDE, and the execution environment are all PB (with or without Jaguar CTS/EAS). I, for one, found the EAS environment slow, unreliable, and frustrating to use with native PB NVO components, although with EAS 3.5 and PB 7.0 things started to get a little (only a little) better.
With Java, the language is defined by Sun. But the IDE and
development environment can be PowerJ, Visual J++,
JBuilder, Symantec Cafe, Visual Age, Silverstream, and
others. Each of these has their base of loyal followers. In
itself, Java IDEs are a subject of great debate. The wars here
are not over. For runtime environments, in the case of Java
application servers, the leaders seem to be WebSphere, and
WebLogic with a host of also rans, each with their compelling
features. Apologies to those who love SilverStream, but I
dont think it comes close to the market share of either IBM
or BEA.

With VB and C#, the language is defined by MS. The development environment is Visual Studio .Net. And the runtime environment is the .Net platform (with its common language specification, common language runtime, and cross-language framework of classes) running on Windows NT, Windows 2000, or Windows XP.
I realize it will be a religious war at this point. But...
A key selling point for Java is cross-platform support. While this is true for the language, it is not necessarily true for a given application. It is virtually impossible to port an application developed in Java on, for example, WebSphere to any other vendor's application server. With Java, you are tied to a single language (Sun's Java) and to a specific vendor's runtime environment (application server).
When I was working with NetDynamics, a now-extinct Java application server, I found the learning curve to be substantial. Not only did I need to understand the ever-changing JDK (1.1.2, 1.1.3, 1.1.6, 1.2, etc.), but I also needed to understand the ever-changing framework of NetDynamics classes that sat on top of the JDK. These classes defined the API I needed to understand in order to write an application to run on the NetD application server. In the short time that it was viable, the underlying architecture of the NetD server changed dramatically several times as it moved to become a CORBA server and then a J2EE server. When NetD was acquired by Sun, rather than cementing its future, the acquisition cemented the feet of NetD. It soon became apparent that iPlanet (aka Netscape Application Server, aka Kiva) would prevail. Believe me, it was not trivial to convert an app from NetD to iPlanet.
So try to port a WebSphere application to WebLogic. It won't happen. Java may be cross-platform as a language, but as soon as you go beyond the JDK to a specific vendor's implementation, or a specific application server, you are tied to that vendor's proprietary solution.
I think there are two significant challenges that Java folks
have to deal with. First, I think almost everyone agrees that
the learning curve is huge. It takes a long time and a
significant investment to become proficient developing
applications for WebLogic or WebSphere. The second
challenge is performance. I believe people are beginning to
discover that Java applications often suffer from severe
performance issues.
Throw darts now.
The MS .Net approach is different. Here the story is language
choice. There are over twenty (20) languages currently being
developed to run on .Net. Fujitsu is developing COBOL .Net.
MS has VB and C# .Net. There are .Net versions of Eiffel,
Perl, Python, C++, even Java and JScript. Right now, the
.Net platform runs only on Windows (NT, 2K, XP, etc.).
Even in the Unix world, there are proprietary issues to deal
with. Sun Solaris is not the same as IBM AIX or HP UX. A
lot of people are coming to believe that Linux will prevail over all the vendor-specific flavors of Unix. IBM, for example, is making a strong push into Linux. Today, there is no version of the .Net platform for Linux. There may never be one. But it is not out of the realm of speculation.
With MS .Net, the story is not cross-platform compatibility but cross-platform integration: being able to allow an application written in any language to talk to any other application, written in any language, running on any platform. Applications talking to applications over the Web.
This is the power of Web Services. Applications sending and
receiving XML messages formatted in a standard way (using
SOAP). Text-based XML messages sent over HTTP.
Application integration using data formatted in XML
documents and messages formatted in XML/SOAP. The MS
Visual Studio .Net is the best RAD environment for
developing Web Services in almost any language.
Okay, kill me now.
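The "standard way" of formatting messages that Bill describes is easy to picture. A minimal sketch in Java, with the GetPrice payload invented purely for illustration (real toolkits generate this plumbing from a service description): any client, in any language, that wraps its XML payload in the same SOAP envelope can talk to any server that understands it.

```java
// Hedged sketch, not from the article: wrapping an arbitrary XML payload
// in a SOAP 1.1 envelope before it travels over HTTP.
public class SoapEnvelopeSketch {
    // The SOAP 1.1 envelope namespace, shared by all conforming toolkits.
    static final String SOAP_NS = "http://schemas.xmlsoap.org/soap/envelope/";

    static String wrap(String payloadXml) {
        return "<soap:Envelope xmlns:soap=\"" + SOAP_NS + "\">"
             + "<soap:Body>" + payloadXml + "</soap:Body>"
             + "</soap:Envelope>";
    }

    public static void main(String[] args) {
        // A hypothetical request any SOAP-aware client could send.
        System.out.println(wrap("<GetPrice><item>widget</item></GetPrice>"));
    }
}
```

Because the envelope and namespace are fixed by the SOAP specification rather than by any vendor, this is what makes the cross-language integration described above possible.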
With VB .Net the language has really evolved. It now fully supports inheritance, function overloading, function overriding, polymorphism, encapsulation... all the things I loved about PowerBuilder. But it goes beyond that. With Visual Studio .Net, I can develop a class in any .Net language (such as COBOL) and inherit from that component in any other language (such as VB or C#). Cross-language development, cross-language inheritance, cross-language debugging, cross-language structured error handling (try/catch), debugging from client to server to database. It is a beautiful thing.

VB .Net is now a real language. And Visual Studio .Net puts PowerBuilder to shame. I absolutely love the IDE. I have never been as frustrated using VS .Net as I was using PB version 7. And I had been using PB for almost eight years when PB 7 began to drive me crazy.
Take a look at the TPC-C benchmarks at www.tpc.org. The Windows 2000 platform with SQL Server 2000 is blowing away the competition in performance. Even IBM's DB2 benchmarks run better on Windows than on the AS/400. And MS is much lower on a cost-per-transaction basis.
For performance, Microsoft, Windows, and SQL Server
2000 win. For cost, Microsoft, Windows, and SQL Server
2000 win hands down.
For language choice... Java and C++ developers should take
a look at C#.
For RAD development Visual Studio .Net wins hands down
over PB... PowerBuilder developers take a look at VB.Net.
For generating web pages: take a look at the new drag-and-drop Web Forms in ASP.Net... and the code in ASP.Net is fully compiled, offering significant performance advantages over JSP or traditional ASP.
So, if you are a Java shop looking to reduce training costs,
learning curve, hardware costs, and application server costs...
take a look at C# and the MS .Net platform.
If you are a PowerBuilder shop, questioning the future of Sybase, wanting to build Web Services, and looking for better performance... take a look at VB .Net.

Having said all this, what one feature do I like in PowerBuilder that I can't get anywhere else? The DataWindow. But the DataWindow had its own shortcomings. It tended to be very client-centric, connecting from the client to the database, and it was frustrating trying to move to an n-tier architecture with DataWindows.
DataWindows worked best when they connected to the
database, issued dynamic SQL to the database, and managed
the buffers on the client. DataStores began to help. Stored
Procedure support began to improve. When it came to being
able to control the look-and-feel of a data-bound control on
a window... nothing compares to the DataWindow. Sigh!
But as a reporting vehicle, it fell short of other tools like
Crystal Reports.
Nobody has better support for XML/SOAP and Web Services
than the .Net Platform. The Java application servers are all
in a tizzy when it comes to a standard way of formatting
XML messages.
When it comes to an application server, Windows 2000 comes with COM+ (aka MTS). Why buy Jaguar CTS (EAS) to run on top of Windows when there is a better application server already there?
Oh... I forgot to mention... if you haven't figured it out by now, I am now a Developer Evangelist at Microsoft in Waltham, MA.
Well, that was a heavy, pro-Microsoft statement. But this column is called Opinions, so one may either agree or think about it differently. Let's go on. Peter Horwood replied with the following short comment:
C# - I have noticed that frequently foreign emails have a variety of spare random characters, so I just assumed that you had a special character that looked like ++ on your computer and # on mine! Sorry 'bout that.

The one big personal advantage that C# and VB might have is - presumably Microsoft was smart and they are not case sensitive. It may sound silly, but I honestly consider Java's biggest weakness to be case sensitivity. It is too easy to create a bug by creating two variables when you think you forgot to declare one because you typed the wrong case <breathe here>.

Are these assumptions correct? Who knows; if they are, maybe I'll look at them some day :-) However, if C# is too C/C++ish I'll have no interest whatsoever. But with strong type checking, VB may be worth looking at again.
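Peter's case-sensitivity complaint is easy to demonstrate. The following Java fragment (names contrived for illustration) compiles without a single error, yet contains exactly the silent bug he describes: two variables that differ only in case.

```java
public class CaseBug {
    static int total(int[] values) {
        int sum = 0;
        int Sum = 0; // legal second variable, because Java is case sensitive
        for (int v : values) {
            Sum += v; // typo: updates the wrong variable, compiles cleanly
        }
        return sum;  // always 0 -- the accumulated value is lost
    }

    public static void main(String[] args) {
        System.out.println(total(new int[]{1, 2, 3})); // prints 0, not 6
    }
}
```

In a case-insensitive language such as PB or VB, `sum` and `Sum` would be the same variable (or a duplicate-declaration error), so this mistake could not slip through.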
Asked directly, Bill adds the following clarifications to Peter's questions:

C# falls into the same category of languages as C++ and Java: case sensitive, and semi-colons are required. VB is like PB: case insensitive, and continuation characters are used where needed; semi-colons are not. This, to some extent, summarizes the decision today whether to use C# or VB within Visual Studio .Net.

Since VB has been brought into the category of first-class languages (OO, data types are the same as in other languages, no more Variant data type, data types are classes, etc.), the choice is more one of preference. Today, a VB class and a C# class will look almost identical from a syntax perspective. Yes, C# has { } (curly braces) and VB has Function/End Function syntax, but the same underlying classes from the framework are used.
So a VB class and a C# class that contain the same basic code will compile to the same intermediate language (MSIL), which is conceptually similar to Java bytecode. A .Net assembly, either a DLL or an EXE, contains this intermediate language. At runtime, the classes in the assembly are compiled by a Just-In-Time (JIT) compiler, and the resulting machine-code binary is cached for subsequent executions. Since both VB and C# compile to the same MSIL, they will perform nearly identically at runtime. There is no longer a performance advantage to using C++ or C# over VB.
So, if you are tired of typing curly braces and semi-colons, if
your shift key is wearing out, code in VB. On the other hand
if you want a syntax more similar to Java and C++, code in
C#.
VB is definitely worth looking at. Especially since developers
are a dime ($.01 CDN) in your neck of the woods.
Breathe here :-)

Finally, Don Draper, a well-known PowerBuilder and ASP (Active Server Pages) developer and speaker, adds his comments on all this:
Great thread! Let me just say that I agree with Bill 100%. The .NET platform is truly a giant step forward for Microsoft. We are using .NET now to build a new Web application and so far it's very nice. All of the shortcomings of ASP/VBScript and COM, including poor data typing, weak security, awful error handling, no OO and interpreted performance, are now gone. Microsoft is typically late to the party, but when they finally arrive, you know it.
We are using both C# and VB in our new app so we can get
a feel for both. C# appears significantly easier to learn than
Java and is syntactically like Java or JavaScript.
One wonderful feature of Web Forms is that you can truly separate the GUI (HTML) from the business logic in a fully OO environment (where have we heard this before?). Each web page may keep its business code in a separate code-behind file, and using the new Visual Studio makes this very easy.
In addition, Microsoft appears to be focusing more on server-side processing for Web apps. This means that you can use the .NET architecture and still fully support Internet Web sites with cross-browser functionality. In fact, nearly every server-side control can be set to render itself based on the calling browser. And the validation controls provide server-side and optional client-side code (or BOTH) with minimal coding effort.
Other new features include:
- all code is always JIT compiled; performance is amazing
- new ADO.NET provides direct native to support to SQL
Server for optimum performance
- fully OO - classes can inherit from other classes regardless
of language
- COM is now gone (as we know it) but is supported for
backward compatibility
- DLL hell is finally eliminated
- use almost any language(s) you want
- Web server and app configuration greatly simplified
- Web forms make web apps much easier to build yet retain
low-level control when you need it
- XML is easy with an XML control and the ability to treat
XML files as a database
- debugging, timing & global tracing was never easier!
- Scalability with powerful page caching
- flexible session support including cross-server for web
farms
- Re-usable code works like HTML tags
- Xcopy Web Site Deployment. No MTS/COM registration!
- Crash recovery.. advanced, easy to setup
- Thousands of object libraries including encryption, graphics
on the fly, scraping websites and more
- VB is no longer a second-class citizen...same capabilities
as C#/C++

I quit using PB after version 6.0 and then only used it to
create DataWindow based reports for some web apps. I too
did not like the new 7.0 interface and was discouraged to
see more bugs than in version 6. But even with VB.NET,
there still is nothing that can compete with the DataWindow.
For simple client/server apps I would use the new VB, but
for enterprise needs I would still use PowerBuilder.
Java developers will feel right at home with C#. Those 55
gazillion VB programmers now have the opportunity to build
some really nice, enterprise quality applications if they can
only master the OO part. Hey...perhaps this is a training
opportunity for someone!

Editor's remark: The thread went on... in the next issue
we're going to publish a dialog that took place between
Peter Horwood and Bill Heys a few months after the
above, with Peter having used C# in the meantime.

Page 7

PowerTimes

Communication Middleware for Mobile
Applications - A Comparison
by Dr. Silvano Maffeis, CTO, Softwired AG
About the author
Dr. Silvano Maffeis is CTO at Softwired, a company specializing in mobile messaging middleware. He holds 6
years of practical experience with CORBA middleware,
and 4 years of practical experience in Java messaging
middleware and wireless. Silvano Maffeis is the author of
various articles and books related to middleware.
He can be reached at Silvano.Maffeis@softwired-inc.com,
Phone: +41-1-445-2370.

Introduction
In this article we argue that to provide compelling data-driven
services to the mobile user, one often needs to deploy a
customized client application on the mobile device. This
client application usually has to communicate with
applications running on a server back-end. To expedite
the development of wireless services, companies are using
mobile middleware platforms to connect mobile Java
applications to the back-end. Two different middleware
standards are compared from the viewpoint of mobile
services: CORBA and JMS. We conclude that from the point
of view of user experience, scalability, and fit into a Java
application server environment (J2EE), a wireless enabled
JMS solution is preferred. This article assumes familiarity
with the general concepts of communications middleware,
as well as with CORBA and JMS.

Challenges of Mobile Applications


To provide compelling data-driven services to the mobile
user, wireless service providers seek out applications that
take advantage of the features of 2.5 and 3G networks:
always-on communication, push notifications, and higher
bandwidth. However, user expectations have already been
shaped through exposure to high-end personal computers
and high-speed Internet links. In the current (wired) Internet
environment, users are accustomed to good Quality of
Service, meaning that Web pages and e-mail servers are
highly available and accessible at good speed.
In an ideal world, the quickest way to provide good
applications for mobile devices is to port PC based
applications to wireless devices. However, the wireless
communication environment is characterized by lower
bandwidth than the wired Internet, and by sporadic network
disconnects. Therefore, to create a compelling user
experience, new applications need to be developed and
optimized for the wireless platform. For the sake of vendor
neutrality, scalability, user experience, and time to market,
those applications will consist of a rich client deployed
directly on the mobile device, and of a server application
running at the back-end. We foresee the adoption of Java
technology for this type of solution: J2ME (Java-2 Micro
Edition) on the mobile device, J2EE (Java-2 Enterprise
Edition) application servers at the back-end, and a 2.5 or
3G network between clients and servers.

J2ME only provides rudimentary communications
interfaces (e.g., TCP/IP streams, HTTP streams, and
datagram connections) for internetworking wireless clients
with servers. In order to cope with sporadic network
disconnects, with off-line service usage, as well as with the
peculiarities of the various wireless networks, developers
need to devote a substantial amount of their time to solving
technical networking issues, instead of focusing on the
business problem to solve. Companies are thus adopting
mobile middleware platforms aimed at enabling reliable,
secure, and scalable services in a 2.5 and 3G world. Mobile
services can now be developed and deployed more quickly,
by taking advantage of the offline operation, data
synchronization, reliability and security mechanisms
provided through the middleware.
In summary, by using Java middleware instead of coding
against the raw communications APIs provided by J2ME,
time to market is reduced and the user experience is
improved.
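The alternative the authors warn about can be sketched in a few lines. The class below is a minimal, hypothetical example of the store-and-forward plumbing a developer must hand-roll on top of raw connection APIs; the `send` method merely simulates a link that is down for the first two attempts, and all names are illustrative, not from J2ME or any real middleware:

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.List;
import java.util.Queue;

// Hand-rolled store-and-forward logic that middleware would otherwise provide.
class ManualOutbox {
    final Queue<String> outbox = new ArrayDeque<>();
    final List<String> delivered = new ArrayList<>();
    private int attempts = 0;

    void enqueue(String msg) { outbox.add(msg); }

    // Stand-in for a wireless link: fails the first two attempts, then recovers.
    private boolean send(String msg) {
        if (++attempts <= 2) return false;   // sporadic disconnect
        delivered.add(msg);
        return true;
    }

    // Retry loop the application must write itself (backoff, persistence,
    // and duplicate detection are all still missing here).
    void flush(int maxRounds) {
        for (int i = 0; i < maxRounds && !outbox.isEmpty(); i++) {
            if (send(outbox.peek())) outbox.remove();
        }
    }

    public static void main(String[] args) {
        ManualOutbox m = new ManualOutbox();
        m.enqueue("reading-1");
        m.enqueue("reading-2");
        m.flush(10);
        System.out.println(m.delivered); // both readings survive the outage
    }
}
```

Mobile middleware moves exactly this logic, plus security and synchronization, below the API line so the application code stays focused on the business problem.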

Middleware versus Micro-Browsers


The advantages of deploying a customized Java application
on a mobile device, as opposed to using a micro-browser to
access pages stored on a server, are manifold:
- The user experience can be improved by providing more
interaction (e.g., imagine an interactive, intelligent map as
opposed to downloading an image of a map from a server).
- Offline operation can be supported. The user can keep
on using the mobile application whilst disconnected
from the network. This is typically impossible with a
browser.
- Less data transmission: a client application can cache
data locally, compress data, or perform local computations
in order to minimize interactions between the mobile
device and the server.
- More interesting types of interaction are possible: push
notifications, peer-to-peer sessions, and so forth.
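The "less data transmission" point is essentially read-through caching. Here is a minimal, hypothetical sketch, where `fetchFromServer` stands in for a real wireless round-trip:

```java
import java.util.HashMap;
import java.util.Map;

// Read-through cache: answer repeat requests locally instead of over the air.
class DeviceCache {
    final Map<String, String> cache = new HashMap<>();
    int serverHits = 0; // how many requests actually crossed the network

    // Stand-in for a real network round-trip to the back-end.
    private String fetchFromServer(String key) {
        serverHits++;
        return "value-of-" + key;
    }

    String get(String key) {
        // computeIfAbsent fetches only on a cache miss
        return cache.computeIfAbsent(key, this::fetchFromServer);
    }
}
```

On a slow 2.5G link, every request the cache absorbs is a round-trip the user never waits for.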

The disadvantages are:
- Higher development costs, because client applications
need to be developed.
- Higher complexity of the overall solution, because
application code is distributed over client and server.
In summary, by using a middleware enabled client
application, users get a better usage experience and more
fun, at a lower cost. However, we believe that both models
(middleware and micro browsers) are appealing and have
their application areas. Actually, the two models can be
combined. For example, a micro browser can be used to
browse the service offerings of a provider, and to dynamically
download a middleware enabled client application.

CORBA Middleware
It has been proposed to use CORBA middleware on mobile
devices. Low-footprint CORBA implementations are being
worked on. Also, the OMG is working on a draft specification
titled Wireless Access and Terminal Mobility in CORBA
(dtc/01-05).
The CORBA specification was first presented back in 1992.
Thus, CORBA was not designed for mobile communications
and is used almost exclusively in server environments over
corporate networks. The standard gained popularity until
about 1998. With the success of the Java platform (and
notably J2EE application servers), attention has been moving
away from CORBA, towards middleware solutions which
are better integrated into the Java platform and are less
complex than CORBA: RMI and Enterprise JavaBeans
(EJBs) became popular.
As a matter of fact, CORBA was conceived for computing
environments in which multiple programming languages are
used. However, in homogeneous environments, for example,
J2EE or Microsoft .NET, developers tend to prefer the
middleware tools integrated into that environment.
These are the points in favor of using CORBA on mobile
devices:
- CORBA is a well-accepted standard.
- CORBA can be used from various programming
languages, not only from Java; CORBA is
programming-language agnostic.
- There is a substantial amount of CORBA experience,
books, etc. available.
- If a company has invested a lot in developing CORBA-based
services, mobile clients can connect to those
services more easily by using a CORBA ORB on the
device.
These are the points against using CORBA on mobile
devices:
- CORBA is inherently based upon a synchronous
request/response communication model. There is a misfit
between this communication model and the nature of
wireless networks: wireless networks are packet based,
sporadic network disconnects do occur, transmission
speeds and delays vary a lot, etc.
- The CORBA communication model assumes that
communication links are stable, have low error rates,
and that network disconnections occur only rarely. This
is an acceptable assumption in corporate (wireline)
environments, but not in wireless networks.
- Introduction of the Enterprise Java (J2EE) technology
has moved attention away from CORBA to other
communication mechanisms, notably RMI and JMS.
- Implementing bi-directional communication is
awkward, as this normally requires the provision of a
CORBA server object on the mobile device.
- CORBA is based on a classical client/server interaction
model. However, wireless services often require push
notifications, queued communication, and peer-to-peer
sessions. One could argue that this is granted through
the CORBA Notification Service and through CORBA
Time-Independent Invocations (TII). However, we
believe that these mechanisms are too heavyweight
and complex to be deployed on mobile devices.
- ORBs typically support only the TCP/IP protocol, but
no packet-based protocols optimized for 2.5G and 3G
networks.

In summary, a mobile CORBA ORB is to be considered by
companies that have built a substantial amount of CORBA-based
services, and want to make these accessible to
applications running on mobile devices. There is, however,
a misfit between the CORBA communications model and
the characteristics of wireless networks.

References:
- OMG standards: http://www.omg.org/
- Wireless CORBA specification:
http://www.omg.org/techprocess/meetings/schedule/Telecom_Wireless_FTF.html

JMS Messaging Middleware


Messaging middleware operates using message queues.
These queues are hosted on both the mobile devices and the
back-end servers. Communication is fully bi-directional, one-to-one, or one-to-many. Outbound messages are added to a
queue and are sent when a network connection can be
established between a mobile device and a server. This
enables effective communication between client and server
to take place despite sporadic network disconnects or periods
of off-line service usage. JMS is the de-facto standard for
messaging middleware in the Java domain. JMS is part of
the J2EE platform, and must be offered by application server
vendors, in order to claim J2EE compliance.
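The queue-based, one-to-many model described above can be illustrated with a toy in-memory broker. This is a teaching sketch only, with invented names; a real JMS provider adds persistence, acknowledgements, transactions, and wire protocols:

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Toy topic broker: each subscriber owns a queue; publish fans out to all,
// and messages wait in the queue until the subscriber drains it.
class ToyBroker {
    private final Map<String, List<List<String>>> topics = new HashMap<>();

    // Subscribe returns the subscriber's private inbound queue.
    List<String> subscribe(String topic) {
        List<String> inbox = new ArrayList<>();
        topics.computeIfAbsent(topic, t -> new ArrayList<>()).add(inbox);
        return inbox;
    }

    // One-to-many: every subscriber's queue receives a copy.
    void publish(String topic, String msg) {
        for (List<String> inbox : topics.getOrDefault(topic, List.of())) {
            inbox.add(msg);
        }
    }
}
```

Because the sender only ever touches a queue, it neither knows nor cares whether a subscriber is currently reachable over the wireless link; that decoupling is what makes the model tolerate disconnects.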

These are the points in favor of using JMS middleware on
mobile devices:
- Messaging middleware has been deployed very
successfully in mission-critical business systems since
the 1970s.
- Messaging middleware has a large share of the
middleware market and is the solution of choice for
financial trading floor systems, Enterprise Application
Integration (EAI), order processing systems, and other
areas.
- There exist very large JMS installations with thousands
of concurrent users.
- JMS is an asynchronous transport. Asynchronous
transports are ideally suited to the packet-oriented networks
over which 2.5 and 3G services will operate.
- Although JMS was not explicitly designed for mobile
devices, it provides an ideal abstraction layer for
developing mobile applications.
- Increased scalability: many mobile devices can send
messages to a server simultaneously. When messages
arrive at the server, they are added to an inbound queue
and can be dealt with when resources are available, or
can be forwarded to other servers for load sharing.
- JMS middleware provides mechanisms for
implementing fault tolerance and load balancing. This
is important for mobile services, which are likely to have
large numbers of concurrent users.
- JMS middleware is inherently more scalable than
CORBA: communication is mostly asynchronous, and
hence throughput is increased. Also, JMS messages can
be routed from one data center to another, or can be
dispatched to a cluster of servers for load sharing.
- The JMS model is simpler and less complex than
CORBA. Therefore developers need less time to get up
to speed.
- In practice, JMS allows wireless services to operate more
responsively, to recover from sporadic network outages
easily, and to allow mobile applications to be operated
offline. This makes for a much richer user experience.
- Messaging can be implemented elegantly atop
Bluetooth, Wireless LAN, GPRS, UMTS, Mobitex, etc.
- Messaging middleware typically provides more QoS
customization, as well as integrated security.
- JMS implementations for mobile devices are becoming
commercially available, for example, iBus//Mobile from
Softwired.
- JMS can be integrated with XML and SOAP, as well as
with other middleware technologies (CORBA, EJBs,
IBM MQSeries, Microsoft .NET, etc.).

These are the points against using JMS middleware on
mobile devices:
- JMS is of interest only if applications are written
predominantly (but not exclusively) in Java.
- JMS is part of the J2EE platform and is not officially
part of the J2ME platform yet. However, this is very
likely to change in the near future.

In summary, JMS provides a compelling interaction model
for mobile applications. JMS allows applications to run
efficiently atop 2.5 and 3G networks and to support
disconnected operation. JMS fits very well into the micro
Java and enterprise Java environment. Using JMS,
interactive wireless services can be developed more quickly,
and user experience is improved. Since JMS is a de-facto
standard, companies face less risk of being tied to a single
vendor.

Vendor Proprietary Middleware Interfaces


In any case, we strongly recommend you pick a mobile
middleware platform compatible with either CORBA or
JMS. Those are the predominant middleware standards in
the Enterprise Java world. However, most of the mobile
middleware products offered on the market today are not
written in Java, and come with programming interfaces
which are vendor proprietary. By embarking with such a
solution, you will need to train your staff on those vendor-proprietary
interfaces, and your wireless software will be
tied to that vendor's solution. This means you risk redoing
your wireless architecture at a later point if you wish to switch
to Java or to a standards-based middleware platform.

Conclusions
Wireless networks behave differently than wireline networks:
devices tend to lose and regain network coverage, sporadic
network disconnects do occur, bandwidth is much lower and
varies substantially over time. Communications middleware
thus needs to address these problems and to provide an
adequate programming model. In this article we have shown
that Java messaging middleware (JMS) is better suited than
CORBA for running applications over wireless networks.
The main reason is that JMS provides an asynchronous,
message based transport. Using message queues hosted on
both the client and the server side, JMS applications can be
operated in disconnected mode, and data synchronization
occurs transparently and immediately, without user
intervention.

References
- Professional JMS, Scott Grant et al., Wrox Press Inc; ISBN: 1861004931
- iBus//Mobile wireless JMS product:
http://www.softwired-inc.com/products/mobile/mobile.html
- Introduction to JMS (Java Message Service) and Wireless JMS:
http://www.Hayabooza.com/
- JMS Specification:
http://java.sun.com/products/jms/docs.html

USAGE METERING WITH MOBILEBUILDER AND ASA ULTRALITE
by Daniel Wenger and Arthur Hefti

About the authors
Daniel Wenger has a degree in technical informatics. He's
working for CATsoft as a PowerBuilder, C/C++ and Web
programmer. He can be reached by e-mail at
daniel@catsoft.ch.
Arthur Hefti has a degree from the Swiss Federal Institute
of Technology and is managing director of CATsoft
Development GmbH. He is a well-known consultant in
the PowerBuilder area and develops in various
programming languages. Arthur can be reached by e-mail
at arthur@catsoft.ch.

Introduction
The target of our project was the creation of a handheld-based
prototype that could be used to enter values from heat
usage and other meters. The daily work would be
synchronized from a PC to the handheld device and the
collected data would be synchronized back for further
processing.
For this prototype we decided to use a Palm and the ASA
UltraLite database from Sybase. As an IDE we evaluated
MobileBuilder from PenRight.

What is MobileBuilder?
MobileBuilder provides a complete, Windows-based
development environment specifically designed to create
powerful pen-based solutions for a wide choice of mobile
platforms on the market. With MobileBuilder developers can
quickly create cross-platform applications for mobile
platforms based on the Palm OS, Windows CE, Pocket PC,
Windows NT, 98, 95, 3.1, MS-DOS and DPMI operating
systems - all from a single application design.

MobileBuilder IDE
Coding in MobileBuilder is done in C. To compile a project,
a C compiler matching the deployment platform is defined
and called from the IDE.
MobileBuilder includes a code assistant that helps to find
the right API function out of the roughly 350 functions that
MobileBuilder provides with its libraries.
The IDE comes with syntax coloring for scripts and an easy-to-use
drag-and-drop based form designer with property tabs
for each element. The IDE also includes various wizards,
for example to define a connection.

MobileBuilder IDE with Project Tree on the left and Script Window on the right

Adaptive Server Anywhere UltraLite
ASA UltraLite is a relational database with some limitations,
e.g. no triggers or procedures. It runs on handheld computers
and needs only a few kilobytes of memory. It includes
synchronization with a database running on a PC through
MobiLink.

Project Details
One of our customers is in the heat usage reading, calculation
and billing business. They have a lot of customer service
technicians reading the values from heat usage and other
meters. Up to now the technicians have written the data on
paper and entered it in the evening into an application on
their notebook PC. The data on the notebooks was
synchronized with the headquarters on a weekly basis.
The new application on the handheld is used to enter data
directly at the customer site. The daily work on the notebook
database is synchronized via MobiLink with the UltraLite
database on the Palm. During the day the customer service
technician enters the values and synchronizes them back in
the evening.

Conclusion
The combination of MobileBuilder with ASA UltraLite is a
powerful one for developing applications for handheld
devices. When there is a need for a database application it's
very easy to add the ASA UltraLite database from Sybase as
a component.
When starting database development for mobile applications
there are some things to consider and there are tricks you
have to know. The combination of MobileBuilder with ASA
UltraLite makes the start in mobile development easier.

More Information

CATsoft Development GmbH


Haldenstrasse 65
8045 Zurich
Switzerland
Web: www.catsoft.ch
Mail: mobile@catsoft.ch

PenRight! Corporation
6480 Via Del Oro
San Jose, CA 95119, USA
Web: www.penright.com

Sybase Incorporated
Worldwide Headquarters
6475 Christie Avenue
Emeryville, CA 94608 USA
Web: www.sybase.com

Working with MobileBuilder


Working with MobileBuilder is very convenient. The forms
can be designed using drag and drop and a code assistant
helps to get familiar with the initially unknown functions
and events. The different SQL statements, database profiles
and field mappings can be maintained as well.
The only problems we had in the beginning were related to
the compiler for the Palm, where you have to consider some
special features, e.g. that the segment size is limited to 32 KB.

CATsoft Development GmbH specializes in the development
of custom-made Web applications, client/server applications
and mobile solutions. CATsoft is located in Zurich,
Switzerland. More details on the projects and the company
can be found at www.catsoft.ch.

Coeus Application in the Palm Emulator


THE ARCHITECTURAL IDE AND OMG'S MDA
by Richard Hubert

About the author


Richard Hubert is an accomplished software architect who has won numerous international
awards for large-scale
software systems and architectural tools. As founding director of Interactive Objects Software GmbH (iO), he leads
a large team of professional architects who apply Convergent Architecture across diverse industry segments. In
2000, iO introduced its Architectural IDE for MDA,
ArcStyler (www.ArcStyler.com).
He is author of the recent OMG Press book Convergent
Architecture (www.ConvergentArchitecture.com) and is
an active contributor to the OMGs MDA standardization
effort.

Introduction
This paper provides an overview of an Architectural IDE,
an automation platform implementing the concepts of the
OMG Model Driven Architecture, in the context of the current
OMG submission (MDA 2002) and as defined in the recent
OMG Press book, Convergent Architecture (Hubert 2002,
www.ConvergentArchitecture.com).

The Architectural IDE
A full-coverage tool suite is an integral part of an IT
architectural style, as defined by the Convergent Architecture.
This tool suite is analogous to a pre-configured machine
shop which has been specifically designed to meet the
requirements of an experienced architect. It provides tightly
integrated tools and automated assistants to support the
effective construction of architecture-conforming IT systems.
This integrated approach to high-level architectural tools is
known as an IT-Architectural IDE.1
Figure 1 summarizes the development coverage of the
Architectural IDE and its support of the Rational Unified
Process (RUP) and its public-domain counterpart, the
Unified Process (Kruchten 1998).
The principal objective of the Architectural IDE is to
automate and assist the Critical Path Workflows in a concrete
instance of the Rational Unified Process. At the top of Figure 1,
the Critical Path Workflows are shown as they progress with
time from left to right. Situated below each workflow are
major categories of artefacts that must be created, integrated
and manipulated during the workflows. Observing the figure,
we can briefly summarize the requirements on the
Architectural IDE as follows:

Business and Requirements Models: The Business
Analysis and Requirements Workflow requires tools to
easily record and manipulate business structures and
flows. The modeling activities in this workflow are
highly interactive. Thus, the tool must help a designer
to rapidly record and structure significant amounts of
business information without hindering the dynamics
of group analysis sessions. The resulting models should
then be equally valuable as a source of business
information as well as for convergent refinement into
software systems.

1 IDE is an acronym for Integrated Development Environment,
made popular in the IT field by mainstream programming IDEs.
IT-Architectural IDE may be abbreviated to Architectural IDE when
the IT context is clear.

Figure 1, Covering the Critical Path Workflows of the RUP.

Common Model Repository: The Business and
Requirements Models should initiate a trackable thread
of information and design refinement across all other
workflows. To support this thread, both business and
technical design information should be saved in a well-defined
central format (UML) in a common model
repository. This repository must be open to incremental
exchange and integration at any time with other tools
(XMI/XML, open Java API).

UML Design Models: The creation of UML models
according to the Analysis and Design Workflow should
proceed in an automated or assisted manner using the
patterns defined by a well-formed architectural style.
Further automation should help the developer refine
UML models according to the well-documented
modeling style, an integral part of the architectural
style. This process of tool-assisted modeling should
continue until the model is sufficiently complete to
permit automatic generation of all those aspects of a
software system that can be reasonably represented in
UML (as defined by the modeling style). To enable
generation of high-value artefacts, the tools must permit
the developer to automatically verify and debug the
UML model according to the specification of the
modeling style and according to the real requirements
and constraints of the target deployment environment.

Implementation, Build, Deployment and Test Artefacts: Significant portions of these artefacts can be automatically generated from any UML design models that
conform to the modeling style. This generation occurs
according to a documented Technology Projection that
has been designed to map a style-conform UML model
to a particular technology. Thus, the IDE must support
pragmatic, flexible configuration of Technology Projections and their automatic use in an incremental development process. Lastly, the tools must help developers create new Technology Projections or modify and
extend existing Technology Projections.
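To make the idea of a Technology Projection concrete, here is a deliberately tiny, hypothetical model-to-code mapping: a "cartridge" that turns an abstract component description into a Java class skeleton. Real MDA generators such as ArcStyler work from full UML models with template engines; none of the names below come from the product:

```java
import java.util.List;

// A minimal "technology projection": map an abstract model element
// (component name + attributes) onto target-platform source code.
class TinyCartridge {
    // A model element as it might arrive from a UML repository (simplified).
    record Component(String name, List<String> attributes) {}

    // The projection: one fixed mapping from model to Java source text.
    static String generate(Component c) {
        StringBuilder src = new StringBuilder("public class " + c.name() + " {\n");
        for (String attr : c.attributes()) {
            src.append("    private String ").append(attr).append(";\n");
        }
        src.append("}\n");
        return src.toString();
    }

    public static void main(String[] args) {
        Component order = new Component("Order", List.of("customer", "total"));
        System.out.println(generate(order));
    }
}
```

Swapping the `generate` method for a different one, say an EJB or XML mapping, without touching the model is exactly the separation a cartridge is meant to provide.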

ArcStyler
The rest of this article describes how an available
Architectural IDE, the ArcStyler (iO 2002), meets these
requirements.
Figure 2 introduces the main modules of the ArcStyler, an
Architectural IDE as defined by the Convergent Architecture.
The figure positions the components, or modules, of the IDE
with respect to the RUP workflows. It also shows some of
the major tools that are currently encapsulated or explicitly
coordinated by the IDE: Rational Rose, JBuilder, and
J2EE/EJB application servers, for example.


The Architectural IDE is arranged into five major
components, or modules as shown in Figure 2, each
supporting one or more of the steps along the critical
development path. Each module is summarized here in
conjunction with the techniques and automation levels they
implement to support the development process.
The Convergent Business Object Modeler (C-BOM) assists
both the IT designer and the business domain expert in their
joint task of requirements analysis and elaboration of the
business structure and dynamics. It provides visual modeling
assistance to help identify and document a convergent system
using the RASC components described above. It does this
using the proven techniques of Responsibility Driven Design
(RDD) and Class Responsibility Cards (CRC) as prescribed
by Convergent Engineering (Taylor 1995) and OPEN
(Graham 1997). Cross-functional teams use this module
during the highly collaborative task of modeling, documenting
and testing the business structure and business dynamics.
This task also leverages Analysis By Design (ABD) and
dynamic Walk-Through/Run-Through techniques to verify
model completeness and quality as described by Convergent
Engineering. Run-Through results are recorded both as
formal State Transition Tables (STT) as well as a more
intuitive graphical form. The graphical representation provides
visual documentation, tracking and playback. The rules of
the modeling style are used to automatically verify and report
on the model's integrity and completeness in both structural
and dynamic aspects at any time. These reports based on
verified models serve as design sign-off documents. They
also include test cases in the form of State Transition Tables
and the visual scenarios documenting all Run-Through paths
through the convergent business system. The results of the
business modeling session are stored in a repository based
on the standard Unified Modeling Language (UML 2000)
metamodel and the Extensible Markup Language (XML).
This repository is used in a federated manner by all other
modules of the Architectural IDE to guarantee the translation-free,
lossless enrichment of design information according to
the principle of Component Metamorphosis as defined by
the Convergent Architecture.
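A State Transition Table of the kind the C-BOM records can be represented and mechanically checked. The sketch below uses hypothetical names, not the tool's actual format, and verifies one simple notion of dynamic completeness: every state defines a transition for every event:

```java
import java.util.Map;
import java.util.Set;

// A State Transition Table: (state, event) -> next state.
// Completeness check: every state must define a transition for every event.
class SttCheck {
    static boolean isComplete(Set<String> states, Set<String> events,
                              Map<String, Map<String, String>> stt) {
        for (String s : states) {
            Map<String, String> row = stt.get(s);
            if (row == null) return false;         // state has no row at all
            for (String e : events) {
                if (!row.containsKey(e)) return false; // missing transition
            }
        }
        return true;
    }

    public static void main(String[] args) {
        Set<String> states = Set.of("Open", "Closed");
        Set<String> events = Set.of("submit", "cancel");
        Map<String, Map<String, String>> stt = Map.of(
            "Open",   Map.of("submit", "Closed", "cancel", "Closed"),
            "Closed", Map.of("submit", "Closed", "cancel", "Closed"));
        System.out.println(isComplete(states, events, stt)); // true
    }
}
```

A check like this is what lets Run-Through results double as sign-off documents: a gap in the table is a concrete, reportable defect rather than a reviewer's impression.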
The Convergent Pattern Refinement Assistant (C-RAS)
picks up the results of the C-BOM and helps a designer
graphically evolve the business model into a more detailed,
more technically precise model representation in UML. This
task proceeds in a structured manner according to the principle
of Convergent Engineering. This is achieved by using
refinement patterns based on those developed by the OPEN
Consortium (Henderson-Sellers 1998), which are employed
to guide the designer and to check the integrity of the
refinement. With the visual support provided by the tool, the
CRC component representations from the business model
are mapped to UML component representations without
losing track of their origin and without losing the existing
information on the business components: the information is
enhanced and refined, not translated or replaced. In the spirit


Figure 2, Modules and Environment of the Architectural IDE


of assisted modeling, much of the UML refinement is
handled automatically by the tool itself according to the
patterns and the UML modeling style defined by the
Convergent Architecture. For example, the projection to a
standard J2EE/EJB component model starts here. The UML
model is automatically elaborated into a J2EE/EJB compliant
design using reasonable defaults. The designer can influence
these defaults, but the tool suggests a well-formed, standard
structure for the designer to build upon or tune in the
subsequent stages of refinement. Once again, all results are
stored in the federated UML repository.
The Convergent UML Refinement Assistant (C-REF)
reads the results of the C-RAS and now presents the
Convergent Component model at the level of an advanced
UML-modeler for further enrichment and tuning of the
design. This is the point where system interaction and access,
whether per Internet2 or other channels, and interaction with
external systems, whether internal or per Internet3 is
elaborated in detail using the standard Unified Modeling
Language. The tool provides several intelligent architecture
assistants during this phase to further preserve convergence,
model integrity and assure technological feasibility of the
design. The first assistant checks at any time whether the
detailed model is complete and well formed according to
the UML, J2EE/EJB and Java standards. Another assistant
helps the designer proceed according to the specific modeling
style of the architecture by providing specialized wizards,
dialogs, views and diagrams. A third assistant helps generate
UML models for default user-access and test components

2 Often referred to as Business-To-Customer interaction (B2C).
3 Often referred to as Enterprise Application Integration (EAI)
and Business-To-Business (B2B), respectively.

(known as Accessors), based on the existing business


component model. This allows the designer to model and
reuse complex Internet access and interaction logic in UML.
A fourth assistant helps the user configure a particular
technology projection and its runtime environment. Based
on this configuration, the assistant then checks the model at
any time for its technological feasibility. This verification
step is analogous to a compiler, except it is working at the
level of a UML-model. Based on the capabilities and
constraints of the configured technology projection, the
assistant points out which aspects of the model cannot be
effectively mapped to the selected implementation
technology. With this just-in-time feedback, the modeler can
better maintain architectural integrity and assure the quality
of the subsequent system generation.
The Convergent Translative Generator (C-GEN) reads the
UML model in parts or in its entirety from the C-REF tool
and generates the complete component infrastructure
including the environment for configuration, construction
and deployment of the convergent system. The generator
translates the UML model to the particular infrastructure
while preserving convergence. To do this, it uses
transformation scripts, code generation templates (e.g. for
Java, HTML, J2EE/EJB, XML, Java-IDE), technology
capability tables (e.g. for a J2EE/EJB application server) and
other information. All transformation instructions belonging
to a particular technology projection are encapsulated in a
so-called generator cartridge, which can be installed,
configured and used as a unit by the developer. The generator
cartridge is referred to simply as a cartridge when used in
this context. There can be any number of cartridges, one for
each particular model-to-code generation to a target
infrastructure. The translative generator itself is oblivious of
the specific content of the cartridge. Thus, cartridges also
exist for model-to-model transformations between modeling
steps. In addition, combinations of cartridges can be used in
concert to guarantee proper modularity and separation of
concerns between coexisting types of infrastructures. The
source code and other artifacts generated by the cartridge
are of consistent, pre-determined quality. The internals of
the generated artifacts (e.g. source code, deployment
configuration, build configuration) can be modified at places
deemed appropriate by the architectural style. The cartridge
uses several techniques to enable the controlled modification
of generated artifacts. However, it is important to note here
that the Convergent Architecture mandates clean model-based,
model-driven development. This means that all
artifacts that were generated from the UML model can only
be extended or modified in a controlled manner, as defined
by the architectural style. This is an explicit enforcement of
the model-driven development approach. However, the rigor of
this enforcement can be regulated using the Meta-IDE
(below) to modify the rules of the code generation.
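The cartridge concept (a bundle of templates and transformation rules that translates model elements into source code) can be made concrete with a toy sketch in Python. All names below are hypothetical illustrations of the idea, not the ArcStyler API:

```python
# Toy "generator cartridge": a template plus a transformation rule that
# translates a tiny UML-like component model into Java source code.

JAVA_CLASS_TEMPLATE = """public class {name} {{
{fields}
}}"""

def generate_java(component: dict) -> str:
    """Translate a minimal component model into a Java class skeleton."""
    fields = "\n".join(
        f"    private {ftype} {fname};"
        for fname, ftype in component["attributes"]
    )
    return JAVA_CLASS_TEMPLATE.format(name=component["name"], fields=fields)

# A minimal "model", as it might be exported from the modeling tool:
customer = {"name": "Customer",
            "attributes": [("name", "String"), ("age", "int")]}

print(generate_java(customer))
```

A real cartridge does far more (deployment descriptors, build files, protected regions for manual code), but the mechanism is the same: the generator stays oblivious to the target language, and all target-specific knowledge lives in the cartridge's templates.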
Not all aspects of a system can be reasonably represented in
UML models or derived and generated from UML models.
These aspects must be developed at the source code level.
To do this, the Architectural IDE leverages one of the several
Java IDEs available on the market. The Java IDE may be
used to refine, compile and debug the artifacts generated by
the C-GEN module. These include Java programs,
configuration files, the build environment, test infrastructure
and deployment information. During the generation process,
the cartridge clearly demarks and annotates the areas where
additions can or should be made in the Java IDE. This helps
the developer make rapid additions while maintaining
structural integrity and synchronization with the UML models. In addition, the cartridge generates the artifacts
required by the Java-IDE in order to load, build, deploy and
test the system in the context of a specific runtime
infrastructure. This includes default test code to permit
evolutionary modification and testing of the system.
The Convergent Meta-Programming IDE (Meta-IDE) is
the visual development environment for a generator
cartridge. The development of a cartridge can be regarded
as meta-programming since the scripts developed here drive
the translative generation of many other programs and
models in accordance with MDA concepts. The Meta-IDE
is required only when a developer needs to extend or adapt a
cartridge. In this case, the cartridge is visually developed,
tested, traced and debugged in a similar fashion to well
known programming IDEs for C++ or Java. The Meta-IDE
is used, for example, to modify the HTML and J2EE generation templates in order to produce a different look-and-feel in compliance with a particular web site branding
or a corporate identity. Using the Meta-IDE, the chief
architect and lead designers of large, multi-team IT
organizations have a tool to tailor and adapt the architectural
style and its MDA support in a well-defined place and well-defined form. This helps guarantee a consistent level of well-documented quality and architectural integrity across all
projects.

The Convergent Implement, Deploy and Test
Environment (C-IX). Model-driven development can also
cover the areas of user interaction and access to and from
external systems. This is achieved in part in the C-REF
module as described above, where the appropriate modeling
capabilities and assisted modeling style is provided. To enable
end-to-end MDA, the UML Accessor models must be
mapped to a stable deployment infrastructure. This
infrastructure is provided by the Accessor cartridge. The
Accessor cartridge complements the Accessor modeling style
with a corresponding technology projection that generates
infrastructure for various implementations based on J2EE
standards such as JSP, Servlets and Web Archives. The
Accessors provide a higher level of UML design for Web
interface and Web service design as well as the design of
their respective test and deployment properties. They enable
consistent generation of well-formed deployment and test
infrastructure. In addition, specification at the level of UML
models simplifies the migration process as implementation
technologies change.

Conclusion
An architecture-driven approach according to the OMG's
Model Driven Architecture requires significant tool support
for both model-to-model and model-to-code transformations.
In addition, tool automation according to a well-formed
architectural approach can significantly improve quality
along the critical development path as well as across
development projects. For this reason, the Convergent
Architecture defines a full-cycle tool platform, known as an
Architectural IDE, as an integral part of a holistic architectural
style. The intense effort required to develop or integrate a
dependable tool environment is chronically underestimated.
In fact, the effort and skills required to develop a high-level
platform for Model Driven Architecture are prohibitive even for
the largest IT organizations. Thus, introducing a pre-defined,
pre-tested Architectural IDE can significantly reduce costs
and risks in IT projects.
In this article, the underlying concepts and the rationale behind
an Architectural IDE were outlined, and an available product,
the ArcStyler, was used to exhibit how these concepts are
applied in real-world situations.

Bibliography
MDA, 2002. OMG Model Driven Architecture Initiative. http://www.omg.com/mda, http://www.omg.org/cgi-bin/doc?ormsc/02-01-04.pdf
Kruchten, P. 1998. The Rational Unified Process. Addison Wesley Longman. ISBN 0-201-60459-0
Hubert, R. 2002. Convergent Architecture: Building Model-Driven J2EE Systems with UML. New York: John Wiley and Sons, OMG Press. ISBN 0-471-10560-0. www.ConvergentArchitecture.com
iO GmbH (Interactive Objects Software GmbH). 2002. The ArcStyler Architectural IDE for MDA. www.ArcStyler.com, www.io-software.com
UML, 2000. The OMG Unified Modeling Language Specification, Version 1.3, March 2000. http://www.omg.org/cgi-bin/doc?formal/2000-03-01
Graham, I., Henderson-Sellers, B., Younessi, H. 1997. The OPEN Process Specification. Addison Wesley Longman. ISBN 0-201-33133-0
Taylor, D. A. 1995. Business Engineering with Object Technology. New York: John Wiley & Sons. ISBN 0-471-04521-7

IN THE NEXT ISSUE...

The following are some of the articles that you will find in
the next issue that we are already preparing:

Versioning in the .NET framework, by Alan Walsh
Improve your PowerBuilder code, by Hervé Crouzet
A primer on Enterprise Application Integration, by Thomas Fröling

Furthermore, there will be another WebWorker article by
Rolf André Klaedtke, a product review by Mark Lansing
and a follow-up on the PowerBuilder vs. Java vs. VB vs.
C# discussion. That's not everything, but the remaining
articles are not confirmed yet, so we won't talk about them...

If you like PowerTimes, please tell your friends and colleagues so that they can subscribe as well. Remember, subscriptions are free.

If you would like to advertise in PowerTimes or on our
website, please contact either Rolf André Klaedtke at
rak@powertimes.com or Mark Lansing at
mlansing@powertimes.com. Thank you for your support.

WEBWORKER: CREATING A SEARCH FORM
USING A STORED PROCEDURE
by Rolf André Klaedtke
About the author
Rolf André Klaedtke is an independent consultant and
software developer with over 15 years in the IT industry,
mainly working with IBM midrange systems. He is
the publisher of PowerTimes and has been the president of the Sybase and PowerBuilder User Group Switzerland. In 1996 and 1997 he was the main organizer of the Swiss PB Conference. Currently he is
mainly working in AS/400 and web development
projects using Macromedia's tools. You may write Rolf
André at rak@powertimes.com.

Introduction
Over the past few years I have developed several websites,
almost all of them consisting of static HTML pages, mostly
by using HoTMetaL Pro and later Adobe's GoLive.
About a year ago, I thought that it would be a good idea to
finally back the PowerTimes web site with a database. As
our provider supports MS SQL Server 7 and ASP, that was
the technology to be used.
In this new column, I will share my experiences about
developing our site. This will consist of technical tips,
solutions to problems that I, or others, encountered,
interesting resources on the web, and tools that I use.
However, I won't stay with ASP forever; the next step will
be to explore JSP, Servlets, EJBs, etc.
Of course I'm looking forward to your feedback. Please don't
hesitate to send me your tips on improving the code that I'll
present or let me know of additional resources.

What tools am I using?
When you decide to develop a website, either static or
dynamic, you have a lot of choices to make. I won't list all
the questions here; rather I will let you know what I've been
using.

Editor: Macromedia Dreamweaver UltraDev
Graphics: Macromedia Fireworks and PaintShop Pro
Scripting: JavaScript, ASP
CSS: TopStyle Pro
Database: MS SQL Server 7

These are the base tools; as I use other utilities I'll introduce
them as well.

Using a stored procedure to query your database
In this first article I'll show you a solution to a question posted
by Gokhan Aytekin in a Macromedia user forum in January
2002: "I want to build a search form with 3 text fields. When
I submit the form I want to send the values in the text fields
to a Stored Procedure to query the database. Can you explain
which is the easiest way?".
I will go through the necessary steps to accomplish this,
assuming that you're familiar with HTML, creating forms
and a record set in Dreamweaver UltraDev. I won't go into
too much detail here. If you need help with these more basic
steps, check the booklist at the end of this article.

Creating the Search Form
If you wish to see the form this article is based upon, point
your browser to http://www.powertimes.com/pages/
events_search.html (see figure 1). The form allows you to search
for events in an events table, providing between 0 and 8
search arguments.

Figure 1, Search form with text fields (extract)

When the form is submitted, another ASP page is called,
where all the work is done. Here's the part that does this:

<form name="form1" method="get" action="events_list.asp">
Creating the Stored Procedure
Before creating the form, we need to create the stored
procedure. For this example, we use MS SQL Server 7.
In the Enterprise Manager, open your database and select
"Stored Procedures". You should see a certain number that
have been created when you set up the database. Select the
option to create a new one (a right-click in the right panel will
do) and a window will open, allowing you to enter the code
for the procedure. Here's the one that we're using for this
example, without any extra code for validation or error
checking:

CREATE PROCEDURE sp_listEvents
    @title varchar(100) = NULL,
    @location varchar(50) = NULL,
    @city varchar(50) = NULL,
    @state varchar(50) = NULL,
    @country varchar(60) = NULL,
    @typeofevent varchar(50) = NULL,
    @organiser varchar(75) = NULL,
    @startdate varchar(10) = NULL
/*
Object: sp_listEvents
Description: Displays a list of events according to parameters received
Author: Rolf André Klaedtke
Date: November 11, 2001
*/
AS
SELECT * FROM dbo.event
WHERE (evt_title like @title OR @title = '' OR @title is NULL)
    AND (evt_location like @location OR @location = '' OR @location is NULL)
    AND (evt_city like @city OR @city = '' OR @city is NULL)
    AND (evt_state like @state OR @state = '' OR @state is NULL)
    AND (evt_country like @country OR @country = '' OR @country is NULL)
    AND (evt_typeofevent like @typeofevent OR @typeofevent = '' OR @typeofevent is NULL)
    AND (evt_organiser like @organiser OR @organiser = '' OR @organiser is NULL)
    AND (evt_startdate like @startdate OR @startdate = '' OR @startdate is NULL)
    AND (evt_PublishFlag = 'Y')
ORDER BY evt_country, evt_city, evt_startdate, evt_title
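The "match OR empty OR NULL" trick is not specific to Transact-SQL. The following minimal sketch uses Python's built-in sqlite3 module and a made-up two-column events table to demonstrate the same optional-filter pattern:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE event (evt_title TEXT, evt_city TEXT)")
conn.executemany("INSERT INTO event VALUES (?, ?)",
                 [("PB Conference", "Zurich"), ("Java Summit", "Geneva")])

def list_events(title=None, city=None):
    # Each condition is satisfied either by a match or by an
    # empty/NULL argument, like the WHERE clause of sp_listEvents.
    sql = """SELECT evt_title FROM event
             WHERE (evt_title LIKE ? OR ? = '' OR ? IS NULL)
               AND (evt_city  LIKE ? OR ? = '' OR ? IS NULL)"""
    params = (title, title, title, city, city, city)
    return [row[0] for row in conn.execute(sql, params)]

print(list_events())               # no filters: all events
print(list_events(city="Zurich"))  # only events in Zurich
```

Passing "%" as a default value, as the article does later with UltraDev, works for the same reason: LIKE '%' matches every row.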

There are a few important points in this stored procedure:

1. The arguments must be followed by "= NULL", e.g.
@title varchar(100) = NULL, otherwise the SP will
expect an argument. So the user would have to provide
a search value for each argument.
2. It's always a good idea to comment your code; the same
is true for your stored procedures.
3. The SQL code which finally selects the values in the
table is rather long in this sample. This is due to the fact
that an argument may or may not contain a value, which
is expressed as follows:

AS
SELECT * FROM dbo.event
WHERE (evt_title like @title OR @title = ''
OR @title is NULL) AND

Here, evt_title is the name of the column in the database,
and @title is the argument, as declared in the beginning. So, we
select a row in the table if the column evt_title equals the
argument @title OR if the argument @title is empty (note
that I check for NULL and for '', which may not be
necessary). We repeat this test for each argument, which
corresponds to the search fields in our search form, and also
to our fields in the database.

The Result Page
The page "events_list.asp" is a little more interesting. First,
in Dreamweaver, I created the page as I wanted it to look:
there's a table with two rows (one for the title, one for
the record) and five columns in this case, named "Date",
"Title", "Location", "City" and "URL". We won't worry about
the rest of the functionality, like navigation, here.
The next step is to create a record set. But instead of selecting
a table in the "Database Items" field, we select "Stored
Procedures" in this dialog (see figure 2).

Figure 2, Creating the record set

Once you select the stored procedure that was created earlier,
you'll notice that in the "Variables" section, UltraDev
automatically displays the arguments expected by the stored
procedure. All you need to do is to fill in a default value ("%"
in this case) and the "Run-time Value" column. This is in
fact the input from the form that you want to pass to the
stored procedure. For the first argument in our example, this
would be:

Request.QueryString("Evt_Title")

I assume that you know about the Request object; otherwise
you'll find a description in each decent book on Active Server
Pages.
Once you've filled in every line according to the sample above,
you're basically done: the user can enter any number of
search values into the form. When the form is submitted,
the values are passed to the second page, which reads them
in and passes them to the stored procedure. The procedure
queries the database and returns a result set, which in turn
can be displayed on the page. There's nothing really magical
going on here.

Need more help?
If you're not feeling comfortable with Active Server Pages
or SQL Server, you may want to check out the following
resources:

Websites
http://www.sqlteam.com/
http://www.4guysfromrolla.com
http://www.aspemporium.com
http://coveryourasp.com/

A special note on this last one, as it's very interesting: in fact,
James Shaw, the creator, lets you download the full code of
the entire site for you to study and reuse. An excellent
learning tool.

Books
The Guru's Guide to Transact-SQL by Ken Henderson
(Addison-Wesley, ISBN 0-201-61576-2)
There are plenty of books on ASP and I suggest you search
for one on amazon.com or in your favorite bookstore.

If you have a question regarding the above, or if you'd like
to add a comment or provide some more input, please don't
hesitate to contact me - I won't miss passing on the credit!
I still have some ideas and samples that I'd like to write about
in upcoming issues, but if you have a topic of interest, please
let me know. Maybe you'll even consider sharing your ideas
and knowledge in your own article?
TESTING YOUR WEB APPLICATION:
A QUICK 10-STEP GUIDE
by Krishen Kota
About the author
Krishen Kota is a 10-year veteran of the information
technology consulting industry and is President of
AdminiTrack, Inc. (www.adminitrack.com), which
provides a web-based issue and defect tracking application
designed specifically for professional software
development teams. Krishen can be contacted via email
at kkota@adminitrack.com.

Introduction
Interested in a quick checklist for testing a web application?
The following 10 steps cover the most critical items that I
have found important in making sure a web application is
ready to be deployed. Depending on size, complexity, and
corporate policies, modify the following steps to meet your
specific testing needs.

Step 1 - Objectives
Make sure to establish your testing objectives up front and
make sure they are measurable. It will make your life a lot
easier to have written objectives that your whole team
can understand and rally around. In addition to documenting
your objectives, make sure your objectives are prioritized.
Ask yourself questions like "What is most important: minimal
defects or time-to-market?"
Here are two examples of how to determine priorities:
If you are building a medical web application that will assist
in diagnosing illnesses, and someone could potentially die
based on how correctly the application functions, you may
want to make testing the correctness of the business
functionality a higher priority than testing for navigational
consistency throughout the application.
If you are testing an application that will be used to solicit
external funding, you may want to put testing the aspects of
the application that impact the visual appeal as the highest
testing priority.
Your web application doesn't have to be perfect; it just needs
to meet your intended customers' requirements and
expectations.

Step 2 - Process and Reporting
Make sure that everyone on your testing team knows his or
her role. Who should report what to whom and when? In
other words, define your testing process. Use the following
questions to help you get started:

How will issues be reported?
Who can assign issues?
How will issues be categorized?
Who needs what report and when do they need it?
Are team meetings scheduled in advance or scheduled as needed?

You may define your testing process and reporting
requirements formally or informally, depending on your
particular needs. The main point to keep in mind is to
organize your team in a way that supports your testing
objectives and takes into account the individual personalities
on your team. One size never fits all when dealing with
people.

Step 3 - Tracking Results
Once you start executing your test plans, you will probably
generate a large number of bugs, issues and defects. You
will want a way to easily store, organize, and distribute this
information to the appropriate technical team members. You
will also need a way to keep management informed on the
status of your testing efforts. If your company already has a
system in place to track this type of information, don't try to
reinvent the wheel. Take advantage of what's already in place.
If your company doesn't already have something in place,
spend a little time investigating some of the easy-to-setup
online systems such as the one found at http://www.adminitrack.com.
By using an online system, you can
make it much easier on yourself by eliminating the need to
install and maintain an off-the-shelf package.

Step 4 - Test Environment
Set up a test environment that is separate from your
development and production environment. This includes a
separate web server, database server, and application server
if applicable. You may or may not be able to utilize existing
computers to set up a separate test environment.
Create an explicitly defined procedure for moving code to
and from your test environment and make sure the procedure
is followed. Also, work with your development team to make
sure each new version of source code to be tested is uniquely
identified.

Step 5 - Unit Testing
Unit testing is focused on verifying small portions of
functionality. For example, an individual unit test case might
focus on verifying that the correct data has been saved to
the database when the Submit button on a particular page is
clicked.
An important subset of unit testing that is often overlooked
is range checking: that is, making sure all the fields that
collect information from the user can gracefully handle any
value that is entered. Most people think of range checking
as making sure that a numeric field only accepts numbers.
In addition to traditional range checking, make sure you also
check for less common, but just as problematic, exceptions.
For example, what happens when a user enters his or her
last name and the last name contains an apostrophe, such as
O'Brien? Different combinations of databases and database
drivers handle the apostrophe differently, sometimes with
unexpected results. Proper unit testing will help rid your web
application of obvious errors that your users should never
have to encounter.
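Parameterized (bound) queries are the standard way to make names like O'Brien survive intact, since the driver handles the quoting. A minimal sketch with Python's built-in sqlite3 module (the table and column names are made up for the example):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (last_name TEXT)")

last_name = "O'Brien"

# Naive string concatenation breaks on the apostrophe (and invites
# SQL injection): the apostrophe terminates the string literal early.
#   conn.execute("INSERT INTO users VALUES ('" + last_name + "')")

# A bound parameter lets the driver handle the quoting safely:
conn.execute("INSERT INTO users VALUES (?)", (last_name,))

row = conn.execute("SELECT last_name FROM users").fetchone()
print(row[0])  # the apostrophe is stored and retrieved unchanged
```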

Step 6 - Verifying the HTML


Hyper Text Markup Language (HTML) is the computer
language sent from your web server to the web browser on
your user's computer to display the pages that make up your
web application. HTML is theoretically a standard used on
the Internet to make it easy for anyone, anywhere to view
the information on a website. That may be somewhat true
for a static website, but anyone who has been involved in
developing a web application knows that HTML is anything
but standard.
Verifying HTML is simple in concept but can be very time
consuming in practice. There are many online and
downloadable applications to help in this area such as Website
Garage (http://websitegarage.netscape.com). There are two
main aspects of verifying the validity of your HTML. First
you want to make sure that your syntax is correct, all your
opening and closing tags match, etc. Secondly, you want to
verify how your pages look in different browsers, at different
screen resolutions, and on different operating systems. Create
a profile of your target audience and make some decisions
on what browsers you will support, on which operating
systems, and at what screen resolutions.
In general, the later versions of Microsoft Internet Explorer,
version 5.5 and above, are very forgiving. If your development
team has only been using Internet Explorer 5.5 on high-resolution
monitors, you may be unpleasantly surprised when
you see your web application on a typical user's computer.
The sooner you start verifying your HTML, the better off
your web application will be.
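The first aspect, checking that opening and closing tags match, can be partly automated. Below is a bare-bones balance checker built on Python's standard html.parser module; real validators check far more than this, but the sketch shows the idea:

```python
from html.parser import HTMLParser

# Tags that legitimately never take a closing tag.
VOID_TAGS = {"br", "hr", "img", "input", "meta", "link"}

class TagBalanceChecker(HTMLParser):
    def __init__(self):
        super().__init__()
        self.stack = []   # currently open tags
        self.errors = []  # problems found so far

    def handle_starttag(self, tag, attrs):
        if tag not in VOID_TAGS:
            self.stack.append(tag)

    def handle_endtag(self, tag):
        if self.stack and self.stack[-1] == tag:
            self.stack.pop()
        else:
            self.errors.append(f"unexpected </{tag}>")

def check(html: str) -> list:
    """Return a list of tag-balance problems; empty means balanced."""
    checker = TagBalanceChecker()
    checker.feed(html)
    return checker.errors + [f"unclosed <{t}>" for t in checker.stack]

print(check("<html><body><p>ok</p></body></html>"))  # no problems
print(check("<body><b>oops</body>"))                 # mismatches reported
```

This only covers syntax; verifying how pages render in different browsers and at different resolutions still has to be done by looking.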

Step 7 - Usability Testing


In usability testing, you'll be looking at aspects of your web
application that affect the user's experience, such as:

How easy is it to navigate through your web


application?

Is it obvious to the user which actions are available


to him or her?
Is the look-and-feel of your web application
consistent from page to page, including font sizes
and colors?

The book Don't Make Me Think! A Common Sense
Approach to Web Usability by Steve Krug and Roger Black
provides a practical approach to the topic of usability. I refer
to it often, and recommend it highly.
In addition to the traditional navigation and look-and-feel
issues, Section 508 compliance is another area of importance.
The 1998 Amendment to Section 508 of the Rehabilitation
Act spells out accessibility requirements for individuals with
certain disabilities.
For instance, if a user forgets to fill in a required field, you
might think it is a good idea to present the user with a friendly
error message and change the color of the field label to red
or some other conspicuous color. However, changing the
color of the field label would not really help a user who has
difficulty deciphering colors. The use of color may help most
users, but you would want to use an additional visual clue,
such as placing an asterisk beside the field in question or
additionally making the text bold.
For more details, refer to http://www.section508.gov.
Another great resource that can help analyze your HTML
pages for Section 508 compliance can be found at http://
www.cast.org/bobby/. If you are working with the United
States federal government, Section 508 compliance is not
only good design, it most likely is a legal requirement.

Step 8 - Load Testing


In performing load testing, you want to simulate how users
will use your web application in the real world. The earlier
you perform load testing the better. Simple design changes
can often make a significant impact on the performance and
scalability of your web application. A good overview of how
to perform load testing can be found on Microsoft's
Developer Network (MSDN) website:
http://msdn.microsoft.com/library/default.asp?url=/library/
en-us/dnserv/html/server092799.asp
A topic closely related to load testing is performance tuning.
Performance tuning should be tightly integrated with the
design of your application. If you are using Microsoft
technology, the following article is a great resource for
understanding the specifics of tuning a web application.
http://msdn.microsoft.com/library/default.asp?url=/library/
en-us/dnserv/html/server03272000.asp
People hate to wait for a web page to load. As a general rule,
try to make sure that all of your pages load in 15 seconds or
less. This rule will of course depend on your particular
application and the expectations of the people using it.
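The basic mechanics of a load test, namely many concurrent simulated users timing their requests, can be sketched in a few lines. Here the request is a stub function; a real load test would fetch pages over HTTP against your test environment:

```python
import time
from concurrent.futures import ThreadPoolExecutor

def fake_request(page: str) -> float:
    """Stand-in for fetching a page; returns the elapsed time in seconds."""
    start = time.perf_counter()
    time.sleep(0.01)  # simulate server work and network latency
    return time.perf_counter() - start

# Simulate 50 concurrent users hitting the same page.
with ThreadPoolExecutor(max_workers=50) as pool:
    timings = list(pool.map(fake_request, ["/events_search.html"] * 50))

avg = sum(timings) / len(timings)
print(f"avg {avg:.3f}s, worst {max(timings):.3f}s")
# Compare the worst case against your target, e.g. the 15-second
# guideline mentioned above.
```

Dedicated tools add ramp-up patterns, think times and realistic request mixes, but the measure-under-concurrency core is the same.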

Step 9 - User Acceptance Testing


By performing user acceptance testing, you are making sure
your web application fits the use for which it was intended.
Simply stated, you are making sure your web application
makes things easier for the user and not harder. One effective
way to handle user acceptance testing is by setting up a beta
test for your web application.
One article to help you get started planning an effective beta
test is: Supercharged Beta Test by Joshua Grossnickle and
Oliver Raskin, May 14, 2001 which can be found at: http://
hotwired.lycos.com/webmonkey/01/20/
index1a.html?tw=design. This article points out the critical
aspects of setting up a beta test including how to identify
beta testers and how to obtain their feedback. The main point
to remember in user acceptance testing is to listen to what
the people using your web application are saying. Their
feedback will be critical to the ultimate success of your web
application.

Step 10 - Testing Security
With the large number of highly skilled hackers in the world,
security should be a huge concern for anyone building a web
application. You need to test how secure your web application
is from both external and internal threats. The security of
your web application should be planned for and verified by
qualified security specialists.
If you think security is a subject that is over-hyped, check
out Steve Gibson's account of how a 13-year-old hacker
took his company's website down at will for an extended period of
time. You can find this eye-opening security case study
at:
http://grc.com/dos/grcdos.htm
Some additional online resources to help you stay up to date
on the latest Internet security issues include:
CERT Coordination Center
http://www.cert.org/
Computer Security Resource Center
http://csrc.nist.gov/
After performing your initial security testing, make sure to
also perform ongoing security audits to ensure your web
application remains secure over time as people and
technology change.

Conclusion
Testing a web application can be a totally overwhelming task.
The best advice I can give you is to keep prioritizing and
focusing on the most important aspects of your application,
and don't forget to solicit help from your fellow team
members.
By following the steps above, coupled with your own
expertise and knowledge, you will have a web application
you can be proud of and that your users will love. You will
also be giving your company the opportunity to deploy a
web application that could become a runaway success and
possibly make tons of money, save millions of lives, or
slash customer support costs in half. Even better, because
of your awesome web application, you may get profiled on
CNN, which causes the killer job offers to start flooding in.
Proper testing is an integral part of creating a positive user
experience, which can translate into the ultimate success of
your web application. Even if your web application doesn't
get featured on CNN, you can take great satisfaction in
knowing how you and your team's diligent testing efforts
made all the difference in your successful deployment.

ASP.NET PERFORMANCE
by Alan Walsh
About the author
Alan Walsh has been working in IT for 16
years. He is currently working at Indiana
University building web-based systems using Microsoft
technologies. In his spare time he is an active amateur
radio operator and is working on his first screenplay.

Introduction
Now that Visual Studio.NET and the .NET Framework are
finally here, it's time to take a look at one of the most
important topics for any developer: performance. There are
so many new and exciting features in this platform that it is easy
to lose sight of something as mundane as pure performance.
Yet, as any developer knows, this is one of the first things
on a user's mind. It's unlikely that any of your users will
ever complain about your modeling or coding techniques,
but they will always complain about poor performance.
So let's take a quick look at ASP.NET and see what has
changed to allow you to create web applications that really
perform well.

Compile That Code!
One of the good news/bad news features of ASP is that you
are using script, not compiled code, to execute your
application. The good news is that it is relatively easy to
modify your application: edit your .asp file and restart the
application or server and you're all set. The bad news is that
your code has to be interpreted by the server each time it
runs, and that takes precious CPU cycles. Many people use
compiled COM+ components in their ASP applications to
encapsulate their business logic, but the reality is that nearly
all developers leave a significant chunk of their application
in ASP scripts.
With the move to ASP.NET you can now have it both ways.
When your page runs for the first time the server compiles
and then caches your code internally. Subsequent requests
will be handled by the cached and compiled version of your
code. The result is an improvement in performance over the
lifetime of your application. Furthermore, maintenance of
that code is even more straightforward. .NET code uses an
xcopy deployment model. To install your application on a
new machine simply copy the files to the server and voilà,
you're done. No messy registration anymore. The really good
news is that you can do this on a live production server
without having to bring down the application!
The server automatically checks for changes and if it detects
a new version of your page it compiles it and then replaces it
in cache, after servicing all remaining requests for the page.
From the user's perspective everything appears to keep
humming along even as the code changes.
One other non-performance related note that I feel compelled
to mention is that you are no longer limited to interpreted
languages like VBScript or JScript in ASP.NET. Because your
code is getting compiled by the CLR at runtime, you are free
to use any .NET language in your ASP.NET pages.

Output Caching
Aside from all the internal caching that ASP.NET is doing
on your behalf, it also provides you with an easy-to-use facility
to perform your own application caching. Any developer
knows that caching is a sure-fire way to improve performance
in your application. Round trips in a distributed application
are expensive, especially when they involve the data tier,
and can be a true performance killer. ASP.NET brings new
caching capabilities to improve your application's
performance by eliminating some of those steps.
Output caching, as it is known in ASP.NET, allows you to
cache the actual HTTP response of a particular page.
Subsequent requests for that page will be returned from the
cache and your code will not have to execute at all. Needless
to say, this will result in a big performance gain for your
application, especially on pages that retrieve data from the
database.
To enable this functionality you can use a programmatic API
or a declarative API. Most people will probably choose the
latter, especially as they are first starting out. Using the
declarative API is as simple as adding a line like this at the
top of your page:

<%@ OutputCache Duration="60" VaryByParam="None" %>

This line is called a directive and gives special processing


instructions to the ASP.NET engine. Both of the above
attributes are required in any kind of output cache statement
and failure to include them will result in an error. The first
attribute instructs the server as to how long you would like
this page to be cached. In this example the page will be cached
for 60 seconds. At that time your code will be executed again
and a new version of the page will be loaded into cache for
the same duration.
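The programmatic API mentioned above goes through the
Response.Cache object instead. Here is a rough sketch of the
equivalent of the directive shown, placed in a page's
Page_Load handler; treat it as an approximation rather than
an exact equivalent, and check the HttpCachePolicy
documentation for the precise semantics:

```csharp
// Roughly equivalent to <%@ OutputCache Duration="60" VaryByParam="None" %>
private void Page_Load(object sender, EventArgs e)
{
    Response.Cache.SetExpires(DateTime.Now.AddSeconds(60));
    Response.Cache.SetCacheability(HttpCacheability.Public);
    Response.Cache.SetValidUntilExpires(true);
    Response.Cache.VaryByParams.IgnoreParams = true; // VaryByParam="None"
}
```

Most pages will stick with the directive, though, since it
keeps the caching policy visible at the top of the page.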
The second attribute allows you to cache multiple versions
of pages that contain a query string or form parameters in
a POST request. Let's say for example that you have a search
form that allows users to look up details about products in
a catalog. On the page that processes the request you could
include a directive like:

<%@ OutputCache Duration="120" VaryByParam="ProductID" %>

This statement tells .NET to create a cached page response
for each request that specifies a different ProductID. You
can include multiple parameters in your list by separating
them with semicolons, or include all parameters with an
asterisk. Obviously you do not want to cache all pages in
this manner. The best candidates are pages where the data
you would be retrieving does not change frequently.
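For instance, the semicolon and asterisk forms would look
something like this (ProductID and CategoryID are
hypothetical parameter names):

```
<%@ OutputCache Duration="120" VaryByParam="ProductID;CategoryID" %>
<%@ OutputCache Duration="120" VaryByParam="*" %>
```

Note that the asterisk form caches a separate copy of the
page for every distinct combination of parameter values, so
use it with care on pages that accept many different values.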
There are two other possible attributes you can use in your
output cache directive to cache multiple versions of your
page. VaryByHeader gives you the ability to cache versions
based on the headers in the HTTP request. For example, to
cache versions based on different browsers you could use a
directive like the following:

<%@ OutputCache Duration="60" VaryByParam="None" VaryByHeader="User-Agent" %>

As was the case with VaryByParam, you can include multiple
headers in your directive by using a semicolon-separated
list, or include all headers with an asterisk.

Finally, you can use a custom string to vary your cache by
using the VaryByCustom attribute:

<%@ OutputCache Duration="60" VaryByParam="None" VaryByCustom="mystring" %>

You'll have to write some code to make this one work.
Specifically, you will need to override the
GetVaryByCustomString method in your global.asax file:

<script runat="server">
public override string GetVaryByCustomString(HttpContext context, string arg)
{
    if (arg == "mystring")
    {
        return "somestring"; // add some really useful code here
    }
    return base.GetVaryByCustomString(context, arg);
}
</script>

Then in your directive you can use something like:

<%@ OutputCache Duration="60" VaryByParam="None" VaryByCustom="mystring" %>

As you can see, there are many options available to you as a
developer in ASP.NET for controlling how your pages are
cached; too many to mention here. You can even choose to
cache pieces of your ASP.NET pages using a technique called
fragment caching! My advice is to take a close look at
what your pages are doing and start by trying to identify
those that would benefit most from caching, especially pages
that are performing data access.

Database Access

Even with caching, sooner or later your data is going to
have to make that trip all the way back to the database.
Luckily, the .NET Framework also includes an improved
facility for data access. Under ASP, developers used ADO for
access to data. Not surprisingly, the .NET Framework version
is now called simply ADO.NET.

ADO.NET is quite a significant change from ADO and
certainly too deep a subject for us to cover here. But for
our purposes the relevant performance improvements in
ADO.NET can more or less be addressed by one question:
which provider do you use?

First there was ODBC, then came OLEDB, and now we have
managed providers. This is the layer in ADO.NET that sits
between your abstract data access code and the grungy bits
that actually talk to your back-end database. You, the
developer, write your data access code independent of the
data source, and the managed provider takes care of
translating that into something that the database can make
sense of. And that's where the hidden performance gains are
made. ADO.NET managed providers can provide big performance
gains for your application, particularly if you are using
SQL Server.

The SQL Server managed provider uses TDS, or Tabular Data
Stream, in its implementation. TDS is the native wire
protocol of SQL Server, and so naturally performance is
astounding. Your mileage may vary, but expect a very
significant performance gain in your application just by
switching to ADO.NET and using the SQL Server managed
provider.

What if you are not lucky enough to be using SQL Server as
your database? All is not lost. ADO.NET also includes
managed providers for OLEDB and ODBC. The latter is a
relatively new add-on that you can download separately from
the .NET Framework. It is also the best bet for performance
if you don't have a native managed provider (i.e. SQL
Server). The ODBC managed provider should work with any
compliant ODBC driver, although it has only been tested
with the following:

- Microsoft SQL ODBC Driver
- Microsoft Oracle ODBC Driver
- Microsoft Jet ODBC Driver

I'm sure that the middle entry will be of interest to many
of you. If you are using ADO.NET and you are also using an
Oracle database, then you should most certainly download
and use the ADO.NET ODBC managed provider. As .NET gains
momentum, expect to see more native .NET providers for all
major databases from Microsoft and third-party vendors,
just as we saw with OLEDB.
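To make the provider discussion concrete, here is a minimal
sketch of data access through the SQL Server managed
provider. The connection string, table, and column names
are hypothetical; the System.Data.SqlClient classes
(SqlConnection, SqlCommand, SqlDataReader) are the actual
API:

```csharp
using System;
using System.Data.SqlClient;

public class ProductLookup
{
    public static void Main()
    {
        // Hypothetical connection string; adjust for your server.
        string connStr = "server=(local);database=Catalog;trusted_connection=true";

        using (SqlConnection conn = new SqlConnection(connStr))
        {
            conn.Open();
            // A parameterized query executed through the managed provider.
            SqlCommand cmd = new SqlCommand(
                "SELECT Name, Price FROM Products WHERE ProductID = @id", conn);
            cmd.Parameters.Add("@id", 42);

            using (SqlDataReader reader = cmd.ExecuteReader())
            {
                while (reader.Read())
                {
                    Console.WriteLine("{0}: {1}", reader["Name"], reader["Price"]);
                }
            }
        }
    }
}
```

The point of the managed-provider design is that switching
to the OLEDB or ODBC provider is largely a matter of
swapping these classes for their OleDb or Odbc counterparts;
the shape of the code stays the same.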

Conclusion
Moving to ASP.NET offers significant performance
improvements over ASP applications. If you are starting a
new web project you should undoubtedly be using ASP.NET
for your coding. If you have existing ASP applications and
you are interested in improving performance you should
seriously consider migrating your code to ASP.NET.
Fortunately, ASP and ASP.NET can coexist quite happily on
the same server. And you can mix and match ASP and
ASP.NET pages within your application. Look at which
pages can benefit most from migrating and start there. You
can download the .NET Framework SDK right now on the
MSDN web site at http://msdn.microsoft.com.

In the editorial I asked you what you're going to do for
yourself, do you remember? I wrote that I would go running
right away, and I did so when I wrote the editorial. I'm
doing so again today, because I'm preparing for the first
Swisspower Gigathlon in Switzerland.

The Swisspower Gigathlon '02 is a multi-day, multi-sport
journey through Switzerland. From July 7-14, single
gigathletes as well as teams compete in a friendly race
through Switzerland.

Participants (teams or individuals) can cover the whole
distance or just one out of 7 days. If you wish to read more
about the event and the distances, go to
http://www.gigathlon.ch.

PowerTimes will be present at this event with a team of 5
athletes that will cover the distance from Samedan (in the
Alps) to Frauenfeld on Friday, July 12, 2002. We will keep
you informed on our team through our website.

Members of the PowerTimes Gigathlon Team:

Dölf Alpiger - Mountain Bike - 53 km from Samedan to Davos
Sandra Kreis - Swimming - 2 km in the Lake of Davos
Reto Kunz - Bike - 139 km from Davos to Amriswil
Rolf André Klaedtke - Running - 21 km from Amriswil to Weinfelden
Barbara Meier - Inline Skating - 20 km from Weinfelden to Frauenfeld
