
Introspection

Introspection is the automatic process of analyzing a bean's design patterns to reveal the bean's properties, events, and methods. This process controls the publishing and discovery of bean operations and properties. This lesson explains the purpose of introspection, introduces the Introspection API, and gives an example of introspection code.
Purpose of Introspection
A growing number of Java object repository sites exist on the Internet in
answer to the demand for centralized deployment of applets, classes, and
source code in general. Any developer who has spent time hunting through
these sites for licensable Java code to incorporate into a program has
undoubtedly struggled with issues of how to quickly and cleanly integrate
code from one particular source into an application.
The way in which introspection is implemented provides great advantages,
including:
1. Portability - Everything is done in the Java platform, so you
can write components once, reuse them everywhere. There
are no extra specification files that need to be maintained
independently from your component code. There are no
platform-specific issues to contend with. Your component is
not tied to one component model or one proprietary platform.
You get all the advantages of the evolving Java APIs, while
maintaining the portability of your components.
2. Reuse - By following the JavaBeans design conventions,
implementing the appropriate interfaces, and extending the
appropriate classes, you provide your component with reuse
potential that possibly exceeds your expectations.
Introspection API
The JavaBeans API architecture supplies a set of classes and interfaces to
provide introspection.
The BeanInfo interface of the java.beans package defines a set of methods that allow bean implementors to provide explicit information about their beans. By specifying BeanInfo for a bean component, a developer can hide methods, specify an icon for the toolbox, provide descriptive names for properties, define which properties are bound properties, and much more.
The getBeanInfo(beanName) method of the Introspector class can be used by builder tools and other automated environments to provide detailed information about a bean. The getBeanInfo method relies on the naming conventions for the bean's properties, events, and methods. A call to getBeanInfo results in the introspection process analyzing the bean's classes and superclasses.
The Introspector class provides descriptor classes with information about
properties, events, and methods of a bean. Methods of this class locate any
descriptor information that has been explicitly supplied by the developer
through BeanInfo classes. Then the Introspector class applies the
naming conventions to determine what properties the bean has, the
events to which it can listen, and those which it can send.
The following figure represents a hierarchy of the FeatureDescriptor classes:

[Figure omitted: FeatureDescriptor class hierarchy]

Each class represented in this group describes a particular attribute of the bean. For example, the isBound method of the PropertyDescriptor class indicates whether a PropertyChangeEvent event is fired when the value of this property changes.
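As a sketch of how an explicit BeanInfo interacts with the Introspector, the following self-contained example supplies a PropertyDescriptor with a display name and marks it as bound. Gauge and GaugeBeanInfo are hypothetical names; the <BeanName>BeanInfo naming convention is what lets the Introspector find the companion class:

```java
import java.beans.*;

public class BeanInfoDemo {

    // A hypothetical bean with a single read/write property.
    public static class Gauge {
        private int level;
        public int getLevel() { return level; }
        public void setLevel(int level) { this.level = level; }
    }

    // Found by the Introspector via the <BeanName>BeanInfo naming rule.
    public static class GaugeBeanInfo extends SimpleBeanInfo {
        @Override
        public PropertyDescriptor[] getPropertyDescriptors() {
            try {
                PropertyDescriptor level = new PropertyDescriptor("level", Gauge.class);
                level.setDisplayName("Fill level"); // descriptive name for builder tools
                level.setBound(true);               // advertise PropertyChangeEvent support
                return new PropertyDescriptor[] { level };
            } catch (IntrospectionException e) {
                return null; // fall back to low-level introspection
            }
        }
    }

    public static void main(String[] args) throws IntrospectionException {
        BeanInfo info = Introspector.getBeanInfo(Gauge.class);
        for (PropertyDescriptor pd : info.getPropertyDescriptors()) {
            System.out.println(pd.getName() + " (" + pd.getDisplayName()
                    + ") bound=" + pd.isBound());
        }
    }
}
```

Running main shows that the explicit descriptor, not low-level reflection, supplied the metadata for the level property.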
Editing Bean Info with the NetBeans BeanInfo Editor

Note: This tutorial covers the JavaBeans functionality available in the Beans module of the NetBeans IDE. Because this module has been removed from NetBeans IDE 6.0 (see the NetBeans project page for more information), it is recommended that you use version 5.5.1 to obtain the full range of JavaBeans features.

To open the BeanInfo dialog box, expand the appropriate class hierarchy to
the bean Patterns node. Right-click the bean Patterns node and choose
BeanInfo Editor from the pop-up menu. All elements of the selected class
that match bean-naming conventions will be displayed at the left in the
BeanInfo Editor dialog box as shown in the following figure:
Select one of the following nodes to view and edit its properties at the right
of the dialog box:
• BeanInfo
• Bean
• Properties
• Methods
• Event Sources
Special symbols (green and red) appear next to the subnode to indicate
whether an element will be included or excluded from the BeanInfo class.
If the Get From Introspection option is not selected, the node's subnodes are
available for inclusion in the BeanInfo class. To include all subnodes, right-
click a node and choose Include All. You can also include each element
individually by selecting its subnode and setting the Include in BeanInfo
property. If the Get From Introspection option is selected, setting the
properties of subnodes has no effect on the generated BeanInfo code.
The following attributes are available for the bean, property, event source,
and method nodes:
• Name - The name of the selected element as it appears in code.
• Preferred - An attribute to specify whether this property appears
in the Inspector window under the Properties node.
• Expert - An attribute to specify whether this property appears in
the Inspector window under the Other Properties node.
• Hidden - An attribute to mark an element for tool use only.
• Display Name Code - The display name of the property.
• Short Description Code - A short description of the property.
• Include in BeanInfo - An attribute to include the selected
element in the BeanInfo class.
• Bound - An attribute to make the bean property bound.
• Constrained - An attribute to make the bean property
constrained.
• Mode - An attribute to set the property's mode and generate
getter and setter methods.
• Property Editor Class - An attribute to specify a custom class
to act as a property editor for the property.
For Event Source nodes, the following Expert properties are available:
• Unicast (read-only)
• In Default Event Set

Introspection Sample
The following example represents code to perform introspection:
import java.beans.BeanInfo;
import java.beans.Introspector;
import java.beans.IntrospectionException;
import java.beans.PropertyDescriptor;

public class SimpleBean {
    private final String name = "SimpleBean";
    private int size;

    public String getName() {
        return this.name;
    }

    public int getSize() {
        return this.size;
    }

    public void setSize(int size) {
        this.size = size;
    }

    public static void main(String[] args)
            throws IntrospectionException {
        BeanInfo info = Introspector.getBeanInfo(SimpleBean.class);
        for (PropertyDescriptor pd : info.getPropertyDescriptors())
            System.out.println(pd.getName());
    }
}
This example creates a non-visual bean and displays the following properties
derived from the BeanInfo object:
• class
• name
• size
Note that a class property was not defined in the SimpleBean class. This
property was inherited from the Object class. To get properties defined only
in the SimpleBean class, use the following form of the getBeanInfo method:
Introspector.getBeanInfo( SimpleBean.class, Object.class );
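A quick way to see the effect of the stop class is to count the descriptors with and without it; in this hypothetical StopClassDemo bean, the inherited class property disappears when Object.class is passed as the stop class:

```java
import java.beans.BeanInfo;
import java.beans.Introspector;
import java.beans.IntrospectionException;
import java.beans.PropertyDescriptor;

public class StopClassDemo {
    private int size;
    public int getSize() { return size; }
    public void setSize(int size) { this.size = size; }

    public static void main(String[] args) throws IntrospectionException {
        // Without a stop class, the "class" property inherited from Object appears
        BeanInfo all = Introspector.getBeanInfo(StopClassDemo.class);
        // With Object.class as the stop class, only locally defined properties remain
        BeanInfo own = Introspector.getBeanInfo(StopClassDemo.class, Object.class);

        System.out.println("all: " + all.getPropertyDescriptors().length);
        System.out.println("own: " + own.getPropertyDescriptors().length);
        for (PropertyDescriptor pd : own.getPropertyDescriptors())
            System.out.println(pd.getName());
    }
}
```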

70. What is the request processor in Struts and how does it work?

Posted on September 9, 2006 by sharat

The RequestProcessor class is the actual place where request processing takes place in a Struts controller environment.
When the request object first reaches the ActionServlet class, the servlet invokes the process method of the underlying RequestProcessor class.
This process method then looks into the struts-config.xml file and tries to locate the name of the action that has come with the request. Once it identifies the action in the XML file, it continues with the rest of the steps needed for request processing.
The RequestProcessor has the following responsibilities:
1. Determine the path,
2. Handle the Locale,
3. Process content and encoding type,
4. Process cache headers,
5. Invoke the pre-processing hook,
6. Determine the mapping,
7. Determine the roles,
8. Process and validate the ActionForm,
9. Return a response.
There is one RequestProcessor instance per application module; it invokes the proper Action instance and processes all requests for that module.

RequestProcessor
Introduction
I have seen a lot of projects where the developers implemented a proprietary
MVC framework, not because they wanted to do something fundamentally
different from Struts, but because they were not aware of how to extend
Struts. You can get total control by developing your own MVC framework, but
it also means you have to commit a lot of resources to it, something that
may not be possible in projects with tight schedules.
Struts is not only a very powerful framework, but also very extensible. You
can extend Struts in three ways.
1. PlugIn: Create your own PlugIn class if you want to execute some
business logic at application startup or shutdown.
2. RequestProcessor: Create your own RequestProcessor if you want to
execute some business logic at a particular point during the request-
processing phase. For example, you might extend RequestProcessor to
check that the user is logged in and has one of the roles required to
execute a particular action, before executing every request.
3. ActionServlet: You can extend the ActionServlet class if you want to
execute your business logic at either application startup or shutdown,
or during request processing. But you should use it only in cases
where neither PlugIn nor RequestProcessor is able to fulfill your
requirement.
In this article, we will use a sample Struts application to demonstrate how to
extend Struts using each of these three approaches. Two of the most
successful examples of Struts extensions are the Struts Validation framework
and the Tiles framework.
I assume that you are already familiar with the Struts framework and know
how to create simple applications using it.
PlugIn
According to the Struts documentation, "A plugin is a configuration wrapper
for a module-specific resource or service that needs to be notified about
application startup and shutdown events." What this means is that you can
create a class implementing the PlugIn interface to do something at
application startup or shutdown.
Say I am creating a web application where I am using Hibernate as the
persistence mechanism, and I want to initialize Hibernate as soon as the
application starts up, so that by the time my web application receives the
first request, Hibernate is already configured and ready to use. We also want
to close down Hibernate when the application is shutting down. We can
implement this requirement with a Hibernate PlugIn by following two simple
steps.
1. Create a class implementing the PlugIn interface, like this:

public class HibernatePlugIn implements PlugIn {
    private String configFile;

    // This method will be called at application shutdown time
    public void destroy() {
        System.out.println("Entering HibernatePlugIn.destroy()");
        // Put Hibernate cleanup code here
        System.out.println("Exiting HibernatePlugIn.destroy()");
    }

    // This method will be called at application startup time
    public void init(ActionServlet actionServlet, ModuleConfig config)
            throws ServletException {
        System.out.println("Entering HibernatePlugIn.init()");
        System.out.println("Value of init parameter " + getConfigFile());
        System.out.println("Exiting HibernatePlugIn.init()");
    }

    public String getConfigFile() {
        return configFile;
    }

    public void setConfigFile(String string) {
        configFile = string;
    }
}
The class implementing PlugIn interface must implement two
methods: init() and destroy(). init() is called when the application
starts up, and destroy() is called at shutdown. Struts allows you to
pass init parameters to your PlugIn class. For passing parameters, you
have to create JavaBean-type setter methods in your PlugIn class for
every parameter. In our HibernatePlugIn class, I wanted to pass the
name of the configFile instead of hard-coding it in the application.
2. Inform Struts about the new PlugIn by adding these lines to struts-config.xml:

<struts-config>
  ...
  <!-- Message Resources -->
  <message-resources parameter=
      "sample1.resources.ApplicationResources"/>

  <!-- Declare your plugins -->
  <plug-in className="com.sample.util.HibernatePlugIn">
    <set-property property="configFile"
        value="/hibernate.cfg.xml"/>
  </plug-in>
</struts-config>
The className attribute is the fully qualified name of the class implementing
the PlugIn interface. Add a <set-property> element for every initialization
parameter which you want to pass to your PlugIn class. In our example, I
wanted to pass the name of the config file, so I added the <set-property>
element with the value of the config file path.
Both the Tiles and Validator frameworks use PlugIns for initialization by
reading configuration files. Two more things which you can do in your PlugIn
class are:
• If your application depends on some configuration files, then you can
check their availability in the PlugIn class and throw a ServletException
if the configuration file is not available. This will result in ActionServlet
becoming unavailable.
• The PlugIn interface's init() method is your last chance if you want to
change something in ModuleConfig, which is a collection of static
configuration information that describes a Struts-based module. Struts
will freeze ModuleConfig once all PlugIns are processed.
How a Request is Processed
ActionServlet is the only servlet in the Struts framework, and is responsible for
handling all of the requests. Whenever it receives a request, it first tries to
find a sub-application for the current request. Once a sub-application is
found, it creates a RequestProcessor object for that sub-application and calls
its process() method by passing it HttpServletRequest and
HttpServletResponse objects.
The RequestProcessor.process() is where most of the request processing
takes place. The process() method is implemented using the Template
Method design pattern, in which there is a separate method for performing
each step of request processing, and all of those methods are called in
sequence from the process() method. For example, there are separate
methods for finding the ActionForm class associated with the current request,
and checking if the current user has one of the required roles to execute
action mapping. This gives us tremendous flexibility. The RequestProcessor
class in the Struts distribution provides a default implementation for each of
the request-processing steps. That means you can override only the methods
that interest you, and use default implementations for rest of the methods.
For example, by default Struts calls request.isUserInRole() to find out if the
user has one of the roles required to execute the current ActionMapping, but
if you want to query a database for this, then all you have to do is override
the processRoles() method and return true or false, based on whether the
user has the required role or not.
First we will see how the process() method is implemented by default, and
then I will explain what each method in the default RequestProcessor class
does, so that you can decide what parts of request processing you want to
change.

public void process(HttpServletRequest request,
                    HttpServletResponse response)
        throws IOException, ServletException {
    // Wrap multipart requests with a special wrapper
    request = processMultipart(request);

    // Identify the path component we will
    // use to select a mapping
    String path = processPath(request, response);
    if (path == null) {
        return;
    }
    if (log.isDebugEnabled()) {
        log.debug("Processing a '" + request.getMethod() +
                  "' for path '" + path + "'");
    }

    // Select a Locale for the current user if requested
    processLocale(request, response);

    // Set the content type and no-caching headers
    // if requested
    processContent(request, response);
    processNoCache(request, response);

    // General purpose preprocessing hook
    if (!processPreprocess(request, response)) {
        return;
    }

    // Identify the mapping for this request
    ActionMapping mapping =
        processMapping(request, response, path);
    if (mapping == null) {
        return;
    }

    // Check for any role required to perform this action
    if (!processRoles(request, response, mapping)) {
        return;
    }

    // Process any ActionForm bean related to this request
    ActionForm form =
        processActionForm(request, response, mapping);
    processPopulate(request, response, form, mapping);
    if (!processValidate(request, response, form, mapping)) {
        return;
    }

    // Process a forward or include specified by this mapping
    if (!processForward(request, response, mapping)) {
        return;
    }
    if (!processInclude(request, response, mapping)) {
        return;
    }

    // Create or acquire the Action instance to
    // process this request
    Action action =
        processActionCreate(request, response, mapping);
    if (action == null) {
        return;
    }

    // Call the Action instance itself
    ActionForward forward =
        processActionPerform(request, response,
                             action, form, mapping);

    // Process the returned ActionForward instance
    processForwardConfig(request, response, forward);
}
1. processMultipart(): In this method, Struts will read the request to
find out if its contentType is multipart/form-data. If so, it will parse
it and wrap it in a wrapper implementing HttpServletRequest. When
you are creating an HTML FORM for posting data, the contentType
of the request is application/x-www-form-urlencoded by default.
But if your form is using FILE-type input to allow the user to upload
files, then you have to change the contentType of the form to
multipart/form-data. But by doing that, you can no longer read the
form values submitted by the user via the getParameter() method of
HttpServletRequest; you have to read the request as an
InputStream and parse it to get the values.
2. processPath(): In this method, Struts will read the request URI to
determine the path element that should be used for getting the
ActionMapping element.
3. processLocale(): In this method, Struts will get the Locale for the
current request and, if configured, it will save it in HttpSession as
the value of the org.apache.struts.action.LOCALE attribute.
HttpSession would be created as a side effect of this method. If you
don't want that to happen, then you can set the locale property to
false in ControllerConfig by adding these lines to your struts-
config.xml file:
<controller>
  <set-property property="locale" value="false"/>
</controller>
4. processContent(): Sets the contentType for the response by calling
response.setContentType(). This method first tries to get the
contentType as configured in struts-config.xml. It will use text/html
by default. To override that, use the following:

<controller>
  <set-property property="contentType" value="text/plain"/>
</controller>
5. processNoCache(): If configured for no-cache, Struts will set the
following three headers on every response:

response.setHeader("Pragma", "No-cache");
response.setHeader("Cache-Control", "no-cache");
response.setDateHeader("Expires", 1);

If you want to set the no-cache headers, add these lines to struts-
config.xml:

<controller>
  <set-property property="noCache" value="true"/>
</controller>
6. processPreprocess(): This is a general-purpose pre-processing
hook that can be overridden by subclasses. Its implementation in
RequestProcessor does nothing and always returns true. Returning
false from this method will abort request processing.
7. processMapping(): This will use path information to get an
ActionMapping object. The ActionMapping object represents the
<action> element in your struts-config.xml file.

<action path="/newcontact" type="com.sample.NewContactAction"
    name="newContactForm" scope="request">
  <forward name="success" path="/successPage.do"/>
  <forward name="failure" path="/failurePage.do"/>
</action>

The ActionMapping element contains information such as the name of the
Action class and the ActionForm used in processing this request. It also
has information about the ActionForwards configured for the current
ActionMapping.

8. processRoles(): Struts web application security provides only an
authorization scheme. What that means is that once the user is logged
into the container, Struts' processRoles() method can check whether he
has one of the roles required for executing a given ActionMapping by
calling request.isUserInRole().

<action path="/addUser" roles="administrator"/>

Say you have AddUserAction and you want only the administrator to be
able to add a new user. What you can do is add a roles attribute with
the value administrator to your AddUserAction action element. Then,
before executing AddUserAction, Struts will always make sure that the
user has the administrator role.

9. processActionForm(): Every ActionMapping has an ActionForm class
associated with it. When Struts is processing an ActionMapping, it
will find the name of the associated ActionForm class from the value
of the name attribute in the <action> element.

<form-bean name="newContactForm"
    type="org.apache.struts.action.DynaActionForm">
  <form-property name="firstName"
      type="java.lang.String"/>
  <form-property name="lastName"
      type="java.lang.String"/>
</form-bean>

In our example, Struts will first check whether an object of the
org.apache.struts.action.DynaActionForm class is present in request
scope. If so, it will use it; otherwise, it will create a new object and set
it in the request scope.

10. processPopulate(): In this method, Struts will populate the
ActionForm instance variables with the values of matching request
parameters.
11. processValidate(): Struts will call the validate() method of your
ActionForm class. If you return ActionErrors from the validate()
method, it will redirect the user to the page indicated by the input
attribute of the <action> element.
12. processForward() and processInclude(): In these methods, Struts
will check the value of the forward or include attribute of the
<action> element and, if found, forward to or include the configured
page.

<action forward="/Login.jsp" path="/loginInput"/>
<action include="/Login.jsp" path="/loginInput"/>

You can guess the difference between these functions from their names:
processForward() ends up calling RequestDispatcher.forward(), and
processInclude() calls RequestDispatcher.include(). If you configure
both the forward and include attributes, forward will always be used,
as it is processed first.

13. processActionCreate(): This function gets the name of the Action
class from the type attribute of the <action> element and creates
and returns an instance of it. In our case, it will create an instance
of the com.sample.NewContactAction class.
14. processActionPerform(): This function calls the execute() method of
your Action class, which is where you should write your business
logic.
15. processForwardConfig(): The execute() method of your Action class
will return an object of type ActionForward, indicating which page
should be displayed to the user. So Struts will create a
RequestDispatcher for that page and call the
RequestDispatcher.forward() method.
The above list explains what the default implementation of RequestProcessor
does at every stage of request processing and the sequence in which various
steps are executed. As you can see, RequestProcessor is very flexible and it
allows you to configure it by setting properties in the <controller> element.
For example, if your application is going to generate XML content instead of
HTML, then you can inform Struts about this by setting a property of the
controller element.

Creating Your Own RequestProcessor

Above, we saw how the default implementation of RequestProcessor works.
Now we will present an example of how to customize it by creating our own
custom RequestProcessor. To demonstrate creating a custom
RequestProcessor, we will change our sample application to implement these
two business requirements:
• We want to create a ContactImageAction class that will generate
images instead of a regular HTML page.
• Before processing every request, we want to check that the user is logged
in by checking for the userName attribute of the session. If that attribute
is not found, we will redirect the user to the login page.
We will change our sample application in two steps to implement these
business requirements.
1. Create your own CustomRequestProcessor class, which will extend the
RequestProcessor class, like this:

public class CustomRequestProcessor extends RequestProcessor {

    protected boolean processPreprocess(HttpServletRequest request,
                                        HttpServletResponse response) {
        HttpSession session = request.getSession(false);
        // If the user is trying to access the login page,
        // then don't check
        if (request.getServletPath().equals("/loginInput.do")
                || request.getServletPath().equals("/login.do"))
            return true;
        // Check if the userName attribute is there in the session.
        // If so, it means the user has already logged in
        if (session != null &&
                session.getAttribute("userName") != null) {
            return true;
        } else {
            try {
                // If not, redirect the user to the login page
                request.getRequestDispatcher("/Login.jsp")
                       .forward(request, response);
            } catch (Exception ex) {
            }
        }
        return false;
    }

    protected void processContent(HttpServletRequest request,
                                  HttpServletResponse response) {
        // Check if the user is requesting ContactImageAction;
        // if yes, then set image/gif as the content type
        if (request.getServletPath().equals("/contactimage.do")) {
            response.setContentType("image/gif");
            return;
        }
        super.processContent(request, response);
    }
}
In the processPreprocess method of our CustomRequestProcessor class, we
are checking for the userName attribute of the session and, if it is not
found, redirecting the user to the login page.
For our requirement of generating images as output from the
ContactImageAction class, we have to override the processContent method
and first check whether the request is for the /contactimage path. If so, we
set the contentType to image/gif; otherwise, it remains text/html.
2. Add these lines to your struts-config.xml file after the
<action-mapping> element to inform Struts that CustomRequestProcessor
should be used as the RequestProcessor class:

<controller>
  <set-property property="processorClass"
      value="com.sample.util.CustomRequestProcessor"/>
</controller>
Please note that overriding processContent() is OK if you have very
few Action classes where you want to generate output whose
contentType is something other than text/html.

If that is not the case, you should create a Struts sub-application for
handling requests for image-generating Actions and set image/gif as
the contentType for it.

The Tiles framework uses its own RequestProcessor for decorating output
generated by Struts.

ActionServlet
If you look into the web.xml file of your Struts web application, it looks like
this:

<web-app>
  <servlet>
    <servlet-name>action</servlet-name>
    <servlet-class>org.apache.struts.action.ActionServlet</servlet-class>
    <!-- All your init-params go here -->
  </servlet>
  <servlet-mapping>
    <servlet-name>action</servlet-name>
    <url-pattern>*.do</url-pattern>
  </servlet-mapping>
</web-app>
That means ActionServlet is responsible for handling all of your requests to
Struts. You can create a subclass of the ActionServlet class if you want to do
something at application startup or shutdown, or on every request, but you
should try creating a PlugIn or RequestProcessor before extending the
ActionServlet class. Before Struts 1.1, the Tiles framework was based on
extending the ActionServlet class to decorate a generated response. But from
1.1 on, it uses the TilesRequestProcessor class.
Conclusion
Deciding to develop your own MVC framework is a very big decision; you
should think about the time and resources it will take to develop and
maintain that code. Struts is a very powerful and stable framework, and you
can change it to accommodate most of your business requirements.
On the other hand, the decision to extend Struts should not be taken lightly.
If you put some low-performance code in your RequestProcessor class, it will
execute on every request and can reduce the performance of your whole
application. And there will be situations where it will be better for you to
create your own MVC framework than to extend Struts.

The Reflection API

Uses of Reflection
Uses of Reflection
Reflection is commonly used by programs which require the ability to
examine or modify the runtime behavior of applications running in the Java
virtual machine. This is a relatively advanced feature and should be used
only by developers who have a strong grasp of the fundamentals of the
language. With that caveat in mind, reflection is a powerful technique and
can enable applications to perform operations which would otherwise be
impossible.
Extensibility Features
An application may make use of external, user-defined classes
by creating instances of extensibility objects using their fully-
qualified names.
Class Browsers and Visual Development Environments
A class browser needs to be able to enumerate the members
of classes. Visual development environments can benefit from
making use of type information available in reflection to aid
the developer in writing correct code.
Debuggers and Test Tools
Debuggers need to be able to examine private members on
classes. Test harnesses can make use of reflection to
systematically call a discoverable set of APIs defined on a class,
to ensure a high level of code coverage in a test suite.
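The class-browser use case above can be sketched in a few lines of plain JDK reflection; Account is a hypothetical class used only to give the browser something to enumerate:

```java
import java.lang.reflect.Field;
import java.lang.reflect.Method;

public class MiniClassBrowser {

    // A hypothetical class to enumerate.
    static class Account {
        private long id;
        private double balance;
        public double getBalance() { return balance; }
        public void deposit(double amount) { balance += amount; }
    }

    // Print the declared fields and methods of a class,
    // the way a class browser or test harness might enumerate them.
    static void describe(Class<?> c) {
        System.out.println("class " + c.getName());
        for (Field f : c.getDeclaredFields())
            System.out.println("  field: " + f.getType().getSimpleName()
                    + " " + f.getName());
        for (Method m : c.getDeclaredMethods())
            System.out.println("  method: " + m.getName());
    }

    public static void main(String[] args) {
        describe(Account.class);
    }
}
```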
Drawbacks of Reflection
Reflection is powerful, but should not be used indiscriminately. If it is
possible to perform an operation without using reflection, then it is preferable
to avoid using it. The following concerns should be kept in mind when
accessing code via reflection.
Performance Overhead
Because reflection involves types that are dynamically
resolved, certain Java virtual machine optimizations cannot be
performed. Consequently, reflective operations have slower
performance than their non-reflective counterparts, and should
be avoided in sections of code which are called frequently in
performance-sensitive applications.
Security Restrictions
Reflection requires a runtime permission which may not be
present when running under a security manager. This is an
important consideration for code which has to run in a
restricted security context, such as in an Applet.
Exposure of Internals
Since reflection allows code to perform operations that would
be illegal in non-reflective code, such as accessing private
fields and methods, the use of reflection can result in
unexpected side-effects, which may render code dysfunctional
and may destroy portability. Reflective code breaks
abstractions and therefore may change behavior with upgrades
of the platform.

What kind of applications might want to use the Reflection API?

The Reflection API is intended for use by tools such as debuggers, class
browsers, object inspectors, and interpreters.
When should the Reflection API not be used?
You should avoid the temptation to use the reflection mechanism when other
tools more natural to the language would suffice. If you are accustomed to
using function pointers in another language, for example, you might think
that using Method objects is a natural replacement, but usually an object-
oriented tool, such as an interface that is implemented by objects that
perform the needed action, is better. Programs that needlessly use the
Reflection API will be more difficult to read, debug, and maintain.
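As a sketch of the interface-based alternative the paragraph recommends, a caller supplies behavior through a small interface (Command is a hypothetical name here) instead of passing a Method object around:

```java
// Instead of treating java.lang.reflect.Method objects as function
// pointers, define an interface and let callers supply implementations.
public class CommandDemo {

    interface Command {
        void run(String arg);
    }

    // A direct, compile-time-checked call; no reflection involved.
    static void execute(Command c, String arg) {
        c.run(arg);
    }

    public static void main(String[] args) {
        execute(arg -> System.out.println("hello " + arg), "world");
    }
}
```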
If the constructor of a class is overloaded, does
the Constructor.newInstance() method use the types of the
arguments to select an overloading?
No. The Constructor.newInstance() method is always invoked on a specific
overloading of the constructor, previously selected by a call
to Class.getConstructor() or by other means. The Reflection API does not
automatically choose between overloadings.
If a class has an overloaded method, does
the Method.invoke() method use the types of the arguments to select
which method will be invoked?
No. Method.invoke() is always invoked on a specific overloading of the
method, previously selected by a call to Class.getMethod() or by other
means. The Reflection API does not automatically choose between
overloadings.
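The two answers above can be checked with a short, self-contained sketch. StringBuilder is used here purely as a convenient class with overloaded members; the point is that the Class[] passed to getConstructor()/getMethod() fixes the overloading, not the runtime argument types:

```java
import java.lang.reflect.Constructor;
import java.lang.reflect.Method;

public class OverloadSelection {
    public static void main(String[] args) throws Exception {
        // StringBuilder has several overloaded constructors; we must name
        // the parameter types explicitly to pick one.
        Constructor<StringBuilder> ctor =
                StringBuilder.class.getConstructor(String.class);
        StringBuilder sb = ctor.newInstance("hello");

        // Likewise, the overloading of append() is fixed by the parameter
        // types passed to getMethod(), not by the runtime argument types.
        Method append = StringBuilder.class.getMethod("append", int.class);
        append.invoke(sb, 42);

        System.out.println(sb); // hello42
    }
}
```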
How does access control apply to the invocation of
the Constructor.newInstance(),Method.invoke(), Field.get(),
and Field.set() methods?
When one of these methods is called, the Java Virtual Machine (JVM)
performs the access checks described in the Java Language Specification
(6.6.1), which are usually carried out by the verifier. For example,
when newInstance() is called, the checks made by the JVM compare the
identity of the caller of newInstance() with the access permission and identity
of the constructor being called. It is as if the caller of newInstance() had a
statically-compiled call to the selected constructor. The access check takes
into account both the accessibility of the constructor itself, and the
accessibility of its class.
It seems that Method.invoke() sometimes throws
an IllegalAccessException when invoking a public method. What's
going on?
It is a common error to attempt to invoke an overridden method by
retrieving the overriding method from the target object. This will not always
work, because the overriding method will in general be defined in a class
inaccessible to the caller. For example, the following code only works some of
the time, and will fail when the target object's class is not sufficiently accessible:
void invokeCommandOn(Object target, String command) {
    try {
        Method m = target.getClass().getMethod(command, new Class[] {});
        m.invoke(target, new Object[] {});
    } catch (Exception e) {
        // handle NoSuchMethodException, IllegalAccessException, etc.
    }
}
The workaround is to use a much more complicated algorithm, which starts
with target.getClass() and works up the inheritance chain, looking for a
version of the method in an accessible class.
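A minimal sketch of such a workaround is below. It assumes we only need public methods and, for brevity, checks only each class's direct interfaces (a complete version would also recurse through superinterfaces). The demo class and the HashMap example are chosen for illustration:

```java
import java.lang.reflect.Method;
import java.lang.reflect.Modifier;

public class AccessibleMethodFinder {

    // Walk up from the object's own class until we find a version of the
    // method declared in a public class or interface, which the caller
    // can invoke without an IllegalAccessException.
    static Method findAccessibleMethod(Class<?> cls, String name,
                                       Class<?>... paramTypes)
            throws NoSuchMethodException {
        for (Class<?> c = cls; c != null; c = c.getSuperclass()) {
            if (Modifier.isPublic(c.getModifiers())) {
                try {
                    return c.getMethod(name, paramTypes);
                } catch (NoSuchMethodException ignored) {
                    // keep climbing
                }
            }
            // Simplification: only direct interfaces are checked here.
            for (Class<?> iface : c.getInterfaces()) {
                if (Modifier.isPublic(iface.getModifiers())) {
                    try {
                        return iface.getMethod(name, paramTypes);
                    } catch (NoSuchMethodException ignored) {
                        // try the next interface
                    }
                }
            }
        }
        throw new NoSuchMethodException(name);
    }

    public static void main(String[] args) throws Exception {
        // HashMap's entrySet() returns a non-public inner class, so the
        // naive getMethod()-on-getClass() approach would typically fail.
        Object target = new java.util.HashMap<String, String>().entrySet();
        Method m = findAccessibleMethod(target.getClass(), "iterator");

        // The method we found is declared in an accessible (public) class,
        // so invoke() succeeds.
        System.out.println(Modifier.isPublic(m.getDeclaringClass().getModifiers()));
        System.out.println(m.invoke(target) instanceof java.util.Iterator);
    }
}
```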

Trail Lessons
This trail covers common uses of reflection for accessing and manipulating
classes, fields, methods, and constructors. Each lesson contains code
examples, tips, and troubleshooting information.
Classes
This lesson shows the various ways to obtain a Class object
and use it to examine properties of a class, including its
declaration and contents.
Members
This lesson describes how to use the Reflection APIs to find the
fields, methods, and constructors of a class. Examples are
provided for setting and getting field values, invoking
methods, and creating new instances of objects using specific
constructors.
Arrays and Enumerated Types
This lesson introduces two special types of classes: arrays,
which are generated at runtime, and enum types, which define
unique named object instances. Sample code shows how to
retrieve the component type for an array and how to set and
get fields with array or enum types.

What is the difference between EJB2.0 and Hibernate?

You are right; I read the same thing about EJB 3.0.
As I understand it, Hibernate is an O-R (object-to-relational) mapping tool,
and EJB 3.0 will incorporate Hibernate's O-R mapping features.
If you have used WSAD, you will get a quick feel for O-R mapping.
Currently an entity bean (EJB 2.0) can be mapped only to a single database.
With Hibernate, it can map to multiple databases; it generates the XML
mapping file and the POJO.
Development is faster than with the current EJB architecture, with many
more features.

Ans 1: Hibernate is an ORM (object-relational mapping) tool that creates a
mapping between plain Java bean objects (POJOs) and persistent storage (an
RDBMS). The EJB 3.0 specification is divided into two parts: the first deals
with session beans and MDBs, and the second with persistence and entity
beans. The latter part is called JPA (Java Persistence API). Hibernate 3.0
implements the JPA specification. EJB 2.1 is a specification for defining
loosely coupled, reusable business components.

Ans 2 & 3: EJB 2.1 and Hibernate serve two different purposes. Hibernate
can be correlated with the entity beans in EJB 2.1, but it offers far more
extensive features than plain entity beans. There are still no containers
(application servers) available that fully implement the EJB 3.0
specification. Depending on business needs, the Hibernate framework can
be used in conjunction with EJB 2.1 to achieve the JPA abstraction.
Basically EJB and Hibernate are entirely different things, but Hibernate is
related to the entity beans in EJB.

An entity bean also provides an object-oriented view of data, but it needs a
lot of configuration and coding to make that possible, and even then the
performance is poor. Entity beans take an eager-loading approach, which can
cause serious performance issues. The core Java concepts are polymorphism
and inheritance, and with entity beans we cannot take advantage of them.

To avoid these problems a new approach was taken: Hibernate. Using a POJO
(plain old Java object) and an .hbm mapping file, we get all the advantages
of object orientation.
93. EJB Drill 3
Posted on March 22, 2007 by sharat

Can you control when passivation occurs?


According to the specification, the developer cannot directly control when
passivation occurs, although for stateful session beans the container cannot
passivate an instance that is inside a transaction. So using transactions can
be a strategy to control passivation. The ejbPassivate() method is called
during passivation, so the developer [...]
Filed under: EJB | 1 Comment »

92. EJB Drill Two


Posted on March 22, 2007 by sharat

What is the difference between EJB and Java Beans? Explain local
interfaces. What is an EJB container? What is in-memory replication? What
is the ripple effect? What's new in the EJB 2.0 specification? What is the
difference between a coarse-grained entity bean and a fine-grained entity
bean? What are transaction isolation levels in EJB? What is the software
architecture of EJBs? What is the need [...]
Filed under: EJB | 1 Comment »

91. EJB Short Drill - One


Posted on March 20, 2007 by sharat

What are the different kinds of EJBs


Stateless session bean - An instance of these non-persistent EJBs provides a
service without storing an interaction or conversation state between
methods. Any instance can be used for any client.
Stateful session bean - An instance of these non-persistent EJBs maintains
state across methods and transactions. Each instance is associated with a [...]

90. What is JNDI


Posted on March 20, 2007 by sharat

The Java Naming and Directory Interface (JNDI) is an application
programming interface (API) for accessing different kinds of naming and
directory services. JNDI is not specific to a particular naming or directory
service; it can be used to access many different kinds of systems, including
file systems; distributed object systems like CORBA, Java RMI, and [...]
89. What’s the difference between servlet/JSP session and
EJB Session
Posted on March 20, 2007 by sharat

A session in a servlet is maintained by the servlet container through the
HttpSession object, which is acquired through the request object. You cannot
really instantiate a new HttpSession object, and it doesn't contain any
business logic; it is more of a place to store objects.
A session in EJB is maintained using session beans. [...]

88. Compare stateless and stateful session beans


Posted on March 20, 2007 by sharat

Stateless Session Beans

Are pooled in memory, to save the overhead of creating a bean every time
one is needed. WebLogic Server uses a bean instance when needed and puts
it back in the pool when the work is complete. Stateless session beans
provide faster performance than stateful beans.

Stateful Session Beans

Each client creates a new instance [...]

87. How to choose between stateless and stateful beans


Posted on March 20, 2007 by sharat

Stateless session beans are a good choice if your application does not need
to maintain state for a particular client between business method calls.
WebLogic Server is multi-threaded, servicing multiple clients simultaneously.
With stateless session beans, the EJB container is free to use any available,
pooled bean instance to service a client request, rather than [...]

What is the difference between CMP and BMP


Posted on March 20, 2007 by sharat

Short answer: with bean-managed persistence, you can optimize your
queries and improve performance over the generalized container-managed
heuristics. But container-managed persistence is very convenient, and
vendors will be working to improve its performance as time goes on.
There is of course a difference, as many CMPs use O-R mapping using
metadata, which is slower than hardcoded [...]
What is a tablespace and where is the primary key stored?
A tablespace is a logical storage unit within an Oracle database. It is logical
because a tablespace is not visible in the file system of the machine on which
the database resides. A tablespace, in turn, consists of at least one datafile
which, in turn, is physically located in the file system of the server. By the
way, a datafile belongs to exactly one tablespace.

Each table, index and so on that is stored in an Oracle database belongs to a
tablespace. The tablespace builds the bridge between the Oracle database
and the filesystem in which the table's or index's data is stored.

The primary key is stored in the user tablespace.

SAX vs. DOM Parsers:

At present, two major API specifications define how XML parsers work: SAX
and DOM. The DOM specification defines a tree-based approach to navigating
an XML document. In other words, a DOM parser processes XML data and
creates an object-oriented hierarchical representation of the document that
you can navigate at run-time.
The SAX specification defines an event-based approach whereby parsers scan
through XML data, calling handler functions whenever certain parts of the
document (e.g., text nodes or processing instructions) are found.
How do the tree-based and event-based APIs differ? The tree-based W3C
DOM parser creates an internal tree based on the hierarchical structure of the
XML data. You can navigate and manipulate this tree from your software, and
it stays in memory until you release it. DOM uses functions that return parent
and child nodes, giving you full access to the XML data and providing the
ability to interrogate and manipulate these nodes. DOM manipulation is
straightforward and the API does not take long to understand, particularly if
you have some JavaScript DOM experience.
In SAX's event-based system, the parser doesn't create any internal
representation of the document. Instead, the parser calls handler functions
when certain events (defined by the SAX specification) take place. These
events include the start and end of the document, finding a text node, finding
child elements, and hitting a malformed element.
SAX development is more challenging, because the API requires development
of callback functions that handle the events. The design itself also can
sometimes be less intuitive and modular. Using a SAX parser may require
you to store information in your own internal document representation if you
need to rescan or analyze the information—SAX provides no container for the
document like the DOM tree structure.
Is having two completely different ways to parse XML data a problem? Not
really; the two parsers take very different approaches to processing the
information. The W3C DOM specification provides a very rich and intuitive
structure for housing the XML data, but can be quite resource-intensive given
that the entire XML document is typically stored in memory. You can
manipulate the DOM at run-time and stream the updated data as XML, or
transform it to your own format if you require.
The strength of the SAX specification is that it can scan and parse gigabytes
worth of XML documents without hitting resource limits, because it does not
try to create the DOM representation in memory. Instead, it raises events
that you can handle as you see fit. Because of this design, the SAX
implementation is generally faster and requires fewer resources. On the
other hand, SAX code is frequently complex, and the lack of a document
representation leaves you with the challenge of manipulating, serializing, and
traversing the XML document.
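To make the event-based model concrete, here is a minimal SAX sketch using the JDK's built-in javax.xml.parsers package. The tiny order document and the event-log format are invented for this example; the handler's callbacks fire as the parser streams through the input, and no tree is ever built:

```java
import java.io.ByteArrayInputStream;
import javax.xml.parsers.SAXParser;
import javax.xml.parsers.SAXParserFactory;
import org.xml.sax.Attributes;
import org.xml.sax.helpers.DefaultHandler;

public class SaxDemo {

    // Streams through the XML, recording each event in a log string.
    static String parse(String xml) throws Exception {
        final StringBuilder log = new StringBuilder();
        SAXParser parser = SAXParserFactory.newInstance().newSAXParser();
        parser.parse(new ByteArrayInputStream(xml.getBytes("UTF-8")),
                new DefaultHandler() {
                    @Override
                    public void startElement(String uri, String localName,
                            String qName, Attributes attrs) {
                        log.append("start:").append(qName).append(' ');
                    }

                    @Override
                    public void characters(char[] ch, int start, int length) {
                        log.append("text:")
                           .append(new String(ch, start, length)).append(' ');
                    }
                });
        return log.toString();
    }

    public static void main(String[] args) throws Exception {
        System.out.println(
            parse("<order id=\"42\"><item>book</item><item>pen</item></order>"));
    }
}
```

Note that if we needed to rescan or relate events later, we would have to accumulate state ourselves (as the log StringBuilder hints); with DOM the tree would already hold it.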

Basic I/O:

This lesson covers the Java platform classes used for basic I/O. It focuses
primarily on I/O Streams, a powerful concept that greatly simplifies I/O
operations. The lesson also looks at serialization, which lets a program write
whole objects out to streams and read them back again. Then the lesson
looks at some file system operations, including random access files. Finally, it
touches briefly on the advanced features of the New I/O API. Most of the
classes covered are in the java.io package.

I/O Streams
• Byte Streams handle I/O of raw binary data.
• Character Streams handle I/O of character data, automatically
handling translation to and from the local character set.
• Buffered Streams optimize input and output by reducing the number of
calls to the native API.
• Scanning and Formatting allows a program to read and write
formatted text.
• I/O from the Command Line describes the Standard Streams and the
Console object.
• Data Streams handle binary I/O of primitive data type and String
values.
• Object Streams handle binary I/O of objects.
File I/O
• File Objects help you to write platform-independent code that
examines and manipulates files.
• Random Access Files handle non-sequential file access.
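As a small illustration of buffered character streams and platform-independent file handling, the following sketch (the file name and contents are arbitrary) writes two lines through a BufferedWriter and reads them back with a BufferedReader:

```java
import java.io.BufferedReader;
import java.io.BufferedWriter;
import java.io.File;
import java.io.FileReader;
import java.io.FileWriter;
import java.io.IOException;
import java.util.ArrayList;
import java.util.List;

public class BufferedCopyDemo {

    // Writes the given lines to a temp file through a buffered stream,
    // then reads them back line by line.
    static List<String> roundTrip(String[] lines) throws IOException {
        File tmp = File.createTempFile("demo", ".txt");
        tmp.deleteOnExit();

        // The BufferedWriter batches many small writes into fewer native calls.
        BufferedWriter out = new BufferedWriter(new FileWriter(tmp));
        try {
            for (String line : lines) {
                out.write(line);
                out.newLine(); // platform-independent line terminator
            }
        } finally {
            out.close(); // close() flushes the buffer
        }

        List<String> read = new ArrayList<String>();
        BufferedReader in = new BufferedReader(new FileReader(tmp));
        try {
            String line;
            while ((line = in.readLine()) != null) {
                read.add(line);
            }
        } finally {
            in.close();
        }
        return read;
    }

    public static void main(String[] args) throws IOException {
        for (String line : roundTrip(new String[] {"line one", "line two"})) {
            System.out.println(line);
        }
    }
}
```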
An Overview of the New IO
This section talks about the advanced I/O packages added to version 1.4 of
the Java Platform.

Summary
A summary of the key points covered in this trail.

Isolation Levels :
Transactions not only ensure the full completion (or rollback) of the
statements that they enclose but also isolate the data modified by the
statements. The isolation level describes the degree to which the data being
updated is visible to other transactions.
Suppose that a transaction in one program updates a customer's phone
number, but before the transaction commits another program reads the
same phone number. Will the second program read the updated and
uncommitted phone number or will it read the old one? The answer depends
on the isolation level of the transaction. If the transaction allows other
programs to read uncommitted data, performance may improve because the
other programs don't have to wait until the transaction ends. But there's a
trade-off--if the transaction rolls back, another program might read the
wrong data.
You cannot modify the isolation level of entity beans with container-managed
persistence. These beans use the default isolation level of the DBMS, which is
usually READ_COMMITTED.
For entity beans with bean-managed persistence and for all session beans,
you can set the isolation level programmatically with the API provided by the
underlying DBMS. A DBMS, for example, might allow you to permit
uncommitted reads by invoking the setTransactionIsolation method:
Connection con;
...
con.setTransactionIsolation(Connection.TRANSACTION_READ_UNCOMMITTED);
Do not change the isolation level in the middle of a transaction. Usually, such
a change causes the DBMS software to issue an implicit commit. Because the
isolation levels offered by DBMS vendors may vary, you should check the
DBMS documentation for more information. Isolation levels are not
standardized for the J2EE platform.

Transaction Isolation Level

Introduction
In this article, I want to tell you about Transaction Isolation Level in SQL
Server 6.5 and SQL Server 7.0, what kinds of Transaction Isolation Level
exist, and how you can set the appropriate Transaction Isolation Level.

There are four isolation levels:

 READ UNCOMMITTED
 READ COMMITTED

 REPEATABLE READ

 SERIALIZABLE

SQL Server 6.5 supports all of these transaction isolation levels, but has
only three distinct behaviors, because in SQL Server 6.5 REPEATABLE
READ and SERIALIZABLE are synonyms. This is because SQL Server 6.5
supports only page locking (row-level locking is not fully supported as it is
in SQL Server 7.0), so if the REPEATABLE READ isolation level is set,
another transaction cannot insert a row before the first transaction has
finished, because the page will be locked. So there are no phantoms in SQL
Server 6.5 when the REPEATABLE READ isolation level is set.

SQL Server 7.0 supports all of these Transaction Isolation Levels and can
separate REPEATABLE READ and SERIALIZABLE.
Let me describe each isolation level.

read uncommitted
When it is used, SQL Server does not issue shared locks while reading data.
So you can read an uncommitted transaction that might get rolled back later.
This isolation level is also called dirty read. This is the lowest isolation level.
It ensures only that physically corrupt data will not be read.
read committed
This is the default isolation level in SQL Server. When it is used, SQL Server
will use shared locks while reading data. It ensures that physically corrupt
data will not be read and that data another application has changed but not
yet committed will never be read, but it does not ensure that the data will
not be changed before the end of the transaction.

repeatable read
When it is used, dirty reads and nonrepeatable reads cannot occur. This
means that locks will be placed on all data that is used in a query, and
other transactions cannot update that data.

This is the definition of nonrepeatable read from SQL Server Books Online:

nonrepeatable read
When a transaction reads the same row more than one time, and between the
two (or more) reads, a separate transaction modifies that row. Because the
row was modified between reads within the same transaction, each read
produces different values, which introduces inconsistency.

serializable
This is the most restrictive isolation level. When it is used, phantom values
cannot occur. It prevents other users from updating or inserting rows into
the data set until the transaction is complete.

This is the definition of phantom from SQL Server Books Online:

phantom
Phantom behavior occurs when a transaction attempts to select a row that
does not exist and a second transaction inserts the row before the first
transaction finishes. If the row is inserted, the row appears as a phantom
to the first transaction, inconsistently appearing and disappearing.
You can set the appropriate isolation level for an entire SQL Server session
by using the SET TRANSACTION ISOLATION LEVEL statement. This is the
syntax from SQL Server Books Online:

SET TRANSACTION ISOLATION LEVEL


{
READ COMMITTED
| READ UNCOMMITTED
| REPEATABLE READ
| SERIALIZABLE
}
You can use DBCC USEROPTIONS command to determine the Transaction
Isolation Level currently set. This command returns the set options that are
active for the current connection. This is the example:

SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED


GO
DBCC USEROPTIONS
GO
This is the result:

Set Option Value


------------------------------ ------------------------------------
textsize 64512
language us_english
dateformat mdy
datefirst 7
isolation level read uncommitted

I'm not a db expert, but my dba told me that for our app we need to set the
isolation level to read uncommitted. We're currently using Oracle 10g
Express Edition (XE)

So in my persistence.xml I added this:

<property name="hibernate.connection.isolation" value="1" />

The value 1 corresponds to read uncommitted (TRANSACTION_READ_UNCOMMITTED) in java.sql.Connection.

When I tried to run the app, I got this exception:

java.sql.SQLException: READ_COMMITTED and SERIALIZABLE are the only


valid transaction levels

Is this a limitation of Oracle XE ? What would cause this and how can I fix it?

hibernate.connection.isolation Set the JDBC transaction isolation level. Check


java.sql.Connection for meaningful values but note that most databases do
not support all isolation levels.
eg. 1, 2, 4, 8
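Those numeric values are simply the transaction isolation constants defined on java.sql.Connection, which a quick sketch can confirm:

```java
import java.sql.Connection;

public class IsolationConstants {
    public static void main(String[] args) {
        // The numeric values accepted by hibernate.connection.isolation
        // are the java.sql.Connection isolation constants.
        System.out.println(Connection.TRANSACTION_READ_UNCOMMITTED); // 1
        System.out.println(Connection.TRANSACTION_READ_COMMITTED);   // 2
        System.out.println(Connection.TRANSACTION_REPEATABLE_READ);  // 4
        System.out.println(Connection.TRANSACTION_SERIALIZABLE);     // 8
    }
}
```

So the exception above means Oracle only accepts the values 2 and 8 for setTransactionIsolation.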

Session Tracking Mechanisms:


1) create a filter that does your tests for a "valid" session but only
check it if the "isNewSession" flag in a session is not set. When you
find a bad session, call session.invalidate() and forward to an
<action-forward> for the login page. You should add redirect="true" to
the action-forward so that the browser will load the login page as if
they were coming in the first time. If you need to have the
sessionExpire page you can have it display and either embed a button
that will bring user to the login page or have a meta redirect tag
that redirects them after x seconds.

The real question is: why are you going to re-check authentication
every time they come in? Is there anything more you need to check for it
to be a "valid" session? Remember that a given browser's requests will
always go to the same session; so once they are validated, you can set
a flag in the session that they have been validated and not worry
about re-authentication.

Session tracking options


HTTP session support also involves session tracking. You can use cookies,
URL rewriting, or Secure Sockets Layer (SSL) information for session
tracking.
The following tracking methods are available:

• Session tracking with cookies


• Session tracking with URL rewriting
• Session tracking with Secure Sockets Layer (SSL) information
Session tracking with cookies
Tracking sessions with cookies is the default. No special programming
is required to track sessions with cookies.
Session tracking with URL rewriting

An application that uses URL rewriting to track sessions must adhere


to certain programming guidelines. The application developer needs to
do the following:
• Program servlets to encode URLs
• Supply a servlet or JavaServer Pages (JSP) file as an entry point
to the application
Using URL rewriting also requires that you enable URL rewriting in the
session management facility.
Note: In certain cases, clients cannot accept cookies. Therefore, you
cannot use cookies as a session tracking mechanism. Applications can
use URL rewriting as a substitute.
Program session servlets to encode URLs
Depending on whether the servlet is returning URLs to the browser or
redirecting them, include either the encodeURL method or the
encodeRedirectURL method in the servlet code. Examples
demonstrating what to replace in your current servlet code follow.

Rewrite URLs to return to the browser


Suppose you currently have this statement:
out.println("<a href=\"/store/catalog\">catalog</a>");
Change the servlet to call the encodeURL method before sending the
URL to the output stream:
out.println("<a href=\"");
out.println(response.encodeURL ("/store/catalog"));
out.println("\">catalog</a>");

Rewrite URLs to redirect


Suppose you currently have the following statement:
response.sendRedirect ("http://myhost/store/catalog");
Change the servlet to call the encodeRedirectURL method before
sending the URL to the output stream:
response.sendRedirect (response.encodeRedirectURL
("http://myhost/store/catalog"));
The encodeURL method and encodeRedirectURL method are part of
the HttpServletResponse object. These calls check to see if URL
rewriting is configured before encoding the URL. If it is not configured,
the calls return the original URL.
If both cookies and URL rewriting are enabled and the
response.encodeURL method or encodeRedirectURL method is called,
the URL is encoded, even if the browser making the HTTP request
processed the session cookie.
You can also configure session support to enable protocol switch
rewriting. When this option is enabled, the product encodes the URL
with the session ID for switching between HTTP and HTTPS protocols.

Supply a servlet or JSP file as an entry point


The entry point to an application, such as the initial screen presented,
may not require the use of sessions. However, if the application in
general requires session support (meaning some part of it, such as a
servlet, requires session support), then after a session is created, all
URLs are encoded to perpetuate the session ID for the servlet (or
other application component) requiring the session support.
The following example shows how you can embed Java code within a
JSP file:
<%= response.encodeURL("/store/catalog") %>
Session tracking with SSL information

No special programming is required to track sessions with Secure


Sockets Layer (SSL) information.
To use SSL information, turn on Enable SSL ID tracking in the
session management property page. Because the SSL session ID is
negotiated between the Web browser and HTTP server, this ID cannot
survive an HTTP server failure. However, the failure of an application
server does not affect the SSL session ID if an external HTTP server is
present between WebSphere Application Server and the browser.
SSL tracking is supported for the IBM HTTP Server and iPlanet Web
servers only. You can control the lifetime of an SSL session ID by
configuring options in the Web server. For example, in the IBM HTTP
Server, set the configuration variable SSLV3TIMEOUT to provide an
adequate lifetime for the SSL session ID. An interval that is too short
can cause a premature termination of a session. Also, some Web
browsers might have their own timers that affect the lifetime of the
SSL session ID. These Web browsers may not leave the SSL session ID
active long enough to serve as a useful mechanism for session
tracking. The internal HTTP Server of WebSphere Application Server
also supports SSL tracking.
When using the SSL session ID as the session tracking mechanism in a
cloned environment, use either cookies or URL rewriting to maintain
session affinity. The cookie or rewritten URL contains session affinity
information that enables the Web server to properly route a session
back to the same server for each request.

Enabling URL rewriting or cookies on a WebSphere Application Server


v5.1
When you are testing or publishing a Web project to a WebSphere®
Application Server v5.1, you can use session management to enable URL
rewriting or cookies.

In certain cases, clients cannot accept cookies. Therefore, you cannot use
cookies as a session tracking mechanism. Applications can use URL rewriting
as a substitute.

To enable URL rewriting or cookies on a WebSphere Application Server v5.1:


1. In the Servers view, double-click your WebSphere Application Server
v5.1 test environment or server. The server editor opens.
2. Click the Web tab at the bottom of the editor.
3. In the Server settings section, select the Enable URL rewriting check
box. If URL rewriting is enabled, session tracking uses rewritten URLs
to carry the session IDs.
4. To enable cookies, select the Enable cookies check box. If cookies are
enabled, session tracking recognizes session IDs that arrive as cookies
and tries to use cookies for sending session IDs.
5. Save your changes and close the editor.

TreeSet: this is a sorted collection of unique elements. For strings and
integers it sorts using their natural ordering; when we add our own objects,
it calls the compareTo method to decide both ordering and uniqueness: two
objects that compare as equal are treated as duplicates.
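A minimal sketch (the Person class is invented for the example) showing how compareTo drives both the ordering and the duplicate detection:

```java
import java.util.TreeSet;

public class TreeSetDemo {

    // TreeSet decides both ordering and uniqueness via compareTo():
    // two persons whose compareTo() returns 0 count as duplicates,
    // even if equals() would say otherwise.
    static class Person implements Comparable<Person> {
        final String name;

        Person(String name) { this.name = name; }

        public int compareTo(Person other) {
            return name.compareTo(other.name);
        }

        public String toString() { return name; }
    }

    public static void main(String[] args) {
        TreeSet<Person> set = new TreeSet<Person>();
        set.add(new Person("carol"));
        set.add(new Person("alice"));
        set.add(new Person("alice")); // duplicate by compareTo(), dropped
        System.out.println(set); // [alice, carol]
    }
}
```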

Struts:

1. ActionServlet: a servlet that acts as the controller and handles all HTTP
requests in the Struts framework. It simply extends HttpServlet and
overrides the doGet and doPost methods. The controller servlet's job is to
hand the request to the appropriate handler class, that is, our Action class.
To do this, the controller uses an XML configuration file, usually called
struts-config.xml.

Whenever an HTTP request reaches the container, the container reads
web.xml, where we have defined our Struts controller to handle all requests
arriving for a specified path, as shown below:

<servlet>
    <servlet-name>action</servlet-name>
    <servlet-class>org.apache.struts.action.ActionServlet</servlet-class>
    <init-param>
        <param-name>config</param-name>
        <param-value>/WEB-INF/config/myconfig.xml</param-value>
    </init-param>
    <load-on-startup>1</load-on-startup>
</servlet>

<servlet-mapping>
    <servlet-name>action</servlet-name>
    <url-pattern>*.do</url-pattern>
</servlet-mapping>

1. As we can see, the URL mapping *.do redirects all matching requests to
the ActionServlet.

2. The init parameter gives the name of the XML file the ActionServlet reads
on initialization.

3. A load-on-startup value of 1 tells the container to load the servlet on
server startup instead of waiting for the first request.

Basically the controller does not do much work itself; it hands over the
responsibility of serving the request to the RequestProcessor class by
calling its process method. The RequestProcessor then reads from the
struts-config file the XML block that matches the requested URL. This block
is called the action mapping block, and the ActionMapping class holds the
mapping between the URL and that block.

The RequestProcessor uses the action mapping block to find the request
handler (Action) class for the request.

The RequestProcessor also uses the name attribute of the action mapping
block to instantiate the form. The name of the form is a logical name;
corresponding to this logical name there is a form-bean tag that gives the
fully qualified class name of the form to be populated.

The RequestProcessor puts the form into the scope defined in the action
mapping block, which may be request or session. Next it populates the form
using Java introspection, a special form of Java reflection.

Action class: this is the handler class for a request. The RequestProcessor
instantiates the Action class and passes the request, response, mapping,
and form objects to its execute method.

The execute method is the place where the programmer should write
presentation logic, not business logic; execute should only call out to the
business component. That way, if someone later wants to replace Struts
with some other framework, the business logic does not need to change.

The Action class is in the org.apache.struts.action package.

ActionForward: this class encapsulates the next view; the return type of the
Action class's execute method is ActionForward.

ActionMapping and ActionForward work together: the mapping object's
findForward method returns an ActionForward object. findForward looks up
the actual path of the JSP in the struts-config file based on the name passed
to it as a parameter. For example:

ActionForward fwd = mapping.findForward("nextView");

This searches for the forward whose name is nextView and forwards to that
JSP. findForward looks first in the local forwards, then in the global
forwards.

IMP: findForward works much like a RequestDispatcher forward.

ActionErrors, ActionError and the validate method: ActionErrors is a map of
ActionError objects, each stored under an associated key.

The validate method returns an ActionErrors object and takes the mapping
and request as input parameters.

We can add an error to the error collection like this:

errors.add("firstName", new ActionError("error.firstName.null"));

Here firstName is the field name and error.firstName.null is the key in the
properties file for the message to show.

The request processor stops further processing when it gets a non-empty
ActionErrors object from validate, and redirects to the page specified as
the input attribute of the action mapping block.

Threads:

Methods to create a thread:

Extending the Thread class:

public class MyThreadUsingThread extends Thread {

    String s = null;

    public MyThreadUsingThread(String s1) {
        s = s1;
        start();
    }

    public void run() {
        System.out.println(s);
    }
}

Implementing the Runnable interface:

public class MyThreadRunnable implements Runnable {

    String str = null;

    public MyThreadRunnable(String s) {
        str = s;
        Thread t = new Thread(this);
        t.start();
    }

    public void run() {
        System.out.println(str);
    }
}

wait() and notify():

synchronized (x) {
    x.wait();
}

Here x is the object on which we are synchronizing, meaning only one
thread at a time can enter the synchronized block to call the wait method.

To wake a waiting thread we call notify or notifyAll. When calling either of
these methods (or wait itself), the thread must hold the lock on the object;
otherwise the runtime exception IllegalMonitorStateException is thrown.
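The rules above can be sketched in a short runnable example (the class and variable names are our own, not from any framework):

```java
public class WaitNotifyDemo {
    private static final Object lock = new Object();
    private static boolean ready = false;
    static String result = null;

    public static void main(String[] args) throws InterruptedException {
        Thread waiter = new Thread(() -> {
            synchronized (lock) {
                // wait() must be called while holding the lock on 'lock',
                // otherwise IllegalMonitorStateException is thrown.
                while (!ready) {
                    try {
                        lock.wait();   // releases the lock while waiting
                    } catch (InterruptedException e) {
                        Thread.currentThread().interrupt();
                        return;
                    }
                }
                result = "woken up";
            }
        });
        waiter.start();

        synchronized (lock) {          // we must also hold the lock to notify
            ready = true;
            lock.notify();
        }
        waiter.join();                 // wait for the waiter thread to finish
        System.out.println(result);    // prints: woken up
    }
}
```

The while loop around wait() also guards against the case where notify happens before the waiter reaches wait().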
----------------------------------------------

join():-

The join method is invoked on a thread object to wait for that thread to
complete first. For example, if the main thread has one child thread and we
call join on the child, we are telling the main thread to continue only after
the child thread has completed. We can also pass a time in milliseconds when
calling the join method; after that timeout the main thread resumes
regardless of whether the child thread has completed.
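A minimal sketch of join (class names are our own):

```java
public class JoinDemo {
    static StringBuilder log = new StringBuilder();

    public static void main(String[] args) throws InterruptedException {
        Thread child = new Thread(() -> log.append("child done; "));
        child.start();
        child.join();            // main blocks here until the child completes
        // child.join(500);      // timed variant: wait at most 500 ms
        log.append("main continues");
        System.out.println(log); // prints: child done; main continues
    }
}
```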

Thread Priorities
The JVM selects for running a Runnable thread with the highest priority.
All Java threads have a priority in the range 1-10.
Top priority is 10, lowest priority is 1. Normal priority, i.e. the priority
by default, is 5.
Thread.MIN_PRIORITY - minimum thread priority (1)
Thread.MAX_PRIORITY - maximum thread priority (10)
Thread.NORM_PRIORITY - normal (default) thread priority (5)

Whenever a new Java thread is created, it has the same priority as the thread
which created it.
Thread priority can be changed with the setPriority() method.
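A small sketch of the priority constants and setPriority() (the inherited priority printed first depends on the creating thread, usually 5):

```java
public class PriorityDemo {
    public static void main(String[] args) {
        Thread t = new Thread(() -> {});
        // A new thread inherits the priority of the thread that created it.
        System.out.println("inherited: " + t.getPriority());
        t.setPriority(Thread.MAX_PRIORITY);
        System.out.println("after setPriority: " + t.getPriority()); // prints: after setPriority: 10
        System.out.println(Thread.MIN_PRIORITY + " "
                + Thread.NORM_PRIORITY + " "
                + Thread.MAX_PRIORITY);                              // prints: 1 5 10
    }
}
```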

ANT Basics:
Tutorial: Hello World with Ant
This document provides a step-by-step tutorial for starting Java programming
with Ant. It does not contain deeper knowledge about Java or Ant. The goal of
this tutorial is to show you how to do the easiest steps in Ant.

Content
• Preparing the project
• Four steps to a running application
• Enhance the build file
• Using external libraries
• Resources

Preparing the project


We want to separate the source from the generated files, so our Java source
files will be in the src folder. All generated files should be under build, and
split there into several subdirectories for the individual steps: classes for
our compiled files and jar for our own JAR file.
We have to create only the src directory. (Because I am working on
Windows, here is the win-syntax - translate to your shell):
md src
The following simple Java class just prints a fixed message out to STDOUT,
so just write this code into src\oata\HelloWorld.java.
package oata;

public class HelloWorld {


public static void main(String[] args) {
System.out.println("Hello World");
}
}
Now just try to compile and run that:
md build\classes
javac -sourcepath src -d build\classes src\oata\HelloWorld.java
java -cp build\classes oata.HelloWorld
which will result in

Hello World
Creating a jar file is not very difficult. But creating a startable jar file needs
more steps: creating a manifest file containing the start class, creating the
target directory and archiving the files.
echo Main-Class: oata.HelloWorld>myManifest
md build\jar
jar cfm build\jar\HelloWorld.jar myManifest -C build\classes .
java -jar build\jar\HelloWorld.jar
Note: Do not have blanks around the >-sign in the echo Main-
Class instruction because it would falsify it!

Four steps to a running application


After finishing the java-only step we have to think about our build process.
We have to compile our code, otherwise we couldn't start the program. Oh -
"start" - yes, we could provide a target for that. We should package our
application. Now it's only one class - but if you want to provide a download,
no one would download several hundred files ... (think about a complex
Swing GUI) - so let us create a jar file. A startable jar file would be nice ...
And it's good practice to have a "clean" target, which deletes all the
generated stuff. Many failures can be solved just by a "clean build".
By default Ant uses build.xml as the name for a buildfile, so
our .\build.xml would be:
<project>

<target name="clean">
<delete dir="build"/>
</target>

<target name="compile">
<mkdir dir="build/classes"/>
<javac srcdir="src" destdir="build/classes"/>
</target>

<target name="jar">
<mkdir dir="build/jar"/>
<jar destfile="build/jar/HelloWorld.jar" basedir="build/classes">
<manifest>
<attribute name="Main-Class" value="oata.HelloWorld"/>
</manifest>
</jar>
</target>

<target name="run">
<java jar="build/jar/HelloWorld.jar" fork="true"/>
</target>

</project>
Now you can compile, package and run the application via
ant compile
ant jar
ant run
Or shorter with
ant compile jar run
While having a look at the buildfile, we will see some similar steps between
Ant and the java-only commands:
java-only                                  Ant

md build\classes                           <mkdir dir="build/classes"/>

javac                                      <javac
    -sourcepath src                            srcdir="src"
    -d build\classes                           destdir="build/classes"/>
    src\oata\HelloWorld.java               <!-- automatically detected -->

echo Main-Class: oata.HelloWorld>mf        <!-- obsolete; done via manifest tag -->

md build\jar                               <mkdir dir="build/jar"/>

jar cfm                                    <jar
    build\jar\HelloWorld.jar                   destfile="build/jar/HelloWorld.jar"
    mf                                         basedir="build/classes">
    -C build\classes .                         <manifest>
                                                   <attribute name="Main-Class"
                                                              value="oata.HelloWorld"/>
                                               </manifest>
                                           </jar>

java -jar build\jar\HelloWorld.jar         <java jar="build/jar/HelloWorld.jar" fork="true"/>

Enhance the build file


Now that we have a working buildfile we could make some enhancements: many
times you are referencing the same directories, the main class and jar name
are hard coded, and when invoking the build you have to remember the right
order of the build steps.
The first and second points are addressed with properties, the third with a
special property - an attribute of the <project> tag - and the fourth problem
can be solved using dependencies.
<project name="HelloWorld" basedir="." default="main">

<property name="src.dir" value="src"/>

<property name="build.dir" value="build"/>


<property name="classes.dir" value="${build.dir}/classes"/>
<property name="jar.dir" value="${build.dir}/jar"/>

<property name="main-class" value="oata.HelloWorld"/>

<target name="clean">
<delete dir="${build.dir}"/>
</target>

<target name="compile">
<mkdir dir="${classes.dir}"/>
<javac srcdir="${src.dir}" destdir="${classes.dir}"/>
</target>

<target name="jar" depends="compile">


<mkdir dir="${jar.dir}"/>
<jar destfile="${jar.dir}/${ant.project.name}.jar" basedir="${classes.dir}">
<manifest>
<attribute name="Main-Class" value="${main-class}"/>
</manifest>
</jar>
</target>

<target name="run" depends="jar">


<java jar="${jar.dir}/${ant.project.name}.jar" fork="true"/>
</target>

<target name="clean-build" depends="clean,jar"/>

<target name="main" depends="clean,run"/>

</project>
Now it's easier: just run ant and you will get
Buildfile: build.xml

clean:

compile:
[mkdir] Created dir: C:\...\build\classes
[javac] Compiling 1 source file to C:\...\build\classes

jar:
[mkdir] Created dir: C:\...\build\jar
[jar] Building jar: C:\...\build\jar\HelloWorld.jar

run:
[java] Hello World

main:

BUILD SUCCESSFUL
Using external libraries
Somebody told us not to use syso statements. For log statements we should
use a logging API that is customizable to a high degree (including switching
logging off during normal (= non-development) execution). We use Log4J for
that, because
• it is not part of the JDK (1.4+) and we want to show how to use
external libs
• it can run under JDK 1.2 (like Ant)
• it's highly configurable
• it's from Apache ;-)
We store our external libraries in a new directory lib. Log4J can
be downloaded [1] from Logging's Homepage. Create the lib directory and
extract the log4j-1.2.9.jar into that lib-directory. After that we have to
modify our java source to use that library and our buildfile so that this library
could be accessed during compilation and run.
Working with Log4J is documented inside its manual. Here we use
the MyApp-example from the Short Manual [2]. First we have to modify the
java source to use the logging framework:
package oata;

import org.apache.log4j.Logger;
import org.apache.log4j.BasicConfigurator;

public class HelloWorld {


static Logger logger = Logger.getLogger(HelloWorld.class);

public static void main(String[] args) {


BasicConfigurator.configure();
logger.info("Hello World"); // the old SysO-statement
}
}
Most of the modifications are "framework overhead" which has to be done
once. The logger.info call replaces our old System.out statement.
Don't try to run ant yet - you will only get a lot of compiler errors. Log4J is
not on the classpath, so we have to do a little work here. But do not change
the CLASSPATH environment variable! This is only for this project, and maybe
you would break other environments (this is one of the most common
mistakes when working with Ant). We introduce Log4J (or, to be more
precise, all libraries (jar files) which are somewhere under .\lib) into our
buildfile:
<project name="HelloWorld" basedir="." default="main">
...
<property name="lib.dir" value="lib"/>
<path id="classpath">
<fileset dir="${lib.dir}" includes="**/*.jar"/>
</path>

...

<target name="compile">
<mkdir dir="${classes.dir}"/>
<javac srcdir="${src.dir}" destdir="${classes.dir}"
classpathref="classpath"/>
</target>

<target name="run" depends="jar">


<java fork="true" classname="${main-class}">
<classpath>
<path refid="classpath"/>
<path location="${jar.dir}/${ant.project.name}.jar"/>
</classpath>
</java>
</target>

...

</project>
In this example we do not start our application via its Main-Class manifest
attribute, because that way we could not provide both a jar name and a
classpath. So we add our own jar to the already defined path via the
<path location=.../> element and start as usual.
Running ant would give (after the usual compile stuff):
[java] 0 [main] INFO oata.HelloWorld - Hello World
What's that?
• [java] the Ant task running at the moment
• 0 the number of milliseconds elapsed since the program started (Log4J's %r conversion)
• [main] the running thread from our application
• INFO the log level of that statement
• oata.HelloWorld the source of that statement
• - a separator
• Hello World the message
For another layout ... have a look at Log4J's documentation about using
other PatternLayouts.

Configuration files
Why did we use Log4J? Because "it's highly configurable"? No - so far
everything is hard coded! But that is not the fault of Log4J - it's ours. We had
coded BasicConfigurator.configure();, which implies a simple but hard-coded
configuration. More comfortable would be using a property file. In the Java
source, delete the BasicConfigurator line from the main() method (and the
related import statement). Log4J will then search for a configuration as
described in its manual. Then create a new file src/log4j.properties. That's
the default name for Log4J's configuration, and using that name makes
life easier - not only does the framework know what is inside, you do too!
log4j.rootLogger=DEBUG, stdout

log4j.appender.stdout=org.apache.log4j.ConsoleAppender

log4j.appender.stdout.layout=org.apache.log4j.PatternLayout
log4j.appender.stdout.layout.ConversionPattern=%m%n
This configuration creates an output channel ("Appender") to the console,
named stdout, which prints the message (%m) followed by a line feed (%n) -
the same as the earlier System.out.println() :-) Oooh kay - but we haven't
finished yet. We should deliver the configuration file, too. So we change the
buildfile:
...
<target name="compile">
<mkdir dir="${classes.dir}"/>
<javac srcdir="${src.dir}" destdir="${classes.dir}"
classpathref="classpath"/>
<copy todir="${classes.dir}">
<fileset dir="${src.dir}" excludes="**/*.java"/>
</copy>
</target>
...
This copies all resources (as long as they don't have the suffix ".java") to the
build directory, so we can start the application from that directory and
these files will be included in the jar.

Testing the class


In this step we will introduce the usage of the JUnit [3] test framework in
combination with Ant. Because Ant has a built-in JUnit 3.8.2 you can start
using it directly. Write a test class in src\HelloWorldTest.java:
public class HelloWorldTest extends junit.framework.TestCase {

public void testNothing() {


}

public void testWillAlwaysFail() {


fail("An error message");
}

}
Because we don't have real business logic to test, this test class is very small:
it just shows how to start. For further information see the JUnit documentation
[3] and the manual of the junit task. Now we add a junit instruction to our
buildfile:
...

<target name="run" depends="jar">


<java fork="true" classname="${main-class}">
<classpath>
<path refid="classpath"/>
<path id="application" location="${jar.dir}/${ant.project.name}.jar"/>
</classpath>
</java>
</target>

<target name="junit" depends="jar">


<junit printsummary="yes">
<classpath>
<path refid="classpath"/>
<path refid="application"/>
</classpath>

<batchtest fork="yes">
<fileset dir="${src.dir}" includes="*Test.java"/>
</batchtest>
</junit>
</target>

...

We reuse the path to our own jar file as defined in the run target by giving it
an ID. The printsummary=yes lets us see more detailed information than just a
"FAILED" or "PASSED" message. How many tests failed? Any errors?
printsummary lets us know. The classpath is set up to find our classes. To
run the tests, batchtest is used here, so you can easily add more test
classes in the future just by naming them *Test.java. This is a common
naming scheme.
After running ant junit you'll get:
...
junit:
[junit] Running HelloWorldTest
[junit] Tests run: 2, Failures: 1, Errors: 0, Time elapsed: 0,01 sec
[junit] Test HelloWorldTest FAILED

BUILD SUCCESSFUL
...
We can also produce a report. Something that you (and other) could read
after closing the shell .... There are two steps: 1. let <junit> log the
information and 2. convert these to something readable (browsable).
...
<property name="report.dir" value="${build.dir}/junitreport"/>
...
<target name="junit" depends="jar">
<mkdir dir="${report.dir}"/>
<junit printsummary="yes">
<classpath>
<path refid="classpath"/>
<path refid="application"/>
</classpath>

<formatter type="xml"/>

<batchtest fork="yes" todir="${report.dir}">


<fileset dir="${src.dir}" includes="*Test.java"/>
</batchtest>
</junit>
</target>

<target name="junitreport">
<junitreport todir="${report.dir}">
<fileset dir="${report.dir}" includes="TEST-*.xml"/>
<report todir="${report.dir}"/>
</junitreport>
</target>
Because we would produce a lot of files, and these files would be written to
the current directory by default, we define a report directory, create it before
running the junit task, and redirect the logging to it. The log format is XML
so junitreport can parse it. In a second target, junitreport creates a
browsable HTML report from all the generated XML log files in the report
directory. Now you can open ${report.dir}\index.html and see the result
(it looks something like JavaDoc).
Personally I use two different targets for junit and junitreport. Generating the
HTML report needs some time, and you don't need the HTML report just for
testing, e.g. if you are fixing an error or an integration server is doing a job.

<?xml version="1.0"?>
<!-- Build file for our first application -->
<project name="Ant test project" default="build"
basedir=".">
<target name="build" >
<javac srcdir="src" destdir="build/src"
debug="true"
includes="**/*.java"
/>
</target>
</project>

The first line of the build.xml file is the XML declaration. The next
line is a comment entry. The third line is the project tag. Each buildfile
contains one project tag, and all the instructions are written inside the
project tag.
The project tag:
<project name="Ant test project" default="build" basedir=".">
requires three attributes namely name, default and basedir.
Here is the description of the attributes:
Attribute   Description
name        Represents the name of the project.
default     Name of the default target to use when no target is supplied.
basedir     Name of the base directory from which all path calculations
            are done.

All the attributes are required.
One project may contain one or more targets. In this example there is only
one target.
<target name="build" >
<javac srcdir="src" destdir="build/src" debug="true"
includes="**/*.java"
/>
</target>
This target uses the javac task to compile the Java files.
Here is the code of our test1.java file which is to be compiled by the Ant
utility.

class test1 {
    public static void main(String args[]) {
        System.out.println("This is example 1");
    }
}

Hibernate Properties:

Table 3.4. Hibernate JDBC and Connection Properties

hibernate.jdbc.fetch_size
    A non-zero value determines the JDBC fetch size (calls
    Statement.setFetchSize()).

hibernate.jdbc.batch_size
    A non-zero value enables use of JDBC2 batch updates by Hibernate.
    eg. recommended values between 5 and 30

hibernate.jdbc.batch_versioned_data
    Set this property to true if your JDBC driver returns correct row counts
    from executeBatch() (it is usually safe to turn this option on). Hibernate
    will then use batched DML for automatically versioned data. Defaults to
    false.
    eg. true | false

hibernate.jdbc.factory_class
    Select a custom Batcher. Most applications will not need this
    configuration property.
    eg. classname.of.Batcher

hibernate.jdbc.use_scrollable_resultset
    Enables use of JDBC2 scrollable resultsets by Hibernate. This property is
    only necessary when using user-supplied JDBC connections; Hibernate uses
    connection metadata otherwise.
    eg. true | false

hibernate.jdbc.use_streams_for_binary
    Use streams when writing/reading binary or serializable types to/from
    JDBC (system-level property).
    eg. true | false

hibernate.jdbc.use_get_generated_keys
    Enable use of JDBC3 PreparedStatement.getGeneratedKeys() to retrieve
    natively generated keys after insert. Requires a JDBC3+ driver and
    JRE 1.4+; set to false if your driver has problems with the Hibernate
    identifier generators. By default, tries to determine the driver
    capabilities using connection metadata.
    eg. true | false

hibernate.connection.provider_class
    The classname of a custom ConnectionProvider which provides JDBC
    connections to Hibernate.
    eg. classname.of.ConnectionProvider

hibernate.connection.isolation
    Set the JDBC transaction isolation level. Check java.sql.Connection for
    meaningful values, but note that most databases do not support all
    isolation levels.
    eg. 1, 2, 4, 8

hibernate.connection.autocommit
    Enables autocommit for JDBC pooled connections (not recommended).
    eg. true | false

hibernate.connection.release_mode
    Specify when Hibernate should release JDBC connections. By default, a
    JDBC connection is held until the session is explicitly closed or
    disconnected. For an application server JTA datasource, you should use
    after_statement to aggressively release connections after every JDBC
    call. For a non-JTA connection, it often makes sense to release the
    connection at the end of each transaction, by using after_transaction.
    auto will choose after_statement for the JTA and CMT transaction
    strategies and after_transaction for the JDBC transaction strategy.
    eg. auto (default) | on_close | after_transaction | after_statement
    Note that this setting only affects Sessions returned from
    SessionFactory.openSession. For Sessions obtained through
    SessionFactory.getCurrentSession, the CurrentSessionContext
    implementation configured for use controls the connection release mode
    for those Sessions. See Section 2.5, "Contextual Sessions".

hibernate.connection.<propertyName>
    Pass the JDBC property propertyName to DriverManager.getConnection().

hibernate.jndi.<propertyName>
    Pass the property propertyName to the JNDI InitialContextFactory.
Table 3.5. Hibernate Cache Properties

hibernate.cache.provider_class
    The classname of a custom CacheProvider.
    eg. classname.of.CacheProvider

hibernate.cache.use_minimal_puts
    Optimize second-level cache operation to minimize writes, at the cost of
    more frequent reads. This setting is most useful for clustered caches
    and, in Hibernate3, is enabled by default for clustered cache
    implementations.
    eg. true | false

hibernate.cache.use_query_cache
    Enable the query cache; individual queries still have to be set
    cacheable.
    eg. true | false

hibernate.cache.use_second_level_cache
    May be used to completely disable the second-level cache, which is
    enabled by default for classes which specify a <cache> mapping.
    eg. true | false

hibernate.cache.query_cache_factory
    The classname of a custom QueryCache interface; defaults to the built-in
    StandardQueryCache.
    eg. classname.of.QueryCache

hibernate.cache.region_prefix
    A prefix to use for second-level cache region names.
    eg. prefix

hibernate.cache.use_structured_entries
    Forces Hibernate to store data in the second-level cache in a more
    human-friendly format.
    eg. true | false
Table 3.6. Hibernate Transaction Properties

hibernate.transaction.factory_class
    The classname of a TransactionFactory to use with the Hibernate
    Transaction API (defaults to JDBCTransactionFactory).
    eg. classname.of.TransactionFactory

jta.UserTransaction
    A JNDI name used by JTATransactionFactory to obtain the JTA
    UserTransaction from the application server.
    eg. jndi/composite/name

hibernate.transaction.manager_lookup_class
    The classname of a TransactionManagerLookup - required when JVM-level
    caching is enabled or when using the hilo generator in a JTA
    environment.
    eg. classname.of.TransactionManagerLookup

hibernate.transaction.flush_before_completion
    If enabled, the session will be automatically flushed during the before
    completion phase of the transaction. Built-in and automatic session
    context management is preferred; see Section 2.5, "Contextual Sessions".
    eg. true | false

hibernate.transaction.auto_close_session
    If enabled, the session will be automatically closed during the after
    completion phase of the transaction. Built-in and automatic session
    context management is preferred; see Section 2.5, "Contextual Sessions".
    eg. true | false
Table 3.7. Miscellaneous Properties

hibernate.current_session_context_class
    Supply a (custom) strategy for the scoping of the "current" Session.
    See Section 2.5, "Contextual Sessions" for more information about the
    built-in strategies.
    eg. jta | thread | managed | custom.Class

hibernate.query.factory_class
    Chooses the HQL parser implementation.
    eg. org.hibernate.hql.ast.ASTQueryTranslatorFactory or
    org.hibernate.hql.classic.ClassicQueryTranslatorFactory

hibernate.query.substitutions
    Mapping from tokens in Hibernate queries to SQL tokens (tokens might be
    function or literal names, for example).
    eg. hqlLiteral=SQL_LITERAL, hqlFunction=SQLFUNC

hibernate.hbm2ddl.auto
    Automatically validate or export schema DDL to the database when the
    SessionFactory is created. With create-drop, the database schema will be
    dropped when the SessionFactory is closed explicitly.
    eg. validate | update | create | create-drop

hibernate.cglib.use_reflection_optimizer
    Enables use of CGLIB instead of runtime reflection (system-level
    property). Reflection can sometimes be useful when troubleshooting; note
    that Hibernate always requires CGLIB even if you turn off the optimizer.
    You cannot set this property in hibernate.cfg.xml.
    eg. true | false

3.4.1. SQL Dialects


You should always set the hibernate.dialect property to the correct
org.hibernate.dialect.Dialect subclass for your database. If you specify a
dialect, Hibernate will use sensible defaults for some of the other properties
listed above, saving you the effort of specifying them manually.
Table 3.8. Hibernate SQL Dialects (hibernate.dialect)

RDBMS Dialect

DB2 org.hibernate.dialect.DB2Dialect

DB2 AS/400 org.hibernate.dialect.DB2400Dialect

DB2 OS390 org.hibernate.dialect.DB2390Dialect

PostgreSQL org.hibernate.dialect.PostgreSQLDialect

MySQL org.hibernate.dialect.MySQLDialect

MySQL with InnoDB org.hibernate.dialect.MySQLInnoDBDialect

MySQL with MyISAM org.hibernate.dialect.MySQLMyISAMDialect

Oracle (any version) org.hibernate.dialect.OracleDialect

Oracle 9i/10g org.hibernate.dialect.Oracle9Dialect

Sybase org.hibernate.dialect.SybaseDialect

Sybase Anywhere org.hibernate.dialect.SybaseAnywhereDialect



Microsoft SQL Server org.hibernate.dialect.SQLServerDialect

SAP DB org.hibernate.dialect.SAPDBDialect

Informix org.hibernate.dialect.InformixDialect

HypersonicSQL org.hibernate.dialect.HSQLDialect

Ingres org.hibernate.dialect.IngresDialect

Progress org.hibernate.dialect.ProgressDialect

Mckoi SQL org.hibernate.dialect.MckoiDialect

Interbase org.hibernate.dialect.InterbaseDialect

Pointbase org.hibernate.dialect.PointbaseDialect

FrontBase org.hibernate.dialect.FrontbaseDialect

Firebird org.hibernate.dialect.FirebirdDialect
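As an illustration, a minimal hibernate.properties fragment for MySQL might look like this (the driver class, URL and credentials below are placeholders for your environment):

```properties
hibernate.dialect=org.hibernate.dialect.MySQLDialect
hibernate.connection.driver_class=com.mysql.jdbc.Driver
hibernate.connection.url=jdbc:mysql://localhost/mydb
hibernate.connection.username=user
hibernate.connection.password=secret
```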

SQL:

1) What are the various return types of execute(), executeUpdate(),
executeQuery()?

boolean execute()
Executes the SQL statement in this PreparedStatement object, which may be
any kind of SQL statement. It returns a boolean: true if the result is a
ResultSet, false otherwise.

ResultSet executeQuery()
Executes the SQL query in this PreparedStatement object and returns the
ResultSet object generated by the query.

int executeUpdate()
Executes the SQL statement in this PreparedStatement object, which must
be an SQL INSERT, UPDATE or DELETE statement, or an SQL statement that
returns nothing, such as a DDL statement. It returns an int value (the row
count, or 0 for statements that return nothing).

execute() is typically used for invoking functions or stored
procedures via CallableStatement.

executeUpdate() is used for operations such as insert,
update or delete via PreparedStatement or Statement.

executeQuery() is used for select operations via
PreparedStatement or Statement.

What are the adapter classes?

In addition to inner and anonymous classes, another useful type of class
is the adapter class. Here this applies in particular to classes used to reduce
the code for event listeners.
While the listener architecture has greatly improved the efficiency and
capabilities of event handling, there are some complications and annoyances
that come along with it. In particular, we find that some listener interfaces
hold up to six methods that must be implemented.
Remember that an interface holds only abstract methods, and implementing it
requires that ALL of its methods be implemented, i.e. overridden with real
methods. So for those methods that you are not using, you must still
implement the methods with empty code bodies. A typical example is an
anonymous MouseListener implementation where only one of the five methods is
actually needed.
We can avoid implementing all of these unneeded methods by taking
advantage of the adapter classes in the java.awt.event package. For eight of
the listener interfaces there is a corresponding adapter class. These
adapters simply implement all of the methods of the corresponding listener
interface with empty code bodies. Though the adapters are abstract classes,
the methods are real, so you only need to override the method(s) of interest.
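As a sketch of what the adapter version looks like (the class and field names here are our own hypothetical example, not from any particular codebase):

```java
import java.awt.Component;
import java.awt.event.MouseAdapter;
import java.awt.event.MouseEvent;
import javax.swing.JPanel;

public class AdapterDemo {
    static int clicks = 0;

    // MouseAdapter already implements all MouseListener methods with
    // empty bodies, so we override only the one we care about.
    static final MouseAdapter listener = new MouseAdapter() {
        @Override
        public void mousePressed(MouseEvent e) {
            clicks++;
        }
    };

    public static void main(String[] args) {
        Component panel = new JPanel();   // lightweight component, works headless
        panel.addMouseListener(listener);
        // Simulate a press by invoking the callback directly:
        listener.mousePressed(new MouseEvent(panel, MouseEvent.MOUSE_PRESSED,
                System.currentTimeMillis(), 0, 10, 10, 1, false));
        System.out.println("clicks = " + clicks); // prints: clicks = 1
    }
}
```

With the MouseListener interface we would also have had to write empty bodies for mouseClicked, mouseReleased, mouseEntered and mouseExited; MouseAdapter supplies those for us.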

MERGE
Purpose
Use the MERGE statement to select rows from one or more sources for
update or insertion into one or more tables. You can specify conditions to
determine whether to update or insert into the target tables.
This statement is a convenient way to combine multiple operations. It lets
you avoid multiple INSERT, UPDATE, and DELETE DML statements.
MERGE is a deterministic statement. That is, you cannot update the same
row of the target table multiple times in the same MERGE statement.
Prerequisites
You must have the INSERT and UPDATE object privileges on the target table
and the SELECT object privilege on the source table. To specify the DELETE
clause of the merge_update_clause, you must also have the DELETE object
privilege on the target table.

Example:

MERGE INTO bonuses D


USING (SELECT employee_id, salary, department_id FROM employees
WHERE department_id = 80) S
ON (D.employee_id = S.employee_id)
WHEN MATCHED THEN UPDATE SET D.bonus = D.bonus + S.salary*.01
DELETE WHERE (S.salary > 8000)
WHEN NOT MATCHED THEN INSERT (D.employee_id, D.bonus)
VALUES (S.employee_id, S.salary*0.1)
WHERE (S.salary <= 8000);
How to use MERGE in Oracle?

Note: This article was written for


educational purpose only. Please refer
to the related vendor documentation for
detail.



MERGE is used to select rows from one or more sources, and update or insert into a
table or view. You cannot update the same row of the target table multiple times in
the same MERGE statement.

Syntax:

MERGE INTO <table/view>


USING <table/view/subquery> ON (condition)

--update

WHEN MATCHED THEN


UPDATE SET column = (value/expr),..
WHERE (condition) | DELETE WHERE (condition)

--insert

WHEN NOT MATCHED THEN


INSERT (column,..) VALUES (value/expr,..)
WHERE (condition)

In how many ways can we create a new instance?

Three ways:

1) new ClassName();
2) Class.forName("ClassName").newInstance();
3) oldInstance.clone();
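A small sketch of all three (Widget is a hypothetical class; note that cloning requires implementing Cloneable, and the reflective route needs an accessible no-arg constructor):

```java
public class Widget implements Cloneable {
    int id = 42;

    public static void main(String[] args) throws Exception {
        // 1) the new operator
        Widget a = new Widget();

        // 2) reflection: Class.forName(...).newInstance() calls the
        //    public no-arg constructor
        Widget b = (Widget) Class.forName(Widget.class.getName()).newInstance();

        // 3) cloning an existing instance (requires Cloneable, otherwise
        //    super.clone() throws CloneNotSupportedException)
        Widget c = (Widget) a.clone();

        System.out.println(a.id + " " + b.id + " " + c.id); // prints: 42 42 42
    }

    @Override
    protected Object clone() throws CloneNotSupportedException {
        return super.clone();
    }
}
```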

 1.11 The Externalizable Interface


For Externalizable objects, only the identity of the class of the object is saved
by the container; the class must save and restore the contents. The
Externalizable interface is defined as follows:
package java.io;

public interface Externalizable extends Serializable {
    public void writeExternal(ObjectOutput out)
        throws IOException;

    public void readExternal(ObjectInput in)
        throws IOException, java.lang.ClassNotFoundException;
}
The class of an Externalizable object must do the following:
• Implement the java.io.Externalizable interface
• Implement a writeExternal method to save the state of the object
  (It must explicitly coordinate with its supertype to save its state.)
• Implement a readExternal method to read the data written by the
  writeExternal method from the stream and restore the state of the object
  (It must explicitly coordinate with the supertype to restore its state.)
• Have the writeExternal and readExternal methods be solely responsible
  for the format, if an externally defined format is written

Note: The writeExternal and readExternal methods are public and raise the
risk that a client may be able to write or read information in the object other
than by using its methods and fields. These methods must be used only
when the information held by the object is not sensitive or when exposing it
does not present a security risk.
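The contract above can be sketched with a hypothetical Point class that round-trips itself through a byte array:

```java
import java.io.*;

public class Point implements Externalizable {
    int x, y;

    public Point() {}                  // public no-arg constructor is required

    public Point(int x, int y) { this.x = x; this.y = y; }

    @Override
    public void writeExternal(ObjectOutput out) throws IOException {
        out.writeInt(x);               // the class itself writes its state
        out.writeInt(y);
    }

    @Override
    public void readExternal(ObjectInput in)
            throws IOException, ClassNotFoundException {
        x = in.readInt();              // and reads it back in the same order
        y = in.readInt();
    }

    public static void main(String[] args) throws Exception {
        ByteArrayOutputStream bytes = new ByteArrayOutputStream();
        try (ObjectOutputStream oos = new ObjectOutputStream(bytes)) {
            oos.writeObject(new Point(3, 4));
        }
        try (ObjectInputStream ois = new ObjectInputStream(
                new ByteArrayInputStream(bytes.toByteArray()))) {
            Point p = (Point) ois.readObject();
            System.out.println(p.x + "," + p.y);   // prints: 3,4
        }
    }
}
```

Deserialization first calls the public no-arg constructor and then readExternal, which is why that constructor must exist.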

Q: What is the Locale class?


A: The Locale class is used to tailor program output to the conventions
of a particular geographic, political, or cultural region.

Q: What is an enumeration?
A: An enumeration is an interface containing methods for accessing the
underlying data structure from which the enumeration is obtained.
It is a construct which collection classes return when you request a
collection of all the objects stored in the collection. It allows
sequential access to all the elements stored in the collection.

[ Received from Sandesh Sadhale]

Q 04: Explain Java class loaders? Explain dynamic class loading? LF


A 04: Class loaders are hierarchical. Classes are introduced into the JVM as
they are referenced by name in a class that is already running in the JVM.
So how is the very first class loaded? The very first class is loaded with
the help of the static main() method declared in your class. All subsequently
loaded classes are loaded by the classes which are already loaded and
running. A class loader creates a namespace. All JVMs include at least one
class loader that is embedded within the JVM, called the primordial (or
bootstrap) class loader. Now let's look at non-primordial class loaders: the
JVM has hooks in it to allow user-defined class loaders to be used in place of
the primordial class loader. The class loaders created by the JVM are the
bootstrap class loader, the extensions class loader and the system
(classpath) class loader.

Class loaders are hierarchical and use a delegation model when loading a
class: a class loader asks its parent to load the class before attempting to
load it itself. Once a class loader has loaded a class, the child class
loaders in the hierarchy will never reload that class, so uniqueness is
maintained. Classes loaded by a child class loader have visibility into
classes loaded by its parents up the hierarchy, but the reverse is not true.
Important: Two classes loaded by different class loaders are never the same
type, even if they have identical names and bytecode, which means a class is
uniquely identified in the context of its class loader. This applies to
singletons too: each class loader will have its own singleton. [Refer Q45 in
Java section for singleton design pattern]
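A quick way to see the hierarchy in action (a minimal sketch; core API classes are loaded by the bootstrap loader, which getClassLoader() reports as null):

```java
public class LoaderDemo {
    public static void main(String[] args) {
        // Core classes are loaded by the bootstrap (primordial) loader,
        // which getClassLoader() reports as null.
        System.out.println(String.class.getClassLoader());

        // Application classes are loaded by a non-primordial loader;
        // walk up its parent chain to see the delegation hierarchy.
        ClassLoader cl = LoaderDemo.class.getClassLoader();
        while (cl != null) {
            System.out.println(cl);  // each parent is consulted first on load
            cl = cl.getParent();
        }
    }
}
```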
How to debug JavaScript?
- A fully functional and robust debugging environment for JavaScript
programmers developing for Internet Explorer

I’ve finally found a good JScript/JavaScript debugging solution; it sounds
vanilla and mundane, but after researching for days, I found a guy on a
USENET group who mentioned his group is using the Microsoft Script Editor,
included as a stand-alone utility with Microsoft Office with FrontPage 2003
and FP_XP. This is a commercial-use release of Office for corporate
customers. This editor may also be included with other versions of Office,
as well as with any recent version of FrontPage.

The MS Script Editor is an amazing IDE for script development and has a
javascript debugger built into it that is closely tied to IE6 and allows for
seamless Javascript debugging of very sophisticated web-based applications.
Not to be confused with the lackluster Microsoft Javascript Debugger
(1997), the Microsoft Script Editor (2001) is a fully functioning
editor/debugger that is very robust and has never crashed for me
after many months of use. Best of all, it’s FREE if you already have
one of the many MS products that includes it as an optional feature.

EJB Transaction Attributes?


Answer
The Enterprise JavaBeans model supports six different transaction rules:
• TX_BEAN_MANAGED. The TX_BEAN_MANAGED setting indicates that
the enterprise bean manually manages its own transaction control. EJB
supports manual transaction demarcation using the Java Transaction
API. This is very tricky and should not be attempted without a really
good reason.

• TX_NOT_SUPPORTED. The TX_NOT_SUPPORTED setting indicates that


the enterprise bean cannot execute within the context of a transaction.
If a client (i.e., whatever called the method, either a remote client or
another enterprise bean) has a transaction when it calls the enterprise
bean, the container suspends the transaction for the duration of the
method call.

• TX_SUPPORTS. The TX_SUPPORTS setting indicates that the enterprise


bean can run with or without a transaction context. If a client has a
transaction when it calls the enterprise bean, the method will join the
client's transaction context. If the client does not have a transaction,
the method will run without a transaction.

• TX_REQUIRED. The TX_REQUIRED setting indicates that the enterprise


bean must execute within the context of a transaction. If a client has a
transaction when it calls the enterprise bean, the method will join the
client's transaction context. If the client does not have a transaction,
the container automatically starts a new transaction for the method.

• TX_REQUIRES_NEW. The TX_REQUIRES_NEW setting indicates that


the enterprise bean must execute within the context of a new
transaction. The container always starts a new transaction for the
method. If the client has a transaction when it calls the enterprise
bean, the container suspends the client's transaction for the duration
of the method call.

• TX_MANDATORY. The TX_MANDATORY setting indicates that the


enterprise bean must always execute within the context of the client's
transaction. If the client does not have a transaction when it calls the
enterprise bean, the container throws the TransactionRequired
exception and the request fails.
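In later EJB versions (1.1 and up), these rules are expressed in the deployment descriptor rather than as TX_* constants. A sketch of the equivalent ejb-jar.xml fragment (the bean name AccountBean is a placeholder):

```xml
<assembly-descriptor>
  <container-transaction>
    <method>
      <ejb-name>AccountBean</ejb-name> <!-- placeholder bean name -->
      <method-name>*</method-name>     <!-- applies to all methods -->
    </method>
    <!-- "Required" corresponds to TX_REQUIRED above; the other values are
         NotSupported, Supports, RequiresNew, Mandatory, and Never. -->
    <trans-attribute>Required</trans-attribute>
  </container-transaction>
</assembly-descriptor>
```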

What is a Service Locator Pattern?


Dealing with the JNDI API and InitialContexts is complicated because doing
this involves repeated usage of the InitialContext objects, lookup operations,
casting of objects (solved somewhat using Generics as mentioned further
on), and handling low-level exceptions. Also, JNDI lookups are often
duplicated in code because many classes (especially for web applications
accessing a database) need access to the same resource or service. Creating
all these InitialContexts and performing a lookup on an EJB home object,
DataSource or JMS Topic slows down your network and application
performance.
The ServiceLocator pattern creates a single point of control and provides a
caching ability to solve all of these issues. Since a Singleton can have only
one instance of itself, we leverage that to create a cache inside the
Singleton that checks whether the object the client wants is already in its
cache. If it is, the cached object is simply returned. If it is not, the
ServiceLocator looks it up for the client and then adds it to its cache for
the next request. When some other class comes along and asks for the same
object, the ServiceLocator simply reaches into its pocket and hands over the
object it already looked up.
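A minimal sketch of the pattern. In a real J2EE application the lookup would delegate to new InitialContext().lookup(name); here the lookup function is injected so the caching behavior stands on its own (the class and method names are illustrative):

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Function;

public class ServiceLocator {
    private static ServiceLocator instance;          // the Singleton
    private final Map<String, Object> cache = new HashMap<>();
    private final Function<String, Object> lookup;   // stands in for a JNDI lookup

    private ServiceLocator(Function<String, Object> lookup) {
        this.lookup = lookup;
    }

    // Global point of access; lazily creates the single instance
    public static synchronized ServiceLocator getInstance(Function<String, Object> lookup) {
        if (instance == null) {
            instance = new ServiceLocator(lookup);
        }
        return instance;
    }

    public Object getService(String jndiName) {
        // Check the cache first; only perform the expensive lookup on a miss
        return cache.computeIfAbsent(jndiName, lookup);
    }
}
```

With a real JNDI environment the injected function would wrap new InitialContext().lookup(name) and translate NamingException into an unchecked exception.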

What is a singleton?

A singleton is a pattern that ensures a class has only one instance, and
provides a global point of access to that class. It ensures that all objects
that use an instance of this class use the same instance. Why would that be
useful in a J2EE application? Think about all the JNDI look-ups a typical
enterprise application (or web application, if using Tomcat's DB connection
pooling) would perform in the course of its execution. Remember that JNDI
look-ups do not return a pointer to your object itself (that would be the
DataSource, JMS topic or EJBHome itself) but to the location of that asset.
This means that these look-ups can be cached and retrieved without doing a
network-intensive lookup for an asset each time a new class is invoked.
However, to do this correctly you would have to make sure that the classes
calling on your cache weren't creating a new cache object every time they
wanted to do a lookup. That would render your caching mechanism useless.
This is where the Singleton pattern comes in handy, and the way we apply it
is in the ServiceLocator pattern.
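A minimal sketch of the Singleton pattern itself (the class name ConnectionCache is illustrative):

```java
public class ConnectionCache {
    // The one and only instance, created eagerly when the class is loaded
    private static final ConnectionCache INSTANCE = new ConnectionCache();

    // Private constructor prevents any other class from instantiating it
    private ConnectionCache() { }

    // The global point of access
    public static ConnectionCache getInstance() {
        return INSTANCE;
    }
}
```

ConnectionCache.getInstance() always returns the same object, so a lookup cache stored inside it is shared by every caller in that class loader.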

Validation in Struts through the Struts Validator framework can act as
client-side validation (the framework can generate JavaScript that runs in
the browser), while calling the validate() method of the form is server-side
validation.

Service Locator V/s Sessoin Facade ?


Session Facade
It has Facade in its name, and a Facade is usually used to give a simple
entry point by providing a standardized interface. The same is true of the
Session Facade: you have a session bean that represents a high-level business
component and that interacts with and calls lower-level business components.

Imagine a client accessing 5 business objects with remote calls, which is not
efficient (network latency), versus accessing the Session Facade and letting
the facade do all the work through local access.

There are many uses; an important one is to reduce network traffic. If you
are calling many EJBs from your servlet, this is not advised, because it has
to make many network trips. So instead you call a stateless session bean and
this in turn calls the other EJBs; since they are in the same container there
are fewer network calls. You can even convert them to LOCAL EJBs, which
involve no network calls at all. This saves server bandwidth and is good for
a highly available system.
Service Locator
As J2EE components use JNDI to look up EJB interfaces, DataSources, JMS
components, connections, etc., instead of writing all the lookups in many
pieces of code across the project, you write a service locator that gives you
a centralized place to handle the lookups. Such a setup is easier to maintain
and to control.
Use a Service Locator object to abstract all JNDI usage and to hide the
complexities of initial context creation, EJB home object lookup, and EJB
object re-creation. Multiple clients can reuse the Service Locator object to
reduce code complexity, provide a single point of control, and improve
performance by providing a caching facility.

Difference between ClassNotFoundException and NoClassDefFoundError?
ClassNotFoundException: a checked exception thrown when a class is loaded
dynamically by name, e.g. via Class.forName() or ClassLoader.loadClass(),
and no .class file for that name can be found on the classpath.
NoClassDefFoundError: an error thrown when a class was present at compile
time (the code has a direct reference to it) but the JVM cannot find its
definition at runtime, for example because the .class file was removed from
the classpath after compilation.
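The ClassNotFoundException case is easy to reproduce (a minimal sketch; the class name is deliberately bogus):

```java
public class CnfDemo {
    public static void main(String[] args) {
        try {
            // Dynamic, by-name loading: throws the checked
            // ClassNotFoundException when no such class is on the classpath
            Class.forName("com.example.NoSuchClass"); // bogus name on purpose
        } catch (ClassNotFoundException e) {
            System.out.println("Caught: " + e);
        }
    }
}
```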

How to debug a stored procedure?

Put DBMS_OUTPUT.PUT_LINE statements in the procedure; they will display how
far the procedure has executed successfully, stage by stage.

In Java, any thread can be a daemon thread. Daemon threads are like service
providers for other threads or objects running in the same process as the
daemon thread. Daemon threads are used for background supporting tasks and
are only needed while normal threads are executing. If no normal threads are
running and the remaining threads are daemon threads, then the interpreter
exits.
setDaemon(true/false) – This method is used to specify that a thread is a
daemon thread.
public boolean isDaemon() – This method is used to determine whether the
thread is a daemon thread or not.
The following program demonstrates the Daemon Thread:
public class DaemonThread extends Thread {
    public void run() {
        System.out.println("Entering run method");
        try {
            System.out.println("In run Method: currentThread() is"
                    + Thread.currentThread());
            while (true) {
                try {
                    Thread.sleep(500);
                } catch (InterruptedException x) {
                }
                System.out.println("In run method: woke up again");
            }
        } finally {
            System.out.println("Leaving run Method");
        }
    }

    public static void main(String[] args) {
        System.out.println("Entering main Method");
        DaemonThread t = new DaemonThread();
        t.setDaemon(true);
        t.start();
        try {
            Thread.sleep(3000);
        } catch (InterruptedException x) {
        }
        System.out.println("Leaving main method");
    }
}
Threads that work in the background to support the runtime environment are called
daemon threads. For example, the clock handler thread, the idle thread, the
garbage collector thread, and the screen updater thread are all daemon threads.
The virtual machine exits whenever all non-daemon threads have completed.
public final void setDaemon(boolean isDaemon)
public final boolean isDaemon()
By default a thread you create is not a daemon thread. However you can use the
setDaemon(true) method to turn it into one.
Well, the actual term is daemon thread, but they are sometimes referred to as
demon threads, as they just keep on running and haunt you back.

A daemon thread is a thread that exists/runs in the background. It basically
is a low-priority thread.
The point of daemon status is that the JVM exits when the last non-daemon
thread finishes. In a GUI this typically happens when the last window is
closed (which ends the dispatcher thread). If you set a thread to daemon
status, that thread doesn't prevent the JVM from exiting even if it's still
running. So daemon status is used for background threads. Typically these
threads spend most of their time waiting for some condition to arise, which
they then quietly handle. They might be waiting for an incoming socket
connection, they might wake up every five minutes and check that a file
hasn't been updated, or they might be waiting for some task to be added to a
queue which they then execute.

Stored Procedure and Functions:

Stored routines (procedures and functions) are supported as of MySQL 5.0.
A stored procedure is a set of statements, which allows ease and flexibility
for a programmer because a stored procedure is easier to execute than
reissuing a number of individual SQL statements. A stored procedure can also
call another stored procedure. Stored procedures can be very useful where
multiple client applications are written in different languages or work on
different platforms but need to perform the same database operations.
Stored procedures can improve performance because less information needs to
be sent between the server and the client. They do increase the load on the
database server, because less work is done on the client side and more work
is done on the server side.
CREATE PROCEDURE Syntax
The general syntax of Creating a Stored Procedure is :
CREATE PROCEDURE proc_name ([proc_parameter[......]])
routine_body
proc_name : procedure name
proc_parameter : [ IN | OUT | INOUT ] param_name type
routine_body : Valid SQL procedure statement
The parameter list is given within the parentheses. Parameters can be
declared to use any valid data type, except that the COLLATE attribute cannot
be used. By default each parameter is an IN parameter; to specify another
kind of parameter, use the OUT or INOUT keyword before the parameter name.
An IN parameter is used to pass a value into a procedure. The procedure can
change the value, but the modification is not visible to the caller when the
procedure returns. An OUT parameter is used to pass a value from the
procedure back to the caller. An INOUT parameter is initialized by the
caller, can be modified by the procedure, and any change made by the
procedure is visible to the caller.
For each OUT or INOUT parameter you have to pass a user-defined variable,
because only then can you obtain its value when the procedure returns. If
you are invoking the procedure from another procedure, you can also pass a
routine parameter or local variable as an OUT or INOUT parameter.
The routine_body contains a valid SQL procedure statement. This can be a
simple statement like SELECT or INSERT, or a compound statement written
using BEGIN and END. Compound statements can contain declarations, loops,
and other control structures.
Below is an example of a simple stored procedure that uses an OUT parameter.
It uses the mysql client delimiter command to change the statement delimiter
from ; to // while the procedure is being defined. Example :
mysql> delimiter //
mysql> CREATE PROCEDURE Sproc(OUT p1 INT)
    -> SELECT COUNT(*) INTO p1 FROM Emp;
    -> //
Query OK, 0 rows affected (0.21 sec)

mysql> delimiter ;
mysql> CALL Sproc(@a);
Query OK, 0 rows affected (0.12 sec)
mysql> SELECT @a;
+------+
| @a   |
+------+
|    5 |
+------+
1 row in set (0.00 sec)
CREATE FUNCTION Syntax
The general syntax of Creating a Function is :
CREATE FUNCTION func_name ([func_parameter[,...]]) RETURNS type
routine_body
func_name : Function name
func_parameter : param_name type
type : Any valid MySQL datatype
routine_body : Valid SQL procedure statement
The RETURNS clause is mandatory for a FUNCTION; it is used to indicate the
return type of the function.
Below is a simple example of a function. The function takes a parameter,
performs an operation using an SQL function, and returns the result. In this
example there is no need to change the delimiter because the function body
contains no internal ; statement delimiters. Example :
mysql> CREATE FUNCTION func(str CHAR(20))
    -> RETURNS CHAR(50)
    -> RETURN CONCAT('WELCOME TO, ',str,'!');
Query OK, 0 rows affected (0.00 sec)
mysql> SELECT func('RoseIndia');
+------------------------+
| func('RoseIndia')      |
+------------------------+
| WELCOME TO, RoseIndia! |
+------------------------+
1 row in set (0.00 sec)

Views:
A VIEW is a virtual table: it acts like a table but actually contains no
data. It is based on the result set of a SELECT statement. A VIEW consists of
rows and columns from one or more tables. A VIEW is a query that's stored as
an object; it is essentially a way to select a subset of a table's columns.
Once you have defined a view, you can reference it like any other table in a
database. A VIEW also serves as a security mechanism: views ensure that users
are able to modify and retrieve only the data visible to them.
By using views you can ensure the security of data by restricting access to:
• Specific columns of the tables.
• Specific rows of the tables.
• Specific rows and columns of the tables.
• Subsets of another view or a subset of views and tables.
• Rows fetched by using joins.
• Statistical summaries of data in given tables.
CREATE VIEW Statement

CREATE VIEW Statement is used to create a new database view. The general
syntax of CREATE VIEW Statement is:
CREATE VIEW view_name [(column_list)] [WITH ENCRYPTION] AS
select_statement [WITH CHECK OPTION]
view_name specifies the name for the new view. column_list specifies the
names of the columns to be used in the view; it must have the same number of
columns as specified in select_statement. If the column_list option is
omitted, the view is created with the same columns as specified in
select_statement.
The WITH ENCRYPTION option encrypts the text of the view in the syscomments
table.
The AS clause specifies the action performed by the view: select_statement
is the SELECT statement that defines the view. The optional WITH CHECK
OPTION clause forces data modification statements (INSERT and UPDATE) to
satisfy the criteria given in the select_statement defining the view. It
also ensures that the data remains visible through the view after the
modifications are made permanent.
Some restrictions imposed on views are given below :
• A view can be created only in the current database.
• The view name must follow the rules for identifiers and must not be the
same as that of the base table.
• A view can be created only if there is SELECT permission on its base table.
• A SELECT INTO statement cannot be used in a view declaration statement.
• A trigger or an index cannot be defined on a view.
• The CREATE VIEW statement cannot be combined with other SQL statements
in a single batch.

JOINS:
Sometimes you require data from more than one table. Selecting data from
more than one table is known as joining. A join is a SQL query that is used
to select data from more than one table or view. When you specify multiple
tables or views in the FROM clause of a query, MySQL performs a join,
linking rows from the multiple tables together.
Types of Joins :
• INNER Joins
• OUTER Joins
• SELF Joins
We are going to describe joins with the help of the following two tables :
mysql> SELECT * FROM Client;
+------+---------------+----------+
| C_ID | Name          | City     |
+------+---------------+----------+
|    1 | A K Ltd       | Delhi    |
|    2 | V K Associate | Mumbai   |
|    3 | R K India     | Banglore |
|    4 | R S P Ltd     | Kolkata  |
+------+---------------+----------+
4 rows in set (0.00 sec)
mysql> SELECT * FROM Products;
+---------+-------------+------+
| Prod_ID | Prod_Detail | C_ID |
+---------+-------------+------+
|     111 | Monitor     |    1 |
|     112 | Processor   |    2 |
|     113 | Keyboard    |    2 |
|     114 | Mouse       |    3 |
|     115 | CPU         |    5 |
+---------+-------------+------+
5 rows in set (0.00 sec)
INNER Joins
The INNER join is considered the default join type. An inner join returns
the column values from one row of a table combined with the column values
from one row of another table that satisfy the search condition for the
join. The general syntax of an INNER join is :
SELECT <column_name1>, <column_name2> FROM <tbl_name> INNER
JOIN <tbl_name> ON <join_conditions>
The following example takes all the records from table Client and finds the
matching records in table Products. If no match is found, the record from
table Client is not included in the results; if multiple matches are found
in table Products for the given condition, one row is returned for each.
Example :
mysql> SELECT * FROM Client
    -> INNER JOIN Products
    -> ON Client.C_ID=Products.C_ID;
+------+---------------+----------+---------+-------------+------+
| C_ID | Name          | City     | Prod_ID | Prod_Detail | C_ID |
+------+---------------+----------+---------+-------------+------+
|    1 | A K Ltd       | Delhi    |     111 | Monitor     |    1 |
|    2 | V K Associate | Mumbai   |     112 | Processor   |    2 |
|    2 | V K Associate | Mumbai   |     113 | Keyboard    |    2 |
|    3 | R K India     | Banglore |     114 | Mouse       |    3 |
+------+---------------+----------+---------+-------------+------+
4 rows in set (0.04 sec)
OUTER Joins
Sometimes when we perform a join between two tables, we need all the records
from one table even when there is no corresponding record in the other
table. We can do this with the help of an OUTER join. In other words, an
OUTER join returns all the rows returned by an INNER join plus all the rows
from one table that did not match any row from the other table. Outer joins
are divided into two types: LEFT OUTER join and RIGHT OUTER join.
LEFT OUTER Join
A LEFT OUTER join returns all the rows returned by an INNER join, plus all
the rows from the first table that did not match any row from the second
table, with NULL values for each column from the second table. The general
syntax of a LEFT OUTER join is :
SELECT <column_name1>, <column_name2> FROM <tbl_name> LEFT
OUTER JOIN <tbl_name> ON <join_conditions>
In the following example we also select every row from the Client table that
doesn't have a match in the Products table. Example :
mysql> SELECT * FROM Client
    -> LEFT OUTER JOIN Products
    -> ON Client.C_ID=Products.C_ID;
+------+---------------+----------+---------+-------------+------+
| C_ID | Name          | City     | Prod_ID | Prod_Detail | C_ID |
+------+---------------+----------+---------+-------------+------+
|    1 | A K Ltd       | Delhi    |     111 | Monitor     |    1 |
|    2 | V K Associate | Mumbai   |     112 | Processor   |    2 |
|    2 | V K Associate | Mumbai   |     113 | Keyboard    |    2 |
|    3 | R K India     | Banglore |     114 | Mouse       |    3 |
|    4 | R S P Ltd     | Kolkata  |    NULL | NULL        | NULL |
+------+---------------+----------+---------+-------------+------+
5 rows in set (0.00 sec)
In the following example we use the ORDER BY clause with the LEFT OUTER
join.
mysql> SELECT * FROM Client
    -> LEFT OUTER JOIN Products
    -> ON Client.C_ID=Products.C_ID
    -> ORDER BY Client.City;
+------+---------------+----------+---------+-------------+------+
| C_ID | Name          | City     | Prod_ID | Prod_Detail | C_ID |
+------+---------------+----------+---------+-------------+------+
|    3 | R K India     | Banglore |     114 | Mouse       |    3 |
|    1 | A K Ltd       | Delhi    |     111 | Monitor     |    1 |
|    4 | R S P Ltd     | Kolkata  |    NULL | NULL        | NULL |
|    2 | V K Associate | Mumbai   |     113 | Keyboard    |    2 |
|    2 | V K Associate | Mumbai   |     112 | Processor   |    2 |
+------+---------------+----------+---------+-------------+------+
5 rows in set (0.08 sec)
In the result of the LEFT OUTER join, "R S P Ltd" is included even though it
has no matching rows in the Products table.
RIGHT OUTER Join
A RIGHT OUTER join is much the same as the LEFT OUTER join, but it returns
all the rows returned by an INNER join plus all the rows from the second
table that did not match any row from the first table, with NULL values for
each column from the first table. The general syntax of a RIGHT OUTER join
is :
SELECT <column_name1>, <column_name2> FROM <tbl_name> RIGHT
OUTER JOIN <tbl_name> ON <join_conditions>
In the following example we select every row from the Products table, even
those that don't have a match in the Client table. Example :
mysql> SELECT * FROM Client
    -> RIGHT OUTER JOIN Products
    -> ON Client.C_ID=Products.C_ID;
+------+---------------+----------+---------+-------------+------+
| C_ID | Name          | City     | Prod_ID | Prod_Detail | C_ID |
+------+---------------+----------+---------+-------------+------+
|    1 | A K Ltd       | Delhi    |     111 | Monitor     |    1 |
|    2 | V K Associate | Mumbai   |     112 | Processor   |    2 |
|    2 | V K Associate | Mumbai   |     113 | Keyboard    |    2 |
|    3 | R K India     | Banglore |     114 | Mouse       |    3 |
| NULL | NULL          | NULL     |     115 | CPU         |    5 |
+------+---------------+----------+---------+-------------+------+
5 rows in set (0.03 sec)
SELF Join
A SELF join means a table can be joined with itself. A SELF join is useful
when we want to compare values in a column to other values in the same
column. To create a SELF join we list the table twice in the FROM clause and
assign it a different alias each time; we then use these aliases to refer to
the table.
The following example provides the list of those clients that belong to the
same city as the client with C_ID=1.
mysql> SELECT b.C_ID,b.Name,b.City FROM Client a, Client b
    -> WHERE a.City=b.City AND a.C_ID=1;
+------+----------+-------+
| C_ID | Name     | City  |
+------+----------+-------+
|    1 | A K Ltd  | Delhi |
|    5 | A T Ltd  | Delhi |
|    6 | D T Info | Delhi |
+------+----------+-------+
3 rows in set (0.00 sec)
We can also write this SELF JOIN query as a subquery :
mysql> SELECT * FROM Client
    -> WHERE City=(
    -> SELECT City FROM Client
    -> WHERE C_ID=1);
+------+----------+-------+
| C_ID | Name     | City  |
+------+----------+-------+
|    1 | A K Ltd  | Delhi |
|    5 | A T Ltd  | Delhi |
|    6 | D T Info | Delhi |
+------+----------+-------+
3 rows in set (0.03 sec)

Cursor:

Cursors are used when a SQL SELECT statement is expected to return more than
one row. Cursors are supported inside procedures and functions. A cursor
must be declared, and its definition contains the query. The cursor must be
defined in the DECLARE section of the program, opened before processing, and
closed after processing.

Syntax to declare the cursor :


DECLARE <cursor_name> CURSOR FOR <select_statement>
Multiple cursors can be declared in procedures and functions, but each
cursor must have a unique name. When defining the cursor, the
select_statement cannot have an INTO clause.

Syntax to open the cursor :


OPEN <cursor_name>
By this statement we can open the previously declared cursor.

Syntax to store data in the cursor :


FETCH <cursor_name> INTO <var1>,<var2>…
The above statement fetches the next row, if one exists, using the
previously opened cursor.

Syntax to close the cursor :


CLOSE <cursor_name>

mysql> CREATE PROCEDURE DemoCurs1()
-> BEGIN
-> DECLARE d INT DEFAULT 0;
-> DECLARE id,sal,perk INT;
-> DECLARE name,city,desig VARCHAR(20);
-> DECLARE cur CURSOR FOR SELECT * FROM Emp;
-> DECLARE CONTINUE HANDLER FOR SQLSTATE '02000' SET d=1;
-> DECLARE CONTINUE HANDLER FOR SQLSTATE '23000' SET d=1;
-> OPEN cur;
-> lbl: LOOP
-> IF d=1 THEN
-> LEAVE lbl;
-> END IF;
-> IF NOT d=1 THEN
-> FETCH cur INTO id,name,city,desig,sal,perk;
-> INSERT INTO Emp2 VALUES(id,name,city,desig,sal,perk);
-> END IF;
-> END LOOP;
-> CLOSE cur;
-> END;
-> //

Triggers:

A trigger is a named database object which defines some action that the
database should take when some database-related event occurs. Triggers are
executed when you issue a data manipulation command like INSERT, DELETE, or
UPDATE on a table for which the trigger has been created. They are executed
automatically and are transparent to the user. To create a trigger, the user
must have the CREATE TRIGGER privilege. In this section we describe the
syntax to create and drop triggers and give some examples of how to use
them.
CREATE TRIGGER
The general syntax of CREATE TRIGGER is :
CREATE TRIGGER trigger_name trigger_time trigger_event ON
tbl_name FOR EACH ROW trigger_statement
The above statement creates a new trigger. A trigger can be associated only
with a table name, and that name must refer to a permanent table.
trigger_time is the trigger action time. It can be BEFORE or AFTER, and
defines whether the trigger fires before or after the statement that
activated it.
trigger_event specifies the kind of statement that activates the trigger.
It can be any of the DML statements: INSERT, UPDATE, DELETE.
We cannot have two triggers for a given table that have the same trigger
action time and event. For instance: we cannot have two BEFORE INSERT
triggers for the same table, but we can have a BEFORE INSERT and a BEFORE
UPDATE trigger for the same table.

trigger_statement is the statement that executes when the trigger fires; if
you want to execute multiple statements you have to use the BEGIN…END
compound statement.
We can refer to the columns of the table associated with the trigger by
using the OLD and NEW keywords. OLD.column_name refers to a column of an
existing row before it is deleted or updated, and NEW.column_name refers to
a column of a new row that is inserted, or of an existing row after it is
updated.

In an INSERT trigger we can use only NEW.column_name, because there is no
old row; in a DELETE trigger we can use only OLD.column_name, because there
is no new row. In an UPDATE trigger we can use both: OLD.column_name refers
to the columns of a row before it is updated and NEW.column_name refers to
the columns of the row after it is updated.
CREATE TRIGGER ins_trig BEFORE INSERT ON Emp
FOR EACH ROW
BEGIN
UPDATE Employee SET Salary=Salary-300 WHERE Perks>500;
END;

JDBC:
Java Database Connectivity (JDBC) is similar to Open Database Connectivity
(ODBC), which is used for accessing and managing databases, but the
difference is that JDBC is designed specifically for Java programs, whereas
ODBC is not dependent on any particular language.
The JDBC Driver Manager.

The JDBC DriverManager is a very important class that defines objects which
connect Java applications to a JDBC driver. The DriverManager is the
backbone of the JDBC architecture. It is a simple, small class that provides
a means of managing the different types of JDBC database drivers available
to an application. Its main responsibility is to load all the drivers found
in the system properly, and to select the most appropriate driver from the
previously loaded drivers when a new database connection is opened.
The JDBC-ODBC Bridge.

The JDBC-ODBC bridge, also known as the JDBC type 1 driver, is a database
driver that utilizes an ODBC driver to connect to the database. This driver
translates JDBC method calls into ODBC function calls. The bridge implements
JDBC for any database for which an ODBC driver is available. The bridge is
implemented as the sun.jdbc.odbc Java package and contains a native library
used to access ODBC.
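A minimal sketch of opening a connection through the DriverManager (the URL, credentials and table name are placeholders; since JDBC 4.0 a suitable driver on the classpath is located automatically from the URL):

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;

public class JdbcDemo {
    public static void main(String[] args) {
        // Placeholder connection settings -- substitute your own
        String url = "jdbc:mysql://localhost:3306/test";
        // try-with-resources closes the connection, statement and result set
        try (Connection con = DriverManager.getConnection(url, "user", "password");
             Statement st = con.createStatement();
             ResultSet rs = st.executeQuery("SELECT COUNT(*) FROM Emp")) {
            if (rs.next()) {
                System.out.println("Rows in Emp: " + rs.getInt(1));
            }
        } catch (SQLException e) {
            // DriverManager throws if no registered driver accepts the URL,
            // or if the connection or query fails
            System.err.println("Database error: " + e.getMessage());
        }
    }
}
```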

The Life Cycles of Enterprise Beans


An enterprise bean goes through various stages during its lifetime, or life cycle.
Each type of enterprise bean--session, entity, or message-driven--has a different
life cycle.
The descriptions that follow refer to methods that are explained along with the code
examples in the next two chapters. If you are new to enterprise beans, you should
skip this section and try out the code examples first.
The Life Cycle of a Stateful Session Bean
Figure 3-3 illustrates the stages that a session bean passes through during its
lifetime. The client initiates the life cycle by invoking the create method. The EJB
container instantiates the bean and then invokes the setSessionContext and
ejbCreate methods in the session bean. The bean is now ready to have its business
methods invoked.

Figure 3-3 Life Cycle of a Stateful Session Bean


While in the ready stage, the EJB container may decide to deactivate, or passivate,
the bean by moving it from memory to secondary storage. (Typically, the EJB
container uses a least-recently-used algorithm to select a bean for passivation.)
The EJB container invokes the bean's ejbPassivate method immediately before
passivating it. If a client invokes a business method on the bean while it is in the
passive stage, the EJB container activates the bean, moving it back to the ready
stage, and then calls the bean's ejbActivate method.
At the end of the life cycle, the client invokes the remove method and the EJB
container calls the bean's ejbRemove method. The bean's instance is ready for
garbage collection.
Your code controls the invocation of only two life-cycle methods--the create and
remove methods in the client. All other methods in Figure 3-3 are invoked by the
EJB container. The ejbCreate method, for example, is inside the bean class,
allowing you to perform certain operations right after the bean is instantiated. For
instance, you may wish to connect to a database in the ejbCreate method. See
Chapter 16 for more information.
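The sequence of container callbacks described above can be modelled in plain Java. The sketch below is only a simulation of the stateful session bean life cycle: the method names mirror the EJB callbacks, but a real bean implements javax.ejb.SessionBean and the container, not user code, drives these calls.

```java
import java.util.ArrayList;
import java.util.List;

// A toy model of the stateful session bean life cycle: create -> ready,
// passivate <-> activate, remove. The callbacks list records the order in
// which the "container" invokes the life-cycle methods.
public class StatefulBeanLifeCycle {
    enum State { DOES_NOT_EXIST, READY, PASSIVE }

    State state = State.DOES_NOT_EXIST;
    final List<String> callbacks = new ArrayList<>();

    void create() {                  // client calls create()
        callbacks.add("setSessionContext");
        callbacks.add("ejbCreate");
        state = State.READY;
    }

    void passivate() {               // container moves the bean to secondary storage
        callbacks.add("ejbPassivate");
        state = State.PASSIVE;
    }

    void businessMethod() {          // activates the bean first if it is passive
        if (state == State.PASSIVE) {
            callbacks.add("ejbActivate");
            state = State.READY;
        }
        callbacks.add("businessMethod");
    }

    void remove() {                  // client calls remove()
        callbacks.add("ejbRemove");
        state = State.DOES_NOT_EXIST;
    }

    public static void main(String[] args) {
        StatefulBeanLifeCycle bean = new StatefulBeanLifeCycle();
        bean.create();
        bean.businessMethod();
        bean.passivate();
        bean.businessMethod();   // triggers ejbActivate first
        bean.remove();
        System.out.println(bean.callbacks);
    }
}
```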
The Life Cycle of a Stateless Session Bean
Because a stateless session bean is never passivated, its life cycle has just two
stages: nonexistent and ready for the invocation of business methods. Figure 3-4
illustrates the stages of a stateless session bean.

Figure 3-4 Life Cycle of a Stateless Session Bean


The Life Cycle of an Entity Bean
Figure 3-5 shows the stages that an entity bean passes through during its lifetime.
After the EJB container creates the instance, it calls the setEntityContext method of
the entity bean class. The setEntityContext method passes the entity context to the
bean.
After instantiation, the entity bean moves to a pool of available instances. While in
the pooled stage, the instance is not associated with any particular EJB object
identity. All instances in the pool are identical. The EJB container assigns an identity
to an instance when moving it to the ready stage.
There are two paths from the pooled stage to the ready stage. On the first path,
the client invokes the create method, causing the EJB container to call the
ejbCreate and ejbPostCreate methods. On the second path, the EJB container
invokes the ejbActivate method. While in the ready stage, an entity bean's business
methods may be invoked.
There are also two paths from the ready stage to the pooled stage. First, a client
may invoke the remove method, which causes the EJB container to call the
ejbRemove method. Second, the EJB container may invoke the ejbPassivate
method.

Figure 3-5 Life Cycle of an Entity Bean


At the end of the life cycle, the EJB container removes the instance from the pool
and invokes the unsetEntityContext method.
In the pooled state, an instance is not associated with any particular EJB object
identity. With bean-managed persistence, when the EJB container moves an
instance from the pooled state to the ready state, it does not automatically set the
primary key. Therefore, the ejbCreate and ejbActivate methods must assign a value
to the primary key. If the primary key is incorrect, the ejbLoad and ejbStore
methods cannot synchronize the instance variables with the database. In the
section The SavingsAccountEJB Example, the ejbCreate method assigns the primary
key from one of the input parameters. The ejbActivate method sets the primary key
(id) as follows:
id = (String)context.getPrimaryKey();

In the pooled state, the values of the instance variables are not needed. You can
make these instance variables eligible for garbage collection by setting them to null
in the ejbPassivate method.
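The advice above can be sketched as follows. The class and field names here are illustrative, not taken from the SavingsAccountEJB example, and the context primary key is simulated with a plain field; a real entity bean implements javax.ejb.EntityBean and obtains its context from the container.

```java
// A minimal sketch: clear instance state in ejbPassivate so it can be
// garbage-collected while the instance sits in the pool, and restore the
// primary key in ejbActivate.
public class SavingsAccountSketch {
    private String id;         // primary key
    private String ownerName;  // business state loaded by ejbLoad

    // Simulated context value; the real call is context.getPrimaryKey().
    private String contextPrimaryKey;

    public void ejbActivate() {
        id = contextPrimaryKey;  // real code: id = (String) context.getPrimaryKey();
    }

    public void ejbPassivate() {
        // Null out instance variables so they are eligible for garbage
        // collection while the instance is in the pooled state.
        id = null;
        ownerName = null;
    }

    // helpers for the sketch
    void setContextPrimaryKey(String key) { contextPrimaryKey = key; }
    String getId() { return id; }
    String getOwnerName() { return ownerName; }
}
```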
The Life Cycle of a Message-Driven Bean
Figure 3-6 illustrates the stages in the life cycle of a message-driven bean.
The EJB container usually creates a pool of message-driven bean instances. For
each instance, the EJB container instantiates the bean and performs these tasks:
1. It calls the setMessageDrivenContext method to pass the context object to
the instance.
2. It calls the instance's ejbCreate method.
Figure 3-6 Life Cycle of a Message-Driven Bean
Like a stateless session bean, a message-driven bean is never passivated, and it
has only two states: nonexistent and ready to receive messages.
At the end of the life cycle, the container calls the ejbRemove method. The bean's
instance is then ready for garbage collection.

Comparable vs. Comparator Interface:

Programmers often confuse the Comparable and Comparator interfaces. The
Comparable interface has a compareTo method, which is normally used for natural
ordering, whereas the Comparator interface has a compare method that takes two
arguments. A Comparator can be used where you want to sort objects based on
more than one parameter. The following example makes this clearer.

package test;

/*
** Use Collections.sort to sort a List.
**
** When you need natural sort order, implement the Comparable interface.
** If you want an alternate sort order, or sorting on different properties,
** implement a Comparator for your class.
*/
import java.util.*;

public class Farmer implements Comparable<Farmer> {

    String name;
    int age;
    long income;

    public Farmer(String name, int age) {
        this.name = name;
        this.age = age;
    }

    public Farmer(String name, int age, long income) {
        this.name = name;
        this.age = age;
        this.income = income;
    }

    public String getName() {
        return name;
    }

    public int getAge() {
        return age;
    }

    public long getIncome() {
        return income;
    }

    public void setName(String name) {
        this.name = name;
    }

    public void setAge(int age) {
        this.age = age;
    }

    public void setIncome(long income) {
        this.income = income;
    }

    public String toString() {
        return name + " : " + age;
    }

    /*
    ** Implement the natural order for this class.
    */
    public int compareTo(Farmer o) {
        return getName().compareTo(o.getName());
    }

    static class AgeComparator implements Comparator<Farmer> {
        /*
        ** Compare by income when both incomes are set; otherwise by age.
        ** Integer.compare and Long.compare avoid the overflow that plain
        ** subtraction can cause.
        */
        public int compare(Farmer p1, Farmer p2) {
            if (p1.getIncome() == 0 && p2.getIncome() == 0)
                return Integer.compare(p1.getAge(), p2.getAge());
            else
                return Long.compare(p1.getIncome(), p2.getIncome());
        }
    }

    public static void main(String[] args) {
        List<Farmer> farmer = new ArrayList<>();
        farmer.add(new Farmer("Joe", 34));
        farmer.add(new Farmer("Ali", 13));
        farmer.add(new Farmer("Mark", 25));
        farmer.add(new Farmer("Dana", 66));

        Collections.sort(farmer);
        System.out.println("Sort in Natural order");
        System.out.println("\t" + farmer);

        Collections.sort(farmer, Collections.reverseOrder());
        System.out.println("Sort by reverse natural order");
        System.out.println("\t" + farmer);

        List<Farmer> farmerIncome = new ArrayList<>();
        farmerIncome.add(new Farmer("Joe", 34, 33));
        farmerIncome.add(new Farmer("Ali", 13, 3));
        farmerIncome.add(new Farmer("Mark", 25, 666));
        farmerIncome.add(new Farmer("Dana", 66, 2));

        Collections.sort(farmer, new AgeComparator());
        System.out.println("Sort using Age Comparator");
        System.out.println("\t" + farmer);

        Collections.sort(farmerIncome, new AgeComparator());
        System.out.println("Sort using Age Comparator But Income Wise");
        System.out.println("\t" + farmerIncome);
    }
}

Output

Sort in Natural order
[Ali : 13, Dana : 66, Joe : 34, Mark : 25]
Sort by reverse natural order
[Mark : 25, Joe : 34, Dana : 66, Ali : 13]
Sort using Age Comparator
[Ali : 13, Mark : 25, Joe : 34, Dana : 66]
Sort using Age Comparator But Income Wise
[Dana : 66, Ali : 13, Joe : 34, Mark : 25]

The Comparable interface is (in my opinion) best used to implement a natural
order for lists of elements of the class, for example the low-to-high order of
numbers represented by the Integer class. This ties in nicely with the order used
by the variants of TreeSet, TreeMap, Arrays.sort() and Collections.sort() where no
Comparator is specified.

The Comparator interface is then used to create alternative orders: anything not
provided by the class's Comparable implementation. It can also supply the most
obvious order for elements of a class that does not implement Comparable,
though writing a subclass that does may sometimes be the better choice.
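Since Java 8, alternate orders like the AgeComparator above no longer need a hand-written Comparator class: Comparator.comparing builds one from a key-extractor function, and reversed() replaces Collections.reverseOrder(). A small sketch (the Person class here is illustrative):

```java
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;

public class ComparatorFactories {
    static class Person {
        final String name;
        final int age;
        Person(String name, int age) { this.name = name; this.age = age; }
        public String toString() { return name + " : " + age; }
    }

    public static void main(String[] args) {
        List<Person> people = new ArrayList<>();
        people.add(new Person("Joe", 34));
        people.add(new Person("Ali", 13));
        people.add(new Person("Mark", 25));

        // Sort by age, low to high, with a comparator built from a key extractor.
        people.sort(Comparator.comparingInt(p -> p.age));
        System.out.println(people);

        // Sort by name, reversed: chaining replaces Collections.reverseOrder().
        people.sort(Comparator.comparing((Person p) -> p.name).reversed());
        System.out.println(people);
    }
}
```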

JVM, JRE, Java Compiler FAQs - 1

1)How can I write a program that takes command line input?
A: Java programs that take input from the command line declare a special static method called
main, which takes a String array as an argument and returns void. The example program below
loops through any arguments passed to the program on the command line and lists their values.
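The answer above refers to an example program; a program of that kind might look like the sketch below (the class name EchoArgs is my own). It loops through any arguments passed on the command line and lists their values:

```java
// Run as, for example:  java EchoArgs one two three
public class EchoArgs {
    // The formatting logic is factored into a helper so it is easy to test.
    public static String describe(String[] args) {
        StringBuilder sb = new StringBuilder();
        for (int i = 0; i < args.length; i++) {
            sb.append("args[").append(i).append("] = ").append(args[i]).append('\n');
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        System.out.print(describe(args));
    }
}
```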
2)What does public static void main(String[]) mean?
A: This is a special static method signature that is used to run Java programs from a command
line interface (CLI). There is nothing special about the method itself; it is a standard Java
method, but the Java interpreter is designed to call this method when a class reference is given
on the command line.
3)Why are command line arguments passed as a String?
A: Command line arguments are passed to the application's main method by the Java runtime
system before the application class or any supporting objects are instantiated. It would be much
more complex to define and construct arbitrary object types to pass to the main method and
primitive values alone are not versatile enough to provide the range of input data that strings can.
String arguments can be parsed for primitive values and can also be used for arbitrary text input,
file and URL references.
4)Why doesn't the main method throw an error with no arguments?
A: When you invoke the Java Virtual Machine on a class without any arguments, the class' main
method receives a String array of zero length. Thus, the method signature is fulfilled. Provided
the main method does not make any reference to elements in the array, or checks the array length
before doing so, no exception will occur.
5)Why do we only use the main method to start a program?
A: The entry point method main is used to provide a standard convention for starting Java
programs. The choice of the method name is somewhat arbitrary, but it is partly designed to
avoid clashes with the Thread start() and Runnable run() methods, for example.
6)Can the main method be overloaded?
A: Yes, any Java method can be overloaded, provided there is no final method with the same
signature already. The Java interpreter will only invoke the standard entry point signature for the
main method, with a string array argument, but your application can call its own main method as
required.
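The overloading described in answer 6 can be sketched as follows. Only the String[] version is an entry point; the JVM never calls the other overloads, but the application can:

```java
public class OverloadedMain {
    public static void main(String[] args) {
        // The application calls its own overload; the JVM only ever calls this one.
        System.out.println(main(21));
    }

    // An overloaded main: an ordinary static method as far as Java is concerned.
    public static int main(int x) {
        return x * 2;
    }
}
```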
7)Can the main method be declared final?
A: Yes, the static void main(String[]) method can be declared final.
8)I get an error if I remove the static modifier from main!
A: The static void main(String[]) method is a basic convention of the Java programming
language that provides an entry point into the runtime system. The main method must be
declared static because no objects exist when you first invoke the Java Virtual Machine (JVM),
so there are no references to instance methods. The JVM creates the initial runtime environment
in which this static method can be called; if you remove the static modifier, the launcher fails
with a NoSuchMethodError.

9)How can the static main method use instance variables?
A: For very simple programs it is possible to write a main method that only uses static variables
and methods. For more complex systems, the main method is used to create an instance of itself,
or another primary class, as the basis of the application. The primary application object reference
uses instance methods to create and interact with other objects, do the work and return when the
application terminates.
public class SimpleClass {

    public void doSomething() {
        // Instance method statements
    }

    public static void main(final String[] args) {
        SimpleClass instance = new SimpleClass();
        instance.doSomething();
    }
}

10)Can the main method be called from another class?
A: Yes, the main method can be called from a separate class. First you must prepare the string
array of arguments to pass to the method, then call the method through a static reference to the
host class, MaxFactors in the example below.
String[] arguments = new String[] {"123"};

MaxFactors.main(arguments);
