Note: This tutorial covers the JavaBeans functionality available in the Beans
module of the NetBeans IDE. Because this module has been removed from
NetBeans IDE 6.0 (see the NetBeans project page for more information), it is
recommended to use version 5.5.1 to obtain the full range of JavaBeans
features.
To open the BeanInfo dialog box, expand the appropriate class hierarchy to
the Bean Patterns node. Right-click the Bean Patterns node and choose
BeanInfo Editor from the pop-up menu. All elements of the selected class
that match the bean-naming conventions are displayed on the left of the
BeanInfo Editor dialog box, as shown in the following figure:
Select one of the following nodes to view and edit its properties at the right
of the dialog box:
• BeanInfo
• Bean
• Properties
• Methods
• Event Sources
Special symbols (green and red) appear next to each subnode to indicate
whether an element will be included in or excluded from the BeanInfo class.
If the Get From Introspection option is not selected, the node's subnodes are
available for inclusion in the BeanInfo class. To include all subnodes, right-
click a node and choose Include All. You can also include each element
individually by selecting its subnode and setting the Include in BeanInfo
property. If the Get From Introspection option is selected, setting the
properties of subnodes has no effect on the generated BeanInfo code.
The following attributes are available for the nodes of each bean, property,
event source, and method:
• Name - The name of the selected element as it appears in code.
• Preferred - An attribute specifying that this property appears
in the Inspector window under the Properties node.
• Expert - An attribute specifying that this property appears in
the Inspector window under the Other Properties node.
• Hidden - An attribute to mark an element for tool use only.
• Display Name Code - The display name of the property.
• Short Description Code - A short description of the property.
• Include in BeanInfo - An attribute to include the selected
element in the BeanInfo class.
• Bound - An attribute to make the bean property bound.
• Constrained - An attribute to make the bean property
constrained.
• Mode - An attribute to set the property's mode and generate
getter and setter methods.
• Property Editor Class - An attribute to specify a custom class
to act as a property editor for the property.
For Event Source nodes, the following Expert properties are available:
• Unicast (read-only)
• In Default Event Set
Introspection Sample
The following example represents code to perform introspection:
import java.beans.BeanInfo;
import java.beans.Introspector;
import java.beans.IntrospectionException;
import java.beans.PropertyDescriptor;
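The lines above give only the imports. A minimal, self-contained sketch of what introspection looks like might be the following; the SampleBean class and its title property are illustrative assumptions, not part of the original tutorial:

```java
import java.beans.BeanInfo;
import java.beans.IntrospectionException;
import java.beans.Introspector;
import java.beans.PropertyDescriptor;

public class IntrospectionDemo {
    // A trivial bean with one property, invented for this demonstration.
    public static class SampleBean {
        private String title;
        public String getTitle() { return title; }
        public void setTitle(String title) { this.title = title; }
    }

    public static void main(String[] args) throws IntrospectionException {
        // Stop at Object.class so inherited properties (like "class") are skipped.
        BeanInfo info = Introspector.getBeanInfo(SampleBean.class, Object.class);
        for (PropertyDescriptor pd : info.getPropertyDescriptors()) {
            System.out.println(pd.getName() + " -> " + pd.getPropertyType().getName());
        }
    }
}
```

Running this prints one line per discovered property; the BeanInfo Editor in the IDE drives the same Introspector machinery from a dialog instead of code.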
RequestProcessor :
Introduction
I have seen a lot of projects where the developers implemented a proprietary
MVC framework, not because they wanted to do something fundamentally
different from Struts, but because they were not aware of how to extend
Struts. You can get total control by developing your own MVC framework, but
it also means you have to commit a lot of resources to it, something that
may not be possible in projects with tight schedules.
Struts is not only a very powerful framework, but also very extensible. You
can extend Struts in three ways.
1. PlugIn: Create your own PlugIn class if you want to execute some
business logic at application startup or shutdown.
2. RequestProcessor: Create your own RequestProcessor if you want to
execute some business logic at a particular point during the request-
processing phase. For example, you might extend RequestProcessor to
check that the user is logged in and has one of the roles required to
execute a particular action before executing every request.
3. ActionServlet: You can extend the ActionServlet class if you want to
execute your business logic at either application startup or shutdown,
or during request processing. But you should use it only in cases
where neither PlugIn nor RequestProcessor is able to fulfill your
requirement.
In this article, we will use a sample Struts application to demonstrate how to
extend Struts using each of these three approaches. Downloadable sample
code for each is available below in the Resources section at the end of this
article. Two of the most successful examples of Struts extensions are the
Struts Validation framework and the Tiles framework.
I assume that you are already familiar with the Struts framework and know
how to create simple applications using it. Please see the Resources section if
you want to know more about Struts.
PlugIn
According to the Struts documentation "A plugin is a configuration wrapper
for a module-specific resource or service that needs to be notified about
application startup and shutdown events." What this means is that you can
create a class implementing the PlugIn interface to do something at
application startup or shutdown.
Say I am creating a web application where I am using Hibernate as the
persistence mechanism, and I want to initialize Hibernate as soon as the
application starts up, so that by the time my web application receives the
first request, Hibernate is already configured and ready to use. We also want
to close down Hibernate when the application is shutting down. We can
implement this requirement with a Hibernate PlugIn by following two simple
steps.
1. Create a class implementing the PlugIn interface, like this:

   public class HibernatePlugIn implements PlugIn {
       private String configFile;
       // This method will be called at application shutdown time
       public void destroy() {
           System.out.println("Entering HibernatePlugIn.destroy()");
           // Put Hibernate cleanup code here
           System.out.println("Exiting HibernatePlugIn.destroy()");
       }
       // This method will be called at application startup time
       public void init(ActionServlet actionServlet, ModuleConfig config)
               throws ServletException {
           System.out.println("Entering HibernatePlugIn.init()");
           System.out.println("Value of init parameter " +
               getConfigFile());
           System.out.println("Exiting HibernatePlugIn.init()");
       }
       public String getConfigFile() {
           return configFile;
       }
       public void setConfigFile(String string) {
           configFile = string;
       }
   }
The class implementing PlugIn interface must implement two
methods: init() and destroy(). init() is called when the application
starts up, and destroy() is called at shutdown. Struts allows you to
pass init parameters to your PlugIn class. For passing parameters, you
have to create JavaBean-type setter methods in your PlugIn class for
every parameter. In our HibernatePlugIn class, I wanted to pass the
name of the configFile instead of hard-coding it in the application.
2. Inform Struts about the new PlugIn by adding these lines to struts-
config.xml:

   <struts-config>
     ...
     <!-- Message Resources -->
     <message-resources parameter=
         "sample1.resources.ApplicationResources"/>

     <!-- Declare your plugins -->
     <plug-in className="com.sample.util.HibernatePlugIn">
       <set-property property="configFile"
                     value="/hibernate.cfg.xml"/>
     </plug-in>
   </struts-config>
The className attribute is the fully qualified name of the class implementing
the PlugIn interface. Add a <set-property> element for every initialization
parameter you want to pass to your PlugIn class. In our example, I
wanted to pass the name of the config file, so I added the <set-property>
element with the config file path as its value.
Both the Tiles and Validator frameworks use PlugIns for initialization by
reading configuration files. Two more things which you can do in your PlugIn
class are:
• If your application depends on some configuration files, then you can
check their availability in the PlugIn class and throw a ServletException
if the configuration file is not available. This will result in ActionServlet
becoming unavailable.
• The PlugIn interface's init() method is your last chance if you want to
change something in ModuleConfig, which is a collection of static
configuration information that describes a Struts-based module. Struts
will freeze ModuleConfig once all PlugIns are processed.
How a Request is Processed
ActionServlet is the only servlet in Struts framework, and is responsible for
handling all of the requests. Whenever it receives a request, it first tries to
find a sub-application for the current request. Once a sub-application is
found, it creates a RequestProcessor object for that sub-application and calls
its process() method by passing it HttpServletRequest and
HttpServletResponse objects.
The RequestProcessor.process() is where most of the request processing
takes place. The process() method is implemented using the Template
Method design pattern, in which there is a separate method for performing
each step of request processing, and all of those methods are called in
sequence from the process() method. For example, there are separate
methods for finding the ActionForm class associated with the current request,
and checking if the current user has one of the required roles to execute
action mapping. This gives us tremendous flexibility. The RequestProcessor
class in the Struts distribution provides a default implementation for each of
the request-processing steps. That means you can override only the methods
that interest you, and use default implementations for rest of the methods.
For example, by default Struts calls request.isUserInRole() to find out if the
user has one of the roles required to execute the current ActionMapping, but
if you want to query a database for this, then all you have to do is
override the processRoles() method and return true or false, based on
whether the user has the required role or not.
First we will see how the process() method is implemented by default, and
then I will explain what each method in the default RequestProcessor class
does, so that you can decide what parts of request processing you want to
change.
<controller>
<set-property property="noCache" value="true"/>
</controller>
17. processPreprocess(): This is a general-purpose pre-processing
hook that can be overridden by subclasses. Its implementation in
RequestProcessor does nothing and always returns true. Returning
false from this method will abort request processing.
18. processMapping(): This will use path information to get an
ActionMapping object. The ActionMapping object represents the
<action> element in your struts-config.xml file:

    <action path="/newcontact" type="com.sample.NewContactAction"
            name="newContactForm" scope="request">
        <forward name="success" path="/successPage.do"/>
        <forward name="failure" path="/failurePage.do"/>
    </action>
The ActionMapping element contains information like the name of the
Action class and ActionForm used in processing this request. It also
has information about ActionForwards configured for the current
ActionMapping.
For example, you could override a step to check for the userName attribute
of the session and, if it is not found, redirect the user to the login page.
If that is not the case, you should create a Struts sub-application for
handling requests for image-generating Actions and set image/gif as
the contentType for it.
The Tiles framework uses its own RequestProcessor for decorating output
generated by Struts.
ActionServlet
If you look into the web.xml file of your Struts web application, it looks like
this:
<web-app>
  <servlet>
    <servlet-name>action</servlet-name>
    <servlet-class>org.apache.struts.action.ActionServlet</servlet-class>
    <!-- All your init-params go here -->
  </servlet>
  <servlet-mapping>
    <servlet-name>action</servlet-name>
    <url-pattern>*.do</url-pattern>
  </servlet-mapping>
</web-app>
That means ActionServlet is responsible for handling all of your requests to
Struts. You can create a subclass of the ActionServlet class if you want to do
something at application startup or shutdown or on every request, but you
should try creating a PlugIn or RequestProcessor before extending the
ActionServlet class. Before Struts 1.1, the Tiles framework was based on
extending the ActionServlet class to decorate a generated response. But from
1.1 on, it has used the TilesRequestProcessor class.
Conclusion
Deciding to develop your own MVC framework is a very big decision--you
should think about the time and resources it will take to develop and
maintain that code. Struts is a very powerful and stable framework and you
can change it to accommodate most of your business requirements.
On the other hand, the decision to extend Struts should not be taken lightly.
If you put some low-performance code in your RequestProcessor class, it will
execute on every request and can reduce the performance of your whole
application. And there will be situations where it will be better for you to
create your own MVC framework than to extend Struts.
Trail Lessons
This trail covers common uses of reflection for accessing and manipulating
classes, fields, methods, and constructors. Each lesson contains code
examples, tips, and troubleshooting information.
Classes
This lesson shows the various ways to obtain a Class object
and use it to examine properties of a class, including its
declaration and contents.
Members
This lesson describes how to use the Reflection APIs to find the
fields, methods, and constructors of a class. Examples are
provided for setting and getting field values, invoking
methods, and creating new instances of objects using specific
constructors.
Arrays and Enumerated Types
This lesson introduces two special types of classes: arrays,
which are generated at runtime, and enum types, which define
unique named object instances. Sample code shows how to
retrieve the component type for an array and how to set and
get fields with array or enum types.
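As a small taste of what these lessons cover, the sketch below obtains a Class object, examines its name, and invokes a method reflectively; the choice of String and toUpperCase is arbitrary:

```java
import java.lang.reflect.Method;

public class ReflectDemo {
    public static void main(String[] args) throws Exception {
        // Obtain a Class object and examine one of its properties.
        Class<?> c = String.class;
        System.out.println("class name: " + c.getName());

        // Find a public no-argument method and invoke it reflectively.
        Method m = c.getMethod("toUpperCase");
        Object result = m.invoke("hello");
        System.out.println("result: " + result);
    }
}
```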
You are right. I read the same thing about EJB 3.0.
As per my understanding, Hibernate is an O-R (object-relational) mapping
tool.
EJB 3.0 will embed Hibernate's O-R mapping features.
If you have used WSAD, you will quickly get a clue about O-R mapping.
Currently an entity bean (EJB 2.0) can be mapped only to a single database.
Using the Hibernate technique, it can map to multiple databases; it will
generate the XML mapping file and the POJO.
Development will be faster compared to the current EJB architecture, with
many more features.
Entity beans are also used for an object-oriented view of the data, but they
need a lot of configuration and coding to make that possible, and even then
the performance is poor. In EJB an eager-loading approach is taken for
loading, which can cause serious performance issues. The most important
concepts in Java are polymorphism and inheritance, and with entity beans we
cannot get the benefit of these concepts. To avoid these problems a new
approach was taken, called Hibernate: using POJOs (plain old Java objects)
and .hbm mapping files, we get all the advantages of object orientation.
93. EJB Drill 3
Posted on March 22, 2007 by sharat
Stateless session beans are a good choice if your application does not need
to maintain state for a particular client between business method calls.
WebLogic Server is multi-threaded, servicing multiple clients simultaneously.
With stateless session beans, the EJB container is free to use any available,
pooled bean instance to service a client request, rather than [...]
At present, two major API specifications define how XML parsers work: SAX
and DOM. The DOM specification defines a tree-based approach to navigating
an XML document. In other words, a DOM parser processes XML data and
creates an object-oriented hierarchical representation of the document that
you can navigate at run-time.
The SAX specification defines an event-based approach whereby parsers scan
through XML data, calling handler functions whenever certain parts of the
document (e.g., text nodes or processing instructions) are found.
How do the tree-based and event-based APIs differ? The tree-based W3C
DOM parser creates an internal tree based on the hierarchical structure of the
XML data. You can navigate and manipulate this tree from your software, and
it stays in memory until you release it. DOM uses functions that return parent
and child nodes, giving you full access to the XML data and providing the
ability to interrogate and manipulate these nodes. DOM manipulation is
straightforward and the API does not take long to understand, particularly if
you have some JavaScript DOM experience.
In SAX's event-based system, the parser doesn't create any internal
representation of the document. Instead, the parser calls handler functions
when certain events (defined by the SAX specification) take place. These
events include the start and end of the document, finding a text node, finding
child elements, and hitting a malformed element.
SAX development is more challenging, because the API requires development
of callback functions that handle the events. The design itself also can
sometimes be less intuitive and modular. Using a SAX parser may require
you to store information in your own internal document representation if you
need to rescan or analyze the information—SAX provides no container for the
document like the DOM tree structure.
Is having two completely different ways to parse XML data a problem? Not
really; the two parsers take very different approaches to processing the
information. The W3C DOM specification provides a very rich and intuitive
structure for housing the XML data, but it can be quite resource-intensive,
given that the entire XML document is typically stored in memory. You can
manipulate the DOM at run-time and stream the updated data as XML, or
transform it to your own format if you require.
The strength of the SAX specification is that it can scan and parse gigabytes
worth of XML documents without hitting resource limits, because it does not
try to create the DOM representation in memory. Instead, it raises events
that you can handle as you see fit. Because of this design, the SAX
implementation is generally faster and requires fewer resources. On the
other hand, SAX code is frequently complex, and the lack of a document
representation leaves you with the challenge of manipulating, serializing, and
traversing the XML document.
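To make the event-based style concrete, here is a small sketch of SAX parsing with the JDK's built-in parser; the tiny contacts document and the handler's behavior are invented for illustration:

```java
import java.io.ByteArrayInputStream;
import javax.xml.parsers.SAXParser;
import javax.xml.parsers.SAXParserFactory;
import org.xml.sax.Attributes;
import org.xml.sax.helpers.DefaultHandler;

public class SaxDemo {
    public static void main(String[] args) throws Exception {
        String xml = "<contacts><contact name=\"Ann\"/><contact name=\"Bob\"/></contacts>";
        SAXParser parser = SAXParserFactory.newInstance().newSAXParser();
        // The handler's callbacks fire as the parser scans the document;
        // no in-memory tree is ever built.
        parser.parse(new ByteArrayInputStream(xml.getBytes("UTF-8")),
                new DefaultHandler() {
                    @Override
                    public void startElement(String uri, String localName,
                                             String qName, Attributes attrs) {
                        if ("contact".equals(qName)) {
                            System.out.println("Found contact: " + attrs.getValue("name"));
                        }
                    }
                });
    }
}
```

Note how any state you want to keep across events (a count, a partial model of the document) has to live in your own fields, which is exactly the extra bookkeeping the text describes.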
Basic I/O:
This lesson covers the Java platform classes used for basic I/O. It focuses
primarily on I/O Streams, a powerful concept that greatly simplifies I/O
operations. The lesson also looks at serialization, which lets a program write
whole objects out to streams and read them back again. Then the lesson
looks at some file system operations, including random access files. Finally, it
touches briefly on the advanced features of the New I/O API. Most of the
classes covered are in the java.io package.
I/O Streams
• Byte Streams handle I/O of raw binary data.
• Character Streams handle I/O of character data, automatically
handling translation to and from the local character set.
• Buffered Streams optimize input and output by reducing the number of
calls to the native API.
• Scanning and Formatting allows a program to read and write
formatted text.
• I/O from the Command Line describes the Standard Streams and the
Console object.
• Data Streams handle binary I/O of primitive data type and String
values.
• Object Streams handle binary I/O of objects.
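The buffered character streams in the list above can be sketched as follows; the temp-file name and contents are arbitrary:

```java
import java.io.BufferedReader;
import java.io.BufferedWriter;
import java.io.File;
import java.io.FileReader;
import java.io.FileWriter;
import java.io.IOException;

public class BufferedStreamDemo {
    public static void main(String[] args) throws IOException {
        File tmp = File.createTempFile("demo", ".txt");
        tmp.deleteOnExit();

        // Buffered writer: characters are accumulated and flushed in chunks,
        // reducing calls to the native API.
        BufferedWriter out = new BufferedWriter(new FileWriter(tmp));
        out.write("hello streams");
        out.newLine();
        out.close();

        // Buffered reader: readLine() is only available on the buffered wrapper.
        BufferedReader in = new BufferedReader(new FileReader(tmp));
        System.out.println(in.readLine());
        in.close();
    }
}
```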
File I/O
• File Objects help you to write platform-independent code that
examines and manipulates files.
• Random Access Files handle non-sequential file access.
An Overview of the New IO
This section talks about the advanced I/O packages added to version 1.4 of
the Java Platform.
Summary
A summary of the key points covered in this trail.
Isolation Levels :
Transactions not only ensure the full completion (or rollback) of the
statements that they enclose but also isolate the data modified by the
statements. The isolation level describes the degree to which the data being
updated is visible to other transactions.
Suppose that a transaction in one program updates a customer's phone
number, but before the transaction commits another program reads the
same phone number. Will the second program read the updated and
uncommitted phone number or will it read the old one? The answer depends
on the isolation level of the transaction. If the transaction allows other
programs to read uncommitted data, performance may improve because the
other programs don't have to wait until the transaction ends. But there's a
trade-off--if the transaction rolls back, another program might read the
wrong data.
You cannot modify the isolation level of entity beans with container-managed
persistence. These beans use the default isolation level of the DBMS, which is
usually READ_COMMITTED.
For entity beans with bean-managed persistence and for all session beans,
you can set the isolation level programmatically with the API provided by the
underlying DBMS. A DBMS, for example, might allow you to permit
uncommitted reads by invoking the setTransactionIsolation method:
Connection con;
...
con.setTransactionIsolation(Connection.TRANSACTION_READ_UNCOMMITTED);
Do not change the isolation level in the middle of a transaction. Usually, such
a change causes the DBMS software to issue an implicit commit. Because the
isolation levels offered by DBMS vendors may vary, you should check the
DBMS documentation for more information. Isolation levels are not
standardized for the J2EE platform.
Introduction
In this article, I want to tell you about Transaction Isolation Level in SQL
Server 6.5 and SQL Server 7.0, what kinds of Transaction Isolation Level
exist, and how you can set the appropriate Transaction Isolation Level.
READ UNCOMMITTED
READ COMMITTED
REPEATABLE READ
SERIALIZABLE
SQL Server 6.5 supports all of these transaction isolation levels, but has
only three distinct behaviors, because in SQL Server 6.5 REPEATABLE
READ and SERIALIZABLE are synonyms. This is because SQL Server 6.5
supports only page locking (row-level locking is not fully supported as it is
in SQL Server 7.0), so if the REPEATABLE READ isolation level is set,
another transaction cannot insert a row before the first transaction
finishes, because the page will be locked. So there are no phantoms in SQL
Server 6.5 if the REPEATABLE READ isolation level is set.
SQL Server 7.0 supports all of these transaction isolation levels and
distinguishes REPEATABLE READ from SERIALIZABLE.
Let me describe each isolation level.
read uncommitted
When this level is used, SQL Server does not issue shared locks while reading
data, so you can read an uncommitted transaction that might be rolled back
later. This isolation level is also called dirty read. It is the lowest isolation
level and ensures only that physically corrupt data will not be read.
read committed
This is the default isolation level in SQL Server. When it is used, SQL Server
uses shared locks while reading data. It ensures that physically corrupt data
will not be read and that data another application has changed but not yet
committed will never be read, but it does not ensure that the data will not
be changed before the end of the transaction.
repeatable read
When this level is used, dirty reads and nonrepeatable reads cannot occur.
Locks will be placed on all data that is used in a query, and other
transactions cannot update that data.
This is the definition of nonrepeatable read from SQL Server Books Online:
nonrepeatable read
When a transaction reads the same row more than one time, and between the
two (or more) reads, a separate transaction modifies that row. Because the
row was modified between reads within the same transaction, each read
produces different values, which introduces inconsistency.
serializable
This is the most restrictive isolation level. When it is used, phantoms cannot
occur. It prevents other users from updating or inserting rows into the data
set until the transaction is complete.
phantom
Phantom behavior occurs when a transaction attempts to select a row that
does not exist and a second transaction inserts the row before the first
transaction finishes. If the row is inserted, the row appears as a phantom
to the first transaction, inconsistently appearing and disappearing.
You can set the appropriate isolation level for an entire SQL Server session
by using the SET TRANSACTION ISOLATION LEVEL statement. The syntax is:

SET TRANSACTION ISOLATION LEVEL
    { READ COMMITTED | READ UNCOMMITTED | REPEATABLE READ | SERIALIZABLE }
I'm not a DB expert, but my DBA told me that for our app we need to set the
isolation level to read uncommitted. We're currently using Oracle 10g
Express Edition (XE).
Is this a limitation of Oracle XE? What would cause this, and how can I fix it?
The real question is, why are you going to re-check authentication
every time they come in? Is there anything more you need to check for it
to be a "valid" session? Remember that a given browser's requests will
always go to the same session; so once the user is validated, you can set
a flag in the session indicating that they have been validated and not worry
about re-authentication.
In certain cases, clients cannot accept cookies. Therefore, you cannot use
cookies as a session tracking mechanism. Applications can use URL rewriting
as a substitute.
TreeSet: This is a sorted, unique collection. For strings and integers the
natural ordering is used; when we add our own objects, the compareTo
method is called to decide both ordering and equality, and based on this
TreeSet decides the sorting and uniqueness of the objects.
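A minimal sketch of this behavior; the Person class is an invented example whose compareTo drives both sorting and uniqueness:

```java
import java.util.TreeSet;

public class TreeSetDemo {
    // Ordering and uniqueness both come from compareTo, not equals/hashCode.
    static class Person implements Comparable<Person> {
        final String name;
        Person(String name) { this.name = name; }
        public int compareTo(Person other) { return name.compareTo(other.name); }
        public String toString() { return name; }
    }

    public static void main(String[] args) {
        TreeSet<Person> set = new TreeSet<Person>();
        set.add(new Person("Carol"));
        set.add(new Person("Alice"));
        set.add(new Person("Alice"));   // duplicate by compareTo, ignored
        set.add(new Person("Bob"));
        System.out.println(set);        // prints sorted, unique elements
    }
}
```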
Struts:
ActionServlet simply extends the HttpServlet class and overrides the doGet
and doPost methods. The controller servlet's job is to delegate each incoming
request for processing; it is declared in web.xml like this:
<servlet>
<servlet-name>action</servlet-name>
<servlet-class>org.apache.struts.action.ActionServlet
</servlet-class>
<init-param>
<param-name>config</param-name>
<param-value>/WEB-INF/config/myconfig.xml</param-
value>
</init-param>
<load-on-startup>1</load-on-startup>
</servlet>
<servlet-mapping>
<servlet-name>action</servlet-name>
<url-pattern>*.do</url-pattern>
</servlet-mapping>
1. As we can see here, the URL mapping *.do will direct all matching
requests to the ActionServlet.
2. The init parameter tells the ActionServlet the name of the XML file to
read on initialization.
The ActionServlet delegates the request to the RequestProcessor's process
method. The request processor class reads the appropriate XML block from
the Struts config file based on the requested URL. It uses the action mapping
block to find the request handler (Action) class for the request and the
ActionForm to be populated. The request processor puts the form into the
scope defined in the action mapping block, which might be request or
session.
Action class: This is the handler class for a request. The request processor
instantiates the Action class and passes it the request, response, and
mapping as input parameters.
ActionForward: This class encapsulates the next view; the return type of the
execute method of the Action class is ActionForward. A forward can be
defined locally in an action mapping or as a global forward.
errors.add("firstName", new
ActionError("error.firstName.null"));
Threads:

class MyThreadRunnable implements Runnable {
    String str = null;
    public MyThreadRunnable(String s) {
        str = s;
        Thread t = new Thread(this);
        t.start();
    }
    public void run() {
        System.out.println(str);
    }
}
To call wait, a thread must hold the monitor of the object it waits on:

synchronized (x) {
    x.wait();
}

Here x is the object on which we are synchronizing; only one thread at a time
can enter the synchronized block to call the wait method.
To wake a waiting thread we need to call the notify or notifyAll method, and
while calling either of these two methods the calling thread must hold the
lock on the object, otherwise it will throw the runtime exception
IllegalMonitorStateException.
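The wait/notify handshake above can be sketched as a complete, runnable example; the lock object and the ready flag are illustrative:

```java
public class WaitNotifyDemo {
    static final Object lock = new Object();
    static boolean ready = false;

    public static void main(String[] args) throws InterruptedException {
        Thread worker = new Thread(new Runnable() {
            public void run() {
                synchronized (lock) {
                    ready = true;
                    lock.notify();   // must hold the lock, or IllegalMonitorStateException
                }
            }
        });
        synchronized (lock) {
            worker.start();
            while (!ready) {
                lock.wait();         // releases the lock while waiting
            }
        }
        System.out.println("ready = " + ready);
    }
}
```

Waiting inside a while loop guards against spurious wakeups, which is the idiomatic way to use wait().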
----------------------------------------------
join():
The join method is invoked on a thread object to wait for that thread to
complete. For example, if the main thread has one child thread and we call
join on the child, we are telling the main thread to wait until the child thread
completes. We can also pass a time in milliseconds when calling the join
method; after that timeout elapses, the main thread continues regardless of
whether the child thread has completed.
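A minimal runnable sketch of join(): the main thread blocks until the child finishes, so the output order is deterministic.

```java
public class JoinDemo {
    public static void main(String[] args) throws InterruptedException {
        Thread child = new Thread(new Runnable() {
            public void run() {
                System.out.println("child running");
            }
        });
        child.start();
        child.join();            // main waits here until child completes
        System.out.println("child finished, main continues");
    }
}
```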
Thread Priorities
The JVM generally selects the runnable thread with the highest priority to
run. All Java threads have a priority in the range 1-10. The top priority is
10, the lowest priority is 1, and the normal (default) priority is 5.
Thread.MIN_PRIORITY - minimum thread priority (1)
Thread.MAX_PRIORITY - maximum thread priority (10)
Thread.NORM_PRIORITY - normal thread priority (5)
Whenever a new Java thread is created, it has the same priority as the
thread that created it.
Thread priority can be changed by the setPriority() method.
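A small sketch showing priority inheritance and setPriority(); it assumes the creating thread runs at the default priority:

```java
public class PriorityDemo {
    public static void main(String[] args) {
        Thread t = new Thread(new Runnable() { public void run() {} });
        // A new thread inherits the priority of the thread that created it.
        System.out.println("inherited priority: " + t.getPriority());
        t.setPriority(Thread.MAX_PRIORITY);
        System.out.println("after setPriority: " + t.getPriority());
    }
}
```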
ANT Basics:
Tutorial: Hello World with Ant
This document provides a step-by-step tutorial for starting Java programming
with Ant. It does not contain deeper knowledge about Java or Ant. The goal
of this tutorial is to show you how to do the easiest steps in Ant.
Content
• Preparing the project
• Enhance the build file
• Using external libraries
• Resources
Hello World
Creating a jar-file is not very difficult, but creating a startable jar-file needs
more steps: create a manifest file containing the start class, create the
target directory, and archive the files.

echo Main-Class: oata.HelloWorld>myManifest
md build\jar
jar cfm build\jar\HelloWorld.jar myManifest -C build\classes .
java -jar build\jar\HelloWorld.jar

Note: Do not put blanks around the >-sign in the echo Main-Class
instruction, because that would corrupt the manifest!
<target name="clean">
<delete dir="build"/>
</target>
<target name="compile">
<mkdir dir="build/classes"/>
<javac srcdir="src" destdir="build/classes"/>
</target>
<target name="jar">
<mkdir dir="build/jar"/>
<jar destfile="build/jar/HelloWorld.jar" basedir="build/classes">
<manifest>
<attribute name="Main-Class" value="oata.HelloWorld"/>
</manifest>
</jar>
</target>
<target name="run">
<java jar="build/jar/HelloWorld.jar" fork="true"/>
</target>
</project>
Now you can compile, package and run the application via
ant compile
ant jar
ant run
Or shorter with
ant compile jar run
Having a look at the buildfile, we can see some steps that parallel the
java-only commands:
<target name="clean">
<delete dir="${build.dir}"/>
</target>
<target name="compile">
<mkdir dir="${classes.dir}"/>
<javac srcdir="${src.dir}" destdir="${classes.dir}"/>
</target>
</project>
Now it's easier: just run ant and you will get
Buildfile: build.xml
clean:
compile:
[mkdir] Created dir: C:\...\build\classes
[javac] Compiling 1 source file to C:\...\build\classes
jar:
[mkdir] Created dir: C:\...\build\jar
[jar] Building jar: C:\...\build\jar\HelloWorld.jar
run:
[java] Hello World
main:
BUILD SUCCESSFUL
Using external libraries
Somebody told us not to use System.out statements. For log statements we
should use a logging API that is customizable to a high degree (including
switching logging off during normal production (= not development)
execution). We use Log4J for that, because:
• it is not part of the JDK (1.4+), and we want to show how to use
external libs
• it can run under JDK 1.2 (like Ant)
• it's highly configurable
• it's from Apache ;-)
We store our external libraries in a new directory, lib. Log4J can
be downloaded [1] from the Logging project's homepage. Create the lib
directory and extract log4j-1.2.9.jar into that lib directory. After that we
have to modify our Java source to use that library, and our buildfile so that
this library can be accessed during compilation and run.
Working with Log4J is documented inside its manual. Here we use
the MyApp example from the Short Manual [2]. First we have to modify the
Java source to use the logging framework:
package oata;
import org.apache.log4j.Logger;
import org.apache.log4j.BasicConfigurator;
...
<target name="compile">
<mkdir dir="${classes.dir}"/>
<javac srcdir="${src.dir}" destdir="${classes.dir}"
classpathref="classpath"/>
</target>
...
</project>
In this example we do not start our application via its Main-Class manifest
attribute, because we could not provide both a jar name and a classpath that
way. So add our class to the already defined path and start as usual.
Running ant would give (after the usual compile stuff):
[java] 0 [main] INFO oata.HelloWorld - Hello World
What's that?
• [java] the Ant task running at the moment
• 0 the number of milliseconds elapsed since Log4J was initialized
• [main] the running thread from our application
• INFO the log level of that statement
• oata.HelloWorld the source of that statement
• - a separator
• Hello World the message
For another layout ... have a look inside Log4J's documentation about using
other PatternLayouts.
Configuration files
Why did we use Log4J? "It's highly configurable"? No, everything is hard coded!
But that is not the fault of Log4J, it's ours. We had
coded BasicConfigurator.configure(); which implies a simple, but hard coded
configuration. More comfortable would be using a property file. In the Java
source delete the BasicConfigurator line from the main() method (and the
related import statement). Log4J will then search for a configuration as
described in its manual. Then create a new file src/log4j.properties. That's
the default name for Log4J's configuration, and using that name makes
life easier, because not only the framework knows what is inside, you do too!
log4j.rootLogger=DEBUG, stdout
log4j.appender.stdout=org.apache.log4j.ConsoleAppender
log4j.appender.stdout.layout=org.apache.log4j.PatternLayout
log4j.appender.stdout.layout.ConversionPattern=%m%n
This configuration creates an output channel ("Appender") to the console, named
stdout, which prints the message (%m) followed by a line feed (%n), the
same as the earlier System.out.println() :-) Okay, but we haven't
finished yet. We should deliver the configuration file, too. So we change the
buildfile:
...
<target name="compile">
<mkdir dir="${classes.dir}"/>
<javac srcdir="${src.dir}" destdir="${classes.dir}"
classpathref="classpath"/>
<copy todir="${classes.dir}">
<fileset dir="${src.dir}" excludes="**/*.java"/>
</copy>
</target>
...
This copies all resources (as long as they don't have the suffix ".java") to the
build directory, so we can start the application from that directory, and
these files will be included into the jar.
}
Because we don't have real business logic to test, this test class is very small:
it just shows how to start. For further information see the JUnit documentation
[3] and the manual of the junit task. Now we add a junit instruction to our
buildfile:
...
<batchtest fork="yes">
<fileset dir="${src.dir}" includes="*Test.java"/>
</batchtest>
</junit>
</target>
...
We reuse the path to our own jar file as defined in the run target by giving it an
ID. printsummary=yes lets us see more detailed information than just a
"FAILED" or "PASSED" message. How many tests failed? Any errors?
printsummary lets us know. The classpath is set up to find our classes. To
run tests, batchtest is used here, so you can easily add more test
classes in the future just by naming them *Test.java. This is a common
naming scheme.
After running ant junit you'll get:
...
junit:
[junit] Running HelloWorldTest
[junit] Tests run: 2, Failures: 1, Errors: 0, Time elapsed: 0,01 sec
[junit] Test HelloWorldTest FAILED
BUILD SUCCESSFUL
...
We can also produce a report, something that you (and others) can read
after closing the shell. There are two steps: 1. let <junit> log the
information and 2. convert it to something readable (browsable).
...
<property name="report.dir" value="${build.dir}/junitreport"/>
...
<target name="junit" depends="jar">
<mkdir dir="${report.dir}"/>
<junit printsummary="yes">
<classpath>
<path refid="classpath"/>
<path refid="application"/>
</classpath>
<formatter type="xml"/>
<batchtest fork="yes" todir="${report.dir}">
<fileset dir="${src.dir}" includes="*Test.java"/>
</batchtest>
</junit>
</target>
<target name="junitreport">
<junitreport todir="${report.dir}">
<fileset dir="${report.dir}" includes="TEST-*.xml"/>
<report todir="${report.dir}"/>
</junitreport>
</target>
Because we would produce a lot of files, and these files would be written to
the current directory by default, we define a report directory, create it before
running junit, and redirect the logging to it. The log format is XML
so junitreport can parse it. In a second target, junitreport creates a
browsable HTML report from all generated XML log files in the report directory.
Now you can open ${report.dir}\index.html and see the result (it looks
something like JavaDoc).
Personally I use two different targets for junit and junitreport. Generating the
HTML report needs some time, and you don't need the HTML report just for
testing, e.g. if you are fixing an error or an integration server is doing a job.
<?xml version="1.0"?>
<!-- Build file for our first application -->
<project name="Ant test project" default="build"
basedir=".">
<target name="build" >
<javac srcdir="src" destdir="build/src"
debug="true"
includes="**/*.java"
/>
</target>
</project>
The first line of the build.xml file is the XML declaration. The next
line is a comment. The third line is the project tag. Each buildfile contains
one project tag, and all the instructions are written inside the project tag.
The project tag:
<project name="Ant test project" default="build" basedir=".">
requires three attributes namely name, default and basedir.
Here is the description of the attributes:
Attribute Description
name Represents the name of the project.
default Name of the default target to use when no target is
supplied.
basedir Name of the base directory from which all path calculations
are done.
All the attributes are required.
One project may contain one or more targets. In this example there is only
one target.
<target name="build" >
<javac srcdir="src" destdir="build/src" debug="true"
includes="**/*.java"
/>
</target>
This target uses the javac task to compile the Java files.
Here is the code of our test1.java file which is to be compiled by the Ant
utility.
class test1 {
    public static void main(String args[]) {
        System.out.println("This is example 1");
    }
}
Hibernate Properties:
configuration property.
eg. classname.of.Batcher
hibernate.jdbc.use_scrollable_resultset - Enables use of JDBC2 scrollable
resultsets by Hibernate. This property is only necessary when using
user-supplied JDBC connections; Hibernate uses connection metadata otherwise.
eg. true|false

hibernate.connection.provider_class - The classname of a custom
ConnectionProvider which provides JDBC connections to Hibernate.
eg. classname.of.ConnectionProvider

hibernate.connection.isolation - Set the JDBC transaction isolation level.
Check java.sql.Connection for meaningful values, but note that most databases
do not support all isolation levels.
eg. 1, 2, 4, 8

hibernate.connection.autocommit - Enables autocommit for JDBC pooled
connections (not recommended).
eg. true|false
hibernate.cache.use_query_cache - Enable the query cache; individual queries
still have to be set cachable.
eg. true|false

hibernate.cache.use_second_level_cache - May be used to completely disable
the second level cache, which is enabled by default for classes which specify
a <cache> mapping.
eg. true|false

hibernate.cache.query_cache_factory - The classname of a custom QueryCache
interface, defaults to the built-in StandardQueryCache.
eg. classname.of.QueryCache

hibernate.cache.region_prefix - A prefix to use for second-level cache
region names.
eg. prefix

hibernate.cache.use_structured_entries - Forces Hibernate to store data in
the second-level cache in a more human-friendly format.
eg. true|false
Table 3.6. Hibernate Transaction Properties
hibernate.transaction.factory_class - The classname of a TransactionFactory
to use with the Hibernate Transaction API (defaults to
JDBCTransactionFactory).
eg. classname.of.TransactionFactory

jta.UserTransaction - A JNDI name used by JTATransactionFactory to obtain
the JTA UserTransaction from the application server.
eg. jndi/composite/name

hibernate.transaction.manager_lookup_class - The classname of a
TransactionManagerLookup, required when JVM-level caching is enabled or when
using the hilo generator in a JTA environment.
eg. classname.of.TransactionManagerLookup

hibernate.transaction.flush_before_completion - If enabled, the session will
be automatically flushed during the before completion phase of the
transaction. Built-in and automatic session context management is preferred,
see Section 2.5, "Contextual Sessions".
eg. true|false

hibernate.query.factory_class - Chooses the HQL parser implementation.
eg. org.hibernate.hql.ast.ASTQueryTranslatorFactory or
org.hibernate.hql.classic.ClassicQueryTranslatorFactory

hibernate.query.substitutions - Mapping from tokens in Hibernate queries to
SQL tokens (tokens might be function or literal names, for example).
eg. hqlLiteral=SQL_LITERAL, hqlFunction=SQLFUNC

hibernate.hbm2ddl.auto - Automatically validate or export schema DDL to the
database when the SessionFactory is created. With create-drop, the database
schema will be dropped when the SessionFactory is closed explicitly.
eg. validate | update | create | create-drop
RDBMS Dialect
DB2 org.hibernate.dialect.DB2Dialect
PostgreSQL org.hibernate.dialect.PostgreSQLDialect
MySQL org.hibernate.dialect.MySQLDialect
Sybase org.hibernate.dialect.SybaseDialect
SAP DB org.hibernate.dialect.SAPDBDialect
Informix org.hibernate.dialect.InformixDialect
HypersonicSQL org.hibernate.dialect.HSQLDialect
Ingres org.hibernate.dialect.IngresDialect
Progress org.hibernate.dialect.ProgressDialect
Interbase org.hibernate.dialect.InterbaseDialect
Pointbase org.hibernate.dialect.PointbaseDialect
FrontBase org.hibernate.dialect.FrontbaseDialect
Firebird org.hibernate.dialect.FirebirdDialect
SQL:
boolean execute()
Executes the SQL statement in this PreparedStatement object, which may be
any kind of SQL statement. It returns a boolean.
ResultSet executeQuery()
Executes the SQL query in this PreparedStatement object and returns the
ResultSet object generated by the query.
int executeUpdate()
Executes the SQL statement in this PreparedStatement object, which must
be an SQL INSERT, UPDATE or DELETE statement, or an SQL statement that
returns nothing, such as a DDL statement. It returns an int value.
execute() is also used for invoking SQL functions or stored
procedures via a CallableStatement.
In addition to the inner and anonymous classes, another useful type of class
is the adapter class. Here this applies in particular to classes used to reduce
the code for event listeners.
While the listener architecture has greatly improved the efficiency and
capabilities of event handling, there are some complications and annoyances
that come along with it. In particular, we find that the listener interfaces hold
up to 6 methods that must be implemented.
Remember that an interface holds only abstract methods and its
implementation requires that ALL of its methods be implemented, i.e.
overridden with real methods. So for those methods that you are not using,
you must still implement the methods with empty code bodies.
We illustrate this with the following example where we implement an
anonymous MouseListener class but only need to use one of the methods.
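A minimal sketch of such an anonymous MouseListener (the ListenerDemo class and its clicks counter are our own illustrative names; in a real GUI the listener would be registered on a component with addMouseListener, whereas here the callback is invoked directly so the sketch runs headless):

```java
import java.awt.event.MouseEvent;
import java.awt.event.MouseListener;

public class ListenerDemo {
    static int clicks = 0;

    // All five MouseListener methods must be implemented, even though
    // only mouseClicked does anything useful here.
    static final MouseListener listener = new MouseListener() {
        public void mouseClicked(MouseEvent e)  { clicks++; }
        public void mousePressed(MouseEvent e)  { }   // unused, empty body
        public void mouseReleased(MouseEvent e) { }   // unused, empty body
        public void mouseEntered(MouseEvent e)  { }   // unused, empty body
        public void mouseExited(MouseEvent e)   { }   // unused, empty body
    };

    public static void main(String[] args) {
        // In a real GUI: someComponent.addMouseListener(listener);
        // Here we fire the callback by hand.
        listener.mouseClicked(null);
        System.out.println("clicks = " + clicks);   // prints clicks = 1
    }
}
```

Note how four of the five methods exist only to satisfy the interface.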
We can avoid implementing all of these unneeded methods by taking
advantage of the adapter classes in the java.awt.event package. For eight of
the listener interfaces there is a corresponding adapter class. These
adapters simply implement all of the methods of the listener interface
with empty code bodies. Though the adapters are abstract classes,
the methods are real, so you only need to override the method(s) of interest.
Here is the same example as the one above, except that it uses a
MouseAdapter:
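A hedged sketch of the MouseAdapter variant (AdapterDemo and its clicks counter are our own illustrative names; the callback is again invoked directly rather than from a real mouse event):

```java
import java.awt.event.MouseAdapter;
import java.awt.event.MouseEvent;

public class AdapterDemo {
    static int clicks = 0;

    // MouseAdapter already supplies empty bodies for every
    // MouseListener method, so only mouseClicked is overridden.
    static final MouseAdapter listener = new MouseAdapter() {
        @Override
        public void mouseClicked(MouseEvent e) { clicks++; }
    };

    public static void main(String[] args) {
        listener.mouseClicked(null);   // stands in for a real mouse event
        System.out.println("clicks = " + clicks);   // prints clicks = 1
    }
}
```

The four empty implementations from the previous version simply disappear.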
MERGE
Purpose
Use the MERGE statement to select rows from one or more sources for
update or insertion into one or more tables. You can specify conditions to
determine whether to update or insert into the target tables.
This statement is a convenient way to combine multiple operations. It lets
you avoid multiple INSERT, UPDATE, and DELETE DML statements.
MERGE is a deterministic statement. That is, you cannot update the same
row of the target table multiple times in the same MERGE statement.
Prerequisites
You must have the INSERT and UPDATE object privileges on the target table
and the SELECT object privilege on the source table. To specify the DELETE
clause of the merge_update_clause, you must also have the DELETE object
privilege on the target table.
merge::=
(syntax diagram omitted; the statement combines a merge_update_clause for
updates and a merge_insert_clause for inserts)
1) new ClassName();
2) Class.forName("ClassName").newInstance();
3) oldInstance.clone();
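These three creation routes can be sketched as runnable Java (the Creation class and its tag field are hypothetical names of our own; note that clone() requires the class to implement Cloneable):

```java
public class Creation implements Cloneable {
    String tag = "fresh";

    @Override
    public Creation clone() {
        try {
            return (Creation) super.clone();   // 3) copy an existing instance
        } catch (CloneNotSupportedException e) {
            throw new AssertionError(e);       // cannot happen: we are Cloneable
        }
    }

    public static void main(String[] args) {
        try {
            Creation a = new Creation();       // 1) plain constructor call
            Creation b = (Creation) Class.forName(Creation.class.getName())
                    .getDeclaredConstructor().newInstance();   // 2) reflection
            Creation c = a.clone();            // 3) clone of a
            System.out.println((a != b) + " " + (a != c));   // prints true true
        } catch (ReflectiveOperationException e) {
            e.printStackTrace();
        }
    }
}
```

Route 2 is what frameworks use when the class name is only known at runtime.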
(It must explicitly coordinate with its supertype to save its state.)
Note: The writeExternal and readExternal methods are public and raise the
risk that a client may be able to write or read information in the object other
than by using its methods and fields. These methods should be used only
when the information held by the object is not sensitive or when exposing it
does not present a security risk.
Q: What is an enumeration?
A: An enumeration is an interface containing methods for accessing the
underlying data structure from which the enumeration is obtained.
It is a construct which collection classes return when you request a
collection of all the objects stored in the collection. It allows
sequential access to all the elements stored in the collection.
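For example, Vector hands out such an enumeration via its elements() method; the EnumDemo class below is our own illustrative wrapper:

```java
import java.util.Enumeration;
import java.util.Vector;

public class EnumDemo {
    public static String joinAll(Vector<String> v) {
        StringBuilder sb = new StringBuilder();
        // Enumeration exposes just two methods:
        // hasMoreElements() and nextElement().
        Enumeration<String> e = v.elements();
        while (e.hasMoreElements()) {
            sb.append(e.nextElement());
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        Vector<String> v = new Vector<>();
        v.add("a"); v.add("b"); v.add("c");
        System.out.println(joinAll(v));   // prints abc
    }
}
```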
Class loaders are hierarchical and use a delegation model when loading a
class. Class loaders request their parent to load the class first before
attempting to load it themselves. When a class loader loads a class, the child
class loaders in the hierarchy will never reload the class again. Hence
uniqueness is maintained. Classes loaded by a child class loader have
visibility into classes loaded by its parents up the hierarchy but the reverse
is not true as explained in the above diagram.
Important: Two objects loaded by different class loaders are never equal
even if they carry the same values, which means a class is uniquely identified
in the context of the associated class loader. This applies to singletons too,
where each class loader will have its own singleton. [Refer Q45 in Java
section for singleton design pattern]
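The delegation chain can be observed directly: core classes come from the bootstrap loader (which getClassLoader() reports as null), and walking getParent() from the system class loader shows the hierarchy. LoaderDemo is our own illustrative name; the exact loaders printed depend on the JVM version:

```java
public class LoaderDemo {
    public static void main(String[] args) {
        // Core classes such as String come from the bootstrap loader,
        // which getClassLoader() reports as null.
        System.out.println(String.class.getClassLoader());   // prints null

        // Walk the delegation chain upward from the system class loader.
        ClassLoader cl = ClassLoader.getSystemClassLoader();
        while (cl != null) {
            System.out.println(cl);
            cl = cl.getParent();
        }
    }
}
```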
How to debug javascript?
- A fully functional and robust debugging environment for Javascript
programmers developing for Internet Explorer
The MS Script Editor is an amazing IDE for script development and has a
javascript debugger built into it that is closely tied to IE6 and allows for
seamless Javascript debugging of very sophisticated web-based applications.
Not to be confused with the lackluster Microsoft Javascript Debugger
(1997), the Microsoft Script Editor (2001) is a fully functioning
editor/debugger that is very robust and has never crashed for me
after many months of use. Best of all, it’s FREE if you already have
one of the many MS products that includes it as an optional feature.
What is a singleton?
A singleton is a pattern that ensures a class has only one instance, and
provides a global point of access to that class. It ensures that all objects
that use an instance of this class use the same instance. Why would that be
useful in a J2EE application? Think about all the JNDI look-ups a typical
enterprise application (or web application, if using Tomcat's DB Connection
Pooling) would perform in the course of its execution. Remember that JNDI
look-ups are not pointers to your object itself (that would be the datasource,
JMS topic or ejbHome itself) but to the location of that asset. This means
that these look-ups can be cached and retrieved without doing a
network-intensive lookup for an asset each time a new class is invoked.
However, to do this correctly you would have to make sure that the classes
calling on your cache weren't creating a new cache object every time they
wanted to do a lookup. That would render your caching mechanism useless.
This is where the Singleton pattern comes in handy, and the way we do this
is with the ServiceLocator pattern.
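A minimal sketch of such a lookup-cache singleton (CacheSingleton and its methods are our own illustrative names, with a plain HashMap standing in for the JNDI-backed cache):

```java
import java.util.HashMap;
import java.util.Map;

public class CacheSingleton {
    private static CacheSingleton instance;
    private final Map<String, Object> cache = new HashMap<>();

    private CacheSingleton() { }   // private: no outside instantiation

    // Lazily create, then always hand back, the one shared instance.
    public static synchronized CacheSingleton getInstance() {
        if (instance == null) {
            instance = new CacheSingleton();
        }
        return instance;
    }

    public void put(String name, Object asset) { cache.put(name, asset); }
    public Object get(String name)             { return cache.get(name); }
}
```

Every caller sees the same cache: whatever one caller put under a name, any later caller can get back through getInstance().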
Imagine a client accessing 5 business objects through remote calls, which is
not efficient (network latency), versus accessing a Session Facade that does
all the work through local access.
There are many uses; an important one is to reduce network traffic. If you
are calling many EJBs from your Servlet, that is not advisable because it has
to make many network trips. Instead you call a Stateless session bean and
this in turn calls the other EJBs; since they are in the same container there
are fewer network calls. You can also convert them to LOCAL EJBs, which
involve no network calls at all. This increases your server bandwidth and is
good for a highly available system.
Service Locator
As J2EE components use JNDI to look up EJB interfaces, DataSources,
JMS components, connections etc., instead of writing all the lookup code in
many pieces across the project, you write a service locator that gives you a
centralized place to handle the lookups. Such a setup is easier to maintain
and to control.
Use a Service Locator object to abstract all JNDI usage and to hide the
complexities of initial context creation, EJB home object lookup, and EJB
object re-creation. Multiple clients can reuse the Service Locator object to
reduce code complexity, provide a single point of control, and improve
performance by providing a caching facility.
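A hedged sketch of that idea (the class and the in-memory registry are our own illustration; a real Service Locator would wrap javax.naming.InitialContext and cache the EJB home objects it finds):

```java
import java.util.HashMap;
import java.util.Map;

public class ServiceLocator {
    private static ServiceLocator instance;
    private final Map<String, Object> cache = new HashMap<>();
    private final Map<String, Object> registry = new HashMap<>();  // stand-in for JNDI

    private ServiceLocator() {
        registry.put("jdbc/MyDataSource", new Object());   // pretend binding
    }

    public static synchronized ServiceLocator getInstance() {
        if (instance == null) {
            instance = new ServiceLocator();
        }
        return instance;
    }

    // First call does the (expensive) lookup; later calls hit the cache.
    public Object lookup(String name) {
        return cache.computeIfAbsent(name, registry::get);
    }
}
```

Note how the Singleton from the previous section is exactly what keeps all clients sharing one cache.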
In Java, any thread can be a daemon thread. Daemon threads are like service
providers for other threads or objects running in the same process as the daemon
thread. Daemon threads are used for background supporting tasks and are only
needed while normal threads are executing. If no normal threads are running and
only daemon threads remain, the interpreter exits.
setDaemon(true/false) - This method is used to specify that a thread is a
daemon thread.
public class DaemonThread extends Thread {
    public void run() {
        try {
            System.out.println("In run Method: currentThread() is "
                    + Thread.currentThread());
            while (true) {
                try {
                    Thread.sleep(500);
                } catch (InterruptedException x) {
                }
                System.out.println("In run method: woke up again");
            }
        } finally {
            System.out.println("Leaving run Method");
        }
    }

    public static void main(String[] args) {
        System.out.println("Entering main Method");
        DaemonThread t = new DaemonThread();
        t.setDaemon(true);
        t.start();
        try {
            Thread.sleep(3000);
        } catch (InterruptedException x) {
        }
        System.out.println("Leaving main Method");
    }
}
Threads that work in the background to support the runtime environment are called
daemon threads. For example, the clock handler thread, the idle thread, the
garbage collector thread, and the screen updater thread are all daemon threads.
The virtual machine exits whenever all non-daemon threads have completed.
public final void setDaemon(boolean isDaemon)
public final boolean isDaemon()
By default a thread you create is not a daemon thread. However you can use the
setDaemon(true) method to turn it into one.
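A small runnable sketch (DaemonCheck is our own illustrative name; note that setDaemon must be called before start(), otherwise it throws IllegalThreadStateException):

```java
public class DaemonCheck {
    public static void main(String[] args) {
        Thread t = new Thread(() -> {
            while (true) {                      // loops forever...
                try {
                    Thread.sleep(100);
                } catch (InterruptedException e) {
                    return;
                }
            }
        });
        t.setDaemon(true);                      // must precede start()
        t.start();
        System.out.println("daemon? " + t.isDaemon());   // prints daemon? true
        // When main ends, the JVM exits even though t is still looping,
        // because only daemon threads remain.
    }
}
```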
Well, the actual term is daemon thread, but they are sometimes referred to as
demon threads as they just keep on running and haunt you back.
Stored Routines (Procedures and Functions) are supported from MySQL version 5.0.
A Stored Procedure is a set of statements, which allows ease and flexibility for a
programmer because a stored procedure is easier to execute than reissuing a
number of individual SQL statements. A stored procedure can also call another
stored procedure. Stored procedures can be very useful where multiple client
applications are written in different languages or work on different platforms but
need to perform the same database operations.
Stored procedures can improve performance because less information needs to be
sent between the server and the client. This increases the load on the database
server, because less work is done on the client side and more work is done on the
server side.
CREATE PROCEDURE Syntax
The general syntax of Creating a Stored Procedure is :
CREATE PROCEDURE proc_name ([proc_parameter[......]])
routine_body
proc_name : procedure name
proc_parameter : [ IN | OUT | INOUT ] param_name type
routine_body : Valid SQL procedure statement
The parameter list is given within the parentheses. A parameter can be declared
to use any valid data type, except that the COLLATE attribute cannot be used. By
default each parameter is an IN parameter. To specify another type of parameter,
use the OUT or INOUT keyword before the parameter name.
An IN parameter is used to pass a value into a procedure. The procedure can
change the value, but when the procedure returns, the modification is not
visible to the caller. An OUT parameter is used to pass a value from the
procedure back to the caller, and it is visible to the caller. An INOUT parameter
is initialized by the caller and can be modified by the procedure, and any change
made by the procedure is visible to the caller.
For each OUT or INOUT parameter you have to pass a user-defined variable,
because only then, when the procedure returns, can you obtain its value.
But if you are invoking the procedure from another procedure, you can also pass
a routine parameter or local variable as an OUT or INOUT parameter.
The routine_body contains the valid SQL procedure statement; this can be a simple
statement like SELECT or INSERT, or a compound statement written using BEGIN
and END. A compound statement can contain declarations, loops or other control
structures.
Now we describe an example of a simple stored procedure which uses an
OUT parameter. It uses the mysql client delimiter command to change the
statement delimiter from ; to // while the procedure is being defined. Example :
mysql> delimiter //
mysql> CREATE PROCEDURE Sproc(OUT p1 INT)
    -> SELECT COUNT(*) INTO p1 FROM Emp;
    -> //
Query OK, 0 rows affected (0.21 sec)

mysql> delimiter ;
mysql> CALL Sproc(@a);
Query OK, 0 rows affected (0.12 sec)

mysql> SELECT @a;
+------+
| @a   |
+------+
|    5 |
+------+
1 row in set (0.00 sec)
CREATE FUNCTION Syntax
The general syntax of Creating a Function is :
CREATE FUNCTION func_name ([func_parameter[,...]]) RETURNS type
routine_body
func_name : Function name
func_parameter : param_name type
type : Any valid MySQL datatype
routine_body : Valid SQL procedure statement
The RETURNS clause is mandatory for a FUNCTION. It is used to indicate the return
type of the function.
Now we describe a simple example of a function. This function takes a
parameter, performs an operation using an SQL function, and returns the result.
In this example there is no need to change the delimiter because the function
body contains no internal ; statement delimiters. Example :
mysql> CREATE FUNCTION func(str CHAR(20))
    -> RETURNS CHAR(50)
    -> RETURN CONCAT('WELCOME TO, ',str,'!');
Query OK, 0 rows affected (0.00 sec)

mysql> SELECT func('RoseIndia');
+------------------------+
| func('RoseIndia')      |
+------------------------+
| WELCOME TO, RoseIndia! |
+------------------------+
1 row in set (0.00 sec)
Views:
A VIEW is a virtual table, which acts like a table but actually contains no data. It
is based on the result set of a SELECT statement. A VIEW consists of rows and
columns from one or more tables. A VIEW is a query that's stored as an
object. A VIEW is nothing more than a way to select a subset of a table's columns.
Once you have defined a view, you can reference it like any other table in a
database. A VIEW also serves as a security mechanism: VIEWs ensure that
users can modify and retrieve only the data visible to them.
By using Views you can ensure about the security of data by restricting access to
the following data:
• Specific columns of the tables.
• Specific rows of the tables.
• Specific rows and columns of the tables.
• Subsets of another view or a subset of views and tables
• Rows fetched by using joins.
• Statistical summary of data in given tables.
CREATE VIEW Statement
CREATE VIEW Statement is used to create a new database view. The general
syntax of CREATE VIEW Statement is:
CREATE VIEW view_name [(column_list)] [WITH ENCRYPTION] AS
select_statement [WITH CHECK OPTION]
view_name specifies the name for the new view. column_list specifies the names
of the columns to be used in the view. column_list must have the same number of
columns as specified in select_statement. If the column_list option is not given,
the view is created with the same columns as specified in select_statement.
The WITH ENCRYPTION option encrypts the text of the view in the syscomments
table.
The AS option specifies the action performed by the view. select_statement is
used to specify the SELECT statement that defines the view. The optional WITH
CHECK OPTION clause applies to data modification statements like INSERT and
UPDATE, forcing them to fulfill the criteria given in the select_statement defining
the view. This option also ensures that the data remains visible after the
modifications are made permanent.
Some restrictions imposed on views are given below :
• A view can be created only in the current database.
• The view name must follow the rules for identifiers and must not be the
same as that of the base table.
• A view can be created only if there is a SELECT permission on its
base table.
• A SELECT INTO statement cannot be used in view declaration statement.
• A trigger or an index cannot be defined on a view.
• The CREATE VIEW statement cannot be combined with other SQL statements
in a single batch.
JOINS:
Sometimes you require data from more than one table. Selecting data from more
than one table is known as joining. A join is a SQL query that is used to select
data from more than one table or view. When you define multiple tables or views
in the FROM clause of a query, MySQL performs a join, linking the rows from the
multiple tables together.
Types of Joins :
• INNER Joins
• OUTER Joins
• SELF Joins
We are going to describe you the Join with the help of following two tables :
mysql> SELECT * FROM Client;
+------+---------------+----------+
| C_ID | Name          | City     |
+------+---------------+----------+
|    1 | A K Ltd       | Delhi    |
|    2 | V K Associate | Mumbai   |
|    3 | R K India     | Banglore |
|    4 | R S P Ltd     | Kolkata  |
+------+---------------+----------+
4 rows in set (0.00 sec)

mysql> SELECT * FROM Products;
+---------+-------------+------+
| Prod_ID | Prod_Detail | C_ID |
+---------+-------------+------+
|     111 | Monitor     |    1 |
|     112 | Processor   |    2 |
|     113 | Keyboard    |    2 |
|     114 | Mouse       |    3 |
|     115 | CPU         |    5 |
+---------+-------------+------+
5 rows in set (0.00 sec)
INNER Joins
The INNER join is considered the default join type. An inner join returns the
column values from one row of a table combined with the column values from one
row of another table that satisfy the search condition for the join. The general
syntax of an INNER Join is :
SELECT <column_name1>, <column_name2> FROM <tbl_name> INNER
JOIN <tbl_name> ON <join_conditions>
The following example takes all the records from table Client and finds the
matching records in table Products. If no match is found, the record from
table Client is not included in the results. If multiple matches are found in
table Products for a given condition, one row will be returned for each.
Example :
mysql> SELECT * FROM Client
    -> INNER JOIN Products
    -> ON Client.C_ID=Products.C_ID;
+------+---------------+----------+---------+-------------+------+
| C_ID | Name          | City     | Prod_ID | Prod_Detail | C_ID |
+------+---------------+----------+---------+-------------+------+
|    1 | A K Ltd       | Delhi    |     111 | Monitor     |    1 |
|    2 | V K Associate | Mumbai   |     112 | Processor   |    2 |
|    2 | V K Associate | Mumbai   |     113 | Keyboard    |    2 |
|    3 | R K India     | Banglore |     114 | Mouse       |    3 |
+------+---------------+----------+---------+-------------+------+
4 rows in set (0.04 sec)
OUTER Joins
Sometimes when we perform a join between two tables, we need all the
records from one table even if there is no corresponding record in the other
table. We can do this with the help of an OUTER Join. In other words, an OUTER
Join returns all the rows returned by an INNER Join plus all the rows from one
table that did not match any row from the other table. Outer joins are divided
into two types: LEFT OUTER Join and RIGHT OUTER Join.
LEFT OUTER Join
A LEFT OUTER Join is used to return all the rows returned by an INNER Join plus
all the rows from the first table that did not match any row from the second
table, with NULL values for each column from the second table. The general
syntax of a LEFT OUTER Join is :
SELECT <column_name1>, <column_name2> FROM <tbl_name> LEFT
OUTER JOIN <tbl_name> ON <join_conditions>
In the following example we select every row from the Client table, even those
that don't have a match in the Products table. Example :
mysql> SELECT * FROM Client
    -> LEFT OUTER JOIN Products
    -> ON Client.C_ID=Products.C_ID;
+------+---------------+----------+---------+-------------+------+
| C_ID | Name          | City     | Prod_ID | Prod_Detail | C_ID |
+------+---------------+----------+---------+-------------+------+
|    1 | A K Ltd       | Delhi    |     111 | Monitor     |    1 |
|    2 | V K Associate | Mumbai   |     112 | Processor   |    2 |
|    2 | V K Associate | Mumbai   |     113 | Keyboard    |    2 |
|    3 | R K India     | Banglore |     114 | Mouse       |    3 |
|    4 | R S P Ltd     | Kolkata  |    NULL | NULL        | NULL |
+------+---------------+----------+---------+-------------+------+
5 rows in set (0.00 sec)
In the following example we are using the ORDER BY Clause with the LEFT OUTER
Join.
mysql> SELECT * FROM Client
    -> LEFT OUTER JOIN Products
    -> ON Client.C_ID=Products.C_ID
    -> ORDER BY Client.City;
+------+---------------+----------+---------+-------------+------+
| C_ID | Name          | City     | Prod_ID | Prod_Detail | C_ID |
+------+---------------+----------+---------+-------------+------+
|    3 | R K India     | Banglore |     114 | Mouse       |    3 |
|    1 | A K Ltd       | Delhi    |     111 | Monitor     |    1 |
|    4 | R S P Ltd     | Kolkata  |    NULL | NULL        | NULL |
|    2 | V K Associate | Mumbai   |     113 | Keyboard    |    2 |
|    2 | V K Associate | Mumbai   |     112 | Processor   |    2 |
+------+---------------+----------+---------+-------------+------+
5 rows in set (0.08 sec)
In the result of LEFT OUTER Join " R S P Ltd " is included even though it has no
rows in the Products table.
RIGHT OUTER Join
RIGHT OUTER Join is much the same as LEFT OUTER Join, but a RIGHT OUTER Join
is used to return all the rows returned by an INNER Join plus all the rows from
the second table that did not match any row from the first table, with NULL
values for each column from the first table. The general syntax of a RIGHT OUTER
Join is :
SELECT <column_name1>, <column_name2> FROM <tbl_name> RIGHT
OUTER JOIN <tbl_name> ON <join_conditions>
In the following example we select every row from the Products table, even those
that don't have a match in the Client table. Example :
mysql> SELECT * FROM Client
    -> RIGHT OUTER JOIN Products
    -> ON Client.C_ID=Products.C_ID;
+------+---------------+----------+---------+-------------+------+
| C_ID | Name          | City     | Prod_ID | Prod_Detail | C_ID |
+------+---------------+----------+---------+-------------+------+
|    1 | A K Ltd       | Delhi    |     111 | Monitor     |    1 |
|    2 | V K Associate | Mumbai   |     112 | Processor   |    2 |
|    2 | V K Associate | Mumbai   |     113 | Keyboard    |    2 |
|    3 | R K India     | Banglore |     114 | Mouse       |    3 |
| NULL | NULL          | NULL     |     115 | CPU         |    5 |
+------+---------------+----------+---------+-------------+------+
5 rows in set (0.03 sec)
SELF Join
SELF Join means a table can be joined with itself. A SELF Join is useful when we
want to compare values in a column to other values in the same column. To create
a SELF Join we have to list a table twice in the FROM clause and assign it a
different alias each time. To refer to the table we then use these aliases.
The following example provides the list of those Clients that belong to the same
city as C_ID=1.
mysql> SELECT b.C_ID,b.Name,b.City FROM Client a, Client b
    -> WHERE a.City=b.City AND a.C_ID=1;
+------+----------+-------+
| C_ID | Name | City |
+------+----------+-------+
| 1 | A K Ltd | Delhi |
| 5 | A T Ltd | Delhi |
| 6 | D T Info | Delhi |
+------+----------+-------+
3 rows in set (0.00 sec)
We can also write this SELF JOIN query as a subquery:
mysql> SELECT * FROM Client
    -> WHERE City=(
    -> SELECT City FROM Client
    -> WHERE C_ID=1);
+------+----------+-------+
| C_ID | Name | City |
+------+----------+-------+
| 1 | A K Ltd | Delhi |
| 5 | A T Ltd | Delhi |
| 6 | D T Info | Delhi |
+------+----------+-------+
3 rows in set (0.03 sec)
Cursor:
Cursors are used when an SQL SELECT statement is expected to return more than
one row. Cursors are supported inside stored procedures and functions. A cursor
must be declared, and its declaration contains the query; the declaration must
appear in the DECLARE section of the routine. A cursor must be opened before
processing and closed after processing.
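As a minimal, hedged sketch of this declare-open-fetch-close cycle (it assumes the Client table used in the join examples above; the procedure name list_client_cities and the handler variable names are invented for illustration):

```sql
DELIMITER //
CREATE PROCEDURE list_client_cities()
BEGIN
  DECLARE done INT DEFAULT 0;
  DECLARE v_city VARCHAR(30);
  -- The cursor definition contains the query (DECLARE section).
  DECLARE cur CURSOR FOR SELECT City FROM Client;
  DECLARE CONTINUE HANDLER FOR NOT FOUND SET done = 1;

  OPEN cur;                      -- open before processing
  read_loop: LOOP
    FETCH cur INTO v_city;
    IF done = 1 THEN
      LEAVE read_loop;
    END IF;
    SELECT v_city;
  END LOOP;
  CLOSE cur;                     -- close after processing
END //
DELIMITER ;
```

Note that in MySQL the cursor declaration must come after variable declarations and before handler declarations inside the DECLARE section.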
Triggers:
A trigger is a named database object that defines an action the database
should take when some database-related event occurs. Triggers are executed
when you issue a data manipulation command such as INSERT, DELETE, or UPDATE
on a table for which the trigger has been created. They are executed
automatically and are transparent to the user, but to create a trigger the
user must have the CREATE TRIGGER privilege. In this section we describe the
syntax used to create and drop triggers and give some examples of how to use
them.
CREATE TRIGGER
The general syntax of CREATE TRIGGER is :
CREATE TRIGGER trigger_name trigger_time trigger_event ON
tbl_name FOR EACH ROW trigger_statement
The statement above creates a new trigger. A trigger can be associated only
with a table name, and that name must refer to a permanent table.
trigger_time is the trigger action time. It can be BEFORE or AFTER, and it
defines whether the trigger fires before or after the statement that
activated it.
trigger_event specifies the statement that activates the trigger. It can be
any of the DML statements INSERT, UPDATE, or DELETE.
We cannot have two triggers for a given table that have the same trigger
action time and event. For instance, we cannot have two BEFORE INSERT triggers
for the same table, but we can have a BEFORE INSERT and a BEFORE UPDATE
trigger for the same table.
trigger_statement contains the statement that executes when the trigger fires;
if you want to execute multiple statements you have to use a BEGIN…END
compound statement.
We can refer to the columns of the table associated with the trigger by using
the OLD and NEW keywords. OLD.column_name refers to a column of an existing
row before it is updated or deleted, and NEW.column_name refers to a column of
a new row to be inserted or of an existing row after it is updated.
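The syntax and the OLD/NEW keywords can be illustrated with a hedged sketch; the Orders and order_log tables below are hypothetical, invented only for this example:

```sql
-- Hypothetical tables for illustration.
CREATE TABLE Orders (order_id INT, qty INT);
CREATE TABLE order_log (order_id INT, old_qty INT, new_qty INT);

-- BEFORE UPDATE trigger: OLD.qty is the value before the update,
-- NEW.qty the value after it. A single statement needs no BEGIN…END.
CREATE TRIGGER before_order_update
BEFORE UPDATE ON Orders
FOR EACH ROW
INSERT INTO order_log VALUES (OLD.order_id, OLD.qty, NEW.qty);
```

A trigger is removed with DROP TRIGGER trigger_name.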
JDBC:
Java Database Connectivity (JDBC) is similar to Open Database Connectivity
(ODBC), which is used for accessing and managing databases; the difference is
that JDBC is designed specifically for Java programs, whereas ODBC is not
dependent on any particular language.
The JDBC DriverManager.
The JDBC DriverManager is an important class that defines objects which
connect Java applications to a JDBC driver. The DriverManager is the backbone
of the JDBC architecture: a very simple and small class whose job is to
manage the different types of JDBC database drivers used by an application.
Its main responsibility is to load all the drivers found in the system and to
select the most appropriate driver when a new database connection is opened.
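The DriverManager's role can be sketched in plain Java. This is a minimal sketch: the class name DriverManagerDemo, the example JDBC URL, and the credentials are placeholders invented for illustration, and the connection attempt is only expected to succeed if a matching driver is on the classpath.

```java
import java.sql.Connection;
import java.sql.Driver;
import java.sql.DriverManager;
import java.sql.SQLException;
import java.util.Enumeration;

public class DriverManagerDemo {
    public static void main(String[] args) {
        // List every JDBC driver currently registered with the DriverManager.
        Enumeration<Driver> drivers = DriverManager.getDrivers();
        int count = 0;
        while (drivers.hasMoreElements()) {
            System.out.println("Registered driver: "
                    + drivers.nextElement().getClass().getName());
            count++;
        }
        System.out.println(count + " driver(s) registered");

        // DriverManager selects the first registered driver that accepts the
        // URL. The URL and credentials below are placeholders; replace them
        // with values supported by a driver on your classpath.
        try {
            Connection con = DriverManager.getConnection(
                    "jdbc:mysql://localhost/test", "user", "password");
            con.close();
        } catch (SQLException e) {
            System.out.println("No connection: " + e.getMessage());
        }
    }
}
```

With no driver on the classpath the enumeration is empty and getConnection throws an SQLException, which the sketch simply reports.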
The JDBC-ODBC Bridge.
The JDBC-ODBC bridge, also known as the JDBC type 1 driver, is a database
driver that uses an ODBC driver to connect to the database. This driver
translates JDBC method calls into ODBC function calls. The bridge implements
JDBC for any database for which an ODBC driver is available. The bridge is
implemented as the sun.jdbc.odbc Java package and contains a native library
used to access ODBC.
In the pooled state, the values of the instance variables are not needed. You can
make these instance variables eligible for garbage collection by setting them to null
in the ejbPassivate method.
The Life Cycle of a Message-Driven Bean
Figure 3-6 illustrates the stages in the life cycle of a message-driven bean.
The EJB container usually creates a pool of message-driven bean instances. For
each instance, the EJB container instantiates the bean and performs these tasks:
1. It calls the setMessageDrivenContext method to pass the context object to
the instance.
2. It calls the instance's ejbCreate method.
Figure 3-6 Life Cycle of a Message-Driven Bean
Like a stateless session bean, a message-driven bean is never passivated, and it
has only two states: nonexistent and ready to receive messages.
At the end of the life cycle, the container calls the ejbRemove method. The bean's
instance is then ready for garbage collection.
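The life-cycle steps above can be illustrated with a container-free, plain-Java simulation. All the types below (MyMessageBean, the placeholder MessageDrivenContext, MdbLifeCycleDemo) are stand-ins invented for illustration; they are not the real javax.ejb API, and the main method plays the role of the EJB container.

```java
// Placeholder for the real context object the container would supply.
class MessageDrivenContext { }

class MyMessageBean {
    private MessageDrivenContext context;

    // Step 1: the container passes the context object to the instance.
    public void setMessageDrivenContext(MessageDrivenContext ctx) {
        this.context = ctx;
        System.out.println("setMessageDrivenContext called");
    }

    // Step 2: the container calls ejbCreate; the bean is then ready.
    public void ejbCreate() {
        System.out.println("ejbCreate called");
    }

    // Ready state: the bean receives messages until it is removed.
    public void onMessage(String msg) {
        System.out.println("onMessage: " + msg);
    }

    // End of the life cycle: the container calls ejbRemove.
    public void ejbRemove() {
        System.out.println("ejbRemove called");
    }
}

public class MdbLifeCycleDemo {
    public static void main(String[] args) {
        MyMessageBean bean = new MyMessageBean(); // container instantiates the bean
        bean.setMessageDrivenContext(new MessageDrivenContext());
        bean.ejbCreate();                         // nonexistent -> ready
        bean.onMessage("hello");                  // delivered in the ready state
        bean.ejbRemove();                         // ready -> nonexistent; eligible for GC
    }
}
```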
package test;

/*
** Use Collections.sort to sort a List.
**
** When you need natural sort order you can implement
** the Comparable interface.
**
** If you want an alternate sort order, or to sort on
** different properties, implement a Comparator for your class.
*/
import java.util.*;

/* Minimal Farmer class reconstructed to make the fragment complete. */
class Farmer implements Comparable {
    private String name;
    Farmer(String name) { this.name = name; }
    public String getName() { return name; }
    public String toString() { return name; }

    /* Implement the natural order for this class */
    public int compareTo(Object o) {
        return getName().compareTo(((Farmer) o).getName());
    }
}

public class SortFarmers {
    public static void main(String[] args) {
        List farmer = new ArrayList();
        farmer.add(new Farmer("Baker"));
        farmer.add(new Farmer("Adams"));

        Collections.sort(farmer);
        System.out.println("Sort in natural order");
        System.out.println("\t" + farmer);

        Collections.sort(farmer, Collections.reverseOrder());
        System.out.println("Sort by reverse natural order");
        System.out.println("\t" + farmer);
    }
}
Output
Sort in natural order
	[Adams, Baker]
Sort by reverse natural order
	[Baker, Adams]