In the first article of this series I explored what regression testing is, why it matters, and, by breaking down the OBIEE stack into its constituent parts, where it is possible to do it for OBIEE. In this posting, I explore some approaches that lend themselves well to automation, for testing that existing analyses and dashboards are not affected by RPD changes.
The data that the BI Server passes back up to Presentation Services for rendering in the user's web browser is the raw data that feeds into what the user will see. Obviously the data gets processed further in graphs, pivot tables, narrative views, and so on, but the actual filtering, aggregation and calculation applied to the data is all complete by the time that it leaves the BI Server.

How the BI Server responds to data requests (in the form of Logical SQL queries) is governed by the RPD, the metadata model that abstracts the physical source(s) into a logical Business Model. Because of this abstraction, all Logical queries (i.e. analysis/dashboard data requests) are compiled into Physical SQL for sending to the data source(s) at runtime only. Whenever the RPD changes, the way in which any logical query is handled may change.

So if we focus on regression testing the impact that changes to the RPD have on the data alone, then the available methods become clearer and easier. We can take the Logical SQL that Presentation Services generates for an analysis and sends to the BI Server, and run it directly against the BI Server ourselves to get the resulting [logical] dataset. This can be done using any ODBC or JDBC client, such as nqcmd (which is supplied with OBIEE at installation).
Faith in reason
If:
- the Logical SQL remains the same (i.e. neither the analysis nor the Presentation Services binaries have changed; but see the caveat below)
- the data returned by the BI Server as a result of the Logical SQL is the same before and after the RPD change

then we can reason (through the above illustration of the OBIEE stack) that the resulting report shown to the user will remain the same.
Using this logic we can strip away the top layer of testing (trying to detect if one web page matches another) and test directly against the data, without having to actually run the report itself.
In practice
To use this method in practice, the process is as follows:
1. Obtain the Logical SQL for each analysis in your chosen dashboards
2. Run the Logical SQL through the BI Server and save the data
3. Make the RPD changes
4. Rerun the Logical SQL through the BI Server and save the data
5. Compare the data before & after to detect whether any changes occurred
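As a concrete (and hedged) illustration, the five steps above can be sketched as a pair of shell functions. The one-`.lsql`-file-per-analysis naming convention, the DSN and the credentials here are assumptions to adapt to your own environment:

```shell
# Sketch of the before/after test loop. One .lsql file per analysis is
# assumed; the DSN and credentials are illustrative placeholders.

run_suite () {
  # Run every Logical SQL file through nqcmd, saving each resultset
  # with the given suffix ("before" or "after").
  suffix=$1
  for lsql in *.lsql; do
    nqcmd -d AnalyticsWeb -u weblogic -p Password01 \
          -s "$lsql" -o "${lsql%.lsql}.${suffix}"
  done
}

compare_results () {
  # Report only the analyses whose before/after datasets differ.
  rc=0
  for before in *.before; do
    if ! diff -q "$before" "${before%.before}.after" >/dev/null; then
      echo "REGRESSION? ${before%.before}"
      rc=1
    fi
  done
  return $rc
}
```

The flow would then be: `run_suite before`, make the RPD change, `run_suite after`, then `compare_results` to report exceptions only.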
The Logical SQL can be obtained from Usage Tracking, for example:
SELECT QUERY_TEXT
FROM S_NQ_ACCT
WHERE START_TS > SYSDATE - 7
or you can take it directly from nqquery.log. For more than a few analyses, Usage Tracking is definitely the more practical option.

You can also generate Logical SQL directly from an analysis in the catalog using runcat.sh; see later in this post for details.
If you don't know which dashboards to take the Logical SQL from, ask yourself which are going to cause the most upset if they stop working, as well as making sure that you have a representative sample across all usage of your RPD's Subject Areas.

Give the Logical SQL of each analysis an ID and keep a log of which dashboard it is associated with, when it was taken, and so on. Then run it through nqcmd (or an alternative) to return the first version of the data.
nqcmd -d AnalyticsWeb -u weblogic -p Password01 -s analysis01.lsql -o analysis01.before
where:
-d is the BI Server DSN. For remote testing this is defined as part of the configuration/installation of the client. For testing local to the server it will probably be AnalyticsWeb on Linux and coreapplication_OHxxxxxx on Windows
-u and -p are the credentials of the user under whose ID the test should be executed
-s is the file holding the Logical SQL statement to execute
-o is the file to which the resulting dataset is written
Once you've run the initial data sample, make your RPD changes, and then rerun the data collection with the same command as before (except to a different output file). If the nqcmd command fails then it's an indication that your RPD has failed regression testing already (because it means that the actual analysis will presumably also fail).

An important point here is that if your underlying source data changes, or any time-based filter results change, then the test results will be invalid. If you are running an analysis looking at Sales for "Yesterday", and the regression test takes several days, then "Yesterday" may change (depending on your init-block approach) and so will the results.

A second important point to note is that you must take the BI Server cache into account. If it is enabled, either disable it during your testing, or use the DISABLE_CACHE_HIT request variable with your Logical SQL statements.
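If disabling the cache globally is not an option, the request variable can be prepended to each saved Logical SQL file before execution. A minimal sketch; the SET VARIABLE prefix is standard Logical SQL request-variable syntax, but the file naming here is illustrative:

```shell
# Prepend a request variable that bypasses the BI Server cache for this
# request only, writing the result to a new file.
add_nocache () {
  lsql=$1
  { printf 'SET VARIABLE DISABLE_CACHE_HIT=1;\n'; cat "$lsql"; } \
      > "${lsql%.lsql}.nocache.lsql"
}
# e.g. add_nocache analysis01.lsql  produces  analysis01.nocache.lsql
```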
Having taken the before and after data collections for each analysis, it's a simple matter of comparing the before/after files for each and reporting any differences. Any difference is typically going to mean a regression has occurred, and you can use the before/after data files to identify exactly where. On Linux the diff command works perfectly well for this.

In this case we can see that the "after" test failed with a missing table error. If both files are identical (meaning there is no difference in the data before and after the RPD change), there is no output from diff:
Tools like diff are not pretty, but in practice you wouldn't be running all this manually; it would be scripted, reporting on exceptions only.

So a typical regression test suite for an existing RPD would be a set of these nqcmd calls against an indexed list of Logical SQL statements, collecting the results for comparison with the same executions once the RPD changes have been made.
Tips
Instead of collecting actual data, you could run the results of nqcmd directly through md5 and store just the hash of each resultset, making for faster comparisons. The drawback of this approach is that to examine any discrepancies you'd need to rerun both the before & after tests. There is also the theoretical risk of a hash collision (where the same hash is generated for two non-matching datasets) to be aware of.
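A sketch of this hashing variant, assuming md5sum is available (standard on Linux; substitute md5 or shasum elsewhere):

```shell
# Store and compare only a hash of each resultset rather than the full data.
hash_resultset () {
  md5sum "$1" | awk '{print $1}'
}

same_data () {
  # True if the two resultset files hash identically.
  [ "$(hash_resultset "$1")" = "$(hash_resultset "$2")" ]
}
```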
diff sets a shell return code depending on whether there is a difference in the data (RC=1) or not (RC=0), which makes it easy to script around.
nqcmd uses stdout and stderr, so instead of specifying -o for an output file, you can redirect the output of the Logical SQL for each analysis to a results file (file descriptor 1) and an error file (file descriptor 2), making spotting errors easier:
nqcmd -d AnalyticsWeb -u weblogic -p Password01 -s analysis01.lsql 1>analysis01.out 2>analysis01.err
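Assuming stderr has been captured to a per-analysis .err file in this way (the file naming is illustrative), any non-empty error file is then a quick flag for a failed analysis:

```shell
# List any analyses whose nqcmd run produced output on stderr.
failed_analyses () {
  for err in *.err; do
    # -s tests for "exists and is non-empty"
    [ -s "$err" ] && echo "FAILED: ${err%.err}"
  done
  return 0
}
```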
One thing to watch for is that the Physical query logged in Usage Tracking (although not in nqquery.log) has a Session ID embedded at the front of it, which will make direct comparison more difficult.

The Physical SQL is dependent on the RPD; if the RPD changes then the Physical SQL may change. However, if neither the RPD nor the inbound Logical SQL has changed, and only the underlying data source has changed (for example, a schema modification or database migration), then we can ignore the OBIEE stack itself and simply test the results of the Physical SQL statement(s) associated with the analysis/dashboard in question, making sure that the same data is being returned before and after the change.
The Logical SQL for this can be seen in the Advanced tab
Now without modifying the analysis at all, we change the RPD to add in a Sort order column:
Reload the RPD in Presentation Services (Reload Files and Metadata) and reload the analysis and examine the Logical SQL.
Even though we have not changed the analysis at all, the Logical SQL has changed:
When executed, the analysis results are now sorted according to the Month_YYYYMM column, i.e. chronologically:
The same happens with the Descriptor ID Column setting for a Logical Column: the generated Logical SQL will change if this is modified. Changes to Logical Dimensions can also affect the Logical SQL that is generated if an analysis uses hierarchical columns. For example, if the report has a hierarchical column expanded down one level, and that level is then deleted from the logical dimension in the RPD, the analysis will instead show the next level down when next run.
runcat.sh is a powerful utility, but a bit of a sensitive soul when it comes to syntax. First off, it's easiest to call it from its home folder:
cd $FMW_HOME/instance1/bifoundation/OracleBIPresentationServicesComponent/coreapplication_obips1
For further information on a particular command (in our case, we're using the report command):

./runcat.sh -cmd report -help
./runcat.sh -cmd report -online http://server:port/analytics/saw.dll -credentials creds.txt -forceOutputFile output.lsql -folder
Having called runcat.sh once, you then make the RPD change, reload the RPD in Presentation Services, and then call runcat.sh again and compare the generated Logical SQL (e.g. using diff). If it's the same, then you can be sure that when the analysis runs it is going to do so with the same Logical SQL, and you can thus use the nqcmd method above for comparing before/after datasets.
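This comparison can itself be scripted, automatically flagging any tests whose dataset comparison would be invalid. A minimal sketch (the file names are illustrative):

```shell
# True if the Logical SQL generated before and after the RPD change is
# identical, meaning the nqcmd dataset comparison for that analysis is valid.
lsql_unchanged () {
  diff -q "$1" "$2" >/dev/null
}
# e.g.:
# if lsql_unchanged before.lsql after.lsql; then
#   echo "safe to compare nqcmd datasets"
# else
#   echo "Logical SQL changed - dataset comparison would be invalid"
# fi
```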
To call runcat.sh you need the credentials in a flatfile that looks like this:
login=weblogic
pwd=Password01
If the plaintext password makes you uneasy then consider the partial workaround that is proposed in a blog post that I wrote last year: Make Use of OBIEE's Command Line Tools with Reduced Exposure of Plain Text Passwords.
Bringing it together
Combining both nqcmd and runcat.sh gives us a logic flow as follows.
[Diagram: four-step logic flow combining nqcmd and runcat.sh]
You may wonder why we have two Logical SQL statements present: the one for use with nqcmd, and the one from runcat.sh. The Logical SQL for use with nqcmd will typically come from actual analysis execution (nqquery.log / Usage Tracking), with filter values present. To compare generated Logical SQL, the source analysis needs to be used.
Summary
So to summarise: automated regression testing of OBIEE can be done using just the tools that are shipped with OBIEE (and a wee bit of scripting to automate them). In this article I've demonstrated how automated regression testing of OBIEE can be done, and suggested how it should be done if the changes are just in the RPD. Working directly with the BI Server, the Logical SQL and the resultset is much more practical and easier to automate at scale. Understanding the caveat to this approach (that it relies on the Logical SQL remaining the same), and understanding in what situations this may apply, is important. I demonstrated another automated method that can be used to flag any tests for which the dataset comparison results would be invalid.

Testing the data that analyses request and receive from the BI Server can be done using nqcmd by passing in the raw Logical SQL, and this Logical SQL we can also programmatically validate using the Catalog Manager tool in command-line mode, runcat.sh.
Looking back at the diagram from the first post in this series, you can see the opportunities for regression testing in the
OBIEE stack, with the point being that a clear comprehension of this would allow one to accurately target testing, rather than
assuming that it must take place at the front end:
If we now add to this diagram the tools that I have discussed in the article, it looks like this:
I know I've not covered Selenium, but I've included it in the diagram for completeness. The other tool that I plan to cover in a future posting is the OBIEE Web Services, as they too could have a role to play in testing for specific regressions.
Conclusion
Taking a step back from the detail, I have shown here a viable and pragmatic approach to regression testing OBIEE in a manner that can actually be implemented and automated at scale. It is important to be aware that this is not 100% test coverage. For example, it omits important considerations such as testing for security regressions (can people now see what they shouldn't?). However, I would argue that some regression testing is better than none, and that regression testing with one's eyes open to its limitations (which can be addressed manually) is a sensible approach.
Regression testing doesn't have to be automated. A sensible mix of automation and manual checking is a good idea to try and maximise the test coverage, perhaps following an 80/20 rule. The challenges around regression testing the front end mean that it is sensible to explore more focussed testing further down the stack where possible. Just because front-end regression testing can't be automated, it doesn't mean that the front end shouldn't be regression tested; but perhaps it is more viable to spend time visually checking and confirming something than investing orders of magnitude more hours in building an automated solution that will only ever be less accurate and less flexible.
Regression testing OBIEE is not something that can be solved by throwing software at it. By definition, software must be told what to do, and OBIEE is too flexible and too complex to be constrained in such a manner that a single software solution will be able to accurately detect all regressions.
Successfully planning and executing regression testing for OBIEE is something that needs not only to be part of a project from the outset, but is also something that the developers must take active responsibility for. They have the user interviews and the functional specs; they know what the system should be doing, and they know what changes they have made to it, and so they should know what needs testing and in what way. A siloed approach to development, where regression testing is someone else's problem, is never going to be as effective (in terms of accuracy and time/money) as one in which developers actively participate in the design of regression tests for the specific project and development tasks at hand.

Many thanks to Gianni Ceresa for his thoughts and assistance on this subject.
Related Posts:
OBIEE Regression Testing An Introduction
Visual Regression Testing of OBIEE with PhantomCSS
Built-In OBIEE Load Testing with nqcmd
Tags: diff, logical sql, nqcmd, obiee, odbc, regression, testing