
Engineering and Information Technology


Research Report 2011
Never Stand Still

School of Engineering and Information Technology

Contact us

If you would like further information, please contact


the Research Student Admissions Coordinator:
A/Prof Mark Pickering
Telephone: +61 2 6268 8238
Fax: +61 2 6268 8443
Email: m.pickering@adfa.edu.au
The School of Engineering and Information Technology
The University of New South Wales Canberra
PO Box 7916
CANBERRA BC ACT 2610
Cricos Provider Code: 00100G CMU 13492

"Once we accept our limits, we go beyond them." - Albert Einstein

The School of Engineering and Information Technology is one of four


Schools of the University of New South Wales located at the ADFA
campus in Canberra. Research is a key focus for the School, and inspires
our approach to teaching and other activities.

Cover image: Rotary UAV

Back cover image: Hypersonic Tunnel

Production:
Editor: Dr Sreenatha Anavatti
School of Engineering and Information Technology
Design: Creative Media Unit

Foreword
The School of Engineering and Information Technology (SEIT) is one of the four
schools of the University of New South Wales located at the Australian Defence
Force Academy campus in Canberra. Outstanding research is a key focus for the
School. This inspires our approach to teaching and other activities in the School.
The School's diverse research interests span our base disciplines and reach into a
wide variety of application areas including space, control, cyber security, air-traffic
management, complex imaging and many others.
The School's research funding comes from various sources including the Australian Research Council, Centres of Excellence, Research Networks, the Department of Defence, the Defence Science and Technology Organisation and the University of New South Wales, along with other private organisations. The School's research output has shown excellent growth, with nearly 350 publications included in the Higher Education Research Data Collection.
The School encourages quality research and healthy competition through initiatives such as special awards for Excellence in Research Publications for academics and Ph.D. scholars. The number of publications in highly ranked and high-impact journals has increased considerably over the last few years.
A number of academics in the School received significant recognition in the last year, enhancing the profile of the School. Professor Ian Petersen was elected a Fellow of the Australian Academy of Science and was awarded a prestigious Australian Research Council (ARC) Laureate Fellowship. Dr Sameer Alam was named one of the ACT Young Tall Poppies of the year as an outstanding young scientist.
The number and quality of Ph.D. scholars continue to increase, reflecting the School's strong involvement in research training. The School has been able to attract quality students from countries around the world, providing a dynamic workforce.
This research report summarises the research achievements of the School's community during 2011.
Prof Elanor Huntington
Head of School
April 2012


Frequently Used Abbreviations

ADFA: Australian Defence Force Academy
ANSTO: Australian Nuclear Science and Technology Organisation
ARC: Australian Research Council
BUS: School of Business
CRC: Cooperative Research Centre
CSIRO: Commonwealth Scientific and Industrial Research Organisation
DEST: Department of Innovation, Industry, Science and Research
DEEWR: Department of Education, Employment and Workplace Relations
DSARC: Defence and Security Applications Research Centre
DSTO: Defence Science and Technology Organisation
HASS: School of Humanities and Social Sciences
NASA: National Aeronautics and Space Administration
NICTA: National Information and Computer Technology Australia
PEMS: School of Physical, Environmental and Mathematical Sciences
RRTO: Research and Research Training Office
SEIT: School of Engineering and Information Technology
UNSW: University of New South Wales
USAF: United States Air Force

Contents

Research Activities
Acoustics and Vibration 4
Air Traffic Management 9
Aviation Research 15
Composite Materials and Structures 17
Computational Intelligence 28
Concrete Technology and Materials 32
Control Theory and Control Applications 36
Cyber Security 39
Developmental Systems and Machine Learning 41
Engineering in Medicine 44
Maximum-Entropy Analyses of Flow Systems 46
Geotechnical Engineering and Pavement Geotechnics 48
High Frequency Engineering 53
High-Speed Flows and Microfluidics 59
Image Coding 67
Imaging Through Turbulence 70
Immiscible Contaminants in Natural Porous Media 73
Operations Research and Optimisation 75
Opto-Electronics 82
Remote Sensing 88
Social Networks Group 91
Software Engineering 94
Viable Systems Planning, Strategy and Architecture 95
Systems Engineering 98
Underwater Communications 100
Unmanned Vehicles 103
Virtual Environments & Simulation 107

Research Facilities 111

2011 SEIT Academics 115

Research
Activities
Acoustics and Vibration
SEIT Academics
Prof. Joseph Lai
Dr Krishnakumar Shankar
Dr Sreenatha Anavatti
Dr Murat Tahtali
A/Prof. Don Fraser

SEIT Postgraduate Students


Mr Nick LeMarshall
Mr Sebastian Oberst
Ms Zhi Fang Zhang
Mr Md. Younus Ali

SEIT Research Staff


Mrs Marion Burgess
Dr Sebastian Oberst

Other Collaborators
Bosch Chassis
Dr Antti Papinniemi
Dr Zhiye Zhao
CSIRO
Dr Theo A Evans
Oita University, Japan
Prof. Toru Otsuru
Scientific Technology Pty Ltd
Dr Andrew Tirkel
University of Adelaide
Mr Gerard Rankin
Universität der Bundeswehr München, Germany
Prof. Steffen Marburg


Research Description
Research undertaken by the Acoustics and Vibration
group spans a wide range of topics and includes
environmental noise, occupational noise, machinery
noise control, structural dynamics, vibration
monitoring for non-destructive inspection and
interdisciplinary areas that involve acoustics, vibration,
materials and biology. Below are some current
research projects which require the use of state-of-the-art acoustics and vibration instrumentation and
numerical modelling techniques such as the finite
element method, boundary element method and
nonlinear time series analysis.

Active noise control headphones and awareness


This project is undertaken in collaboration with Dr
Brett Molesworth from the School of Aviation at UNSW,
Kensington. The main aim of the project is to
examine the effect of noise cancelling technology
(e.g., headphones) on concurrent task performance
within an aviation environment. The investigations are
undertaken within the laboratory with a simulation
of the noise present in an aircraft cabin. The
effect of the use of noise cancelling headphones
by passengers on their ability to understand
audio information, such as a safety message, as
would be presented during the preparation for a
commercial flight, is being investigated. This work
is being extended to the effect of the use of such
headphones on situational awareness.

Vibration and Acoustic Analysis of the Role of


Nonlinearity in Disc Brake Squeal
Disc brake squeal is a major source of customer
dissatisfaction. The prediction of disc brake squeal
propensity remains difficult despite significant progress
made in the last two decades towards understanding
its nature. A full brake system (Figure 1) is difficult
to model exactly with regard to boundary contact
conditions and material properties. Most of the
numerical analysis of brake squeal is based on linear
methods that have found some success in guiding
the development of brakes in industry. One popular
approach is the complex eigenvalue analysis using
finite element models to predict unstable vibration
modes. However, the complex eigenvalue analysis may
over-predict or under-predict the number of unstable
vibration modes and not all predicted unstable vibration
modes will result in squeal. Therefore, extensive brake
testing in noise dynamometers is required in order
to ensure that the noise performance of brakes is
acceptable. Although the analysis of brake squeal
propensity is primarily based on linear approaches,
it has been recognised that the operation of a brake
contains a number of nonlinearities such as the
excitation through the friction contact between the disc
and pad, material properties, and operating conditions.
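As a toy illustration of the complex eigenvalue analysis mentioned above, the following Python sketch builds a friction-coupled, asymmetric stiffness matrix for a two-degree-of-freedom system and flags eigenvalues with positive real parts, the usual indicator of squeal-prone (unstable) modes. The friction coefficient and matrix entries are assumed for illustration only and do not represent a finite element brake model.

import numpy as np

# Illustrative 2-dof system; a finite element brake model would have thousands of dof.
M = np.diag([1.0, 1.0])                    # mass matrix
C = np.diag([0.02, 0.02])                  # light damping
mu = 0.6                                   # assumed friction coefficient
K = np.array([[100.0,  mu * 40.0],
              [-mu * 40.0, 120.0]])        # asymmetric coupling from friction

# First-order state-space form: eigenvalues with positive real part are unstable.
A = np.block([[np.zeros((2, 2)), np.eye(2)],
              [-np.linalg.solve(M, K), -np.linalg.solve(M, C)]])
eigvals = np.linalg.eigvals(A)
print(eigvals)
print("eigenvalues with positive real part:", int(np.sum(eigvals.real > 0)))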

Our research has shown that (i) some brake squeal


is caused by nonlinearity; (ii) brake squeal test data
obtained in a brake noise dynamometer display
features typical of deterministic chaos (Figure 2); and
(iii) the noise performance of brake systems can be
ranked by statistical analysis and nonlinear time series
analysis of brake test data. In order to study the role
of nonlinearity in disc brake squeal, the dynamics
of an analytical forced 2 dof friction oscillator sliding
in a plane with a constant friction coefficient is
examined (Figure 3). Results in Figure 4 show that
friction coupling nonlinearity can produce weak
chaotic behaviour and support the findings of in-plane pad mode instabilities observed in a numerical pad-on-disc model. By using models of simplified
pad-on-disc model. By using models of simplified
brake systems and energy analysis, we have shown
that instabilities associated with pad modes have
significant potential to cause brake squeal although
they are not detected by complex eigenvalue analysis
to be unstable. The challenge would be to develop
a method to exploit nonlinearity for more reliable
prediction of brake squeal propensity.
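A minimal sketch of the time-delay (Takens) embedding step that underlies the attractor reconstruction shown in Figure 2 is given below; the synthetic signal, delay and embedding dimension are illustrative assumptions, not the dynamometer data or the parameters used in this work.

import numpy as np

def delay_embed(x, dim=3, tau=25):
    """Reconstruct a trajectory from a scalar time series by time-delay embedding."""
    n = len(x) - (dim - 1) * tau
    return np.column_stack([x[i * tau : i * tau + n] for i in range(dim)])

# A synthetic two-frequency signal with noise stands in for squeal test data.
t = np.linspace(0.0, 100.0, 5000)
signal = np.sin(t) + 0.3 * np.sin(3.1 * t) + 0.01 * np.random.randn(t.size)
attractor = delay_embed(signal, dim=3, tau=25)
print(attractor.shape)   # (number of points, embedding dimension)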

Figure 1: Model of a one piston, floating calliper brake system with ventilated brake disc.


Figure 2: Attractor re-constructed from time series of brake squeal test data: (a) limit cycle, (b) torus,
(c) chaotic attractor

Figure 3: A forced 2 dof friction oscillator sliding


on the moving x-y plane.

Figure 4: (a) Time series of the position vector; (b) phase-space plots with Poincaré section (plane A), maximal Lyapunov exponent and Kaplan-Yorke dimension; (c) power spectral density estimates.


Discovering how termites use vibrations to


make foraging decisions
Termites are pests affecting one third of Australian
homes. The annual cost of treatment and damage
repair is over $20 billion worldwide. Despite being
blind, termites with poorly developed anatomical
defences possess remarkable abilities for survival
against predators: infestations in houses are often only
discovered when an apparently intact timber object
collapses. Our recent research demonstrates for the first time that termites use vibrations to assess food
quantity and quality and to avoid competitors
by eavesdropping.
Considering that termites have a relatively simple
nervous system with the entire cerebral ganglia of
most termites occupying a volume of the order of 0.1 mm³, their abilities to use vibrations to make foraging
decisions are remarkable feats. Yet, little is known
about how termites make foraging decisions based
on vibrations. We are studying the key features in
vibration signals produced by termites to unlock the
secrets of their foraging behaviour.

Effect of Millimeter Waves on Termite


Behaviour
A 24 GHz termite detector that has been successful
in detecting termite movement through walls and
floors was developed by Scientific Technology,
and commercialized by Termatrac. Apart from
thermal effects, little is known about the effects that
Termatrac emissions have on the termites it senses.
However, theoretical and experimental investigation of
the interactions of millimeter and sub-millimeter waves
with living things has a rich and varied history.
This offers the intriguing possibility of termite
provocation or control, using suitable emissions.

Figure 5 shows a typical set up to simulate termites


travelling to and from their nest via mud tubes.
The nest with or without vermiculite is on the
left, the food (timber block) is on the right and the
termites (workers and soldiers) in the central dish
were exposed to 1 W of power at 24 GHz out of a
pyramidal horn antenna directly above the dish. The
thermal image in Figure 6 shows that the test worker
termites (approx 12 mm) moved freely into and out of
the beam and took turns in basking under the horn
achieving a maximum temperature of 31°C, which is within their thermal comfort zone and preferable to the ambient temperature of 22°C. We have shown that 12
mm long termites exhibit resonant absorption near
25 GHz.
When termites were exposed to 1.3 W at 28.24 GHz, individual termites were heated to 42°C. In order to
entice termites into the beam, a dead termite was
placed in the centre. This resulted in some termites
venturing into the beam to investigate. These termites
became distressed, and presumably sent distress
signals to others, who followed, resulting in a huddle.
This huddle proved suicidal, as shown in Figure 7,
where the peak temperature exceeded 55°C. Termites
lack internal thermal regulation and lose most of their
heat by radiation. The latter is proportional to ST⁴, where S is the surface area and T is the absolute temperature in kelvin.
By huddling, termites reduce their effective radiating
surface area and increase the heat exchange between
neighbours by radiation. This accounts for the extra
13°C rise in the huddle. Suicidal huddling behaviour
was also observed in termites trying to cross a water
barrier, and it was also speculated that such huddles
were also due to individuals in distress (who fell into
the water) attracting a crowd of other victims.
Such social behaviour of termites under distress has implications for termite control.
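The ST⁴ radiation argument above can be put into numbers with a rough radiative-balance sketch: the same absorbed microwave power, re-radiated from a smaller effective surface area, forces a higher equilibrium temperature. The areas, emissivity and absorbed power below are illustrative assumptions only, not measured values from the experiments.

SIGMA = 5.67e-8          # Stefan-Boltzmann constant, W m^-2 K^-4
T_AMB = 273.15 + 22.0    # ambient temperature in K (22°C, as in the experiment)

def equilibrium_temp_C(absorbed_power_W, area_m2, emissivity=0.95):
    """Solve eps * sigma * S * (T^4 - T_amb^4) = absorbed power for T."""
    t4 = absorbed_power_W / (emissivity * SIGMA * area_m2) + T_AMB**4
    return t4 ** 0.25 - 273.15

# Illustrative values: the same absorbed power, radiated from a smaller area.
print(equilibrium_temp_C(2e-3, 1.0e-4))   # lone termite, full radiating area
print(equilibrium_temp_C(2e-3, 0.4e-4))   # huddled termite, ~60% of area shielded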

Figure 5: Schematic of Alarm Experiment.


Figure 6: Worker Termite basking under


the horn.

Scanning Laser Vibrometry


The School recently acquired a Scanning Laser Vibrometer (PSV-400), which measures the velocity of vibrating objects using a laser and works on the principle of the Doppler shift. The PSV-400 offers a highly
accurate, quick and sensitive method of non-contact
vibration measurement over a two dimensional surface
using a motorized scanning head controlled by the
computer. It comes with sophisticated modal analysis
software that allows data to be displayed both in time
domain and frequency domain, provides frequency
response functions, mode shapes and so on.
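A minimal sketch of the Doppler relationship the vibrometer exploits is shown below: light reflected from a surface moving with velocity v returns with a frequency shift f_D = 2v/λ. The wavelength is a typical helium-neon value assumed for illustration, not a PSV-400 specification.

WAVELENGTH_M = 633e-9   # assumed helium-neon laser wavelength, m

def surface_velocity_m_s(doppler_shift_hz, wavelength_m=WAVELENGTH_M):
    """Velocity of the vibrating surface recovered from the measured Doppler shift."""
    return doppler_shift_hz * wavelength_m / 2.0

print(surface_velocity_m_s(1.0e6))   # a 1 MHz Doppler shift corresponds to ~0.32 m/s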

Figure 7: Death Huddle

A number of research projects using the Scanning Laser Vibrometer are currently being undertaken in the School, including structural health monitoring of
composite panels with delaminations and vibration
monitoring of mechanically fastened and bonded
joints. The school is also involved in collaborative
research with researchers at the ANU and University
of Tasmania studying the vibration characteristics of
violins made from Tasmanian timber.

Figure 1: Examples of two modes of vibration for a plate of a violin


Air Traffic Management


SEIT Academics
Dr Sameer Alam
Prof. Hussein Abbass
Dr Chris Lokan
Dr William Murray Mount
Mr. Jiangjun Tang
Dr Deborah Cherie Tucek

SEIT Postgraduate Students


Mr. Md. Murad Hossain
Mr. Nizami Jafarov
Mr. Van Viet Pham
Ms. Wenjing Zhao

Funding Agencies and Sources


Australian Research Council
Airservices Australia
Eurocontrol Experimental Centre, France
Thales Australia

Research Description
Air transportation is a large, complex, and integrated
network of systems, procedures, and infrastructure
with a primary goal of safely expediting the air traffic
flow. Present day air traffic systems are reaching
their operational limits and accommodating future
air traffic growth is a challenging task for air traffic
service providers and airlines. Due to the structured
and centralized nature of the system, it may not scale
to meet demand. Therefore there is an urgent need
to investigate and develop new methodologies and
procedures by which the air transportation system
can meet the future challenges from safety, capacity,
environment and human factors perspective.

CAPACITY
Discovering Delay Patterns in Arrival Traffic
with Dynamic Continuous Descent Approaches
using Co-evolutionary Computational Red
Teaming
The gradual introduction of advanced ATM
procedures such as Continuous Descent Approaches
(CDA) creates a challenge when balancing the
capacity-demand of arrival traffic in the presence
of constrained ground (runway, taxiway, gate)
resources. Part of the challenge is to understand
the interdependency between spatial-temporal
distribution of arrival traffic (traffic distribution) and
the dynamics of ground resources. We [Alam, Zhao,
Tang, Lokan and Abbass] used the Computational
Red Teaming (CRT) Framework to identify patterns in
arrival traffic and ground events that lead to delays in
dynamic CDA scenarios. The scenarios represent the
interaction of ground events with traffic distributions.
The search engine in CRT relies on co-evolutionary
search, with the reciprocal interaction of traffic
distributions and ground events evolving to identify
bottlenecks in the system. With each interaction a
variety of metrics are recorded which are then data
mined to identify patterns that lead to delays. Results
identified scenarios in which delays become significant. For example, for a model of the Sydney
domestic terminal area in a dynamic CDA scenario,
flights arriving from the South-East direction with
an average inter-arrival time of 53 sec can cause
significant delays if runway 16L is impacted by a
ground event. A paper on this topic was awarded the
best paper at the 9th US-Europe ATM R&D Seminar,
Berlin, Germany, 2011.

The ATM research group at UNSW Canberra is a


multi-disciplinary research team that aims to develop
methods for next generation air transportation
concepts and systems which are robust in the face
of increasing demands and external uncertainties.
The research group has designed and developed
practical, implementable algorithms and models
backed by sound methodologies that aim to balance
air traffic demand and airspace capacity while
addressing safety, capacity, environment and human
factors concerns. Our research will enable safe,
efficient, robust, and green air transportation.


Co-evolutionary Simulation of airport ground movements and Terminal area traffic.

A multi-objective approach for Dynamic


Airspace Sectorization using agent based and
geometric models
A key limitation when accommodating the continuing
air traffic growth is the fixed airspace structure
including sector boundaries. The geometry of
sectors has stayed relatively constant despite the
fact that route structures and demand have changed
dramatically over the past decade. Dynamic Airspace
Sectorization is a concept where the airspace is
redesigned dynamically to accommodate changing
traffic demands. Existing methods in the literature suffer from several operational drawbacks, and their computational complexity increases rapidly as the airspace size and
traffic volume increase. We [Tang, Alam, Lokan and
Abbass] evaluate and identify the gaps in existing
3D sectorization methods, and propose an improved
Agent Based Model (iABM) to address these gaps.
We also propose three additional models using
KD-Tree, Bisection and Voronoi Diagrams in 3D,
to partition the airspace to satisfy the convexity
constraint and reduce computational cost.
We then augment these methods with a multi-objective
optimization approach whose objectives include minimizing the variance of controller workload
across the sectors, maximizing the average sector
flight time, and minimizing the distance between
sector boundaries and the traffic flow crossing
points. Experimental results show that iABM has the
best performance on workload balancing, but it is
restrictive when it comes to the convexity constraint.
Bisection- and Voronoi Diagram-based models
perform worse than iABM on workload balancing but
better on average sector flight time, and they can
satisfy the convexity constraint. The KD-tree-based
model has a lower computational cost, but with a poor
performance on the given objectives.
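One of the sectorization objectives listed above, the variance of controller workload across sectors, can be sketched as follows; the workload counts are placeholder values, not outputs of the iABM or geometric models.

import numpy as np

def workload_variance(sector_workloads):
    """Objective to minimise: spread of controller workload across sectors."""
    return float(np.var(np.asarray(sector_workloads, dtype=float)))

print(workload_variance([120, 95, 130, 110]))   # an unbalanced sectorization
print(workload_variance([115, 112, 118, 110]))  # a better-balanced sectorization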


Examples of airspace sectors (Minimum


Standard Deviation of workload) generated by 4
airspace sectorization models

ENVIRONMENT
A Multi-Aircraft Dynamic Continuous Descent
Approach Methodology for Low Noise and
Emission Guidance
Continuous Descent Approaches (CDAs) can
significantly reduce fuel burn and noise impact by
keeping arriving aircraft at their cruise altitude for
longer and then having a continuous descent at
near idle thrust with no level flight segments.
The CDA procedures are fixed routes that are
vertically optimized. With the changing traffic
conditions and variable noise abatement rules,
the benefits of CDA are not yet fully realized. We
[Alam, Nguyen, Lokan, Ellejmi, Kirby and Abbass]
proposed a methodology to generate aircraft-specific dynamic CDA routes that are both laterally
and vertically optimized for noise, emission and fuel.
The methodology involves discretizing the terminal
airspace into concentric cylinders with artificial
waypoints and uses enumeration and elimination
(based on aircraft performance envelope) from one
waypoint to another to identify all the possible routes.

From the resulting set of possible CDA routes,


routes are identified that represent the best trade-off on the given objectives. The dynamic CDA
algorithm is implemented in an air traffic simulator
for the Sydney Terminal Area. The dynamic CDA approach, compared to a typical CDA, shows a 14.96% reduction in noise, an 11.6% reduction in NOx emissions and a 1.5% reduction in fuel burn. We also investigate
the throughput capacity of transition airspace for
multiple flights performing CDA operation for different
traffic distributions. The methodology incorporates
a delay algorithm which uses the flight's estimated time of arrival at the Intermediate Approach Fix and allocates a conflict-free CDA route by searching
through possible routes. A paper based on this
research was awarded the best paper at 29th AIAA/
IEEE Digital Avionics Systems Conf., Salt Lake City
USA 2010.
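The enumerate-and-eliminate idea behind the dynamic CDA route generation can be sketched as below: waypoints are placed on concentric rings around the runway and candidate altitude profiles are built ring by ring, discarding legs that fail a simple feasibility test. The ring geometry, candidate altitudes and descent-gradient limit are illustrative assumptions, not the aircraft performance envelope used in the study.

import itertools

# (distance to runway in km, candidate crossing altitudes in m), outer to inner.
rings = [(80.0, [7000, 6000]), (40.0, [4500, 4000]), (10.0, [2500])]

def leg_is_feasible(leg_km, alt_out_m, alt_in_m, max_descent_grad=0.07):
    """Reject legs that climb or require an unrealistically steep descent."""
    drop = alt_out_m - alt_in_m
    return 0.0 <= drop <= max_descent_grad * leg_km * 1000.0

radii = [radius for radius, _ in rings]
routes = []
for profile in itertools.product(*[alts for _, alts in rings]):
    if all(
        leg_is_feasible(radii[i] - radii[i + 1], profile[i], profile[i + 1])
        for i in range(len(profile) - 1)
    ):
        routes.append(profile)

print(routes)   # altitude profiles that survive the elimination step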

Figure: Fuel & Noise Trade-off Trajectories in a Continuous Descent Scenario


Traffic & Sector Features affecting Aircraft Collision Risk.

SAFETY
What Can Make an Airspace Unsafe?
Characterizing Collision Risk using
Multi-Objective Optimization
With the continued growth in Air Traffic, researchers
are investigating innovative ways to increase airspace
capacity while maintaining safety. A key safety
indicator for an airspace is its Collision Risk estimate,
which is compared against a Target Level of Safety
(TLS) to provide a quantitative basis for judging the
safety of operations in an airspace. However this
quantitative value does not give an insight into the
overall collision risk picture for an airspace, and how
the risk changes given the interaction of a multitude
of factors such as sector/traffic characteristics and
controllers' actions for flow management.
In this paper, we propose an evolutionary framework
with multi-objective optimization to evolve collision risk
of air traffic scenarios. We [Alam, Aldis, Barry, Lokan,
Butcher and Abbass] attempt to identify, through
evolutionary mechanism, the minimal controller
actions that can lead to higher collision risks, thereby
identifying the contributing factors to collision risk.
Experiments were conducted in a high-fidelity air
traffic simulation environment, with an integrated
collision risk model. Results indicate that risk-free
traffic scenarios having collision risk below TLS can
become risk-prone through minimal controller actions, with Climb and Turn manoeuvres contributing
significantly to increased collision risk.


Analysis of the Australian Airport Network:


A Complex Network Approach
As for all means of transportation, the relationship
between origin and destination results in a complex
network of routes, which can then be complemented
with information associated with the routes themselves,
for instance, frequency, traffic load and distance.
The theory of complex networks provides a framework
for investigating the dynamics on the resulting network
structure. In this paper, we investigate the structure and
robustness of the Australian Airport Network (AAN)
which represents Australia's civil domestic airport
infrastructure as a complex network. We [Hossain,
Alam and Abbass] are investigating the indices of
degree distribution, characteristic path length,
clustering coefficient and centrality measure as well as
the correlation between them.
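The network indices listed above can be computed with standard graph tooling; the sketch below uses networkx on a placeholder four-airport graph rather than the actual AAN data set.

import networkx as nx

# Placeholder routes; the real study uses the Australian Airport Network data.
G = nx.Graph()
G.add_edges_from([("SYD", "MEL"), ("SYD", "BNE"), ("MEL", "BNE"), ("SYD", "CBR")])

degrees = dict(G.degree())                          # degree distribution
path_length = nx.average_shortest_path_length(G)    # characteristic path length
clustering = nx.average_clustering(G)               # clustering coefficient
centrality = nx.betweenness_centrality(G)           # one centrality measure

print(degrees, path_length, clustering, centrality)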

Evaluating Ground-Air Network Vulnerability in


an Integrated Terminal Maneuvering Area using
Co-evolutionary Computational Red Teaming
The inherent complexity of the terminal maneuvering area (TMA) necessitates a system-level analysis
to understand the total system dynamics and its
vulnerability. The performance of advanced air
traffic control (ATC) procedures, such as dynamic
Continuous Descent Approaches (CDA), may not
be appropriately assessed without considering
the complex interactions among other parts of
the environment in which it operates. This paper
considers a TMA system which integrates the
arrival and departure operations and combines
air- and ground-side resources, thereby assisting
in understanding its vulnerability and evaluating
advanced ATC concepts in this environment.
We [Zhao, Alam and Abbass] proposed a
methodology using Computational Red Teaming
(CRT) framework to identify ground-air network
bottlenecks by exploring areas of vulnerability in the
integrated TMA. The search engine in CRT relies
on co-evolutionary search which evolves reciprocal
interaction of traffic distributions and ground events
(including runways, taxiways and gates).
As such, these interactions are considered from the
perspective of identifying inefficiencies, with the
integration of arrival and departure operations.
By evaluating these interactions, we are able to
identify inefficiencies or improvement opportunities
in the implementation of future ATM concepts
and, thereby, understand major bottlenecks which
cause system inefficiencies. For instance, for
a model based on the Sydney Kingsford-Smith
Airport's domestic terminal area in a dynamic CDA
scenario, taxiway B can cause significant delays if
it is impacted by a ground event. Another example
identifies taxiways as a critical airport component
with interactions between arrivals and departures
affecting the airport's throughput capacity.

Human Factors
Towards a Code of Best Practice for Evaluating
Air Traffic Control Interfaces
The quality of computer interfaces in transportation
command and control centres is vital to safe and
smooth operations. Air Traffic Control (ATC) is
probably the most dynamic area in transportation
where a large amount of information is presented
to the air traffic controller within a short timeframe.
Future Air Traffic Interfaces (ATI) are on the
horizon with more information and added levels
of sophistication. Safety is becoming a default
constraint in current systems and evaluating
the usability of these interfaces has been seen
traditionally as crucial for ensuring high operational
safety standards. To this end, a strong business case
for evaluating the usability of interfaces necessarily
requires a full-scale justification of the usability study
and its associated cost. The benefits of performing
such an evaluation also need to be communicated
to decision makers in terms of economic values and
gains. It is at this point that the field of operational
analysis intersects with human factors research. We [Abbass, Mount, Tucek and Pinheiro] proposed a
methodology for conducting usability studies for ATI.
The methodology has been designed to connect
higher-level organisational objectives with low-level
usability metrics. The methodology will be presented
towards establishing a code of best practice for
the design and conduct of usability studies in this
domain. While the results can be generalised to other
transportation command and control interfaces, this
paper focuses on ATC because this code of best
practice is tailored towards ATC functions.

An example of a scenario design.


The x-axis represents time, the y-axis represents block-events, and each box in the figure represents the time span an event type will take within a scenario.

Ground arrival network for Sydney Airport


Cognitive, Ergonomic and Workflow Metrics of


Advanced Air Traffic Control Interfaces

Automatic Data Collection Tools for Air Traffic


Human Factor Experiments

In this study, we [Abbass, Mount and Tucek]


designed metrics to compare and evaluate future
air traffic control interfaces (FATCI) for Thales.
Our work provided a multi-dimensional picture
of different factors that can be used to evaluate
FATCI. Experiments with Air Traffic Controllers
were conducted at Thales Centre of Excellence in
Melbourne. The work demonstrated the robustness
of the methodology which can scale from evaluating
a function or component within an interface to
a system-level evaluation between two different
systems. The objectivity of the metrics provided an
unbiased evaluation.

Human factors experiments represent a significant


and expensive exercise, especially in safety critical
domains such as Air Traffic Control. In these
domains, gaining access to users is not easy, and every access to a user is an opportunity to collect the right data; mistakes are detrimental
and costly. Moreover, there are many actions and
dynamics happening simultaneously within the
experimental environment.
We [Bui, Jafarov and Abbass] are working on a set of
tools that can help in capturing data automatically in
an Air Traffic environment. We developed a tagging
system that can help an analyst to tag events through
a touch screen. We also developed a sophisticated
voice-control system for communicating and recording
voice on multiple channels to mimic the expensive
hardware communication systems that exist within an
air traffic control centre.

The Automatic Tagging System in use for our


experiments at the Thales Centre of Excellence


Aviation Research
SEIT Academics
Ms. Sue Burdekin
Mr. Martin Copeland
Major Heath Pratt
A/Prof Andrew Neely

The answers to these questions were compared


to information that had been collected by the
organisation using other means, including technical review meetings, pilot-initiated and mandatory occurrence reports, and company safety investigation
reports. This study is on-going.

Research Description
The Aviation Group within the School of Engineering
and Information Technology conducts research into
a range of aviation safety topics including: pilot
behavioural issues; training and evaluation; design
and development; ergonomics, and aspects of the
human/machine interface. The School has an Aviation
Safety Studio which contains a multi-engine flight
simulator and two rotary aircraft simulators, all of
which are utilized for teaching and research purposes.

Human Factors
Mission Operations Safety Audit (MOSA) research
was initially designed as an experimental study,
conducted in an F/A-18 Hornet simulator, to determine
whether military pilots could accurately self-report,
immediately after the flight (mission), on their
operational performance across a predetermined
selection of behavioural categories designed in
conjunction with subject matter experts. To further
test the MOSA methodology, this time in a civil multi-crewed operational environment, a second study was
carried out, in the field, with the cooperation of a low
cost carrier in Europe. The aim of the MOSA research
was to validate behavioural self-reported data from
professional pilots, so that management could have
confidence in this safety-critical information, and feed
it back into the training continuum. In doing so, a
safety loop could be established in a cost effective,
operationally specific and timely program of data
collection. Both the military and the civil airline pilot
studies found that professional pilots were able to
effectively self-report on their own performance across
a range of operationally tailored, predetermined
categories of behaviour.
Recently, the MOSA methodology was tested in
another operational environment and national culture;
that of a regional airline operating turbo prop aircraft
between island destinations in the Indian Ocean. The
MOSA protocols were influenced by the previous
MOSA studies, but, once again, they were customized
by airline subject matter experts to reflect categories
of behaviour that were relevant to their national
and organisational culture, and to that specific
operational environment. Confidential self-reported
in-flight performance data from Captains and First
Officers, all of whom had volunteered for the study,
were compared to ratings from a trained observer
(researcher) during forty one flight sectors. To validate
the data, a series of company specific safety related
questions were posed to each participating crew.

Figure 1: Aircraft type used in the MOSA


regional airline study

Figure 2: Volunteer pilots during in-flight


performance evaluation

Figure 3: One of the island airports involved in


the study


Aircraft System Modeling Verification Using


Ocular Behaviour Metrics
Simulator developers are adopting advances in
computer processing, graphics processing and
image modeling to produce synthetic environments
with unprecedented levels of fidelity. The availability
of technology to support high fidelity simulators
combined with the potential economic and safety
benefits of simulation as a training medium has
embedded simulation into the aviation training system.
The aviation industry's confidence in, and acceptance of, the use of simulators as an effective training transfer
medium has been consolidated by the assumption
that high fidelity simulation equates to high training
transfer. However, despite continuing advances in the
fidelity of visual, cockpit layout and motion simulation
(i.e. perceptual fidelity), training system developers
are not seeing a commensurate increase in the quality
of training transfer. There is evidence that simulator
development that focuses on perceptual fidelity
alone may prove superficial in terms of training value
yield and developmental investment return. Some
researchers suggest that simulator development that
fixates on perceptual fidelity, at the expense of system
and behaviour fidelity, may in fact be undermining the student's ability to respond to complex and escalating
non-normal situations.

The traditional approach used to develop aircraft


simulator system models has relied upon modifying
normal model behaviour with the use of scripts.
The student operating the simulator would then respond
to the stimuli/cue representing abnormal behaviour
effects produced by the scripts. The scripts used to
modify the model's normal behaviour are generally
limited to set sequences as they can rarely cope with
the spectrum of permutations and combinations which
can be manifested by a malfunction in the real world
environment. Furthermore, scripting is output-focused and can rarely respond to student inputs; the resulting effects usually fail to provide the positive and negative reinforcement required to establish resilient learning.
The current research has been aimed at developing
system models that support student interaction
in order to provide positive and negative learning
reinforcement when dealing with abnormal behaviour
sequences (Malfunction/emergency Training).
The research has also developed validation metrics
such as Eye Tracking technology to discern changes
in ocular behaviour and objectively measure the
training/learning value of these models. Current
research has been successful in objectively
establishing changes in student ocular behaviour and
scan behaviour when exposed to stimuli representing
the different behaviour models. Figure 4 illustrates one
of the research test beds with a pilot using the Tobii
eye tracking glasses, while Figure 5 illustrates the
test results of an experiment investigating the
correlation between stimuli and resulting ocular
behaviour response.

Figure 4: School Research Test Beds With A Pilot


Using The Tobii Eye Tracking Glasses

Figure 5: Experimental Results Investigating


The Correlation Between Stimuli And Resulting
Ocular Behaviour Response


Composite Materials
and Structures

SEIT Academics (ACRU Members)


Prof Evgeny Morozov (ACRU Chair)
A/Prof Obada Kayali
Dr Rik Heslehurst
Dr Amar Khennane
A/Prof Andrew Neely
Dr Krishna Shankar
Dr Murat Tahtali
Dr Sarah Zhang
Mr Alan Fien
Dr M A Ashraf

SEIT Postgraduate Students


Mr Mustafizur Rahman
Mr Anup Chakrabortty
Ms Jingfen Chen
Mr Chunguang Wang
Ms Zhifang Zhang
Ms Xiaoshan Lin
Mr Chang Lin
Ms Yuan Fang
Ms Xiaodan Teng
Mr Karthik Ram Ramakrishnan
Mr Md Sayem Uddin
Mr Ahmed Mostafa Thabet
Mr Md Shakhaout Hossain Khan
Mr Md Younus Ali
Mr Chengjun Liu
Mr Xiaofei Wang
Mr Obinna Kenneth Ihesiulor
Mr Lorin James Coutts-Smith
Mr He Tian
Mr Jiting Xie
Ms Yifei Cui
Mr Kuang Yu

Other Collaborators
South China University of Technology, Guangzhou,
China
Dr Jing Li
Siberian State Aerospace University, Krasnoyarsk,
Russia
Prof Alexander Lopatin
Dr Vladimir Nesterov
Australian National University, Canberra Health
A/Prof Christian Lueck
Thomas Lillicrap
Queens University Belfast
Dr Gawn McIlwain

Research Description
Applications of advanced composite materials in
the aerospace, oil and gas, high-end machines,
antennas and marine industries are gaining more and more significance, and their number is growing rapidly. Modern composite technologies are used
to produce innovative products on an industrial scale.
The successful development of such technologies
requires an intensive research effort. In 2011, research
activities undertaken by the ACRU members included
development of new structural design and analysis
methods, materials characterisation, understanding
mechanics and physics of new processes and
materials. Substantial contributions have been made
in the field of validated numerical modelling and
simulation of the physical and mechanical responses
of composites and structural components for a wide
range of engineering applications.

Deep Water Composites (Evgeny Morozov,


Krishna Shankar, Amar Khennane, Rik
Heslehurst, Arif Ashraf)
The project involves development of functional
composite lightweight deepwater tubular structures that
provide significant advantages over the existing metallic
structures that are currently used by the offshore
industry. The research aims to enhance understanding
of the performance of composite and combined metal/composite tubulars and to close specific technology
gaps in practical design, testing and qualification,
manufacturing, installation and inspection of offshore
tubular structures. The implementation of new
composite design solutions will provide opportunities
to reduce the installation and through life costs for the
offshore oil and gas structural applications. The project
is funded and coordinated by the CRC for Advanced
Composite Structures Ltd. The work on the project
has been undertaken in collaboration with Advanced
Composite Structures Australia Pty Ltd, PETRONAS
Research Sdn Bhd, University of Newcastle upon
Tyne, Unique Solution Partners Pty Ltd, University
of Southern Queensland, and Pacific Engineering
Solutions International Pty Ltd.

Design and Modelling of Composite Offshore


Risers (Krishnakumar Shankar, Evgeny Morozov)
Risers for offshore drilling platforms are traditionally
made out of steel. However, due to the high density of
steel the depth to which steel risers can be employed
is limited to about 3 km. It is envisaged that using
advanced composite materials, which are well known
for their light weight, strength and stiffness, risers can
be employed to greater depths for deep sea extraction
applications. Composite risers, however, may have their
own limitations due to their susceptibility to impact.
The current research involves modelling and analysing
various designs of steel and fibre reinforced composite
materials for the design of deep-sea risers, as well as designs involving hybrid combinations of steel and
polymeric materials.
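A back-of-the-envelope sketch of why material density limits riser depth: for a free-hanging tube the self-weight stress grows as ρgL, so the depth at which an allowable stress is reached scales as σ_allow/(ρg). The allowable stresses and densities below are illustrative assumptions only, not design values from the project.

GRAVITY = 9.81  # m/s^2

def self_weight_depth_limit_m(allowable_stress_Pa, density_kg_m3):
    """Depth at which a free-hanging tube reaches its allowable axial stress."""
    return allowable_stress_Pa / (density_kg_m3 * GRAVITY)

# Illustrative working stresses and densities (not project design values).
print(self_weight_depth_limit_m(250e6, 7850.0) / 1000.0)   # steel, roughly 3 km
print(self_weight_depth_limit_m(400e6, 1600.0) / 1000.0)   # CFRP laminate, ~25 km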


Structural Health Monitoring of Composites


using Vibration Measurement (Krishnakumar
Shankar, Evgeny Morozov, Murat Tahtali)
Damage in structural components affects their
vibration characteristics through degradation in
stiffness and/or changes in damping characteristics.
Vibration monitoring offers a powerful tool for online
and continuous health monitoring of structures while
they are still in service. If significant shifts in the
natural frequencies of the structures are observed
they indicate the possible occurrence of damage
and the location and size of the damage can be
assessed by solving the inverse problem.
The current project aims at developing a structural
health monitoring system for detecting and assessing
delamination damage in composite structures.
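A minimal sketch of the forward relationship exploited in vibration-based health monitoring is given below: a stiffness loss lowers the natural frequencies, so a measured frequency shift can be inverted to estimate damage severity. The single-degree-of-freedom model and numbers are illustrative assumptions, not the composite structure models used in the project.

import numpy as np

def natural_frequency_hz(stiffness_N_per_m, mass_kg):
    """Natural frequency of a single-degree-of-freedom oscillator."""
    return np.sqrt(stiffness_N_per_m / mass_kg) / (2.0 * np.pi)

k_healthy, mass = 2.0e6, 5.0
f_healthy = natural_frequency_hz(k_healthy, mass)
f_damaged = natural_frequency_hz(0.85 * k_healthy, mass)   # 15% stiffness loss

print(f_healthy, f_damaged)            # the damaged frequency is ~7.8% lower
# Inverse step: recover the stiffness ratio from the measured frequency ratio.
print((f_damaged / f_healthy) ** 2)    # ~0.85, the assumed stiffness ratio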

Impact Resistance of Sandwich Panels with Nano-toughened Composite Facesheets (Krishnakumar Shankar, Philippe Viot, Murat Tahtali)

This work is aimed at studying the impact resistance of sandwich panels with metallic and fibre reinforced polymer laminate face sheets whose matrix has been toughened by the inclusion of elastomeric nano particles, using experimental testing and numerical modelling. The improvement in impact resistance obtained by the addition of elastomeric nano particles in the matrix will be studied, with a view to optimising the percentage of the additives to achieve maximum performance.

Buckling Analysis and Design of Anisogrid Composite Lattice Conical Shells (Evgeny Morozov, Alexander Lopatin, Vladimir Nesterov)

Composite lattice anisogrid shells have now become a popular choice in many aerospace applications. Their use in various structural components, such as rocket interstages, payload adapters for spacecraft launchers, fuselage components for aerial vehicles, and parts of deployable space antennas, requires the development of more advanced finite-element models and analysis techniques capable of predicting the buckling behaviour of these structures under a variety of loadings. A specialised finite-element model generation procedure (design modeller) was developed and applied to the buckling analysis of composite anisogrid conical shells treated as three-dimensional frames composed of curvilinear ribs made of unidirectional composite material. Featuring a dedicated control procedure for positioning the beam elements, the design modeller enables a close approximation of the original twisted geometry of the curvilinear ribs.

The parametric finite-element buckling analyses of the anisogrid conical shells subjected to axial compression, transverse bending, pure bending, and torsion showed the robustness and potential of the modelling approach (Fig. 1). It was demonstrated that the buckling resistance can be significantly enhanced either by increasing the stiffness of a few hoop ribs located in close proximity to the section with the larger diameter, or by introducing additional hoop ribs in the same part of the conical shell. The effectiveness of the design analyses is demonstrated using particular examples. It has been shown that the resultant optimised designs can produce up to 22 per cent mass savings in comparison with the non-optimised lattice shells.


Figure 1: Buckling mode of the composite lattice


conical shell under axial compression.

Computational Analysis of Low Velocity Impact


Response of Composite Panels (Mustafizur
Rahman, Evgeny Morozov, Krishna Shankar,
Murat Tahtali)
The present work deals with the finite element
modelling of low velocity impact response of
different types of composite panels for body armour
application. The response of these composites panels
including bonded, unbonded and partially bonded
laminates has been simulated using non-linear finite
element package LS-DYNA. 2D shell elements in
LS-DYNA have been used to represent both resin
bonded glass fabric targets and dry woven glass
fabric panels. The hemispherical shaped projectile is
being modelled with 3D solid elements. The results
of the numerical analysis showed that the value of
contact force for the fully bonded composite panels
was significantly higher than that observed for the
panels consisting of dry woven glass fabric. However,
the corresponding displacement was substantially
lower. The similar simulation of the partially bonded
composite panels has shown a reduction of both the
contact force and the displacement. In addition, it
has been shown that the partially bonded composite
panels are capable of absorbing higher levels of
energy than the rigid panels.

Design and Analysis of the Composite Lattice


Frame of a Spacecraft Solar Array (Evgeny
Morozov, Alexander Lopatin)
A novel design of the composite structural lattice frame
for the spacecraft solar arrays has been developed
(Fig. 2). The frame is composed of two flat lattice
composite plates assembled into the three-dimensional
panel using frame-like connectors (Fig. 3). Design,
fabrication, modelling and modal analysis of the panel
solar arrays based on the proposed technology are
discussed. The lattice panels are modelled as three-dimensional frame structures composed of beam
elements subjected to the tension/compression,
bending and torsion using the specialised finite
element model generator/design modeller. Results of
the calculations of the frequencies and vibration forms
for the lattice panels with various types of supports
imitating the ways the panels can be attached to the
spacecraft body, deployment mast, and adjacent
solar panels are presented and discussed. The lattice
frame design for maximum fundamental frequency
is performed subject to constraints imposed on the
geometrical parameters of the solar panel.

A Combined Elastoplastic Damage Model for


Progressive Failure Analysis of Composite
Materials and Structures (Jingfen Chen, Evgeny
Morozov, Krishna Shankar)
The paper is concerned with the development and
verification of a combined elastoplastic damage
model for the progressive failure analysis of
composite materials and structures. The model
accounts for the irreversible strains caused by
plasticity effects and material properties degradation
due to the damage initiation and development.
The strain-driven implicit integration procedure is
developed using equations of continuum damage
mechanics, plasticity theory and includes the return
mapping algorithm. A tangent operator consistent
with the integration procedure is derived to ensure
the computational efficiency of the Newton-Raphson
method in the finite element analysis. The algorithm
is implemented in Abaqus as a user-defined
subroutine. The efficiency of the constitutive model
and computational procedure is demonstrated using
the analysis of the progressive failure of composite
laminates containing through holes and subjected
to in-plane uniaxial tensile loading. It has been
shown that the predicted results agree well with the
experimental data reported in the literature.
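A minimal one-dimensional sketch of the strain-driven return-mapping step underlying integration schemes of this kind is given below: an elastic trial stress is computed and, if it violates the yield condition, it is mapped back onto the yield surface. This is a textbook isotropic-hardening example, not the combined elastoplastic damage model developed in the project; the material constants are illustrative.

def return_map(strain, plastic_strain, alpha, E=70e3, H=5e3, sigma_y=300.0):
    """One implicit update; stresses in MPa, strains dimensionless."""
    trial = E * (strain - plastic_strain)            # elastic trial stress
    f = abs(trial) - (sigma_y + H * alpha)           # trial yield function
    if f <= 0.0:
        return trial, plastic_strain, alpha          # step is purely elastic
    dgamma = f / (E + H)                             # plastic multiplier
    sign = 1.0 if trial >= 0.0 else -1.0
    stress = trial - E * dgamma * sign               # stress mapped back to the yield surface
    return stress, plastic_strain + dgamma * sign, alpha + dgamma

state = (0.0, 0.0)   # (plastic strain, hardening variable)
for eps in [0.002, 0.004, 0.006, 0.008]:
    stress, *state = return_map(eps, *state)
    print(eps, round(stress, 1))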

Figure 2: Solar wing design. (Courtesy of ISS-Reshetnev Company).

Figure 3: Lattice frame design and placement of


the connectors: (a) dense rib layout; (b) sparse
rib layout.


Finite Element Modelling and Buckling


Analysis of Anisogrid Composite Lattice
Cylindrical Shells (Evgeny Morozov, Alexander
Lopatin, Vladimir Nesterov)
The buckling behaviour of anisogrid composite lattice
cylindrical shells subjected to axial compression,
transverse bending, pure bending, and twisting has
been investigated. The lattice shells are modelled as
three-dimensional frame structures composed of the
curvilinear ribs capable of withstanding the tension/
compression, bending in two planes and twisting.
Geometric and finite element models of the lattice
shells are generated using the rotation, copying, and
translation of the universal typical unit cell.

The dedicated procedure (finite element model


generator) is developed to control the orientation
of the beam element allowing the original twisted
geometry of the curvilinear ribs to be closely
approximated. The effects of varying the length of the
shells, the number of helical ribs and the angles of
their orientation on the buckling behaviour of lattice
structures are examined using parametric analyses.
The influence of reinforcements around the cutout
edges (Fig. 4) for the lattice shells having the holes
is also investigated. The results show that these
parameters strongly affect the values of critical loads
and buckling mode shapes of the CFRP lattice shells
subjected to various loadings. It is shown that the
discrete modelling approach presented in the paper
provides a sufficiently accurate buckling analysis
of the lattice shells and, at the same time, can be
efficiently employed in solving the relevant design and
design optimisation problems.

Figure 4: Buckling mode for the cylindrical composite lattice shell with the reinforced cutouts.


Fundamental Frequency of an Orthotropic


Rectangular Plate with an Internal Centre
Point Support
(Alexander Lopatin, Evgeny Morozov)

Performance of Outside Filament-wound


Hybrid FRP-concrete Beams
(Anup Chakrabortty, Amar Khennane,
Obada Kayali, Evgeny Morozov)

A method of calculating the fundamental frequency of


an orthotropic rectangular plate with a centrally located
point support and free edges has been developed (Figs. 5 and 6). The variational equation of motion is derived by applying Hamilton's principle. The analytical
approach determining the fundamental frequency of
the plate is developed using the generalised Galerkin
method and verified by comparison with the results of
the finite element modal analysis. The comparisons of
the computational results indicate that the fundamental
frequency of the centre-supported plates can be
calculated with sufficient accuracy using the analytical
technique developed in this work. The approach
proposed in this work can be efficiently employed
when designing composite rectangular plates for a
specified value of the fundamental frequency.

A novel configuration of a hybrid FRP-concrete beam


has been developed. The beam consists of a GFRP
pultruded profile, a CFRP laminate, and a concrete
block all wrapped up using filament winding.
Three different concrete blocks were used: high
strength concrete, normal strength concrete and steel
fibres reinforced high strength concrete. The major
feature of the design is that it does not mimic that of
reinforced concrete as reported previously. The CFRP
laminate is not designed to fail first to serve as a
warning of imminent failure, but rather to enhance the
stiffness of the beam by compensating for the lack of
stiffness of the GFRP profile. The experimental results
have shown that this approach is successful.
The wrapping not only eliminated the risk of
premature failure as a result of the concrete block
debonding from the pultruded profile, but it was also
found to enhance the stiffness and load carrying
ability of the beams. The beams with a high strength
concrete block showed increased stiffness and load
carrying ability but failed in a catastrophic manner.
On the other hand, the beams with normal strength
and steel fibres reinforced high strength concrete
showed improved ductility. The degree of energy-dissipative behaviour was found to depend on the
thickness of the concrete block. When the latter is
too thin, failure akin to shear punching appears to
take place.

Figure 5: Laser corner reflector.

Figure 7: Failure mode of the filament-wound


hybrid FRP-concrete beam.

Figure 6: First mode shape of the vibrations of


the square isotropic plate.


Integrated Plain and Slurry Infiltrated Fibre


Concrete (IP-SIFCON) Composite Beams (Chang
Lin, Obada Kayali, Evgeny Morozov, David
Sharp)
Composite beams (IP-SIFCON) were composed of
two layers: a bottom SIFCON layer and an upper
layer manufactured of plain cement paste. Beams
made totally with SIFCON were also investigated for
comparison. The effects of the SIFCON layer thickness
on the flexural strength and energy absorption of
composite beams were reported. The IP-SIFCON
beams exhibited a distinctive deflection hardening
behaviour and performed comparably with total
SIFCON beams. The studies indicate that, compared
with normal concrete beams, the IP-SIFCON composite
beams have significantly improved flexural strength and
energy absorption capacity.

Management of Aging Composite Airframes


(Rik Heslehurst, Eric Wilson, Aaron Warren)
This project investigates the adequacy of current Aircraft Structural Integrity (ASI) methodologies for composite airframes and proposes the changes needed to accommodate the use of composites in aircraft structures. A concept was developed for applying accident causation models and resilience engineering to assessing the impact of composites on ASI. A conference paper titled 'Evolution of Aircraft Structures and Integrity Management' was presented at the 2011 Aircraft Airworthiness and Sustainment Conference, Brisbane, June 2011.

The Effect of Saltwater Absorption/Desorption


on the Residual Strength of Carbon Fibre
Reinforced Composite Materials (Rik
Heslehurst, Eric Wilson, Chris Kourloufas)
This research is an investigation of the effect the
cycle of absorption and desorption of saltwater
has on the mechanical properties of Carbon Fibre
Reinforced Plastics (CFRP). A survey of the relevant literature identified that the use of CFRP is ever increasing in the aerospace industry; however, this topic has received little research attention even though it is of great importance. Thus, the objective
of this research is to add to the body of knowledge
regarding the environmental degradation of CFRP.
The focus of this research will be a literature review on
the mechanisms of saltwater absorption/desorption
and its effects on CFRP and/or FRP in general, and
experimentation of CFRP specimens conditioned with
saltwater. The experimentation will aim to determine
whether absorption and desorption cycles of saltwater
do leave trace elements, and whether the presence of these trace elements affects the mechanical
properties of the CFRP.


A conference paper titled 'A Review of the Effects of Fluid Absorption/Desorption on the Residual Strength of Carbon Fibre Reinforced Composite Materials' was presented at the 2011 Aircraft Airworthiness and Sustainment Conference, Brisbane, June 2011.

Investigation of Innovative Aeroelastic


Structure Designs Manufactured from
Composite Materials to Reduce High Speed
Aerofoil Drag (Rik Heslehurst, Warren Smith,
Lorin Coutts-Smith)
This project expands the analytical methods for assessing bend-twist coupling of laminates to consider the effects of ply position/orientation in anisotropic laminates, and develops an understanding of laminate ply-position/orientation effects in sandwich structure. It assesses the effectiveness of anisotropic laminates used as skins for aeroelastic aerofoils made from sandwich structure, and develops a practical application of an elastically tailored laminated structure designed to make use of usually problematic aeroelasticity.

Resin Bleed Schedule Impact on Composite


Specific Properties (Rik Heslehurst, Anne-Marie Lane)
When composite structures are cured, a resin bleed schedule is often used to control the amount of resin removed during the process. This resin removal equates to variation in the fibre volume ratio. Hence, the specific engineering properties of a composite laminate are controlled by the fibre volume ratio, or bleed schedule. Five different resin bleed schedules were used to produce a series of test coupons, and five mechanical tests were then conducted on the coupons from each bleed schedule: tension, compression, flexural bending, in-plane shear and short-beam shear. The testing found a noticeable relationship between the bleed schedule used and the mechanical properties achieved. This is due to the impact that the amount of resin bleed has on the thickness and the fibre-volume ratio of the composite product. As such, this research supports the importance of considering the resin bleed schedule used during the manufacture of an advanced composite material when attempting to achieve consistent composite engineering structural properties. The results highlighted the importance of a well-defined resin bleed schedule to ensure appropriate engineering properties of the composite material.

Manufacturing Defect Tolerance in Critical Locations (Rik Heslehurst, Shayne Hohensee)
The rapid introduction of lightweight composite materials into the bicycling industry has seen several front forks fail, resulting in serious injury and in some cases fatalities. Research has been conducted through a sensitivity analysis of the design issues associated with bicycle carbon fibre composite front fork failures. The study determined the position of maximum centre-of-gravity effect of the bicycle and rider system, which then leads to a force balance of a bicycle in motion, upright and at a constant velocity, when it is subject to a step input (a curb). A theoretical analysis of the forces involved in the impact was applied to a finite element model of the front fork of the bicycle. The finite element model provided the load distribution of such impact forces, which can be used as a comparison for the carbon fibre composite design and layup stacking sequence for carbon fibre composite forks. These results were then used to identify critical defect size and position near the crown of the front forks. These initial results clearly showed the sensitivity of the structural integrity of the front forks to defects.

Structural Performance of Steel/FRP Reinforced Concrete Beams at Elevated Temperature (Sarah Zhang)
Fiber reinforced polymers (FRPs), such as glass-fiber reinforced polymers (GFRP), carbon-fiber reinforced polymers (CFRP) and aramid-fiber reinforced polymers (AFRP), have been widely introduced into the construction of concrete structures in recent years. Due to their advantages, such as high tensile strength, excellent electrochemical corrosion resistance and cost-effective fabrication, they are increasingly used as a substitute for traditional steel reinforcement, especially in severe environments. However, most applications of FRP reinforcing bars are, at present, restricted to constructions in which temperature effects are not a primary concern, which may be attributed to the fact that the mechanical properties of FRPs deteriorate with increasing temperature.

Hitherto, few investigations of the behavior and endurance of FRP-reinforced concrete structures at elevated temperatures have been reported. There are still no mature design guidelines available for FRP-reinforced concrete structures in aggressive environments, such as fire, which is one of the inevitable threats to building structures. It is therefore essential to understand the structural behavior of FRP-reinforced concrete members at elevated temperatures before implementing them in building structures. Nonlinear finite element analyses have been employed successfully to predict the structural behavior of concrete structures. But according to the authors' investigation, nearly all previous research on numerical analysis of concrete structures at elevated temperatures has focused on conventional steel-reinforced concrete structures, and very few numerical analyses of FRP-reinforced concrete structures under a combination of thermal and mechanical loading up to failure have been conducted.
In this research [Lin and Zhang 2011] a one-dimensional two-node layered composite beam element is developed for nonlinear finite element analysis of steel/FRP-reinforced concrete beams under combined mechanical and thermal loading in fire conditions. By employing Timoshenko's composite beam functions to construct the new element, the shear-locking problem is avoided naturally and a unified formulation for the analysis of both slender and deep beams is established. A nonlinear finite element analysis based on heat transfer theory is performed to determine the temperature distribution across the cross-section of the beam. Both geometric and temperature-dependent material nonlinearities are accounted for in the finite element model for accurate modelling. Numerical modelling demonstrates that the element is computationally effective, efficient and accurate for the analysis of steel/FRP-reinforced concrete beams. The element is then employed to investigate the influences of a series of parameters, such as concrete cover thickness, type of reinforcement (GFRP, CFRP and AFRP) and load level, on the structural behavior of FRP-reinforced concrete beams in fire conditions. The effects of these parameters on the structural behavior are summarized and conclusions drawn, which will provide guidance for structural analyses and design.
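As a rough, self-contained illustration of the heat transfer side of such an analysis (not the layered beam element of [Lin and Zhang 2011]), the sketch below marches a one-dimensional explicit finite-difference model of temperature through the depth of a concrete section whose exposed face follows the ISO 834 standard fire curve. The section depth and thermal properties are assumed values chosen only for illustration.

    import numpy as np

    # Illustrative concrete thermal data (assumed values, not from the report)
    k, rho, cp = 1.6, 2300.0, 900.0      # conductivity [W/mK], density [kg/m3], specific heat [J/kgK]
    alpha = k / (rho * cp)               # thermal diffusivity [m2/s]

    depth, n = 0.20, 101                 # 200 mm deep section, 101 grid points
    dx = depth / (n - 1)
    dt = 0.4 * dx**2 / alpha             # explicit stability limit (with margin)

    def fire_temp(t_s):
        """ISO 834 standard fire curve, temperature in deg C at time t_s seconds."""
        return 20.0 + 345.0 * np.log10(8.0 * t_s / 60.0 + 1.0)

    T = np.full(n, 20.0)                 # initial uniform temperature
    for step in range(1, int(51 * 60 / dt) + 1):     # march to 51 minutes of exposure
        t = step * dt
        T[0] = fire_temp(t)              # exposed face follows the fire curve (Dirichlet)
        T[-1] = T[-2]                    # unexposed face treated as adiabatic
        T[1:-1] += alpha * dt / dx**2 * (T[2:] - 2 * T[1:-1] + T[:-2])
        # report the through-depth profile at the exposure times shown in Figure 8
        if any(abs(t - m * 60) < dt / 2 for m in (9, 21, 30, 39, 51)):
            print(f"{t/60:5.1f} min: T at 0/50/100 mm = "
                  f"{T[0]:6.1f} / {T[round(0.05/dx)]:6.1f} / {T[round(0.10/dx)]:6.1f} C")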

Figure 8: Temperature distributions across the cross-section of a beam at different fire exposure times (9, 21, 30, 39 and 51 mins), predicted from the finite element heat transfer analyses


Figure 9: Temperature-deflection relationships of concrete beams reinforced with CFRP (Beam IV-T1), GFRP (Beam IV-T2) and AFRP (Beam IV-T3)

Nonlinear Finite Element Analyses of Steel/FRP Reinforced Concrete Beams with Debonding Effects (Sarah Zhang)
Bond between concrete and reinforcing bars is one of the main characteristics of reinforced concrete structures, and it plays an important role in transferring stress from the reinforcing bars to the surrounding concrete. However, with increasing load, cracking inevitably occurs, which reduces the bond strength, and a certain amount of bond-slip may take place in the beam, affecting the stress distribution, crack spacing, crack width and overall structural behaviour of concrete beams. Since FRP rebars show lower bond strength than steel rebars, particularly when there is insufficient surface preparation, the effect of bond-slip on the structural behaviour of reinforced concrete beams should not be ignored.
In this research, a simple one-dimensional composite beam model considering the bond-slip effects is developed for modelling the structural behaviour of steel/FRP reinforced concrete beams. The model is validated by comparing the numerical results with those from experimental studies. The results obtained using the model are also compared with those from a model that does not consider the bond-slip effect and with other numerical analyses. It is demonstrated that the model can capture the bond-slip effects accurately and effectively. Parametric effects on the structural behaviour of beams with bond-slip effects are also investigated.


Experimental Investigation of the Structural Behaviour of Steel/FRP Reinforced Concrete Beams (Sarah Zhang)
In this research, the structural behaviour, including the debonding failure, of steel/FRP reinforced concrete beams is investigated experimentally. Four-point bending tests are carried out on concrete beams reinforced with steel, glass fibre, carbon fibre and basalt fibre rebars. Bond-slip is tested and recorded. The effects of the different types of reinforcement are compared and analysed. The test results are also compared with those from the developed finite element model with and without bond-slip effects.

Structural Performances of ECC Panels under High Velocity Impact (Sarah Zhang)
Fibre-reinforced engineered cementitious composites (ECCs) are composed of cement, water, sand, fly ash and some chemical additives with a moderate volume fraction of randomly distributed short fibres. ECC has been identified as a competitive substitute for concrete in protective structures due to its potential to resist impact. ECCs are classified into two types: mono-fiber ECCs, which consist of only one type of fibre, and hybrid-fiber ECCs, which consist of more than one type of fibre. A hybrid-fibre ECC, with appropriate volumes of high and low modulus fibres, is expected to exhibit a simultaneous improvement in ultimate tensile strain and strength properties, which are both essential for protective structures.
This research develops a new hybrid-fiber ECC material with 1.75% polyvinyl alcohol (PVA) fibers and 0.58% steel (SE) fibers for high strength, good ductility and excellent impact resistance. The material properties and mechanical behaviour of the new material are investigated experimentally, including the compressive strength, elastic modulus, modulus of rupture, and tensile properties under dynamic loading rates. The high-velocity impact response of the material is investigated experimentally under the impact of small ogive-nose projectiles fired from a gas gun with initial impact velocities in the range of 300 m/s to 657 m/s. To benchmark the impact resistance of the new ECC mix, the impact responses of a hybrid-fiber ECC with 1.5% PVA and 0.5% SE fibers, which has been recommended in the literature as the best ECC mix to resist impact, are also studied experimentally. To compare the impact resistance capability of the hybrid-fiber ECC panels with conventional concrete panels, the impact responses of plain concrete panels made of grade N45 and grade N90 normal concrete are also tested in this research.

A high-speed camera is used to record the whole penetration process, and the damage parameters, such as crater diameter, penetration depth, scabbing diameter, residual velocity, fragment rate, impact energy and energy absorption, are determined, analysed and compared. The research findings are summarized and conclusions drawn. The experimental results from this study provide additional information for understanding the behaviour of hybrid-fiber ECC structures under high-velocity impact loading.

High Velocity Impact of a New Hybrid-fiber ECC


(Sarah Zhang)
ECC is characterised by a number of desirable
mechanical properties, such as an improved modulus
of rupture, fracture toughness, fatigue resistance,
impact resistance, and significant strain-hardening
behaviour. There is significant potential for ECC to be
used in defensive and protective structures due to its
capacity to resist impact. A number of experiments
have been conducted to investigate the material
properties of ECC mixes, but mainly on mono-fibre
ECC mixes consisting of only one type of fiber, such as
steel fiber (SE), PVA fiber or PE fiber. Very few studies
have been reported on the mechanical properties of
the hybrid-fiber ECC mixes, which consist of more
than one type of fibers, and no research has been
reported on the dynamic material properties of the
hybrid-fiber ECC mix.
A new hybrid-fiber ECC mix containing 1.25% steel
fibres and 0.75% PVA fibres is proposed based on
the standard ECC design in this research. The new
ECC mix is expected to exhibit excellent strength,
ductility and energy absorption for good capability
of impact resistance. Material properties of the new
hybrid-fiber ECC mix are tested experimentally with
a specific focus on tensile properties under static
and dynamic loading. Tests performed on the ECC
mix include quasi-static uniaxial compression and
tension tests, elastic modulus test, flexural test, and
dynamic uniaxial tensile test. Experimental results are
summarized and analysed and conclusions are drawn
based on the analyses [Hermes et al. 2011].
There has been very little research into the high
velocity impact behaviour of ECC containing both
high modulus and low modulus fibres.
These hybrid-ECCs have shown promise to provide
many beneficial properties over normal steel
reinforced concrete including greatly increased
ductility, strain hardening behaviour, greater durability
and increased energy absorption. The impact
resistance and energy absorption properties of the
hybrid-ECC are investigated in this research.

A number of high-velocity impact tests on hybrid-ECC panels, using fabricated steel projectiles fired from a laboratory gas gun and standard rifle rounds fired from an in-service military rifle, are carried out [Bell et al. 2011]. In addition, ballistic tests of panels with dimensions of 300 mm x 170 mm x 55 mm made of conventional concrete, high strength concrete, steel-reinforced concrete and steel-fiber reinforced concrete are also carried out in this research. The impact responses of the different construction materials, in terms of crater sizes and damage failure modes, are analysed and compared, and the results from the gas gun facility and the real bullets are compared. The test results demonstrate significantly improved impact and shatter resistance of the new hybrid-ECC mix, with reduced spalling and fragmentation, localized damage areas, improved cracking resistance with distributed microcracking, and increased energy absorption capability.
Figures 10-13: Images of the damaged panels under the impact of a 7.62 mm projectile fired from military SR-25 rifles.

Figure 10: Damaged hybrid-ECC panel

Figure 11: Damaged steel-fiber reinforced FRC panel


Figure 12: Damaged steel bar-reinforced concrete panel

In this research, several widely used material models for plain concrete under dynamic loading, especially the Concrete Damage model and the Elastic-Plastic Hydrodynamic model, are evaluated so as to determine an appropriate material model for engineered cementitious composite (ECC) materials under dynamic loading. The effects of specimen size, strain rate and the specific equation of state on the dynamic material behaviour are investigated using numerical modelling. A material model appropriate for simulating the dynamic behaviour of ECC materials is established based on the Concrete Damage model. The proposed material model is validated via numerical simulation of the impact process of a hybrid-fibre ECC slab struck by a high-velocity projectile. Advantages and limitations of other material models for the dynamic behaviour of ECC materials are also compared and commented on [Li and Zhang, 2011].

Nonlinear Numerical Modelling of FRP Strengthened Concrete Slabs (Sarah Zhang)
Due to the superior material properties of fibre reinforced polymers (FRPs), such as high stiffness-to-weight and strength-to-weight ratios and resistance to fatigue and corrosion, they have been used to strengthen and rehabilitate deteriorated concrete structures and infrastructure, such as bridges. Moving vehicles generally produce significantly greater responses than equivalent static loads do, and the vehicle-induced dynamic response of bridges is one of the primary problems for bridge engineers.

Figure 13: Damaged high strength concrete panel (compressive strength of 90 MPa)

Evolution and Calibration of a Numerical Model for Modelling of Hybrid-fibre ECC Panels under High-velocity Impact (Sarah Zhang)
The investigation of the dynamic responses of ECC structures plays a significant role in understanding the physical mechanisms and in developing practical design guidelines for the application of ECC materials in protective structures. Among the methods used to study the performance of ECC structures under dynamic loading, numerical simulation is widely used, considering the costs of experimental investigation and the difficulty of analytical methods. For accurate numerical prediction of the dynamic behaviour of ECC structures, an appropriate material model that can represent the dynamic behaviour of ECC materials, together with an equation of state (EOS), is essential.


In this research, a finite element model is developed to investigate the structural behaviour of an FRP-strengthened bridge deck system under moving vehicle loads. Finite element analysis of a full-span continuous FRP-strengthened box girder steel reinforced concrete bridge model under moving vehicles is conducted. In order to validate the analysis procedure, static and modal analyses of an existing concrete bridge are first conducted and the results compared with those obtained from the literature. Once the procedure is validated, the model is used for analysis of an FRP-strengthened concrete bridge. Structural and dynamic behaviour, including the deflection of the mid-span and other monitoring points and the natural frequencies of vibration, is studied. The parameters affecting the bridge dynamic response, including the speed of the vehicle and the effects of different types of FRPs (CFRP, GFRP and basalt fibre) on the structural behaviour, are studied [Teng and Zhang, 2011].
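As a rough illustration of the vehicle-induced dynamic response discussed above, the sketch below computes the mid-span deflection of a simply supported beam traversed by a single moving point load using modal superposition. The span, stiffness, mass and load values are hypothetical placeholders, not the bridge model analysed in this project.

    import numpy as np

    # Hypothetical beam and load data (placeholders only)
    L, EI, m = 30.0, 2.0e10, 1.2e4       # span [m], flexural rigidity [N m2], mass per length [kg/m]
    P, v = 3.0e5, 30.0                   # moving point load [N], speed [m/s] (30 m/s as in Figures 14-15)
    modes = 5
    dt = 0.001
    t = np.arange(0.0, L / v, dt)        # time for the load to cross the span

    w_mid = np.zeros_like(t)
    for n in range(1, modes + 1):
        wn = (n * np.pi / L) ** 2 * np.sqrt(EI / m)     # n-th natural circular frequency
        Om = n * np.pi * v / L                          # forcing frequency due to the moving load
        qn = np.zeros_like(t)
        q, qd = 0.0, 0.0
        for i, ti in enumerate(t):
            f = (2 * P / (m * L)) * np.sin(Om * ti)     # modal force for a moving point load
            qdd = f - wn**2 * q                         # undamped modal equation q'' + wn^2 q = f
            qd += qdd * dt
            q += qd * dt
            qn[i] = q
        w_mid += qn * np.sin(n * np.pi * 0.5)           # mode shape evaluated at mid-span

    print(f"peak mid-span deflection ~ {w_mid.max()*1000:.2f} mm "
          f"(static estimate P*L^3/48EI = {P*L**3/(48*EI)*1000:.2f} mm)")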

Figure 14: Displacement-time responses of the bridge with and without CFRP strengthening under vehicles moving at a speed of 30 m/s

Figure 15: Displacement-time responses of the bridge with CFRP and GFRP strengthening under vehicles moving at a speed of 30 m/s

Biomechanics and Heat Transfer in the Brain (Neely, Tahtali, Lueck, McIlwaine, Lillicrap)

Work on two medical-related projects has continued to apply engineering tools to clinical problems. The first concerns ongoing research to model the compression of the optic chiasm by a growing pituitary tumour and its relation to the visual defect known as bitemporal hemianopsia, in which the outer half of the visual field, which is carried by the nasal optic nerve fibres, is lost. A new PhD student, Xiaofei Wang, was recruited to the project and spent 2011 performing initial FEM simulations of the resulting distortion of the optic chiasm, which is the crossing point for the optic nerve bundles passing backwards from the eyes to the brain. Simulations of individual nerve fibre models were also performed to account for the different crossing geometries in the nerve fibre bundles. These initial simulations demonstrated that the central region of the chiasm always bears higher stresses than peripheral regions. At the nerve-fibre scale, the stresses in the nasal nerve fibres, which cross, are dramatically higher than in the temporal nerve fibres, which do not.
The second project has modelled the transfer of heat
in the brain and its bearing on body cooling strategies
for post-stroke intervention to minimize cell death
in the brain. The work being performed at UNSW
Canberra has used FEM to create simplified brain
geometries that incorporate stroke damage and model
the transfer of heat in the structure that results from
various cooling strategies. This involves incorporating
an analytical model for the balance of heat production
and heat removal by the blood supply to the brain.
Initial simulations have established the technique and
the work is ongoing.
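The balance between heat production and removal by the blood supply is commonly expressed through a bioheat formulation. The sketch below is a minimal one-dimensional illustration in the spirit of the Pennes bioheat equation, with assumed tissue properties and a simplified slab geometry; it is not the UNSW Canberra brain model, only a demonstration of how a cooled boundary interacts with perfusion.

    import numpy as np

    # Assumed tissue and blood properties (typical literature-style values, illustration only)
    k, rho, cp = 0.5, 1050.0, 3600.0          # W/mK, kg/m3, J/kgK
    w_b, rho_b, c_b = 0.008, 1050.0, 3800.0   # blood perfusion [1/s], blood density, blood specific heat
    q_met = 10000.0                           # metabolic heat generation [W/m3]
    T_art, T_cool = 37.0, 15.0                # arterial blood and surface cooling temperatures [C]

    depth, n = 0.05, 101                      # 50 mm slab of tissue
    dx = depth / (n - 1)
    alpha = k / (rho * cp)
    dt = 0.4 * dx**2 / alpha

    T = np.full(n, 37.0)
    for step in range(int(1800 / dt)):        # simulate 30 minutes of surface cooling
        T[0] = T_cool                         # cooled outer boundary
        T[-1] = T_art                         # deep boundary held at core temperature
        perf = w_b * rho_b * c_b * (T_art - T[1:-1])   # heat added or removed by blood perfusion
        T[1:-1] += dt * (alpha * (T[2:] - 2*T[1:-1] + T[:-2]) / dx**2
                         + (perf + q_met) / (rho * cp))

    print("temperature at 5 / 15 / 25 mm depth after 30 min of cooling:",
          np.round(T[[10, 30, 50]], 1), "C")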


Computational
Intelligence

SEIT Academics
Prof. Hussein Abbass
Dr Sameer Alam
Dr Michael Barlow
Dr Daryl Essam
Dr Chris Lokan
Dr Kathryn Elizabeth Merrick
A/Prof. Ruhul Sarker
Dr Kamran Shafi

SEIT Research Staff


Dr Vinh Bui
Dr Jing Liu
Mr. Jiangjun Tang
Dr Weicai Zhong

SEIT Postgraduate Students


Ms. Heba Zaki Mohamed El-Fiki
Mr. Amr Ahmed Sabry Abdel Rahman Ghoneim
Mr. Peter Hoek
Ms Erandi Lakshika Hene Kankanamge
Mr. George Leu
Ms. Shen Ren
Ms. Bing Wang
Ms. Shir Li Wang
Ms. Kun Wang
Mr. Leon Young
Mr. Bin Zhang

Software Developers
Mr. Qi Fan

External Collaborators
Defence Science and Technology Organisation,
Australia
Dr Axel Bender
National University of Singapore, Singapore
Prof. Tan Kay Chen
Defence Science and Technology Organisation,
Australia
Prof. Neville J Curtis
Defence Science and Technology Organisation,
Australia
Dr Svetoslav Gaidow
Monash University, Australia
Prof. David Green
UC, Australia
Assistant Prof. Eleni Petraki
Kyushu University, Japan
Prof. Jun Tanimoto

Funding Agencies and Sources


Australian Research Council
Defence Science and Technology Organisation


Research Description
Computation underlies the science of using
calculations to understand systems or solve problems
in a systemic manner. Intelligence is the high mental
capacity of a human being to utilize their cognitive
skills in a systematic and rational way to be conscious
of, understand, learn, predict and influence the
surrounding environment, using justifiable actions.
Computational Intelligence is the science of using
computations to represent, model and mimic
intelligence. Examples of Computational Intelligence
include computational problem solving methods and
algorithms inspired by concepts from Nature (e.g.
Evolutionary Computation, Ant Colony Optimization,
Marriage in Honey Bees Optimization, Estimation
Distribution Algorithms, artificial immune systems),
computations through architectures that mimic the
architectures of the brain (e.g. neural networks,
connectionism, cognitive agents), and computations
through linguistic forms (e.g. fuzzy systems,
computational linguistics).
CI members specialize in a wide variety of
computational frameworks, methodologies, methods,
algorithms and techniques for solving problems and
understanding systems. Many projects in CI are
industry driven, attempting to popularize computations
in organisations and systems.

The Causes for No Causation: A Computational Perspective
Causality is grounded in every scientific field. Computational modelling is no exception, except that it is our focus in this article. But what if we have made a mistake? Is causality a constraint on our understanding of complex systems? Is it an obstacle to our ability to build theories to control change in complex systems? Or do we merely need to refine the concept as we evolve from one level of complexity to another?
We [Abbass and Petraki] started the journey of this project by glancing over a few key pieces of work from Philosophy and Metaphysics. We then centred the research on the pivotal element of this project, causality of change in complex systems of systems, and demonstrated that a counterfactual analysis of causality breaks down. We attempted to understand change and separated physical and perceptual elements. Three applications were presented as examples of the type of complexity we face in computational modelling of complex systems of systems. These three applications, covering story generation in linguistics, network-centric operations in defence and interdependency security problems, demonstrate how causal dependencies can be modelled, identified and extracted from a computational environment that mimics real-world complex systems of systems. We then proposed a model, which we call the E4 model, to control change in complex systems.

Evolving high fidelity, low complexity rule-based Multi Agent Simulations of standing group conversations utilising a framework bootstrapped from human aesthetic judgements
This project is part of ongoing work to determine the relationship between rule complexity and fidelity of rule-based Multi Agent Systems. The objective of the framework is to derive high fidelity simulations with minimal computational complexity. The framework is presented in the domain of social simulations of standing conversation group dynamics.
Four conceptual rules, inspired by the seminal boid rules introduced to synthesize the dynamics of flocking behaviours, form the building blocks of the framework. These rules contain parameters which influence the rules, while a combination mechanism combines the rules to determine the agent behaviour. Considering the number of permutations, it is highly resource intensive to derive optimal agent configurations by manual parameter tuning. Hence the framework employs a Genetic Algorithm to search the rule and parameter space more efficiently than a human would be capable of. The Genetic Algorithm utilises a fitness function developed from a machine learning system trained by bootstrapping human judgment on the visual fidelity of a relatively small set of training examples (agent configurations). The framework can be described using the following 5-step process.
1. Derive sets of rules to determine the agent behaviour in the Multi Agent System
2. Present sets of different agent configurations (scenarios/training examples) for human evaluation
3. Train a scorer (a machine learning algorithm) to determine the fitness (visual fidelity) of unseen agent configurations using human scores and extracted features of the scenarios
4. Use a Genetic Algorithm to evolve agent configurations, utilising the automated scorer, in order to derive optimal agent configurations
5. Present evolved agent configurations back to humans for validation
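The evolutionary loop in steps 3-5 can be sketched in a few lines. The snippet below is a minimal illustration only, assuming agent configurations are fixed-length parameter vectors, using scikit-learn's RandomForestRegressor fitted to synthetic stand-in "human" scores as the automated scorer, and a simple generational GA evolved against it; none of the names, parameters or data correspond to the actual framework.

    import numpy as np
    from sklearn.ensemble import RandomForestRegressor

    rng = np.random.default_rng(0)
    N_PARAMS = 8                                   # assumed size of an agent configuration vector

    # Step 3: train a scorer on (configuration, fidelity score) pairs.
    # The scores here are synthetic placeholders standing in for collected human judgements.
    train_x = rng.uniform(0, 1, size=(60, N_PARAMS))
    train_y = 1.0 - np.abs(train_x - 0.6).mean(axis=1)
    scorer = RandomForestRegressor(n_estimators=100, random_state=0).fit(train_x, train_y)

    # Step 4: a simple generational GA using the learned scorer as its fitness function.
    pop = rng.uniform(0, 1, size=(40, N_PARAMS))
    for gen in range(30):
        fitness = scorer.predict(pop)
        parents = pop[np.argsort(fitness)[::-1][:20]]      # truncation selection
        children = []
        for _ in range(len(pop)):
            a, b = parents[rng.integers(20)], parents[rng.integers(20)]
            mask = rng.random(N_PARAMS) < 0.5              # uniform crossover
            child = np.where(mask, a, b) + rng.normal(0, 0.05, N_PARAMS)   # Gaussian mutation
            children.append(np.clip(child, 0, 1))
        pop = np.array(children)

    best = pop[np.argmax(scorer.predict(pop))]
    print("predicted fidelity of best evolved configuration:", round(scorer.predict([best])[0], 3))
    # Step 5 would present configurations such as `best` back to humans for validation.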

Computational Red Teaming: Past, Present and Future
The combination of Computational Intelligence (CI)
techniques with Multi-Agent Systems (MAS) offers
many opportunities for practitioners and
Artificial Intelligence (AI) researchers alike.
CI techniques provide the means to search massive
spaces quickly; find possible, better or optimum
solutions in these spaces; construct algorithms,
functions and strategies to control an autonomous
entity; find patterns and relationships within data,
information, knowledge or experience; assess risk and
identify strategies for risk treatment; and connect the
dots to synthesize an overall situational awareness
picture that decision makers can utilize.
MAS provide the structured, modular, distributed
and efficient software environment to simulate
systems; the architecture to represent systems and
entities naturally; the environment to allow entities to
observe, communicate with, negotiate with, orient
with respect to, and act upon other entities; the
modular representation that allows entities to store
and manipulate observations, forming beliefs, desires,
goals, plans, and intentions; and the framework to
model behavior.
By bringing CI and MAS together, we [Abbass,
Bender, Gaidow and Whitbread] have a powerful
computational environment that has the theoretical
potential to do many things that one can expect
when attempting to structure, understand, and
solve a problem. Computational Red Teaming (CRT)
is the state-of-the-art architecture representing
the integration of CI techniques and MAS for
understanding competition. This integration of MAS
and CI benefits practitioners in almost all major
application domains such as defense, business and
engineering. This project maps out the evolution of
CRT by categorizing the different levels of integrating
CI and MAS, and highlighting open research
questions pertaining to CRT.



Behavioural Analysis in Computational Red Teaming

Red teaming is an approach to studying a task by anticipating the adversary, where the adversary refers to an entity which affects the objectives of the task. A blue entity refers to the entity which would like to achieve the task, while a red entity refers to the circumstances and/or entities which may have an adverse impact on the task. In other words, the blue and red entities have conflicting interests. Originally, red teaming was an approach widely used in military operations to role-play the enemy; test and evaluate its courses of action or judgement; assess the vulnerabilities of the blue team; and learn to understand the dynamics that exist between the red and blue entities. In a computational red teaming environment, red is not necessarily an enemy but any entity with objectives that are in conflict with blue's objectives. The red teaming concept can be mapped onto domains which share similar characteristics, such as adversarial learning, risk assessment and behavioural decision making, and the concept can be expanded further with the use of computational red teaming.

A key question in Computational Red Teaming is: can an autonomous machine red team in human-like ways? This is the underlying question of this project. If we [Wang, Shafi, Lokan and Abbass] are able to establish the feasibility of using a computational environment to play the role of a red entity, we can have red teaming in silico. We would like to understand the differences in behaviour between an autonomous machine and a human, and thus shed some light on the rationality and optimality of strategy selection. Through understanding the differences, the blue team, which can be either a machine or a human, is able to manage its tasks effectively, and this leads to better decision making.

Autonomous design of creative data mining hypotheses in computational red teaming

The volume of data available nowadays exceeds the analysis capacity of experts and researchers. Data carries significant information and knowledge, but these are usually represented as hidden relationships, patterns or trends. Large databases are searched for such relationships, trends or patterns, which are not known to exist and are not visible prior to the start of the search process. Although automated searching/knowledge discovery techniques can be used to analyse and search for such hidden knowledge, they assume that human experts guide the search process. We [Wang, Merrick and Abbass] propose methods to autonomously mine data in a Computational Red Teaming environment.


A Computational Red Teaming based Interactive Learning Environment for Cyber Intelligence
It is a fact that there are many security problems and
various threats existing in the cyber environment.
However, we are still learning how to deal with this
new space. For information assurance, we need to
understand how Cyber Intelligence differs and how
to design proper effective measures of performance,
measures of effect and measures of utility. As such,
there is an increasing need to provide effective
education and training tools for organizations and
individuals. This is the objective of our research.
We [Zhang, Shafi and Abbass] integrate Simulation,
Optimization and Data mining in this project to
provide an Interactive Learning Environment for
Cyber Intelligence. Machine learning and optimization
techniques are used to derive the learning engine.


Cognitive-Aware Adaptive Games


In order to enhance the user game play experience,
to improve the computational methods for detecting
and measuring human emotions in real-time, and
to develop Computational Intelligence techniques
and adaptation models suitable for electronic
games, we (Ren, Barlow & Abbass) design adaptive
mechanisms for games by using physiological and
cognitive monitoring data to create what we coin
as Cognitive-Aware Adaptive Games (CAAG).
In this project, we first design a CAAG framework.
Under this framework, monitored physiological and
cognitive data from the player are analyzed to obtain
emotional indicators. An emotion model which maps
emotional states and indicators to high-order states is
developed. Adaptation is then achieved by changing
the game using the full information derived from real-time monitoring of the players. CAAG could greatly
enhance the game-playing experience of the player.
The findings not only have entertainment value, but
can also be applied to educational and training areas.

Evolutionary Story Generation Methods


In this project, we [Wang, Bender, Bui & Abbass]
propose a computational framework for automated
story-based scenario generation. Under this
framework, a hierarchical grammar approach is
applied to model the complexity of a story at different
levels of granularity: the scene level and the event level. We use a parameterized version of tree
adjoining grammar (TAG). The potential of TAG to
represent and generate complex stories has been
shown in our previous work. The grammar is then
evolved using evolutionary computation techniques to
generate novel story plots, i.e. story-based scenarios.
To evaluate these newly generated scenarios, a
human in the loop model is used. Moreover, to
meet the challenge of generating domain-specific
stories, we propose a semi-automatic story structure
finding approach and natural language processing
techniques to automatically parse stories in a domain
of interest into a network of interrelated events and
entities, and then use network analysis tools to find story
patterns and features.

A Computational Intelligence Approach to Competency and Skill Assessment of Go Players

Complex situations are very much context dependent; thus agents, whether human or computerized, need to attain an awareness based on the present situation. An essential part of that awareness is the accurate and effective perception and understanding of the set of knowledge, skills and characteristics that are needed to allow an agent to perform a specific task with high performance, or what we term Competency Awareness. In this study, we propose a framework whereby a computational environment is used to study and assess the competency of a decision maker. We [Ghoneim, Essam and Abbass] use the game of Go to demonstrate this functionality in an environment in which hundreds of human-played Go games are analysed. In order to validate the proposed framework, a series of experiments on a wide range of problems has been conducted. These experiments automatically (1) measure and monitor the competency of human Go players (see the example figure), (2) reveal and monitor the dynamics of neuro-evolution, and (3) integrate strategic domain knowledge into evolutionary algorithms. The experimental results showed that the proposed framework was effective in measuring and monitoring the strategic competencies of human Go players and evolved Go neuro-players, and was effective in guiding the development of improved Go players when compared to traditional approaches that lacked the integration of strategic competency measurement.

A Computational Linguistic Approach for the Identification of Translator Stylometry in Arabic-English Text

Despite the proliferation of research in the wider field of authorship attribution using computational linguistics techniques, the translator stylometry problem is more challenging and there is insufficient literature on the topic. Some authors have even claimed that this problem does not have a solution; a claim that we [El-Fiki, Petraki & Abbass] challenge in this project. We present an innovative set of translator stylometric features that can be used as signatures to detect and identify translators. The features are based on the concept of network motifs: small local graph substructures which have been used successfully in characterizing global network dynamics. The results demonstrate the efficiency of the approach.
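As a toy illustration of the motif idea (not the feature set developed in this project), the snippet below turns two made-up token sequences into directed word-adjacency networks using networkx and compares their triadic censuses, a simple count of the sixteen 3-node directed patterns.

    import networkx as nx

    def motif_signature(tokens):
        """Build a directed word-adjacency network from a token sequence and
        return its triadic census (counts of the 16 directed 3-node patterns)."""
        g = nx.DiGraph()
        for a, b in zip(tokens, tokens[1:]):
            if a != b:                       # skip self-loops
                g.add_edge(a, b)
        return nx.triadic_census(g)

    # Made-up fragments standing in for two translators' renderings of the same source text
    t1 = "the man walked to the market and the man bought bread".split()
    t2 = "the man went to the market and he bought some bread".split()

    sig1, sig2 = motif_signature(t1), motif_signature(t2)
    for motif in sorted(sig1):
        if sig1[motif] or sig2[motif]:
            print(f"{motif:>5}: translator A = {sig1[motif]:3d}   translator B = {sig2[motif]:3d}")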

Local-Global Interaction and the Emergence of Scale-Free Networks with Community Structures
Understanding complex networks in the real world is a non-trivial task. Researchers resort to computer-generated networks that resemble the characteristics of networks encountered in the real world as a means to generate many networks of different sizes, while maintaining the real-world characteristics of interest. The generation of networks that resemble real-world characteristics turns out in itself to be a complex search problem. We [Liu, Zhong, Green & Abbass] present a new re-wiring algorithm for the generation of networks with unique characteristics that combine the scale-free effect and community structures encountered in the real world. The re-wiring algorithm is inspired by social interactions in the real world, whereby people tend to connect locally while occasionally connecting globally. This local-global coupling turned out to be a powerful characteristic that is required for our proposed re-wiring algorithm to generate networks with community structures, power-law distributions in both degree and community size, positive assortative mixing by degree, and the rich-club phenomenon.
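The local-global idea can be illustrated with a toy generator (this is not the published re-wiring algorithm): nodes are pre-assigned to communities, and each new edge is made locally within the node's own community with high probability, or globally and preferentially towards high-degree nodes otherwise. All parameters below are arbitrary illustrative choices.

    import random
    import networkx as nx

    random.seed(1)
    N, COMMUNITIES, EDGES_PER_NODE, P_LOCAL = 300, 6, 4, 0.85   # assumed toy parameters

    G = nx.Graph()
    community = {v: v % COMMUNITIES for v in range(N)}
    G.add_nodes_from(range(N))

    for v in range(N):
        for _ in range(EDGES_PER_NODE):
            if random.random() < P_LOCAL:
                # local step: connect to a random node in the same community
                candidates = [u for u in G if community[u] == community[v] and u != v]
            else:
                # global step: connect preferentially to high-degree nodes anywhere
                degrees = dict(G.degree())
                candidates = random.choices(list(G.nodes()),
                                            weights=[degrees[u] + 1 for u in G.nodes()],
                                            k=10)
                candidates = [u for u in candidates if u != v]
            if candidates:
                G.add_edge(v, random.choice(candidates))

    degs = sorted((d for _, d in G.degree()), reverse=True)
    print("nodes:", G.number_of_nodes(), "edges:", G.number_of_edges())
    print("five largest degrees (heavy tail expected):", degs[:5])
    print("assortativity by degree:", round(nx.degree_assortativity_coefficient(G), 3))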

A Competency-Level Monitoring Curve for a Human Go Player.


Concrete Technology
and Materials

SEIT Academics
A/Prof Obada Kayali
Prof Evgeny Morozov
Dr Amar Khennane
Dr Tapabrata Ray

SEIT Postgraduate Students


Chang Lin
Yuan Fang
M. Shakhaout Hussein Khan
M. Talha Junaid

SEIT Undergraduate students


Juliana Karantonis
Michael Lynch
Thomas Bleeck
Damian Selby

Other Collaborators
Roads ACT
Dr M. Sharfuddin Ahmed
CEO, VECOR Building Systems Ltd
Mr. Alex Koszo
University of Kuwait, Kuwait
Prof M. Naseer Haque
University of Wolverhampton, UK
A/Prof. Jamal M. Khatib
Geomaterials Laboratory, Civil Engineering Department, University of Blida, Algeria
Dr S. Kenai

Performance of High Volume Fly Ash Concrete


Major industries in Australia and around the world produce very large quantities of waste. Coal power generation, iron and steel, aluminium and silicon industries all produce various waste materials. Although such materials are considered
waste from the particular industry viewpoint, they
can be of enormous value for concrete and building
industries. The facts however are: (a) concrete
manufacturing is one of the major causes of green
house gas emissions, (b) the industrial waste can
be an environmental hazard, (c) some of the waste
materials actually possess properties that are very
desirable to have in concrete, (d) some of these materials can substitute for cement, which is the main contributor to CO2 emissions in concrete and building activities, and (e) utilising such materials is beneficial to the economy of the producing industry, the concrete industry and society. Thus it makes
sense to direct research to assess the properties and
effects of replacing cement by a large amount of
waste materials such as fly ash, blast furnace slag,
metakaolin and silica fume. In the years 2010/2011
findings from our concrete research have been
published and presented in international journals
and conferences. The positive contributions of the
use of ground granulated blast furnace slag as a large-volume replacement of cement have been systematically studied and reported. A rigorous comparative study of the effects of fly ash inclusion in large quantities has been reported in a couple of important publications and is expected to contribute significantly to the knowledge in this area.


Research Description
The advances in the technology of concrete have
gained momentum in the past decade. New materials,
design methods, ideas, innovations and standards
have been introduced. Meanwhile, the issue of
sustainability of building materials in general and of
the concrete industry in particular has come under
careful scrutiny. This is because the production of concrete is a major source of greenhouse gas emissions. To put this into perspective, it has been found that for every tonne of cement produced, a tonne of carbon dioxide is emitted into the atmosphere. Thus the
technology of concrete has taken a special direction
aimed towards innovations to produce concrete that
may be sustainable.
The research activities in concrete and building
materials which are taking place at the School of
Engineering and Information Technology have been
steered towards the theme of sustainability.
This has included the innovation in materials as well as
research that aims to produce concrete of desirable
and predictable durability. These activities are briefly
described in the following paragraphs.


Measurement of corrosion current in the reinforcement

Compressive strength as a function of fly ash replacement

Engineering Cementitious Materials and the Use of Fibres
Developments in the science of fibre reinforced
concrete are expected to have far reaching effects
on the durability of structural concrete. Already high
performance fibre reinforced cementitious composites
have been successfully applied to the retrofitting of
damaged concrete beams. External strengthening
of various elements of civil engineering construction
has become very popular with the use of fibre
reinforced polymer composites. Fibre reinforced
plastics have been very much researched and
successfully employed in retrofitting structures that
were not initially designed to withstand seismic loads.
Moreover, the recent advances in the manufacturing
of fibre reinforced cementitious materials have made it
possible to effectively resist seismic loading resulting
in saving of lives as well as structures. The significant
reduction in crack width and permeability that the
engineered fibre reinforced concrete can achieve,
has made it possible to protect coastal structures
against reinforcement corrosion and thus significantly
prolong the life expectancy of structures. Research in
this area has taken important strides that established
engineered fibre reinforced cementitious materials as
a most promising area of research in the School.

Stress-strain diagrams for Engineered Cementitious Materials

Load-deflection curves of fibre reinforced concrete beams made with Engineered Cementitious Composite materials

Scanning Electron Microscopy of Engineered cementitious materials


Geopolymer Research
Geopolymer now promises to be a major building material. Its importance is that it can be made from industrial wastes and by-products. The use of fly ash and blast furnace slag in manufacturing this type of concrete is an efficient way to get rid of the waste material, as well as to create an excellent performing concrete and reduce the dependence on cement. The figures below show some results obtained through research into the nature and behaviour of geopolymer concrete.

Back-scatter image of fly ash

Bond of reinforcement in ordinary portland cement concrete

Scanning Electron Microscopy in Geopolymer Research

Bond of reinforcement to geopolymer concrete

XRD of geopolymers at various stages (M: Mullite, Q: Quartz, G: Gypsum, P: Portlandite, A: Albite, L: Labradorite)


Optimization of mix design using an optimized particle size distribution algorithm
Optimum mix design depends to a large extent on the density values and grain size distribution of the ingredients. UNSW Canberra possesses instruments that allow the study of grain size as well as the specific gravity values of concrete constituents. This project develops an algorithm that optimizes the proportions of concrete ingredients based on particle size distributions and mix design grading requirements.
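A minimal sketch of this kind of proportioning calculation is given below, assuming made-up cumulative passing curves for the four ingredients and an illustrative Andreasen-type target grading; non-negative blend fractions are fitted by least squares (SciPy's nnls) and then normalised. This is not the project's algorithm, only a demonstration of the idea.

    import numpy as np
    from scipy.optimize import nnls

    # Illustrative cumulative % passing at a few particle sizes [um] (made-up data)
    sizes = np.array([1, 2, 5, 10, 20, 45, 90], dtype=float)
    curves = {
        "cement":      [ 5, 12, 30, 55, 80, 97, 100],
        "ggbf_slag":   [ 6, 14, 34, 60, 85, 98, 100],
        "silica_fume": [60, 85, 98, 100, 100, 100, 100],
        "fly_ash":     [ 4, 10, 25, 45, 70, 92, 100],
    }
    A = np.array(list(curves.values()), dtype=float).T        # (n_sizes, n_ingredients)

    # Target grading: a modified Andreasen-type curve (exponent 0.30), purely illustrative
    target = 100.0 * (sizes / sizes.max()) ** 0.30

    # Solve for non-negative proportions, then normalise them to sum to one.
    # (A fuller formulation would impose the sum-to-one constraint directly.)
    x, _ = nnls(A, target)
    x = x / x.sum()

    for name, frac in zip(curves, x):
        print(f"{name:12s}: {100*frac:5.1f} % of binder blend")
    blend = A @ x
    print("max deviation from target grading:",
          round(float(np.max(np.abs(blend - target))), 1), "% passing")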


Particle Size Distribution of Cement, Ground Granulated Blast Furnace Slag, Silica Fume and Fly Ash using equipment at UNSW Canberra.
Research into printable concrete
This research is focused on exploring admixture
interactions in order to develop an acceptable
concrete that can be used in a potential automated
construction process. This automated construction
process can be described as concrete printing.
Printed concrete construction involves concrete being pumped through a system to be extruded into its desired shape without formwork.

This research explores the issues involved in developing a concrete that is structurally viable and can meet the requirements to make this construction process possible. In order to make this possible, rapid setting must be achieved; therefore the hydration of cement is explored along with the use of accelerators. In addition, the use of super-plasticisers with air-entraining properties is explored to see if they are beneficial in the creation of a printable concrete.

Effect of accelerating and superplasticiser admixtures on concrete strength


Control Theory and

Control Applications

SEIT Academics
Prof Ian R. Petersen
A/Prof Hemanshu R. Pota
A/Prof Valeri Ougrinovski

SEIT Postgraduate Students


Ms Ning Chuang
Mr Hua Ouyang
Mr Sayed Sayed-Hassen
Mr Hendra Harno
Ms Aline Maalouf
Mr Obaid Rehman
Mr Md Apel Mahmud
Mr Ahmed Fathi Abdou
Mr Rabiul Islam
Mr A. B. M. Nasiruzzaman
Mr Adnan Anwar
Mr Md. Sawkat Ali
Mr Naruttam Kumar Roy
Mr Abdul Barik
Mr Md. Shihanur Rahman
Ms Tahsin Fahima Orchi
Mr Habibullah Habib
Mr Sajal Kumar Das
Mr Md. Sohel Rana
Mr Tushar Kranti Roy
Mr Shanon Vuglar
Mr Mohamed Mabrok
Mr Cheng Yi
Ms Medria Hardhienata

SEIT Research Staff


Dr Abhijit Kallapur
Dr Daoyi Dong
Dr Dabo Xu
Dr Igor Vladimirov
Dr Hamid Teimoori Sangani
Dr Mahendra Samal
Dr Luis Duffaut Espinosa

Other Collaborators
SEIT, UNSW Canberra
Prof Elanor Huntington
A/Prof Charles Harb
Mr Toby Boyson
Dr Kathryn Merrick
Prof Jiankun Hu
Dr Matt Garratt
Dr Sreenatha Anavatti


School of Engineering Systems, QUT


Dr Jason Ford
University of Illinois at Urbana-Champaign
Prof Cedric Langbort
Mr Takashi Tanaka
Australian National University
Prof Matthew R. James
Dr Hendra Nurdin
Mr Zibo Miao
University of Manchester
Dr Alexander Lanzon
Ms Zhuoyue Song
Mr Sonke Engelken
Dr Sourav Patra
University of Sao Paulo, Sao Carlos
Dr R. A. Ramos
Keio University, Japan
Dr Naoki Yamamoto
University of Newcastle
Prof. S. O. R. Moheimani
University of Waterloo
Dr Baris Fidan
Albert-Einstein-Institut Hannover
Juniorprof. Dr Michele Heurs
IIT Madras
Dr A.J. Shaiju
Dr B. Bhikkaji

Research Description
Feedback control systems are widely used in
manufacturing, mining, automobile and military
hardware applications. In response to demands for
increased efficiency and reliability, these control
systems are being required to deliver more accurate
and better overall performance in the face of
difficult and changing operating conditions. In order
to design control systems to meet the demands
of improved performance and robustness when
controlling complicated processes, control engineers
require new design tools and better underlying
theory. The Control Research Group conducts
fundamental research into theory and applications of
automatic control systems. Particular interests of the
members of the group include theory of optimal and
robust control systems, quantum control, stochastic
control systems, and applications to active noise
control, signal processing, navigation and guidance.
The Control group receives financial support from
the Australian Research Council and the Defence
Science and Technology Organisation.

Constructive control of interconnected systems


The aim of this project, supported by an ARC Discovery Project awarded to A/Prof Ougrinovski in 2008, is to develop a constructive feedback control
theory of complex interconnected systems that is
focused on new distributed and decentralized control
methodologies, by combining the method of vector
Lyapunov functions with advanced approaches of
robust, stochastic and nonlinear control.
In 2010 we continued research into the development
of observer-based algorithms for distributed
estimation of uncertain systems which began in 2009.
The objective is to develop constructive algorithms for
the synthesis of networks of interconnected robust
estimators. Our approach is to treat this problem as
a distributed control problem where one seeks to
design appropriate interconnection control protocols
for exchanging information between the nodes.
This year we applied the approach of vector dissipativity and vector Lyapunov functions to allow for the design of networks of robust observers connected over randomly failing channels. The main result obtained in 2011 is a sufficient condition which guarantees a suboptimal H-infinity level of disagreement of estimates in a network of filters which use only locally available information about the network connectivity. It involves solving an optimization problem subject to LMI and rank constraints.
The research into connective stability of stochastic nonlinear systems has been finalized. A known result in the stability theory of stochastic systems with nonlinear Lipschitz-bounded noise intensity states that the robust stability radius of such a stochastic system is equal to the inverse of the H2 norm of its noise-to-output transfer function. This research extends this result to the case where one is interested in the diagonal stability of the system under consideration. This problem arises naturally when studying large-scale interconnected systems subject to random perturbations, as one is often interested in using diagonal or block-diagonal Lyapunov functions for such plants. The main result of this research is a characterization of the diagonal stochastic stability radius, which is similar to the above result for non-diagonal stability.
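To make the quoted relationship concrete: for a stable linear system (A, B, C) the H2 norm of the noise-to-output transfer function can be computed from the controllability Gramian, and the cited stability radius is its inverse. The sketch below does this for an arbitrary illustrative system using SciPy; it demonstrates only the H2 computation, not the diagonal-stability extension developed in this project.

    import numpy as np
    from scipy.linalg import solve_continuous_lyapunov

    # Illustrative stable system x' = Ax + Bw, z = Cx (values are arbitrary placeholders)
    A = np.array([[-2.0, 1.0],
                  [ 0.0, -3.0]])
    B = np.array([[1.0],
                  [0.5]])
    C = np.array([[1.0, 0.0]])

    # Controllability Gramian P solves A P + P A^T + B B^T = 0
    P = solve_continuous_lyapunov(A, -B @ B.T)

    h2_norm = float(np.sqrt(np.trace(C @ P @ C.T)))
    print("H2 norm of the noise-to-output transfer function:", round(h2_norm, 4))
    print("robust stability radius (1 / H2 norm)          :", round(1.0 / h2_norm, 4))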
We also considered the problem of measurement
feedback decentralized stabilization of large-scale
interconnected nonlinear systems. Motivated by the
recent developments in control vector Lyapunov
functions, the notion of an output control vector
Lyapunov function is defined which serves as a
starting point for the investigation of a decentralized
version of feedback stabilization problem for such
systems. This paper focuses on the measurement
feedback decentralized stabilization problem.
The main contributions of this research are solutions
to the static version of the problem. An example is
given to illustrate the proposed design methods.

Robust Filtering of Uncertain Hidden Markov Models with Conditional Relative Entropy Constraints
We consider a robust filtering problem for uncertain discrete-time, homogeneous, first-order, finite-state hidden Markov models (HMMs). The class of uncertain HMMs considered is described by a conditional relative entropy constraint on measures perturbed from a nominal regular conditional probability distribution given the previous posterior state distribution and the latest measurement. Under this class of perturbations (which is assumed to contain a regular conditional probability measure corresponding to the true system), a robust infinite horizon filter is first formulated as a constrained optimization problem before being transformed via variational results into an unconstrained optimization problem that can be elegantly solved using a risk-sensitive information-state based filtering problem.

Control Theory and Its Application to Pendulum-like Systems
This project addresses the stability analysis problem and the stabilizing controller synthesis problem for pendulum-like systems with multiple nonlinearities. An existing method for analyzing the Lagrange stability of pendulum-like systems with a single nonlinearity is generalized to pendulum-like systems with multiple nonlinearities. Also, a non-degeneracy condition of the existing Lagrange stability criterion is removed and a strict frequency-domain inequality is used instead. To study the synthesis problem, this project develops an Extended Strict Bounded Real Lemma for systems which are not stable but are stabilizable. A sufficient condition for state feedback control design is proposed in terms of a sign-indefinite solution to an algebraic Riccati equation.

Autonomous deployment and recovery of rotary wing UAVs from moving platforms
This project is made up of several diverse components: sensor technology, signal processing, real-time control, system identification, and nonlinear control. For this research we have two experimental helicopters (UAVs), an RMAX and an Eagle. The helicopters are instrumented with GPS sensors, inertial measurement units, laser rangefinder systems, and a research platform for ultrasonic sensors.
In 2011 two controllers were designed to attenuate gust disturbances during hover: (a) model predictive control, and (b) backstepping control including sensor delays. Experimental testing of these controllers is planned for 2012.


Renewable Energy Integration in Power Grids


Modern power system grids are interconnections of synchronous generators and electrical loads, separated over huge distances of the order of a few thousand kilometres. The first line of power system controllers are (a) input mechanical power controllers, which match the input mechanical power to the electrical output power, and (b) voltage controllers, which maintain a fixed voltage. Owing to the ever increasing load and the reluctance to build new power stations, more and more power is being pushed over the tie-lines in grids. These high tie-line power flows make the interconnected system dynamics tightly coupled and often make the system critically stable.
In order to operate critically stable systems, a damping signal, using the generator speed as input, is generated by what are known as power system stabilisers (PSS). Classical power system stabilisers are designed for a nominal operating point of the system. In this work we partition the entire power system operating range and then design a PSS for a nominal point in each partition.
In 2011, analysis and control design was carried out to integrate renewable generation into distribution systems. The approaches researched were (a) including FACTS devices for voltage profile control, (b) complex network based analysis to identify critical system links, (c) multi-agent based control for fault recovery, and (d) the use of plug-in hybrid vehicles to improve the power quality in distribution systems.

The UNSW Canberra power system testbed.

Robust Feedback Control in Quantum Technology
Developments in quantum technology are presenting new challenges to control theory. There is a need for robust feedback control design methods for quantum systems that are capable of achieving desired performance while compensating for the detrimental effects of uncertainty, decoherence and noise, and taking into account the fact that measurements affect the dynamics of quantum systems.
Standard control methods do not take into account
the special features of quantum systems; these
features, however, are critical to the operation of these
systems and provide opportunities beyond those
available in classical systems. This project addresses
these challenges by developing a new theory of
robust feedback control for quantum systems.
Research in this project is currently directed towards
developing quantum versions of standard control
theory techniques such as LQG control and H-infinity
control and also applying evolutionary optimization
techniques to these problems. This research is
exploring the use of controllers which are themselves
quantum systems along with mixed classical and
quantum controllers. A practical side of this research
is concerned with the frequency locking problem for
optical cavities, applying modern control techniques
such as LQG control. Three PhD students are working
in this area and their research has involved both
theoretical development of robust quantum control
theory as well as the experimental implementation of
an LQG controller in locking an optical cavity.

Negative Imaginary Systems and the Control of Flexible Structures and Nano-positioning
Many industrial and scientific devices include components that can be classified as flexible structures. Flexible structures are highly resonant systems, and are therefore susceptible to high amplitude oscillations even in the presence of weak disturbances. These oscillations can result in significant loss of precision and possible breakdown if the amplitude of oscillations crosses the elastic limit. Thus, there is a clear need to damp or control the oscillations that arise in flexible structures. A large number of control design techniques have been proposed for this purpose. In particular, there has been significant and growing interest in control design for flexible structures with collocated sensors and actuators. This project is concerned with the theory of Negative Imaginary systems, which leads to a number of important methods for controlling flexible structures. Applications of this research include the use of piezoelectric actuators for nano-positioning in areas such as atomic force microscopes and optical cavities.
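For a single-input single-output system, the negative imaginary property can be checked numerically: a stable transfer function G(s) is negative imaginary if the imaginary part of G(jw) is non-positive for all w > 0. The sketch below checks this for a typical collocated flexible-structure model, a sum of lightly damped modes with positive gains; the modal parameters are assumed values for illustration only.

    import numpy as np

    # Collocated flexible structure model: G(s) = sum_i  k_i / (s^2 + 2 z_i w_i s + w_i^2)
    # with positive gains k_i -- a standard example of a negative imaginary system.
    modes = [(1.0, 0.02, 2.0), (0.5, 0.01, 9.0), (0.2, 0.015, 25.0)]   # (k_i, z_i, w_i), assumed values

    def G(w_rad):
        s = 1j * w_rad
        return sum(k / (s**2 + 2*z*w*s + w**2) for k, z, w in modes)

    omega = np.logspace(-2, 3, 2000)
    im = np.array([G(w).imag for w in omega])

    print("max Im G(jw) over the frequency grid:", im.max())
    print("negative imaginary on this grid:", bool(np.all(im <= 1e-12)))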




Control of Hypersonic Vehicles


This research is concerned with the application of
methods of robust and nonlinear control theory to the
problem of designing a flight control system for a
hypersonic vehicle. Using existing models available
in the literature, the method of feedback linearization
can be applied to design a flight control system.
However, the available models are known to be highly
inaccurate due to a lack of experimental data. Hence,
in this research, the feedback linearization method
is combined with uncertainty modelling and minimax
optimal control methods to design nonlinear robust
controllers for hypersonic vehicles. One PhD student
is currently working on this project.
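The cancellation step at the core of feedback linearisation can be shown on a toy scalar system x' = f(x) + g(x)u (not the hypersonic vehicle models referred to above): choosing u = (v - f(x))/g(x) makes the closed loop behave as x' = v, after which the outer signal v can be designed by linear, minimax or other robust methods. A minimal sketch, with invented dynamics:

import numpy as np

# Illustrative nonlinear plant x' = f(x) + g(x) u  (invented dynamics, not a vehicle model)
f = lambda x: -0.5 * x + 0.8 * np.sin(x) * x**2
g = lambda x: 1.0 + 0.1 * x**2               # non-vanishing, so the cancellation is well defined

def feedback_linearising_control(x, v):
    """Cancel f and g so the closed loop is x' = v (v is the outer-loop linear control)."""
    return (v - f(x)) / g(x)

def simulate(x0, x_ref=1.0, k=2.0, dt=1e-3, steps=5000):
    """Euler simulation of the true plant under the cancelling control and v = -k (x - x_ref)."""
    x = x0
    for _ in range(steps):
        v = -k * (x - x_ref)
        u = feedback_linearising_control(x, v)
        x = x + dt * (f(x) + g(x) * u)
    return x

print("final state:", simulate(x0=-2.0))     # converges towards x_ref = 1.0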

Cyber Security

SEIT Academics
Prof. Jiankun Hu
Dr Lawrie Brown
Dr Robert Stocker
Dr Frank Jiang
Dr Kathryn Merrick
Prof. Ian Petersen

SEIT Postgraduate Students
Mr Wencheng Yang
Mr Kai Xi
Ms Wanrong Wu

Other Collaborators
School of Computer Science, University of Western Australia
Prof. Mohammed Bennamoun
RMIT University
Prof. Z. Tari, Dr F. Han, Dr Ibrahim Khalil, Prof. Xinghuo Yu
Beihang University, China
Dr Jihao Yin

Biometric Security

A fundamental flaw in existing embedded e-security technologies is their cryptography-plus-PIN-number infrastructure. This has generated security concerns that have proved a major obstacle to the growth of e-commerce, which presently holds a relatively poor 2% market share. We aim to design a new infrastructure that can solve this security problem by incorporating cryptography and biometric authentication into a computing-resource-limited embedded e-security system. The outcomes of this project will be a set of new cryptosystems, new biometrics processing schemes and new onboard resource allocation schemes that will form the basis for the next generation of embedded systems.

Fingerprint Related Project

Fingerprint authentication is an emerging technology for security. A fingerprint is unique and can never be forgotten. It has found many applications in banking, mobile device access control, law enforcement, customs and border control, etc. It is predicted that fingerprint security features will become ubiquitous. Several interesting projects are given below.

Project 1: Fingerprint registration. Most fingerprint applications rely on registration, a process that aligns different imprints. This is non-trivial, as each captured fingerprint tends to be different. The project explores more reliable registration algorithms (a simple alignment sketch is given after Project 3).

Project 2: Fingerprint indexing. The size of fingerprint databases is exploding, easily reaching the scale of 100 million records. Given an unknown fingerprint, how can a match be found reliably and quickly in such very large databases?

Project 3: Fingerprint smart card template protection. Fingerprints can be used for strong authentication; however, the fingerprint biometrics stored in the smart card template must themselves be protected. This project explores effective schemes to achieve that.
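To illustrate the registration problem in Project 1, the sketch below computes the least-squares rotation and translation (a Procrustes/Kabsch fit) that aligns two sets of corresponding minutiae. Real fingerprint registration must also establish the correspondences and handle distortion, so this is only a toy under the assumption that matching minutiae pairs are already known.

import numpy as np

def rigid_align(src, dst):
    """Least-squares rotation + translation aligning corresponding 2D minutiae (Kabsch fit)."""
    src_c, dst_c = src - src.mean(0), dst - dst.mean(0)
    U, _, Vt = np.linalg.svd(src_c.T @ dst_c)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                  # keep a proper rotation (no reflection)
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = dst.mean(0) - R @ src.mean(0)
    return R, t

# Synthetic example: a second imprint is a rotated and translated copy of the first
rng = np.random.default_rng(0)
minutiae = rng.uniform(0, 300, size=(20, 2))                  # (x, y) positions, arbitrary units
theta = np.deg2rad(12.0)
R_true = np.array([[np.cos(theta), -np.sin(theta)], [np.sin(theta), np.cos(theta)]])
imprint2 = minutiae @ R_true.T + np.array([15.0, -8.0])

R, t = rigid_align(minutiae, imprint2)
residual = np.linalg.norm(minutiae @ R.T + t - imprint2)
print("alignment residual:", residual)        # ~0 when the correspondences are exact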

Research Description
Cyber security is a major concern in our information age and will become more threatening due to ubiquitous network connections and the more advanced and automated attack tools now available. Cyber attacks can intrude on privacy and bring down an entire plant, communication centre or command system. Intrusion Prevention Systems (IPSs), e.g. firewalls, are intended to prevent these attacks but cannot effectively deter the new attacks or virus variants that appear daily. What about insider attacks? Such attacks do not go through network firewalls at all. Cyber security is an exciting and challenging area for both academic research and industrial applications, and it will be an ongoing effort for the foreseeable future.
The strength of a security system is always determined by its weakest link. Therefore cyber security is a system
concept and covers a very broad spectrum including
cryptography, access control, authentication, network
security, intrusion detection etc. The Cyber Security
group at UNSW Canberra conducts both theoretical
and applied research in the aforementioned topics
with emphasis on biometrics security, sensor network
key management, and intrusion detection.
The Cyber Security group has received financial
support from the Australian Research Council, and the
University of New South Wales.


Face Related Project


Similar to fingerprints, the face is a popular biometric authentication feature which has been applied in many places. The Australian Parliament recently (2007) passed a bill to introduce face recognition based smart cards for health and social services.
However, protection of the face biometric template remains an open issue, and the false recognition rate is still very high for very large scale databases. This project explores effective solutions.
The following achievements have been made:
1. One awarded ARC Linkage grant on partial fingerprint identification, including 3D fingerprint identification.
2. Two ERA A*-ranked and one ERA A-ranked journal publications, including the prestigious IEEE Transactions on Pattern Analysis and Machine Intelligence.

Network Security
Network security is becoming a major issue in our daily life. Firewall technology alone seems to be insufficient, as we see more and more security break-in reports concerning worms and various other attacks. In theory, it is impossible to prevent all such attacks. Therefore, it is very important to have a second line of defence, which is intrusion detection. Several interesting projects are given below.
Project 1: Anomaly intrusion detection. Normally, firewall technology can detect attacks with known features. However, new ways of attacking are appearing all the time, so there is a need to detect attacks with unknown features. Anomaly detection is a promising technology that can detect unknown attacks. This project explores effective schemes to reduce the high false alarm rates of existing technologies.
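A toy sketch of the anomaly-detection idea in Project 1 (not the group's schemes): connections are scored by how far their features sit from a baseline learned on normal traffic, and the alarm threshold is exactly the knob that trades detection against the false-alarm rate discussed above. The features, numbers and threshold are invented.

import numpy as np

rng = np.random.default_rng(1)

# Invented per-connection features: [bytes sent, duration (s), distinct ports contacted]
normal_traffic = rng.normal(loc=[5000.0, 2.0, 3.0], scale=[1500.0, 0.8, 1.0], size=(1000, 3))

# Baseline statistics learned from attack-free traffic only
mu, sigma = normal_traffic.mean(0), normal_traffic.std(0)

def anomaly_score(x):
    """Largest absolute z-score across features; high values mean unlike the learned baseline."""
    return np.max(np.abs((x - mu) / sigma))

THRESHOLD = 4.0     # raising it lowers false alarms but may miss subtle attacks

probe = np.array([4800.0, 2.1, 40.0])         # e.g. a port-scan-like connection
print("score:", round(float(anomaly_score(probe)), 1), "alert:", anomaly_score(probe) > THRESHOLD)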
Project 2: Wireless sensor network security. Wireless sensor networks are regarded as one of the most influential technologies of the 21st century. However, security is a major issue, as a sensor network is normally deployed in a hostile environment and cannot afford many existing security mechanisms due to energy constraints. This project will explore energy-efficient security schemes, especially for cryptographic key generation and distribution.


Developmental Systems
and Machine Learning

SEIT Academics
Dr Kathryn Merrick
Dr Kamran Shafi
Dr Amitay Isaacs
Dr Michael Barlow
Dr Chris Lokan
A/Prof Valeri Ougrinovski
Prof Hussein Abbass
Dr Jen Badham (Visiting Fellow)

SEIT Postgraduate Students


Mrs Medria Hardhienata
Mr Muhummad Shoaib Khan Niazi
Mr Essam Soliman Yousseif Mohamed Debie
Ms Bing Wang

Other Collaborators
University of Newcastle
Dr Ning Gu
University of Maryland
Dr Mary Lou Maher
University of Sydney
Dr Xiangyu Wang

Research Description
Topics studied by the Developmental Systems and
Machine Learning group lie at the intersection of
cognitive science, developmental robotics, virtual
worlds and machine learning research. Cognitive
science is the interdisciplinary study of how
information used during perception, language,
reasoning, motivation and emotion, is represented
and processed, either in a human or animal, or by
a machine (specifically a computer in our case).
Developmental robotics and character animation
in virtual worlds are application areas that use
principles of cognitive and developmental sciences
to build artificial systems capable of ontogenetic
development. Such systems initially have little or no
domain-specific knowledge or skills in their infant
stage, but are equipped with generic reasoning
mechanisms that permit them to acquire such
knowledge and skills through interaction with their
environment as they mature to an adult stage.
Research areas of interest to the Developmental
Systems and Machine Learning group include, but are
not limited to, reinforcement learning, neural networks,
data mining, ensemble learning and learning
classifier systems, as well as naturally inspired
cognitive models, genetic and evolutionary systems.
Applications include robotics, digital characters
in virtual worlds, intelligent environments, network
intrusion detection and social networks.

Highlights in 2011 include the renovation of the


Developmental Robotics Laboratory, commencement
of a fortnightly research meeting in conjunction with
the Virtual Environments and Simulation Laboratory
and the welcoming of two new postgraduate students.

Computational Models of Achievement,


Affiliation and Power Motivation for Artificial
Systems
In the area of cognitive science and developmental
systems, 2011 saw publication of a number of
computational models of motivation for use in
agent-based or robotic applications. Computational
models of the influential trio: achievement, power
and affiliation motivation were developed for artificial
systems. The new models have been validated in
agent-based simulations of well known experiments
from human psychology, including the ring-toss
experiment, roulette experiment and a prisoner's
dilemma experiment. Results show that our new
models permit the design of agents with statistically
similar decision-making properties to humans
under certain cooperative, competitive and risk-taking conditions. These results were published in
the Adaptive Behavior journal and presented at the
International Conference on Autonomous Agents and
Multiagent Systems in 2011. In 2011 these models
were studied for single-shot decision making. In
2012 this work will continue with a focus on iterative
decision making.

Reasoning in the Absence of Goals


In creative industries such as design and research it
is common to reason about problem-finding before
tasks or goals can be established. Problem-finding
may also continue throughout the problem-solving
process, so achieving goals may be an ongoing
process of discovery as well as iterative improvement
and refinement. This project considers the design of
cognitive systems with complementary processes
for both problem-finding and problem-solving. In
2011 we reviewed a range of approaches that may
complement goal-directed reasoning when an
artificial system does not or cannot know precisely
what it is looking for. We argue that there is a spectrum
of approaches that can be used for reasoning in the
absence of goals, which make progressively weaker
assumptions about the definition and presence
goals, and that goal-oriented behavior can be an
intermediate result of problem-finding, rather than
as a starting point for problem-solving. In 2011 this
project supported a Chief of Defence Force Student
Project, resulting in a publication at the AAAI Fall
Symposium on Advances in Cognitive Systems.


Task Allocation in Multi-Agent Systems Using


Models of Motivation and Leadership


This PhD project focuses on how to improve the


efficiency of multi-agent coordination, so that
coordination problems can be solved in a more reliable
way. To address one aspect of this issue, this project
proposes a new method that endows agents with
models of motivation and leadership to aid agent coordination. Initially, we study the model for solving a
task allocation problem. The outcomes of this project
in 2011 include a new approach named Motivated
Particle Swarm Optimization (MPSO) algorithm that
embeds the agents with a model of motivation and
leadership for coordination. This approach considers
the task allocation problem in the case where there is a
small number of agents initialized at a single point. The
objective is to achieve an even distribution of agents
to tasks. The proposed approach uses the Particle
Swarm Optimization algorithm with a ring neighborhood
topology as a foundation and incorporates
computational models of motivation to achieve the
goals of task allocation more effectively. The results
of the numerical experiments show that compared to
the lbest PSO algorithm using a ring topology, first,
the proposed method increases the number of tasks
discovered. Secondly, the number of tasks to which
the agents are allocated increases. Thirdly, the agents
distribute themselves more evenly among the tasks.
In 2012 this work will continue by tuning critical parameters and varying the number of agents of each type, using an optimization strategy, to strengthen the role of the leader and improve the performance of the proposed method. Other potential work for 2012 is to implement nearest-neighbour methods in which neighbourhoods are selected dynamically, based on the distance between agents in the search space.
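For context, the lbest PSO baseline with a ring neighbourhood mentioned above can be written in a few lines; the motivation and leadership extensions of MPSO are not reproduced here, and the objective function and coefficients below are illustrative only.

import numpy as np

def lbest_pso(objective, dim=2, n_particles=20, iters=200, seed=0):
    """Minimise `objective` with local-best PSO; each particle's neighbourhood is itself
    and its two ring neighbours (indices i-1 and i+1, wrapping around)."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(-5, 5, (n_particles, dim))
    v = np.zeros_like(x)
    pbest, pbest_val = x.copy(), np.array([objective(p) for p in x])

    w, c1, c2 = 0.7, 1.5, 1.5                       # inertia and acceleration coefficients
    for _ in range(iters):
        for i in range(n_particles):
            ring = [(i - 1) % n_particles, i, (i + 1) % n_particles]
            lbest = pbest[ring[np.argmin(pbest_val[ring])]]
            r1, r2 = rng.random(dim), rng.random(dim)
            v[i] = w * v[i] + c1 * r1 * (pbest[i] - x[i]) + c2 * r2 * (lbest - x[i])
            x[i] += v[i]
            val = objective(x[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = x[i].copy(), val
    best = np.argmin(pbest_val)
    return pbest[best], pbest_val[best]

sphere = lambda p: float(np.sum(p**2))              # illustrative objective (minimum at the origin)
print(lbest_pso(sphere))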

Creative Search Systems and Hypothesis Generation

This PhD project is investigating the design of search


systems that can explore an environment defined
by a batch of input data, and find patterns hidden
in the environment. This intelligent system should be
capable of learning to improve its search ability over
time. In 2011 this project developed definitions of the intended search system, as well as performance
measures of an improving search system from several
different aspects. A literature review in the areas
of hypothesis generation, rule mining and machine
learning has also been conducted.

Extending the Data Mining Capabilities


of Learning Classifier Systems to High-Dimensional Search Spaces and Limited
Training Data
Learning classifier systems are emerging genetics-based machine learning techniques that have recently
shown a high degree of competence on a variety of
data mining problems. One critical problem that is
highlighted in recent research is the stalling of the
genetic search when faced with high-dimensional
problems. Another problem is their performance
degradation when dealing with limited training data.
This latter problem has not been analysed adequately
in the literature. This PhD project is addressing these
two problems independently and in combination.
A literature review has been conducted in the area
of learning classifier systems and a number of
experiments conducted to systematically diagnose
the key issues that cause Learning Classifier Systems
to perform poorly under the two scenarios. This work
will continue in 2012 to formalise the learning bounds
under the two problems and develop solutions to deal
with issues highlighted by our analysis.


Motivated Agents for Modelling Social


Network Crime
Still in the computer security domain, this Masters
by research project is developing agent-based
simulations of social network crime with specific
focus on spam and the role of human motivation
in generating and detecting spam. To reduce the
potential threats and risks to social networking
sites (SNSs), we require an understanding of the
contributing factors. In 2011 the outcomes of this
project include a study and model of SNSs and
their user types. This division will provide us with
a basis for understanding the culture and business
model of SNSs and the behavior of their users. It
is also necessary to understand the current tactics
used by cybercriminals on SNSs, future threats and
challenges faced by SNSs and ways to deal with
those threats and challenges. Generally SNSs can
be categorised on the basis of the services they are
offering and level of authority they give to their users.
Furthermore, users can be categorised on the basis
of their motives for using SNSs. This permits us to
identify users who can pose threats to other users as well as to the SNS itself. In 2012 this project will
continue to validate our model through a study of
user behavior on existing SNSs including Facebook
and Twitter.

Case Studies Using Multiuser Virtual Worlds as


an Innovative Platform for Collaborative Design
This project investigated the innovative use of
emerging multiuser virtual world technologies
for supporting human-human collaboration and
human-computer co-creativity in design. The project
defined a series of conceptual technology spaces
that describe the different aspects of virtual worlds
that make them useful as platforms for certain types
of collaborative design. The primary spaces were:
design tools for modelling new artefacts, support for
communication, and the ability to incorporate artificial
models of cognitive design processes. Secondary
spaces include the network and graphics technology,
educational/tutorial systems and motivational systems.
In order to support the conceptual technology
spaces for multiuser virtual worlds, a number of case
studies were conducted and examined in the field of
collaborative design using multiuser virtual worlds.
Analysis of these case studies reveals the current
strengths and limitations of multiuser virtual worlds for
supporting human-human collaboration and human-computer co-creativity in design activities. In addition
they suggest extensions of virtual world design
systems beyond small-scale collaborative design
towards large-scale mass participation and collective
design. This work was published in the Journal of
Information Technology in Construction, Special Issue
on Use of Virtual World Technology in Architecture,
Engineering and Construction.

Supporting Collective Intelligence for Design in


Virtual Worlds
This project analysed virtual worlds with reference
to the technological facets that can support of
collective intelligence in design. These include
graphical simulation tools, communication, design
and modelling tools, artificial intelligence, network
structure, persistent object-oriented infrastructure,
economy, governance and user presence and
interaction. We discuss how these facets support the
design, communication, motivational and educational
requirements of collective intelligence applications,
and how these world facets can be adopted for
supporting collective design by drawing analogies
to gaming concepts such as level systems, quests or
plot and achievement/reward systems. We argue that
there is a mapping between these game elements
and the requirements to achieve collective intelligence
in design. In 2011 this work resulted in a case study of Lego Universe, to validate the technology facets defined above. We discuss the potential of Lego Universe or similar tools to move design beyond the individual and small-scale professional design teams to harness large-scale collective design through mass participation. This was published in CAADFutures 2011.

Computational Creativity and Procedural


Content Generation in Computer Games
With rapid growth in both production costs and
player populations over the last decade, the
computer games industry is facing new scalability
challenges in game design and content generation.
The application of computers to these tasks, called procedural content generation, has the potential to
reduce the time, cost and labour required to produce
games. A range of generative algorithms have so far
been proposed for procedural content generation.
However, automated game design requires not only
the ability to generate content, but also the ability
to judge and ensure the novelty, quality and cultural
value of generated content. This includes factors
such as the surprise-value of generated content as
well as the usefulness of content in the context of a
particular game design. Studies of human designers
have identified that the ability to generate artefacts
that are novel, surprising, useful and valuable are
facets of the human cognitive capacity for creativity.
This suggests that computational models of creativity
may be an important consideration for developing
tools that can aid in or automate design processes.
However such cognitive models have not yet been
widely considered for use in procedural content
generation for games. This project has developed a
framework for procedural content generation systems
that use computational models of creativity as a part
of the generative process. A software system has
been implemented that combines the generative
shape grammar formalism with a model of creativity
based on the Wundt curve to select new designs that
are similar-yet-different to existing human designs.
The approach aims to capture the usefulness and
value of existing designs while introducing novel
and surprising variations. The system incorporates
a metric that permits generated designs to be
evaluated in terms of both their similarity to high quality human designs and their creative novelty.
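One common way to express the Wundt-curve idea used for this similar-yet-different selection is as the difference of two sigmoid functions of novelty, so that moderately novel designs score highest; the sketch below uses invented constants rather than those of the implemented system.

import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def wundt_value(novelty, reward_gain=10.0, punish_gain=10.0,
                reward_centre=0.3, punish_centre=0.7, punish_weight=1.2):
    """Hedonic value = attraction to some novelty minus aversion to too much novelty."""
    reward = sigmoid(reward_gain * (novelty - reward_centre))
    punishment = sigmoid(punish_gain * (novelty - punish_centre))
    return reward - punish_weight * punishment

# Score candidate designs by how far they sit from existing designs (novelty in [0, 1])
candidates = {"near copy": 0.05, "variation": 0.45, "radical departure": 0.95}
for name, nov in candidates.items():
    print(f"{name:18s} novelty={nov:.2f} value={wundt_value(nov):+.2f}")

# The moderately novel 'variation' scores highest, capturing the similar-yet-different preference.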


Engineering in Medicine

SEIT Academics
A/Prof Mark Pickering
Dr Andrew Lambert
Dr Murat Tahtali

SEIT Postgraduate Students


Abdullah Al Muhit
Md. Nazmul Haque
Rafiqul Islam
Md Abdullah Masum
Masuma Akter
Md Omar Khyam
Sajib Kumar Saha

SEIT Research Staff


Dr Moyuresh Biswas

Other Collaborators
Trauma and Orthopaedic Research Unit, The Canberra
Hospital
A/Prof Paul Smith
A/Prof Jennie Scarvell
Dr Tom Ward

Research Description
There are many aspects of engineering which can be
applied to improve medical technology. The current
focus of the Engineering in Medicine group is the
application of image and signal processing techniques
to aspects of orthopaedic medicine. In particular,
the research focus has been on the development of
new ways to measure the three dimensional motion
of bones in a joint while the patient is performing
everyday functional tasks. The measurement of how
the bones move can provide valuable information
for many aspects of the treatment of injured joints.
For example, an analysis of pre- and post-operative
motion on patients undergoing total knee replacements
can provide valuable feedback to the designers
and manufacturers of knee implants. The ability to
accurately measure joint motion can also be used in
planning rehabilitation treatments targeting particular
muscle groups to bring the joint motion back into the
normal range after injury or reconstructive surgery.
The research of the group has focussed on developing
improved imaging techniques to fuse 2D motion data
available from standard hospital imaging equipment
with 3D CT data and alternative non-invasive
techniques for kinematic analysis using ultrasound.

Precision Assessment of B-mode Ultrasound for Non-Invasive Motion Analysis of Knee Joints

Visualization of the motion trajectories (kinematics)


of individual bones in a knee joint gives significant
insights to orthopaedic surgeons for the analysis of
knee replacement and reconstruction surgery. A major
focus of the orthopaedic research community is to
restore normal motion to the knee joint after a total knee
replacement or surgery to repair ruptured ligaments.
Kinematic analysis has several important applications
including: providing valuable information during
knee replacement surgery, enabling the comparison
between the motion of normal and abnormal knees
for designing artificial knee components, investigating
how abnormal motion influences the resulting early
wear of the components in an artificial knee joint,
evaluating different types of techniques for ligament
reconstruction, identifying pain and wear inducing
motion and developing therapeutic strategies to
prevent this motion in its early stages. Currently the
standard way to measure the motion with enough
accuracy is by implanting tantalum beads in the bones
prior to imaging using X-ray equipment. However this
technique is invasive and exposure to ionizing radiation
imposes a significant cancer risk. Moreover, during the
procedure the patient cannot perform normal everyday
activities due to their confinement to the limited field of
view of the X-ray equipment. Recently we proposed a
novel non-invasive approach to measure knee motion
using 2D B-mode ultrasound and 2D/2D
image registration. Results from this work show a
maximum deviation of 0.51 mm and 0.42 mm from the
true displacements for the registered horizontal and
vertical motion parameters respectively. The standard
deviation of the error between the true and measured
translations was 0.145 mm and 0.151 mm for the
horizontal and vertical translations respectively.
These precision results compare favourably with the
current clinical standard for kinematic analysis (RSA)
which has a reported precision of 0.25 mm.
Figure 1: The true displacement of the sensor and the displacement measured by the registration algorithm for (a) horizontal translation and (b) vertical translation (2D B-mode US slice number vs. translation in mm).

A New Similarity Measure for Multi-Modal


Image Registration
Image registration is the process of spatially aligning
one image to another. Registration algorithms consist
of two main components: a similarity measure and an
optimization technique. For images captured using the
same sensor the similarity measure used is typically
the sum-of-the-squared difference (SSD) between the
two images. However, if the images to be registered are
captured using different sensors, a linear relationship
between the pixels cannot be assumed and the SSD
will not be a true indication of the spatial alignment
of the images. In such cases, a multi-modal similarity
measure is required such as Mutual Information (MI),
Cross-Correlation or Correlation Ratio. These similarity
measures quantify the relationship between two images
using probability distributions rather than intensity
values. Registration is often required for medical images
of the same patient captured using different imaging
modalities such as MRI, CT and PET. In this project,
a new multi-modal similarity measure was proposed
that was based on calculating the sum-of-conditional
variances from the joint histogram of the two images
to be registered. The formulation of this new similarity
measure allows the standard Gauss-Newton optimization
procedure to be used. To evaluate the performance of
the new similarity measure, we compared the algorithm
with the approach developed by Thevenaz and Unser
for the MI similarity measure. Figure 2 (a) shows the
success rate of the two algorithms and Figure 2 (b)
shows the average registration error at each iteration
over the successful registration attempts performed by
the two algorithms. These results show that our new
approach is more accurate and robust than the most
common and best performing alternative.
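A sketch of one plausible reading of the sum-of-conditional-variances idea (not the published algorithm): bin the intensities of one image, and for each bin accumulate the variance of the co-located pixels of the other image, so that a well-aligned pair with any consistent (even nonlinear) intensity relationship yields a small value. The synthetic images below are invented.

import numpy as np

def sum_of_conditional_variances(ref, tgt, bins=32):
    """Sum over reference-intensity bins of the variance of co-located target pixels.
    Lower values indicate better alignment (a sketch of the idea, not the published method)."""
    edges = np.linspace(ref.min(), ref.max(), bins + 1)[1:-1]
    ref_bins = np.digitize(ref.ravel(), edges)
    tgt_flat = tgt.ravel()
    scv = 0.0
    for b in range(bins):
        values = tgt_flat[ref_bins == b]
        if values.size > 1:
            scv += values.size * values.var()      # count-weighted within-bin variance
    return scv

# Synthetic multi-modal pair: the 'other modality' is a nonlinear remapping of the reference
rng = np.random.default_rng(0)
ref = rng.random((128, 128))
other = np.exp(-3.0 * ref) + 0.01 * rng.standard_normal(ref.shape)

aligned = sum_of_conditional_variances(ref, other)
shifted = sum_of_conditional_variances(ref, np.roll(other, 5, axis=1))   # misaligned by 5 pixels
print(f"SCV aligned: {aligned:.1f}  SCV shifted: {shifted:.1f}")         # aligned should be smaller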

Figure 2: (a) Registration success rate and (b) average registration error at each iteration, for the proposed SCV measure and the MI measure.

Super Resolution of 3D MRI Images Using a


Gaussian Scale Mixture Model Constraint
Magnetic resonance imaging (MRI) is used to capture
images of the human body or parts of the body for
clinical purposes. An MRI scanner is capable of
acquiring 2D cross-sectional images of the human body
from any orientation. It is a non-invasive method and
uses strong magnetic fields and non-ionizing radiation
in the radio frequency range. In multi-slice magnetic
resonance imaging (MRI) the resolution in the slice
direction is usually reduced to allow faster acquisition
times and to reduce the amount of noise in each 2-D
slice. In this project, a novel image super resolution (SR)
algorithm was developed to improve the resolution of
the 3D MRI volumes in the slice direction. The proposed
SR algorithm uses a complex wavelet-based deblurring approach with a Gaussian scale mixture model
sparseness constraint. The algorithm takes several multislice volumes of the same anatomical region captured
at different angles and combines these low-resolution
images together to form a single 3D volume with much
higher resolution in the slice direction. Our results
showed that the 3D volumes reconstructed using this
approach have higher quality than volumes produced by
the best previously proposed approaches.

Maximum-Entropy
Analyses of Flow
Systems

SEIT Academics
Dr Robert Niven

Other Collaborators
Australian National University, Australia
Prof. Roderick Dewar
Dr Charley Lineweaver
CNRS Poitiers, France
Prof. Bernd Noack
CSIRO / University of Western Australia, Australia
Prof. Klaus Regenauer-Lieb
University of Hiroshima, Japan
Dr Hisashi Ozawa

Research Description
This theme concerns the concept of entropy, a
measure of the disorder of a system, and one of the
most profound but least understood discoveries of
human knowledge. As shown by Boltzmann, entropy
is based on probabilistic (combinatorial) concepts,
providing the tool to predict the most probable state
of a system. Although this idea is widely applied
in statistical mechanics and thermodynamics,
the fundamental concept has far broader power
of application, being applicable to all systems of
probabilistic character. The potential for new methods
for analysis of many scientific, engineering and human
systems - to replace a variety of empirical and semi-theoretical methods - is especially strong.
In this project, the generic maximum entropy method
(MaxEnt) developed by Jaynes in 1957 was
used to infer the state of several different types of
probabilistic systems. Research was undertaken on
several interrelated projects, as listed below.
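As a minimal numerical illustration of the MaxEnt recipe itself, the sketch below maximises the Shannon entropy of a discrete distribution subject to normalisation and a single mean-value constraint (the textbook six-state example with an assumed mean of 4.5). The flow-system applications described below impose physical constraints instead, but the inference step has this same form.

import numpy as np
from scipy.optimize import minimize

states = np.arange(1, 7)            # six discrete outcomes
target_mean = 4.5                   # the only constraint beyond normalisation (assumed value)

def neg_entropy(p):
    p = np.clip(p, 1e-12, None)
    return np.sum(p * np.log(p))    # minimising this maximises the Shannon entropy

constraints = (
    {"type": "eq", "fun": lambda p: np.sum(p) - 1.0},
    {"type": "eq", "fun": lambda p: np.dot(p, states) - target_mean},
)
res = minimize(neg_entropy, x0=np.full(6, 1 / 6), bounds=[(0, 1)] * 6, constraints=constraints)

# The analytical MaxEnt solution is an exponential (Gibbs-like) distribution p_i ~ exp(-lambda * x_i)
print("MaxEnt probabilities:", np.round(res.x, 4), " mean:", float(np.dot(res.x, states)))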

Maximum-Entropy Closure of Steady-State


Fluid Flow Systems
This project involves the prediction of the steady state
of a non-equilibrium flow system, for which three
approaches have been developed:
1. During a 3-month visit by Dr Niven to the Institut
Pprime fluid mechanics laboratory of the Centre
National de la Recherche Scientifique (CNRS) /
Université de Poitiers / ENSMA, Poitiers, France,
hosted by Prof. Bernd Noack, a new approach
was developed for a Galerkin model of an
incompressible periodic cylinder wake, which
employs a MaxEnt method for system closure.
The analysis predicts mean amplitude values
and modal energy levels in good agreement with
direct Navier-Stokes (DNS) simulation, in effect
supplanting the need for DNS analysis. In addition,
it provides an analytical equation for the modal
energy distribution. The authors believe this work
to be a major research achievement, which could
pave the way for a new, scientifically defensible
turbulence closure method without the need for
artificial constructs (such as the eddy viscosity).
The research findings have been summarised in a
50-page manuscript which has just been accepted
for publication (Noack & Niven, in press).
Dr Niven will host a reciprocal visit by Prof. Noack
to UNSW Canberra during April 2012, enabling
accelerated research on this theme.
2. A MaxEnt analysis of an infinitesimal element
within a control volume, using an entropy function
defined on the set of fluxes through the element.
In specific circumstances, this analysis provides
a derivation of the maximum entropy production
(MaxEP) principle, currently used as an empirical
heuristic to predict the steady state of many non-equilibrium flow systems. This research unites all
fields which involve non-linear flow phenomena
(e.g. turbulent fluid flow, convective heat transfer,
biochemical degradation). Building on a major
theoretical foundational work published in
2009, this research led to one further refereed
conference / book chapter publication during
2011 (Niven, in press 2).
3. A MaxEnt analysis of fluid flow in a simple
internal flow (such as flow in a pipe) was also
developed. During a 1 month visit by Dr Ali
Ghaderi of REC Wafer Norway AS, Porsgrunn,
Norway, hosted by Dr Niven of the School, the
body of previous research on this topic by C.L.
Chiu was extended into full three-dimensional
analysis, involving probability density and
entropy functions defined on phase space
coordinates. This research was also integrated
with a new approach to the choice of prior
probabilities within MaxEnt, developed by
Dr Ghaderi. This work is currently being
further developed and will soon be summarised
for publication.


Maximum Entropy and Maximum Entropy


Production Analyses of the Earth and
Extrasolar Climate Systems
Employing the MaxEnt method, a simple framework
was developed by Dr Niven to enable the synthesis
of multiple climate models of the Earth climate
system. Two approaches were developed, based
on sets of individual climate models or ensembles
of models, analogous to the microcanonical and
canonical ensembles of thermodynamics. The
primary advantage of these approaches is the ability
to directly predict the optimum (most probable)
conditions of a suite of models, without the need for
calculation of the entire set. A connection was also
made to cost-benefit analysis within (any) modelling
framework. This research led to one publication in
2011 (Niven, in press 1).
Research was conducted in collaboration with Dr
Hisashi Ozawa, University of Hiroshima, Japan, on
applications of the MaxEP principle to the analysis of
planetary climate systems, with specific application
to solar and extrasolar planets. This work is currently
being summarised for publication.
This research theme also involved a number of invited
seminars to research groups in Australia, Canada,
France and the UK, including a keynote presentation
by Dr Niven to the 31st International Workshop on
Bayesian Inference and Maximum Entropy Methods
in Science and Engineering, Waterloo, Canada, 10-15
July 2011. Dr Niven was also co-host of the Maximum
Entropy Production Workshop, Australian National
University, Canberra, Australia, 12-14 September
2011. Dr Niven and collaborators are currently
organising a further international conference on
MaxEP and related principles, to be held in Australia
in December 2012.


Geotechnical Engineering and Pavement Geotechnics

SEIT Academics
A/Professor Robert Lo
Dr Rajah Gnanendran

SEIT Postgraduate Students


Mr Abdul Lahil Baki
Mr Rajibul Karim
Mr Dalim Paul
Mr. Ariful Islam
Mr. Ohiduz Zaman
Mr. Jiajun Zhang
Mr. Alam Iftekhar
Mr. Nurul Islam
Mr. Mathanraj Theivakularatnam

Other Collaborators
Road and Traffic Authority (RTA), NSW
Queensland Department of Transport and Main Roads
SafeLink Joint Venture
Maccaferri Australia Pty Ltd
Nehemiah Reinforced soil, KL and Syd.
Nanyang Technological University, Singapore
University of Nottingham, UK
Centre of Research and Professional Development, HK
Indian Institute of Technology, Madras, India
Swinburne University of Technology, Melbourne


Research Description
Geotechnical engineering is a vital part of civil
engineering that deals with the engineering aspects
of soils and rocks which are collectively referred as
geomaterials. The designs of every building, bridge
or any other civil engineering structure built on the
ground must give due consideration to the underlying
and/or surrounding geomaterials. Among the
geomaterials, soft clays are widely found in Australia
and around the world and they are problematic for
constructing civil engineering structures due to their
low shear strength, high water content and large time
dependent deformation characteristics. However, due
to rapid growth of infrastructure and transportation
development and environmental considerations, the
necessity of constructing road embankments and other
structures on such soft soils is common. Excessive ground deformation, which is a common scenario in such problematic soils, causes severe damage to pavements and other related structures, and research in soft soil engineering forms an important part of
our research activities. Another principal area of our
research concerns the instability and liquefaction
potential of sandy soils under cyclic or dynamic
loading conditions such as earthquakes.
Soils being weak in tension, different stabilization
methods are adopted for overcoming design and
construction problems involving them (e.g. steep
slopes, retaining walls). The use of geosynthetic
reinforcement has recently been advocated as an economical method for stabilising such
soil structures. However, design of reinforced soil
structures depends on the interaction behaviour
between the soil and the reinforcement and research
is under way in this area also.
Road pavements are constructed with geomaterials
and hence their designs are influenced by the
engineering behaviour of pertinent geomaterials
under the influence of the environmental and traffic
loading conditions. Thousands of kilometers of
granular (gravel) base pavements (i.e. pavements
without a structural asphalt layer) exist in Australia
and a number of these pavements fail prematurely.
Important research initiatives such as light stabilization
of granular materials using cement blended with
slag or flyash and incorporating unsaturated soil
mechanics principles to characterise the behaviour of
pavement materials are undertaken by our group to
address this problem with the objective of developing
innovative solutions.

Soft clay engineering


Road embankments constructed on very soft clay
along the east coast of Australia may manifest very
high settlement in the order of 1m or more, even
for moderate embankment height. The design and
contractual issues of such embankments are further
complicated if the soft clay manifests creep and/
or sensitive behaviour. Extensive research is being
carried out in this area, supported both in-cash and
in-kind by RTA, NSW.
Currently our research is focussed on the prediction
of long term performance of soft soil behaviour using
the new elasto-viscoplastic (EVP) model developed
recently incorporating nonlinear variation of the creep
coefficient and a newly proposed yield surface (Karim, Manivannan, Gnanendran and Lo 2011). The
predictive capability of this model has been assessed
by analyzing the long term performance of Leneghans
embankment and it has been found to be superior
compared to elasto plastic and other EVP models
(see Karim et al. 2011 and Manivannan, Karim,
Gnanendran and Lo (2011) for further details).
When the application of traditional ground
improvement techniques such as surcharge
preloading, wick drains and vacuum preloading are
not appropriate for a particular situation, innovative
techniques such as electro-osmosis need to be considered. Though the effectiveness of electro-osmosis has been widely demonstrated in many field applications, geotechnical engineers are still hesitant to apply electro-osmosis due to as-yet-unquantified effects, such as electro-chemical effects, which could not be accounted for in the design.
An electro-osmotic triaxial testing apparatus (see Fig. 1 below), suitable for electro-osmotic treatment of soft clays and for measuring the electro-osmotic permeability and the generated pore water pressure, together with a testing procedure that accounts for the contribution of electro-chemical changes to the improvement of soil properties, was developed by Jeyakanthan, Gnanendran and Lo (2011).
Figure 1. Photographs of experimental setup
for electro-osmotic treatment and the newly
developed top and bottom caps with electrodes
(from Jeyakanthan, Gnanendran and Lo 2011)


Liquefaction and instability of sand with fines

We continued our research in this area, which has been ongoing for a number of years and is in collaboration with Nanyang Technological University, Singapore, and the University of Nottingham, UK. We demonstrated that a single relationship between the equivalent granular state parameter and the instability stress ratio can be used for sand with a range of fines contents, where the instability stress ratio is the effective stress ratio that defines the triggering of static liquefaction (see Fig. 2 below). This relationship is referred to as the instability curve, since static liquefaction, in the context of continuum mechanics, is instability under undrained loading. The instability curve can also be used to predict the triggering of cyclic liquefaction of loose sand with fines, where the term loose is defined by a clearly positive equivalent granular state parameter (higher than 0.045, the experimental error in its determination).

Figure 2. Triggering of cyclic liquefaction of loose sand with fines (30% fines content): (a) deviatoric stress q (kPa) versus effective confining stress p' (kPa); (b) deviatoric stress q (kPa) versus axial strain (%).

Pullout resistance of soil reinforcement


An extensive experimental study that involved the
testing of two types of reinforcement with the same
source of select fill was completed this year. The two
types of reinforcements are: ribbed steel strip and
steel ladder. The select fill used contains ~17% fines and was obtained from a borrow area earmarked for an actual construction project. There were two unexpected findings: the bearing resistance was mobilised at 100% to 200% of the bar diameter, and the Nb-factor increased at overburden stresses less than 60 kPa (see Fig. 3). The former leads us to question whether, in addition to the bearing capacity analogy, there are other mechanisms at play. The latter suggests the presence of constrained dilatancy.

Figure 3. Variation of Nb-factor with test pressure (kPa).

Lightly stabilised granular materials



Australia has one of the largest road networks in


the world and much of this network is over 40 years
old, which is typically the design life of a pavement.
Hence, each year, a significant length of road requires
rehabilitation or reconstruction to sustain the road
infrastructure at an acceptable level. Moreover, many
of these roads are being widened and reconstructed
to cater for the rapid growth of road freight and traffic.
Along with this growth, increase in axle loads from
heavier vehicles such as B-doubles and road trains
is also taking place, which substantially increases
the deterioration rates of existing pavements.
An overwhelming challenge is, therefore, how we
can cleverly engineer the construction of new
or reconstruction/rehabilitation of existing road
pavements in an environmentally sustainable and
cost-effective way.
Pavements are generally constructed with granular
materials compacted in layers over the natural
road bed material referred to as the subgrade.
The engineering characteristics of granular materials can be improved dramatically by mixing Portland cement with them, which is referred to as stabilization, but the cost of cement is quite high and hence this is not always practised. An economical and environmentally friendly method of stabilizing them is to use binders such as blends of slag and lime, or flyash and Portland cement, in small quantities, which is referred to as light stabilization.

We continued with the laboratory investigation on


the characterisation of a freshly quarried granular
base material lightly stabilised with slag-lime
cementitious binder involving unconfined compression
(UC) testing and monotonic as well as cyclic load
Indirect Diametrical Tensile (IDT) testing, both with
internal displacement measurements. The UC
test investigation involved the determination of the
unconfined compressive strength (UCS) and four
different types of stiffness moduli from both internal
and external displacement measurements. The IDT
testing included the determination of IDT strength
as well as the static and dynamic stiffness moduli
of the lightly stabilised granular base material from
monotonic and cyclic load IDT testing.
The major distress modes involving cementitiously
stabilized granular materials in road pavements
are fatigue cracking and permanent deformation
and they are being investigated through pavement
model testing. In particular, the pavement model
testing method and its suitability for determining the
stiffness modulus, fatigue and permanent deformation
properties of a pavement structure constructed with
a cementitiously stabilized granular base layer and a
clay subgrade layer was investigated.

A new laboratory pavement model testing setup


with extensive instrumentation to measure soil
deformations and strains was developed (see Fig.
4). The suitability of this testing arrangement for
determining the stiffness, fatigue and permanent
deformation characteristics of a pavement structure
was examined by studying the characteristics of a
granular base material stabilized with 1.5% general blend (GB) cement-flyash and of a clay subgrade material. The test was continued at a frequency of 3 Hz for up to 8 million load cycles, and the measured horizontal tensile strain at the bottom of the stabilised base layer was used to determine the fatigue life of the stabilised layer (see Fig. 5 for typical results). This study indicates that the deformation and strain measurement setups developed for pavement model testing are suitable for undertaking accelerated cyclic load pavement model tests to determine the stiffness, fatigue and permanent deformation properties of the materials reliably (see Gnanendran et al. 2011).

Figure 4. Accelerated Pavement Model Test on cemented base and clay subgrade
(from Gnanendran, Piratheepan, et al. 2011)

Figure 5. Typical vertical deformation and horizontal strain responses obtained from accelerated pavement model testing (from Gnanendran, Piratheepan, et al. 2011)


Pavement materials as unsaturated soil


We started our preliminary study into the influence
of fines on unbound granular base materials, with a
particular focus on developing a unified framework
(based on unsaturated soil mechanics) to explain the
often divergent findings reported in literature. Increase
in fines changes the maximum dry density (MDD),
optimum moisture content and therefore its effect
cannot be easily isolated. More importantly it also
changes the soil water characteristic curves (SWCC), as illustrated in Fig. 6. This means that no matter how we make the comparison, the initial matric suction will be increased. However, there may also be an opposite
effect, the deformability may be reduced by the
increase of fines as illustrated in Fig. 7.

Figure 6. SWCC (degree of saturation, %, vs. matric suction, kPa) for 10% and 15% fines content, with both specimens at their respective MDD.

Figure 7: Influence of fines content on accumulation of permanent strain (axial strain, %) with number of load cycles, N (log scale); series C15-540, C15-450, C15-360, C10-H-540, C10-H-450 and C10-H-360.


High Frequency
Engineering

SEIT Academics
Dr Greg Milford
Dr Robin Dunbar
FLTLT Matt Gibbons (Visiting Fellow)


SEIT Postgraduate Students


Ms Rajpreet Kaur Gulati
Ms Le Chen

Other Collaborators
Australian National University
Dr Ilya Shadrivov
Dipartimento di Elettronica e Telecomunicazioni,
Politecnico di Torino
Dr Ladislau Matekovits

Research Description
The High Frequency Engineering Research group
conducts theoretical and applied research in the
fields of antennas, microwave and millimetre wave
electronics, and computational electromagnetics.
The following paragraphs summarise the group's
activities in 2011.

Equivalent Circuit Modeling of Planar


Structures
The left-handed propagation behaviour of artificial
metamaterial structures offers a new paradigm
for electromagnetic device design. Transmission
line metamaterials, or composite right-left hand
transmission lines (CRLH TL) have been shown to
achieve wider bandwidth and lower losses than
resonance based left-handed structures, and a
range of wave-guiding and radiation applications
have been demonstrated. The analysis and design
of CRLH TL structures is greatly facilitated if an
equivalent circuit model is available for the CRLH
unit cell. Such equivalent circuits enable derivation
of closed form expressions for the key performance
characteristics of the CRLH structure (such as cutoff
frequency values, dispersion behaviour and Bloch
impedance). In this work we demonstrate how a
direct solver approach can be used to develop a unit
cell equivalent circuit from knowledge of a single unit
cell's frequency response.

Figure 1.1 Top view (above) and photo (below)


of a single stage interdigital capacitor, shunt
inductor (IDC-SI) unit cell implemented in
Grounded Coplanar Waveguide. Vias are used
to connect top and bottom ground planes

Figure 1.2 Equivalent circuit model for the IDC-SI


unit cell of Fig. 1.1, excluding feed-lines and
coaxial connectors.

Figure 1.1 illustrates a single CRLH unit cell consisting of a pair of series-connected interdigital capacitors (IDC) and shunt-connected inductors (SI), implemented in Grounded Coplanar Waveguide (GCPW). Also shown in this figure are vias for connecting the top and bottom ground planes of the GCPW structure, thereby suppressing parasitic higher-order modes in the GCPW.

An equivalent circuit model for the IDC-SI unit cell is shown in Fig. 1.2. Values for each of the six unknown equivalent circuit components (ie. LR, CL, LL, CR, Cp and Rs) are obtained using pseudo-inverse techniques to directly solve the over-determined matrix equation formed by comparing the series and shunt arm immittances in Figure 1.2 with the equivalent 2-port ABCD parameter terms, where the latter are calculated from the measured or simulated S-parameter data values over a range of frequencies.
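To illustrate the pseudo-inverse step on a reduced problem (two reactive elements and a loss resistance rather than the full six-element model of Fig. 1.2, and with synthetic rather than measured data), the sketch below stacks the real and imaginary parts of a series-arm impedance over frequency into an over-determined linear system and solves it by least squares.

import numpy as np

rng = np.random.default_rng(0)

# Synthetic 'measured' series-arm impedance Z = Rs + jwLR + 1/(jwCL), with a little noise.
Rs_true, LR_true, CL_true = 0.8, 2.0e-9, 0.5e-12           # ohm, henry, farad (invented values)
f = np.linspace(1e9, 8e9, 101)
w = 2 * np.pi * f
Z = Rs_true + 1j * w * LR_true + 1.0 / (1j * w * CL_true)
Z = Z + 0.02 * (rng.standard_normal(w.size) + 1j * rng.standard_normal(w.size))

# Real part: Re(Z) = Rs.  Imaginary part: Im(Z) = w*LR - (1/CL)/w.
# Stack both over all frequencies into an over-determined system A x = b, x = [Rs, LR, 1/CL].
A = np.block([
    [np.ones((w.size, 1)), np.zeros((w.size, 2))],
    [np.zeros((w.size, 1)), w[:, None], -1.0 / w[:, None]],
])
b = np.concatenate([Z.real, Z.imag])

x, *_ = np.linalg.lstsq(A, b, rcond=None)                   # pseudo-inverse (least-squares) solution
Rs_fit, LR_fit, invCL_fit = x
print(f"Rs = {Rs_fit:.2f} ohm, LR = {LR_fit * 1e9:.2f} nH, CL = {1.0 / invCL_fit * 1e12:.3f} pF")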


To evaluate the performance of the equivalent


circuit extraction process, both Network Analyser
measurements and full-wave simulation data (using
Agilent's ADS Method of Moments solver, Momentum)
were generated for the structure in Figure 1.1. Figure
1.3 compares the frequency responses obtained
using the extracted equivalent circuit models with
the measured and simulated data. In both cases
very good agreement between the respective model
and original S-parameter frequency responses was
obtained, up to the right-hand cutoff frequency
around 10 GHz.
The S-parameter behaviour observed in the measured
and simulated responses above 10 GHz is due to
(undesirable) non-CRLH unit cell behaviour, and
reflects a limitation of the IDC-SI structure for CRLH
unit cell implementation. Also, the close agreement
between the measured and simulated responses
indicates accurate full-wave modeling of the structure.
The use of this direct solver approach provides a
computationally efficient alternative to iterative or
optimisation-based approaches, and is more accurate
than existing approximate methods for calculating
equivalent circuit component values.

Figure 1.3 Comparison between the frequency


response reproduced from the extracted
equivalent circuit model with the measured
(above) and full-wave simulated (below)
frequency responses, showing good agreement
up to the right-hand cutoff frequency.


Width-Modulated Periodic Structures


The demand for real-time reconfigurable radio
frequency devices for applications such as cognitive
radio and smart antennas has stimulated much
research into new approaches for adjustable
component design. Generally, such multi-band,
multi-purpose circuits require some active device
for switching or tuning the desired transfer function
of an associated passive device. Additionally, power
consumption is an important consideration with
portable devices, hence low power consumption
by active devices and low loss passive circuits are
desirable. In this work we investigate varactor diode
tuning of a modulated width microstrip transmission
line to produce a multi-band wave guiding structure
with predictable frequency response characteristics.
This approach has the advantages of no power
supply consumption by the active devices (ie.
reverse biased varactor diodes), and tunability of
the inherent multi-band behaviour of the periodic
microstrip structure.
Firstly, we validate the approach taken to efficiently simulate the periodic microstrip structure, which is described as follows. Figure 2.1 illustrates the variation of the effective permittivity of a microstrip line with line width, while Figure 2.2 illustrates the corresponding width profile along the axis of the microstrip line that produces a sinusoidal variation in effective permittivity of modulation index 0.17 (i.e. +/-17%) about an average value of 6.671, over a modulated-width unit cell of length 7.5 mm. Fig 2.3 illustrates the fabricated circuit
consisting of a cascade of 20 such unit cells.
The frequency response of the modulated width line
of Figure 2.3 is illustrated in Figure 2.4, showing a
low-pass response with a cutoff frequency of about
4 GHz, and multiple band-pass responses from 10
GHz upwards. Also shown in Figure 2.4 is a simulated
frequency response obtained by performing a
full-wave (Method of Moments) simulation of a single
unit cell (Figure 2.2(b)), with access ports defined
at the two narrower ends of the structure. This is
followed by a circuit simulation of a cascade of 20
identical 2-port networks, where the S-parameters
for these 2-ports are linked to the full-wave simulation
data. A layout artwork is then generated from the
circuit schematic for subsequent fabrication. Good
agreement between measured and simulated data
can be observed in Figure 2.4, confirming that the
response of the periodic cascade of unit cells can
be calculated from knowledge of a single unit cell's 2-port characteristics.
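The cascading step can be sketched independently of the full-wave data: represent the unit cell as an ABCD matrix, raise it to the 20th power, and convert back to S-parameters. Below, the unit cell is an invented series-L/shunt-C lumped cell purely so the example is self-contained; in the reported work the unit-cell two-port would come from the measured or full-wave-simulated S-parameters instead.

import numpy as np

Z0 = 50.0                                     # reference impedance (ohm)
L, C = 3.2e-9, 1.3e-12                        # invented series-L / shunt-C unit cell values
N = 20                                        # number of cascaded unit cells

def unit_cell_abcd(w):
    """ABCD matrix of one illustrative lumped unit cell: series L followed by shunt C."""
    series = np.array([[1.0, 1j * w * L], [0.0, 1.0]])
    shunt = np.array([[1.0, 0.0], [1j * w * C, 1.0]])
    return series @ shunt

def abcd_to_s21(m):
    """Transmission coefficient of a 2-port described by an ABCD matrix (Z0-referenced)."""
    A, B, C_, D = m[0, 0], m[0, 1], m[1, 0], m[1, 1]
    return 2.0 / (A + B / Z0 + C_ * Z0 + D)

for f in (1e9, 2e9, 3e9, 5e9, 8e9):
    w = 2 * np.pi * f
    cascade = np.linalg.matrix_power(unit_cell_abcd(w), N)    # 20 identical cells chained
    s21_db = 20 * np.log10(abs(abcd_to_s21(cascade)))
    print(f"{f / 1e9:4.1f} GHz  |S21| = {s21_db:7.2f} dB")     # pass-band then sharp cutoff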
Secondly, this co-simulation approach is used to predict
the performance of the active (tunable) transmission line
structure formed by inserting shunt-connected varactor
diodes at one end of each of the modulated width unit
cells. At this position the microstrip line is the narrowest
(highest impedance), maximising the effect of the shunt-connected varactor reactances.

Figure 2.5 illustrates the simulated reflection and


transmission S-parameter and extracted dispersion
responses for the 20 stage circuit, for two different
bias voltages. These results show virtually no
change in performance over the low-pass region,
but tunability of the band-pass regions. Although
the circuit of Figure 2.3 has a length of 150mm,
this approach could be used to produce a much
smaller device suited to applications requiring a fixed
lower passband and adaptable upper passbands.
In addition the co-simulation approach offers a
computationally efficient approach to periodic
structure analysis.

Figure 2.1 Variation of effective permittivity εeff of the microstrip line with line width. Vertical dashed lines indicate εeff = 5.537, 6.671, 7.805, being the minimum, average and maximum values of εeff (substrate properties: εr = 10.2 with thickness of 0.025 inch).

Figure 2.2 Modulated width unit cell: (a) width


contour using data in Fig. 2.1 to achieve
the desired sinusoidal variation of effective
permittivity, and (b) top view of 7.5mm long
by approximately 4mm wide varying width unit
cell, showing a finite element mesh for full-wave
simulation

Figure 2.3 Photo of 20-stage structure, including a 50 Ohm through-line for calibration purposes. Overall
length is 150mm.


Nonlinear Transmission Line Metamaterials


Incorporation of nonlinear components into a CRLH
TL produces a nonlinear (NL) CRLH TL. These structures have been shown to exhibit a wide range of tunable nonlinear propagation phenomena due to the interaction of the dispersion characteristics of the CRLH TL with the nonlinear elements. Such circuits produce harmonic generation, parametric amplification and oscillation, but can also exhibit unstable behaviour as the bias and input drive conditions are varied.

Figure 2.4 Reflection (top) and transmission


(bottom) coefficient of the 20 stage periodic
structure of Fig. 2.3. Very good agreement is
observed between measured and simulated
responses.

Figure 3.1 illustrates a 20 stage NL CRLH TL circuit where series-connected varactor diodes in each unit cell are used to produce the nonlinearity. Figure 3.2 illustrates this circuit's dispersion characteristic. The varactor diodes can be tuned for a balanced band-pass response, in which case low-loss propagation occurs from the left- to the right-handed region with no bandgap. Since a wave-guiding structure can function as a leaky wave antenna (LWA) if the guided wave's propagation coefficient is less than the free-space wave number, this dispersion characteristic predicts LWA operation for frequencies within the light cone or fast wave region of Figure 3.2.
In this work we show how the parametric frequencies
generated by the NL CRLH TL under large signal
drive conditions can be controlled, such that at
least one parametric frequency (frequency f2) lies
within the fast wave region of Figure 3.2, leading to
LWA radiation of this parametric frequency. Figure
3.3 illustrates the measurement configuration for
characterising the received spectra as a function of
azimuth rotation angle of the NL CRLH circuit. Figure
3.4 shows an example of the measured spectra
at a particular azimuth position, for three different
pump frequencies as indicated. Although the pump
power levels are significantly greater than the two
parametric frequency power levels in the circuit, the
measured spectra show much greater f2 parametric
frequency power levels compared to the lower
parametric frequency f1, and comparable f2 and
pump power levels, indicating much more efficient
radiation of f2 frequency components.

Figure 2.5 Frequency response of the varactor


loaded periodic structure, for varactor bias
voltages of -10V and -3V, illustrating (above)
fixed low-pass pass-band and tunable bandpass pass-bands, and (below) dispersion
characteristic indicating the slow wave
behaviour of the varactor loaded 20 stage
periodic structure

56

Engineering and Information Technology Research Report 2011

Figure 3.1 20 stage NL CRLH circuit using


varactor diode nonlinearites. Decoupled DC
varactor bias voltages are supplied with the
circuitry to the upper part of the circuit, with
coaxial connectors at the input and output port.

Figure 3.3 Measurement configuration for


measurement with helical receive antenna in
the foreground and NL CRLH TL with power
amplifier mounted on the rotating pedestal.
Zero azimuth angle is defined broadside to
the plane of the CRLH circuit, with a positive
increase to the left.
Figure 3.2 Dispersion characteristic for the
NL CRLH TL circuit of Fig 3.1, showing the
agreement between measured, simulated and
lumped element equivalent circuit modeling.
Dotted lines indicate the light lines separating
the slow and fast wave regions.

The variation with azimuth of the f2 parametric
frequency component is illustrated in Figure 3.5,
where the amplitudes are normalised to the peak
value for each azimuth scan. Most of the azimuth
scans show a distinct centralised main lobe, with a
beam-width of around 50 degrees, consistent with a
rule of thumb half power beam-width calculation of
λ/D where D is the length of the NL CRLH TL circuit
(91mm). The beam center position varies with the f2
frequency, increasing in azimuth as the parametric
frequency f2 increases.
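As a rough numerical check of this rule of thumb (values taken from the text; the arithmetic is ours, not part of the reported work):

import math
c = 3.0e8       # free-space speed of light (m/s)
f2 = 3.2e9      # representative f2 frequency from Fig. 3.4 (Hz)
D = 0.091       # NL CRLH TL length (m)
print(math.degrees((c / f2) / D))   # ~59 degrees, the same order as the ~50 degrees observed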

This frequency scanning behaviour is consistent with
theoretical predictions, as can be seen in Figure
3.6, showing the variation of the LWA scan angle
with frequency, using both the circuit simulated
and measured dispersion data of Figure 3.2,
superimposed with asterisks indicating the beam peak
positions of Figure 3.5. The observed beam center
values for the left-hand pass-band, that is for f2 less
than 3.80 GHz (transition frequency), follow the same
trend as the predicted scan angles obtained using
either the simulated or measured data. However, this
is not the case for right-hand f2 values; instead, these
frequencies fit more closely to a flipped scan-angle
response, indicated by the dotted curves in Figure
3.6, consistent with a reversal of the propagation
direction between the LH and RH regions for the f2
frequency. The variation of the main lobe direction
appears consistent with leaky wave behaviour, and
the half-power beam-width is consistent with radiation
along the length of the structure.
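The scan-angle curves of Figure 3.6 follow from the standard leaky-wave relation theta = arcsin(beta/k0). A minimal sketch of this calculation, assuming the extracted phase constant beta (rad/m) is available as a tabulated function of frequency (this is not the actual post-processing script used for the figure):

import numpy as np

def lwa_scan_angle(freq_hz, beta_rad_per_m):
    """Leaky-wave scan angle (degrees from broadside), theta = arcsin(beta/k0).
    Only fast-wave points (|beta| < k0) radiate; others are returned as NaN."""
    k0 = 2.0 * np.pi * np.asarray(freq_hz) / 3.0e8
    ratio = np.asarray(beta_rad_per_m) / k0
    angle = np.degrees(np.arcsin(np.clip(ratio, -1.0, 1.0)))
    return np.where(np.abs(ratio) <= 1.0, angle, np.nan)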

Figure 3.4 Spectrum of the measured signal radiated by the NL CRLH circuit, showing comparable amplitudes at the f2 and pump frequencies (approx. 3.2 GHz and 5 GHz respectively) and much weaker f1 frequencies (around 1.8 GHz).


Figure 3.6 Leaky wave antenna scan angle calculated from the extracted measured (solid) and simulated (dashed) propagation coefficients. Asterisks indicate the approximate beam peaks from Figure 3.5. Dotted curves correspond to scan angle response if propagation direction is reversed above 3.803 GHz.

Figure 3.5 Measured receive power (normalised to the scan peak value) variation with azimuth scan angle, obtained using the measurement setup in Figure 3.3. Parametric f2 frequencies are as indicated.


High-Speed Flows and Microfluidics

SEIT Academics
A/Prof Sudhir Gai
A/Prof Harald Kleine
Dr Jong-Leng Liow
Dr Neil Mudford
A/Prof Andrew Neely
Dr Sean O'Byrne
Dr Krishna Shankar
Dr John Young

SEIT Research Staff


Dr Joseph Kurtz (Research Associate)
Dr Mark Aizengendler (Electronics Engineer)
Dr Carlos Rodriguez (Research Associate)

SEIT Postgraduate Students


Mr Ashraf Ali
Mr Stefan Brieschenk
Mr Rishabh Choudhury
Mr Arnab Dasgupta
Ms Priyanka Dhopade
Mr Zhipeng Gu
Mr Varun Prakash
Mr Deepak Narayan Ramanath
Mr Md. Mahfuzur Rahman Shah
Mr Vikram Sridhar
Mr Zhaolong Wang
Mr Sven Wittig
Mr Guofeng Zhu

Other Collaborators
BAE Systems
Adam Billiards, James Whitford
Colorado State University (USA)
Prof. Ranil Wickramasinghe
The Defence Science and Technology Organisation
Dr Judy Odam, Dr Allan Paull, Dr Nigel Smith
McGill University (Canada)
A/Prof. Eugene Timofeev
NASA Langley Research Center (USA)
Dr James Moss
Ohio State University (USA)
Prof. Walter Lempert
RWTH Aachen (Germany)
Prof. Herbert Olivier
Tianjin University
Xiubing Jing
United States Naval Academy
David Myre
University of New South Wales
A/Prof. Tracie Barber, Dr Robert Nordon, A/Prof Hans Riesen
(PEMS), A/Prof. Gary Rosengarten, A/Prof. John Fletcher, Prof
Wang Jun, Dr Li Huaizhong

University of Queensland
Prof. Russell Boyce, Dr Tim McIntyre, Prof. Richard Morgan
University of Southern Queensland
Prof. David Buttsworth
University of Western Australia
Prof. Yee-Kwong Leong
University of the Witwatersrand (South Africa)
Prof. B. Skews

Research Description
In 2011, research continued in the areas of very
high-speed flows at supersonic and hypersonic Mach
numbers and for very small-scale, low-speed flows for
microfluidic applications. The research on high-speed
flows is relevant to the development of vehicles
for high-speed flight and planetary exploration, both
in terms of the external aerodynamics and heat
transfer associated with atmospheric flight at these
speeds and in terms of the development of propulsion
systems, such as scramjets, to power these vehicles.
Quite separately, the investigation of microfluidics is
concerned with the scaling of fluid flows to very small
geometries and their application to small chemical
processing systems often for biomedical needs.
The research performed in both areas ranged from
fundamental studies to improve our understanding
of the underlying physics governing these flows to
more application-based studies. These investigations
incorporated a wide range of experimental, numerical
and analytical techniques.

SCRAMSPACE supersonic combustion flight test (O'Byrne, Neely, Ray, Petersen, Kurtz, Aizengendler, Rodriguez, Krishna, Wittig, Ur Rehman, Dasgupta)
This project is part of the Australian Space Research
Program SCRAMSPACE project, performed in
conjunction with the University of Queensland and
13 other national and international organisations.
The aim of the project is to successfully launch a
scramjet with an axially symmetric nozzle to achieve
supersonic combustion at Mach 8 over a range of
altitudes. UNSW Canberra is contributing its expertise
in laser diagnostics, thermal paints, optimisation and
control theory to the project.
In 2011 we continued the development of a
diode-laser-based oxygen sensor to measure the flow
speed and temperature in the inlet of a supersonic
combustion ramjet engine. The design phase was
completed, and construction of a prototype has begun.
We have developed miniaturised laser current and
temperature controllers that can be used under flight
conditions. Our group has developed models of the
expected heating of the sensor during flight, and has
tested the performance of the diode lasers under these
extreme temperature conditions and integrated the
optical sensor with the rest of the scramjet payload.


Progress has also been made in the development of hypersonic control algorithms and of constrained optimisation algorithms for optimal inlet design.

Scramspace sensor source optical unit

CARS measurements of electric field strength in gases (O'Byrne)
During Dr O'Byrne's sabbatical in 2011, he
collaborated with Prof. Walter Lempert at Ohio State
University in developing a new version of the coherent
anti-Stokes Raman Scattering (CARS) technique for
nonintrusively measuring the electric field strength in
hydrogen gas. Picosecond-duration pulses of intense
laser light were used to achieve the very high time
resolution required to make measurements in very
short-duration plasma events. This work coincided
with a collaborative project involving development and
testing of a device for generating pulsed nanosecond-duration plasmas, with the device designed by
A/Prof. John Fletcher and Dr Toan Phung from UNSW
Kensington. The combination of these technologies
will allow us to better understand the behaviours of
these very short-duration pulsed plasmas, which have
uses that range from flow control to ignition of fuels
and sterilisation of biological samples.

Simulation of hypersonic separated flows (Gai, O'Byrne, Neely, Kleine, Ramanath)

Thermal model of optical unit heating during flight

Free flying models in hypersonic facilities (Prakash, Mudford, O'Byrne, Neely, Aizengendler)
We have continued our research into the development
of free-flying instrumented models for the study
of hypersonic flows, by performing high-speed
visualisations of hypersonic drop test experiments,
and developing a new, miniaturised heat flux
measurement system which can make simultaneous
measurements from four thin-film temperature sensors.


Instrumented free-flight model


We have continued our collaboration with Dr James
Moss of NASA Langley Research Center, on the
simulation of low-density hypersonic separated flows
using the Direct Simulation Monte Carlo method.
This work builds upon our previous successful work
on the separated wake flows of re-entry vehicles
by investigating thermal nonequilibrium effects on
these flows. This work shows how the translational,
rotational and vibrational temperatures of a molecule
at these conditions can differ from each other by
several orders of magnitude, and the assumption of
equilibrium internal energy in the molecules within
those flows, although common, is a poor description
of their thermal behaviour. Capturing this behaviour
can have significant effects on predictions of heating
for probes entering the atmospheres of other planets.
We have also found that the size of the separated
region is strongly dependent on the wall temperature
of these vehicles.
A separate study investigated hypersonic, high-enthalpy flow over a rearward-facing step using
computational fluid dynamics (CFD). Two conditions
relevant to suborbital and superorbital flow, with
total specific enthalpies of 26 and 50 MJ/kg, were
considered. The Mach numbers were 7.6 and 11.0, and
the unit Reynolds numbers were 1.82 x 10^6 /m and
6.23 x 10^5 /m respectively. The Reynolds numbers
based on the step height were correspondingly
3.64 x 10^3 and 1.25 x 10^3. The computations
were carried out assuming the flow to be laminar
throughout, and real gas effects such as
thermal and chemical non-equilibrium were studied
using Park's two-temperature model with finite-rate chemistry and Gupta's finite-rate chemistry
model. Detailed quantification of the flow features in the
close vicinity of the step was emphasised.
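As a simple consistency check (our arithmetic, not part of the original study), the step-height Reynolds number is the unit Reynolds number multiplied by the step height, so the quoted values imply a step height of about 2 mm for both conditions:

# Re_h = Re_unit * h  =>  h = Re_h / Re_unit
for re_unit, re_h in [(1.82e6, 3.64e3), (6.23e5, 1.25e3)]:
    print(re_h / re_unit)   # ~0.0020 m in both cases, i.e. a step height of roughly 2 mm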

Fluid-Structure Interaction of Gas Turbine Blades (Dhopade, Neely, Young, Shankar)

Translational temperature maps of a hypersonic separated flow with wall temperatures of (above) 300 K and (below) 1000 K

In particular, the presence of the Goldstein singularity
at the lip and separation on the face of the step
was elucidated. Within the separated region and
downstream of reattachment, the influence of real gas
effects was identified and shown to be negligible. The
numerical results were compared with the available
experimental data of surface heat flux downstream of
the step and reasonable agreement was shown up to
30 step heights downstream.

This study has investigated the effects of
low-cycle and high-cycle fatigue interaction on the
aerodynamic and structural behaviour of a fan blade.
A numerically based analysis through the interfacing
of computational fluid dynamics (CFD) and finite
element modelling (FEM) analysis, referred to as
fluid-structure interaction (FSI), was performed in
order to estimate the fatigue life of the blade.
A numerical study using one-way FSI simulations to
predict representative fluctuating loads on the fan
rotor blades of the first axial compressor stage of a
representative gas turbine engine was performed.
The stator blade was modelled upstream of the rotor
blades to simulate the turbulent shedding of wakes
that result in aerodynamically induced vibrations
of the rotor blades, a leading cause of high-cycle
fatigue. The rotor blades are also subject to
low-cycle fatigue induced by both the high rotational
loads and the mean aerodynamic pressure loading
experienced by the blades at various operating
conditions. The transient results reflected the
oscillatory nature of the pressure loads and resulting
stresses on the blades. A stress-life analysis used
to estimate the fatigue life of the blade based on the
stresses from the FSI analysis demonstrated that it
has the potential to be a useful tool in determining
the effect of an HCF and LCF interaction on the
fatigue life of rotating components.


Streamlines and normalised pressure contours
behind a rearward facing step at two hypersonic
enthalpies (26 MJ/kg and 50 MJ/kg)

Static pressure distribution on first stage fan stator and rotor blades.


Two streamline plots for the case of L/D = 3: the flow field is characterised by the presence of a number
of unsteady vortices.

Supersonic flows over shallow cavities (Sridhar, Gai, Kleine)
The study of compressible cavity flow has been an
important topic in the field of aerodynamics and
acoustics. Cavity flows are encountered in essentially
all moving vehicles, from automobiles to aircraft to
missiles. In aircraft, such cavities are present in
the form of weapon bays, landing gear doors, etc.
Although the geometry of these cavities is simple,
their unsteady fluid dynamic behaviour is complicated
and difficult to predict both in subsonic and
supersonic flows. These fluid phenomena typically
cause unwanted drag, structural noise and vibrations.
The results show that the flow undergoes a significant
structural change when L/D is increased beyond
about 5. Cavities with L/D < 5 show a highly unsteady
oscillatory structure while those with L/D > 5 exhibit
a steady oscillatory flow within the cavity. This has
important implications with regard to noise and
vibrations of a structure that incorporates cavities in
its design.


Shock reflection off cylindrical surfaces (Kleine)


In spite of considerable research effort in past
decades, the reflection of shock waves off convex
cylindrical surfaces still poses a number of
unanswered questions. For a given shock Mach
number MS, the reflection pattern changes from
regular to irregular at a certain wall angle θW. If one
determines this transition angle by visual inspection
of the reflection pattern and defines it as the location
of the first occurrence of a visible Mach stem, one
typically arrives at wall angle values lower than the
one found in the pseudo-steady case for a straight
wedge at the same Mach number. This would indicate
that the regular reflection pattern is maintained longer
on the cylindrical surface compared to the straight
wedge case. Numerical simulations, on the other
hand, suggest that the transition occurs at the same
wall angle as for the straight wedge. If this were
the case, the transition would be governed by the
local wall angle and would not be influenced by the
preceding history of the reflection.

Time-resolved shadowgraph visualisations showing three instants of the interaction of a shock wave with
three cylinder models of two different radii.


An extensive study of this configuration was
undertaken in order to clarify whether the radius of
the cylinder and the initial angle (in the case of partial
cylindrical models) influence the transition point.
Experiments reported in the literature appear to confirm
such an influence. Both aspects are directly linked to
the aforementioned quest for a transition criterion for
shock reflection off curved surfaces.
The analysis of the obtained records yields the
following conclusions:
1. the radius of the cylinder influences the shock
pattern, but this influence appears to be minute
unless one compares cylinders that differ in size
by more than an order of magnitude.

2. Tests with the partial models have clearly shown that
the transition process has started before the Mach
stem becomes visible; the transition delay reported
in the literature may therefore simply be caused by
the difficulty of detecting a minute Mach stem.
3. The influence of the Reynolds number on the
process is only visible if this number changes by
more than one order of magnitude.

Simulation and measurement of fluid-thermal-structural behaviour of hypersonic vehicles (Choudhury, Dasgupta, Neely)
To quantify the structural behaviour of hypersonic
vehicles, work has continued on developing and applying
techniques to simulate and measure the fluid-thermal-structural interactions that result from high-speed flight.
The prototype of an electric arc-based heating rig
was developed and demonstrated that can reproduce
the temperature histories experienced by structural
components in the highly transient flight tests performed
from the Woomera range. This calibration facility
uses a large welding power supply to heat painted
samples via electric arc. Computer control of the rig
has now been implemented allowing the prescription
of a pre-designated power history. This results in a
corresponding surface temperature history on the
sample. This rig has been successfully operated in open
loop mode and will be used to calibrate the thermal
paints used in the flight experiments. Work is ongoing
to correlate the power input and temperature result
for open-loop operation and to eventually implement
closed-loop control of the calibration rig. This will enable
calibration of the paint response both pre-flight for the
nominal predicted trajectory and post-flight for the actual
trajectory flown. 3-D simulations of the atmospheric
heating experienced by the HIFiRE-0 vehicle during its
flight in 2009 were performed to provide the heating
histories for the paint calibrations and these are ongoing.
In collaboration with DSTO, the HIFiRE-5 hypersonic test
flight vehicle was instrumented with extensive patches of
permanent-change thermal paint with expected launch
sometime in the second half of 2012. This work was
performed as part of a collaboration with A/Prof Riesen
from PEMS, UNSW Canberra and Dr Paull and Dr Odam
from DSTO.

Contour plots of flow field Mach number and structural temperature during the descent of a hypersonic nose cone.

Fluidic Thrust Control (Rodriguez, Ali, Neely, Young)
Methods of vectoring and modulating exhaust thrust
in a converging-diverging nozzle by secondary fluidic
injection were investigated. The application of fluidic
thrust control (FTC) offers potentially significant
gains in performance and manoeuvrability without
the cost of heavy mechanical systems. FTC nozzles
also have significant advantages in relation to
reducing observability and are particularly suited to
low-cost, lightweight, highly manoeuvrable missiles
and unmanned combat air vehicles. Two methods
of the FTC, Shock Thrust Control (STC) and Throat
Shifting (TS), were investigated in both 2D and 3D
configurations using numerical simulations. In the STC
method, shocks are induced in the supersonic flow by
the injection of a secondary flow from the walls of the
diverging section of the nozzle. This study considered
the use of symmetric injection from the walls of the
diverging nozzle to generate strong normal shock
waves in the flow to modulate the exhaust thrust. In the
TS method secondary flow is injected from the walls
of the nozzle throat to modify its apparent shape.
While the initial STC simulations demonstrated the
ability to modulate thrust, the configuration used was
not able to reduce it.


Modelling of flow in a micro-hydrocyclone (Zhu, Liow, Neely)

For the STC method the result indicated that pressure
thrust was the dominant term when modulating the
thrust. Total thrust, which is the sum of momentum
thrust and pressure thrust, increased as the
reduction in momentum thrust was much less than
the increase in pressure thrust. For the TS case, the
effect of slot size to throat size and the interaction
of the parameters with nozzle operating pressure
and injection angle had significant effects on the
performance of the nozzle. Increased secondary to
primary mass flow ratio increased the modulation of
thrust in the TS method. This project is a collaboration
with BAE Systems and the Department of Defence.

Micro-hydrocyclones are miniature-scale
hydrocyclones with applications in micro-devices.
As hydrocyclones do not have any moving parts,
they are easier to control in micro-devices and have
the potential to be more reliable. The flow in a 5
mm diameter micro-hydrocyclone was modelled
in FLUENT to investigate the fluid flow and particle
separation ability. Direct numerical simulation (DNS)
results have shown that the flow transition and
subsequent unsteady-state behaviour occurred in
the micro-hydrocyclone at a low Reynolds number
(Re_in = 300) because of the onset of centrifugal
instability. The centrifugal instability results in flow
transition from laminar to the development of turbulent
flow in the hydrocyclone. This flow transition has
not been studied in previous modelling work on
hydrocyclones as they normally operate in the highly
turbulent regime in industry. The centrifugal instability
in the micro-hydrocyclone begins as Görtler vortices
developing in the boundary layer, which subsequently
affect the flow field. Particle motion tracing showed
that improved separation with finer cut size, d50, and
steeper separation sharpness were obtained as the
inlet velocity was increased. This improvement is
enhanced by the change in the flow characteristics
when the flow transits to turbulent flow.

!"#$%&'()*$+,-)./'!0!('

!"!#$
!%&'()$"%*+*,-$!%&./)$#'0.1(2',$

12'

3).45,+'

Vexit + Aexit (Pexit " Pambient )


Thrust = m

Contour plots of Mach number and injectant mixing for thrust modulation via throat injection (TSTM) in 2D and conical nozzles.

!"#

c1

c2

Contours of vorticity in a vertical plane (0° and 180° azimuthal positions) of the micro-hydrocyclone for
(A) 0.1, (B) 0.2 and (C) 0.4 m/s inlet velocities (A, B & C1 - one time-step results after the flow reached a
statistically steady state and C2 - time-averaged results). The increased vorticity at the wall with higher
inlet velocities is a consequence of the formation of Görtler vortices.


Force measured for a 600 μm diameter end mill cutter running at 3 μm/tooth with different cutting speeds

Experimental study of cutting forces in micro end-milling (Jing, Li, Wang, Liow)

Micro-end-milling is an efficient and economical
manufacturing operation that is capable of
accurately producing high aspect ratio features and
parts. This is used in the production of microfluidic
components for study of fluid flow in micro-devices.
The cutting forces affect the quality of the surface
which in turn affects the fluid flow characteristics at
the wall boundaries of the micro-channels. Studies
were carried out on the cutting forces and surface
roughness in micro-end milling of 6160 aluminium
alloy. The cutting forces were found to increase with
increasing feed rate for end-mills having diameters
of 600 and 900 micrometres.
An indicator called the percentage peak force
difference was introduced to investigate the effect
of tooth runout on the cutting force variation. It has
been found that the effect of tooth runout on the
peak force variation increases when the feed per
tooth decreases, and that it increases when the
cutting speed increases with the same feed per
tooth. The results provide a means of controlling the
machining of micro-channels enabling particular
fluid flow behaviour to be manifested in a given
micro-channel design.

Investigation on the droplet formation time with xanthan gum solutions at a T-junction (Gu and Liow)

Xanthan gum solutions with various concentrations
were used as the dispersed phase to study the
formation time for drop formation at a T-junction.
Two critical concentrations (0.05 and 0.2 wt%) of
xanthan gum solutions were observed resulting in
three distinct regimes. The droplet diameter increased
with increasing xanthan gum concentration within
each regime but the transition through each critical
concentration was accompanied by a significant
reduction in the droplet size. Experimental results
showed that the droplet formation time decreased
exponentially with increasing continuous phase flow
rate. It was also found that the formation time was
reduced with increasing dispersed phase flow rate.
Xanthan gum solutions with a higher concentration
within each regime resulted in a longer formation time,
and there was a decrease in the formation time at
each critical concentration.
The formation time consists of growth and breakup
stages and the effect of xanthan gum concentration
on each stage was examined. The results showed that
it is possible to control the drop sizes of the dispersed
phase flow for different xanthan gum solution
concentrations by varying the flow rates.


The variation of droplet formation behaviour and time for 0.01%, 0.025% and 0.1% xanthan gum solutions (Δt = 10 ms), showing the growth stage, end of growth and breakup stage for each concentration.


Image Coding
SEIT Academics
Prof John Arnold
Prof Michael Frater
A/Prof Mark Pickering
Dr Matt Garratt
Dr Andrew Lambert

SEIT Postgraduate Students


Qiang Li
Md Nazmul Haque
Md Hafizur Rahman
Md Asikuzzaman
Ashek Ahmmed

By combining the original three RGB channels of the
scene under adaptive structured light with a fourth
channel generated using inverse principal component
analysis we can use the cooperative global
optimization algorithm to generate a dense depth
map. In order to keep clear depth discontinuities
and alleviate noise in the depth map, we aggregate
the local match score with shiftable windows.
Experimental results show our approach performs well
on images of real-world objects with strong colours
and complex textures that have been captured under
ambient light conditions. Figure 1 shows images of an
example scene taken under ambient and structured
light, the image extracted using principal component
analysis and the depth map generated by the
proposed approach.

SEIT Research Staff


Dr Moyuresh Biswas

Research Description
Digital television is now big business worldwide, and
techniques that can lead to improved compression
of audiovisual services are of great interest both
to international standards bodies and to industry.
Indeed, the increasing capacity of communications
systems is often outpaced by the increasing demand
for access to audio-visual services. The development
of more efficient transmission techniques for
audiovisual services will be of considerable benefit
to all regions of Australia and in particular to remote
regions. It can be expected that the development
of this technology will significantly improve service
quality in these areas without the need for upgrading
the existing telecommunications infrastructure.
This will allow, for example, pay and free-to-air
operators to provide additional services within their
current bandwidth limitations. Staff and students in
the Image Coding Lab are currently working on a
number of projects relating to the compression and
analysis of images and video sequences.

Dense Depth Estimation Using Adaptive Structured Light and the Cooperative Algorithm
In this project we proposed a new depth estimation
approach using adaptive structured light. In the
proposed approach, a random noise adaptive
structured light pattern is projected onto objects
and then two cameras capture stereo images.
The adaptive colors for the random noise pattern are
acquired using principal component analysis in the
RGB color space of the image of the scene. By using
inverse principal component analysis on the images
with structured light, it is possible to maximize the
energy of the structured light and meanwhile minimize
the energy of other noise factors.
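As a rough, hypothetical illustration of this idea (not the authors' code), the principal directions of the RGB pixel distribution can be estimated from the ambient-light image and then used to re-project the structured-light images, so that most of the projected pattern's energy is concentrated in a single derived channel:

import numpy as np

def pca_basis_from_rgb(image):
    """Principal component directions (3x3) of the RGB pixel distribution."""
    pixels = image.reshape(-1, 3).astype(float)
    pixels -= pixels.mean(axis=0)
    eigvals, eigvecs = np.linalg.eigh(np.cov(pixels, rowvar=False))
    return eigvecs[:, ::-1]          # columns ordered from largest to smallest variance

def project_onto_basis(image, basis):
    """Express each RGB pixel in the PCA basis, e.g. to isolate the structured-light energy."""
    pixels = image.reshape(-1, 3).astype(float)
    return (pixels @ basis).reshape(image.shape)

The exact channel used as the additional "fourth" channel in the published method may differ; the sketch only shows the basic projection mechanics.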

Figure 1: (a) scene under ambient light and adaptive structured light, (b) the extracted third component, (c) the depth map generated by the proposed approach.


Fast Image Registration Using a Multi-Pass Image Interpolation Approach

An Adaptive Low-Complexity Global Motion Estimation Algorithm

Image registration is a fundamental technique in
image processing. It is used to spatially align two or
more images that have been captured at different
times, from different sensors, or from different
viewpoints. There have been many algorithms
proposed for this task, the most common of these
being the well-known Lucas-Kanade and Horn-Schunck approaches. However, the main limitation of
these approaches is the computational complexity
required to implement the large number of iterations
necessary for successful alignment of the images.
In this project we developed an alternative approach
for image registration using a modified version of the
Image Interpolation Algorithm (I2A). Our proposed
approach requires far fewer iterations to successfully
register two images than the standard Lucas-Kanade
approach. This means that our approach is much
more suitable for pipelined hardware implementations
that are required in real-time FPGA-based registration
applications. Figure 2 shows the percentage of
successful registrations produced by our proposed
approach and the standard Lucas-Kanade registration
algorithm after a certain number of iterations. It can be
seen from these two curves that for the same number
of iterations the success rate of the MP-I2A approach
is much better than for the Lucas-Kanade algorithm.
For example, the success rate for the MP-I2A algorithm
is 95% after four iterations while the rate for the Lucas-Kanade algorithm is less than 20%.

The computational complexity of motion estimation
between video frames for video coding remains a
significant challenge even with current computing
power. An important recent advance in the
development of efficient motion estimation algorithms
is the use of image registration in the estimation of
global motion parameters for object-based video
coding. However, the main disadvantage of this
approach is the increased computational complexity
required to estimate the parameters which define
the more complex motion models. In this project
we developed a new low-complexity algorithm for
global motion estimation. The complexity of the
algorithm was reduced by performing the majority of
the operations in the gradient-descent optimization
using logic operations rather than full-precision
arithmetic operations. The new hierarchical-adaptive
low complexity (ALC-H) approach was compared with
the following algorithms: the 8-bit GN algorithm, ALC,
ALC-H (1-bit), ICA, ICA with adaptive step-size choice
(ICA-SSC), BPS7, Dufaux and Konrad's coarse-to-fine
algorithm and Alzoubi & Pan's approach of using a
small portion of the available data. To evaluate the
performance of the proposed algorithm, frames from
standard video test sequences were transformed
using an affine transform with randomly chosen
values for the 6 motion parameters. The algorithms
under investigation were then used to register the
transformed images to the original image.
We took frames from each sequence and applied
100 transformations (randomly generated with
the above-mentioned parameters) to generate
100 different transformed images. Although the
transformations were chosen randomly, they were the
same for all algorithms. The average PSNR at each
iteration for all successful cases is shown in Figure
3 for the algorithms under investigation. It can be
seen from Figure 3 that, except for the algorithm of
Alzoubi and Pan, all of the fast algorithms converge
to an average PSNR which is equal to that of the full
precision GN algorithm. Of the algorithms tested,
our new ALC-H algorithm is the fastest to converge
and requires only 40-50 iterations on average for
successful registration.
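The evaluation protocol described above (random six-parameter affine warps applied to reference frames) can be mimicked with a few lines of Python; the parameter ranges and the synthetic frame below are placeholders rather than the values and sequences used in the study:

import numpy as np
import cv2   # OpenCV, used here only for the warp itself

def random_affine(rng, max_shift=5.0, max_scale=0.05, max_shear=0.05):
    """Random 2x3 affine matrix with six parameters (placeholder ranges)."""
    return np.array([
        [1.0 + rng.uniform(-max_scale, max_scale), rng.uniform(-max_shear, max_shear),
         rng.uniform(-max_shift, max_shift)],
        [rng.uniform(-max_shear, max_shear), 1.0 + rng.uniform(-max_scale, max_scale),
         rng.uniform(-max_shift, max_shift)],
    ])

rng = np.random.default_rng(0)
frame = (rng.random((288, 352)) * 255).astype(np.uint8)     # placeholder CIF-sized frame
warped = [cv2.warpAffine(frame, random_affine(rng), (352, 288))   # 100 transformed copies
          for _ in range(100)]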

Figure 2: The registration success rate of the MP-I2A and Lucas-Kanade algorithms.

[Figure 3 plot: average PSNR versus iteration for the Mobile & Calendar sequence, comparing ALC, ALC-H, ALC-H (1-bit), GN, ICA, ICA-SSC, Dufaux & Konrad, Alzoubi & Pan and BPS7.]

Figure 3: Average PSNR at each iteration for the registration algorithms under investigation.


Scale and Rotation Invariant Gabor Features for Texture Retrieval
For image classification applications it is often useful
to generate a compact representation of the texture
of an image region. The conventional representation
of image textures using extracted Gabor wavelet
coefficients often yields poor performance when
classifying scaled and rotated versions of image
regions. In this project we developed a scale and
rotation invariant feature generation procedure for
classification of images using Gabor filter banks.
Firstly, to obtain scale and rotation invariant features,
each image is decomposed at different scales and
orientations. Then, in order to create unique feature
vectors, we apply a circular shift operation to both
scale and rotation dimensions to shift the maximum
value of the Gabor filters to the first orientation of the
first scale and the energies of these filtered images
are calculated. To demonstrate the effectiveness of
our proposed approach we compared its performance
with the most recent texture feature generation
methods in a classification task. Experimental results
showed that our proposed feature generation method
is more accurate at classifying scaled and rotated
textures than the existing methods.
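The circular-shift step can be sketched as follows: given a matrix of filtered-image energies indexed by scale and orientation, both axes are rolled so that the maximum entry moves to the first scale and first orientation, removing the dependence on the unknown global scale and rotation of the texture (a simplified illustration, not the exact published procedure):

import numpy as np

def shift_invariant_features(energy):
    """Circularly shift a (num_scales x num_orientations) Gabor energy matrix
    so its maximum lands at (0, 0), then flatten it into a feature vector."""
    s_max, o_max = np.unravel_index(np.argmax(energy), energy.shape)
    shifted = np.roll(np.roll(energy, -s_max, axis=0), -o_max, axis=1)
    return shifted.ravel()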


Imaging Through
Turbulence

SEIT Academics
Dr Andrew Lambert
Dr Murat Tahtali
A/Prof Harald Kleine
A/Prof Donald Fraser (retd)

SEIT Postgraduate Students


Mr David Bowman
Sqnldr Malcolm Gould
Ms Ying Liu
Ms Si Liu
Mr Shan Xiu (NUI Galway)
Mr Qichao Zhao

Other Collaborators
School of Optometry, QUT
Prof David Atchison
DSTO Australia
Dr Leszek Swierkowski
Dr Geoff Nicholls
Dr Garry Newsam
CSIRO
Dr John Lasalle
Dr David Lovell
Dr Charles Jenkins
Dr Michael Goodwin
Applied Optics, National University of Ireland Galway,
Ireland
Prof Chris Dainty
Dr Nicholas Delaney
Dr Liz Daly
Dr Ruth Mackay
Dr Alex Goncharov
University of Canterbury, New Zealand
Dr Steve Weddell


Research Description
In several different fields, images are distorted by
the intervening medium. For example, images of
objects observed by telescope often exhibit unwanted
distortion and blurring due to distortion of the
wavefront by atmospheric turbulence.
We are continuing to investigate the restoration
of such images, requiring a time-sequence to be
captured and processed to remove the distortions
and reveal a clear image of the scene.
In addition, we are investigating the distribution
and behaviour of the intervening turbulent layers,
particularly for Space Situational Awareness (SSA).
Similar problems exist when imaging the retina of
the eye, in optometry or ophthalmology, due to the
inherent optical characteristics of the materials of the
eye, and also when imaging objects involving a water
surface disturbed by waves. Application areas for
these techniques include ground-based and aerial
telescopic surveillance, investigation of turbulence
severity for astronomy sites, adaptive optics in
optometry, and visualisation of objects through water.
Optical and real-time image processing techniques
are also easily applied to imaging the full wavefield in the above areas, and in microscopy.
Massively parallel image processing techniques
are being investigated using grid computing, field
programmable gate array (FPGA) clusters, and
graphical processing units (GPU).

Horizontal Image Restoration close to the Ground Using Distributed Embedded Systems for Real Time Applications
Our interest over recent years has been in the
surveillance of objects affected by ground turbulence
at a considerable distance on the Earth's surface.
In astronomy, the disturbing turbulence is usually
in only a small number of distinct layers in the
atmosphere above the telescope, and the turbulent
behaviour is considered to follow
Kolmogorov statistics.
turbulence close to the ground has not been studied
before in any detail. To this end, David Bowman has
been developing lean algorithms suited to FPGA
implementation for processing surveillance image
sequences acquired by telescope. David is on a DSA
scholarship, augmented by project funds from DSTO.
This has been undertaken also in collaboration with
Dr Geoff Nicholls at DSTO. The idea of using stream-based
processing coupled with Neural Networks
within the FPGA is the subject of collaborations with
Dr Steve Weddell from New Zealand, who visited the
group in November.

Applied Optics
Andrew Lambert spent 2010 on sabbatical hosted by
the Applied Optics group at National University of
Ireland, Galway. He would like to thank his sponsors
at NUIG for this very productive opportunity.
This is a vibrant group of internationally recognised
researchers undertaking projects in astronomy,
optical communications, and image processing.
However, most of the effort is in the field of
ophthalmology. Andrew participated in an inaugural
study of surveillance imaging over long paths over
water, to determine the rationale for adaptive optics in
seaborne communications systems. He continued to
investigate the pairing of high-speed digital
circuits with adaptive optics systems, an area which
is being expanded upon by a shared PhD candidate
at NUI Galway, Mr Shan Xui, with emphasis on
plenoptic imaging for microscopy.
Work in the adaptive optics area continues,
particularly in space and terrestrial surveillance, with
PhD candidates, Manuel Cegarro and Sqnldr Mal
Gould. The objectives are to investigate compact
AO systems with novel wavefront sensing and
electro-optics. A demonstrator assembly is being
designed for the Schools Meade 16 telescope,
and the holographic wavefront sensor is being
investigated, with these two students.

Adaptive Optics in Human Vision


Andrew Lambert is continuing his collaboration with
colleagues at the School of Optometry, QUT, employing
adaptive optics for understanding the optical limitations
to human vision, improving clinical assessment of
the inner eye, and for developing new ophthalmic
correcting devices such as accommodating intraocular
lenses in the future. ARC funding for this study began
this year for a project entitled Removing the blinkers: a
wider study of the human eye. Peripheral aberrations,
wide-field retinal imaging and optical parameters.
Tomographic examination of the optics in the human
eye may be undertaken while the subject is involved in
visual tasks.

A simple version of this process, called SLODAR, involved creation of two
angularly separated retinal sources which are refracted by the various optical
surfaces and the lens gradient index, and register
correlated phase distortion on a wavefront sensor
external to the eye. The retinal sources (analogous
to stars in astronomy) are created from a probe beam
pattern imaged into the eye. The shape, position, and
surface or volume structure (and hence aberration)
for each of the refracting components can then be
obtained. New PhD candidate, Ms Si Liu, is focusing
her study on this area that may shed insight into the
workings of the optics in the human eye.
Work in this area has been largely in improving the
beacon creation process, whereby active optics is
used to provide a more useful distribution of power
at the retina. A Spatial Light Modulator is used in the
illumination path to create the likes of a Bessel beam
which is less affected by the aberrations experienced
by the light en-route to the retina. There are numerous
opportunities with this process that have not been
before examined.

Lucky Region Imaging and Imaging through or over Water
Dr (Ms) Zhiying Wen introduced the use of the
bispectrum to turbulence image restoration for
surveillance and imaging through or into water, in
her PhD studies, graduating in 2010. Her own new
lucky region technique finds the least distorted
regions from a time sequence of raw images, and
then post-processes the extracted lucky regions using
bispectral analysis to obtain a best estimate of the
target. The bispectrum has been successfully used
in astronomical image restoration for several years,
based on the premise that averaging the bispectrum
over a time sequence cancels out the random phase
distortions introduced by the intervening medium. Ms Ying Liu
continues these studies now as a Masters Candidate.
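The premise can be stated compactly: for a frame with Fourier transform F, the bispectrum is B(u, v) = F(u) F(v) F*(u+v), and averaging B over many frames preserves the object phase while the random turbulence-induced phase tends to cancel. A minimal one-dimensional sketch (illustrative only, not the restoration code used in these studies):

import numpy as np

def mean_bispectrum_1d(frames):
    """Average 1-D bispectrum B(u, v) = F(u) F(v) conj(F(u+v)) over a stack of frames."""
    n = frames.shape[1]
    u = np.arange(n)
    acc = np.zeros((n, n), dtype=complex)
    for frame in frames:
        F = np.fft.fft(frame)
        acc += F[u, None] * F[None, u] * np.conj(F[(u[:, None] + u[None, :]) % n])
    return acc / len(frames)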

Figure 1: Beacon creation using a SLM. (left) The phase profile may be encoded on the illumination beam to (right) create an annulus on the retina of the eye to guide adaptive optic correction while imaging the retina, or for investigation of the workings of its crystalline lens.

GPU based intensive algorithm computation


Murat Tahtali considers running a Kalman filter per
pixel, extracting image field motion from surrounding
regions of interest, in real time megapixel imagery,
a very real possibility for image restoration for the
effects of turbulence in the optical path. He is
addressing the real-time computation issues using
large clusters of GPU engines.
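In its simplest scalar form, a per-pixel Kalman filter reduces to a fully vectorised predict/update over the whole frame, which is what makes it a good fit for GPU hardware. The sketch below assumes a random-walk state model and constant noise variances, and is only a schematic of the idea rather than the implementation being developed:

import numpy as np

def kalman_update_frame(x, p, z, q=1e-3, r=1e-2):
    """One scalar Kalman predict/update applied independently to every pixel.
    x, p: per-pixel state estimate and variance; z: new (distorted) frame;
    q, r: process and measurement noise variances (illustrative values)."""
    p_pred = p + q                      # predict (random-walk state model)
    k = p_pred / (p_pred + r)           # per-pixel Kalman gain
    x_new = x + k * (z - x)             # update with the new observation
    p_new = (1.0 - k) * p_pred
    return x_new, p_new

# usage: fold a stack of frames into a running per-pixel estimate
frames = np.random.rand(50, 256, 256)   # placeholder image sequence
x, p = frames[0].copy(), np.ones_like(frames[0])
for z in frames[1:]:
    x, p = kalman_update_frame(x, p, z)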

Space Situational Awareness

Tracking space-borne objects, and passively
observing their photometric signatures when
illuminated only by natural light, is difficult enough,
without considering the effects of the intervening
atmosphere. We are exploring the concept of a
deploy-anywhere optical telescope solution for this
purpose using the School's 16-inch telescope. Problems
associated with the prediction of turbulence outside
the isoplanatic angle are investigated, as is an inbuilt
immunity to sky background and cloud absorption.
We are seeking to address these problems with multispectral imagery and adaptive optics.

Applications of the plenoptic camera to turbulence degraded imagery
The GPU engines allow Murat to analyse the
recordings of a plenoptic camera which digitise the
3D wavefield. These recordings capture the delays
and scattering effects of the turbulence field in the
path, utilising a densely packed microlens array and
high density image sensor, and we are seeking to
evolve this technology to model the turbulence. Murat
and Andrew have developed a plenoptic camera with
39 megapixels for examination of the volume effects
of turbulence within a shock-tunnel to support the
hypersonics work of Harald Kleine.


Figure 2: Plenoptic Camera. Images from a trial plenoptic camera formed with a microlens array and
consumer 14 megapixel camera capture the direction of illumination as well as the spatial distribution
of the image. From these images any depth of focus can be reconstructed with post-processing. Such
recordings enable examination of the layers of a turbulent volume within a shock-tunnel.


Immiscible Contaminants in Natural Porous Media

SEIT Academics
Dr Robert Niven

SEIT Undergraduate Students


Mr Steven Waldrip

SEIT Postgraduate Students


Ms Yasmine Abdelraouf

Other Collaborators
The University of New South Wales
Prof. Nasser Khalili
Dr Markus Oeser
The Australian National University
Prof. Mark A. Knackstedt
A/Prof. Timothy J. Senden
Dr Michael L. Turner
Dr Adrian P. Sheppard
Dr Jill Middleton
Max Planck Institute for Dynamics and Self-Organization, Göttingen, Germany
Dr Kamaljit Singh (former PhD student)

Research Description
Control of the geometric form and mobility of an
immiscible fluid phase, such as air or hydrocarbon
liquids, in water-saturated natural porous media (soil
and rock materials) has emerged as one of the most
demanding engineering challenges of the 21st century.
When subject to fluid flow, such immiscible fluids
tend to fragment to form discrete gas bubbles or fluid
droplets commonly referred to as ganglia which
remain trapped in the porous medium due to the strong
forces induced by surface or interfacial tension.
Such droplets are then effectively immobilised. It has
been estimated that some 20-30% of known petroleum
reservoirs have been rendered unrecoverable by this
mechanism, of critical importance in light of concerns
over limitations to world oil supply (peak oil).
In addition, such entrapment substantially increases
the cost and difficulty of remediation of immiscible
contaminants, such as hydrocarbons and solvents,
from contaminated sites. Furthermore, efforts to
redress global warming by the geosequestration of
CO2, involving the injection of CO2 into deep geological
formations, could be significantly impaired by this
mechanism. For these reasons, a long-standing
body of research has been conducted in SEIT on the
behaviour of immiscible fluids in natural porous media.

CO2 Sequestration in Deformable, Chemically Interactive Porous Media
A major project was continued on the effect of CO2
sequestration in deformable, chemically interactive,
double porosity media, following the award of an
Australian Research Council Discovery Grant for
2010-12 (DP1096480, investigators Khalili, Niven and
Oeser). This project examines the effect of injection
of supercritical CO2 into deep geological formations,
both experimentally and computationally, in the latter
involving a composite multiphase / multiporosity
flow, deformation and heat transport code. Of
particular interest is the possibility of deformation
(and, potentially, failure) of the rock matrix due to
CO2 injection, which would render the CO2 storage
inoperable. A PhD student was recruited in SEIT to
couple the flow and deformation codes to a chemical
thermodynamics model, to account for chemical
reactions between the supercritical CO2 and ambient
geochemical materials. Theoretical and modelling
analyses are now underway, and one analysis is
currently being summarised for publication.

Effect of Freeze-Thaw on Oil-Contaminated Soils
It is well known that freeze-thaw cycles, due to
seasonal and/or daily fluctuations in temperature
about 0oC, can dramatically alter the fabric,
stratigraphy, moisture distribution and many other
properties of natural soils. However, despite its
significance to the oil-fields of Alaska, Siberia and
Canada, only a handful of studies have examined the
effect of freeze-thaw processes on the entrapment
of immiscible fluids (hydrocarbons or solvents).
Research within SEIT by Dr Niven and Dr Kamaljit
Singh (2009 PhD graduate), in collaboration with
researchers at the X-ray tomography facility at
ANU, has revealed that freeze-thaw cycles induce
substantial fragmentation of entrapped hydrocarbon
ganglia and their remobilisation in the direction of
freezing. A rendered image of the X-ray tomographic
results is shown in Figure 1. Following publication of
(what we believe to be) the first manuscript on this
topic in 2008, a further manuscript was published in
2011 (Singh et al., 2011), with one further article in
press (Singh et al., in press).

In 2011 the following projects were conducted as part
of this research theme:


Shape of Immiscible Fluid Droplets Entrapped in Porous Media
Fundamental research was also undertaken on the
shape of an immiscible fluid ganglion (such as oil)
in contact with solid spheres in the presence of a
continuous fluid (such as water), as governed by
the Young-Laplace equation, a highly non-linear
partial differential equation. The shape has important
ramifications for the force of entrapment of the
droplet, and hence the mobility of oil in petroleum
reservoirs and contaminated soils. The project was led
by Mr Steven Waldrip, a former BE (Civil Engineering)
student at UNSW Canberra, who was recruited
as a research fellow. A finite element code was
implemented to solve this equation for the shape of
the fluid droplet, involving mapping between spherical
and Cartesian coordinates, subject to a free boundary
condition. A manuscript is currently under preparation
to summarise the results of this work.
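For reference, the Young-Laplace equation relates the capillary pressure difference across the interface to the interfacial tension γ and the principal radii of curvature R1 and R2 (equivalently, the mean curvature H):

\Delta p = \gamma \left( \frac{1}{R_1} + \frac{1}{R_2} \right) = 2\gamma H

The nonlinear dependence of these curvatures on the interface shape is what makes the free-boundary problem described above challenging to solve.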

Figure 1: Rendered X-ray tomogram of hydrocarbon ganglia (yellow) and water (blue) in bead pack (clear), at residual saturation.


Operations Research
and Optimisation

SEIT Academics
A/Prof Ruhul Sarker
Prof. Hussein Abbass
Dr Daryl Essam
Dr Chris Lokan
Dr Michael Maher
Dr Alan McLucas
Prof Charles Newton
Dr Tapabrata Ray

SEIT Postgraduate Students


Mr Md. Asafuddoula
Mr Saber Mohammed Elsayed
Mr Abdelmonem Fouad
Mrs Noha Mohamed Hamza
Ms Eman Samir Hasan
Mrs Hawa Hishamuddin
Ms Farhana Naznin
Md. Humyun Fuad Rahman
Mr Nurhadi Siswanto

SEIT Research Staff


Mr Fan Qi

Other Collaborators
School of Business, UNSW Canberra
Dr Jason Mazanov

Research Description
Optimisation problems arise in many real-life design,
planning and decision processes. Most real-world
optimisation problems are complex as they involve
interacting variables and parameters, restrictions,
ambiguous goals and one or more objectives.
Typical examples of such problems are: planning,
resource allocation, logistics, inventory control,
scheduling, and company operations problems.
Worldwide, organizations are facing the problem of
appropriately modelling and solving their complex
decision problems. The optimal solutions of such
problems would result in higher productivity in
those organisations.
The Operations Research (OR) and Optimisation
group, at UNSW Canberra, conducts both theoretical
and applied research for solving complex optimisation
problems. The group covers a wide range of topics
including modelling and solving real-world problems,
analysing and improving existing OR techniques,
developing new heuristic algorithms, applying Artificial
Intelligence techniques to OR, and developing new
intelligent systems based techniques for optimisation
problems. The group also works on soft OR
techniques and their implementation for solving
real-world complex decision problems.
The Operations Research and Optimisation group
receives financial support from the Australian
Research Council, the Defence Science and
Technology Organisation and the University of New
South Wales.

University of New South Wales


A/Prof Tuan Pham, Dr Sami Kara, A/Prof Berman Kayis

Production Scheduling under Disruption

The job scheduling problem (JSP) is considered
one of the most complex combinatorial optimization
problems. JSP is not an independent task, but rather
a part of a company business case. In this research,
we (Hasan, Sarker and Essam) have first solved JSPs
using an Improved Memetic Algorithm (IMA). We have
studied JSPs under sudden machine breakdown
scenarios which introduces a risk of not completing
the jobs on time. We have extended IMA to deal with
the changed situation, and developed a simulation
model to analyze the risk using a job order-and-delivery scenario. The paper thus makes three
sequential contributions: job scheduling under ideal
conditions, rescheduling under machine breakdown,
and risk analysis for a production business case.
The extended algorithm provides better understanding
and results than the existing algorithms, the
rescheduling shows a good way of recovering
disruptions, and the risk analysis shows an effective
way of maximizing return under such situations.
A part of this research has been reported in a paper
published in the International Journal of Production
Research in 2011.

University of Newcastle
Dr David Cornforth
Curtin University of Technology
Prof M Quaddus
Queensland University of Technology
Prof Erhan Kozan
Monash University
Dr Joarder Kamruzzaman
Victoria University
A/Prof Rezaul Begg, Dr Lutfar Khan
University of Lethbridge, Canada
Prof. Sajjad Zahir
National Defence Academy, Japan
Prof Akira Namatame
Anna University, India
Prof KSP Rao
Pennsylvania State University, USA
Dr Aman Haque


Figure 1: A sample output for breakdowns of a job scheduling problem

DMEA: a direction-based multiobjective evolutionary algorithm
A novel direction-based multi-objective evolutionary
algorithm (DMEA) is proposed, in which a
population evolves over time along some directions
of improvement. We [Bui, Liu, Bender, Barlow,
Wesolkowski and Abbass] distinguish two types of
directions: (1) the convergence direction between a
non-dominated solution (stored in an archive) and a
dominated solution from the current population; and,
(2) the spread direction between two non-dominated
solutions in the archive. At each generation, these
directions are used to perturb the current parental
population from which offspring are produced.
The combined population of offspring and archived
solutions forms the basis for the creation of both the
next-generation archive and parental pools.
The rule governing the formation of the next-generation parental pool is as follows: the first half is
populated by non-dominated solutions whose spread
is aided by a niching criterion applied in the decision
space. The second half is filled with both non-dominated and dominated solutions from the sorted
remainder of the combined population. The selection
of non-dominated solutions for the next-generation
archive is also assisted by a mechanism in which
neighborhoods of rays in objective space serve
as niches. These rays originate from the current
estimate of the Pareto optimal front's (POF's) ideal
point and emit randomly into the hyperquadrant that
contains the current POF estimate. Experiments on
well-known benchmark sets have been carried out to
investigate the performance and the behavior of the
DMEA. We validated its performance by comparing it
with four well-known existing algorithms. With respect
to convergence and spread performance, DMEA is
very competitive.
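The two direction types and the perturbation they drive can be sketched as follows (a simplified illustration of the mechanism described above, not the published implementation):

import numpy as np

def dmea_directions(archived, dominated, nondom_a, nondom_b):
    """Two direction types used to perturb parents in a DMEA-style search:
    convergence: from a dominated solution towards an archived non-dominated one;
    spread: between two non-dominated solutions in the archive."""
    return archived - dominated, nondom_a - nondom_b

def perturb(parent, direction, rng, step=0.5):
    """Move a parent a random fraction of 'step' along the chosen direction."""
    return parent + step * rng.random() * direction

rng = np.random.default_rng(1)
conv, spread = dmea_directions(np.array([1.0, 2.0]), np.array([0.5, 1.0]),
                               np.array([1.0, 2.0]), np.array([2.0, 1.5]))
child = perturb(np.array([0.5, 1.0]), conv, rng)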


Real-time Routing and Tracking Algorithms

Real-time routing and tracking require different
optimization algorithms from their offline counterparts.
Real-time problems are also known as time-constrained
problems, where a solution is needed within a
constrained and normally shorter timeframe.
Here, traditional offline optimization algorithms that
rely on heavy computations and long run times to reach an
optimal solution fail to deliver high quality solutions
in a time-constrained environment. Moreover, when
faced with a real-time problem, it is common that the
full problem is not known in advance. For example,
parameter values may not be available in advance,
they may change during the course of solving the
problem, and some variables may become more
relevant or even redundant. This imposes a challenge
that is not normally the focus of traditional offline
optimization algorithms. This project focuses on
developing novel optimization algorithms for this class
of problems.

An Optimization Framework for the Design of Underwater Vehicles

Recently, the Multidisciplinary Design Optimization group at UNSW Canberra has developed an optimization framework for the preliminary design of underwater vehicles [Ray, Anavatti, Chris, and Alam]. The framework allows the designer to identify optimum underwater vehicle designs based on a set of user and mission requirements. The framework is realized by coupling commercial tools (CATIA-ICEM-FLUENT) with in-house, state-of-the-art optimization algorithms. Two such designs, which have since been built, are presented below.

Figure 2(a): Optimum Design of the Underwater Vehicle

Multi Objective Learning Classifier Systems Based Hyperheuristics for Modularised Fleet Mix Problem

The Modularised Fleet Mix Problem (MFMP) is a defence industry variant of the generalised Fleet Size and Mix (FSM) problem. It is used as a modelling and planning tool for estimating future military fleets that are capable of fulfilling a range of anticipated missions in a cost effective and efficient manner. Heuristic-based optimisation techniques are used to obtain approximate solutions to MFMP.


Figure 2(b): Optimum Design of the Six-Inch Sub

Handling Equality Constraints in Evolutionary Optimisation

Over the last few decades several methods have been proposed for handling functional constraints while solving optimization problems using Evolutionary Algorithms (EAs). However, the presence of equality constraints makes the feasible space very small compared to the entire search space. As a consequence, the handling of equality constraints has long been a difficult issue for evolutionary optimization methods. In this research, we (Barkat Ullah, Sarker and Lokan) present a Hybrid Evolutionary Algorithm (HEA) for solving optimization problems with both equality and inequality constraints. In HEA, we propose a new local search technique with special emphasis on equality constraints. The basic concept of the new technique is to reach a point on the equality constraint from the current position of an individual solution, and then explore the constraint landscape. We believe this new concept will influence the future research direction for constrained optimization using population based algorithms. The proposed algorithm is tested on a set of standard benchmark problems. The results show that the proposed technique works very well on those benchmark problems. A paper based on this research has recently been accepted in the European Journal of Operational Research (ERA A).
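As an illustration only (not the authors' published method), the following sketch shows one simple way of moving an individual onto an equality constraint h(x) = 0: step along a search direction until the sign of h changes, then bisect on the step length. The constraint, direction and all names are hypothetical.

    import numpy as np

    def move_onto_constraint(x, h, direction, max_step=10.0, tol=1e-8, iters=60):
        """Return a point close to the surface h(x) = 0, reached from x along `direction`."""
        lo, hi = 0.0, max_step
        f_lo = h(x)
        # Expand the step until the constraint value changes sign (a crossing is bracketed).
        while np.sign(h(x + hi * direction)) == np.sign(f_lo) and hi < 1e6:
            hi *= 2.0
        # Bisection on the step length.
        for _ in range(iters):
            mid = 0.5 * (lo + hi)
            if np.sign(h(x + mid * direction)) == np.sign(f_lo):
                lo = mid
            else:
                hi = mid
            if hi - lo < tol:
                break
        return x + 0.5 * (lo + hi) * direction

    # Example: move a point onto the unit circle x1^2 + x2^2 - 1 = 0.
    h = lambda v: v[0] ** 2 + v[1] ** 2 - 1.0
    x0 = np.array([0.2, 0.1])
    repaired = move_onto_constraint(x0, h, direction=x0 / np.linalg.norm(x0))
    print(repaired, h(repaired))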

Figure 3: A sample of population diversity with different designs of the next generation.

In this project we (Shafi, Bender and Abbass) present an offline hyperheuristic for MFMP using two Michigan-style Learning Classifier Systems (LCS). Hyperheuristics in optimisation refer to search techniques that operate on a space of primitive heuristics instead of a solution space. The LCS based multi-objective hyperheuristic is built from multi-objective low-level heuristics derived from an existing heuristic based solver for MFMP. While the low-level heuristics use multi-objective evolutionary algorithms to search for non-dominated solutions, the LCS based hyperheuristics apply the non-dominance concept at the primitive heuristic level. Two LCS, namely the eXtended Classifier System (XCS) and the sUpervised Classifier System (UCS), are augmented by multi-objective reward and accuracy functions respectively to incorporate this effect. The results show that UCS performs better than XCS in selecting heuristics in test instances, yielding solutions which are closer, in terms of a distance-based convergence metric, to the derived global Pareto curves in these instances.

Inventory System with Transportation Disruption

Supply chains (SC) are becoming increasingly competitive and complex in order to effectively meet customer demands. This nature and complexity makes SCs vulnerable to various risks, including disruptions due to interruptions in supply, transportation and many other sources. In the presence of a disruption, managers are required to make quick and reliable decisions to recover from the unexpected event at minimal cost. In this study, a recovery model is assessed for a two stage production and inventory system that experiences a transportation disruption. The model is capable of determining the optimal ordering and production quantities during the recovery window such that the total relevant costs are minimized, while seeking to recover the original schedule. Such tools are useful for assisting managers in effective decision making in response to disruptions, in particular when determining the optimal recovery strategy for the longevity and sustainability of their businesses. For this research, we (Hisamuddin, Sarker and Essam) received the best paper award at the International Conference on Industrial Engineering and Service Science (IESS) in 2011.


Figure 4: Evolution of generated dragonfly-wing (thin) towards the target dragonfly-wing (thick) for matching.

Figure 5: Evolution of generated damselfly-wing (thin) towards the target damselfly-wing (thick) for matching.

Ship Inventory Routing and Scheduling


This research investigates a ship inventory routing and scheduling problem with undedicated compartments (sIRPSP-UC). The objective of the problem is to find a minimum cost solution while satisfying a number of technical and physical constraints within a given planning horizon. In this problem, we identify four sub-problems that need to be decided simultaneously: route selection, ship selection, loading, and unloading activity procedures. First, we (Siswanto, Essam and Sarker) develop an equivalent mixed integer linear programming model of the problem. Then, we propose a set of heuristics for each sub-problem and find the best combination of heuristics that ensures an overall best solution for the entire problem.
In 2011, we considered a new variant of the maritime
inventory routing problem which involved multiple
time windows, and is hence called the multiple time
windows problem. We have developed a mathematical
model for this problem. However, due to the excessive
running time required for the mathematical model, we
have also developed a multi-heuristics based genetic
algorithm. The multi-heuristics are composed of a
set of strategies that correspond to the above four
decision points. We used this set of strategies in a
genetic algorithm framework so as to find the best
strategies. The computational results show that the
multi-heuristics can get acceptable solutions within
a reasonable running time. Moreover, the flexibility to
add or remove the strategies means that the proposed
method would not be difficult to implement for other
variants of the maritime inventory routing problem.
From this research, a paper has been published in
Computers and Industrial Engineering in 2011.
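The idea of searching over combinations of heuristics can be sketched as follows. This is only an illustration of a genetic algorithm whose chromosome selects one strategy for each of the four decision points (route selection, ship selection, loading, unloading); the strategy counts and the evaluate function are placeholders, not the published model.

    import random

    # Hypothetical numbers of candidate strategies for each decision point.
    N_STRATEGIES = [3, 2, 4, 4]  # route, ship, loading, unloading

    def evaluate(chromosome):
        """Placeholder: build a schedule with the chosen strategies and return its total cost."""
        return sum(chromosome)  # stand-in objective for illustration only

    def random_chromosome():
        return [random.randrange(n) for n in N_STRATEGIES]

    def crossover(a, b):
        point = random.randrange(1, len(a))
        return a[:point] + b[point:]

    def mutate(c, rate=0.2):
        return [random.randrange(N_STRATEGIES[i]) if random.random() < rate else g
                for i, g in enumerate(c)]

    population = [random_chromosome() for _ in range(20)]
    for generation in range(50):
        population.sort(key=evaluate)             # lower cost is better
        parents = population[:10]                 # simple truncation selection
        children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                    for _ in range(10)]
        population = parents + children

    print("best strategy combination:", min(population, key=evaluate))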


Shape Representation and Optimization


Shape representation plays a key role in many
applications such as image analysis, pattern
recognition, computer graphics and computer aided
animation [Ray and Khan]. A novel shape evolution
scheme has been developed within the group.
Two examples of such shape evolution are presented
here in the context of wing morphing to aid existing
research in flapping wing design.

A Novel Repair Mechanism based on Most Probable Point of Failure
Handling equality constraints is a challenging endeavour for researchers in optimization. A single equality constraint can pose serious difficulties to an optimization algorithm, severely limiting its capability when the size of the feasible search space is small.
This work introduces a novel approach for repairing
infeasible solutions, wherein one or all the solutions
of the population are repaired to yield feasible
solution(s). Subsequently, a suitable classic or
evolutionary optimization procedure can be used to
obtain optimal solution(s). Our [Ray and Saha] current
approach is implemented within a Real-coded Genetic
Algorithm (RGA) framework and the repair method is
based on the idea of Most Probable Point (MPP) (of
failure) which is derived from the context of Reliability
Based Optimization (RBO). Promising results have
been obtained for problems with equality constraints
and ones with active inequalities.

Learning from Evolutionary Algorithm based Design Optimization of Axisymmetric Scramjet Inlets

Optimisation is a key element in today's design processes and there is an ever increasing emphasis on the development of efficient algorithms to deal with computationally expensive optimisation problems. While surrogate assisted optimisation methods are commonly used for such problems, there are few studies that attempt to understand the optimal solutions. A study was undertaken to uncover hidden relationships among the variables in the promising regions of the search space. Such relationships can subsequently be used to separate promising and unpromising designs [Ray, Saha, Boyce, and Ogawa]. The classification ability of the model for a three-objective scramjet inlet design problem is illustrated below (red: unpromising designs, blue: promising designs).

An Evolutionary Multi-objective Scenario-Based Approach for Stochastic Resource Investment Project Scheduling

Many planning problems, such as mission capability planning, can be modelled as project scheduling problems. Unlike conventional deterministic project scheduling problems, such problems involve uncertainty, and the execution of the plan will definitely be perturbed by many factors. In other words, the circumstances under which the plan will be executed are changing and stochastic. In this paper, we [Xiong, Liu, Chen, and Abbass] first use scenarios to represent the stochastic elements in the problem; these are: perturbation strength and perturbation occurrence time. We define and explain the Stochastic Resource Investment Project Scheduling (SRIPS) problem. A multi-objective optimization model of SRIPS is proposed in which three optimization objectives are considered simultaneously: makespan, cost, and robustness. A multi-objective genetic algorithm is employed to solve the problem. Finally, we generate two test problems with 30 and 60 non-dummy activities to validate the performance of the proposed approach and analyze the sensitivity of the results to different parameter settings.

Figure 6: Classification ability of the model across three objectives

Grid-Based Heuristic for Two-Dimensional Packing Problems

To solve two-dimensional (2D) rectangular packing problems, we [Bui, Abbass, Baker, Barlow, Bender, and Sarker] introduce a new spatial method based on the discretization of the container into a grid of cells with a predefined resolution. Before an item is added, grid cells are checked to determine whether they can accommodate the item. If an appropriate empty cell cluster is found, the item is added and moved towards the bottom-left corner of the container. This placement and sliding method is supplemented by a heuristic that orders the items according to descending size. The order and rotation of items can be improved by hybridizing the heuristic with a genetic algorithm (GA) in which a population of order-rotation chromosomes is evolved.
The method is tested on 47 benchmark problems and compared to other methods in the literature. The comparison shows that it is fast and performs very well in finding high quality solutions. Particularly for large problem sizes, it outperforms some of the currently leading methods, such as heuristic recursive (HR). The hybridization with the GA meta-heuristic results in further performance improvements.
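A minimal sketch of the grid-based placement idea (not the authors' implementation, and with the sliding step simplified to direct bottom-left placement) is given below: the container is a boolean occupancy grid, and an item is placed at the first bottom-left position whose cells are all free. The grid resolution and item sizes are illustrative assumptions.

    import numpy as np

    def place_item(grid, item_h, item_w):
        """Place an item of size item_h x item_w at the lowest, left-most free cell cluster.
        Returns the (row, col) of the placement or None if the item does not fit."""
        rows, cols = grid.shape
        for r in range(rows - item_h, -1, -1):            # scan from the bottom row upwards
            for c in range(cols - item_w + 1):            # left to right
                if not grid[r:r + item_h, c:c + item_w].any():
                    grid[r:r + item_h, c:c + item_w] = True
                    return r, c
        return None

    # Toy container of 10 x 10 cells; items ordered by descending size.
    container = np.zeros((10, 10), dtype=bool)
    items = sorted([(4, 5), (3, 3), (2, 6), (1, 4)], key=lambda s: s[0] * s[1], reverse=True)
    for h, w in items:
        print((h, w), "->", place_item(container, h, w))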


User- and Application-Centric Multihomed Flow Management
We addressed the problem of network selection and
flow distribution for a multihomed mobile device.
We argue the benefits of a holistic approach which
considers user- and application-centric metrics
such as quality, energy consumption and monetary
cost, rather than the commonly used network-centric
metrics. We thus introduced the multihomed flow
management problem which combines network
selection, flow distribution and application flow
awareness. We formulated it as a constrained
optimisation problem and compared it to commonly
used techniques: single network selection and load
balancing. For selected interactive applications, we
used empirical network measurements to evaluate the
optimal solutions obtained by the three approaches.
We showed that, by exploiting the flexibility of
application parameters, it is possible to achieve
the potentially conflicting goals of maintaining
high application quality while reducing both the
power consumption and cost of network use. The
investigators of the project are M. Maher (SEIT, UNSW-Canberra), O. Mehani (UNSW, NICTA and INRIA), R. Boreli (UNSW and NICTA) and T. Ernst (INRIA).

GA for Constrained Optimisation


Over the last two decades, many different Genetic
Algorithms (GAs) have been introduced for solving
optimization problems. Due to the variability of the
characteristics in different optimization problems,
none of these algorithms has shown consistent
performance over a range of real world problems.
The success of any GA depends on the design of
its search operators as well as their appropriate
integration. In this research, we (Elsayed, Sarker
and Essam) propose a GA with a new multi-parent
crossover. In addition, we propose a diversity operator
instead of mutation and maintain an archive of good
solutions. To judge the performance of the algorithm,
we have solved not only a set of constrained
optimization benchmark problems but also a variety
of real world optimization problems. The experimental analysis showed that the algorithm converges quickly to the optimal solution and has superior performance compared to other algorithms that also solved those problems. This algorithm received the Best Algorithm Award from the Real-world Optimization Problem Solving Competition organized by the IEEE Congress on Evolutionary Computation in 2011.
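The multi-parent crossover idea can be illustrated with a short sketch. This is not the published operator; it simply shows one common way of combining three parents into offspring using weighted combinations of their variable vectors (the weights and the parent count are illustrative assumptions).

    import numpy as np

    def multi_parent_crossover(parents, n_offspring=3, rng=np.random.default_rng()):
        """Combine several parent vectors into offspring via random weighted combinations."""
        parents = np.asarray(parents, dtype=float)
        offspring = []
        for _ in range(n_offspring):
            w = rng.dirichlet(np.ones(len(parents)))     # random weights summing to 1
            child = w @ parents                          # weighted combination of all parents
            offspring.append(child)
        return np.array(offspring)

    p = [[1.0, 2.0, 3.0], [2.0, 0.5, 1.0], [0.0, 1.5, 2.5]]
    print(multi_parent_crossover(p))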

Kangaroo: An Efficient Constraint-Based Local Search System Using Lazy Propagation
We introduced a constraint-based local search
system, called Kangaroo. While existing systems
such as Comet maintain invariants after every
move, Kangaroo adopts a lazy strategy, updating
invariants only when they are needed. Our empirical
evaluation shows that Kangaroo consistently has a
smaller memory footprint than Comet, and is usually
significantly faster. The investigators of the project are M. Maher (SEIT, UNSW-Canberra), M.A.H. Newton (NICTA and Griffith U.), D. N. Pham (NICTA and Griffith U.) and A. Sattar (NICTA and Griffith U.).
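Lazy propagation of invariants can be illustrated with a small sketch (this is not Kangaroo's actual data structure). An invariant caches its value and is only marked stale when an input variable changes; it is recomputed solely when its value is actually requested.

    class Invariant:
        """A derived value that is recomputed lazily, only when queried after a change."""
        def __init__(self, compute, inputs):
            self.compute = compute        # function of the current variable assignment
            self.inputs = set(inputs)     # variables this invariant depends on
            self.stale = True
            self.cached = None

        def value(self, assignment):
            if self.stale:                # recompute only on demand
                self.cached = self.compute(assignment)
                self.stale = False
            return self.cached

    class Solver:
        def __init__(self, assignment):
            self.assignment = dict(assignment)
            self.invariants = []

        def add_invariant(self, inv):
            self.invariants.append(inv)

        def move(self, var, new_value):
            """Apply a local-search move; affected invariants are only marked stale."""
            self.assignment[var] = new_value
            for inv in self.invariants:
                if var in inv.inputs:
                    inv.stale = True

    # Example: a violation count over x + y = 10, queried only when needed.
    s = Solver({"x": 3, "y": 4})
    violation = Invariant(lambda a: abs(a["x"] + a["y"] - 10), inputs=["x", "y"])
    s.add_invariant(violation)
    s.move("x", 6)                        # no recomputation happens here
    print(violation.value(s.assignment))  # recomputed now: prints 0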

Large Scale Optimisation


It is very difficult for existing algorithms to solve large
problems with many variables. One popular approach
to alleviating these problems is to divide the large
problems into a number of subproblems, and to then
solve these subproblems using independent computer
processors. This can be suboptimal because when
one subproblem is optimised, it may cause one or
more other subproblems to become deoptimised.
This occurs because the variables in one subproblem
interact with those of another. In this research, we
(Hasan, Daryl and Sarker) have identified such dependencies and have tailored the subproblems to limit them. Results so far have supported the merits of this approach.


Figure 7: The effect of the diversity operator on the quality of results

Soft Operations Research and System Dynamics Modelling

Complex problems which outmatch human cognitive capabilities are labelled "wicked" or "messy". Such problems arise in socio-technical, socio-economic and socio-ecological contexts. They confound the most diligent and fervent efforts by leaders of organisations and governments. When confronting such problems, the best we might expect to achieve is to transform an existing problem situation into a form which is more acceptable than that which we currently face. Often our failure to fully understand such problems results in failure to formulate effective intervention strategies. In this research, Soft Operations Research (Soft OR) is combined with System Dynamics (SD) modelling to improve stakeholders' understanding of a wicked problem situation and to facilitate learning about it. The research currently being conducted in the Australian Capital Territory (ACT) by El Sawah, McLucas and Mazanov integrates Soft OR methodologies and SD modelling to aid understanding with a view to ultimately re-shaping the behaviour of water consumers and enabling water resource managers to develop highly effective strategies. The first stage of this research, which involved capturing and analysing how consumers perceive their roles in influencing the dynamic changes in water consumption, is now complete. A similar analysis of the roles of managers has also been completed. The cognitive maps of consumers and managers have been used to guide the development of a set of causal maps, an example of which is included in the Figure below. Causal mapping guided the development of a series of SD models and computer simulations through which players, in the roles of either consumer or manager, set out to discover how the dynamics of water consumption and supply play out over time and how they might intervene to achieve desirable targets. SD models which incorporate both the ACT's water supply and the demand for water have been validated against historical data for the past 30 years. These SD models provided the basis for developing computer simulations through which both consumers and managers have been able to test the efficacy of their own strategies for managing the ACT's limited water resources. A recently completed pilot study demonstrated that players achieved significant learning through playing these computer simulations. During the next stage of this research the computer simulations will be made publicly accessible. This next stage will seek to establish the potential use of computer simulations in enhancing learning and re-framing public attitudes about the consumption of scarce water resources.

OR in Bioinformatics
Multiple sequence alignment is one of the most important problems in molecular biology, as it plays an important role in applications such as life-saving drug design. In this paper, we (Naznin, Sarker and Essam) divide the given sequences into two or more subsequences and then combine them together in order to find better multiple sequence alignments, by applying a new GA based approach to the combined sequences. We also introduce new ways of generating an initial population and of applying the genetic operators. We have carried out experiments on the BAliBASE benchmark database using the sum-of-pairs objective function with the PAM250 score matrix. To evaluate our proposed approach, we have compared it with well-known methods such as T-Coffee, MUSCLE, MAFFT and ProbCons. The experimental results show that better multiple sequence alignments may be obtained with a higher number of divisions; however, the computation time increases with the number of decompositions. The overall performance of the proposed Decomposition with GA (DGA) method is better than the existing methods and the GA method (without decompositions). A paper from this research has been accepted for publication in the IEEE Transactions on Evolutionary Computation (ERA A*) in 2011.
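For reference, the sum-of-pairs objective used above scores an alignment by summing the substitution-matrix scores of every pair of aligned residues in every column. The sketch below is a simplified illustration with a tiny made-up score matrix and gap handling; a real implementation would use the full PAM250 matrix.

    from itertools import combinations

    # Hypothetical mini substitution scores; PAM250 would be used in practice.
    SCORE = {("A", "A"): 2, ("A", "C"): -1, ("C", "C"): 3}
    GAP = -2

    def pair_score(a, b):
        if a == "-" or b == "-":
            return 0 if a == b else GAP
        return SCORE.get((a, b), SCORE.get((b, a), -1))

    def sum_of_pairs(alignment):
        """Sum pairwise scores over all columns of an alignment (list of equal-length strings)."""
        total = 0
        for column in zip(*alignment):
            for a, b in combinations(column, 2):
                total += pair_score(a, b)
        return total

    print(sum_of_pairs(["AC-A", "A-CA", "ACCA"]))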


Opto-Electronics
SEIT Academics
A/Prof Charles Harb
Prof Elanor Huntington
Dr Greg Milford
Prof Ian Petersen
Mr Trevor Wheatley

SEIT Postgraduate Students


Ms Kathryn Conroy
Ms Katanya Kuntz
Mr Mohammad Mabrok
Mr Rohit Ramakrishnan
Mr Karam Chand
Mr Peter Kuffner

SEIT Research Staff


Dr Toby Boyson
Dr Abhijit Kallapur

Other Collaborators
Centre for Quantum Computer Technology, Australian
Research Council
Loyola University New Orleans, LA, USA
M. Calzada, T.G. Spence
Macquarie University, NSW, Australia
Y. He, B.J. Orr
Australian Federal Police, ACT, Australia
K.P. Kirkbride
ACQAO, The Australian National University
J. Janousek, H-A. Bachor
Department of Physics, Denmark
P. Buchhave
Los Alamos National Laboratories, Los Alamos,
NM, USA
D.S. Moore
Department of Applied Physics and Quantum Phase
Electronics Center, School of Engineering, The
University of Tokyo, Japan
A. Furusawa, H. Yonezawa, D. Nakane, H. Arao
Institute for Quantum Computing, University of
Waterloo, Waterloo, ON, Canada
D. W. Berry
Perimeter Institute for Theoretical Physics, Waterloo,
ON N2L 3G1, Canada
D. T. Pope
Department of Physics, Centre for Quantum
Computing Technology, University of Queensland,
QLD, Australia
T. C. Ralph
Centre for Quantum Dynamics, Centre for Quantum
Computing Technology, Griffith University, QLD,
Australia
H. M. Wiseman


Research Description
The Opto-Electronics research group conducts
fundamental research into potentially high-payoff
applications of opto-electronic systems.
The Opto-Electronics group receives financial
support from the Australian Research Council through
the Linkage Projects, Discovery Projects, and Centres
of Excellence Schemes. The group is a member
of the ARC Centre for Quantum Computation and
Communication Technology and receives additional
financial support from the University of New South
Wales and the Australian Federal Police. The following paragraphs describe the group's research during 2011.

Nonlinear estimation of ring-down time for an experimental Fabry-Perot cavity
This research applies the estimation techniques of
a nonlinear discrete time extended Kalman filter
to estimate the ring down time for an experimental
optical cavity for the purpose of cavity ring down
spectroscopy (CRDS). The cavity used is a
Fabry-Perot optical cavity, which is a hollow tube, fitted
with two highly reflective mirrors. When the input laser
frequency matches the resonant frequency of the
cavity, it is said to be in lock with the cavity.
Any deviation between these frequencies is
characterized in terms of the detuning parameter
and is an undesired effect. If the light coupling
into the cavity is interrupted, light inside the cavity
continues to resonate and gradually decays in
intensity. This intensity information is recorded to study
the decay of light inside the cavity as a function of
wavelength. The time taken for the light intensity to decay to 1/e times its initial value is termed the decay time τ. This decay time depends upon the reflectivity of the mirrors mounted inside the cavity and losses due to the sample contained within the cavity, which directly dictate the amount of optical absorption or scatter. Hence, an estimate of τ in such a spectroscopic technique can be used as a molecular detector in chromatographic systems and for applications in molecular fingerprinting, which involves detecting various chemicals, such as explosives and their related compounds.
Although nonlinear least squares methods such as the Levenberg-Marquardt (LM) algorithm can handle system noise effectively, they are known to limit the data throughput to below 10 Hz. Since the estimate of τ is needed in real time, this issue with the throughput of the LM method motivates us to apply better estimation techniques. One such method for the estimation of nonlinear systems is the extended Kalman filter (EKF), which was considered during this research. The cavity was modeled in terms of the amplitude and phase quadrature variables, and the data for the estimation process was obtained from a CRDS experimental setup in terms of the light intensity at the output of the cavity.

The cavity was held in lock with the input laser frequency by controlling the distance between the mirrors within the cavity by means of a proportional-integral (PI) controller. The cavity was purged with nitrogen and placed under vacuum before chopping the incident light at 25 kHz and recording the light intensity at its output. In spite of beginning the EKF estimation process with uncertainties in the initial value for the decay time constant, its estimates converged to well within a small neighborhood of the expected value for the decay time constant of the cavity within a few ring-down cycles.
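The EKF idea can be sketched for a simplified model in which the state contains the cavity output intensity and the decay time, and only the intensity is measured. This is an illustration of the general technique, not the quadrature-variable model used in the research; all tuning values are assumptions.

    import numpy as np

    def ekf_decay_time(measurements, dt, tau0=1e-5, I0=1.0):
        """Illustrative EKF that jointly estimates the intensity and decay time of
        an exponential ring-down from noisy intensity samples."""
        x = np.array([I0, tau0])                       # state: [intensity, tau]
        P = np.diag([1e-2, 1e-10])                     # initial covariance
        Q = np.diag([1e-8, 1e-16])                     # process noise
        R = 1e-4                                       # measurement noise variance
        H = np.array([[1.0, 0.0]])                     # only the intensity is observed
        for y in measurements:
            I, tau = x
            decay = np.exp(-dt / tau)
            # Predict: the intensity decays, tau is (nearly) constant.
            x = np.array([I * decay, tau])
            F = np.array([[decay, I * decay * dt / tau**2],
                          [0.0, 1.0]])
            P = F @ P @ F.T + Q
            # Update with the measured intensity.
            S = H @ P @ H.T + R
            K = P @ H.T / S
            x = x + (K * (y - x[0])).ravel()
            P = (np.eye(2) - K @ H) @ P
        return x[1]                                    # estimated decay time

    # Synthetic ring-down with tau = 5e-6 s, sampled at 10 MHz.
    true_tau, dt = 5e-6, 1e-7
    t = np.arange(0, 5 * true_tau, dt)
    data = np.exp(-t / true_tau) + 1e-2 * np.random.randn(len(t))
    print(ekf_decay_time(data, dt))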
Figure 3: A comparison of EKF and LM estimation results for τ at the end of each ring-down cycle, plotted against the expected true value for τ.

A Stability Result on the Feedback Interconnection of Negative Imaginary Systems with Poles at the Origin

Highly resonant structural modes in machines and robots, ground and aerospace vehicles, and precision instrumentation, such as atomic force microscopes and optical systems, can limit the ability of control systems to achieve a desired level of performance. This problem is simplified to some extent by using force actuators combined with collocated measurements of velocity, position, or acceleration.

Figure 1: Proposed CRDS setup with an EKF estimator and a controller.

Figure 2: Block diagram of the CRDS experimental setup. The red and green lines represent optical signal paths, whereas the blue line represents the path for electronic signals. ISO is the Faraday isolator; MMO are mode matching optics; HWP are half wave plates; EOM is the electro-optic modulator; AOM is the acousto-optic modulator; MOD1 is the RF generator and amplifier for phase modulation; MOD2 is the signal generator and amplifier used to generate the chopping waveform; M1 and M2 are beam steering mirrors; PCB is a polarizing cube beamsplitter; PD are photodetectors; QWP is a quarter wave plate; SERVO is the controller; HV AMP is a +/-200 V amplifier used to drive PZT, the piezoelectric actuator that controls the cavity length.

The use of force actuators combined with velocity measurements has been studied using positive real (PR) systems theory for linear time invariant (LTI) systems. Many systems that dissipate energy fall into the category of PR systems. For instance, they can arise in electric circuits with linear passive components and magnetic couplings. However, PR theory cannot be used in the cases of position or acceleration measurements. At the same time, position measurements have become widely used, especially in nanotechnology applications known as nano-positioning systems. The use of force actuators combined with position and acceleration measurements can be studied using negative imaginary (NI) systems theory.
Many practical systems can be considered as NI systems. For example, such systems arise when considering the transfer function from a force actuator to a corresponding collocated position sensor (for instance, a piezoelectric sensor) in a lightly damped structure. Also, cavity locking in optical cavity experiments can be formulated as an NI system, since the PZT in this system is an actuator collocated with the cavity, which is the equivalent of a position sensor.
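For reference, the negative imaginary property referred to here is commonly stated in the literature roughly as follows (included for context rather than as the precise definition adopted in this work):

    A square, real-rational, proper transfer function matrix $G(s)$ with no poles in the
    closed right half plane is said to be negative imaginary if
    \[
        j\left[\, G(j\omega) - G(j\omega)^{*} \,\right] \;\geq\; 0
        \quad \text{for all } \omega \in (0,\infty),
    \]
    where $(\cdot)^{*}$ denotes the complex conjugate transpose. The extension discussed
    below relaxes the requirement that $G(s)$ have no poles at the origin.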


In this work, a new NI definition is presented to capture the class of systems that have poles at the origin, and new stability conditions are derived for this case. The importance of this extension is that the new formulation of NI theory allows many engineering systems and applications to be treated as NI systems. Such applications include low-friction, free rigid-body motion, such as single-axis spacecraft rotation, rotary crane motion, flexible link manipulators, and dual-stage hard disk drives.

Enforcing Negative Imaginary Dynamics on Mathematical System Models

Since flexible structures with collocated force actuators and position sensors are typically NI, NI systems theory can be effectively applied to these systems. For systems involving flexible structure dynamics, it may be difficult to obtain an exact system model by constructing differential equations from first principles. An alternative method for obtaining a mathematical system model is system identification. However, the resulting mathematical model may not exactly describe the true dynamics of the underlying system.
Identified system models can sometimes lead to mathematical models that do not reflect the actual characteristics of the underlying system. For example, the process of system identification, when applied to linear time-invariant (LTI) systems which are known to be NI, might lead to a model which is not NI. The same problem occurs in the identification of passive systems, and is typically due to the basis parametrization imposed by system identification. In such cases, the system model should be perturbed to enforce the underlying NI dynamics.
This work provides two methods for enforcing NI dynamics on such mathematical models, given that it is known that the underlying dynamics ought to belong to this system class. We also present an application of the NI enforcement schemes to a practical system arising in cavity ring down spectroscopy.


The Development of Super Clip Mathematics for the Fourier Transform Infrared Spectrometer
The availability of a wide frequency bandwidth as
well as rugged instrument design make Fourier
transform infrared (FT-IR) spectroscopy a reliable
option for remote detection applications. However, a
commonly encountered difficulty in field applications
of this technology is the collection of a background
spectrum representative of the environment being
sampled. Temperature disparities or temporal
differences between the spectra may result in
artifacts in the subsequent absorbance spectrum.
Atop the issue of quality background acquisition,
there are other standard complications involved
with substance identification in a non-stationary
environment. Baseline drift, excess signal noise,
extraneous absorbance features and wavenumber
shifting are among some of the most commonly
encountered issues.
The development of super clip mathematics addresses the issue of background collection by manipulating the FT-IR raw data in a manner that enables the calculation of an absorbance spectrum from a single interferogram. Super clip apodization (SCA) has previously been discussed in the literature as a method to calculate a background spectrum using only the central burst of the interferogram, but has all but been dismissed because of the generally narrow field of applicability suggested in the literature. Complementary super clip apodization (CSCA) is a sister technique to SCA in which the central burst of the interferogram is omitted and spectral features on a generally flat baseline remain.
Both SCA and CSCA fall under the umbrella analysis technique in active development at UNSW Canberra, called super clip mathematics. These methods can be used individually or in combination to calculate a spectrum, but their successful implementation relies on the use of an iterative spectral comparison routine. Figure 4 is a process flow diagram of the super clip mathematics algorithm in its current manifestation. In this case, SCA and CSCA are used in combination, so the lengths of the complementary truncation functions are optimized in series. Absorbance spectra are iteratively calculated and compared to a reference using an evaluation metric such as Euclidean distance or the Pearson correlation coefficient. The spectrum calculated with the best score is selected as optimal. Therefore, the user must have a priori knowledge of the analyte they intend to detect.
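The iterative spectral comparison loop can be sketched as follows. This is only a schematic of the idea described above: truncate the interferogram to a candidate length, compute a spectrum, and score it against a reference with the Pearson correlation, keeping the best length. The truncation and spectrum calculation here are deliberately simplified and are not the actual SCA/CSCA mathematics.

    import numpy as np

    def spectrum_from_truncation(interferogram, keep):
        """Simplified stand-in: keep only `keep` points around the central burst, then FFT."""
        clipped = np.zeros_like(interferogram)
        centre = np.argmax(np.abs(interferogram))
        lo, hi = max(0, centre - keep), min(len(interferogram), centre + keep)
        clipped[lo:hi] = interferogram[lo:hi]
        return np.abs(np.fft.rfft(clipped))

    def best_truncation_length(interferogram, reference_spectrum, lengths):
        """Pick the truncation length whose spectrum best matches the reference."""
        best_len, best_score = None, -np.inf
        for k in lengths:
            spec = spectrum_from_truncation(interferogram, k)
            score = np.corrcoef(spec, reference_spectrum)[0, 1]   # Pearson correlation
            if score > best_score:
                best_len, best_score = k, score
        return best_len, best_score

In the actual algorithm this search is performed in series for the SCA and CSCA truncation lengths, as described in the flow of Figure 4.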

Current research suggests that the combination of SCA to calculate a background spectrum with CSCA to calculate a sample spectrum is more robust than SCA as previously discussed in the literature. This is largely due to the judicious optimization of the truncation lengths of the interferogram. Furthermore, in a quantitative study of the nitromethane asymmetric stretch, it has been concluded that super clip mathematics is applicable to analytes with a full width at half maximum (FWHM) significantly larger than originally postulated. This study has also demonstrated the robustness of the method against extraneous noise as well as wavenumber shifting.
Time domain analysis in this fashion is generally difficult, as interferometric manipulation parameters are often interdependent. However, a current direction of this work is to investigate and characterize these interrelationships, such as those that exist between the calculated spectrum and various instrumental configurations. Furthermore, the motivation to apply super clip mathematics to real-time applications requires further optimization of the iterative spectral comparison routine and eventual implementation on a digital signal processor (DSP) or field programmable gate array (FPGA) device.

[Figure 4 flow steps: Forman phase correction; load reference spectrum; for k = 1:1:L, SCA: G(o) x N(1:k), calculate absorbance, score the result and compare to the reference; calculate B(o) with optimal L; for k = 1:1:L, CSCA: G(o) x R(1:k), calculate absorbance using optimized B(o), score the result and compare to the reference.]

Improved Signal Processing for Cavity Ringdown Spectroscopy

Cavity ring-down spectroscopy (CRDS) is a sensitive spectroscopic technique that can be used to measure absorption due to weakly absorbing or dilute samples. In a CRDS measurement, light (generally from a laser, although broadband techniques have been demonstrated) is coupled into an optical cavity formed by two or more mirrors. Upon extinguishing the incident light, the field within the cavity, I(t), decays exponentially. Traditional instruments focus on fitting this decay using, typically, non-linear least squares fitting algorithms; this places a limit on the throughput, and thus the achievable sensitivity, of the technique.
We have developed a Fourier-transform based signal processing method for laser-locked Continuous Wave Cavity Ringdown Spectroscopy (CWCRDS). Rather than analysing single ringdowns, as is the norm in traditional methods, we amplitude modulate the incident light and analyse the entire waveform output of the optical cavity; our method has more in common with Cavity Attenuated Phase Shift Spectroscopy than with traditional data analysis methods. We have compared our method to Levenberg-Marquardt non-linear least squares fitting, and have found that, for signals with a noise level typical of that from a locked CWCRDS instrument, our method has comparable accuracy and comparable or higher precision. Moreover, the analysis time is approximately 500 times faster (normalised to the same number of time domain points). Our method allows us to analyse any number of periods of the ringdown waveform at once; this allows the method to be optimised for speed and precision for a given spectrometer.
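One way to see why a whole-waveform, frequency-domain analysis can be fast is the classic phase-shift relationship used in cavity attenuated phase shift measurements: a cavity with decay time tau behaves like a first-order low-pass filter, so a sinusoidal intensity modulation at angular frequency omega emerges with a phase lag phi satisfying tan(phi) = omega*tau. The sketch below extracts tau from that phase lag using a single FFT; it illustrates the principle only and is not the group's actual CWCRDS algorithm.

    import numpy as np

    def tau_from_phase(mod_signal, cavity_output, sample_rate, mod_freq):
        """Estimate the cavity decay time from the phase lag of the output at the
        modulation frequency (first-order low-pass model: tan(phi) = omega * tau)."""
        n = len(mod_signal)
        k = int(round(mod_freq * n / sample_rate))      # FFT bin of the modulation frequency
        phase_in = np.angle(np.fft.rfft(mod_signal)[k])
        phase_out = np.angle(np.fft.rfft(cavity_output)[k])
        phi = phase_in - phase_out                      # phase lag introduced by the cavity
        return np.tan(phi) / (2 * np.pi * mod_freq)

    # Synthetic check: a 25 kHz modulation filtered by a cavity with tau = 5e-6 s.
    fs, f_mod, tau = 10e6, 25e3, 5e-6
    t = np.arange(0, 0.01, 1 / fs)
    drive = 0.5 * (1 + np.sin(2 * np.pi * f_mod * t))
    phi_true = np.arctan(2 * np.pi * f_mod * tau)
    out = 0.5 * (1 + np.cos(phi_true) * np.sin(2 * np.pi * f_mod * t - phi_true))
    print(tau_from_phase(drive, out, fs, f_mod))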

[Figure 4 flow steps, continued: calculate D(o) with optimal L; calculate the final optimized spectrum with optimized B(o) and D(o).]

Figure 4: Process flow diagram for the iterative spectral comparison algorithm. In this scenario, SCA and CSCA are being used in combination.

Locked Cavity Attenuated Phase Shift Spectroscopy

Cavity Attenuated Phase Shift Spectroscopy (CAPS) is a variant of Cavity Ringdown Spectroscopy (CRDS) and was the first of the cavity enhanced methods to be applied to measuring mirror reflectivities, but was the last to be applied to spectroscopy. A CAPS measurement offers several advantages over a traditional CRDS measurement, including a high throughput and a narrow detection bandwidth. CAPS has, however, never become popular in the literature: while PCRDS and CWCRDS rapidly grew in popularity after the original papers, with many improvements on the original techniques, CAPS experiments still use the same setup as that of the original work in the early 1980s. This lack of development has resulted in the sensitivity of the technique not improving in the same way as it has for CRDS, where modern spectrometers are several orders of magnitude more sensitive than the original instruments.


Robust quantum phase estimation of a weak coherent state of light

Quantum parameter estimation (QPE) is the problem of estimating an unknown classical parameter, usually an optical phase shift, of a quantum system. QPE is at the heart of many fields such as gravitational wave interferometry, quantum computing and quantum key distribution. Recently, it has been demonstrated experimentally that an adaptive quantum smoothing technique, employed for estimation of the dynamically varying stochastic phase shift of a weak coherent state, yields an estimate with a mean-square error up to 2.24 ± 0.14 times smaller than non-adaptive filtering (the standard quantum limit). Quantum smoothing is a time-symmetric estimation technique that takes into account both past and future observations and can be more precise and accurate than filtering alone, which considers only past measurements.

In order to make the filter robust to uncertainties in one of the underlying parameters, the Riccati equation approach to building a guaranteed quadratic cost state estimator for linear uncertain systems was adopted. Figure 6 (a, b and c) shows the comparison of the mean-square estimation error between the robust filter and the Kalman filter for 5%, 50% and 80% uncertainties, respectively. For 5% uncertainty, there is not much noticeable difference between the two cases. However, with increased uncertainties of 50% and 80%, the Kalman filter deteriorates significantly compared to the robust filter at the positive end of the uncertainty window. Furthermore, for the 80% case, the robust filter beats the standard quantum limit throughout, unlike the Kalman filter.

We have developed a new variant of CAPS that involves actively locking the incident light to one of the resonant modes of a moderately high finesse cavity. Rather than using a lock-in amplifier and a ratiometer to measure the phase shift imparted by the cavity, as is the norm in traditional CAPS experiments, we have developed digital signal processing that should allow us to process the data from our instrument in real time. We report the best sensitivity for a CAPS measurement, a Minimum Detectable Absorption Limit (MDAL) of the order of 10^-10 cm^-1 Hz^-1/2, achieved with only 4 milliseconds of data; this measurement is two orders of magnitude better than any sensitivity reported in the literature.



Figure 6a. Comparison of Kalman and Robust filters for 5% uncertainty.


Figure 5 illustrates the comparison of the mean-square phase estimation error for the Kalman filter with that for the existing filter. The graph is a plot of the mean-square estimation error versus a suitably scaled parameter of the underlying model. As can be seen, in the lower-value regime of the considered parameter, the existing filter performs as well as the optimal Kalman filter; however, as the parameter value rises, the Kalman filter has significantly lower mean-square error, and therefore superior performance, compared to the existing filter. The red vertical line indicates the value of the parameter used for the adaptive experiment.


Figure 5. Comparison between Kalman and existing filters.


The aforementioned adaptive experiment made use of a feedback filter that is sub-optimal in a more general setting for the noise process considered. The new research described here modeled the noise process and the measurement involved in the experiment in the Kalman filtering framework, in order to design the feedback filter that would be optimal in the general case. Also, it is physically unreasonable to set the parameters underlying the noise process absolutely precisely to their desired values. Hence, it is desirable to further extend the Kalman filter model to allow for uncertainty in the underlying parameters and re-design the feedback filter so that it is robust against uncertainty in the linear model.


Figure 6b. Comparison of Kalman and Robust filters for 50% uncertainty.


Figure 6c. Comparison of Kalman and Robust filters for 80% uncertainty.

Full analysis of Schrodinger kitten generation

Optical cat states can be generated using a photon-subtracted squeezed vacuum state. A strong signature of the quality of such a state is a negative Wigner function. A cat state with a negative-valued Wigner function has been experimentally demonstrated around the wavelength of 860 nm. However, the experimental results for cat state generation at 1550 nm are still limited by imperfections in the experiment such as the non-photon-number-resolving ability, inefficiency and dark counts of the projected photon number detector, which have been discussed by many researchers. But even when these factors are taken into account, the experimental results reported at 1550 nm are still not as good as those at 860 nm. Therefore, we aim to develop a more comprehensive model of the experiment to quantitatively investigate the impact of experimental imperfections.
A mathematical model was developed that covers the possible imperfections of the input state, the projected photon number detector and the interactions between them. All these imperfections degrade the properties of the Schrodinger cat state. We analyzed our experimental results based on the developed model. Figure 7 shows the photon number distribution of the input state as reconstructed from the experimental data, which is quite similar to the photon number distribution predicted by our model as shown in Figure 8. The predicted photon number distribution for the photon-subtracted squeezed vacuum is shown in Figure 9, which is also similar to the experimental results.


Non-Gaussian states such as Schrodinger cat states have attracted intense interest for quantum continuous-variable (QCV) information processing, since they provide a basis for entanglement distillation, universal quantum computing, and proposed loophole-free tests of Bell's inequalities. Schrodinger cat state generation at telecommunication wavelengths is particularly important for long-distance quantum key distribution (QKD).

Figure 7. Predicted photon number distribution of squeezed vacuum state.


Figure 8. Measured photon number distribution of squeezed vacuum state.


Figure 9. Predicted photon number distribution of photon-subtracted squeezed vacuum state.


Remote Sensing
SEIT Academics
Dr Xiuping Jia
Dr Mike Ryan
A/Prof Mark Pickering
A/Prof Donald Fraser
Dr Andrew Lambert
A/Prof Tuan Pham

SEIT Visiting Fellows


Dr Jihao Yin
Dr Xiaofeng Li

SEIT Postgraduate Students


Miss Chandrama Dey
Mr Md Al Mamun
Mr Mahmudul Hasan
Mr Guangyun Zhang
Mr Md. Ali Hossain

SEIT Practicum Students


Miss Xi Zhang

Other Collaborators
Faculty of Engineering, UNSW
A/Prof Linlin Ge
Dr Ngai Kwok
Australian National University
Prof John Richards
Geoscience Australia
Dr Adam Lewis
Dr John Schneider

Collinearity Effect in Spectral Unmixing

Mixed pixels exist widely in remotely sensed images due to the inherent heterogeneity of land surfaces and the relatively coarse spatial resolution of remote sensing sensors. The extensive presence of mixed pixels results in the failure of traditional hard-classification methods, in which a pixel is assumed to belong to only a single ground cover type. Soft-classification techniques were then developed, in which the abundances (fractions) of the components (endmembers) present in the mixed pixels are quantified. A number of spectral mixture analysis (SMA) methods have been developed over the past decades. SMA has the same mathematical form as multivariate regression analysis, where collinearity is a common problem. We investigate the collinearity effects on the inversion accuracy in SMA quantitatively and analyze the increased collinearity in the nonlinear SMA model in particular. (X. Chen, et al., A quantitative analysis of virtual endmembers' increased impact on the collinearity effect in spectral unmixing, IEEE Transactions on Geoscience and Remote Sensing, vol. 49, pp. 2945-2956, 2011.)
The collinearity of linear SMA (LSMA) is often not very strong, because only distinctive spectra are usually selected as endmembers. However, the collinearity of nonlinear SMA (NSMA) should be considered seriously, because the virtual endmembers formed by the interaction terms can be highly correlated with the true endmembers. The experimental results show a strong increase in the variance inflation factor (VIF) when the nonlinear model is used. Simulated results also showed that the NSMA with high VIF was more sensitive to Gaussian noise. The inversion accuracy of NSMA dropped with increasing noise level, even to the point where LSMA performed better.
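The variance inflation factor mentioned above is straightforward to compute: each endmember signature is regressed on the other endmembers and VIF = 1 / (1 - R^2). The sketch below assumes an endmember matrix with one column per endmember, and uses illustrative synthetic data rather than the study's spectra.

    import numpy as np

    def variance_inflation_factors(E):
        """VIF of each column of the endmember matrix E (bands x endmembers)."""
        n_bands, n_end = E.shape
        vifs = []
        for j in range(n_end):
            y = E[:, j]
            X = np.column_stack([np.ones(n_bands), np.delete(E, j, axis=1)])
            beta, *_ = np.linalg.lstsq(X, y, rcond=None)
            resid = y - X @ beta
            r2 = 1.0 - resid @ resid / ((y - y.mean()) @ (y - y.mean()))
            vifs.append(1.0 / (1.0 - r2))
        return np.array(vifs)

    # Example: two endmember spectra plus a "virtual" endmember formed by their product,
    # which is typically correlated with the true endmembers and so inflates the VIF.
    rng = np.random.default_rng(0)
    a, b = rng.random(50), rng.random(50)
    E = np.column_stack([a, b, a * b])
    print(variance_inflation_factors(E))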

Harbin Institute of Technology, China


Dr Ye Zhang
Harbin Engineering University, China
Dr Liguo Wang
Beijing Normal University, China
Dr Jin Chen

Research Description
Remote sensing has been used widely ranging from
weather forecasting to land cover change monitoring.
It is a field of technology in which sensors are
mounted on aircraft or spacecraft platforms and
used to acquire images of regions on the earth's surface. Each optical image records the reflectance of the solar spectrum over a particular range of wavelengths.
Typically the hyperspectral images are generated
by recording the reflectance of ground cover types
with approximately 200 spectral bands. The data
produced will be processed by computer to extract
valuable information for various applications.
There are three issues in remote sensing data
interpretation: data compression and transmission,
data correction and data analysis. These issues are
investigated by this group.


Feature Extraction for Efficient Hyperspectral Image Classification Mapping

With the advent of hyperspectral remote sensors, hundreds of narrow contiguous spectral bands/features can now be captured to provide greater detail on the spectral variation of targets than conventional multispectral systems. For instance, the AVIRIS sensor simultaneously measures 224 bands with a fine spectral resolution of approximately 0.01 μm. However, at present this high dimensional data poses a major challenge for traditional classification methods. Moreover, some bands are highly correlated and not important for a specific application. On the other hand, as the feature space dimension increases, if the size of the training data does not grow correspondingly, a reduction in the classification accuracy of the test data is observed due to poor generalization of the supervised classifier. This effect is known as the Hughes phenomenon. Therefore, the aforementioned problem needs to be addressed by efficient dimensionality reduction.

Two approaches are available to overcome the above problem. One is to apply a feature reduction method to reduce the dimensionality of the input data, and the other is to modify the classifier design so that it is suitable for large data sizes. Feature reduction can be done by selecting an important subset of the original bands (feature selection) or by transforming the input data to a new space (feature extraction). Some conventional feature reduction approaches, such as PCA, LDA, and the J-M and Bhattacharyya distance measures, present obvious limitations and drawbacks: they depend on the training data, treat classes in a pair-wise manner, and are only reliable for normal-like data. To address this and increase the classification accuracy, we proposed a dimensionality reduction method (Hossain, Jia and Pickering, IGARSS 2011) which combines feature extraction using PCA with feature selection from the resulting principal components using a mutual information measure. The proposed MI-PCA method selects transformed features (PCs) with higher values of MI, as MI measures the relevance of the principal components with respect to the input classes. Experiments were conducted to evaluate the performance of the MI-PCA method when compared with the standard MI and PCA approaches. The results show that the proposed MI-PCA approach can identify features that achieve 80% classification accuracy (the best among the three) for the test data.
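A minimal sketch of the MI-PCA idea, under the assumption that it amounts to (1) projecting the data onto principal components and (2) ranking those components by their mutual information with the class labels, is shown below using scikit-learn. The function names, data and the numbers of retained features are illustrative, not the published settings.

    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.feature_selection import mutual_info_classif

    def mi_pca_features(X, y, n_components=20, n_selected=5):
        """Project spectra onto principal components, then keep the components
        with the highest mutual information with the class labels."""
        pcs = PCA(n_components=n_components).fit_transform(X)
        mi = mutual_info_classif(pcs, y, random_state=0)
        selected = np.argsort(mi)[::-1][:n_selected]
        return pcs[:, selected], selected

    # Toy example: 200 "pixels" with 50 bands and 3 classes.
    rng = np.random.default_rng(0)
    y = rng.integers(0, 3, size=200)
    X = rng.normal(size=(200, 50)) + y[:, None] * 0.5
    features, idx = mi_pca_features(X, y, n_components=10, n_selected=3)
    print("selected principal components:", idx)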

Multimodal Remote Sensing Image Registration

Image registration is one of the important pre-processing steps for data fusion, where two or more image data sets are combined for information retrieval. When image data are recorded by sensors on satellites or aircraft, they contain errors in geometry. The sources of these geometric errors include the rotation of the Earth, the curvature of the Earth's surface, and uncontrolled variation in the position and attitude of the remote sensing platform. These errors need to be rectified via image registration before data fusion can be conducted. Automatic image registration can be classified into two categories: feature based image registration and intensity based image registration. Automatic intensity based image registration for images captured by different sensors usually requires the use of information-theoretic similarity measures such as mutual information (MI). Recently a new similarity measure known as Cross-Cumulative Residual Entropy (CCRE) has been proposed for multi-modal image registration in medical imaging applications. We investigated the use of CCRE for multi-sensor registration of remote sensing imagery, including the extreme case of registering synthetic aperture radar (SAR) images to optical images. We also proposed a novel extension to the Parzen-window optimization approach proposed by Thévenaz, which involves applying partial volume interpolation in the calculation of the gradients of the similarity measure (M. Hasan, M. Pickering, X. Jia, TGRS 2012, Volume 50, Issue 10).

Our experimental results showed that the proposed approach, which uses CCRE as the similarity measure and partial volume interpolation in the optimization procedure, provides superior performance to the other approaches investigated.
The scale invariant feature transform (SIFT) is a widely used method for feature based image registration and object recognition. The SIFT method is well known for its ability to identify objects at varying scales and rotations among clutter and occlusion with very fast processing times. The application of SIFT to multi-modal remote sensing images for image registration purposes, however, often results in inaccurate and sometimes incorrect matching. Commonly, a very large number of feature points are generated from a remote sensing image but only a very small number of feature points are matched, giving a high false alarm rate. We proposed a method containing several modifications to improve SIFT feature matching by adapting it to the characteristics of remote sensing images (M. Hasan, M. Pickering and X. Jia, IGARSS 2012, accepted). The proposed method leads to more matching points with a significantly higher rate of correct matches.

Spectral-Spatial based Remotely Sensed Image Classification

Incorporating spatial information into remote sensing image classification is an important way to improve classification accuracy. Two important sources of spatial information are texture and contextual information. For texture, the Gabor filter is an effective way to extract texture information. Much of the value of a remote sensing image comes from its rich spectral information, meaning that it contains multispectral or hyperspectral information with several or even hundreds of bands. If texture bands from Gabor filters are added to the data source, the higher dimensionality becomes a significant problem for classification. A feature extraction method called Kernel Local Fisher Discriminant Analysis is used to solve this problem. This method improves traditional Fisher Discriminant Analysis by introducing a nonlinear kernel and, at the same time, integrating locality preserving projection to keep the data structure (Zhang and Jia, IGARSS 2011). This approach integrates the spectral and texture information while reducing the complexity of the classification. Fig. 1 shows a comparison of classification results with different methods.
For the contextual information, a method called super-pixel based classification is used to integrate spatially neighbouring information into the classification procedure. This method connects segmentation and classification together and does not need to generate an accurate segmentation, which is as difficult as accurate classification. A simple majority vote is used to apply the contextual information from the super-pixels in the post-processing of the pixel based classification (Zhang, Jia and Kwok, CISP 2011). The salt-and-pepper phenomenon in the pixel based classification has been reduced significantly.
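The super-pixel majority vote described above is simple to express in code: within each segment, every pixel is reassigned the most frequent class label of that segment. The sketch below assumes a per-pixel label map and a segmentation map of the same shape (both arrays are illustrative).

    import numpy as np

    def superpixel_majority_vote(labels, segments):
        """Reassign each pixel the majority class label of its super-pixel."""
        smoothed = labels.copy()
        for seg_id in np.unique(segments):
            mask = segments == seg_id
            values, counts = np.unique(labels[mask], return_counts=True)
            smoothed[mask] = values[np.argmax(counts)]
        return smoothed

    # Toy 4 x 4 example: two super-pixels, with a few salt-and-pepper errors.
    labels = np.array([[1, 1, 2, 2],
                       [1, 0, 2, 2],
                       [1, 1, 2, 1],
                       [1, 1, 2, 2]])
    segments = np.array([[0, 0, 1, 1],
                         [0, 0, 1, 1],
                         [0, 0, 1, 1],
                         [0, 0, 1, 1]])
    print(superpixel_majority_vote(labels, segments))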

Figure 1: (a) The original DC Mall image. The Maximum Likelihood classification results (b) using reduced features based on PCA, (c) using reduced spectral features based on kernel LFDA, and (d) using reduced spectral features plus colour texture features based on kernel LFDA.

Adaptive Markov Random Field (MRF) Approach for Classification of Hyperspectral Imagery

There are two issues when using a conventional MRF for the classification of hyperspectral images. First, when the spectral term uses an MLC, reliable estimation of the class covariance is difficult for hyperspectral data if an inadequate number of training samples is available. Second, the spatial weighting coefficient controls the contribution of the spatial term in the MRF approach, and how to select it automatically is challenging. The spatial component of a conventional MRF is implicitly based on the assumption that the neighboring pixels have the same class labels as the central pixel. Each pixel uses the same weighting coefficient for its spatial term, regardless of whether it is on a class boundary or within a homogeneous region. In other words, the spatial effect is treated equally for all pixels. As a result, with a given weight, classification accuracy is improved in homogeneous regions, but pixels at class boundaries are at risk of overcorrection.
An adaptive-MRF (a-MRF) approach for spectral-spatial classification of hyperspectral imagery was developed (B. Zhang, et al., Adaptive Markov Random Field approach for classification of hyperspectral imagery, IEEE Geoscience and Remote Sensing Letters, vol. 8, pp. 973-977, 2011). Figure 2 is the flow chart of the proposed method. We introduce a relative homogeneity index (RHI) and use this index to find a suitable weighting coefficient for the spatial contribution of each pixel, in order to improve classification performance. Experiments using both synthetic and real hyperspectral data sets demonstrated the improved performance.


Figure 2. Flowchart of the proposed a-MRF procedure.

Social Networks Group (SNG)

SEIT Academics
Dr Rob Stocker
Prof. Hussein Abbass
Mrs Jenny Backhouse
Dr Michael (Spike) Barlow
Dr Gary Millar
Dr Ed Lewis
Dr Tim Turner

SEIT Postgraduate Students


Ms Helen Gilroy
Mr HC Lim
Mr David Kernot
Ms Sue Burdekin
Ms Mai Shouman

Collaborators
INTERNATIONAL
University of Arizona (Tucson AZ USA)
Prof H Randy Gimblett (School of Renewable Natural
Resources)
University of Southern California (Los Angeles CA
USA)
Prof Thomas W Valente (Keck School of Medicine)
University of Aizu (Aizu-Wakamatsu-shi, Fukushima-ken, JAPAN)
Dr Henry Larkin
AUSTRALIA
Australian National University (Canberra ACT)
Dr Rob Ackland (The Australian Demographic and Social
Research Institute ADSRI)
Dr Jennifer Badham (National Centre for Epidemiology and
Population Health)
Dr Al Klovdahl (Sociology, College of Arts and Social
Sciences)
Dr Dirk Van Rooy (Department of Psychology)
Charles Sturt University (Bathurst NSW)
Prof Terry Bossomaier (Centre for Research in Complex
Systems - CRiCS)
CSIRO (Newcastle)
Dr David Cornforth (CSIRO Energy Centre)
Monash University (Clayton Vic)
Prof David G Green (School of Information Technology CSSE)

Industry Collaboration Projects
NATIONAL
Dr Sean Bergin (DSTO, Effects-based Modelling Unit, Edinburgh South Australia)

Research Description
The Social Networks Group (SNG) conducts
theoretical and applied research that is primarily
focussed on network structure and function.
Its foundations lie in considering social organisation,
communication and interaction and how that research
can be used to describe and/or predict complex
(social) system behaviour. A regular newsletter is to
be circulated to members via the official web site and
Wiki (under construction) that provides more detailed
information about SNG.

Mission
The Social Network Group (SNG) will provide a
service to social network researchers that enables key
processes for effective collaboration, communication,
project initiation and completion, publication, and
promotion over national and international boundaries.

Research Projects
Landscape and behavioural factors in cross-border migration
National security has become an increasingly important concern for countries whose borders adjoin, or are in close proximity to, other countries whose political, ideological and cultural perspectives are very different. In particular, where such differences seem to encourage illegal migration and/or criminal activity, management of such trafficking is of great concern to the respective governments. What factors influence such individual and group behaviour? We (Dr Rob Stocker, Prof Randy Gimblett and Dr Spike Barlow) explore similarities and differences between events in the USA and Mexico and in Australia and Indonesia for valuable insights. From these insights we develop a multi-agent simulation model to examine (in silico) the landscape factors and patterns of behaviour, for application to other locations.
We propose to take a serious-game approach to
developing the model so as to capture the strategic
and tactical planning of the players. This project
(initiated in 2009 and supported by SSP Leave in
2010) is continuing through 2011.



Analysis of social structure in aberrant social groups
People congregate and merge together to form
large groups for a variety of reasons, for example at
sporting events, entertainment venues, in recreational
activities and others. Crowd behaviour can move from
being benign to destructive (to the environment, to others and to self) within short timeframes depending
on a variety of stimuli. Some groups deliberately
engage in anti-social and confrontational behaviour
whilst others deliberately engage in practices that
are intended to harm and destroy. What network
structures and patterns of communication are evident
in groups whose activities focus on an intent to
disrupt accepted social order? This project, initiated in 2010 (Ms Helen Gilroy, Dr Rob Stocker, Dr Tim Turner, Dr Spike Barlow and Dr Ed Lewis), has evolved
to concentrate on terrorist and criminal networks
and employs novel data extraction and analysis
methodologies to conduct comparative analyses
of the network characteristics. Our results so far
demonstrate that social network analysis (SNA) is
productive for the examination of such group activity
(Figure 1). It includes comparisons between different
aberrant social groups and the structure of member
interactions that lead to their activities. This will
enable the construction of explanatory and predictive
simulation models. The project will be completed by
March 2012.


Figure 1. Network Map showing identified cut-points and critical bridge.

Trust and social moral norms and the impact of networks and agents
Mr HC Lim completed his PhD thesis and submitted it for examination in January 2011; his doctorate will be confirmed at the 2011 Graduation Ceremony. The project, Interplay of Ethical Trust and Social Moral Norms in Signal-Behaviour Computational Social Processes: An Investigation of Agents and Networks Effects, developed a heuristic formalism to examine the process-structure interplay between agent-based and network-based effects on social moral norms and trust behaviours. The simulation (Figure 2) demonstrated that patterns of connectivity and agent characteristics have significant influence.

Figure 2. Conventional findings suggest that the underlying structure of social networks has assortative
mixing of degrees. Here, experimental results show both assortative (on the left) and disassortative (on
the right) mixing of degrees with different network densities.


Affective and cognitive constructs in social networks

Members of social groups bring different life-skills, knowledge and capabilities to any groups
to which they belong. Such experience is important
in the formation of the group and its maintenance.
In particular, the influence of peer pressure on
adolescent students in the uptake of cigarette
smoking (and other substance experimentation) has
been shown to be significant. The SNG team (Mr David Kernot, Dr Rob Stocker, Dr Gary Millar and Prof Tom Valente) initiated this project prior to SSP Leave undertaken by Dr Rob Stocker in 2010 to the University of Southern California in Los Angeles to collaborate with Prof Tom Valente of the Keck School of Medicine. Empirical interview data on cognitive and affective attributes of a population were gathered for preliminary analysis. MPhil student Mr David Kernot is using these data to develop formal algorithms to extract specific linguistic cues that are indicative of the relationship between communication and cognitive processing. The project
will be expanded to a PhD Research project in 2012.

Social Networks: Links and Language (Book of Edited Chapters Project)

The growth in our understanding of networks, especially human social networks, has been quite
remarkable over the last decade. But equally
remarkable is how the networks themselves have
been evolving, driven in part by the many new tools
in cyberspace. Along with these new communication
tools come increasing numbers of changes to
language and lexicography. English spelling in text
messages is nothing like the spelling we learn in
school, so far at least, while groups differentiate
themselves more and more by linguistic twists and
turns, new words, and slight variations in grammatical usage: the "so not cool" phenomenon.

This book, co-authored by Dr Rob Stocker and Prof Terry Bossomaier (CRiCS), discusses social
networks and their integration with communication
and language (Figure 3). Although accessible to
a wide audience, it contains sufficient technical
detail to serve as a starting point for advanced
undergraduates and postgraduates and reflects
the content of the 5th Biennial Complex Systems
Research Summer School at the Centre for Research
in Complex Systems (CRiCS) in Bathurst NSW.
The seven chapters of the book cover three broad
areas: technical fundamentals; complexity and social
networks; and communication and language. The book will be published in 2012.

Figure 3. An overview model describing the process of social interaction between individual human
participants, emphasising key human characteristics and the importance of language in connectivity
patterns between interacting actors in a dyad.

Software Engineering
SEIT Academics
Dr Chris Lokan
Dr Gary Millar

SEIT Postgraduate Students


Mr Irman Hermadi
Mr Eugene Suchcicki

Other Collaborators
Zayed University, U.A.E.
A/Prof Emilia Mendes

Research Description
Research in Software Engineering aims to improve our
ability to develop high-quality software as productively
as possible. It is a wide discipline, in which a diverse
range of proposals is made for how things can be
done better but relatively few claims are backed
up by evidence. There is growing realisation of
the importance of empirical software engineering:
conducting experiments and using measurement to
demonstrate the advantages of different techniques
and to improve our ability to manage the software
development process.
The Software Engineering research group
concentrates on empirical software engineering,
and software project management. In particular, the
group has interests in measurement. Some research
is fundamental: what is good or bad about particular
measures, and why? Other research looks at how
measurement can be applied in software project
management, for in-house developments and for
software acquisition projects.

Test case generation for path coverage using evolutionary algorithms
White box software testing involves running a
program and seeing which parts of the program
were executed; if there are any parts of the program not reached after thorough testing, it is likely that the program contains potential logical errors.
The weakest form of white box testing is statement
coverage, which aims for every statement of code
in the program to be executed at least once.
A stronger form is branch coverage, which aims
for each branch of each decision to be exercised at
least once. The strongest form is path coverage,
in which the aim is to execute every logical path
through the program. This is hard in the presence of loops, because executing a loop once before termination is considered to be a different path from executing it twice through the loop, and so on.


The main task in white box testing is generating a set of test cases that will cause different paths through
the program to be executed. Since this is expensive,
there is much interest in automating it. Recently
several researchers have successfully applied genetic
algorithms (GA) in generating test data for white box
testing. Most of this research has concentrated on
statement coverage or branch coverage; there is very
little on path coverage.
The objective of our research is to investigate the
application of GA, and other evolutionary algorithms,
for test data generation for path coverage. Initial
experiments showed that GA is effective for this
problem, and identified the parameters with the
greatest impact on performance.
One current line of research investigates how to
decide whether a path is infeasible when no test data
has been found for it after a period of searching.
An approach is proposed (based on software
reliability models) for deciding when to stop searching,
assuming that paths that have not been covered are
infeasible, based on the history of how many previously
uncovered paths are covered in each generation
of searching. An arbitrary limit on the number of
generations for which to search is not needed; the
searching performance itself defines the limit.
Different rules can be used to decide when to stop
searching, enabling a trade-off between search time
and paths covered. Results have shown that under
the best of the decision rules investigated, searching
can stop after few generations with very little error in
terms of paths that really are feasible being incorrectly
deemed infeasible. (I. Hermadi, C. Lokan and R.
Sarker, Software Reliability Model for Stopping Criteria
in Evolutionary Path Testing, under review).
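One of the simpler possible decision rules can be sketched as follows (illustrative only; the reliability-model-based rules and thresholds of the paper under review are not reproduced here):

def should_stop(newly_covered_history, patience=10):
    # newly_covered_history[g] is how many previously uncovered paths were
    # covered in generation g; stop once progress has dried up for a while.
    if len(newly_covered_history) < patience:
        return False
    return sum(newly_covered_history[-patience:]) == 0

The point of such rules is that the search history itself, rather than an arbitrary generation limit, decides when the remaining uncovered paths are assumed to be infeasible.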
A second line of research investigates whether
hybrid search methods, combining GA with various
types of local search, are suitable for path testing.
Preliminary results show that combining GA with local
search makes little difference to the number of paths
covered, but reduces the search time to find test data
to cover those paths.

Viable Systems Planning, Strategy and Architecture

Research Description
The Viable Systems Planning group carries out applied research into the governance (especially strategy or policy) and architecture of enterprises, using the principles of cybernetics and the practices of Enterprise Architecture/Enterprise Engineering.

SEIT Academics
Dr Edward Lewis
Dr Gary Millar

SEIT PhD Candidates


Cecilia Ridgley
Slade Beard
Mohammad EsmaeilZadeh
Nizami Jafarov

Human Enterprise Model


Slade Beard has developed the Human Enterprise Model (see Figure 1) as part of his PhD work. The model has been tested during his professional work as an Enterprise Architect responsible for designing several Emergency Management or Command and Control Centres. It is currently being tested in the design of a new cancer research and treatment hospital in Melbourne. It serves to integrate the essential, yet often overlooked, aspects of facilities and human behaviour into the design of an Enterprise Architecture.

Figure 1. Human Enterprise Model


Advances in Enterprise Architecture Education


Ed Lewis produced a report, at the request of the Department of Defence, about the state-of-the-art in the Enterprise Architecture discipline. The report was to support the development of a curriculum for Enterprise Architects. He visited over 30 institutions in the US, Europe, and the UK that were involved in presenting courses or in using the results of Enterprise Architects, including the US Department of Defense and the UK Ministry of Defence. The contents of the report have been incorporated into the website (www.layrib.com) supporting the Systems Planning body of knowledge that the team is preparing. The Learning Framework that was developed as part of the study is based upon Cec Ridgley's recently awarded PhD, entitled A Systems Approach to Ethical Decision Making. This Framework is shown in Figure 2.

Viable Governance Model


The team have published papers about the Viable Governance Model that Gary Millar developed for his DIT. This model, shown in Figure 3, applies Stafford Beer's Viable Systems Model to the corporate governance of Information Technology. It provides the theory behind many of the empirical findings about various governance mechanisms, including organisational structure and the roles of the Board.

Figure 3. Structure of the Viable Governance Model


Figure 2. Learning framework for enhancing Enterprise Architecture.

Enterprise Architecture Principles


Following on from the work on the Viable Governance Model, the team (particularly Mohammad EsmaeilZadeh) is developing a
systematic method for generating Enterprise
Architecture principles. This method is based
upon the cybernetic principles of requisite variety,
recursion, and control.

Viable Service Oriented Enterprise Architecture


PhD Candidate Nizami Jafarov is looking at the
challenges that parallel technological innovations might bring to an Enterprise in times of intra-Enterprise and inter-Enterprise integration in public, private and hybrid Cloud eco-systems. In his work, he is linking the Viable System Model, derived from cybernetic theory, with the Service Oriented
Enterprise Architecture paradigm to build a novel
Viable Service Oriented Enterprise Architecture as
a remedy against integration risks and as a tool for
enterprise decision makers and architects.

Figure 4. Domains of the Viable Service Oriented Enterprise Architecture.


Systems Engineering
SEIT Academics
Dr Mike Ryan
Dr Alan McLucas
Ms Brownwyn Jones

Research Description
The Systems Engineering research group conducts
theoretical and applied research in the fields of
systems thinking and modelling, systems engineering,
and requirements engineering. The following paragraphs summarise a number of the group's projects in 2011.

The Utility of Decomposition as a Systems Engineering Tool
Almost without exception, any writing on systems
engineering (whether text book, research paper
or standard) begins with the same central thesis:
humans address (with our limited intuition) complex systems (that are beyond our limited intuition)
by using abstraction and decomposition within a
hierarchical framework. Abstraction allows us to
focus on the essential information at any level in the
hierarchy; decomposition allows us to break each
level of the hierarchy down into the next level of detail.
Despite the fact that decomposition is ubiquitous
in systems engineering literature, the methodology
is commonly rejected by the systems engineering
community as a tool of Cartesian reduction, with
all its inherent limitations from a systems thinking
perspective, and is therefore seen to be inappropriate
to systems ideas and methods. Unfortunately, the
discussion fails to identify any robust alternative
approaches so considerable confusion exists in the
systems engineering community with regard to the
application of decomposition, particularly to
systems design.
This work revisits the role of systems engineering,
the nature of systems, and systems design, in
order to show the utility of decomposition as a
systems engineering tool. It highlights that systems
engineering is concerned solely with the design
(composition) of human-made systems in which the
ability to decompose the system into its constituent
elements and interfaces is axiomatic. Decomposition
then supports the designer in making choices
regarding the optimal combination of system
attributes that will best meet the system's purpose
and thus is a valuable tool when applied in the
correct context to an appropriate problem. In terms
of a continuum of effort in the design of a system
then, scientific investigation and systems engineering
are shown to have different but complementary
contributions and support tools. When viewed against
the system lifecycle, scientific investigation reduces
an unknown system in order to determine what it
comprises and how it works.


That knowledge then provides the start point for systems engineering to decompose and define a system, prior to selecting the optimal architecture to meet the system's purpose. Decomposition is
a valuable and valid contributor to the systems
engineering effort; particularly so in the absence of
any robust, documented alternatives.

Application of MBSE to Requirements Engineering: Research Challenges
Models and simulations have always played
an important role in engineering and systems
engineering. Physical scale models, full-sized models,
and computer models are commonly used in all forms
of engineering and design. In recent times, interest
in modelling has increased to span the full system
lifecycle and there has been a significant focus on
Model-based Systems Engineering (MBSE).
The extension of formal modelling into all phases,
and particularly the conceptual design phase, of
a system development is a significant step and
proponents of MBSE suggest that it will provide
considerable benefits.
The application of modelling requires considerable
care, however. A model, by its nature, is only an
abstraction of a real-world domain in which certain
parameters have been chosen by the modeller for
implementation in the model. Since it is not possible to
model all of the parameters of the real world, a model
is therefore always an abstraction (deliberately or not)
of a real-world domain. The specific nature and level
of abstraction mean that a model is only able to serve
the purpose for which it is designed; application
of the model outside those constraints can be
misleading at best and potentially dangerous.
In this work we focus on the use of MBSE to support
requirements engineering. We first describe a suitable
framework within which to consider the utility of
MBSE to support requirements engineering. We then
outline the principal activities undertaken as part of
requirement engineering and identify the potential
of MBSE to support each of those activities, as well
as identifying a range of challenges that must be
addressed before MBSE can be applied usefully to
requirements engineering.

Improving Security Systems Terminology: A Systems Engineering Approach
The specification and design of modern security
systems are hampered by terminology that is
overlapping, recursive and often contradictory in
nature. The terms and associated definitions used
by prominent standards organisations present a
confusing mix of actions, states and governance
functions that lack commonality in meaning and
interpretation and tend to be specific to one problem
domain (mostly electronic or cyber security).
Consequently, despite the critical nature of security in
the design of almost all systems, and the increasing criticality of security systems themselves, the current
set of security terms and definitions is of little use to
stakeholders when articulating their requirements,
nor to systems designers when developing system
requirements. This work begins by examining the
definitions and terms applied to security and to
security systems. A systems engineering approach
of functional decomposition is then used to analyse
the set of terms and to illustrate how such terms
are of little use in systems design. A new definition
of security is proposed, from which a suitable set
of security terms is decomposed. This new set
of security definitions incorporates the intent and
meanings of current security terminology, and is not
only applicable to cyber security, but has a broader
application across the electronic, physical, and
personnel security domains. The set also provides
a much more useful basis for use in management,
requirements engineering, systems engineering, and
system design methodologies.

The Need to Address Rework in Project Scope Management
For projects to be successful, project managers
must manage effectively the risks to delivering on
time, within budget, and within scope. While there
is a large body of knowledge relating to project risk
management, one area that is not widely recognised
is that of the impact of rework on project performance
and outcomes. Rework is generally considered to
be an inevitable consequence of making errors
during work that can be monitored and controlled as
a routine part of day-to-day project management.
This research applies system dynamics modelling
to obtain insights into the impact of error rate (and
consequent rework) on project scope, particularly
when project requirements are not firmly established
at the outset. The insights gained indicate that rework
contributes substantially to project failure and is
deserving of greater attention than it often receives.
A better understanding of the effect of rework could
be valuable if used to inform not only routine project
management and quality assurance activities but also
governance activities relating to investment approvals
and project or program performance reviews.

On the Validation of System Dynamics Models


Conceptual system dynamics (SD) modelling is
frequently justified on the (intangible) basis that it
facilitates our understanding of complex dynamic
problems, those involving feedback and delay.
Critical threats to building understanding arise
when hypothesised cause-and-effect relationships
become the bases of our models, which then evolve
to become quantified representations upon which
we rely. In SD, validation is taken to mean building
confidence (in the model) whereas in systems
engineering (SE), validation is formally conducted
against specified requirements. An SE approach
to building SD models would demand that each
model be built on the basis of a defined ser of
modelling requirements and validated against those
requirements. Arguably, this would demonstrate the
validity of the model and its utility as a necessary and
sufficient representation of the real world. This work
addresses the challenges arising in the validation of
SD models and how an SE approach to validation
could improve the extant SD modelling methodology.


Underwater Communications

SEIT Academics
Dr Mike Ryan
Prof Elanor Huntington
Prof Michael Frater
Mr Craig Benson
Dr Mark Reed
Dr Andrew Lambert
Dr Frank Jiang
Dr Robin Dunbar

SEIT Postgraduate Students


Mr Rony Rahman
Mr Qichao Zhou
Mr Md Jahangir Alam
Mr Kowshik Paul
Mr Sunit Gosh
Mr Md Shamim Anower

Light can be used to obtain relatively high data rates over reasonable distances. The biggest drawback is that the communication distance of light underwater is dramatically affected by turbidity. Turbid waters, such as Singapore harbour, have shown communication ranges as short as a few metres [chitre]. Acoustic signals have long been used for underwater communication. Acoustic signals can travel long distances, and are relatively unaffected by turbidity and changes in the water composition, such as temperature, salinity and pressure. The difficulties faced in using underwater acoustic signals for communication are: a slow signal propagation speed of around 1500 m/s; relatively low frequencies, and hence signal bandwidths, which cause low data rates; strong multi-path and Doppler effects; and spreading losses, which potentially frustrate channel reuse in cellular or similar communication concepts.
The underwater communications research group
has been tackling a range of these issues, as well
as considering wider networking and communication
challenges in the underwater domain.

SEIT Research Staff


Dr Aleksandar Davidovic

Research Description
The oceans play a vital role in life on Earth. Not only do they cover over 70% of the Earth's surface, but they also regulate our climate, provide food and sit over substantial natural resources. The oceans are, however, relatively
unexplored, and poorly understood. Technology is
starting to change this lack of knowledge. Sensors
capable of monitoring conditions for extended
periods of time are being placed in ever greater
number and remotely operated or autonomous
vehicles can survey hostile environments. One of the
major bottlenecks in collecting this data in near real
time is underwater communication.
Sending data from submerged sensors and vessels
cannot normally be done by radio communications
as is done in most other domains. Radio waves
propagate only very short distances under water,
and the higher the frequency, the shorter the
communication range. Alternatives to radio waves are light, sound and cables. Cables can include fibre optics, and can therefore offer very high data rates,
as well as potentially providing power to a sensor or
submerged vehicle. Such tethers to stationary nodes
need expensive connectors, and must be terminated
to an interface node, such as a surface buoy. Surface buoys present problems of their own in terms of maintenance cost, potential obstruction to shipping traffic, and susceptibility to damage from extreme
weather. Cables connecting to vehicles are normally
known as umbilicals or tethers, and complicate
vehicle operations as well as requiring specialised
ships on the surface. Wireless solutions such as light
or sound are therefore appealing.


A High Frequency Modem


In 2011 we enhanced the maturity of our FPGA-based acoustic modem assembly. This modem is novel in that it is designed to operate at a carrier frequency that is more than an order of magnitude higher than conventional acoustic modems.
We implemented modulation schemes with raw data
rates up to 160 kbps using QPSK. This data rate can
be readily expanded by increasing the symbol rate
(signal bandwidth).
The software-based modem incorporates a unique pulse shaping filter in the modulator and demodulator. This filter reduces intersymbol interference, using a root raised cosine at the transmitter and receiver. The filter is implemented as an FIR filter, but instead of multiplying the tapped signal by each filter weight, the weighted tap is calculated by right-shifting a binary version of the signal. At each tap the weighted output is calculated as the sum of two right-shifted versions of the signal. Thus a tap of, say, 0.78 is approximated as 1/2 + 1/4 = 0.75, the inverse powers of two being easily obtained by bit-shifting. A long FIR filter with a response suitably close to the ideal can in this way be obtained without incurring the high cost of multiplication in an FPGA.
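A minimal software sketch of this shift-and-add idea (illustrative only; the real filter runs in FPGA fabric and the coefficient approximations here are hypothetical):

import numpy as np

def shift_add_fir(x, shift_pairs):
    # Each tap weight is approximated as 2**-s1 + 2**-s2, e.g. (1, 2) ~ 0.75
    # for a desired 0.78, so the multiply becomes two right shifts and an add.
    x = np.asarray(x, dtype=np.int64)
    y = np.zeros(len(x), dtype=np.int64)
    for k, (s1, s2) in enumerate(shift_pairs):
        delayed = np.concatenate([np.zeros(k, dtype=np.int64), x[:len(x) - k]])
        y += (delayed >> s1) + (delayed >> s2)
    return y

In hardware the right shifts cost nothing (they are simply wiring), so each tap of a long root-raised-cosine response reduces to a pair of additions.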

The High Frequency Underwater Acoustic Channel

We took measurements to improve our understanding of the high-frequency underwater acoustic channel.
Such understanding is essential to develop suitable
modulation and equalisation techniques for mature
communication systems. It appears that the
high-frequency channel suffers less from
multi-path than conventional underwater acoustic
communication. We plan to further develop our
understanding of the high-frequency channel by
development of a detailed channel model in 2012.

Frequency Re-use in Underwater Acoustic Networks

High data rates in a many-user communication network require channel reuse. If the channel cannot be reused then the data rate per user will decline as the number of users in the network increases. Channel reuse is normally facilitated by a signal that decays with some exponent of range. In free space the attenuation would be proportional to range squared (r^2), and close to the ground the two-ray ground model results in attenuation that is proportional to r^4. Underwater acoustic communication channels do not show such strong decay, with practical spreading in underwater links assumed to be r^1.5. At high frequencies the absorption on an underwater acoustic link is very high. This is often cited as a reason why underwater acoustic communications must use low frequencies. However, if links are kept to short ranges, then the absorption on each link can be tolerated, and the same absorption can be used to reduce interference from more distant nodes in a network. Part of our work with the high-frequency modem is to demonstrate that an underwater acoustic channel can be reused many times across a network, and that the main provider of isolation is absorption rather than spreading. In 2011 we conducted trials to verify that this approach results in an observable improvement in network signal-to-interference ratios at higher frequencies.
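A rough numerical illustration of why absorption, rather than spreading, provides the isolation (the absorption value below is an assumed placeholder for a high carrier frequency, not a measured figure):

import math

def one_way_loss_db(r_m, alpha_db_per_km, spreading_exp=1.5):
    # Practical spreading (r**1.5) expressed in dB, plus linear absorption.
    return 10 * spreading_exp * math.log10(r_m) + alpha_db_per_km * r_m / 1000.0

alpha = 30.0                                  # assumed dB/km at a high carrier frequency
near = one_way_loss_db(100, alpha)            # an intended short link
far = one_way_loss_db(1000, alpha)            # a more distant interfering node
print(f"extra isolation from the distant node: {far - near:.1f} dB")

With these assumed numbers the distant interferer is suppressed by roughly 42 dB; spreading alone accounts for only 15 dB of that difference, with absorption supplying the rest, which is the effect the network trials set out to observe.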

Network Routing Protocols for Underwater Acoustic Networks
Assuming that high data rates, equal to those above, can be achieved on underwater acoustic communication links, we have examined how existing mobile ad-hoc network routing protocols would perform. This study has indicated two major problems. Firstly, the data rates are still relatively low, so network congestion develops quite rapidly. Secondly, the slow propagation speed of acoustic signals means that communication packets are short relative to the link propagation delay. This frustrates the operation of a Medium Access Control (MAC) protocol. MACs that use a virtual carrier sense, such as the MACAW (RTS-CTS-DATA-ACK) handshaking from WiFi networks, waste most of the channel's time waiting for the handshaking signals to propagate. Pipelined MAC approaches such as PCAP [jason] can make better use of the channel, but transmission slots must be negotiated well in advance of data transmission, so latency is high.
Our network modelling uses a simple carrier sense
multiple access regime. The carrier sense does nothing more than ensure that an existing receive operation is not terminated by a packet being transmitted.
An existing transmission will still prevent reception of new
packets arriving at the transceiver while it is in transmit
mode. Additionally, multiple packets may be received
at the same time, again resulting in lost packets.
Therefore some arriving packets will be dropped,
the loss rate being a strong function of the
network congestion.
Given that packet loss is so dependent on congestion,
we have developed a protocol that avoids unnecessary
transmissions. Control packets are kept short, and
route maintenance operations are minimised. We also
use a MAC that does not explicitly acknowledge receipt
of messages on each link. This technique produces
superior results to existing protocols in simulations.
The improvement is explainable in terms of the
reduced network congestion, which directly results
from the lightweight routing and MAC protocols.

Improving signal-to-noise ratio and resolution of cross-correlation function using large bandwidth ambient noise
The Green's function of the channel between two points can be extracted from the cross-correlation between recorded ambient noise fields at those points. The SNR and resolution of the cross-correlation function are related to the emergence rate of the Green's function, which depends on the number of coherent wave fronts extracted from the cross-correlation of noise fields. In a given environment a greater quantity of coherent signals can be obtained through longer observations, more observation points, or collection of a broader bandwidth. Long time averaging has been demonstrated, but requires that the channel be stationary over the averaging time. Hydrophone arrays are commonly used, but result in increased cost and complexity. In this research it is shown experimentally that, instead of using an array of hydrophones or recording noise for a long time, the SNR and resolution of the correlation function can be improved by the use of large bandwidth (48 kHz) noise fields recorded at two sensors. The effect of power equalization on the ocean noise field is also demonstrated, and significantly improves the resolution of the cross-correlation function. This work shows applicability to passive depth measurement as well as signal source location.
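A minimal sketch of the basic operation on synthetic data (the sample rate, delay and the reduction of power equalization to simple spectral whitening are all illustrative assumptions):

import numpy as np

rng = np.random.default_rng(0)
fs = 96_000                  # sample rate, giving a 48 kHz noise bandwidth
n = fs * 10                  # ten seconds of synthetic ambient noise
delay = 37                   # propagation delay between the two sensors, in samples

common = rng.standard_normal(n)
s1 = common + 0.5 * rng.standard_normal(n)                  # sensor 1
s2 = np.roll(common, delay) + 0.5 * rng.standard_normal(n)  # sensor 2, delayed field

S1, S2 = np.fft.rfft(s1), np.fft.rfft(s2)
cross = S2 * np.conj(S1)
cross /= np.abs(cross) + 1e-12      # crude power equalization (whitening)
cc = np.fft.irfft(cross, n)
print("peak lag (samples):", np.argmax(cc))   # the coherent arrival emerges near the true delay

The broader the bandwidth of the recorded noise, the sharper and stronger this peak becomes for a given observation time, which is the effect exploited in the experiment.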


Localization in Underwater Sensor Networks

Localization has received considerable attention because many wireless sensor network applications require accurate knowledge of the locations of the sensors in the network. The two main localization techniques are distance measurement and angle-of-arrival (AOA) measurement. The former technique suffers from flip ambiguity, due either to the presence of insufficient reference points or to uncertainties in the inter-nodal distance measurements in a triangular network structure. A recently proposed quadrilateral structure (an extended, more complex version of a trilateration structure) can resolve the flip ambiguity of a node in dense deployments under restricted anchor orientations; however, the technique leaves open the issue of imprecise inter-nodal distances between all pairs of nodes, and its computational complexity increases accordingly. Moreover, both structures (triangular and quadrilateral) completely fail to resolve flip ambiguity in sparse node deployments, as sufficient nodes are not available to determine the signs of the calculated angles. On the other hand, AOA can provide the sign of the angles but requires expensive hardware calibration to provide a high level of accuracy in the measured angles. Therefore, there is a need for a localization technique that is cheaper, less complex, and robust to measurement uncertainties between all pairs of nodes and, more importantly, involves fewer reference nodes.

The primary contributions of our work include a hybrid technique that uses low-accuracy (cheap) AOA measurements along with erroneous distance measurements between each pair of nodes in a much simpler triangular network that corresponds to a sparse deployment. In our initial phase we develop mathematical models, involving only two reference nodes, that are able to resolve the flip ambiguity of an unknown node with a high probability of success even with an RMS error as high as 150 in the line-of-bearing estimate, which avoids the need for calibration in many practical situations. In later phases, we modelled our hybrid localization technique to accommodate imprecise inter-nodal measurements between all pairs of nodes. In the final phase, we extend our localization technique to estimate the network layout for extremely sparse node deployments by eliminating flip ambiguity to facilitate efficient routing. Our hybrid approach to the resolution of flip ambiguity is useful not only for developing lower-complexity localization techniques, but also for many lower-layer network functionalities, such as geographic routing, topology control, coverage and tracking, and controlled mobility, when a large number of nodes has to be deployed and minimal anchor dependency is required.

Sensing Network Scale and Dimensionality

In a large-scale wireless sensor network, it is often desirable to count the number of nodes in the network, or the number of nodes that are within communications range of a particular node. To date, techniques employed to estimate this number have been based on some aspect of the communications protocol(s) in use. We propose an estimation technique based on cross-correlation of random signals, in which the ratio of the mean of the cross-correlation function to its standard deviation determines the number of nodes. The proposed technique addresses a number of practical issues in a digital receiver, including fractional-sample delays, internal noise, etc. An error analysis is provided that demonstrates the superior performance of this technique compared with protocol-based methods.
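The published estimator works from statistics of the cross-correlation function; the toy below (synthetic signals, arbitrary delays, hypothetical parameters) illustrates only the raw ingredient, namely that each transmitting node contributes a distinct peak to the cross-correlation of the superpositions seen at two receivers:

import numpy as np

rng = np.random.default_rng(1)
n, num_nodes = 50_000, 5

# Each node emits an independent Gaussian signal; the two receivers see the
# superposition, with a node-specific differential delay (values arbitrary).
diff_delays = rng.choice(np.arange(-200, 200), size=num_nodes, replace=False)
sources = rng.standard_normal((num_nodes, n))
rx1 = sources.sum(axis=0)
rx2 = sum(np.roll(sources[i], diff_delays[i]) for i in range(num_nodes))

lags = np.arange(-300, 301)
cc = np.array([np.dot(rx1, np.roll(rx2, -k)) / n for k in lags])
peaks = lags[cc > 0.5]
print(sorted(diff_delays.tolist()), peaks.tolist())   # one peak per node

Properties of this function, such as the ratio of its mean to its standard deviation, change systematically with the number of contributing nodes, which is what the estimator exploits.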


The estimation of the number of nodes using cross-correlation depends on the dimensionality of the node deployment, so it is desirable to estimate the dimensionality first. Dimensionality estimation determines whether the nodes are arranged in one, two or three dimensions, and would also help in obtaining additional information about the network, e.g. localization of the nodes, AOA estimation, etc. Research is ongoing on the dimensionality of communication networks, but most of it concerns the network architecture before the nodes are deployed; the dimensionality of a deployed, unknown network is a relatively new research area. To our knowledge, one protocol-based technique has been proposed for dimensionality estimation in a communication network, and as already mentioned, the use of protocols is inefficient in some environments, such as underwater and underground. In this work, a cross-correlation technique similar to that used to estimate the number of nodes is used to estimate the dimensionality of a communication network after deployment. The process runs concurrently with the estimation of the number of nodes. The proposed cross-correlation technique for estimating the number of nodes together with their dimensionality will thus be of interest in wireless communication networks such as WSN, RFID, etc.

Unmanned Vehicles
SEIT Academics
Dr Matt Garratt
A/Prof Hemanshu Pota
Dr Sreenatha Anavatti
Dr Andrew Lambert
Dr Tapabrata Ray

SEIT Postgraduate Students


Mr Tushar Roy
Mr James Taylor
Mr Mohsen Tehrani
Mr Sobers Francis
Mr Osama Hassan
Mr Khirul Alam

SEIT Research Staff


Mr Anthony Peebles
Dr Mahendra Samal
Dr Hamid Teimoori

Other Collaborators
University of New South Wales
A/Prof Jinling Wang

Research Description
The unmanned vehicles research group has been
working with both fixed and rotary wing unmanned
aircraft for over a decade. In the past two years, work
has also begun with unmanned ground vehicles and
unmanned underwater vehicles. Expertise extends
from studies of the aerodynamics of flapping wings
and micro air vehicles, through robust autopilots, to the
practicalities and challenges of rotary wing operations
from maritime platforms. The research carried out
during 2011 is explained in the following paragraphs.

High-Bandwidth Control of an
Unmanned Helicopter
This Australian Research Council (ARC) funded
project aims to develop high bandwidth control
methods and advanced dynamic modelling for
Rotorcraft Unmanned Aerial Vehicles (RUAVs).

This will enable new roles such as the precision landing of RUAVs on the moving deck of a ship in rough seas. During the past twelve months, we have developed a Model Predictive Control (MPC) based flight control system for the Vario XLC helicopter. MPC controllers are model-based controllers which explicitly use a predictive model of the system to compute the future control moves. The MPC controller architecture replaces the widely used inner/outer-loop controller structure. We have also proposed an analytical method to calculate the off-diagonal terms of the Q-matrix used in the cost function for better tracking. A linear state space reduced
order model for longitudinal and lateral dynamics of
the helicopter is used for controller design. We have
enhanced the MPC performance by augmenting the
helicopter model with servo dynamics during the
prediction phase. The inclusion of servo dynamics
and time delay yields a smoother control response
and we now have a control system which can
tolerate a delay of more than 80 milliseconds whilst
maintaining superior performance.
The other major contribution of the past year has
been a hierarchical inner-outer loop-based scheme
for control in the presence of servo and delay
constraints. The inner-loop (attitude controller)
employs an inverse optimal control strategy, which
circumvents the tedious task of numerically solving
an online Hamilton-Jacobi-Bellman (HJB) equation to
obtain the optimal controller. The designed controller
is optimal with respect to a meaningful objective
function which considers penalties for control input,
angular position and angular velocity. The outer loop
makes use of the backstepping technique, which
guarantees the asymptotic stability and tracking
performance in three channels (lateral, longitudinal,
vertical) simultaneously. Moreover, in order to compensate for the time delay in the control loop, the position controller gains are tuned systematically to accommodate the time delay. This new method takes advantage of both inverse optimal control strategies and backstepping, which makes it simple and easy to
tune and implement in flight tests. To investigate
the optimality of the proposed attitude controller,
its performance has been compared with another
controller designed using feedback-linearisation (FBL)
method. Our simulations show that, in large attitude or
angular velocity errors, our controller achieves global
stability with less control effort than the one designed
with the FBL method.
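A minimal sketch of the unconstrained MPC formulation referred to above (a generic linear state-space example with illustrative matrices; it is not the identified helicopter model, and the servo-dynamics and delay augmentation are omitted):

import numpy as np

def mpc_first_move_gain(A, B, Q, R, horizon):
    # Build the prediction X = F x0 + G U over the horizon and minimise
    # X'QX + U'RU, returning the gain for the first control move only.
    n, m = B.shape
    F = np.vstack([np.linalg.matrix_power(A, i + 1) for i in range(horizon)])
    G = np.zeros((horizon * n, horizon * m))
    for i in range(horizon):
        for j in range(i + 1):
            G[i*n:(i+1)*n, j*m:(j+1)*m] = np.linalg.matrix_power(A, i - j) @ B
    Qbar = np.kron(np.eye(horizon), Q)
    Rbar = np.kron(np.eye(horizon), R)
    K = np.linalg.solve(G.T @ Qbar @ G + Rbar, G.T @ Qbar @ F)
    return K[:m, :]                        # receding horizon: apply only the first move

A = np.array([[1.0, 0.1], [0.0, 1.0]])     # illustrative discrete-time double integrator
B = np.array([[0.005], [0.1]])
K0 = mpc_first_move_gain(A, B, Q=np.eye(2), R=0.1 * np.eye(1), horizon=20)
u0 = -K0 @ np.array([1.0, 0.0])            # first control move for the current state

Off-diagonal terms in Q, as mentioned above, would simply appear in the Q matrix passed into this construction.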

Figure 1: Vario XLC Gas turbine helicopter used for experiments into High-Bandwidth Control


Concept Air Vehicle Hover


The objective of this Defence Science and
Technology Organisation (DSTO) funded work is to
develop a reliable system for sensing and controlling
the hover of a Micro Air Vehicle (MAV). The work
will be suited to both rotary wing and flapping wing
implementations. The system would require only
visual and inertial sensing, breaking the reliance on
technologies such as GPS which can be selectively
disabled or jammed and which may be unavailable
indoors and in cluttered environments. Range
measuring technologies are required for control
of height in hover but state-of-the-art options for
sensing height such as laser range-finders or radar
are simply impractical for MAVs owing to the physical
barriers to miniaturisation that exist. Additional
advantages of visual guidance are that it is passive,
small and low cost.
The problem of developing a reliable system for
sensing and controlling the hover of a Micro Air
Vehicle (MAV) using visual snapshots is considered.
A new algorithm has been developed that uses a
stored image of the ground, a snapshot taken of the
ground directly under the MAV, as a visual anchor
point. The absolute translation of the aircraft and
its velocity are then calculated by comparing the
subsequent frames with the stored image and fed into
the position controller.
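A minimal sketch of the snapshot-comparison step, using phase correlation to recover the image translation between the stored anchor snapshot and the current frame (illustrative only; the flight algorithm and the scaling from pixels to metres using height are not shown):

import numpy as np

def frame_shift(anchor, frame):
    # Estimate the (row, col) translation of `frame` relative to `anchor`
    # by phase correlation of the two greyscale images.
    F1, F2 = np.fft.fft2(anchor), np.fft.fft2(frame)
    cross = F2 * np.conj(F1)
    cross /= np.abs(cross) + 1e-9
    corr = np.fft.ifft2(cross).real
    dr, dc = np.unravel_index(np.argmax(corr), corr.shape)
    rows, cols = anchor.shape
    if dr > rows // 2:
        dr -= rows                      # wrap large shifts into negative offsets
    if dc > cols // 2:
        dc -= cols
    return dr, dc

Differencing successive shift estimates over time gives a velocity-like signal that, together with inertial data, can be fed to the position controller.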

A Collaborative 3D Ranging and Mapping for Satellite Remote Sensing
This project aims to demonstrate a miniature payload
system for mapping of terrain using multiple imaging
platforms. By geo-referencing imagery using a single
sensor, 2D maps can be constructed. However with 2
or more spatially separated sensors, image disparity
(stereo vision) can be used to determine the range
and 3D layout of terrain. The eventual aim would be
to use a cooperating swarm of micro-satellites to
build 3D terrain databases in real-time. With real-time
processing being our eventual goal, a snapshot of
dynamic environments such as beaches, battlefields,
urban landscapes etc can be mapped. 3D data is
valuable for many applications such as visualisation
of buildings, construction, monitoring of erosion,
surveying etc. The 3D products can be used as
ground truth provider for large scale satellite data
processing. They can also be used alone in disaster management on a local scale, in areas that manned aircraft cannot access. For example, they can serve as a
real time assistant in fire control by identifying fuel
distribution (such as tree heights and density), and
magnitude of damage.


Our aim in this project is to demonstrate 3D mapping using an unmanned aerial vehicle (UAV) flying with
spatially separated sensors. In future work, this would
be expanded to multiple UAVs flying in formation.
The basic concept will be to fly over terrain using at
least 2 sensors with video logging capability. Using
accurate time stamping and accurate position and
orientation data from GPS/INS, the logged image
streams will be able to be combined and compared
to determine a 3D position for each image pixel. Image
registration techniques will be applied to overlapping
images so that the pixel disparity between
corresponding features from both images can be
used to determine range in the images.
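The underlying geometry is the standard stereo relationship: for two sensors separated by baseline B, with focal length f expressed in pixels, a feature seen with pixel disparity d lies at range Z = fB/d. A short illustration with hypothetical numbers:

def range_from_disparity(disparity_px, focal_px=2000.0, baseline_m=1.5):
    # Z = f * B / d for a rectified stereo pair.
    return focal_px * baseline_m / disparity_px

print(range_from_disparity(12.0))   # a 12-pixel disparity gives a range of 250 m

The sensitivity of Z to small disparity errors at long range is one reason the accurate time stamping and GPS/INS data mentioned above matter when the sensors are carried on moving platforms.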
For this work, we have developed a system for
storing uncompressed images to a compact flash
card media for later post-processing. Image data
is collected from large format CMOS sensors,
and parsed by a Field Programmable Gate Array
(FPGA) system. Whilst our eventual aim is real-time evaluation, work for this project will be done
using post-processing. The FPGA based image
processing engines are existing units developed
at UNSW Canberra with added high speed LVDS
links for transmission of image data to the logging
system. The logging system comprises a secondary
FPGA which assembles the imagery, synchronisation
data and telemetry into a format suitable for high-speed storage onto CompactFlash. The video logger
stores high-speed uncompressed digital video at 50
Mpixels/sec. The ancillary data includes timestamps
and live telemetry such as the helicopter position,
speed and orientation.

Figure 2: An external FPGA-based imaging system incorporating a megapixel CMOS image sensor provides high speed LVDS transmission of data and telemetry to the Video Logger, also FPGA based, for storage on the high throughput non-volatile CompactFlash card.

Obstacle Avoidance for Autonomous Vehicles


Our research aims to overcome some of the significant
challenges facing autonomous navigation in cluttered environments, which limit numerous applications including flight of micro air vehicles through urban
landscapes, automated mining exploration, digital
mapping, environmental monitoring, military
surveillance and reconnaissance, search and rescue
missions, underwater applications and industrial
automation. Path planning for an autonomous ground
vehicle (AGV) is a challenging task due to incomplete
information about the surroundings and is particularly
hard in a dynamic environment. For successful and complete path planning in a cluttered environment, the AGV needs to replan its path quickly, in such a way as to avoid any obstacles if necessary, until it completes the assigned task.
An efficient D* Lite algorithm has been proposed, using the Fibonacci heap data structure as its priority queue; it performs well in a dynamically varying environment by reducing the total computational time for replanning the path. The efficient D* Lite algorithm has been implemented onboard a Pioneer 3DX mobile robot. Various experiments have been carried out in dynamic environments and the results are
compared with the existing algorithms. A 3D range
camera, which provides a range measurement for
every pixel in the image, is used along with sonar
sensors to obtain the behaviour of the obstacles.
The range camera is also used to get the depth
information of the obstacles and to determine the
3D geometry of the obstacles. These data from the
sensors are used effectively to predict the trajectory of
the dynamic obstacles from their previous positions.

Figure 3: Pioneer 3DX mobile robots used to test obstacle avoidance using a 3D range camera.

Smart Guidance and Control of Autonomous Underwater Vehicle (AUV)
Autonomous Underwater Vehicles (AUV) have gained
importance over the years as specialized tools for
performing various underwater missions in military
and civilian operations. The autonomous control of
underwater vehicles poses serious challenges due to the AUV's dynamics. AUV dynamics are highly nonlinear and time varying, and the hydrodynamic coefficients of the vehicles are difficult to estimate accurately because these coefficients vary with navigation conditions and external disturbances. This study presents a system identification of AUV dynamics as a black box with an input-output relationship, instead of using a mathematical model with hydrodynamic parameters, to obtain the dynamic model and so overcome the uncertainty, nonlinearity and difficulties of modelling AUVs.
We have developed AUV dynamic model identification based upon fuzzy and hybrid neural fuzzy techniques with online adaptive learning algorithms.
The modelling techniques have been validated using
simulated data in the presence of noise
and disturbances.
Moreover, we have designed and developed an
on-line system identification using adaptive neural
fuzzy network (ANFN) based upon the error between
the identified model and the actual output.
The proposed ANFN model is based on a functional
link neural network (FLNN) as the consequent
part of the fuzzy rules. In this study, the functional expansion block comprises a subset of orthogonal polynomial basis functions. The FLNN has been inserted into the consequent part of the fuzzy rules.
The local properties of the consequent part in the
ANFN model enable a nonlinear combination of
input variables to be approximated more effectively.
The FLNN is a single-layer neural structure capable
of forming arbitrarily complex decision regions
by generating nonlinear decision boundaries with
nonlinear functional expansion.
In addition, we have applied an online parameter
tuning technique for the fuzzy model and ANFN
model. The learning process involves determining the
minimum of a given cost function. The gradient of the
cost function is computed and the parameters are
adjusted with the negative gradient. Back Propagation
technique is used for online tuning of the fuzzy model.
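A minimal sketch of the functional-link idea in the consequent part (Chebyshev polynomials as one possible orthogonal basis; the dimensions, learning rate and pairing with the fuzzy rules are illustrative, and this is not the identified AUV model):

import numpy as np

def chebyshev_expand(x, order=3):
    # Functional expansion of an input vector with Chebyshev polynomials T1..T_order.
    terms = []
    for xi in x:
        T = [xi, 2 * xi**2 - 1]                  # T1, T2
        for _ in range(2, order):
            T.append(2 * xi * T[-1] - T[-2])     # T_{k+1} = 2x T_k - T_{k-1}
        terms.extend(T[:order])
    return np.concatenate([[1.0], terms])        # bias plus expanded terms

def flnn_step(w, x, target, lr=0.01):
    # One gradient-descent update of the single-layer weights on squared error.
    phi = chebyshev_expand(x)
    y = float(w @ phi)
    return w - lr * (y - target) * phi, y

# Usage for a two-input model: w = np.zeros(1 + 2 * 3), then call flnn_step
# repeatedly with measured input/output pairs to tune the consequent weights online.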


On the controller side, an indirect adaptive fuzzy controller and an Adaptive Neural Fuzzy Network (ANFN) controller have been developed for the forward, pitch and yaw motions. We used triangular membership functions in the fuzzy controllers, with the back-propagation technique for online tuning of the fuzzy controller, and we have derived the tuning equations for the triangular membership functions. This adaptive fuzzy controller has been tested and validated under normal operating conditions and in the presence of noisy sensor data, disturbances and parameter variations applied to the model.
We have proposed an automatic model generating
technique comprising a structure learning phase and
a parameter learning phase. The only information
required for generating the system is the input -output
data. It means that there is no need for any prior
knowledge of the physical relationship inside the
system and it offers an automatic generating black
box modeling tool. Structure learning is based on
the entropy measure used to determine whether a
new rule for fuzzy system should be added to satisfy
the fuzzy partitioning of input variables. Parameter
learning is based on supervised learning algorithms.
The back-propagation algorithm minimizes a given
cost function. Initially, no rules or membership functions exist and there are no nodes in the network except the input-output nodes. The membership functions, rules
and nodes are created automatically as learning
proceeds, upon the reception of online incoming training data, in the structure and parameter learning processes. We have applied this technique to automatically generate the fuzzy model, fuzzy controller, ANFN model and ANFN controller.
We have finished the design and set-up of the electrical and electronic system of the AUV, which needs to consider the functionality and tasks required of it. The electronics system consists of a PC/104 single-board industrial computer with an I/O module that contains digital I/O, analogue I/O and PWM channels. For this study, a low-cost IMU is used together with other relatively low-cost sensors, such as a magnetometer and a water pressure sensor for depth measurement. The following figure shows the AUV in different conditions.


All the electrical and electronic system components have already been installed in the vehicle. We have tested the functionality of the complete electrical and electronic system, and the buoyancy of the vehicle, in a swimming pool. We will now implement and test all of the modelling and control techniques mentioned above in real time on the AUV.

Virtual Environments & Simulation

Spatio-Temporal Dynamics of Groups in Standing Conversations: Complexity through Local Interactions

High fidelity virtual environments are being used


extensively in various domains ranging from the
military, social sciences, and artificial life, to the
entertainment industry and city/building planning.
The applications include such diverse purposes
as training, education, planning and emergency
management. The simulation industry has long
recognised the importance of incorporating the
richness of real life action and interaction into the
characters in the virtual environments in order
to improve the believability of the simulations.
Rule based Multi Agent Systems facilitate the
representation of collective behaviour as an emergent
property of the interactions happening between
simple rules codified into the individual agents.
Such an approach has the desirable characteristic of
creating sophisticated group-level behaviours while
keeping the computational cost low. This project is
motivated by the boid-rules introduced to synthesize
the complexity of a flock of bird-like objects and
extends the approach into the social simulations of
conversation group dynamics. Four conceptual
rules- namely- Keep Personal Distance, Keep
Centre of the Conversation, Keep Visibility
and Keep Distance to the Nearest Neighbours
synthesize position and orientation dynamics of
people taking part in a standing conversation for
instance a cocktail party particularly when new
participants join the conversation and existing
participants leave the conversation. Different rule
configurations (at varying number of rules and
combination mechanism) were evaluated for the
visual fidelity by human evaluators. The results of
the evaluation suggest that the interactions between
the rules are capable of closely recreating positional
and orientation dynamics of real world standing
conversation groups. The goal of this ongoing work
is to derive the relationship between rule-complexity
and visual fidelity of boid-like Multi Agent Systems.
The work is also progressing into the other problem
domains such as simulation of sheepdog
herding behaviours.
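A boid-style combination of the four conceptual rules can be sketched as a weighted sum of rule vectors, as in the snippet below. This is an illustrative reconstruction only: the weights, distances and update form are assumptions, and the project's actual rule formulation and combination mechanisms may differ.

```python
# Illustrative boid-style combination of conversation-group rules.
# Weights and distances are assumed example values, not the project's parameters.
import numpy as np

PERSONAL_DISTANCE = 0.8   # metres, assumed
NEIGHBOUR_DISTANCE = 1.5  # metres, assumed

def update_position(agent_pos, others, weights=(1.0, 0.5, 0.5), step=0.05):
    others = np.asarray(others, dtype=float)
    centre = others.mean(axis=0)                      # centre of the conversation

    # Rule 1: Keep Personal Distance - move away from anyone who is too close.
    repel = np.zeros(2)
    for p in others:
        d = np.linalg.norm(agent_pos - p)
        if 0 < d < PERSONAL_DISTANCE:
            repel += (agent_pos - p) / d

    # Rule 2: Keep Centre of the Conversation - drift towards the group centre.
    attract = centre - agent_pos

    # Rule 3: Keep Distance to the Nearest Neighbours - close up if the nearest person is too far.
    nearest = others[np.argmin(np.linalg.norm(others - agent_pos, axis=1))]
    cohere = (nearest - agent_pos) if np.linalg.norm(nearest - agent_pos) > NEIGHBOUR_DISTANCE else np.zeros(2)

    move = weights[0] * repel + weights[1] * attract + weights[2] * cohere
    new_pos = agent_pos + step * move
    # Rule 4: Keep Visibility - orient towards the group centre (heading in radians).
    heading = np.arctan2(*(centre - new_pos)[::-1])
    return new_pos, heading

pos, heading = update_position(np.array([0.0, 0.0]),
                               [[1.0, 0.2], [0.6, -0.4], [1.4, 0.9]])
print(pos, heading)
```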

SEIT Academics
Dr Michael Barlow
Prof. Hussein Abbass
Dr Sameer Alam
Mr. Mike Ford
Dr Ed Lewis
Dr Chris Lokan
Dr Kathryn Merrick
Dr Ruhul Sarker
Dr Kamran Shafi
Dr Rob Stocker

SEIT Postgraduate Students


Mr. Hock Chuan Lim
Ms. Erandi Lakshika Hene Kankanamge
Mr. Umran Abdulla
Mr. Suranjith De Silva
Mr. Joe Winter

Other Collaborators
University of Newcastle
Dr Ning Gu
SimCentric Technologies
Dr Adam Easton

Research Description
Virtual Environments (virtual reality) and simulation are technologies playing increasing roles in education, industry, defence, government and entertainment. As a tool for a military commander to assess whether a Course Of Action is feasible, a means for government to gauge the impact of a change in health care policy, or a way for a physics teacher to convey and illustrate a dry mathematical formula, simulation and VEs together offer the ability to model any scenario or problem and see that situation play out in immersive 3D.
Work in VEs and simulation within SEIT proceeds along several major axes or themes: the technology of multi-agent systems; the application of commercial games for training, teaching and decision support; the application of high-fidelity simulations to answer questions about changes in policy or the introduction of technology in a complex system (e.g., air traffic management); the value of, and task-dependent choice of, level of immersion; visualisation techniques for rich and complex data; the analysis of strategy and evolutionary games; and models of human decision-making, leadership, morals, ethics and teamwork. All are viewed through a complex-systems lens, interconnecting work in this area with that in other groups such as Transportation Modelling and Automation, Computational Intelligence, Modelling & Cognition, Machine Learning & Developmental Robotics, and Social Networks. Indeed, a number of projects straddle more than a single group and may be found listed under one of the groups previously mentioned.

Figure 1: The relationship between number of rules and scores (0 being not realistic at all and 9 being as realistic as a real life scenario) provided by human evaluators.


Generative Social Simulation


A generative social simulation project to investigate the interplay of ethical trust and social moral norms (specifically in the areas of simulated agent moral cognition, increased duration of agent interaction, and feedback from the environment) was initiated as part of a PhD research program. The project addresses two key aspects, namely the computational modelling of ethical processes, and the social simulation of agents and network effects using a priming-behaviour framework drawn from animal behaviour research. Simulations and experiments are closely linked to the generative philosophy. The logic of the simulation and generative experiments is introduced and can be compared to the traditional logic of simulation as a research method (see Figures 2 and 3).

[Figure 2 flowchart: a Model is an Abstraction of the Target (visible social processes); a Simulation run yields Simulated Data, Data Gathering yields Collected Data, and the two are compared for Similarity.]

The social simulation applied a modified generative linked-experimentation approach, in which the outcomes of each experiment were used as design inputs for the formulation of the next. A total of five experiments were conducted, covering investigations of:

Agent attributes (such as values for ethical trust and ethical dispositions)
Extended agent parameters/attributes
Social structures
Extended network parameters
Environment feedback (selected mode)

The key findings are that trust signals influence agents' interactions, and that ethical trust can prime moral behaviours and hence affect the spread of moral norms in social agent communities. The addition of social structure and further moral cognitive elements, however, mediates and reduces the effects of trust signals. The further addition of feedback from the environment serves to increase the impact of network structure. From these findings, three observations are drawn:

Agent-based, network-based and generative simulation are useful for modelling and investigating complex social processes
Relationships between social processes and social structure will require careful investigation to derive meaningful results
Elements of agent attributes (such as ethical trust, moral cognition, ethical dispositions and network structures) are important for studying the spread of social moral norms
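The priming dynamic summarised in the findings above can be illustrated, very loosely, with a toy agent-based sketch in which moral-norm adoption spreads over a social network in proportion to an agent's trust in its norm-holding neighbours. Everything in the snippet (network type, update rule, parameters) is an assumption for illustration and is not the model developed in the PhD project.

```python
# Toy illustration: trust-primed spread of a moral norm over a small-world network.
import random
import networkx as nx

random.seed(1)
g = nx.watts_strogatz_graph(n=100, k=6, p=0.1)          # assumed social structure
trust = {n: random.random() for n in g.nodes}            # each agent's ethical-trust level
norm = {n: (random.random() < 0.05) for n in g.nodes}    # a few initial norm adopters

def step():
    """One synchronous update: trust-weighted social influence from neighbours."""
    new_norm = {}
    for n in g.nodes:
        nbrs = list(g.neighbors(n))
        influence = sum(trust[n] for m in nbrs if norm[m]) / max(len(nbrs), 1)
        new_norm[n] = norm[n] or (random.random() < influence)
    norm.update(new_norm)

for t in range(30):
    step()
print("norm adopters after 30 steps:", sum(norm.values()))
```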

In 2011, this project led to the publication of a journal paper and a self-contained chapter in a book on Advances in Intelligent Digital Ecosystems.

Figure 2: Logic of simulation modelling as a research method

[Figure 3 flowchart: a Model is an Abstraction of the Target (social processes); Simulation and Generative experiments yield Generated Data, Data Gathering yields Proxy Data, and the two are compared for Similarity.]

Figure 3: Logic of simulation and generative experiment


Computational Creativity and Procedural Content Generation in Computer Games

With rapid growth in both production costs and
player populations over the last decade, the
computer games industry is facing new scalability
challenges in game design and content generation. The application of computers to these tasks, known as procedural content generation, has the potential to reduce the time, cost and labour required to produce games. A range of generative algorithms have so far
been proposed for procedural content generation.
However, automated game design requires not only
the ability to generate content, but also the ability
to judge and ensure the novelty, quality and cultural
value of generated content. This includes factors
such as the surprise-value of generated content as
well as the usefulness of content in the context of a
particular game design. Studies of human designers have identified that the abilities to generate artefacts that are novel, surprising, useful and valuable are facets of the human cognitive capacity for creativity. This suggests that computational models of creativity may be an important consideration for developing tools that can aid in or automate design processes. However, such cognitive models have not
yet been widely considered for use in procedural
content generation for games. This project has
developed a framework for procedural content
generation systems that use computational models
of creativity as a part of the generative process.
A software system has been implemented that
combines the generative shape grammar formalism
with a model of creativity based on the Wundt curve
to select new designs that are similar-yet-different
to existing human designs. The approach aims to
capture the usefulness and value of existing designs
while introducing novel and surprising variations.
The system incorporates a metric that permits
generated designs to be evaluated in terms of both
their similarity to high quality human designs and
their creative novelty.
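The Wundt-curve selection mentioned above can be illustrated with a small sketch: hedonic value is modelled as the difference of two sigmoids over a novelty score, so designs that are similar-yet-different to existing ones score highest. This is an illustrative reconstruction only; the function names, constants and novelty metric are assumptions, not the project's implementation.

```python
# Illustrative Wundt-curve scoring: reward rises with moderate novelty and falls
# again for designs that are too different from existing human designs.
# Constants below are assumed example values.
import math

def wundt(novelty, reward_gain=5.0, punish_gain=5.0,
          reward_centre=0.3, punish_centre=0.7):
    """Hedonic value = reward sigmoid - punishment sigmoid over novelty in [0, 1]."""
    reward = 1.0 / (1.0 + math.exp(-reward_gain * (novelty - reward_centre)))
    punishment = 1.0 / (1.0 + math.exp(-punish_gain * (novelty - punish_centre)))
    return reward - punishment

def novelty(candidate, existing_designs, distance):
    """Novelty as the distance to the nearest existing (human) design, assumed in [0, 1]."""
    return min(distance(candidate, d) for d in existing_designs)

# Example: select the candidate with the highest Wundt score under a toy distance
# on feature vectors standing in for generated shape-grammar designs.
def toy_distance(a, b):
    return min(1.0, sum(abs(x - y) for x, y in zip(a, b)) / len(a))

existing = [[0.2, 0.4, 0.1], [0.8, 0.5, 0.3]]
candidates = [[0.2, 0.4, 0.1], [0.4, 0.6, 0.3], [0.9, 0.9, 0.9]]
best = max(candidates, key=lambda c: wundt(novelty(c, existing, toy_distance)))
print(best)
```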

Figure 4: A procedurally generated game level (left) and an actual game level (right) from a popular game which served as the template for the shape/level grammar.

Simulation and Analysis of the Australian Airport Network

As for all means of transportation, the relationship between origins and destinations results in a complex network of routes, which can then be complemented with information associated with the routes themselves, for instance frequency, traffic load and distance. The theory of complex networks provides a framework for simulating and investigating the dynamics on the resulting network structure. In this work, we [Hossain, Alam and Abbass] investigated the structure and robustness of the Australian Airport Network (AAN) by modelling it as a complex network. The study investigates degree distribution, characteristic path length, clustering coefficient and centrality measures, as well as the correlations between them. Complex network simulation and analysis of the AAN indicates that the resulting network has a cumulative degree distribution described by an exponential function and displays small-world network properties.
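The kind of complex-network indices listed above can be computed with standard tools; the snippet below is a generic illustration using networkx on a toy route list (the route data are made up for the example and are not the AAN data set used in the study).

```python
# Generic complex-network analysis of an air-route network (toy data, not the AAN).
import networkx as nx
from collections import Counter

routes = [("SYD", "MEL"), ("SYD", "BNE"), ("MEL", "BNE"),
          ("SYD", "PER"), ("MEL", "ADL"), ("ADL", "PER")]
g = nx.Graph(routes)

# Small-world style indices: characteristic path length and clustering coefficient.
print("characteristic path length:", nx.average_shortest_path_length(g))
print("clustering coefficient:", nx.average_clustering(g))
print("degree centrality:", nx.degree_centrality(g))

# Cumulative degree distribution P(K >= k).
degrees = [d for _, d in g.degree()]
counts = Counter(degrees)
n = len(degrees)
for k in sorted(counts):
    p_cum = sum(v for kk, v in counts.items() if kk >= k) / n
    print(f"P(K >= {k}) = {p_cum:.2f}")
```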

Figure 5: Cumulative degree distribution in the Australian air transport network.


A Simulation and Evaluation Environment for Aircraft User Preferred Routes

Future air navigation will be based on 4-D Trajectory Management, which will be translated from User Preferred Routes (UPRs): the routes with the best business outcome from the airspace user's perspective. In this work we [Pham, Alam, Lokan & Hussein] designed and developed a simulation environment to generate UPRs. The environment can evaluate the UPR concept for different UPR algorithms in different flight and weather scenarios, and the impacts on the environment, delay and safety are analysed. It is expected to identify the best UPR algorithms for specific routing circumstances as well as to provide insight into UPR routes.
The simulation of a 4-D route for an aircraft, given aircraft performance data, a 2-D (latitude, longitude) route and an altitude profile in a weather environment, includes the following three basic simulation models:

Climb to Top of Climb

Top of Climb to Top of Descent (Cruise Phase)

Top of Descent to Touch Down

This work provides the framework for fundamental simulation and evaluation of UPR segments and trade-off analysis of different UPR routes.
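The three flight-phase models listed above can be pictured as a simple piecewise simulation of the vertical profile; the sketch below is purely illustrative (the climb/descent rates, speeds, distances and time step are assumed values, not those of the UPR environment).

```python
# Illustrative three-phase vertical profile: climb to Top of Climb, cruise to
# Top of Descent, then descend to touch down. All rates are assumed example values.
DT = 10.0                 # time step, s
CLIMB_RATE = 10.0         # m/s
DESCENT_RATE = -8.0       # m/s
CRUISE_ALT = 11000.0      # m
CRUISE_SPEED = 230.0      # m/s ground speed

def simulate_profile(route_length_m, descent_distance_m=320e3):
    t, x, alt, log = 0.0, 0.0, 0.0, []
    while x < route_length_m or alt > 0.0:
        if alt < CRUISE_ALT and x < route_length_m - descent_distance_m:
            phase, vs = "climb", CLIMB_RATE            # Climb to Top of Climb
        elif x < route_length_m - descent_distance_m:
            phase, vs = "cruise", 0.0                   # Top of Climb to Top of Descent
        else:
            phase, vs = "descent", DESCENT_RATE         # Top of Descent to touch down
        alt = max(0.0, min(CRUISE_ALT, alt + vs * DT))
        x += CRUISE_SPEED * DT
        t += DT
        log.append((t, x, alt, phase))
        if phase == "descent" and alt == 0.0:
            break
    return log

profile = simulate_profile(route_length_m=800e3)
print("flight time (min):", round(profile[-1][0] / 60, 1))
```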

Simulation and Operations Research


Simulation is an important research area in Operations Research. At UNSW Canberra, a strong research team conducts applied research using simulation in both defence and non-defence domains. Two such examples are briefly reported here.
Supply Chain Simulation: In the last decade, supply chain operations have received tremendous attention in the manufacturing and business sectors due to an increasingly challenging marketplace. The research team has developed a multi-agent simulation model for a manufacturing supply chain operation. The model is capable of handling complex networks with many tiers, each tier containing many business units with complex interactions among them. The team has described the multi-agent architecture and run simulations to analyse the operational aspects. This will allow companies to quantify the different interacting parameters in the supply chain and will help support improvements in operations.
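A multi-tier supply chain of this kind can be sketched as a chain of simple agents passing orders upstream and shipments downstream; the code below is only a toy illustration of the idea (tier names, stock levels and demand figures are assumptions, not the team's model).

```python
# Toy multi-tier supply chain: retailers order from warehouses, warehouses from
# manufacturers, manufacturers from suppliers. Quantities are assumed example values.
import random

class Agent:
    """One business unit holding stock and ordering from an upstream agent."""
    def __init__(self, name, upstream=None, stock=50, reorder_point=20, order_qty=40):
        self.name, self.upstream = name, upstream
        self.stock, self.reorder_point, self.order_qty = stock, reorder_point, order_qty

    def fill(self, demand):
        shipped = min(self.stock, demand)
        self.stock -= shipped
        if self.stock <= self.reorder_point and self.upstream is not None:
            self.stock += self.upstream.fill(self.order_qty)   # replenish from upstream tier
        return shipped

# One unit per tier for brevity; the real model has many units in each tier.
supplier = Agent("S1", upstream=None, stock=10**6)
manufacturer = Agent("M1", upstream=supplier)
warehouse = Agent("W1", upstream=manufacturer)
retailer = Agent("R1", upstream=warehouse)

random.seed(0)
served = sum(retailer.fill(random.randint(5, 15)) for _ in range(52))  # weekly market demand
print("units served over a year:", served)
```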

Figure 7: A Supply Chain with multiple entities in each stage (S1…Sn, M1…Mm, W1…Wk, R1…Rp feeding the Market).


Figure 6: UPR Simulation Model


Simulation for Spare Planning: In practice, maintenance and spare parts inventory policies are treated separately or sequentially. To ensure that spare parts are available for a production system when necessary, there is always a tendency to overstock them, and excess inventory ties up substantial working capital. The stock level of spare parts depends on the maintenance policy; therefore, maintenance programs should be designed to reduce both maintenance and inventory-related costs. In this research, a manufacturing system is considered with stochastic item failures, replacements and order lead times for statistically identical items. Developing a mathematical model for such a system is extremely difficult, so a simulation model is developed for the system operating with a block replacement and continuous review inventory policy. The response of the system was studied for a number of case problems. The study clearly shows that the jointly optimized policy produces better results than the combination of separately or sequentially optimized policies.
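A joint maintenance-and-inventory policy of the kind described can be explored with a small Monte Carlo sketch such as the one below; all distributions, costs and policy parameters are assumed example values, not those of the study.

```python
# Toy Monte Carlo of block replacement with a continuous-review (s, Q) spares policy.
# All parameters are assumed example values; the purpose is only to show the joint
# evaluation of maintenance interval and inventory policy by simulation.
import random

def simulate(block_interval, s, Q, horizon=5000, mean_life=120.0, lead_time=14,
             c_part=100.0, c_downtime=500.0, c_holding=0.5):
    random.seed(42)                                          # common random numbers across policies
    stock, on_order, arrive_at = Q, False, None
    age, cost = 0.0, 0.0
    for t in range(horizon):
        if on_order and t == arrive_at:
            stock, on_order = stock + Q, False
        age += 1
        failed = random.random() < 1.0 / mean_life           # crude failure chance per day
        if failed or age >= block_interval:                  # corrective or block replacement
            if stock > 0:
                stock -= 1
                cost += c_part
            else:
                cost += c_downtime                            # stock-out: machine waits for a part
            age = 0.0
        if stock <= s and not on_order:                       # continuous review reorder
            on_order, arrive_at = True, t + lead_time
        cost += c_holding * stock
    return cost

# Joint search over maintenance interval and inventory parameters.
best = min(((b, s, q) for b in (60, 90, 120) for s in (1, 2, 3) for q in (2, 4, 6)),
           key=lambda p: simulate(*p))
print("best (block interval, s, Q):", best)
```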

Research Facilities

The School of Engineering and Information Technology
occupies five adjacent buildings at UNSW Canberra,
located at the Australian Defence Force Academy.
Building 15 houses most of the Computer Science
and Information Systems components of the School.
It also contains:

Several general-purpose, high-quality Computer Laboratories

Image Coding Laboratory

Computer Network Laboratory

Studio, with Teleconferencing Facilities

Cognitive Engineering Laboratory

Building 16 is a four-level building containing some 1800 m² of office and laboratory accommodation, and houses most of the Electrical Engineering components of the School. It also contains two general-purpose teaching laboratories, along with the following specialist laboratories used for both teaching and research:

Microwave Laboratory and Anechoic Chamber
Signal Processing Systems Laboratory
Materials Processing Laboratory with Class 100 Clean Room
Quantum and Laser Laboratories
Electrical Machines and Power Electronics Laboratory
Final-year Project Laboratory
Control Laboratory
Robotics Laboratory
Communication Laboratory
VESL Laboratory

Building 16 also has an antenna range on its eastern roof, which can be controlled from within the Microwave Laboratory. Above the Signal Processing Systems Laboratory is a small optical observatory that is used for teaching and research in the restoration of atmospherically degraded images.

The School's teaching and research activities are supported by an Electronics Workshop (with an associated RF-screened room and printed circuit board facility), a Mechanical Workshop (complete with carpentry shop, plating room, welding bay and paint shop), and a components and raw materials store.
Building 17 is the John Baird Building, which houses the School's Administration Group and most of the Mechanical and Aerospace Engineering components of the School. It also contains a meeting room, a computer room and the laboratories listed below.

Non-destructive Inspection (NDI) Laboratory

Optics Laboratory

Aviation Safety Studio

Flight Simulator Laboratory

Figure 1. Traffic Management and Simulation Laboratory

Figure 2. Test Section of Shock Tube

Materials and Mechanical Testing Facility

Micro-nano Photonics Laboratory

Autonomous Vehicles Laboratory

Advanced Composites Laboratory

Building 18 contains a workshop and a design room that is set up for computer-aided design and manufacturing work for undergraduate thesis work and postgraduate research projects. It also contains:

Acoustics and Vibrations Laboratory
Anechoic Chamber
Fluids Laboratory
Thermodynamics and Vehicles Laboratory
Model Aircraft Laboratory
Vibrations Laboratory
Supersonic Wind Tunnel and Shock Tube Laboratory

Building 20 houses most of the Civil Engineering components of the School and also contains:


Aquatic Systems Test Tank
Hypersonics Laboratory
Fluidic Thrust Control Laboratory
Geotechnical Teaching Laboratory
Microscope and Balance Room
Temperature and Humidity Controlled Environmental Rooms
Concrete, Soils and Bitumen Laboratory
Fog Room
Structural Testing - Soils, Concrete and Steel Laboratory
Hydraulics and Environmental Engineering Laboratory
Geotechnical Engineering Laboratory
Mechanics of Solids and Material Testing Laboratory


SEIT also has a number of well-equipped mechanical workshop facilities for the design, manufacture and maintenance of equipment and experiments for undergraduate project work and postgraduate research.
Building 21, the Microfluidics Laboratory, houses a range of equipment for the manufacture and study of flows in micro-fluidic devices, and analytical instruments for the measurement of physical properties of fluids. Micro-end milling is the main manufacturing method used in the undergraduate student workshop, enabling undergraduate students to gain an insight into the micro/nano technology area. Two fume cupboards are available for chemistry-based experiments, with facilities for the storage of flammable and corrosive chemicals.
Building 32 houses the School's Traffic Management and Simulation Laboratory, located in Lecture Theatre North (SR 101).

Figure 3. Autoclave

Figure 4. Cognitive Engineering Laboratory

Figure 5. Hypersonic Shock Tunnel

Figure 6. Fixed Wing UAV

Figure 7. Telescope


Figure 8. Yamaha UAV in transit for flight testing

Figure 9. Adaptive optics

Figure 11. Laser Diagnostics Laboratory


Figure 10. Shimadzu universal testing machine

2011 SEIT Academics

Prof Elanor Huntington


Head of School

Dr Chris Lokan
Deputy Head of School
(Administration)

A/Prof Ruhul Sarker


Deputy Head of
School (Research)

Dr Alan McLucas
Deputy Head of
School (Teaching)

A/Prof Andrew Neely


Deputy Head of School
(Technical Support)

Prof Hussein Abbass


Professor

Dr Safat Al-Deen
Lecturer

Dr Sreenatha Anavatti
Senior Lecturer

Dr John Arnold
Professor
Deputy Rector

Dr Michael Barlow
Senior Lecturer

Mr Craig Benson
Senior Lecturer

Dr Lawrie Brown
Senior Lecturer

Ms Sue Burdekin
Senior Lecturer

Mr Martin Copeland
Lecturer

Dr Daryl Essam
Senior Lecturer


Mr Alan Fien
Senior Lecturer

Prof Michael Frater
Professor
Rector

Dr Matt Garratt
Senior Lecturer

Dr C.T. (Rajah)
Gnanendran
Senior Lecturer

A/Prof Charles Harb


Associate Professor

Dr Haroldo Hattori
Senior Lecturer

Dr Rik Heslehurst
Senior Lecturer

Professor Jiankun Hu
Professor

Dr Xiuping Jia
Senior Lecturer

Ms Bronwyn L. Jones
Associate Lecturer

A/Prof Obada Kayali


Associate Professor

Dr Amar Khennane
Senior Lecturer

A/Prof Harald Kleine


Associate Professor

Prof Joseph Lai


Professor

Dr Andrew Lambert
Senior Lecturer

Dr Edward Lewis
Senior Lecturer

Mr Raymond Lewis
Senior Lecturer

Dr Jong-Leng Liow
Senior Lecturer

A/Prof Robert Lo
Associate Professor

Dr Michael J Maher
Senior Lecturer


Dr Abdun Nasser
Mahmood
Lecturer

Dr Kathryn Merrick
Lecturer

Dr Gregory Milford
Senior Lecturer

Dr Gary Millar
Lecturer

Prof Evgeny Morozov


Professor

Dr Robert Niven
Senior Lecturer

Dr Sean O'Byrne
Senior Lecturer

A/Prof Valeri
Ougrinovski
Associate Professor

Prof Ian Petersen


Professor

A/Prof Mark Pickering


Associate Professor

A/Prof Hemanshu Pota


Associate Professor

Mr Heath Pratt
Lecturer

Dr Tapabrata Ray
Senior Lecturer

Dr Mark C. Reed
Senior Lecturer

Dr Michael Ryan
Senior Lecturer

Dr Kamran Shafi
Lecturer

Dr Krishna Shankar
Senior Lecturer

Dr Warren Smith
Senior Lecturer

Dr Murat Tahtali
Senior Lecturer

Dr Tim Turner
Senior Lecturer


Mr Trevor Wheatley
Lecturer

Dr Sarah (Yixia) Zhang


Senior Lecturer


Mr Alan White
Senior Lecturer

Mr Eric Wilson
Senior Lecturer

Dr Weiping Zhu
Senior Lecturer


Dr Kathryn Wilson
Senior Lecturer

Dr John Young
Senior Lecturer

