Abstract
Economically effective maintenance and monitoring of power systems to ensure high quality and reliability of
electric power supplied to customers is becoming one of the most significant tasks of today's power industry.
This is highly important because, in the case of unexpected failures, both the utilities and the consumers face substantial losses.
The ideal power network can be approached by minimizing maintenance cost and maximizing the service life and reliability of existing power networks, but both goals cannot be achieved simultaneously. Timely preventive maintenance can dramatically reduce
system failures. Currently, there are three maintenance methods employed by utilities: corrective maintenance,
scheduled maintenance and condition-based maintenance. The following block diagram shows the important
features of the various maintenance methods.
Corrective maintenance dominates in today's power industry. This method is passive, i.e. no action is taken
until a failure occurs. Scheduled maintenance on the other hand refers to periodic maintenance carried out at
pre-determined time intervals. Condition-based maintenance is defined as planned maintenance based on
continuous monitoring of equipment status. Condition-based maintenance is very attractive since the
maintenance action is only taken when required by the power system components. The only drawback of
condition-based maintenance is monitoring cost. Expensive monitoring devices and extra technicians are
needed to implement condition-based maintenance. Mobile monitoring solves this problem.
Mobile monitoring involves a robotic platform carrying a sensor array that continuously patrols the power cable network, locates incipient failures and estimates the aging status of the electrical insulation.
Monitoring of electric power systems in real time for reliability, aging status and presence of incipient faults
requires distributed and centralized processing of large amounts of data from distributed sensor networks. To
solve this task, cohesive multidisciplinary efforts are needed from such fields as sensing, signal processing,
control, communications and robotics.
As with any preventive maintenance technology, the efforts spent on the status monitoring are justified by the
reduction in the fault occurrence and elimination of consequent losses due to disruption of electric power and
damage to equipment. Moreover, it is a well recognized fact in the surveillance and monitoring fields that measurement of the parameters of a distributed system is more accurate when it is accomplished using distributed sensing techniques. In addition to the sensitivity improvement and the consequent reliability enhancement, the use of robotic platforms for power system maintenance has many other advantages, such as replacing human workers in dangerous and highly specialized operations like live line maintenance.
SENSOR FUSION:
The aging of power cables begins long before the cable actually fails. There are several external phenomena
indicating ongoing aging problems including partial discharges, hot spots, mechanical cracks and changes of
insulation dielectric properties. These phenomena can be used to locate deteriorating cables and estimate their remaining lifetime. If incipient failures can be detected, or the aging process can be predicted accurately, possible outages and the consequent economic losses can be avoided.
In the robotic platform, non-destructive miniature sensors capable of determining the status of power cable
systems are developed and integrated into a monitoring system including a video sensor for visual inspection,
an infrared thermal sensor for detection of hot spots, an acoustic sensor for identifying partial discharge
activities and a fringing electric field sensor for determining the aging status of electrical insulation. Among these failure phenomena, the most important is partial discharge activity.
MOTION PATTERN:
Inspection robots used in power systems can be subdivided into external and internal ones. External robots travel over the outer surface of cables and may possess a high degree of autonomy, whereas internal robots travel in the inner spaces of ducts and pipes and are often implemented as track-following devices with a predetermined route and a limited set of operations. The choice between the two motion patterns is determined by their complexity and their autonomy level. The internal pattern requires an extra guide inside the cable, which is a significant problem because it impairs the integrity of the cable. Another problem is that an internal robot must be too small to carry many functions. Because of its simplicity and its higher autonomy level, the external traveling method is preferred.
POWER SUPPLY:
Since the cable network is a widely distributed system, it is very limiting for the inspection robot to draw a power cord behind itself. Ideally, the power supply should be wireless, so it is desirable that the platform harvest energy from the energized cables themselves. Inductive coupling is a promising method for such a wireless supply: although low-frequency coupling is less efficient, the platform's direct proximity to the cable makes it a viable choice. Of course, the platform requires an independent backup power source as well.
CONTROL STRATEGY:
The control strategy includes object tracking, collision avoidance and prevention of electrical short circuits. The control system receives initial commands from the operator for global tasks, while smaller tasks are often pre-programmed. The control should be robust because of the complicated motions required and the irregular
surface of the cable connections. It should include resources to locate the sensor array with respect to the
inspected system, to determine the shortest path and to adaptively switch sensor operation from a fast
superficial inspection mode to a slow detailed inspection mode.
SIGNAL PROCESSING STRATEGIES:
The major purpose of signal processing is to determine the fault type, fault extent and aging status. Then the
accurate estimation can be given to aid the decision on maintenance. The robot requires considerable
computational resources to be adaptive and flexible. This is highly problematic because of the limited size of the robot, especially for underground applications. Generally speaking, the smaller the sensor or actuator, the higher its bandwidth. This implies higher control rates in the robot, which ultimately translates into additional computational load. Accordingly, this strongly argues for the use of communication and off-board intelligence, which in turn requires allocating work between local and remote signal processing. In local signal processing, all data is processed onboard, whereas in remote signal processing, all data is relayed to the host computer for analysis.
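The local-versus-remote allocation described above comes down to comparing the time to analyze a frame onboard against the time to ship it over the radio link. The following sketch illustrates that trade-off; all function names, thresholds and numeric values are hypothetical, not taken from the original system.

```python
# Illustrative sketch of allocating signal processing between the robot
# and the host computer. All names and numbers here are assumptions.

def allocate_processing(frame_bytes: int,
                        link_bps: float,
                        onboard_flops: float,
                        flops_per_byte: float,
                        deadline_s: float) -> str:
    """Return 'local' or 'remote' for one sensor data frame."""
    # Time to analyze the frame on the robot's own processor.
    local_time = frame_bytes * flops_per_byte / onboard_flops
    # Time to ship the raw frame to the host over the radio link
    # (host-side compute is assumed negligible by comparison).
    remote_time = frame_bytes * 8 / link_bps
    return "local" if local_time <= min(remote_time, deadline_s) else "remote"

# A large acoustic frame over a slow link is cheaper to process onboard:
decision = allocate_processing(frame_bytes=1_000_000, link_bps=250_000,
                               onboard_flops=1e8, flops_per_byte=50,
                               deadline_s=2.0)
# decision == "local"
```

With a much weaker onboard processor the same frame would instead be relayed to the host, which is the essential argument for off-board intelligence made above.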
COMMUNICATION:
The communication module exchanges data between the master computer and the mobile robot, handling data streams that originate on both sides of the communication link and the different priorities associated with them.
POSITIONING SYSTEM:
The positioning system should work like a Global Positioning System (GPS), i.e., the exact position of the robot
can be estimated by such a system. Once this system is implemented, effective maintenance and rescue tasks
for cable systems, and even for the robot itself, can be carried out. In most applications, two basic position estimation methods are employed: relative positioning and absolute positioning. Relative positioning can provide a rough, continuously updated estimate of location, while absolute positioning can compensate for the errors it accumulates.
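How relative and absolute positioning complement each other can be shown with a minimal complementary-filter step. The 1-D setting, the blending gain and the sample values below are illustrative assumptions, not part of the original design.

```python
def fuse_position(estimate, odometry_delta, absolute_fix=None, gain=0.5):
    """One update step: integrate relative motion, then blend in an
    absolute fix when one is available."""
    estimate = estimate + odometry_delta      # relative positioning (drifts)
    if absolute_fix is not None:              # absolute positioning (bounded error)
        estimate = estimate + gain * (absolute_fix - estimate)
    return estimate

# Two odometry-only steps, then a step with an absolute fix of 3.2 m:
pos = 0.0
for delta, fix in [(1.0, None), (1.1, None), (0.9, 3.2)]:
    pos = fuse_position(pos, delta, fix)
# pos == 3.1: the fix has pulled the drifted estimate (3.0) halfway toward 3.2
```

The gain controls how strongly each absolute measurement corrects the accumulated odometry drift.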
MicroGrid
Abstract
Evolutionary changes in the regulatory and operational climate of traditional electric utilities and the emergence
of smaller generating systems such as microturbines have opened new opportunities for on-site power
generation by electricity users. In this context, distributed energy resources (DER) - small power generators
typically located at users' sites where the energy (both electric and thermal) they generate is used - have
emerged as a promising option to meet growing customer needs for electric power with an emphasis on
reliability and power quality.
The portfolio of DER includes generators, energy storage, load control, and, for certain classes of systems,
advanced power electronic interfaces between the generators and the bulk power provider. This paper proposes that the significant potential of smaller DER to meet customers' and utilities' needs can best be captured by organizing these resources into MicroGrids.
The MicroGrid concept assumes an aggregation of loads and microsources operating as a single system providing both power and heat. The majority of the microsources must be power electronic based to provide the flexibility required to ensure operation as a single aggregated system. This control flexibility allows the MicroGrid to
present itself to the bulk power system as a single controlled unit that meets local needs for reliability and
security.
The MicroGrid would most likely exist on a small, dense group of contiguous geographic sites that exchange
electrical energy through a low voltage (e.g., 480 V) network and heat through exchange of working fluids. In
the commercial sector, heat loads may well be absorption cooling. The generators and loads within the cluster
are placed and coordinated to minimize the joint cost of serving electricity and heat demand, given prevailing
market conditions, while operating safely and maintaining power balance and quality. MicroGrids move the power quality and reliability (PQR) choice closer to the end uses and permit it to match the end user's needs more effectively. MicroGrids can, therefore, improve the overall efficiency of electricity delivery at the point of end use, and, as MicroGrids become more prevalent, the PQR standards of the macrogrid can ultimately be matched to the purpose of bulk power delivery.
MICROGRID ARCHITECTURE
The MicroGrid structure assumes an aggregation of loads and microsources operating as a single system
providing both power and heat. The majority of the microsources must be power electronic based to provide the required flexibility to ensure controlled operation as a single aggregated system. This control flexibility allows the
MicroGrid to present itself to the bulk power system as a single controlled unit, have plug-and-play simplicity for
each microsource, and meet the customers' local needs. These needs include increased local reliability and
security.
The figure shows the basic MicroGrid architecture. The electrical system has three feeders, A, B and C. At the end of each feeder there is a collection of loads. The system is connected to the distribution system through a separation device, usually a static switch. The feeder voltage at the loads is usually 415 V or less. On feeder A there are several microsources, such as PV (photovoltaic) arrays and microturbines (which use renewable energy and fuel sources as their input), including one microsource that provides combined heat and power. Each feeder has a circuit breaker and a power flow controller.
The power flow controller regulates feeder power flow at a level prescribed by the energy manager. As the load downstream changes, the local microsources increase or decrease their power output to hold the power flow constant. Feeders A and C are assumed to carry critical loads and feeder B a non-critical load. When there are power quality problems on the distribution system, the MicroGrid can island (isolate) itself by using the separation device. The non-critical feeder can be dropped using the breaker at B.
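The hold-the-flow-constant behavior of the power flow controller can be sketched as a simple feedback loop. The gain, the 60 kW setpoint and the load values below are illustrative assumptions chosen only to show the mechanism.

```python
def microsource_step(setpoint_kw, load_kw, source_kw, gain=0.5):
    """One control step: feeder import = downstream load - local generation.
    The microsource output is nudged so the import tracks the setpoint."""
    feeder_import = load_kw - source_kw
    error = feeder_import - setpoint_kw
    return source_kw + gain * error   # raise output when import is too high

source = 0.0
for load in [90, 110, 130]:          # downstream load changes (kW)
    for _ in range(50):              # let the loop settle after each change
        source = microsource_step(setpoint_kw=60, load_kw=load,
                                  source_kw=source)
# After each load change, the local source absorbs the difference and the
# feeder import settles back to the 60 kW prescribed by the energy manager.
```

In the real system the same role is played by fast power electronic controls on each microsource rather than an explicit software loop.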
Technologies:
The key feature that makes the MicroGrid possible is the set of power electronics, control, and communications capabilities that permit a MicroGrid to function as a semiautonomous power system. The power electronics are the critical distinguishing feature of the MicroGrid, and they are discussed in detail below. This section describes some of the other technologies whose development will shape MicroGrids.
Microturbines, currently in the 25-100 kW range (although larger ones are under development), may ultimately be mass-produced at low cost. They are mechanically simple, single-shaft devices operating at high speed (50,000-100,000 rpm), typically on airfoil bearings. They are designed to combine the reliability of commercial aircraft auxiliary power units (APUs) with the low cost of automotive turbochargers. Despite their mechanical simplicity, microturbines rely on power electronics to interface with loads. Microturbines should also be acceptably clean running. Their primary fuel is natural gas, although they may also burn propane or liquid fuels in some applications, which permits clean combustion, notably with low particulates.
Fuel cells are also well suited for distributed generation applications. They offer high efficiency and low
emissions but are currently expensive. Phosphoric acid cells are commercially available in the 200-kW range, and high-temperature solid-oxide and molten-carbonate cells
have been demonstrated and are particularly promising for MicroGrid application. A major development effort
by automotive companies has focused on the possibility of using on-board reforming of gasoline or other
common fuels to hydrogen, to be used in low temperature proton exchange membrane (PEM) fuel cells. Fuel
cell engine designs are attractive because they promise high efficiency without the significant polluting
emissions associated with internal combustion engines.
Renewable generation could appear in MicroGrids, especially sources interconnected through power electronic devices, such as PV systems or some wind turbines. Biofueled microturbines are also a possibility.
Environmentally, fuel cells and most renewable sources are a major improvement over conventional
combustion engines.
Storage technologies such as batteries and ultracapacitors are important components of MicroGrids. Storage on the microsource's dc bus provides ride-through capability during system changes. Storage systems have become more versatile than they were five years ago. Twenty-eight-cell ultracapacitors can provide up to 12.5 kW for a few seconds.
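A back-of-the-envelope calculation shows how modest the implied ride-through energy is. Only the 12.5 kW figure comes from the text; the 5-second hold-up duration is an assumed example value for "a few seconds".

```python
# Ride-through sizing sketch; the hold-up duration is an assumption.
power_kw = 12.5                    # peak power of the 28-cell ultracapacitor bank
holdup_s = 5.0                     # assumed ride-through duration
energy_kj = power_kw * holdup_s    # energy the bank must deliver: 62.5 kJ
energy_wh = energy_kj / 3.6        # about 17 Wh, a small store for a capacitor bank
```

This is why ultracapacitors, despite their low energy density, are well matched to bridging short disturbances until a slower source ramps up.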
Heat recovery technologies for use in CHP systems are necessary for MicroGrid viability, as is explained in the
following section. Many of these technologies are relatively developed and familiar, such as low and medium
temperature heat exchangers. Others, such as absorption chillers, are known but not in widespread use.
NOx emissions, which are a precursor to urban smog, are mainly a consequence of combustion. Some traditional combustion fuels, notably coal, contain nitrogen that is oxidized during the combustion process, but even burning fuels that contain no nitrogen emits NOx, which forms at high combustion temperatures from the nitrogen and oxygen in the air.
Gas turbines, reciprocating engines, and reformers all involve high temperatures that result in NOx production.
These devices must be carefully designed to limit NOx formation. Thermal microsources that effectively use
waste heat can also have low overall carbon emissions that compete with those of modern central station
combined-cycle generators. Human exposure to smog also depends on the location of smog precursor
emissions. Since DER is likely to move NOx emissions closer to population centers, exposure patterns will be
affected.
1.3 Visible Light Communications Consortium
The Visible Light Communications Consortium (VLCC), which mainly comprises Japanese technology companies, was founded in November 2003. It promotes the usage of visible light for data transmission through public relations and tries to establish consistent standards. A list of member companies can be found in the appendix. The work done by the VLCC is split among four different committees.
2. TECHNOLOGY
2.1 Transmitters
In theory, every kind of light source can be used as a transmitting device for VLC. However, some are better suited than others. For instance, incandescent lights quickly break down when switched on and off frequently and are thus not recommended as VLC transmitters. More
promising alternatives are fluorescent lights and LEDs. VLC transmitters are usually also used
for providing illumination of the rooms in which they are used. This makes fluorescent lights a
particularly popular choice, because they can flicker quickly enough to transmit a meaningful
amount of data and are already widely used for illumination purposes.
However, with an ever-rising market share of LEDs and further technological improvements
such as higher brightness and spectral clarity,
LEDs are expected to replace fluorescent lights as illumination sources and VLC transmitters.
The simplest form of white LED consists of a blue to ultraviolet LED surrounded by a phosphor coating, which is stimulated by the underlying LED and emits white light. This design allows data rates of up to 40 Mbit/s.
RGB LEDs do not rely on a phosphor to generate white light. They contain three distinct LEDs (a red, a green and a blue one) which, when lit at the same time, emit light that humans perceive as white. Because there is no delay from stimulating a phosphor first, data rates of up to 100 Mbit/s can be achieved using RGB LEDs [Won et al. 2008].
In recent years the development of resonant cavity LEDs (RCLEDs) has advanced
considerably. These are similar to RGB LEDs in that they are comprised of three distinct LEDs,
but in addition they are fitted with Bragg mirrors which enhance the spectral clarity to such a
degree that the emitted light can be modulated at very high frequencies. In early 2010, Siemens showed that data transmission at a rate of 500 Mbit/s is possible with this approach.
It should be noted that VLC will probably not be used for massive data transmission. High data rates such as the ones referred to above were reached in meticulous laboratory setups that cannot be expected to be reproduced in real-life scenarios. One can expect data rates of about 5 kbit/s in average applications, such as location estimation [Haruyama et al. 2008]. The distance over which VLC can reasonably be expected to be used ranges up to about 6 meters [Won et al. 2008].
2.2 Receivers
The most common choice of receiver is the photodiode, which turns light into electrical pulses. The signal retrieved in this way can then be demodulated into actual data. In more complex VLC-based scenarios, such as Image Sensor Communication [Iizuka and Wang 2008], even CMOS or CCD sensors (the kind usually built into digital cameras) are used.
3. MODULATION
In order to actually send out data via LEDs, such as pictures or audio files, it is necessary to modulate the data onto a carrier signal. In the context of visible light communication, this carrier signal consists of light pulses sent out in short intervals.
How exactly these pulses are interpreted depends on the chosen modulation scheme, two of which will be presented in this section. First, a scheme called subcarrier pulse-position modulation is presented, which has already been established as a VLC standard by the VLCC. The second modulation scheme to be addressed is called frequency-shift keying, commonly referred to as FSK. A detailed account of modulation can be found in Sugiyama et al. [2007]. They also explore how to combine pulse-position modulation with illumination control.
3.1 Pulse-position modulation
To carry out subcarrier pulse-position modulation (SC-PPM), a time window T is chosen in which exactly one pulse of length T/k is expected. Subcarrier pulse-position modulation can thus be described in parameterized form as SC-kPPM. k has to be a power of two, i.e. k = 2^l for some l, so there are k = 2^l different points in time at which the pulse can occur. Suppose a pulse is registered at position k' (with 0 <= k' < k). The data represented by this pulse is then simply the number k' written as an l-digit binary number.
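The windowing rule above can be made concrete in a few lines. This is an illustrative sketch of the slot mapping only, not an implementation of the JEITA CP-1222 standard.

```python
def sc_ppm_modulate(bits, k):
    """Map a bit sequence onto pulse positions for SC-kPPM.
    Each symbol window has k slots; exactly one slot carries a pulse,
    encoding l = log2(k) bits. Assumes len(bits) is a multiple of l."""
    l = k.bit_length() - 1
    assert k == 1 << l, "k must be a power of two"
    symbols = []
    for i in range(0, len(bits), l):
        chunk = bits[i:i + l]
        position = int("".join(map(str, chunk)), 2)  # l bits -> slot index
        symbols.append([1 if s == position else 0 for s in range(k)])
    return symbols

# First four SC-4PPM symbols of the Fig. 2 bit stream (1,0,1,0,0,1,1,1):
frames = sc_ppm_modulate([1, 0, 1, 0, 0, 1, 1, 1], k=4)
# -> pulses in slots 2, 2, 1, 3
```

Demodulation is the inverse: locate the single lit slot in each window and write its index back out as l bits.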
Figure 2 exemplifies pulse-position modulation by showing how the data 1, 0, 1, 0, 0, 1, 1, 1, 0, 0, 1, 0, 1, 1, 0, 1 is modulated into a succession of pulses with SC-4PPM and SC-2PPM. The standard JEITA CP-1222 [Haruyama et al. 2008], which is promoted by the VLCC, recommends using an SC-PPM modulation scheme.
Data is represented by presence and absence of the carrier wave which is a scheme generally
referred to as On-Off Keying (OOK). An alternative scheme is presented in the upcoming
section.
Fig. 2 Examples for subcarrier pulse-position modulation in the context of VLC: SC-4PPM and SC-2PPM

Fig. 3 Example for binary frequency-shift keying in VLC
At this point it is important to clarify a common source of confusion: in none of these modulation schemes is the actual optical light frequency changed. That would lead to undesired effects, since changing the light frequency also means changing the wavelength, i.e. the color, of the light. Because VLC transmitters also serve general illumination purposes, ongoing variation of the color of the surrounding light is unacceptable in most circumstances.
In subcarrier pulse-position modulation it is the timing of light pulses within each window that encodes the data, whereas in frequency-shift keying the actual pulse frequency is changed depending on the data to be sent. In FSK there is no "position" of pulses, because light pulses are sent uninterruptedly.
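A binary FSK pulse train can be sketched as follows. The two pulse frequencies and the sampling granularity are illustrative assumptions, not values from any standard.

```python
def fsk_pulse_train(bits, f0=2, f1=4, samples_per_bit=8):
    """Binary frequency-shift keying for VLC: pulses are emitted without
    interruption, and the pulse rate within each bit interval encodes the
    bit (f0 on/off cycles for a '0', f1 cycles for a '1').
    Assumes f0 and f1 divide samples_per_bit evenly."""
    train = []
    for b in bits:
        cycles = f1 if b else f0
        period = samples_per_bit // cycles     # samples per on/off cycle
        for s in range(samples_per_bit):
            # Light on for the first half of each cycle, off for the second.
            train.append(1 if (s % period) < period // 2 else 0)
    return train

# A '0' produces 2 slow cycles, a '1' produces 4 fast cycles:
signal = fsk_pulse_train([0, 1])
# signal == [1, 1, 0, 0, 1, 1, 0, 0, 1, 0, 1, 0, 1, 0, 1, 0]
```

Note that the lamp is never dark for longer than half a cycle, which is what keeps the flicker invisible and the illumination uninterrupted.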
4. STANDARDIZATION EFFORTS
There are currently two JEITA (Japan Electronics and Information Technology Industries
Association) standards (JEITA CP-1221 and JEITA CP-1222) which will be presented in this
section. There is also an IEEE task group working on the specification of PHY and MAC layers
for VLC.
The VLCC played a key role in specifying the two JEITA standards. In 2007, the VLCC
proposed two standards which they called Visible Light Communication System Standard and
Visible Light ID System Standard. These two standards were accepted by the JEITA and
became known as JEITA CP-1221 and JEITA CP-1222, respectively.
Both standards were introduced in an effort to avoid the fragmentation into proprietary protocols that experience shows usually happens when a technology is not standardized.
Within the IEEE working group for wireless personal area networks (802.15), the IEEE has formed task group 7 (TG7), which is to write a PHY and MAC standard for VLC. Unfortunately, as of this writing, information on its progress (the most recent of which dates back to January 2009) is scarce.
5. APPLICATIONS
It should be noted that most proposed VLC applications are far from market-ready. Most applications mentioned in this section have been tried out in research settings, but their usage in real-world scenarios is still somewhat hypothetical.
5.1 Localization
One of the major applications of VLC, especially in the medical field, is estimating one's location. Liu et al. [2008] propose a scenario for visually handicapped people, in which location estimation is used to guide people through a series of hallways.
All hallways are assumed to be illuminated by fluorescent lights which are capable of
transmitting a unique ID via VLC. Estimating the current location consists of two steps: Firstly,
the distance to each fluorescent light in reach is computed and secondly, the current position is
estimated based on the previously computed distances. The distance to each light source is computed by first measuring the angle of the incident light with the assistance of a photo sensor attached to the person's shoulder. Then, using trigonometric functions, the horizontal distance between the receiver and the light source is calculated. The distance to each light source describes a unique range curve (a rectangle with two half circles at each end). The intersection of all range curves is the estimated location, as shown in Figure 4.
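The intersection step can be sketched as a small least-squares computation. This is an illustrative simplification of the "intersect the range curves" idea, reduced to circles in the horizontal plane; the anchor coordinates and distances in the example are made up, and the angle-to-distance trigonometry described above is assumed to have already been done.

```python
def trilaterate(anchors, distances):
    """Estimate a 2-D position from horizontal distances to three or more
    light sources at known positions, via least-squares linearization."""
    (x0, y0), d0 = anchors[0], distances[0]
    # Subtracting the first circle equation from each of the others
    # yields a linear system: 2(xi-x0)x + 2(yi-y0)y = b_i.
    a_rows, b_rows = [], []
    for (xi, yi), di in zip(anchors[1:], distances[1:]):
        a_rows.append((2 * (xi - x0), 2 * (yi - y0)))
        b_rows.append(d0**2 - di**2 + xi**2 - x0**2 + yi**2 - y0**2)
    # Solve the 2x2 normal equations directly (no external libraries).
    a11 = sum(r[0] * r[0] for r in a_rows)
    a12 = sum(r[0] * r[1] for r in a_rows)
    a22 = sum(r[1] * r[1] for r in a_rows)
    b1 = sum(r[0] * b for r, b in zip(a_rows, b_rows))
    b2 = sum(r[1] * b for r, b in zip(a_rows, b_rows))
    det = a11 * a22 - a12 * a12
    return ((a22 * b1 - a12 * b2) / det, (a11 * b2 - a12 * b1) / det)

# Three lights at (0,0), (4,0), (0,4); distances measured from (1,2):
x, y = trilaterate([(0, 0), (4, 0), (0, 4)],
                   [5 ** 0.5, 13 ** 0.5, 5 ** 0.5])
# (x, y) is approximately (1.0, 2.0)
```

With more than three lights the same normal equations average out measurement noise, which is why reaching several fluorescent lights at once improves the estimate.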