
Life Cycle Assessment:
Quantitative Approaches for Decisions That Matter

H. Scott Matthews
Chris T. Hendrickson
Deanna H. Matthews

Life Cycle Assessment: Quantitative Approaches for Decisions That Matter lcatextbook.com

Copyright/permissions/creative commons/etc on this page

Microsoft, Encarta, MSN, and Windows are either registered trademarks or trademarks of
Microsoft Corporation in the United States and/or other countries.
MATLAB is a registered trademark of The MathWorks, Inc. in the United States and/or other
countries.


Dedication

To Lester Lave

Who taught us to work on problems that matter


Table of Contents
Dedication ................................................................................................................... 3
Table of Contents ......................................................................................................... 4
Preface ........................................................................................................................ 7
Chapter 1 : Life Cycle and Systems Thinking ............................................................... 10
Learning Objectives for the Chapter .................................................................................... 10
Overview of Life Cycles ....................................................................................................... 10
A Brief History of Engineering and The Environment ........................................................... 12
Life Cycle Thinking .............................................................................................................. 14
Systems Thinking in the Life Cycle ....................................................................................... 18
A History of Life Cycle Thinking and Life Cycle Assessment .................................................. 18
Decisions Made Without Life Cycle Thinking ....................................................................... 21
Inputs and Outputs of Interest in Life Cycle Models ............................................................ 22
From Inputs and Outputs to Impacts ................................................................................... 25
The Role of Design Choices ................................................................................................. 27
What Life Cycle Thinking and Life Cycle Assessment Is Not .................................................. 28
Chapter Summary ............................................................................................................... 29
Chapter 2 : Quantitative and Qualitative Methods Supporting Life Cycle Assessment 32
Learning Objectives for the Chapter .................................................................................... 32
Basic Qualitative and Quantitative Skills ............................................................................. 32
Working with Data Sources ................................................................................................. 33
Accuracy vs. Precision ......................................................................................................... 37
Uncertainty and Variability ................................................................................................. 38
Management of Significant Figures ..................................................................................... 39
Ranges ................................................................................................................................ 41
Units and Unit Conversions ................................................................................................. 44
Considerations for Energy Unit Conversions ........................................................................ 45
Use of Emissions or Resource Use Factors ........................................................................... 47
Estimations vs. Calculations ................................................................................................ 48
Attributes of Good Assumptions ......................................................................................... 53
Validating your Estimates ................................................................................................... 54
Building Quantitative Models ............................................................................................. 56
A Three-step method for Quantitative and Qualitative Assessment .................................... 58
Chapter Summary ............................................................................................................... 59
Chapter 3 : Life Cycle Cost Analysis ............................................................................ 62
Learning Objectives for the Chapter .................................................................................... 62
Life Cycle Cost Analysis in the Engineering Domain ............................................................. 62
Discounting Future Values to the Present ........................................................................... 64

Life Cycle Cost Analysis for Public Projects .......................................................................... 67


Deterministic and Probabilistic LCCA ................................................................................... 69
Chapter Summary ............................................................................................................... 72

Chapter 4 : The ISO LCA Standard - Goal and Scope ................................... 77


Learning Objectives for the Chapter .................................................................................... 77
Overview of ISO and the Life Cycle Assessment Standard .................................................... 77
ISO LCA Study Design Parameters ....................................................................................... 80
Chapter Summary ............................................................................................................... 92
Chapter 5 : Data Acquisition and Management for Life Cycle Inventory Analysis ....... 96
Learning Objectives for the Chapter .................................................................................... 96
ISO Life Cycle Inventory Analysis ......................................................................................... 97
Life Cycle Interpretation .................................................................................................... 108
Identifying and Using Life Cycle Data Sources .................................................................... 109
Details for Other Databases .............................................................................................. 119
LCI Data Module Metadata ............................................................................................... 120
Referencing Secondary Data ............................................................................................. 124
Additional Considerations about Secondary Data and Metadata ....................................... 125
Chapter Summary ............................................................................................................. 128
Advanced Material for Chapter 5 ...................................................................................... 131
Section 1 - Accessing Data via the US LCA Digital Commons .............................................. 131
Section 2 - Accessing LCI Data Modules in SimaPro ........................................... 136
Section 3 - Accessing LCI Data Modules in openLCA .......................................... 142
Chapter 6 : Analyzing Multifunctional Product Systems ........................................... 155
Learning Objectives for the Chapter .................................................................................. 155
Allocation of Flows for Processes with Multiple Products .................................................. 157
Allocation Example from LCI Databases ............................................................................. 163
Chapter Summary ............................................................................................................. 179
Further Reading ................................................................................................................ 180
Chapter 7 : Another chapter TBA? .............................................. 182
Chapter 8 : LCA Screening via Economic Input-Output Models ................................. 185
Learning Objectives for the Chapter .................................................................................. 185
Input-Output Tables and Models ....................................................................................... 185
Input-Output Models Applied to Life Cycle Assessment .................................... 193
Introduction to the EIO-LCA Input-Output LCA Model ....................................................... 197
EIO-LCA Example: Automobile Manufacturing ................................................................... 200
Beyond Cradle to Gate Analyses with IO-LCA .................................................................... 205
Chapter Summary ............................................................................................................. 207
Homework Questions for Chapter 8 .................................................................................. 210
Advanced Material for Chapter 8 - Overview ..................................................................... 213

Section 1 - Linear Algebra Derivation of Leontief (Input-Output) Model Equations ............ 213
Section 2 - Commodities, Industries, and the Make-Use Framework of EIO Methods ....... 215
Section 3 - Further Detail on Prices in IO-LCA Models ....................................... 218
Section 4 - Mapping Examples from Industry Classified Sectors to EIO Model Sectors ...... 224
Section 5 - Spreadsheet and MATLAB Methods for Using EIO Models ............................... 229

Chapter 9 : Advanced Life Cycle Models .................................................................. 241


Learning Objectives for the Chapter .................................................................................. 241
Process Matrix Based Approach to LCA ............................................................................. 241
Connection Between Process- and IO-Based Matrix Formulations .................................... 246
Extending process matrix methods to post-production stages .......................................... 254
Categories of Hybrid LCA Models ...................................................................................... 258
Chapter Summary ............................................................................................................. 264
Advanced Material for Chapter 9 - Section 1 - Process Matrix Models in MATLAB ........... 267
Advanced Material for Chapter 9 - Section 2 - Process Matrix Models in SimaPro ............ 269
Advanced Material for Chapter 9 - Section 3 - Process Matrix Models in openLCA ........... 273
Chapter 10 : Life Cycle Impact Assessment .............................................................. 278
Learning Objectives for the Chapter .................................................................................. 278
Why Impact Assessment? ................................................................................................. 278
Overview of Impacts and Impact Assessment ................................................................... 279
Chapter Summary ............................................................................................................. 300
Homework Questions for Chapter 10 ................................................................................ 300
Chapter 11 : Uncertainty and Variability Assessment in LCA ..................................... 302
Learning Objectives for the Chapter .................................................................................. 302
Methods to Address Uncertainty and Variability ............................................................... 316
Quantitative Methods to Address Uncertainty and Variability .......................................... 320
Deterministic and Probabilistic LCCA ................................................................................. 328
Chapter Summary ............................................................................................................. 329
Homework Questions for Chapter 11 ................................................................................ 330
Chapter 12 : Advanced Hybrid Hotspot and Path Analysis ........................................ 341
Learning Objectives for the Chapter .................................................................................. 341
Results of Aggregated LCA Methods ................................................................................. 341
A Disaggregated Two-Unit Example .................................................................................. 344
Structural Path and Network Analysis ............................................................................... 345
Web-based Tool for SPA ................................................................................................... 354
Chapter Summary ............................................................................................................. 365
Homework Questions for Chapter 12 ................................................................................ 366
Advanced Material for Chapter 12 - Section 1 - MATLAB Code for SPA ............................. 367


Preface
After finishing our book on Economic Input-Output Life Cycle Assessment (EIO-LCA)
(Hendrickson, Lave and Matthews 2006) with help from our colleagues Arpad Horvath,
Satish Joshi, Fran McMichael, Heather MacLean, Gyorgyi Cicas, Deanna Matthews and
Joule Bergerson, we assumed that would be our final word. We did not imagine writing
another book on the topic. The 2006 book successfully demonstrated the EIO-LCA
approach and illustrated various applications. At Carnegie Mellon University (CMU), we
had a sustainability sequence of four half-semester courses in our graduate program in Civil
and Environmental Engineering. Only one of those courses was on environmental life cycle
assessment (LCA), and over the course of a seven-week term there was only so much
material that could be covered. Also, that LCA follows an established process set by the
International Organization for Standardization (ISO) (and other similar agencies) meant that
it was hard to justify writing a book that teaches you how to use an existing recipe. Imagine
writing a cookbook that intends to teach you how to read other cookbooks!
But after using the book for a few years, we realized how much other material was needed
and how the book had only limited value as a textbook (which was not even the intent of the
book in the first place). Our half-semester graduate LCA course grew to a full semester. We
supplemented readings from our book with many other resources to the point that as of a
few years ago we were only assigning a few of the original book chapters. So while this book
was not really planned, the preparations for it have been happening for the last five years.
Another driving force is that LCA has changed since 2006. From our observations as
educators, researchers, practitioners, and peer reviewers in the LCA community, there are
trends that concern us. One of the trends is that practitioners are depending too much on
LCA software features (i.e., pressing buttons) without fully understanding the implications of
simply pressing buttons in existing software tools and reporting the results. In particular,
many practitioners accept calculations without considering the large amount of underlying
uncertainty in the numbers. These observations are especially concerning as LCA (as the
title of the book implies) is increasingly being used to support "big decisions" rather than
simple decisions such as whether to use paper or plastic bags (we actually favor cloth bags).
And thus we have prepared this free e-book to help educate you about LCA. Let us clearly
note that this book should be a supplement to, not a substitute for, acquiring, reading, and
learning the established LCA standards that we do not re-publish here. This book is
intended to be a companion to an organized tour of those standards, not a replacement. In
addition, we have organized chapters in a consistent way so that it can be used for
undergraduate or graduate audiences. For many of the chapters, there are sections at the end
of each chapter that we expect an undergraduate course may skip but that a graduate course
may dive into quite deeply. We use the book in this serial format in our own undergrad and
graduate LCA courses at CMU.

This book (like its predecessor) is about life cycle thinking and acquiring and generating the
information to make sound decisions to improve environmental quality and sustainability.
How can we design products, choose materials and processes, and decide what to do at the
end of a product's life in ways that produce fewer environmental discharges and use less
material and energy?
We also should add that pursuing environmental improvement is only one of many social
objectives. In realistic design situations, costs and social impacts are also important to
consider. This book focuses on environmental impacts, although life cycle costs are
discussed in Chapter 3. Readers are encouraged to also seek out material on life cycle costs
and social impacts. A good starting point is our free, online book on Civil Systems Planning,
Investment and Pricing (Hendrickson and Matthews, 2013).
We expect that readers of this book (and thus students in courses using the book) are
generally knowledgeable about environmental and energy issues, are comfortable with
probability, statistics, and building small quantitative models, and are willing to learn new
methods that will help organize broad thoughts about how products, processes, and systems
can be assessed.
In summary, we consider this a "take two" of our original purpose: to have a unified
resource for use in our own courses. This book's significantly expanded scope benefits from
our collective 40 years of experience in LCA. We overview the ISO LCA Framework, but
spend most of the time and space discussing the needs and practices associated with
assembling, modeling, and analyzing the data that will support assessments.
We thank our colleagues Xiaoju (Julie) Chen, Gwen DiPietro, Rachel Hoesly, and Francis
McMichael from CMU, Vikas Khanna and Melissa Bilec at the University of Pittsburgh, and
Joyce Cooper of the University of Washington for their many thoughts, comments, and
contributions to make this book project a success. A special thanks to Cate Fox-Lent and
Michael M. Whiston who provided substantial proofreading assistance for drafts. We also
thank dozens of students and colleagues for many interactions, questions and inspirations
over the years.
We hope that our experiences, as represented here in this free e-book, will make you a more
informed and educated teacher and practitioner of LCA and allow you to learn it and apply it
right the first time - as you are introduced to the topic.
H. Scott Matthews
Deanna H. Matthews
Chris T. Hendrickson
July 2014

References
Hendrickson, Chris T., Lester B. Lave, and H. Scott Matthews. Environmental life cycle
assessment of goods and services: An input-output approach. RFF Press, 2006.
Hendrickson, Chris T., and H. Scott Matthews. Civil Infrastructure Planning, Investment and
Pricing. http://cspbook.ce.cmu.edu/ (accessed July 2013).

Chapter 1: Life Cycle and Systems Thinking


In this chapter, we introduce the concept of "thinking" about life cycles. Whether or not
you become a practitioner of LCA, this skill of broadly considering the implications of a
product or system is useful. We first provide definitions of life cycles and a short history of
LCA as it has grown and developed over the past decades, and then give some examples
where life cycle thinking (short of full-blown LCA) shows how analyses can lead, or have
already led, to poor decisions. The goal is to learn how to think about problems from a
system-wide perspective.

Learning Objectives for the Chapter


At the end of this chapter, you should be able to:
1. State the concept of a life cycle and its various stages as relevant to products.
2. Illustrate the complexity of life cycles for even simple products.
3. Explain why environmental problems, like physical products, are complex and
require broad thinking and boundaries that include all stages of the life cycle.
4. Describe what kinds of outcomes we might expect if we fail to use life cycle
thinking.

Overview of Life Cycles


We first learn about life cycles at a young age: the butterfly's genesis from egg to larva
(caterpillar) to chrysalis to butterfly; the path of water from precipitation into bodies of
water, then evaporation or transpiration back into the air. Frogs, tomatoes in the garden,
seasons throughout the year: all are life cycles we know or experience in our own life cycle.
Each individual stage along the cycle is given a distinct term to distinguish it from the
others, yet each stage flows seamlessly into the next, often with no clear breaks. The
common theme is a continuous stepwise path, one stage morphing into the next, where
after some time period we are back to the initial starting point. A dictionary definition of
life cycle might be "a series of stages or changes in the life of an organism". Here we
consider this definition for products, physical processes, or systems.
While we often are taught or consider life cycles as existing in the natural world, we can just
as easily apply the concept to manmade products or constructs: aluminum's journey from
beverage can to recycle bin back to beverage can; a cellphone we use for our 2-year contract
period, then hold onto (because it must have some value!) before donating to a good cause
where (we presume) it is used again before... being recycled? being thrown away? The
same common theme: a continuous stepwise path, one stage morphing into the next, where
after some time we are (or may be) back to the initial starting point. It is these kinds of life
cycles for manmade products and systems that are the focus of this book.
As the domain of sustainable management has taken root, increasingly stakeholders describe
the need for decision making that considers the "life cycle". But what does that mean?
Where does that desire and intent come from?
The entire life cycle for a manmade product goes from obtaining everything needed to make
the product, through manufacturing it, using it, and then deciding what to do with it once it
is no longer being used. Returning to the natural life cycles described above, this means
going from the birth of the product to its death. As such, this kind of view is often called a
"cradle to grave" view of a product, where the cradle represents the birthplace of the
product and the grave represents what happens to it when we are done with it, often to be
thrown into a landfill. Some life cycles may focus on the process of making the product (up
to the point of leaving the factory) and have a "cradle to gate" view, where the word gate
refers to the factory gate. If we have a fairly progressive view, we might think about
alternatives to a "grave". That might mean recycling of some sort, or taking back the
product and using it again. Building on this alternative terminology, proponents have also
referred to the complete recycling of products as going from "cradle to cradle".
Consider some initial product life cycle views:

• A piece of fruit is grown on a farm, which uses water and perhaps various fertilizers
and equipment to bring it to market. There it is sold to either a food service
business or an individual consumer. While much of it is hopefully eaten, some of it
will not be edible and the remainder will be disposed of as food waste, either as
compost or in the trash.

• A tuxedo is sewn together at a factory and then distributed and sold. It is purchased
either for personal use (perhaps only being used once or twice a year), or for the
purposes of renting it out for profit to people who need it only once, and maybe
cannot justify the cost of buying one. The rental tuxedo will be rented several times
a month, and after each rental it is cleaned and prepared for the next rental.
Eventually the tuxedo will either be too worn to use, or the owner will grow out of
it. At that point it is likely donated or thrown away.

• A car is put together from components at a factory. It is then delivered to a dealer,
purchased by a consumer, and driven for a number of years. At some point the
owner decides to get rid of the car, perhaps selling it to another driver who uses it
for several years. Eventually its owner finds no sufficient value in it, and it will
likely be shredded into small pieces and its useful metals reclaimed.

• A computer is assembled from components manufactured across the world (all of
which are shipped to an assembly line). It is bought and plugged in by the owner,
consuming electricity for several years before becoming obsolete. At the end of its
useful life it might be sold for a fraction of its purchase price, donated to a party
that still finds value in it, or stored under a desk for several years. Like the car
example above, though, eventually the owner will find no sufficient value in it and
want to get rid of it.
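The "cradle to gate" versus "cradle to grave" boundaries discussed above can be sketched as a toy calculation: represent a product's life cycle as named stages, attach an emissions value to each, and sum over only the stages inside the chosen boundary. This is purely illustrative; the stage names and the per-stage numbers below are hypothetical values invented for the sketch, not data from any LCA database.

```python
# Hypothetical kg CO2-equivalent per unit of product, by life cycle stage.
# These numbers are made up for illustration only.
stages = {
    "raw material extraction": 2.0,   # the "cradle"
    "manufacturing": 5.0,             # up to the factory "gate"
    "distribution": 0.5,
    "use": 10.0,
    "end of life": 1.0,               # the "grave"
}

def total(boundary):
    """Sum emissions over only the stages included in the chosen boundary."""
    return sum(stages[s] for s in boundary)

cradle_to_gate = ["raw material extraction", "manufacturing"]
cradle_to_grave = list(stages)  # every stage, cradle through grave

print(total(cradle_to_gate))   # 7.0
print(total(cradle_to_grave))  # 18.5
```

Even this trivial sketch shows why the boundary choice matters: here the use stage dominates, so a cradle-to-gate study would report less than half of the cradle-to-grave total.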

We can already start to think about some implications of these basic life cycles. Using fuels
and electricity generates pollution. Applying fertilizers results in runoff and stream
contamination. Washing a tuxedo releases chemicals into wastewater systems that need to
be removed. Making semiconductor chips consumes large amounts of water and uses
hazardous chemicals. Finally, putting items in landfills minimizes our opportunity to
continue extracting usefulness from those value-added items, takes up land that we cannot
then use for other purposes, and, if the items contain hazardous components, leaks may
eventually contaminate the environment.
This is a modern view of a product. We have not always been so broad and comprehensive
in thinking about such things. In the next few sections we briefly discuss the related
history of this kind of thinking, and give some sobering examples of decisions and
products that were made (or promoted) without fully considering the life cycle.

A Brief History of Engineering and The Environment


Before we further motivate life cycle thinking, let's briefly talk about the history of industrial
production, environmental engineering, science, and management as it applies to managing
the impacts of products. While engineers and others have been creating production or
manufacturing processes for products for centuries, nearly all of the production systems we
have created in that time are "linear", i.e., we need to keep feeding the system with input at
one end to create output at the other. We design such linear processes independently of
whether we will have long-lasting supplies of the needed inputs, and certainly have not made
contingencies for how to change the process should we begin to run out of those resources.
We also have thought quite linearly in terms of how well the natural environment could deal
with any potential wastes from the production systems we have designed.
It is worth realizing that environmental engineering (i.e., the integration of science to
improve our natural environment) is a fairly young discipline. While there is evidence of
ancient civilizations making interesting and innovative solutions to dealing with wastes, the

establishment of environmental engineering as a distinct profession was not formalized until
around 1900. Initially, what we now call environmental engineering grew out of the need to
better manage urban wastes, and thus most of the activity was originally referred to as
"sanitary engineering". Such activities involved diversion of waste streams to distant sinks to
avoid local health problems, such as sewer systems (Tarr 1996). Eventually, end of pipe
treatment emerged. By end of pipe, we mean that the engineering problem was focused on
what to do with the waste of a system (e.g., a factory or a social waste collection system)
after it has already been produced. Releases of wastes and undesirable outputs to the
environment are also called emissions. Another historical way of dealing with
environmental problems has been through remediation. Remediation occurs after the
pollution has already occurred, and may involve cleaning up a toxic waste dump, dredging a
river to remove long-buried contaminants that were dumped there via an effluent pipe, or
converting contaminated former industrial sites (brownfields) into new developments. The
remediation activities may occur soon after or even decades after the initial pollution
occurred.
An alternative paradigm was promoted in the 1980s, referred to as pollution prevention
(P2, or cleaner production). It is probably obvious that the whole point of this alternative
paradigm was to make stakeholders realize that it is costly and late in the process to wait
until the end of the pipe to manage wastes. If we were to think about the inevitable waste
earlier in the process chain, we could create a system that produces less (or ideally, no) waste.
A newer paradigm is to promote sustainability. Achieving sustainability refers to the
broader balancing of social, economic, and environmental aspects within the planet's ability
to provide. The United Nations' Brundtland Commission (1987) suggested "sustainable
development is development that meets the needs of the present without compromising the
ability of future generations to meet their own needs".
Almost all people in developed nations share the goals of improving environmental quality
and making sure that future generations have sufficient resources. Unfortunately, consumers,
business leaders, and government officials do not have the information required to make
informed decisions. We need to develop tools that tell these decision makers the life cycle
implications of their choices in selecting materials, products, or energy sources. These
decisions are complicated: they depend on the environmental and sustainability aspects of all
products and services that contribute to making, operating, and disposing of those materials,
products, or energy sources. They also depend on being able to think non-linearly about our
production systems and envision the possibilities of resource scarcity or a lack of resilience
in the natural environment. Accomplishing these goals requires life cycle thinking, or
thinking about environmental problems from a systems perspective.
Nowadays all of these activities are part of what we refer to as environmental engineering.
Despite trends towards pollution prevention and sustainability, basic challenges remain to
design better end of pipe systems even in the developed world where pollution prevention is
well known but is deemed too expensive for particular processes (or where all cost-effective P2 solutions have already been implemented). But the general goal of the field is to
reduce pollution in our natural environment, and a primary objective is to encourage broader
thinking and problem solving that goes back before the end of the pipe and prevents
pollution generation. Practically, we will not achieve a pollution-free world in our lifetimes.
But we can help get there by thinking about environmental problems in a life cycle context,
and ideally identify solutions that focus on stages earlier in the life cycle than the point where
the waste pipe interfaces with our natural environment.

Life Cycle Thinking


Now that we have introduced the idea of a life cycle, and motivated why thinking about
products as systems or life cycles is important, we can dive deeper into the ways this kind of
thinking is defined and how it has evolved. Much of this development has come in the
engineering and science communities, and thus the views and representations of life cycles
are fairly technical. That said, given the typically focused and detailed views of scientists and
engineers, you will see that the way these systems are studied is quite broad.
A conceptual view of the stages of such life cycles is in Figure 1-1. Beginning with the
linear path along the top, we first extract raw materials from the ground, such as ores or
petroleum. Second, these are processed, transformed or combined to make basic material or
substance building blocks, such as metals, plastics or fuels. These materials are combined to
manufacture a product such as an automobile. These final products are then shipped (though this step is not shown in the figure) by some mode of transport to warehouses and/or stores to be purchased and
used by other manufacturers or consumers. During a product's use phase it may be used to
make life easier, provide services, or make other products, and this stage may require use of
additional energy or other resources (e.g., water). When the product is no longer needed, it
enters its "end of life" which means managing its disposition, possibly treating it as waste.


Figure 1-1: Overview of a Physical Product Life Cycle (OTA, 1992)

As Figure 1-1 also shows, at the end of life phase there are alternatives to treating a product
as waste. The common path (linear path across the top) is for items to be thrown away, a
process that involves collection in trucks and putting the item as waste in a landfill.
However, the bottom row of lines and arrows connect the end of life phase back to previous
stages of the typical life cycle through alternative disposition pathways. Over the course of
a life cycle, products, energy and materials may change form but will not disappear. Reuse
takes the product as is (or with very minor effort) and returns it to the use phase, such as a rented tuxedo. Remanufacturing returns the product to the manufacturing stage, which may
mean partially disassembling the product but then re-assembling it into a new final product
to be delivered, such as a power tool or a photocopier. Finally, recycling involves taking a
product back to its raw materials, which can then be processed into any of a number of
other products, such as aluminum beverage cans or cardboard boxes. This bottom row also
reminds us that despite the colloquial use of the word "recycling" in society, recycling has a
very distinct definition, as noted above. Other disposition options have their own terms.
An Internet search would turn up hundreds more pictures of life cycles, but for our
introductory purposes these will suffice. Once we discuss the actual ISO LCA Framework
in Chapter 4 we will see the standard figures and some additional useful ones.
If you are from an engineering background, you might be asking where the other traditional
product stages fit in to the product life cycle described above. In engineering, the typical
product life cycle starts with initiation of an idea, as well as research and design iterations
that lead to multiple prototypes, and eventually, mass production. One could classify all
such activities as research and development (or R&D) that would come to the left of all
activities (or perhaps in parallel with some activities such as material extraction) in Figure 1-1. We could imagine a reverse flow arrow for "Re-design" going along the bottom of Figure
1-1 to represent product failures or iterations. While not represented in the figure above, all
of these R&D-like activities are relevant stages in the life cycle. As we will see, though, when
analyzing life cycles for environmental impact, these stages are typically ignored.


Simple and Complex Life Cycles


Before we go further in our discussion of life cycles, it is useful to pause and think about all
of the components of something with a very simple life cycle, like a paper clip. Get a blank
sheet of paper, and write "paper clip" in a corner of the sheet. If we think very simply about
its life cycle (e.g., using Figure 1-1 as a guide), we can work backwards from the paper clip
we are used to. To get its shape, it is coiled with machinery. We can write "coiling" and
draw an arrow from the words "coiling" to "paper clip". Before coiling, it is just a straight piece of steel wire. Steel is made from iron and carbon. We can write "steel" and draw an
arrow to "coiling". Iron ore and the carbon source both need to be extracted from the
ground. All of these components and pieces are shipped between factories by truck, rail, or
other modes of transportation. Any or all of these stages of the life cycle could be added to
the diagram.
Putting all these materials and processes into a diagram is not so simple. Even that
description above for a paper clip was very terse. If we think a little more, we realize that all
of those stages have life cycles of their own. For example, the machinery that coils the steel
wire into a paper clip must be manufactured (its use phase is making the paper clip!). The
metal and other parts needed to make the machine also must be processed and extracted.
The same goes for all of the transportation vehicles and the infrastructure they travel on and
the factories to make iron and steel, etc. Figure 1-2 shows what the diagram might look like
at this point.

Figure 1-2: Exploded View Diagram of Production of Paper Clip

This chain goes back, almost infinitely, and the sheet of paper is quickly filled with words
and arrows. Even a product as simple as a paper clip has a complex life cycle. Thus a
product that we consider to be "complex" (for example a car) has a ridiculously complex life
cycle! Now that we can appreciate the complexity of all life cycles, you can begin to
understand why our thought processes and models need to be sufficiently complex to
incorporate them.
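The backward-chaining exercise above can be sketched in a few lines of code. The sketch below is a toy model: the activity names and upstream links are illustrative assumptions, not real inventory data, but it shows how quickly the set of relevant processes grows as we follow the chain back.

```python
# Toy model of the paper clip diagram: each activity maps to the upstream
# activities that directly feed it. Names and links are illustrative only.
upstream = {
    "paper clip": ["coiling"],
    "coiling": ["steel wire", "coiling machinery"],
    "steel wire": ["steel", "transport"],
    "steel": ["iron ore", "carbon source", "transport"],
    "coiling machinery": ["steel", "transport"],
    "transport": ["fuel", "vehicles"],
}

def expand(product, levels):
    """Collect every activity reachable within `levels` steps upstream."""
    frontier, seen = {product}, {product}
    for _ in range(levels):
        frontier = {up for act in frontier for up in upstream.get(act, [])}
        seen |= frontier
    return seen

print(sorted(expand("paper clip", 1)))  # ['coiling', 'paper clip']
print(len(expand("paper clip", 4)))     # 10 distinct activities already
```

Even this tiny, truncated network reaches ten distinct activities within four steps; a real supply chain fans out far faster, which is exactly why the sheet of paper fills up.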
Without going into all of the required detail, but to impress upon you the complexity of
LCA for more complex products, consider that a complete LCA of an automobile would
require careful energy and materials balances for all the stages of the life cycle:
1. the facilities extracting the ores, coal, and other energy sources;
2. the vehicles, ships, pipelines, and other infrastructure that transport the raw
materials, processed materials, and subcomponents along the supply chain to
manufacture the consumer product, and that transport the products to the
consumer: iron ore ships, trucks carrying steel, engines going to an automobile
assembly plant, trucks carrying the cars to dealers, trucks transporting gasoline,
lubricating oil, and tires to service stations;
3. the factories that make each of the components that go into a car, including
replacement parts, and the car itself;
4. the refineries and electricity generation facilities that provide energy for making
and using the car; and
5. the factories that handle the vehicle at the end of its life: battery recycling,
shredding, landfills for shredder waste.
Each of these tasks requires energy and materials. Reducing these requirements saves energy and reduces environmental discharges along the entire supply chain. Often a new
material requires more energy to produce, but promises energy savings or easier recycling
later. Evaluating whether a new material helps improve environmental quality and
sustainability requires an examination of the entire life cycles of the alternatives. To make
informed decisions, consumers, companies, and government agencies must know the
implications of their choices for environmental quality and sustainability. Having good
intentions is not sufficient when a seemingly attractive choice, such as a battery-powered car,
can wind up harming what the manufacturer and regulator were trying to protect. This book
provides some of the tools that allow manufacturers and consumers to make the right
choices.


Systems Thinking in the Life Cycle


All of this discussion of increasingly larger scales of problems requires us to be more explicit
in discussing an issue of critical importance in LCA studies that relates to system boundaries.
Of course a system is just a collection or set of interconnected parts, and the boundary is
the subset of the overall system that we care to focus on. Our chosen system boundary
helps to shape and define what the appropriate parts are that we should study. Above we
suggested that the entire life cycle boundary goes from cradle to grave or cradle to cradle.
Either choice means that we will have a very large system boundary, and maintaining that
boundary (as we will see later) will require a significant amount of effort to complete a study.
Due to this effort requirement, or because of different interests, we may instead choose a
smaller system boundary. If we are a manufacturer, perhaps our focus is only the cradle to
gate impacts. If so, our boundary would include only the stages up to manufacturing. It is
also possible that the boundary of our interest lies only in our factory, which further
constrains the system boundary.
Life cycle thinking is not restricted to manufactured products. Services, systems, and even
entire urban areas can be better understood via life cycle thinking. Services are particularly
interesting because such activities (e.g., consulting or banking) are typically considered to have very low impacts since there is no physical good being created, but in reality the
same types of effects are uncovered across the life cycle through the service sector's
dependence on fuels and electricity. Entire systems (e.g., a roadway network or the electric
power grid) can be considered from building all of the equipment and components to thinking about their design and disposition. At an even higher level, the life cycle of cities
includes the life cycles of all of the resources consumed by residents of the city, not just the
activities they do within the city's borders.
Finally, life cycle thinking is often useful when making comparisons, such as paper vs. plastic
bags or cups, cloth vs. disposable diapers, or retail shopping vs. e-commerce. The relevant
issues to deal with in such comparisons would be whether one option is more useful than
another, whether they are equal, whether they have similar production processes, etc. In fact, as we will see, some of the great classic comparisons that have been done in the life cycle analysis domain were very simple comparisons.

A History of Life Cycle Thinking and Life Cycle Assessment


We will discuss the formal methods that apply life cycle thinking to real questions in future
chapters (called life cycle analysis or assessment). In a life cycle analysis or assessment, the total
and comparative impacts of the life cycle stages are considered, with or without
quantification of those impacts. But to start, let us talk about some of the original studies
that inspire the field of life cycle thinking (before we even knew there was a field for such
things).
Most people attribute the first life cycle assessment (LCA) to Coca-Cola in 1969. At the
time, Coca-Cola sold its product to consumers in individual glass bottles. Coca-Cola was
trying to determine whether to use glass or plastic containers to deliver their beverage
product, and wanted to formally support a decision given the tradeoffs between the two
materials. Glass is a natural material, but Coca-Cola suggested switching to plastic bottles.
They reasoned that this switch would be desirable for the ability to produce plastics in their
own facilities, the lower weight of plastic to reduce shipping costs, and the recyclability of
plastic versus glass at the time. No specific form of this study has been publicly released but
we can envision the considerations that would have been made.
More recently, in the early 1990s, there were various groups of researchers debating the
question of "Paper or plastic?" This simple question, which you might get at the grocery
store checkout counter or coffee shop, turned into relatively complex exchanges of ideas and
results. We may think that we know that the correct answer is "paper," because it is a
"natural" product rather than some chemical-based material like plastic. We can feel self-satisfied, even if the bag gets wet and tears, spilling our purchases on the ground, because we made the natural and environmentally friendly decision. But even these simple questions
can, and should, be answered by data and analysis, rather than just a feeling that the natural
product is better. The ensuing analysis ignited a major controversy over how to decide which
product is better for the environment, beginning with an analysis of paper versus polystyrene
cups (Hocking 1991). Hocking's initial study was focused on energy use and estimated that
one glass cup used and rewashed 15 times required the same amount of energy as
manufacturing 15 paper cups. He also estimated break-even use values for ceramic and
plastic cups. The response generated many criticisms and spawned many follow-up studies
(too many to list here). In the end, though, what was clear at the time of these studies was
that there was no single agreed upon answer to the simple question of "paper vs. plastic".
Even now, any study using the best data and methods available today will still conclude with
an answer along the line of "it depends". This is a sobering outcome for a discipline (life
cycle thinking) trying to gain traction in the scientific community.
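The break-even logic behind cup comparisons like Hocking's can be written down directly. The function and all the energy values below are hypothetical placeholders (not Hocking's data), but the structure of the calculation is the same: a reusable cup pays off once its extra manufacturing energy is spread over enough uses.

```python
import math

def break_even_uses(e_make_reusable, e_per_wash, e_make_disposable):
    """Smallest number of uses n at which a washed reusable cup beats
    disposables: e_make_reusable + n * e_per_wash <= n * e_make_disposable.
    Returns infinity if a wash costs as much energy as a new disposable cup,
    in which case the reusable cup never breaks even.
    """
    if e_per_wash >= e_make_disposable:
        return math.inf
    return math.ceil(e_make_reusable / (e_make_disposable - e_per_wash))

# Hypothetical energies in MJ (placeholders, not measured data):
# 6.0 MJ to make a ceramic cup, 0.2 MJ per wash, 0.55 MJ per paper cup.
print(break_even_uses(6.0, 0.2, 0.55))  # 18 uses under these assumptions
```

Note how sensitive the answer is to the washing assumption: if a wash costs as much energy as a fresh disposable cup, the reusable option never breaks even at all.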
Beyond these studies, other early analyses surprised people, since they found that paper bags,
paper cups (or even ceramic cups), and cloth diapers were not obviously superior to their
maligned alternatives (i.e., plastic bags, styrofoam cups and disposable diapers) in terms of
using less energy and materials, producing less waste, or even disposal at the end of life.

Paper for bags requires cutting trees and transporting them to a paper mill, both of
which use a good deal of energy. Paper-making results in air emissions and water
discharges of chlorine and biological waste. After use, the bag goes to a landfill
where it gradually decays, releasing methane.


A paper hot-drink cup generally has a plastic coating to keep the hot liquid from
dissolving the cup. The plastic coating introduces the same problems as the foam
plastic cup. The plastic is made from petroleum with relatively small environmental
discharges. Perhaps most surprising, washing a single ceramic cup by hand uses a
good deal of hot water and soap, resulting in discharges of waste water that has to be
treated and the expenditure of a substantial amount of fuel to heat the water,
although washing the cup in a fully loaded dishwasher uses less soap and hot water
per cup.

The amount of hot water and electricity required to wash and dry cloth diapers is
substantial. If water is scarce or sewage is not treated, washing cloth diapers is likely
to cause more pollution than depositing disposable diapers in a landfill. The best
option depends on the issue of water availability (washing uses much more water)
and heating the water.

In short, it is not obvious which product is more environmentally benign and more
sustainable. Such results are counterintuitive, but they reinforce the importance of life cycle
thinking.
The analyses found that the environmental implications of choosing paper versus plastic
were more similar than people initially thought. Which is better depends on how bad one
thinks water pollution is compared to air pollution compared to using a nonrenewable
resource. Perhaps most revealing was the contrast between plants and processes to make
paper versus plastic. The best plant-process for making paper cups was much better than the
worst plant-process; the same was true for plastic cups. Similarly, the way in which the cups
were disposed of made a great deal of difference. Perhaps the most important lesson for
consumers was not whether to choose one material over another, but rather to insist that the
material chosen be made in an environmentally friendly plant.
The original analyses showed that myriad processes are used to produce a material or
product, and so the analyst has to specify the materials, design, and processes in great detail.
This led to another problem: in a dynamic economy, materials, designs, and processes are
continually changing in response to factor prices, innovation, regulations, and consumer
preferences. For example, in a life cycle assessment of a U.S.-manufactured automobile done
in the mid-1990s, the design and materials had changed significantly by the time the analysis
was completed years later. Still another problem is that performing a careful material and
energy balance for a process is time-consuming and expensive. The number of processes
that are practical to analyze is limited. Indeed, the rapid change in designs, materials, and
processes together with the expense of analyzing each one means that it is impractical and
inadvisable to attempt to characterize a product in great detail. The various dependencies,
rationales, and assumptions used all make a great deal of difference in the studies mentioned
above (for which we have provided no real detail yet). LCA has a formal and structured way
of doing the analysis, which we will begin to discuss in Chapter 4.

Decisions Made Without Life Cycle Thinking


Hopefully you are already convinced that life cycle thinking is the appropriate way of
thinking about problems. But this understanding is certainly not universal, and there are
various examples of not taking a life cycle view that led to poor (albeit well intentioned)
decisions being made.
A useful example is the consideration of electric vehicles in the early 1990s. At the time, California and other states were interested in encouraging the adoption of vehicles with no tailpipe emissions in an effort to reduce emissions in Southern California and to gain the associated air quality benefits. Policymakers at the time had a specific term for such vehicles: "zero emissions vehicles" (ZEVs). The thought was that getting a small but significant chunk of the passenger vehicle fleet to have zero emissions could yield big benefits. Regulations at the time sought to get 2% of new vehicles sold to be ZEVs by 1998. In parallel, manufacturers such as General Motors had been designing and developing the EV-1 and similar cars to meet the mandated demand for the vehicles (see Figure 1-3).

Figure 1-3: General Motors' EV-1 (Source: motorstown.com)

So why did we refer to this case as one about life cycles? The electric vehicles to be produced at the time were much different than
the electric vehicles of today that include hybrids and plug-in hybrids. These initial cars were
rechargeable, but the batteries were lead-acid batteries: basically large versions of the starting and ignition batteries we use in all cars (by large, we mean the batteries were 1,100
pounds!). Let us go back to Figure 1-1 and use life cycle thinking to briefly consider such a
system. How would the cars be recharged? They would run on electricity, which even in a
progressive state like California leads to various emissions of air pollutants. Similarly, the
batteries would have large masses of lead that would need to be processed efficiently. Lead
must be extracted, smelted, and processed before it can be used in batteries, and then old lead-acid batteries are often collected and recycled. None of these processes are 100% efficient, despite industry claims at the time that they were. Would these vehicles be produced in factories with no pollution? It is hard to believe that these vehicles would really have "zero emissions"; but then again, zero is a very small number! There would be
increased emissions in the life cycle of these electric vehicles; the question was whether those increases would fully offset the potential gains of reduced tailpipe emissions.
Aside from the perils of considering anything as having zero emissions, various parties began
to question whether these vehicles would in fact have any positive improvement on air
quality in California, and further, given the need for more electricity and lead, whether one
could even consider them as beneficial. In a study published by Lave et al. (to whom this
book is dedicated) in Science in 1995, the authors built a simple but effective model of the
life cycle of these vehicles that estimated that generating the electricity to charge the batteries
would result in greater emissions of nitrogen oxide pollution than gasoline-powered cars.
Eventually, California backed off of its mandate for ZEVs, partly because of such studies,
and policymakers learned important lessons about considering whole life cycles as well as
casual use of the number zero. The policymakers had been so focused on the problem of
reducing tailpipe emissions that they had overlooked the back-end impacts from lead and
increased electricity generation.
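The shape of such a model is easy to sketch. The figures below are made-up placeholders, not the values used by Lave et al.; the point is only that electricity for charging carries its own emissions, so an "emissions per mile" comparison can be made on both sides of the ledger.

```python
def nox_per_mile_ev(grid_nox_g_per_kwh, kwh_per_mile, charge_efficiency):
    """Grams of NOx at the power plant attributable to one mile of electric
    driving, inflating for energy lost while charging the battery."""
    return grid_nox_g_per_kwh * kwh_per_mile / charge_efficiency

# Placeholder inputs: a fossil-heavy grid at 1.5 g NOx/kWh, a heavy
# lead-acid EV needing 0.5 kWh/mile, 80% charging efficiency, and a
# gasoline car assumed to emit roughly 0.6 g NOx/mile at the tailpipe.
ev_nox = nox_per_mile_ev(1.5, 0.5, 0.8)
gasoline_nox = 0.6
print(f"EV: {ev_nox:.2f} g/mile vs gasoline: {gasoline_nox:.2f} g/mile")
```

With these assumed inputs the "zero emissions" vehicle is responsible for more NOx per mile than the gasoline car, which is the qualitative result the Science study reported.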
It is fair to say this was one of the first instances of life cycle thinking being used to change a
"big decision". The lesson again is that life cycle thinking is needed to make informed
decisions about environmental impacts and sustainability. Being prepared to use life cycle
thinking and analysis to support big decisions is the focus of this book.
A more recent example of life cycle thinking in big decisions is the case of compact
fluorescent lamps (CFLs), which were heavily promoted as energy efficient alternatives to
incandescent bulbs. While CFLs use significantly less electricity in providing the same
amount of light (and thus cost less in the use phase) as traditional bulbs, their disposal
represented a problem due to the presence of a small amount of mercury in the lamps (about
4 mg per bulb). This amount of mercury is not generally a problem for normal, intact use of
the lamps (and is less mercury than would be emitted from electric power plants to power
incandescent bulbs). However, broken CFLs could pose a hazard to users due to mercury vapor, and the DOE Energy Star guide to CFLs has somewhat frightening recommendations about evacuating rooms, using sealed containers, and staying out of the room for several hours. None of this information was good news for consumers thinking about incandescent vs. CFL lighting choices.
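The mercury comparison can also be roughed out with simple arithmetic. Every number below except the 4 mg bulb content is an assumed placeholder (the grid emission factor, wattages, and lifetime), so the result only illustrates how the comparison works, not a definitive answer.

```python
HG_IN_CFL_MG = 4.0             # mercury sealed inside one CFL (from the text)
HG_GRID_MG_PER_KWH = 0.012     # assumed emission factor for a coal-heavy grid

def mercury_from_use_mg(watts, hours):
    """Mercury emitted by power plants to run a bulb of `watts` for `hours`."""
    return watts / 1000 * hours * HG_GRID_MG_PER_KWH

# Assumed 8,000-hour comparison: 60 W incandescent vs 14 W CFL.
incandescent_mg = mercury_from_use_mg(60, 8000)
cfl_mg = mercury_from_use_mg(14, 8000) + HG_IN_CFL_MG  # include bulb content
print(f"incandescent: {incandescent_mg:.2f} mg, CFL: {cfl_mg:.2f} mg")
```

Under these assumptions the CFL comes out slightly ahead even when the mercury inside the bulb is charged against it, but a cleaner grid or a shorter bulb lifetime could flip the answer.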
The examples and discussion above hopefully reveal that you can think about life cycles
qualitatively or quantitatively, meaning with or without numbers (more on that in Chapter 2).

Inputs and Outputs of Interest in Life Cycle Models


Above we have suggested that there is a need to think about products, services, and other
processes as systems by considering the life cycle. We have also mentioned some popular
examples of the kinds of life cycle thinking studies that have been done. It is also worth
discussing the types of effects across a life cycle that we might be interested in tracking or
accounting for.
By 'effects' we mean what happens as a result of a product being manufactured, or a service
being provided, etc. There are likely economic costs incurred, for example by paying for the
parts and labor needed for assembly. There are interesting and relevant issues to consider
when focused purely on economic factors, and Chapter 3 discusses this type of thinking.
In many cases, the 'effects' of producing or using a product mean consuming energy in some
way. Likewise, there may be emissions of pollution to the air, water, or land. There are
many such effects that one might be interested in studying, and more importantly, in being
able to detect and measure. Thus we can already create a list of potential effects that one
might be concerned about in a life cycle study. In terms of effects associated with inputs to
life cycle systems, we could be concerned about:

- Use of energy inputs, including electricity, as well as solid, liquid, and gaseous fuels.

- Use of resources as inputs, such as ores, fertilizers, and water.

Note that our concern with energy and resource use as inputs may be in terms of the
quantities of resources used and/or the extent to which the use of these resources depletes
the existing stock of that resource (i.e., are we consuming a significant share of the available
resource?). We may also be concerned with whether the energy or resources being
consumed are renewable or non-renewable.
In terms of effects associated with outputs of life cycle systems, we could be concerned
about:

- The product created as a result of an activity, such as electricity from a power plant.

- Emissions of air pollution, for example conventional air emissions such as sulfur dioxide, nitrogen oxides, and carbon monoxide.

- Emissions of greenhouse gases, such as carbon dioxide, methane, and nitrous oxide.

- Emissions to fresh or seawater, including solid waste, chemical discharges, toxics, and warming.

- Other emissions of hazardous or toxic wastes to air, land, water, or recycling facilities.


In short, there is no shortage of energy, environmental, and other effects that we may care
about and which may be estimated as part of a study. As we will see later in the book, we
may have interest in many effects but only be able to get quality data for a handful of them.
We can choose to include any effect for which we think we can get data over as many of the
parts of the life cycle as possible. One could envision annotating the paper clip life cycle
diagram created above with colored bars representing activities in the life cycle we anticipate
have significant inputs or outputs associated with them. For example, activities that we
expect to consume significant quantities of water could have a blue box drawn around them or a blue square icon placed next to them. Activities we expect to release significant
quantities of air pollutants could have black boxes or icons. Activities we expect to create a
large amount of solid waste could be annotated with brown. While simplistic (and not
informed by any data) such diagrams can be useful in terms of helping us to look broadly at
our life cycle of interest and to see where in the life cycle we anticipate the problems to
occur.
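Such an annotation can also be kept as a simple data structure rather than colored boxes. The activities and effect tags below are guesses for illustration, not data.

```python
# Hypothetical annotation of the paper clip life cycle: each activity is
# tagged with the effects we *expect* to matter there (guesses, not data).
expected_effects = {
    "iron ore extraction": {"water use", "solid waste"},
    "steel making": {"air pollution", "water use"},
    "coiling": {"air pollution"},
    "transport": {"air pollution"},
    "end of life": {"solid waste"},
}

# Pull out every activity flagged for a given concern.
air_flagged = [a for a, fx in expected_effects.items() if "air pollution" in fx]
print(air_flagged)  # ['steel making', 'coiling', 'transport']
```

Querying the annotation this way is the list equivalent of scanning the diagram for black boxes: it tells us where to look first, before any data are collected.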
Aside from simply keeping track of (accounting for) all of these effects across the life cycle, a
typical reason for using life cycle thinking is to not just measure but prioritize. Another way
of referring to this activity might be hot spot analysis, where we look at all of the effects
and decide which of the life cycle stages contributes most to the total (where "hot spots"
appear). Our colored box or icon annotation above could be viewed as a crude hot spot
analysis, because it is not informed by actual data yet.
For most cars, the greatest energy use happens during the use phase. Cars in the United States are typically driven more than 120,000 miles over their useful lives. Even fairly fuel-efficient cars will use more energy there than at any other stage of their life cycle. This is a seemingly obvious example, but it illustrates the reason we use life cycle thinking: as we have shown above, our intuition is not sufficient for assessing where effects occur, and only by actually collecting data and estimating the effects can we effectively identify hot spots.
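As a minimal sketch of such a hot spot calculation, consider the following, where the manufacturing and end-of-life figures are assumed placeholders and only the use-phase fuel energy is derived from the stated mileage and fuel economy assumptions.

```python
MJ_PER_GALLON = 120        # approximate energy content of a gallon of gasoline
miles, mpg = 120_000, 28   # assumed lifetime mileage and fuel economy

# Stage energies in GJ; manufacturing and end of life are assumed placeholders.
stage_energy_gj = {
    "materials + manufacturing": 100,
    "use (fuel burned)": miles / mpg * MJ_PER_GALLON / 1000,
    "end of life": 5,
}

total = sum(stage_energy_gj.values())
hot_spot = max(stage_energy_gj, key=stage_energy_gj.get)
for stage, gj in stage_energy_gj.items():
    print(f"{stage}: {gj:.0f} GJ ({gj / total:.0%})")
print("hot spot:", hot_spot)  # the use phase dominates with these numbers
```

The same three-line pattern, sum the stages and take the maximum, scales to any number of stages or effects once real inventory data replace the placeholders.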
This use of life cycle thinking to support hot spot analysis helps us identify where we need to
focus our attention and efforts to improve our engineering designs. If done in advance, it
can have a significant benefit. If done too late, it can lead to designs such as large lead-acid
battery vehicles.
Likewise if we create a plan to generate numerical values representing several of these life
cycle effects, we will eventually have to make decisions about how to compare them or
prioritize them. Such a decision process will be complicated by needing to compare releases
of the same type of pollution across various media (air, water, or land) and also by needing
to compare releases of one pollutant against another, comparing pollution and energy, etc.
While complicated, the process of making all of these judgments and choices will yield a study that we can use to support our decision process. Chapter 12 overviews the types
of methods used to support such assessments.


From Inputs and Outputs to Impacts


It is appropriate early on in this textbook to briefly discuss the kinds of uses, emissions, and
releases discussed above in connection with the types of environmental or resource use
problems they create. The new concept in this section is the idea of an environmental
impact. Unlike the underlying inputs and outputs of interest such as resource use or
emissions, an environmental impact exists when the underlying flows cause an
environmental problem. One can think of the old phrase "if a tree falls in the forest but no
one is there to hear it, does it make a sound?" This is similar to the connection between
environmental releases and environmental impacts. It is possible that a release of a specific
type and quantity of pollutant into the environment could have little or no impact. But if the
release is of sufficient quantity, or occurs in a location near flora or fauna (especially
humans), it is likely that there will be measurable environmental impact. Generally, our concerns are motivated by the impacts but are indicated by the uses or releases, because most of us cannot directly estimate the impacts. In other words, we often look at the quantities of inputs and outputs as a proxy for the impacts themselves, which need to be estimated separately.
This brief section is not a substitute for a more rigorous introduction to such environmental
management issues, and should be supplemented with external work or reading if this is not
an area of your expertise. One could easily spend a whole semester learning about these
underlying connections before attempting to become an expert in life cycle thinking.

Example Indicators for Impacts that Inspire Life Cycle Thinking


In this section, we present introductory descriptions of several prominent environmental
impacts considered in LCA studies as exemplars and discuss how various indicators can
guide us to the actual environmental problems created. If interested, more detailed summaries are available elsewhere from agencies such as the US Environmental Protection Agency, the US Geological Survey, and the Department of Energy, and we will circle back to discussing them in Chapter 12.
Impact: Fossil Fuel Depletion

Use of energy sources like fossil fuels is generally an easy-to-measure activity because energy costs us to acquire, and billing records and energy meters are available to give specific quantities. Beyond the basic issue of using energy, much of our energy use comes from unsustainable sources such as fossil fuels that are finite in supply. We might care simply about the finiteness of the energy resource as a reason to track energy use across the life cycle. As mentioned above, we might seek to classify our use of renewable and non-renewable energy separately. We might also care about whether a life cycle system at scale could consume significant amounts of the available resources. If so, the use of energy by our life cycle could be quite significant. In the context of our descriptions above, some quantity of fossil energy use (e.g., in BTU or MJ) may be an indicator for the impact of fossil fuel depletion. Of course, all of the energy extraction, conversion, and combustion processes may lead to other types of environmental impacts (like those detailed below).
Impact: Global Warming / Climate Change

Most people know that there is considerable evidence suggesting that human-caused emissions of greenhouse gases (GHGs) lead to global warming or climate change. The majority of such GHG emissions come from burning fossil fuels. While we might already be concerned with the use of energy (above), caring more specifically about how our choices of energy sources may affect climate change is an additional impact to consider. Carbon dioxide (CO2) is the most prominent greenhouse gas, but other GHGs emitted from human activities, such as methane (CH4) and nitrous oxide (N2O), also lead to warming of the atmosphere. These latter GHGs have far greater warming effects per unit than carbon dioxide and are emitted from systems such as oil and gas energy infrastructure and agricultural processes. GHGs are inherently global pollutants: increasing concentrations of them in the atmosphere lead to impacts all over the planet, not just in the region or specific local area where they are emitted. These impacts may eventually manifest as increases in sea levels, migration of biotic zones, changes in local temperatures, etc. Our concern about climate change may be rooted in a desire to assess which stage or component of our product or process has the highest carbon footprint, and thus, all else equal, is the biggest contributor to climate change. The GHG emissions are indicators of the impacts of global warming and climate change.
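As a minimal sketch of how such GHG indicator quantities are combined in practice, the snippet below weights a small inventory by 100-year global warming potentials (GWPs). The GWP values are approximately those reported by the IPCC; the emission quantities are invented for illustration and are not figures from this text.

```python
# Converting a small greenhouse gas inventory into a single CO2-equivalent
# indicator by weighting each gas with its 100-year global warming
# potential (GWP). GWP values are approximate IPCC AR5 figures; the
# emission quantities are hypothetical.
gwp_100 = {"CO2": 1, "CH4": 28, "N2O": 265}

emissions_kg = {"CO2": 1000.0, "CH4": 2.0, "N2O": 0.1}  # invented inventory

# Weighted sum: each gas's mass times its GWP, in kg CO2-equivalent.
co2e_kg = sum(emissions_kg[gas] * gwp_100[gas] for gas in emissions_kg)
print(f"Total: {co2e_kg:.1f} kg CO2e")
```

Note how even small masses of CH4 and N2O contribute meaningfully to the total once weighted, which is exactly why these gases matter despite their smaller emission quantities.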
Impact: Ozone Depletion

In the early 1970s, scientists discovered that human use of certain substances on the earth, specifically chlorofluorocarbons (CFCs), led to reductions in the quantity of ozone (O3) in the stratosphere, with the substances persisting there for 50-100 years. This phenomenon is often tracked and referred to as "holes in the ozone layer". The ozone layer, amongst other services, keeps ultraviolet rays from reaching the ground, preserving plant and ocean life and avoiding impacts such as skin cancers. The Montreal Protocol called for a phase-out of chemicals that deplete the ozone layer, but not all countries ratified it, not all relevant substances were included, and not all uses were phased out. Consequently, while emissions of many of these substances have been dramatically reduced in the past 30 years, they have not been eliminated, and given the 50-100 year lifetime, ozone depletion remains an impact of concern. Thus, releases of the various ozone-depleting substances can be indicators of potential continued impacts of ozone depletion. Note that there is also "ground level" ozone, created by interactions of local pollutants, that helps to create smog, which, when breathed in, can affect human health. This is an entirely different but important potential environmental impact related to ozone.
Impact: Acid Rain

Releases of various chemicals or chemical compounds lead to increased levels of acidity in a local or regional environment. This acidity penetrates the water cycle and can eventually move into clouds and rain droplets. In the developed world, the key linkage was between emissions of sulfur dioxide (SO2) and the acidity of freshwater systems. One of the original points of concern was emissions of sulfur dioxide by coal-fired power plants, because they were large single sources and also because they could be fairly easily regulated. Emissions of these pollutants are an indicator of the potential impacts of more acidic environments, such as the destruction of plants and aquatic life. While in this introduction we have only listed acid rain as an impact, acid rain is part of a family of environmental impacts related to acidification, which we will discuss in more detail later. In short, other non-sulfur compounds like nitrogen oxides can also lead to acidification of waterways, and systems other than freshwater can be affected. Acidification of water also occurs due to global uptake of carbon dioxide and is of increasing concern in oceans, where acidification affects coral reefs and thus the entire ocean ecosystem.
There are various other environmental impacts that have been considered in LCA studies,
such as those associated with eutrophication, human health, and eco-toxicity, but we will
save discussion of them for later in the text. These initial examples, though, should
demonstrate that there are a wide variety of local and global, small and large scale, and
scientifically relevant indicators that exist to help us to assess the many potential
environmental impacts of products and systems.

The Role of Design Choices


The principles of LCA can help to build frameworks that allow us to consider the
implications of making design (or re-design) decisions and to track the expected outcomes
across the life cycle of the product. For example, deciding whether to make a car out of
aluminum or steel involves a complicated series of analyses:
Would the two materials provide the same level of functionality? Would structural
strength or safety be compromised with either material? Lighter vehicles have been
found to be less safe in crashes, although improved design and new automation
technology might remove this difference (NRC 2002, Anderson 2014). A significant
drop in safety for the lighter vehicles could outweigh the energy savings, depending
on the values of the decision maker.
Are there any implications for disposal and reuse of the materials? At present, about
60% of the mass of old cars is recycled or reused. Moreover, motor vehicles are
among the most frequently recycled of all products since recycling is usually
profitable; both aluminum and steel are recycled and reused from automobiles (Boon
et al. 2000). It takes much less energy to recycle aluminum than to refine it from ore.
The advantage for recycling steel is smaller.
What is the relative cost of the two materials, both for production and over the
lifetime of the vehicle? An aluminum vehicle would cost more to build but would be
lighter than a comparable steel vehicle, saving some gasoline expenses over the
lifetime of the vehicle. Do the gasoline savings exceed the greater cost of
manufacturing? Of energy? Of environmental quality?
In this example, steel, aluminum, copper, glass, rubber, and plastics are the materials, while electricity, natural gas, and petroleum are the energy sources that go into making, using, and disposing of a car. The vehicle runs on gasoline, but also needs lubricating oil and
replacement parts such as tires, filters, and brake linings. At the end of its life, the typical
American car is shredded; the metals are recycled, and the shredder waste (plastic, glass, and
rubber) goes to a landfill.
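The gasoline-savings question above lends itself to a quick back-of-the-envelope check. Every number in this sketch (the extra production cost, the mass savings, the fuel-reduction rule of thumb, and the gasoline price) is an assumed placeholder for illustration, not a figure from the text.

```python
# Back-of-the-envelope check of the aluminum vs. steel question:
# do lifetime gasoline savings offset a higher production cost?
# All inputs below are assumed placeholders, not real data.
extra_production_cost = 600.0   # $ more to build the aluminum body (assumed)
mass_saved_kg = 150.0           # assumed mass reduction vs. a steel body
fuel_saved_per_100kg = 0.4      # L per 100 km saved per 100 kg removed (rough rule of thumb)
lifetime_km = 190_000           # roughly 120,000 miles
gas_price = 0.9                 # $ per liter (assumed)

# Fuel saved over the vehicle lifetime, then its dollar value.
fuel_saved_l = (mass_saved_kg / 100) * fuel_saved_per_100kg * (lifetime_km / 100)
savings = fuel_saved_l * gas_price
print(f"Lifetime fuel saved: {fuel_saved_l:,.0f} L, worth ${savings:,.0f}")
print("Aluminum pays back over the lifetime" if savings > extra_production_cost
      else "Steel remains cheaper over the lifetime")
```

The point of such a sketch is not the answer, which flips as the assumptions change, but that it makes explicit which assumptions the comparison hinges on.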

What Life Cycle Thinking and Life Cycle Assessment Is Not


The purpose of this chapter has been to motivate life cycle thinking and explain why it should be chosen to ensure broadly scoped analysis of issues with potential environmental impacts; i.e., we have been introducing "what life cycle thinking is". We end the chapter by briefly summarizing what life cycle thinking (and, by extension, life cycle assessment) is not able to achieve.
First, life cycle thinking will not ensure a path to sustainability. If anything, thinking more
broadly about environmental problems has the potential side effect of making environmental
problems seem even more complex. At the least it will typically lead to greater estimates of
environmental impact as compared to studies with more limited scopes. But life cycle
thinking can be a useful analytical and decision support tool for those interested in
promoting and achieving sustainability.
Second, life cycle thinking is not a panacea - a magic pill or remedy that solves all of society's
problems. It is merely a way of structuring or organizing the relevant parts of a life cycle and
helping to track performance. Addressing the economic, environmental, and social issues in
the context of sustainability can be done without using LCA. To reduce energy and
environmental impacts associated with product or process life cycles, we must want to take
action on the findings of our studies. By taking action we decide to improve upon the
current impacts of a product and make changes to the design, manufacture, or use of the
current systems so that future impacts are reduced.
LCA is not a single-model solution to our complex energy and environmental problems. It is not a substitute for risk analysis, environmental impact assessment, environmental management, benefit-cost analysis, etc. All of these related methods have been developed over many years and may still be worth bringing to the table to help solve these problems. LCA can in most cases interact with these alternative methods to help inform decisions.


Chapter Summary
Life cycle assessment (LCA) is a framework for viewing products and systems from the
cradle to the grave. The key benefit of using such a perspective is in creating a "systems
thinking" view that is broadly encompassing and can be analyzed with existing methods.
When a life cycle perspective has not been used, unexpected but predictable environmental
impacts have occurred.
As we will see in the chapters to come, even though there is a standard for applying life cycle
thinking to problem solving, it is not a simple recipe. There are many study design choices,
variations, and other variables in the system. One person may apply life cycle thinking in
one way, and another in a completely different way. We cannot expect then that simply
using life cycle thinking will lead to a single right answer that we can all agree on.

References for this Chapter


Anderson, J., Kalra, N., Stanley, K., Sorensen, P., Samaras, C., Oluwatola, O. Autonomous
Vehicle Technology: A Guide for Policymakers, Santa Monica, CA: RAND Corporation,
RR-443-RC, 2014.
Boon, Jane E., Jacqueline A. Isaacs, and Surendra M. Gupta, "Economic Impact of
Aluminum-Intensive Vehicles on the U.S. Automotive Recycling Infrastructure", Journal of
Industrial Ecology 4(2), pp. 117-134, 2000.
Hocking, Martin B. "Paper versus polystyrene: a complex choice." Science 251.4993 (1991):
504-505.
Lave, Lester, Hendrickson, Chris, and McMichael, Francis, "Environmental implications of
electric cars", Science, Volume 268, Issue 5213, pp. 993-995, 1995.
Mihelcic, James R., et al. "Sustainability science and engineering: The emergence of a new
metadiscipline." Environmental Science and Technology 37.23 (2003): 5314-5324.
Tarr, Joel, The Search for the Ultimate Sink, University of Akron Press, 1996.
United Nations General Assembly (1987) Report of the World Commission on Environment and
Development: Our Common Future. Transmitted to the General Assembly as an Annex to
document A/42/427 - Development and International Co-operation: Environment.
United States Office of Technology Assessment (OTA), Green Products by Design:
Choices for a Cleaner Environment, OTA-E-541, 1992.


End of Chapter Questions


1. On a sheet of paper, draw by hand or software a diagram of a life cycle for a simple
product other than a paper clip, with words representing the various activities in the
life cycle needed to make the product, and arrows representing connections between
the activities. Annotate the diagram with colors or shading to try to represent hot
spots for two inputs or outputs that you believe are relevant for decisions associated
with the product.
2. Do the same exercise as in Question 1, but for a school or university, which is a
service not a physical product.
3. Describe the major activities in each of the five life cycle stages of Figure 1 for a soft
drink beverage container of your choice. Describe also the activities needed to
support reuse, remanufacturing, and recycling activities for the container chosen.
4. Power plants (especially fossil-fuel based coal and gas-fired units) are frequently
mentioned sources of environmental problems. List three specific types of outputs
to the environment resulting from these fossil plants. Which other parts of the life
cycle of producing electricity from fossil plants also contribute to these problems?
5. Suppose that a particular truck requires diesel fuel to provide freight transportation
(that is, moving tons of freight over some distance). In the process, carbon dioxide
is emitted from the truck.
a. In the terminology of life cycle thinking presented in this chapter, what does
the diesel fuel represent?
b. What do the freight movement and carbon dioxide emissions represent?
c. What stage of the truck life cycle is being presented in this problem so far?
What other truck life cycle stages might be important to consider?
d. In considering the environmental impacts of trucks, would it be advisable to
expand our system of thinking to include providing roadways? Why or why
not?
6. Across the life cycle of a laptop computer, discuss which life cycle stages might
contribute to the environmental impact categories discussed in the chapter (global
warming, ozone depletion, and acid rain). Are there other classes of environmental
impact you can envision for this product?


Dana Fradon, The New Yorker May 17, 1976



Chapter 2: Quantitative and Qualitative Methods Supporting Life Cycle Assessment
In this chapter, we introduce basic quantitative skills needed to perform successful work in
LCA. The material is intended to build good habits in critically thinking about, assessing,
and documenting your work in the field of LCA (or, for that matter, any type of systems
analysis problem). First we describe good habits with respect to data acquisition and
documentation. Next we describe skills in building and estimating simple models. These
skills are not restricted to use in LCA and should be broadly useful for business, engineering,
and policy modeling tasks. As this book is intended to be used across a wide set of
disciplines and levels of education, we write as if aimed at undergraduates who may not be
familiar with many of these concepts. It may be a cursory review for many graduate
students. Regardless, improving such skills will make your LCA work even more effective.

Learning Objectives for the Chapter


At the end of this chapter, you should be able to:
1. Apply appropriate skills for qualitative and quantitative analysis.
2. Document values and data sources in support of research methods.
3. Improve your ability to perform back-of-the-envelope estimation.
4. Approach any quantitative question by means of describing the method, providing
the answer, and describing what is relevant or interesting about the answer.

Basic Qualitative and Quantitative Skills


To be proficient in any type of systems analysis, you need to have sharp analytical skills
associated with your ability to do research, and much of this chapter is similar to what one
might learn in a research methods course. While the skills presented here are generally
useful (and hopefully will serve you well outside of the domain of LCA) we use examples
relevant to LCA to emphasize and motivate their purpose.
Much of LCA involves doing "good research" and communicating the results clearly. That is why so many people with graduate degrees are able to learn LCA quickly: they already have the base of skills needed to be successful, and just need to learn the new domain knowledge. Amongst the most important skills are those associated with your quantitative and qualitative abilities.

Quantitative skills are those associated with your ability to create numerical manipulations
and results, i.e., using and applying math and statistics. Qualitative skills are those related
to your ability to think and write about your work beyond numbers, and to describe the
relevance of your results. While this textbook is more heavily geared towards improving
your quantitative skills, there are many examples and places of emphasis throughout the text
that are intended to develop your qualitative skills. You will need to be proficient at both to
successfully appreciate and perform LCA work.
Identifying your own weaknesses in these two areas now can help you improve them while you are also learning new material relevant to the domain. Your quantitative skills are relatively easy to assess: e.g., if you can correctly answer a technical or numerical question by applying an equation or building a model, you can "pass the test" for that quantitative skill. Qualitative skills are not as easy to evaluate and so must be assessed in different ways: e.g., your ability to synthesize or summarize results or see the big picture could be assessed using a rubric that captures the degree to which you put your findings into context.
In the remainder of this chapter, we'll first review some of the key quantitative types of skills
that are important (and which are at the core of life cycle studies) and then discuss how to
mix qualitative and quantitative skills to produce quality LCA work. One of the most
important skills is identifying appropriate data to use in support of analyses.

Working with Data Sources


Most data are quantitative, i.e., you are provided a spreadsheet of numerical values for some process or activity and you manipulate the data in some quantitative way (e.g., by finding an average, sorting it, etc.). But data can also be qualitative: you may have a description of a process that discusses how a machine assembles inputs, or you may generally know that a machine is relatively old (without knowing an exact date of manufacture). Being able to work with both types of data is useful when performing LCA.
As we seek to build a framework for building quantitative models, inevitably one of the
challenges will be to find data (and in LCA, finding appropriate data will be a recurring
challenge). But more generally we need to build skills in acquiring and documenting the data
we find. As we undertake this task, it is important to understand the difference between
primary and secondary sources. A primary source of data comes directly from the entity
collecting the data and/or analyzing it to find a result. It is thus generally a definitive source
of information, which is why you want to find it. A secondary source is one that cites or
reuses the information from the primary source. Such sources may use the information in
different ways inconsistent with the primary source's stated goals and intentions, and may
incorporate biases. It is thus good practice to seek the primary source of the information
and not merely a source that makes use of it. Finding (and reading, if necessary) the primary
source also allows you to gain appreciation for the full context that reported the result. This
context may include the sponsor of the study, any time or data constraints, and perhaps
caveats on when or how the result should be used.
In today's Internet search-enabled world, secondary sources are far more prominent. Search engines are optimized to find frequently linked and repeated sources, not necessarily primary sources. As an example, the total annual emissions of greenhouse gases in the US are
prepared in a study and reported every year by the US Environmental Protection Agency
(EPA). The EPA spends a substantial amount of time - with the assistance of government
contractors - each year refining the methods and estimates of emissions to be reported.
Given their official capacity and the work done, the reporting of this annual estimate (i.e.,
"the number") is a primary source. This number, which is always for a prior period and is
therefore a few years old, gets noticed and reported on by hundreds of journalists and media
outlets, and thousands of web pages or links are created as a result. A web search for
"annual US GHG emissions" turns up millions of hits. The top few may be links to the
latest EPA report or the website that links to the report. The web search may also point to
archived EPA reports of historical emissions published in previous years. But there is only a single primary source for each year's emissions estimate: the original study by EPA.
The vast majority of the web search results lead to studies "re-reporting" the original
published EPA value. It is possible that the primary source is not even in the top 10 of the
ordered websites of a web search. This phenomenon is important because when looking for
data sources, it is easy to find secondary sources, but there is often a bit of additional work
needed to track backwards to find and cite the primary source. It is the primary source that
one should use in any model building and documentation efforts (even if you found it via
finding a secondary source first). A primary source of data is typically from a credible
source, and citing "US EPA" instead of "USA Today" certainly improves the credibility of
your work. Backtracking to find these primary sources can be tricky because often
newspaper articles will simply write "EPA today reported that the 2011 emissions of
greenhouse gases in the United States were 7 billion metric tons" without giving full
references within the article. Blogs on the other hand tend to be slightly more academic in
nature and may cite sources or link to websites (and of course they still might link to a
secondary source). If your secondary sources do not link to the EPA report directly, you
need to do some additional searching to try to find the primary source. It will help your
search that you know the numerical value that will be found in the primary source (but of
course you should confirm that the secondary source used the correct and most up to date
value). With some practice you will become adept at quickly locating primary sources.
The relevant contextual information that may appear in the official EPA source includes
things like how the estimate was created, what year it is for, what the year-over-year change
was, and which activities were included. All of that contextual information is important. A
more frequently reported estimate of US GHG emissions (only a few months old when
reported) comes from the US Department of Energy, but only includes fossil fuel
combustion activities, which are far easier to track because power plants annually report their
fuel use to the Department. If you were looking for a total inventory of US greenhouse gas
emissions, the EPA source is the definitive source.
After finding appropriate data, it is essential to reference the source adequately. It is assumed
that you are generally familiar with the basics of creating footnote or endnote references or
bibliographical references to be used in a report. You can see short bibliographical reference
lists at the end of each of the chapters of this textbook. Primary data sources should be
completely referenced, just as if you were excerpting something from a book. That means
you need to give the full bibliographic reference as well as point to the place inside the
source where you found the data. That might be the page number if you borrow something
from the middle of a report, or a specific Table or Figure within a government report. For
example, if you needed data about the electricity consumption per square foot for a
commercial building, the US Department of Energy's Energy Information Administration
2003 Commercial Buildings Energy Consumption Survey (CBECS) suggests the answer is
14.1 kWh/square foot (for non-mall buildings). The summary reports for this survey are
hundreds of pages in length. The specific value of 14.1 kWh/sf is found on page 1 of Table C14. By referencing this source specifically, you allow others to reproduce your study
quickly. You also are allowing others (who may stumble upon your own work when looking
for something else) to use your work as a secondary source. The full primary source
reference for the CBECS data point could look like this:
US Dept. of Energy, 2003 Commercial Buildings Energy Consumption Survey (CBECS), Table
C14. "Electricity Consumption and Expenditure Intensities for Non-Mall Buildings, 2003", 2006,
http://www.eia.gov/consumption/commercial/data/2003/pdf/c14.pdf, last accessed July 5, 2013.
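Applying a cited intensity like this one is simple arithmetic, but writing it out makes the units and the assumption explicit. The building size below is a hypothetical example, not a value from CBECS.

```python
# Estimating annual electricity use from the CBECS intensity cited above.
# The floor area is an assumed example value; the intensity is the
# CBECS 2003, Table C14 figure for non-mall buildings.
intensity_kwh_per_sf = 14.1   # kWh per square foot per year
floor_area_sf = 50_000        # hypothetical commercial building size

annual_kwh = intensity_kwh_per_sf * floor_area_sf
print(f"Estimated annual electricity use: {annual_kwh:,.0f} kWh")
```

Keeping the intensity and its source in one place alongside the calculation is a small habit that makes the later referencing work much easier.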

What is unfortunately common is to see very loose or abbreviated referencing of data sources, such as "DOE CBECS". Such casual referencing is problematic for many reasons.
The DOE has done at least four CBECS surveys, roughly four years apart, since 1992, for
which they have made the results available online. If one finds a single data point on the
Energy Information Administration's website and uses it in a study, that data point might
come from any of these four surveys, which span 20 years of time, from any of the
thousands of pages of data summaries. With only a reference to "CBECS", one would have no way of knowing how recent, relevant, or useful your data point is.
Beyond the examples above, one might be interested in the population of a country, the
average salary of workers, or other fundamental data. You are likely (and encouraged) to
find and report multiple primary sources. These multiple sources could come from
independent agencies or groups who sought to find answers to the same or very similar
questions. A rule of thumb is to seek and report results from at least three such sources if
possible. In the best case, the primary sources yield the same (or nearly equal) data. In
reality, they will likely disagree to a small or large extent. There may be very easy
explanations for why they differ, such as using different assumptions or methods. By noting
and representing that you have found multiple data points, and summarizing reasons for the
differences, you gain the ability to judge whether to simply use an assumption based on the
three sources, or need to use a range or an average. The practice of seeking multiple sources
will sometimes even uncover errors in original studies or data reports, or at the least make
you realize that a primary source found is not appropriate to use in your own work given
differences in how the result was made.
"When we look up a number in more than one place, we may get several different
answers, and then we have to exercise care. The moral is not that several answers are
worse than one, but that whatever answer we get from one source might be different if
we got it from another source. Thus we should consider both the definition of the
number and its unreliability." -- Mosteller (1977)
If you end up with several values, it may be useful to summarize them in a table. If you had been trying to find the total US greenhouse gas emissions as above, you might summarize them as in Figure 2-1. Additional rows could be added for other primary or secondary sources.
A benefit of organizing these summary tables is that it allows the audience to better
understand your underlying data sources as well as potential issues with applying them.
Value (million metric tons CO2) | Source | Type of Source | Comments
6,702 | US EPA, Inventory of U.S. Greenhouse Gas Emissions and Sinks: 1990-2011 | Primary | Value is for 2011.
5,471 | US DOE, U.S. Energy-Related Carbon Dioxide Emissions, 2011 | Primary | Value is for 2011. Only counts energy-related emissions.
6,702.3 | Environmental News Network, US Greenhouse Gas Emissions are Down, April 21, 2013 | Secondary | Specifically references EPA.

Figure 2-1: Summary of Sources for US Greenhouse Emissions
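The multiple-source practice summarized in Figure 2-1 can also be organized programmatically: record each value with its source and source type, then compare. The values below are the ones from Figure 2-1 (million metric tons CO2).

```python
# Recording multiple reported values for "total US GHG emissions, 2011"
# with their sources, then comparing them. Values are from Figure 2-1
# (million metric tons CO2).
sources = [
    ("US EPA, Inventory of U.S. GHG Emissions and Sinks", "primary", 6_702.0),
    ("US DOE, energy-related CO2 only", "primary", 5_471.0),
    ("Environmental News Network (cites EPA)", "secondary", 6_702.3),
]

values = [v for _, _, v in sources]
mean = sum(values) / len(values)
spread = max(values) - min(values)
print(f"mean = {mean:,.1f}; spread = {spread:,.1f} million metric tons CO2")
# A large spread signals differing scopes (here, the DOE figure omits
# non-energy emissions), not necessarily an error in any one source.
```

Computing the spread before deciding whether to use a single value, a range, or an average is exactly the judgment call the text describes.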

A final note about seeking data sources pertains to the use of statistical abstracts. Such references exist for many countries, states, and organizations like universities. These abstracts are valuable reference materials that are loaded with many types of summary data. They are typically organized into sections or chapters of related data. For example, the Statistical Abstract of the United States (2011) has sections on agriculture, manufacturing, energy, and transportation (all of which are potentially relevant for LCA studies). Each of the sections contains a series of data tables. The Agriculture section has, amongst other interesting facts, data on the number of farms and the area of cropland planted for many types of crops. The table of farms and cropland (Number 823) has a footnote showing the primary source of the data, in this case the 2007 Census of Agriculture. Such abstracts may also have other footnotes that need to be considered when using them as a source, such as notes on the units of presentation (e.g., dollar values in millions) or the boundaries considered.

This example is intended to reinforce two important facts about using statistical abstracts. First,
while statistical abstracts may be a convenient "go to" reference source, they are not a primary
source. The best practice is to use statistical abstracts as links to primary sources and then go
read the primary source. Re-publication
of data sometimes leads to errors, or omissions of important footnotes like units or
assumptions used. Second, despite the "2011" in the title of the abstract, it is generally not
true that all data within is from that year; rather, any data contained within is generally
the most recent available at publication. Abstracts for states and other organizations are organized in
similar ways and with similar source referencing. Finally, it is worth noting that in the age of
Google, statistical abstracts are no longer the valuable key reference sources that they once
were. Nonetheless, they are still a great first "one stop" place to look for information,
especially if doing research in a library with an actual book.

Accuracy vs. Precision


We seek primary sources (and multiple primary sources) because we want to get credible
values to use. Depending on the kind of model we are building, we may simply need a
reasonable estimate, or we may need a value as exact as possible. This raises the issue of
whether we are seeking accuracy or precision in our search for sources and/or our model
building efforts. While the words accuracy and precision are perhaps synonyms to lay
audiences, the "accuracy versus precision" distinction is a long-standing one in science. We are
often asked to clarify our goals in terms of whether we are seeking accuracy, precision, or
both in our system of measurement.
The accuracy of a measurement system is the degree to which measurements made are
close to the actual value (of course, as measured by some always correct system or entity).
The precision of a measurement system is the degree to which repeated measurements give
the same results. Precision is thus also referred to as repeatability or reproducibility.
In addition to physical measurement systems, these features are relevant to computational
methods on data, such as statistical transformations, Microsoft Excel 1 models, etc.
Figure 2-2 summarizes the concepts of accuracy and precision within the context of aiming
at a target, but could be analogously used to consider our measurements of a value.

Microsoft and Excel are registered trademarks of Microsoft Corporation. In the rest of the book, just "Microsoft Excel" will be used.


[Figure: a 2x2 grid of targets illustrating the four combinations of accurate/inaccurate with precise/imprecise]

Figure 2-2: Comparison of accuracy and precision. Source: NOAA 2012

Systems can thus be accurate but not precise, precise but not accurate, or neither, or both.
Systems are considered valid when they are both accurate and precise. With respect to our
CBECS example above, the survey used could provide an inaccurate (but precise) result if
mall and non-mall buildings are included in an estimate of retail building effects. It could
produce an imprecise (but accurate) result if samples from different geographical regions do
not align with the actual geographical mix of buildings. Performing mathematical or
statistical operations (e.g., averages) on imprecise values may not lead to a value that is
credible to use in your work.
When a measurement system is popular and needs to be known to be accurate and precise,
typically a standard is made for all parties to agree upon how to test and formalize the
features of the system (e.g., how to perform the test many times and assess the results).

Uncertainty and Variability


As we seek to find multiple sources for our data needs, inevitably we will come across
situations where the data do not agree to the extent that we would hope. This will lead us to
situations of dealing with uncertainty and variability of our data. While the ways in which we
work with and model uncertain and variable data are similar, we first separately define each
condition. These simple definitions will be used here, with further detail in later chapters as
needed. Variability exists because of heterogeneity or diversity of a system. It may be, for
example, that the energy used to manufacture an item differs between the morning and
afternoon shift in a factory. Uncertainty exists because we either are unable to precisely
measure a value, or lack full information, or are ignorant of some state. It is possible that if
we did additional research or improved our measurement methods, we could reduce the
uncertainty and narrow in on a most likely outcome or value. Variability, on the other hand,
is not likely to be reducible; it may exist purely due to natural or other factors outside of
our control.

Management of Significant Figures


Beyond thinking that we have created a way of accurately and precisely measuring a quantity,
we also want to ensure that we appropriately represent the result of our measurement. Many
of us learned of the importance of managing the use of significant figures (or digits) in
middle school. Two important lessons learned that merit mention in this context relate to
leading and trailing zeros and reporting the results of mathematical operations. Remember
that trailing zeros to the right of a decimal point are significant and indicate the level of
precision of the measurement. Leading zeros (after a decimal point), however, are not significant. This means
that a value like 0.00037 still has only two significant digits because scientific notation would
refer to it as 3.7E-04 and the first component of the notation (3.7) represents all of the
significant digits. Also take care not to introduce extra digits in the process of adding,
subtracting, multiplying, or dividing significant figures. That means, for example, not
perpetuating a result from a calculator or spreadsheet that multiplies two 2-digit numbers
and reporting 4 digits. The management of significant digits means reporting only 2 digits
from such a result, even if it means rounding off to achieve the second digit.
Recall that the basis for such directives is that our measurement devices are calibrated to a
fixed number of digits. A graduated cylinder used to measure liquids in a laboratory usually
shows values in 1 ml increments (e.g., 10, 11, or 12 ml). We then attempt to estimate the
level of the liquid to the nearest 10th of an increment. As an example, when measuring a
liquid we would report values like 10.2 ml with three significant figures - which expresses
our subjective view that the height of the liquid is approximately 2/10ths of the way between
the 10 and 11 ml lines. Given our faith in the measurement system, we are quite sure of the
first 2 digits to the left of the decimal point (e.g., 10), and less sure of the digit to the right of
the decimal point as it is our own estimate given the uncertainty of the measurement device,
and thus is the least significant figure.

When counting significant figures, think about scientific notation.
- All nonzero digits are significant.
- Zeroes between nonzero digits are significant.
- Trailing zeroes that are also to the right of a decimal point in a number are significant.

Digits do not increase with calculations.
- When adding and subtracting, the result is rounded off to have the same number of decimal places as the measurement with the least decimal places.
- When multiplying and dividing, the result is rounded off to have the same number of significant figures as in the component with the least number of significant figures.

Figure 2-3: Summary of Rules of Thumb for Managing Significant Figures
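These rules can be applied programmatically, since calculators and spreadsheets will not manage significant figures for you. Below is a minimal sketch in Python (the helper name `round_sig` is our own, not a standard function) of rounding a value to a chosen number of significant figures:

```python
import math

def round_sig(x, sig=2):
    # Round x to `sig` significant figures by shifting the rounding
    # position based on the order of magnitude of x.
    if x == 0:
        return 0.0
    return round(x, sig - 1 - math.floor(math.log10(abs(x))))

# 0.00037 already has only two significant figures (3.7E-04)
print(round_sig(0.00037, 2))   # 0.00037
# A calculator result of 14.1234 reported to 3 significant figures
print(round_sig(14.1234, 3))   # 14.1
```

A helper like this is handy at the reporting stage of a model, after all intermediate calculations are complete.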

Inevitably, our raw measurements will be used in additional calculations. For example our
graduated cylinder observation of volume can then be used to find mass, molarity, etc. If
those subsequent calculations are presented with five significant figures (since that's what the
calculator output reads), such results overstate the accuracy of the calculations based on the
original data, and by implication understate their uncertainty. Figure 2-3 summarizes rules
for managing significant figures. We will circle back to discussing data acquisition in the
context of life cycle assessment in a later chapter.
Going back to our CBECS example, the published average electricity use of 14.1
kWh/square foot is a ratio with three significant figures. That published value represents an
average of many buildings included in the survey. The buildings would give a wide range of
electricity consumption values in the numerator. However, the three significant figures
reported are likely because some relatively small buildings led to a value with only three
significant figures. If not concerned about managing significant figures, DOE could have
reported a value of 14.1234 kWh/sf. This result would have led to negligible modeling
errors, but would have added extraneous digits for no reason.
One of the main motivations for managing the number of significant digits is in considering
how to present model results of an LCA. As many LCAs are done in support of a
comparison of two alternatives, an inevitable task is comparing the quantitative results of the
two. For such a comparison to be valid, it is important not to report more significant figures
in the result than were present in the initial measured values. A common output of an LCA,
given the need to maintain assumptions between the modeling of various alternatives, is that
the alternatives would have very similar effects across at least one metric. Consider a
hypothetical result where the energy use of Alternative A is found to be 7.56 kWh and for
Alternative B is 7.57 kWh. Would one really expect a decision maker to prioritize one over
the other because of a 0.01 kWh reduction in energy use, which is a 0.1% difference, or a
savings worth less than 0.1 cents at current US electricity prices? Aside from the fact that it
is a trivial amount, it is likely outside of the range of measurement available.
In LCA, we do not have the same "measurement device" issues used to motivate a middle
school introduction to significant digits. Instead, the challenge lies in understanding the
uncertainty of the "measurement process" or the "method" used to generate the numerical
values needed for a study. So while we do not worry about the number of digits on a
graduated cylinder, we need to consider that the methods are uncertain. Thus you will see
many studies create internally consistent rules that define "significance" in the context of
comparing alternatives. These rules of thumb are rooted in the types of significance testing
done for statistical analyses, which are generally not usable given the small number of
data points used in such studies. Often-used rules will suggest that the uncertainty of values
such as energy and carbon emissions are at least 20%, with even higher percentages for other
metrics. When implemented, that means our values for Alternatives A and B would need to
be at least 20% different for one to consider the difference as being meaningful or
significant. The comparative results would be "inconclusive" for energy use using such a
study's rules of thumb.
In the absence of study rules of thumb for significance, what would we recommend?
Returning to our discussion above, an LCA practitioner should seek to minimize the use of
significant digits. We generally recommend reporting no more than 3 digits (and, ideally,
only 2 given the potential for a 20% consideration of uncertainty). In the example of the
previous paragraph, that would mean comparing two alternatives with identical energy use,
i.e., 7.6 kWh. The comparison would thus have the appropriate outcome that the
alternatives are equivalent.
Ranges
If you are able to find multiple primary sources, it is typically more useful to fully represent
all information you have than to simply choose a single point as a representation. If you use
a single value, you are making a conscious statement that one particular value is the most
correct and the others are irrelevant. In reality, you may have more than one value being
potentially correct or useful, e.g., because you found multiple credible primary sources. By
using ranges, you can represent multiple data points, or a small set or subset of data. While
individual data points are represented by a single number (e.g., 5), a range is created by
encapsulating your multiple data points, and may be represented with parentheses, such as
(0,5) or (0-5). A range represented as such could mean "a number somewhere from 0 to 5".
The values used as the limits of a range may be created with various methods. Often used
parameters of ranges are the minimum and maximum values of a dataset. In an energy
technology domain, you might want to represent a range of efficiency values of an electricity
generation technology, such as (30%, 50%).
If you have a large amount of data, then it might be more suitable to use the 5th and 95th
percentile values as your stated range. While this may sound like an underhanded way of
ignoring data, it can be appropriate to represent the underlying data if you believe some of
the values are not representative or are overly extreme. Using the same technology
efficiency example, you may find data on efficiencies of all coal-fired or gas-fired power
plants in the US, and decide that the lowest efficiency values (in the teens) are far outside of
the usual practice because they represent the efficiencies of plants that are used very
infrequently or are using extremely out of date technology. There could be similarly
problematic values at the high end of the full range of data if the efficiency for a newer plant
has been estimated by the manufacturer, but the plant has not been in service long enough to
measure the true efficiency. Using these percentile limits in the ranges can help to slightly
constrain the potential values in the data.
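As a sketch of how such ranges might be computed, consider a hypothetical set of power plant efficiency values. The helper below implements a common linear-interpolation percentile (the data values and function name are our own illustration, not from any survey):

```python
import math

def percentile(data, p):
    # Linear-interpolation percentile; p is between 0 and 100.
    s = sorted(data)
    k = (len(s) - 1) * p / 100
    lower, upper = math.floor(k), math.ceil(k)
    if lower == upper:
        return s[int(k)]
    # Interpolate between the two surrounding data points
    return s[lower] + (s[upper] - s[lower]) * (k - lower)

# Hypothetical plant efficiencies (fractions)
eff = [0.18, 0.30, 0.33, 0.35, 0.36, 0.38, 0.40, 0.45, 0.52, 0.60]
min_max_range = (min(eff), max(eff))                   # full range (0.18, 0.60)
pct_range = (percentile(eff, 5), percentile(eff, 95))  # trims the extremes
```

Note how the 5th-95th percentile range is tighter than the min-max range, dampening the influence of the outdated 18%-efficient plant.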
Ranges can be used to represent upper and lower bounds. Bounding analysis is useful when you
do not actually have data but have a firm (perhaps even qualitative) belief that a value is
unlikely to be beyond a certain quantity. A bounding analysis of energy technology might
lead you to conclude that given other technologies, it is unlikely that an efficiency value
could be less than 20% or greater than 90%. Using a range in this way constrains your data
to values that you feel are the most realistic or representative.
Finally, ranges can be used to represent best or worst case scenarios. The limit values chosen
for the stated ranges are thus subjectively chosen, although perhaps by building on some
range limits derived from some of the other methods above. For example, you might decide
that a "best case value" for efficiency is 100% and "worst case" value is 0% (despite
potentially being unrealistic). Best and worst case limits are typically most useful when
modeling economic parameters, e.g., representing the highest salary you might need to pay a
worker or the lowest interest rate you might be able to get for a bank loan. Best and worst
cases, by their nature, are themselves unlikely. It is not very probable that all of your worst
parameters will occur, just as it is improbable that all best parameters will occur. Thus you
might consider the best-worst ranges as a type of bounding analysis.
Another way of implementing a range is by using statistical information from the data, such as
the variance, standard error, or standard deviation. You may recall from past statistics
courses that the variance is the average of the squared differences from the mean, and the
standard deviation (how much you expect one of the data points to be different from the
mean) is the square root of the variance. The standard error (the "precision of the average",
or how much you might expect a mean of a subsample to be different from the mean of the
entire sample) is the standard deviation divided by the square root of the number of samples
of data. Either of these values, if available, can be used to construct confidence intervals to
give some sense of the range of the underlying data. A related statistical metric is the relative
standard error (RSE), which is defined as the standard error divided by the mean and
multiplied by 100, which gives a percentage-like range variable. Another way to think about
the RSE is as a metric representing the standard error relative to the mean estimate on a
scale from zero to 100. As the RSE increases, we would tend to believe our mean estimate is
less precise when referring to the true value in the population being studied. Of course
when found in this way, the range will be symmetric around the mean.
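The relationships among these statistics can be sketched with Python's standard library (the sample values here are hypothetical measurements, chosen only to illustrate the definitions):

```python
import math
import statistics

data = [12.0, 14.0, 16.0, 14.0, 15.0, 13.0]   # hypothetical measurements
mean = statistics.mean(data)                   # sample mean
sd = statistics.stdev(data)                    # sample standard deviation
se = sd / math.sqrt(len(data))                 # standard error of the mean
rse = se / mean * 100                          # relative standard error (%)
```

With these six values, the mean is 14.0 and the RSE works out to about 4%, meaning the mean estimate is fairly precise relative to its magnitude.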
A 95-percent confidence range is calculated for a given survey (mean) estimate from the
RSE via a three-step process. First, divide the RSE by 100 and multiply by the mean
presented by the survey to get the standard error. Second, multiply the standard error by
1.96 to generate the confidence error (recall from statistics that the value 1.96 comes from
the shape and structure of an arbitrarily assumed normal distribution and its 0.975 quantile).
Finally, add and subtract the confidence error to the survey estimate from the second step to
create the 95% confidence range. Note that a 95% confidence range is not the same as a 5th-95th percentile range. A 95% confidence range represents the middle 95% of a normal
distribution, or a 2.5th-97.5th percentile range, leaving only 2.5% of the distribution at the top
and bottom. A 5th-95th percentile range leaves 5% on the top and bottom.
Example 2-1:
Question:
Develop a 95% confidence interval around the 2003 CBECS estimate of US
commercial building electricity consumption per square foot (14.1 kWh/sf) given the stated
RSE (3.2).
Answer:
Given the RSE definition provided above, the standard error is (3.2/100)*14.1 =
0.45 kWh/square foot, and the confidence error is 0.88 kWh/square foot. Thus, the 95%
confidence interval would be 14.1 +- 0.88 kWh/square foot. Note that this range
seems to contradict the 25th-75th percentile range of 3.6-17.1 provided directly by the survey (it
is a much tighter distribution around the mean of 14.1). However, the confidence interval is
representing something different: how confident we should be that the average electricity use
of all of the buildings surveyed (as if we re-did the survey multiple times) would be
approximately 14.1, not the underlying range of actual electricity use of the
buildings! If you are making a model that needs to represent the range of electricity use, the
provided 25th-75th percentile values are likely much more useful.
Source: US Dept. of Energy, 2003 Commercial Buildings Energy Consumption Survey
(CBECS), RSE Tables for Consumption and Expenditures for Non-Mall Buildings,
http://www.eia.gov/consumption/commercial/data/2003/pdf/c1rse-c38rse.pdf, page 94.
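The three-step process above is simple to script. The sketch below (the function name `ci_from_rse` is our own) reproduces the Example 2-1 numbers:

```python
def ci_from_rse(mean, rse, z=1.96):
    # Step 1: recover the standard error from the RSE (a percentage)
    se = (rse / 100) * mean
    # Step 2: scale by the normal 0.975 quantile to get the confidence error
    ce = z * se
    # Step 3: add and subtract the confidence error to form the 95% range
    return (mean - ce, mean + ce)

low, high = ci_from_rse(14.1, 3.2)   # roughly (13.2, 15.0) kWh/square foot
```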

A main benefit of using ranges instead of single point estimates is that the range boundaries
can be used throughout a model. For example one can propagate the minimum values of
ranges through all calculations to ensure a minimum potential result, or the maximum values
to get a maximum potential result. One word of caution when using ranges as suggested
above is to maintain the qualitative sense of the range boundaries. If you are envisioning a
best-worst kind of model, then the "minimum" value chosen in your range boundary should
consistently represent the worst case possible. This is important because you may have a
parameter in your model that is very high but represents a worst case, for example, a loss
factor from a production process. In a best-worst range type of model, you want to have all
of your best and worst values ordered in this way so that your final output range represents
the worst and best case outputs given all of the worst possible variable values, and all
possible best values.

Units and Unit Conversions


In quantitative analysis, it is critical to maintain awareness of the unit of analysis. That
might mean noting grams or kilograms, short tons or metric tons (a.k.a. tonnes). While
conversions can be simple, such as multiplying or dividing by 1000 in SI units, this is an area
where many errors occur, especially when done manually. It is easy to make errors by not
thinking through the impacts and accidentally multiplying instead of dividing, or vice versa. Thus a
good practice is to ask yourself whether the resulting conversion makes sense. This is also
known as applying a reasonableness test, or a sanity check. Some refer to it as a "sniff
test", suggesting that you might be able to check whether the number smells right. To
convert from kilograms to grams, we multiply by 1000 - the result should be bigger because
we should have many more grams than we do kilograms. If we accidentally divide by 1000
(an error the authors themselves have made many times in the rush of getting a quick
answer) the number gets smaller and the sniff test would tell us it must be an error.
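A sniff test can even be automated inside a model. The short sketch below (our own illustration) converts kilograms to grams and asserts the expected direction of the change:

```python
def kg_to_g(kg):
    # 1 kg = 1000 g, so the result must always be numerically larger
    return kg * 1000.0

mass_kg = 2.5
mass_g = kg_to_g(mass_kg)
# Sniff test: if this fails, the conversion was applied backwards
assert mass_g > mass_kg, "grams should outnumber kilograms"
```

Building such checks into a spreadsheet or script catches the multiply-versus-divide error the moment it happens rather than after results are published.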
In the context of finding sources for data, simple changes of unit scales, such as grams to
kilograms, don't require extensive referencing. When performing simple unit conversions
like this, it is typical that instead of seeking external data sources you would simply
document the step used (e.g., you would state that you "converted to kilograms").
There are however more complex unit conversions that change the entire basis of
comparison (not just kg to g). If you are changing more than just the scale, such as
switching from British Thermal Units (BTU) to megajoules (MJ), this is referred to as
performing physical or energy unit conversions. A unit conversion factor is just a
mathematical relation between the same underlying phenomena but with different
measurement scales, such as English and SI (metric) units. For example you may find a data
source expressing emissions in pounds but need to report it in kilograms (or metric tons).
This type of conversion does not require much documentation either, e.g., you could write
that you "assumed 2.2 pounds per kilogram". Such conversions still need to be done and
managed correctly. In 1999, NASA famously lost the Mars Climate Orbiter after a nine-month mission when navigation engineers gave commands in metric units to the spacecraft,

whose software engineers had programmed it to operate with English units, causing the
vehicle to overshoot the planet.
If you do not know the conversion factors needed, then you will need to search for sources
of your conversion factors using the same methods discussed above. If you were to do a
search for unit conversions with the many tools and handbooks available, you will certainly
find slightly different values in various sources, although most of these differences are simply
due to rounding off or reducing digits. One source may say 2.2 pounds per kg, another
2.20462, and yet another 2.205. Practically speaking, any of these unit conversions will lead
to the same result (they differ by at most 0.2%), and in the big picture
they are all the same number, i.e., 2.2. The existence of multiple conversion factors is the
reason to state the one you used. Without stating the actual conversion factor used,
someone else may not be able to reproduce your study results (or may assume an alternative
unit conversion factor and not understand why your results are different). Given the
scientific and engineering basis of unit conversion factors, you do not typically need to cite
specific 'sources' for them, just the numbers used.
As you build your models, your calculations will become increasingly complex. You can
double-check your calculations by tracing your units. As a simple example, assume you have
tugboat transit time data for a stretch of river between two locks. You know the transit time
in minutes (140), and the distance between locks in miles (6.1). Equation 2-1 shows how to
calculate the tugboat speed in kilometers/hour, which could later allow you to calculate
power load and emissions rates. Getting the speed units wrong, despite being a trivial
conversion, will have disastrous effects on your overall model results. Tracing the units
confirms that you have used all of the necessary conversion factors, and used them
appropriately and in the right order.
Speed = (6.1 miles between locks / 140 minutes transit time) x (1 kilometer / 0.621 miles) x (60 minutes / 1 hour) = 4.2 km/hr    (2-1)

We end this section by briefly discussing the need to manage units in calculations. Note that
when solving equation 2-1, your calculator would suggest that the speed is actually 4.2098
km/hr, a level of precision that would be impossible to achieve (and silly to present). The
reason to document the units is so that when we are using them in calculations that we do
the mathematical operations correctly, i.e., adding kg to kg, not kg to g. The graphic made in
1976 for The New Yorker (presented at the beginning of this chapter) is a reminder of this.
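Equation 2-1 can also be checked by carrying the units along in comments, a practice worth following in spreadsheet and script models alike (the figures below are those of the tugboat example):

```python
distance_miles = 6.1      # miles between locks
time_minutes = 140.0      # transit time, minutes

# miles/minute -> miles/hour (60 minutes per hour)
speed_mph = distance_miles / time_minutes * 60
# miles/hour -> kilometers/hour (0.621 miles per kilometer)
speed_kmh = speed_mph / 0.621

print(round(speed_kmh, 1))   # 4.2, not the calculator's 4.2098...
```

Writing each conversion on its own commented line makes it easy to trace the units and confirm that every factor was applied in the right direction.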
Considerations for Energy Unit Conversions
Sometimes changing units involves more than applying a single conversion factor. You may
recall from a physics course that energy is a measure of the amount of work done or
generated (typical units are joules, BTU, or kilowatt-hours). On the other hand, power is the
rate at which energy is used or transformed (typical units are watts or joules/second). Unit
conversions in the energy domain, e.g., between BTU and kWh, can be more complicated
than they appear. Unlike physical unit conversions that are just different ways of measuring
or weighing, there can be different interpretations or contexts about use of energy sources.
The quantity of energy used locally for a specific task is typically referred to as site energy,
such as the electricity we use for recharging a laptop or mobile phone. However, site uses of
energy typically lead to an even greater use of energy elsewhere, such as at a power plant.
The energy conversion performance of a coal-fired power plant and losses from the power
grid mean that for every 3 units of energy in the coal burned at a plant we can use only
about 1 unit of energy at our electrical outlet. That amount of original energy needed, such
as at a power plant, is referred to as primary or source energy. A conversion between
English and metric units (e.g., BTU and MJ) for primary energy is straightforward because
BTU and MJ both represent energy content (e.g., the quantity of BTUs in a gallon of
gasoline). However, our assessment of energy use should certainly include consideration for
the inefficiencies in the conversion processes of our chosen methods, as discussed below.
A related concept that is more specific to the modeling of fuel use pertains to the heating
value of the fuel, which refers to the energy released from combusting the fuel, with units
such as kJ/kg or BTU/lb. Of particular importance is which heating value the lower or
higher heating valueis used. The difference between the lower heating value (LHV) and
the higher heating value (HHV) is whether the energy used to vaporize liquid water in the
combustion process is included or not. While the difference between HHV and LHV is
typically only about 10%, you can often argue that the HHV is a more inclusive metric,
consistent with the system and life cycle perspectives relevant to LCA. Regardless, this is yet
another example of why all relevant assumptions need to be explicit in energy analysis.
We may also need to make assumptions about the conversion process. A difficulty in
converting from BTU to kWh can depend on whether an intermediate thermodynamic
process is involved. For example, many engineering reference manuals suggest the
conversion factor "1 kWh is equal to 3,413 BTU". But this assumes a perfect conversion
with no losses and thus is pure energy equivalence. The likely context behind such a
conversion is an energy process where a fuel input is used to generate a quantity of
electricity, known as a heat rate. However, in describing the conversion of fossil energy
from a fuel in a power plant, the heat rate for a typical coal-fired plant may be 10,000 BTU
(of coal input) to generate 1 kWh (electricity output). The reason that power plant heat rate
is so much larger than the pure engineering conversion factor is that converting coal to
electricity requires burning the coal and then using the produced heat to turn water into
steam, and then using the pressurized steam to spin a turbine, which is connected to a
generator. There are losses throughout all of these steps, and thus far more than the 3,413
BTU are needed to make 1 kWh. The overall difference between the 3,413 BTU and the
10,000 BTU is expressed as a ratio representing the efficiency of the power plant, which is
3,413 BTU per kWh / 10,000 BTU per kWh, or 34%. While this may sound like a
convenient example with rounded off numbers, it is quite common for a traditional coal-fired power plant to have this approximate efficiency. Natural gas plants can have
efficiencies of about 50%. While they do not burn fuel, solar PV cells are about
10% efficient. It may be surprising to you to learn that in the 21st century we rely on such
inefficient methods to make our electricity! The important point of this example is that in
such contexts, you cannot use or assume the basic BTU to kWh conversion factor. You also
need to know either the heat rate or efficiency. Careful management of units and the
conversion process is generally needed when working with fuels. Fuels can be inputs to a
variety of processes, not just making electricity. For example, when used to make heat in a
building, natural gas with an energy content of 40 MJ/m3 may be used in a furnace that is
72% efficient to produce 29 MJ of heat/m3.
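As a minimal sketch, the relationships among energy equivalence, heat rate, and efficiency described above can be written out in a few lines (the heat rate and furnace values are the illustrative ones from the text, not measured data):

```python
# Sketch of the BTU/kWh conversions discussed above. The heat rate and
# furnace values are illustrative assumptions from the text.

BTU_PER_KWH = 3413  # pure energy equivalence, assuming no conversion losses

def efficiency(heat_rate_btu_per_kwh):
    """Plant efficiency: ideal BTU requirement divided by actual fuel BTU used."""
    return BTU_PER_KWH / heat_rate_btu_per_kwh

def fuel_btu_required(kwh_out, heat_rate_btu_per_kwh):
    """Fuel energy (BTU) burned to deliver a given electricity output (kWh)."""
    return kwh_out * heat_rate_btu_per_kwh

coal_heat_rate = 10_000  # BTU of coal input per kWh of electricity output
print(f"coal plant efficiency: {efficiency(coal_heat_rate):.0%}")        # ~34%
print(f"fuel to make 1 kWh: {fuel_btu_required(1, coal_heat_rate)} BTU")

# Heat from natural gas in a furnace: energy content times furnace efficiency
gas_mj_per_m3 = 40
furnace_heat = gas_mj_per_m3 * 0.72   # ~29 MJ of useful heat per m3 of gas
print(f"furnace heat delivered: {furnace_heat:.0f} MJ/m3")
```

The point of wrapping these as functions is that the heat rate (or efficiency) must always be supplied explicitly; there is no universal fuel-to-electricity conversion.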
Overall, while the same documentation guidelines apply, in these cases it is even more
important to document all conversion factors and assumptions used, as other authors might
choose different conversions or efficiencies as a result of personal or domain-specific
knowledge. Many external references detail the various conversions available and needed to
work in the energy domain.
As a final note, "converting" between energy and power is not appropriate for LCA analyses,
but is often done to provide examples or benchmarks to lay audiences. For example, 300
kWh of electricity may be referred to as the quantity consumed when a 30-watt light bulb
runs for 10,000 hours (a bit more than a year of continuous use).

Use of Emissions or Resource Use Factors


Many production processes have releases to the environment, such as the various types of
pollutants mentioned in Chapter 1. For many analyses, an emissions factor is needed to
represent the units of emissions released as a function of some level of activity. We will
discuss specific data sources for emissions factors in later chapters, but most emissions
factors can be found using the same type of methods needed to find primary data sources or
unit conversions. Emissions factors may be sourced from government databases or reports
(e.g., the US EPA's AP-42 database) or technical specifications of a piece of equipment and
as such should be explicitly cited if used. Given the potential for discrepancies in emissions
factors, you should look for multiple sources and represent the factor with a
range of values.
Beyond finding sources, knowledge of existing physical quantities and chemical processes
can be used to find emissions factors. Equation 2-2 can be used to generate a CO2
emissions factor for a combusted fuel based on its carbon content (as found by laboratory
experiments) and an assumed oxidation rate of carbon (the percent of carbon that is
converted into CO2 during combustion):
CO2 emissions from burning fuel (kg/MMBTU) =
Carbon Content Coefficient (kg C/MMBTU) x Fraction Oxidized x (44/12)   (2-2)

where the 44/12 parameter in Equation 2-2 is the ratio of the molecular weight of CO2 to
the atomic weight of carbon, and MMBTU stands for million BTU.
If we were doing a preliminary analysis and only needed an approximate emissions factor, we
could assume the fraction oxidized is 1 (100% or complete oxidation). In reality, the
fraction oxidized could be closer to 0.9 than 1 for some fuels. For example, for coal with a
carbon content of 25 kg C per MMBTU, and assuming complete oxidation, the emissions
factor would be 92 kg CO2 / MMBTU.
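Equation 2-2 can be sketched as a small function; the 25 kg C/MMBTU coal carbon content below is the illustrative value from the text:

```python
# Sketch of Equation 2-2 as a function. The coal carbon content used here
# (25 kg C/MMBTU) is the illustrative value from the text, not measured data.

def co2_factor(carbon_kg_per_mmbtu, fraction_oxidized=1.0):
    """kg CO2 per MMBTU burned; 44/12 converts mass of carbon to mass of CO2."""
    return carbon_kg_per_mmbtu * fraction_oxidized * (44 / 12)

print(f"{co2_factor(25):.0f} kg CO2/MMBTU")       # ~92, assuming complete oxidation
print(f"{co2_factor(25, 0.9):.1f} kg CO2/MMBTU")  # 82.5 at 90% oxidation
```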
Various emissions factors can be developed through similar methods given knowledge of
elemental composition (such as for SO2). Other emissions factors, however, depend on the
choice of combustion and emissions control technologies used (such as for nitrogen oxides
or particulate matter).
In LCA, we will also encounter resource use factors, such as material input factors, which
are developed and used in ways similar to emissions factors. The main difference is that
resource use factors are expressed as a function of inputs rather than outputs.

Estimations vs. Calculations


"It is the mark of an instructed mind to rest satisfied with the degree of precision which
the nature of the subject permits and not to seek an exactness where only an
approximation of the truth is possible." - Aristotle
"God created the world in 4004 BC on the 23rd of October." Archbishop James
Ussher of Ireland, The Annals of the Old Testament, in 1650 AD
".. at nine o'clock in the morning." John Lightfoot of Cambridge, in 1644 AD
Most courses and textbooks teach you how to apply known equations and methods to derive
answers that are exact and consistent (and selfishly, easy to grade). These generally are
activities oriented towards teaching calculation methods. Similarly, methods as described
above can assist in finding and documenting data needed to support calculations. A simple
example of a calculation method is applying a conversion factor (e.g., pounds to kilograms).
More complex calculation methods may involve solving for distance traveled given an
equation relating distance to velocity and time. As solving LCA problems seldom requires
you to learn a completely new calculation method, we presume you have had sufficient
exposure to doing calculations.

But what if all else fails and you cannot find a primary source or a needed unit conversion?
What if we are unable to locate an appropriate calculation method? An alternative method
must be found that assists in finding a quantitative answer, and which preserves a scientific
method, but is flexible enough to be useful without all needed data or equations. Such an
alternative could involve conducting a survey of experts or non-experts, or guessing the
answer. It is this idea of "guessing" the answer that is the topic of this section. Here we
assume that there is a time-critical aspect to the situation, and that you require a rough
guess in lieu of investing substantially more time looking for a source, conducting
a complete survey, etc.
Estimation methods use a mix of qualitative and quantitative approaches to yield a
"ballpark", "back of the envelope", or order-of-magnitude assessment of an answer. These are
not to be confused with the types of estimation done in statistical analyses that are purely
quantitative in nature (e.g., estimating parameters of a regression equation). With estimation
methods, we seek an approximately right answer that is adequate for our purpose, hence the
notion that we are merely looking for an order-of-magnitude result, or one that we could work
out in the limited space of an envelope. The quotations at the beginning of this section
represent the spectrum of exact versus approximate methods being contrasted.
Estimation methods are sometimes referred to as educated guessing or opportunistic
problem solving. As you will see, the intent is to create educated guesses that do not sound
like guesses. The references at the end of this chapter from Koomey, Harte (both focused
on environmental issues), Weinstein and Adam, and Mahajan are popular book-length
resources and are highly recommended reading if you find this topic interesting.
Estimation methods succeed by using a structured approach of creating and documenting
assumptions relevant to the question rather than simply plugging in known values into an
equation. In this context, you need to adjust your expectations (and those of your audience)
to reflect the fact that you are not seeking a calculated value. You may be simply trying to
correctly represent the sign and/or the order of magnitude of the result. "Getting the sign
right" sounds straightforward but can still be difficult. Approximating the order of magnitude
means generating a value where only one significant figure is needed and the "power of 10"
behind it gives a sense of how large or small it is (i.e., is the value in the millions or billions?).
If you come from a "hard science" discipline such as chemistry or physics, the thought of
generating an answer without an equation may sound like blasphemy. But recall the premise
of estimation methods: that you do not have access to, are unable to acquire, or are unfamiliar
with the data and equations needed for a calculated result. We are not suggesting you need
to use estimations to find the force of gravity, the number of molecules per mole, etc. Many
students may have encountered these methods in the form of classroom exercises known as
"Fermi Problems". Furthermore, such estimation challenges are being used more and more
frequently as on-the-spot job interview questions for those entering technical fields.
While the mainstream references mentioned above give many examples of applying
estimation methods, other references are useful for learning the underlying methods.
Mosteller (1977) lists several building block-type methods that can be used and intermixed to
assist in performing estimation. You are likely familiar with many or all of them, but may
not have considered their value in improving your estimation skills:

Rules of thumb: Even a relative novice has various numbers and knowledge in
hand that can help to estimate other values. For example, if performing a financial
analysis it is useful to know the "rule of 72" that defines when an invested amount
will double in value. Likewise, you may know of various multipliers used in a
domain to account for waste, excess, or other issues (e.g., contingency or fudge
factors). The popular Moore's Law for increases in integrated circuit densities over
time is an example. Any of these can be a useful contributor to a good estimation.
Also realize that one person's rule of thumb may be another's conversion factor.

Proxies or similarity: Proxy values in estimation are values we know, used in place of
one we do not know. Of course the needed assumption is that the two values are
expected to be similar. If we are trying to estimate the total BTU of energy
contained in a barrel of diesel fuel, but only had memorized data for gasoline, we
could use the BTU/gallon of gasoline as a proxy for diesel fuel (in reality the values
are quite close, as might be expected since they are both refined petroleum
products). Beyond just straight substitution of values via proxy, we can use similarity
methods to reuse datasets built for other purposes. For example, if we
wanted to estimate leakage rates for natural gas pipelines in the US, we
might use available data from Canada, which has similar technologies and
environmental protection policies.

Small linear models: Even if we do not have a known equation to apply to an
estimation, we can create small linear models to help us. If we seek the total
emissions of a facility over the course of a year, we could use a small linear model
(e.g., of the form y = mx + b) that estimates such a value (y) by multiplying
emissions per workday (m) by number of work days (x). In a sense we are creating
shortcut equations for our needs.
Of course, these small linear models could be even more complicated, for example
by having the output of one equation feed into another. In the example above, we
could have a separate linear model to first estimate emissions per day (perhaps by
multiplying fuel use by some factor). Another way of using such models is to
incorporate growth rates, e.g., by having b as some guess of a value in a previous
year, and mx the product of an estimated growth rate and the number of years since.

Factoring: Factoring is similar to the small linear models mentioned above, except
in purely multiplicative terms. Factoring seeks to mimic a chain of unitized
conversions (e.g., in writing out all of the unitized numerators and denominators for
converting from days in a year to seconds in a year, which looks similar to Equation
2-1). As above, the goal here is to estimate the individual terms and then multiply
them together to get the right value with the right units. The factors in the equation
may consist of constants, probabilities, or separately modeled values.

Bounding: Upper and lower bounds were discussed in the context of creating
ranges for analysis purposes, but can also be used in estimations. Here, we can use
bounds to help set the appropriate order of magnitude for a portion of the analysis
and then use some sort of scaling or adjustment factor to generate a reasonable
answer. For example, if we were trying to estimate how much electricity we could
generate via solar PV panels, using the entire land mass of the world would give us
an upper bound on production. We could then scale down that number by a guess
at the fraction of land that is highly urbanized or otherwise unfit for installation.

Segmentation or decomposition: In this type of analysis, we break up a single
part into multiple distinct subparts, separately estimate a value for each
subpart, and then report the total. If we were trying to estimate fossil-based carbon
dioxide emissions for the US, we could estimate carbon dioxide emissions separately
for fossil-fueled power plants, transportation, and other industries. Each of these
subparts may require its own unique estimation method (e.g., a guess at kg of CO2
per kWh, per vehicle mile traveled, etc.), with the results added together to yield the
original unknown total emissions of CO2.

Triangulation: Using triangulation means that we experiment in parallel with
multiple methods to estimate the same value, and then assess whether to use one of
the resulting values or to generate an average or other combination of them.
Triangulation is especially useful when you are quite uncertain of what you are
estimating, or when the methods you are otherwise choosing have many guesses in
them. You can then control whether to be satisfied with one of your results, or to
use a range. Of course, if your various parallel estimates are quite similar, you could
simply choose a consensus value.
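Several of these building blocks can be combined in a few lines of code. A hypothetical segmentation-style sketch for the US fossil CO2 example above might look like the following, where every activity level and emissions factor is an explicit, loudly labeled guess rather than sourced data:

```python
# Segmentation sketch: guess a value for each subpart, then sum.
# All numbers below are rough guesses for practicing the method,
# not sourced statistics.

segments = {
    "power plants":   4e12 * 0.6,  # ~4 trillion kWh/yr * ~0.6 kg CO2/kWh (guess)
    "transportation": 3e12 * 0.4,  # ~3 trillion miles/yr * ~0.4 kg CO2/mile (guess)
    "other industry": 1.5e12,      # catch-all guess, kg CO2/yr
}

total_kg_co2 = sum(segments.values())
print(f"estimated US fossil CO2: ~{total_kg_co2:.0e} kg/yr")  # ~5e+12 kg, i.e., ~5 Gt
```

Structuring the guesses this way makes each assumption visible and individually replaceable, which is exactly the documentation discipline the building blocks are meant to encourage.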

While Mosteller summarized these specific building blocks, you should not feel limited by
them. Various other kinds of mathematical functions, convolutions, and principles could be
brought to bear to aid in your estimation efforts. Beyond these building blocks, you should
try to create ranges (since you are estimating unknown quantities) by assuming ranges of
constants in your methods or by using ranges created from triangulation. Do not assume
that you can never "look up" a value needed within the scope of your estimation. There may
be some underlying factor that could greatly help you find the unknown value you seek, such
as the population of a country, the total quantity of energy used, etc. You can use these to
help you reach your goal, but be sure to cite your sources for them. It might be useful to
avoid using these reference source values while you are first learning how to do estimation,
and then incorporate them when you are more experienced.
As expressed by several of the building block descriptions, a key part of good estimations is
using a "divide and conquer" method. This means you recursively decompose a high-level
unknown value as an expression of multiple unknown values and estimate each of them
separately. A final recommendation is that you should be creative and also to consider
"outside the box" approaches that leverage personal knowledge or experience. That may
mean using special rules of thumb or values that you already know, or attempting methods
that you have good experience in already. Now that we have reviewed the building blocks,
Example 2-2 shows how to apply them in order to create a simple estimate.
Example 2-2: Estimating US petroleum consumption per day for transportation
Question:
Given that the total miles driven for all vehicles in the US is about 3 trillion miles
per year, how many gallons of petroleum are used per day in the US for transportation?
Answer:
If we assume an average fuel economy of about 20 miles per gallon, we can
estimate that 150 billion gallons (3 trillion miles / 20 miles per gallon) of fuel are consumed per
year. That is about 400 million gallons per day.
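The arithmetic in Example 2-2 is a simple factoring chain and can be sketched directly:

```python
# Factoring sketch of Example 2-2, using the assumptions stated in the example.
miles_per_year = 3e12      # ~3 trillion vehicle-miles driven per year (given)
miles_per_gallon = 20      # assumed fleet-average fuel economy

gallons_per_year = miles_per_year / miles_per_gallon   # 1.5e11 (150 billion)
gallons_per_day = gallons_per_year / 365               # ~4.1e8 (about 400 million)
print(f"~{gallons_per_day:.0e} gallons of fuel per day")
```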

You might also develop estimations to serve a specific purpose of explaining a result to be
presented to a general audience. In these cases you might want to find a useful visual or
mental reference that the audience has, and place a result in that context. Example 2-3
shows how you might explain a concentration of 1 ppb (1 part per billion).
Example 2-3: Envisioning a one part per billion concentration
Question:

How many golf balls would it take to encircle the Earth?

Answer:
Assume that the diameter of a golf ball is approximately 1.5 inches, and that the
circumference of the Earth is about 25,000 miles (roughly 10x the distance to travel coast to
coast in the United States). We can convert 25,000 miles to 1.6 x 10^9 inches. Thus there would
be 1.6 x 10^9 inches / 1.5 inches, or ~1 billion golf balls encircling the Earth.
Thus, if trying to explain the magnitude of a 1 part per billion (ppb) concentration, picture
one red golf ball among the 1 billion white balls lined up along the equator!

Acknowledgment to "Guesstimation" book reference for motivating this example.
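A quick script confirms the golf-ball arithmetic in Example 2-3 (the ball size and Earth circumference are the example's approximate values):

```python
# Check of Example 2-3: golf balls lined up around the equator.
ball_diameter_in = 1.5           # assumed golf ball diameter, inches
earth_circumference_mi = 25_000  # approximate Earth circumference, miles

circumference_in = earth_circumference_mi * 5280 * 12  # miles -> feet -> inches
balls = circumference_in / ball_diameter_in
print(f"~{balls:.1e} golf balls")  # ~1.1e+09, i.e., about one billion
```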

Attributes of Good Assumptions


One of the key benefits of becoming proficient in estimation is that your skills in
documenting the assumptions of your methods will improve. As application of estimation
methods requires you to make explicit assumptions about the process used to arrive at your
answer, it is worth discussing the attributes of good assumptions. You may have the
impression that making assumptions is a bad thing. However, most research has at its core a
structured set of assumptions that serve to refine and direct the underlying method. Your
assumptions may refer specifically to the answer you are trying to find, as well as the
measurement technologies used, the method, or the analysis. You might think of your
assumptions as setting the "ground rules" or listing the relevant information that is believed
to be true. You should make and write assumptions with the following attributes.
1. Clarify and Simplify - First, realize that the whole point of making an assumption is
to help clarify the analysis (or at the least to rule out special cases or
complications). Assumptions ideally also serve to refine and simplify your analysis.
It is not useful to have an assumption that makes things harder either for your
analysis or for the audience to follow your process. For example, if you were trying
to estimate the number of power plants in the US, you might first assume that you
are only considering power plants greater than 50 MW in capacity. Or you might
assume that you are only considering facilities that generate and sell electricity (which
would ignore power plants used by companies to make their own power). By
making these assumptions, you are ruling out a potentially significant number of
facilities (leading to an undercount of the actual), but you have laid out this fact
explicitly at the beginning as opposed to doing it without mention.
It is possible that an assumption may be required in order to make any estimate at all.
For example, you might need to assume that you are only estimating fossil-based
power plants, because you have no idea of the capacities, scale, or processes used in
making renewable electricity.
2. Correct, credible and feasible - If it is not obviously true (i.e., you are not stating
something that is a well known fact), your audience should read an assumption and
feel that it is valid - even if hard to believe or agree with. For example, you should
not assume a conversion factor inconsistent with reality, such as there being only
four days in a week or 20 hours in a day.
3. Not a shortcut - While assumptions help to narrow down and refine the space in
which you are seeking an answer, they should not serve to merely carve out an overly
simple path towards a trivial solution. Your audience should not be left with the
impression that you ran out of time or interest in finding the answer and that you
substituted a good analysis with a convenient analysis. For example, you might
assume that you were only counting privately owned power plants. This is a
narrowing of the boundaries of the problem, but does not sound like you are
purposely trying to make the problem trivial.
4. Unbiased - Your assumptions should not incorporate a hidden connection to
some unrelated factor. For example, in estimating the number of power plants you
do not want to rely on a geographical representation associated with the number of
facilities that make ethanol, which are highly concentrated in areas where crops like
corn grow.
Beyond listing them, it is good practice to explicitly write a justification for your
assumptions. In the power plant example above, the justification for why you will only
count relatively large (> 50 MW) facilities might be "because you believe that the number of
plants with smaller capacities is minimal given the demands of the power grid". Since you're
looking for an order of magnitude estimate, neglecting part of the solution space should
have no practical effect. In the case of assuming only privately owned facilities, the
justification might simply specify that you are not estimating all plants, just those that are
privately owned. In Example 2-2, the 20 miles per gallon assumed fuel economy is
appropriate for passenger vehicles, but not for the many trucks and buses also on the road.
In that example, it would be useful to state and justify an assumption explicitly, such as
"Assuming that most of the miles traveled are in passenger vehicles, which have a fuel
economy of 20 miles per gallon, ..."
Writing out the thought process behind your assumptions helps to develop your professional
writing style, and it helps your audience to more comfortably follow and appreciate the
analysis you have done. Furthermore, by becoming proficient at writing up the assumptions
and process used to support back of the envelope calculations, you become generally
proficient at documenting your methods. Hopefully you will leverage these writing skills in
other tasks.
If instead you do not state all of your assumptions, readers are left to figure them
out for themselves, or to create their own assumptions based on your incomplete
documentation. Needless to say, either of those options raises the possibility that they
make bad assumptions about your work.

Validating your Estimates


When you have to estimate a quantity, it is important that you attempt to ensure that the
value you have estimated makes sense (see the discussion earlier in this chapter about
reasonableness tests). Even though you have estimated a quantity that you were unable to
find a good citation for originally, you should still be able to validate it by comparing it to
other similar values.
As a learning experience, you might try to estimate a quantity whose true value is known
and findable, but whose answer you do not already know (e.g.,
the number of power plants in the US, or a value that you could look up in a statistical
abstract). Doing so helps you to hone your skills with little risk, meaning that you can try
various methods and directly observe which assumptions help you arrive at values closest to
the "real answer" and track the percentage error in each of your attempts before looking at
the real answer. The goals in doing so are explicitly to learn from doing many estimates of
various quantities (not just 5 attempts at the same unknown value) and to increasingly
understand why your estimates differ from the real answers. You may not be making good
assumptions, or you might be systematically always guessing too high or too low. It is not
hard to become proficient after you have tried to estimate 5-10 different values on your own.
When doing so, try to apply all of the building block methods proposed by Mosteller.
Example 2-4: Validating Result found in Example 2-2
In Example 2-2, we quickly estimated that the transportation sector consumes 400 million
gallons per day of petroleum.
The US Energy Information Administration reports that about 7 billion barrels of crude oil and
other petroleum products were consumed in 2011. About 1 billion barrels equivalent was for
natural gas liquids not generally used in transportation. That means about 16 million barrels
per day (roughly 700 million gallons per day at 42 gallons per barrel) was consumed.
That is nearly twice as high as our estimate in Example 2-2, but still in the same order of
magnitude.
Let's think more about the reasons why we were off by nearly a factor of two. First, we
attempted an estimate in one paragraph with two assumptions. The share of passenger vehicles
in total miles driven is not 100%: heavy trucks represent about 10% of the miles traveled and
about one-fourth of fuel consumed (because their fuel economies are approximately 5 mpg, not 20).
Considering these deviations, our original estimate, while simplistic, was useful.
Sources:
US DOE, EIA, Annual Petroleum and Other Liquids Consumption Data
http://www.eia.gov/dnav/pet/pet_cons_psup_dc_nus_mbbl_a.htm
US Department of Transportation, Highway Statistics 2011, Table VM-1.
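The comparison in Example 2-4 can be reproduced in a few lines (the consumption figures are the EIA values cited in the example; a standard US petroleum barrel holds 42 gallons):

```python
# Validation sketch for Example 2-4, using the EIA figures cited above.
barrels_per_year = 7e9 - 1e9  # total consumption minus natural gas liquids
gallons_per_barrel = 42       # a standard US petroleum barrel

gallons_per_day = barrels_per_year * gallons_per_barrel / 365
estimate = 4e8                # our quick estimate from Example 2-2

print(f"reported: ~{gallons_per_day:.1e} gallons/day")         # ~6.9e+08
print(f"ratio to estimate: {gallons_per_day / estimate:.1f}")  # within a factor of ~2
```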

Beyond validating your own estimates, you might also want to do a reasonableness test on
someone else's value. You will often find numbers presented in newspapers and magazines, as
well as scholarly journals, that you are curious about or that fail a quick sniff test. You can use
the same estimation methods to validate those numbers. Just because something is
published does not necessarily mean it has been extensively error-checked. Mistakes happen
all the time and errata are sometimes (but not always) published to acknowledge them.
Example 2-5 shows a validation of values published in mainstream media pertaining to
EPA's proposed 2010 smog standard.

Example 2-5: Validating a comparative metric used in a policy discussion


Question:
Validate the number of tennis balls in the following CBS News excerpt (2010)
pertaining to the details of EPA's proposed 2010 smog standard.
"The EPA proposal presents a range for the allowable concentration of ground-level ozone,
the main ingredient in smog, from 60 parts per billion to 70 parts per billion. That's
equivalent to 60 to 70 tennis balls in an Olympic-sized swimming pool full of a billion tennis
balls."
Answer:
Suppose your sniff test fails because you realize a billion tennis balls is a very
large number of balls for this pool. A back-of-the-envelope estimate suggests the approximate
volume of an Olympic pool is 50 m x 25 m x 2 m = 2,500 cubic meters. Similarly, assume a tennis
ball occupies a cube about 70 mm (2.7 inches, or 0.07 m) on a side, so it has a volume of roughly
0.00034 m^3. Such a pool holds only about 7 million tennis balls, more than two orders of magnitude less
than the 1 billion suggested in the excerpt. Of course, we could further refine our assumptions,
such as making the pool uniformly deeper, or noting that stacked tennis balls fill in some of the
voids of their bounding cubes, but no refinement would fully account for an orders-of-magnitude
difference.
You cannot put a billion tennis balls in an Olympic-sized pool, thus the intended reference
point for the lay audience was erroneous. It is likely an informal reference from the original
EPA Fact Sheet was copied badly in the news article (e.g., "60-70 balls in a pool full of balls").
Thanks to Costa Samaras of Carnegie Mellon University for this example.
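The pool arithmetic in Example 2-5 takes only a few lines to verify (the dimensions and ball size are the example's rough assumptions):

```python
# Sniff test from Example 2-5: tennis balls in an Olympic pool.
pool_volume_m3 = 50 * 25 * 2  # length x width x depth, ~2,500 m^3
ball_cube_m3 = 0.07 ** 3      # bounding cube of a ~70 mm ball

balls = pool_volume_m3 / ball_cube_m3
print(f"~{balls:.0e} balls")  # ~7e+06: millions, nowhere near a billion
```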

Now that we have built important general foundations for working with and manipulating
data, we turn our attention to several concepts more specific to LCA.

Building Quantitative Models


Given all of the principles above, you should now be prepared to build the types of models
needed for robust life cycle thinking. These models have inputs and outputs. The inputs are
the various parameters, variables, assumptions, etc., and the output is the result of the
equation or manipulation performed on the inputs. In a typical model, we have a single set
of input values and a single output value. If we have ranges, we might have various sets of
inputs and multiple output values. Beyond these typical models there are other types of
models we might choose to build that are less straightforward.
In a breakeven analysis, you solve for the input value that has a prescribed effect on your
model. A classic example, and where the name "breakeven" comes from is if you are
building a profit or loss model, where your default model may suggest that profits are
expected to be positive (i.e., the result is greater than $0). A relevant breakeven analysis may
assess the input value (e.g., price of electricity or number of units sold) needed to lead to a
no-profit outcome, i.e., a $0 (or negative) value; this tells you what is needed in order to
"break even" or make a profit. This is simply back-solving to find the input required to meet
the specified conditions of the result. Not all breakeven analyses need to be about monetary
values, and they do not need to be set against zero. Using the example of Equation 2-1, you
could back-solve for the transit time for a tugboat moving at a speed of 5 km/hr. While the
math is generally easy for such analyses, common software like Microsoft Excel has a built-in
tool (Goal Seek) to automate them. Goal Seek is quite comprehensive in that it can solve for a
breakeven value across a fairly complicated spreadsheet of values.
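Outside of a spreadsheet, the same Goal Seek idea can be sketched with a simple bisection search. The profit model below (price, unit cost, fixed cost) is entirely hypothetical:

```python
# Breakeven sketch: back-solve for the units sold that make profit exactly zero,
# mimicking Excel's Goal Seek with bisection. All model values are hypothetical.

def profit(units, price=10.0, unit_cost=6.0, fixed_cost=50_000.0):
    return units * (price - unit_cost) - fixed_cost

def solve_breakeven(f, lo, hi, tol=1e-6):
    """Bisection search: f(lo) and f(hi) must bracket zero (opposite signs)."""
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if f(lo) * f(mid) <= 0:
            hi = mid
        else:
            lo = mid
    return (lo + hi) / 2

units_needed = solve_breakeven(profit, 0, 1_000_000)
print(f"breakeven at ~{units_needed:,.0f} units")  # 12,500 units
```

Bisection is a deliberately simple solver; it works for any model where you can bracket the target value, which is essentially what Goal Seek does behind the scenes.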
The final quantitative skill in this chapter is about identifying how robust your results are to
changes in the parameters or inputs of your model. In a sensitivity analysis, you vary
the inputs to the model individually, and assess the degree to which changing the value
of each input has a meaningful effect on the results (it is called a sensitivity analysis because
you are seeing how "sensitive" the output is to changes in the inputs). By meaningful, you
are, for example, assessing whether the sign of the result changes from positive to negative,
or whether it changes significantly, e.g., by an order of magnitude, etc. If small changes in
input values have a big effect on the output, you would say that your output is sensitive. If
even large changes in the inputs have modest effect on the output, then the output is not
sensitive. If any such results occur across the range of inputs used in the sensitivity analysis,
then your qualitative analysis should support that finding by documenting those outcomes.
Note that a sensitivity analysis changes each of your inputs independently (i.e., changing one
while holding all other inputs constant). You perform a sensitivity analysis on all inputs
separately and report when you identify that the output is sensitive to a given input. Again
referring to the tugboat example (Equation 2-1) we could model how the speed varies as the
time in transit varies over a range of 20 minutes to more than 4 hours. Figure 2-4 shows the
result of entering values for transit time in increments of 20 minutes into Equation 2-1. It
suggests that the speed is not very sensitive to large transit times, but changes significantly
for small transit times. We will show more examples of breakeven and sensitivity analyses in
Chapter 3.
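The one-at-a-time procedure just described can be sketched in a short script. Since Equation 2-1 appears earlier in the chapter, the sketch below assumes it is the familiar speed = distance / time relationship, and the 10 km trip distance is a hypothetical value chosen only for illustration.

```python
distance_km = 10.0  # hypothetical trip distance (an assumed value)

# Vary transit time from 20 minutes to 4 hours in 20-minute increments,
# holding distance fixed, and record the resulting speed (km/hr).
speeds = {}
for minutes in range(20, 260, 20):
    speeds[minutes] = distance_km / (minutes / 60)

for minutes, speed in speeds.items():
    print(f"{minutes:3d} min -> {speed:5.1f} km/hr")
```

Scanning the printed values reproduces the pattern described for Figure 2-4: the speed swings widely for short transit times but flattens out as transit time grows.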

Life Cycle Assessment: Quantitative Approaches for Decisions That Matter lcatextbook.com


Chapter 2: Quantitative and Qualitative Methods Supporting Life Cycle Assessment

Figure 2-4: Sensitivity of Tugboat Speed to Transit Time

A Three-step method for Quantitative and Qualitative Assessment


We conclude the chapter with suggestions on how to qualitatively and quantitatively answer
questions. LCA is about life cycle assessment. While we have not yet demonstrated the
method itself, it is important to develop assessment skills. If you are doing quantitative work
(as you will need to do to successfully complete an LCA), a general guideline is that you
should think of each task as having three parts:
(1) A description of the method used to complete the task,
(2) The result or output (quantitative or qualitative) of the task, and
(3) A critical assessment, validation, or thought related to the result.
The amount of time and/or text you develop to document each of these 3 steps varies based
on the expectations and complexity of the task (and perhaps within the constraints of the
size of a study).
In step one, you should describe any assumptions, data sources found, equations needed, or
other information required to answer the question. In step two, you state the result,
including units if necessary. In step three, you somehow comment on, validate, or otherwise
reflect on the answer you found. This is an important step because it allows you to both
check your work (see the example about unit conversions above) and to convince the reader
that you have not only done good work but have also spent some time thinking about the
implication of the result. For example, a simple unit conversion might be documented with
the three-step method as follows:
"Inputs of plastic were converted from pounds to kg (2.2 lbs. per 1 kg), yielding 100 kg
of inputs. This value represents 20% of the mass flows into the system."

Each of the three expected steps is documented in those 2 sentences: the method (a basic
unit conversion), the result (100 kg), and an assessment (20% of the total flows). If this were
part of an assignment, you could envision the instructor deciding on how to give credit for
each part of the question, e.g., 3 points for the method, 2 points for the result, and 2 points
for the assessment. Such a rubric would emphasize the necessity of doing each part, and
could also formalize the expectations of working in this manner and forming strong model
building habits. For many types of problem solvingespecially those related to LCA, where
many answers are possible depending on how you go about modeling the problemthe
emphasis may be on parts 1 and 3, relatively de-emphasizing the result found in part 2. In
other domains, such as in a mathematics course, the result (part 2) may be the only
significant part in terms of how you are assessed. Regardless, you probably still used a
method (and may have briefly shown it by writing an equation and applying it), and
hopefully tried to quickly check your result to ensure it passed a reasonableness test, even if
you did not in detail write about each of those steps.
A way of remembering the importance of this three-step process is that your answer should
never simply be a graph or a number. There is always a need to discuss the method you
used to create it, as well as some reflection on the value. Regardless of the grading
implications and distributions, hopefully you can see how this three-step process always
exists; it is just a matter of translating the question or task presented to determine how
much effort to make in each part, and how much documentation to provide as an answer.
You will find that performing LCA constitutes assembling many small building block
calculations and mini-models into an overall model. If you have mostly ignored how you
came up with these building block results, it will be difficult to follow your overall work, and
to follow how the overall result was achieved.

Chapter Summary
In LCA, any study will be composed of a collection of many of the techniques above. You'll
be piecing together emissions factors and small assumption-based estimates, generating new
estimates, and summarizing your results.
A frequently stated reason for why people enter the field of science or engineering is that
they are more comfortable with numbers or equations than they are with "writing". But
communicating your method, process, and results via writing is an especially important skill
in conducting life cycle assessment.


References for this Chapter


CBS News, "Reversing Bush, EPA Toughens Smog Rules", via Internet,
http://www.cbsnews.com/news/reversing-bush-epa-toughens-smog-rules/, last accessed
July 20, 2014.
Harte, John, Consider a Spherical Cow: A Course in Environmental Problem Solving,
University Science Books, 1988.
Koomey, Jonathan, Turning Numbers into Knowledge, Analytics Press, 2008.
Mahajan, Sanjoy, Street-Fighting Mathematics: The Art of Educated Guessing and
Opportunistic Problem Solving, MIT Press, 2010.
Mosteller, Frederick, "Assessing Unknown Numbers: Order of Magnitude Estimation", in
Statistics and Public Policy, William Fairley and Frederick Mosteller, editors, Addison-Wesley, 1977.
NOAA 2012, Surveying: Accuracy vs. Precision, via Internet,
http://celebrating200years.noaa.gov/magazine/tct/tct_side1.html
U.S. Census Bureau, Statistical Abstract of the United States: 2012 (131st Edition)
Washington, DC, 2011; available at http://www.census.gov/compendia/statab/
Weinstein, Lawrence, and Adams, John A., Guesstimation: Solving the World's Problems on
the Back of a Cocktail Napkin, Princeton University Press, 2008.

End of Chapter Questions


1. Find and reference three primary sources for the amount of energy used in
residences in the United States. Validate your findings as possible.
2. Find the fraction of the population that lives in cities versus rural areas in the US, or
in your home state. Validate your findings as possible.
3. Estimate the total weight of the population in your home state.
4. Estimate the number of hairs on your head.
5. Estimate the number of swimming pools in Los Angeles.

Chapter 3 : Life Cycle Cost Analysis


In this chapter, we begin our discussion of life cycle analytical methods by overviewing the
long-standing domain of life cycle cost analysis (LCCA). It is assumed that the reader
already understands the concepts of costs and benefits; if not, a good resource is our
companion e-book on civil systems planning (Hendrickson and Matthews 2013). The
methods and concepts from this domain form the core of energy and environmental life
cycle assessment that we will introduce in Chapter 4. We describe the ideas of "first cost"
and "recurring costs", as well as methods to put all of the costs over the life of a product or
project into the financial-only basis of common monetary units.

Learning Objectives for the Chapter


At the end of this chapter, you should be able to:
1. Describe the types of costs that are included in a life cycle cost analysis.
2. Assess the difference between one-time (first) costs and recurring costs.
3. Select a product or project amongst alternatives based on life cycle cost.
4. Convert current and future monetary values into common monetary units.

Life Cycle Cost Analysis in the Engineering Domain


Material, labor, and other input costs have been critical in the analysis of engineered systems
for centuries. Studies of costs are important to understand and make decisions about
product designs or decisions as these will inevitably allow you to profit from a successful
one. Separate from cost is the concept of benefit, which includes the value you would
receive from an activity such as using a product. Many of the cost models used to support
engineering decisions have been relatively simple, for example, summing all input costs and
ensuring they are less than the funds budgeted for a project.
Engineers have been estimating the whole life cycle cost of built infrastructure for decades.
Life cycle cost analysis (LCCA) has been used to estimate and track lifetime costs of
bridges, highways, other structures and manufactured goods because important and costly
decisions need to be made for efficient management of social resources used by these
structures and goods. Early design and construction decisions are often affected by future
maintenance costs. Given this history, LCCA has most often been used for decision support
on fairly large-scale projects. It has also, however, been applied to individual products.
LCCA is often performed as a planning tool early in a project but is also done during a
project's lifetime. We focus on LCCA because its economic focus across the various life
cycle phases is very similar to the frameworks we will need to build for our energy and
environmental life cycle models. If you can understand the framework, and follow the
quantitative inputs and models used, you will better be able to understand LCA.
The project that is already being undertaken or is already in place is typically referred to as
the status quo. Key to the foundation of LCCA is a set of alternative designs (or
alternatives) to be considered, which may vary significantly or only slightly from one
another. These alternatives may have been created specifically in an attempt to reduce costs,
or may simply be alternatives deviating from an existing project design along non-cost
criteria.
With respect to the various costs that may be incurred across the life cycle, first (or initial)
cost refers to costs incurred at the beginning of a project. First cost generally refers only to
the expense of constructing or manufacturing, as opposed to any overhead costs associated
with designing a product or project; it is the "shovel in the ground" or factory cost. While
design and other overhead costs may be routinely ignored in cost analyses, and in LCA, they
are real costs that are within the life cycle. Future costs refer to costs incurred after
construction/manufacture is complete and typically occur months to years after. Recurring
costs are those that happen with some frequency (e.g., annually) during the life of a project.
In terms of accounting and organization, these costs are often built into a timeline, with first
costs represented as occurring in "year 0" and future/recurring costs mapped to subsequent
years in the future. The sum of all of these costs is the total cost. The status quo will often
involve using investments that have already been made. The original costs of these
investments are termed sunk costs and should not be included in estimation of the new life
cycle cost of alternatives from the present period.
Beyond civil engineering, LCCA also presents itself in concepts such as whole-life cost or
total cost of ownership, which consumers may be more familiar with. Total cost of
ownership (TCO) is used in the information technology industry to capture costs of
purchasing, maintaining, facilities, training users, and keeping current a hardware-software
system. TCO analyses have been popular for comparisons between proprietary software and
open source alternatives (e.g., Microsoft Office vs. OpenOffice) as well as for operating
systems (Mac vs. Windows). However, not all decisions are made on the basis of the
minimum TCO: despite many TCO studies showing lower costs, neither Mac nor
OpenOffice has gained substantial market share. Before discussing LCCA in the context of some
fairly complex settings, let us first introduce a very simple but straightforward example that
we will revisit throughout the book.


Example 3-1: Consider a family that drinks soda. Soda is a drink consisting of carbonated
water, flavoring and (usually) sweetener. The family's usual way of drinking it is buying 2-liter
bottles of soda from a store at a price of $1.50 each. An alternative is to make soda on demand
with a home soda machine. The machine carbonates a 1-liter bottle of water, and the user adds
a small amount of flavor syrup (with or without sweetener) to produce a 1-liter bottle of
flavored soda. An advantage of a home soda machine is that it is easily stored, and the use of
flavor bottles removes the need to purchase and store soda bottles (which are mostly water) in
advance. Soda machines cost $75 and come with several 1-liter empty bottles and a carbonation
canister for 60 liters of water. Flavor syrup bottles cost $5 and make 50 8-ounce servings (12
liters) of flavored soda. Additional carbonation canisters cost $30.
Question:
If the family drinks 2 liters of soda per week (52 per year), compare the costs of
2-liter soda bottles with the purchase of a soda maker and flavor bottles over a one-year period.
Answer:
The cost of soda from a store is $1.50 * 1 bottle = $1.50 per week, or $78 per year. Note
that this cost excludes any cost of gasoline or time required for shopping. For the soda machine
option, we need a soda machine ($75) and sufficient flavor syrup bottles to make 104 liters of
soda (about 9 bottles or $45), and would use the entire first (free) carbonation canister and
most of a second ($30). Thus the soda machine cost for a year is $150. This cost also excludes
any cost of water, gasoline or time (as well as unused syrup or carbonation). Over a one-year
period, the life cycle cost or total cost of ownership for a soda machine is almost double that of
store-bought bottles. The soda machine provides additional benefit for those who dislike
routine shopping or have a high value of their time, which we noted has not been included.

We can use the methods of Chapter 2 to find breakeven values for the soda machine.
Example 3-2: Find the breakeven price of soda bottles in Example 3-1 compared to buying a
soda machine over one year, without considering discounting.
Answer:
The breakeven price is the price at which the total costs of the two options are
exactly equal. The cost of soda bottles is $72 per year less expensive than the machine ($78 vs.
$150). You could either divide $72 by 52 bottles ($1.38 per bottle) and add it to the
current price ($1.50 + $1.38, or about $2.88/bottle), or explicitly solve for the price per bottle
using the equation $150 = 52 bottles * p, where p is the price per bottle. Either way we find
that, at about $2.88 per bottle, purchased soda will cost the same as homemade soda over a
one-year period.
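The arithmetic of Examples 3-1 and 3-2 can be scripted directly. All of the figures below come from the examples; the ceiling calls reflect the examples' approximation that partially used syrup and carbonation canisters are paid for in full.

```python
import math

liters_per_year = 2 * 52        # 2 liters of soda per week
bottle_price = 1.50             # store price per 2-liter bottle

# Option 1: store-bought 2-liter bottles
store_cost = (liters_per_year / 2) * bottle_price            # $78/year

# Option 2: soda machine ($75, includes one 60-liter carbonation canister)
syrup_cost = math.ceil(liters_per_year / 12) * 5.0           # 9 bottles -> $45
extra_canisters = math.ceil((liters_per_year - 60) / 60)     # 1 more needed
machine_cost = 75.0 + syrup_cost + extra_canisters * 30.0    # $150/year

# Breakeven store price (Example 3-2): solve $150 = 52 bottles * p
breakeven_price = machine_cost / (liters_per_year / 2)
print(f"{breakeven_price:.2f}")  # 2.88
```

Scripting even a small example like this makes it easy to change one assumption (say, the weekly consumption) and immediately see the effect on the comparison.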

Discounting Future Values to the Present


While a full discussion of the need to convert future values to present values as a common
monetary unit is beyond the scope of this chapter, this activity is shown to ensure that the
time value of money is represented. There are many resources available to better understand
the theory of such adjustments, including Au and Au's Engineering Economics book (1992).
In short, though, just like other values that are increased or decreased over time due to
growth or decay rates, financial values can and should be adjusted if some values are in
current (today's) dollars and values in the future are given in then-current values. If that is
the case, there is a simple method to adjust these values, as shown in Equation 3-1:
F = P (1 + r)^n    or, equivalently,    P = F (1 + r)^-n        (3-1)

where P represents a value in Present (today's) dollars, F represents a value in Future dollars,
r is the percentage rate used to discount from future to present dollars, and n is the number
of years between present and future. Equation 3-1 can be used to convert any future value
into a present value. Equation 3-1 is usually used with constant dollars, which have been
adjusted for the effects of inflation (not shown in this chapter). When values are plugged
into Equation 3-1, the (1 + r)^n and (1 + r)^-n terms are referred to as future and present
discounting factors. Thus if r = 5% and n = 1, the present discounting factor is (1.05)^-1 =
0.952, which means a future value of $100 would be discounted by about 4.8%, to $95.24.
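Equation 3-1 translates directly into a pair of one-line functions:

```python
def future_value(p, r, n):
    """F = P (1 + r)^n: the value of P after growing n years at rate r."""
    return p * (1 + r) ** n

def present_value(f, r, n):
    """P = F (1 + r)^-n: discount a value n years in the future to today."""
    return f * (1 + r) ** -n

pv = present_value(100, r=0.05, n=1)
print(f"{pv:.2f}")  # 95.24
```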
Example 3-3: What is the present total cost over 5 years of soda made at home using the
approximated costs above (ignoring unused syrup and carbonation) at a discount rate of 5%?
Answer:

The table below summarizes the approximated costs for each of the five years.

                    Year 0   Year 1   Year 2   Year 3   Year 4   Year 5
  Soda Machine        $75
  Flavor                        $45      $45      $45      $45      $45
  Carbonators                   $30      $60      $60      $60      $60
  Total               $75       $75     $105     $105     $105     $105

The soda machine is bought at the beginning of Year 1 (a.k.a. Year 0) and costs $75. It does not
need to be discounted as that is already in present dollars. It would cost $45 for flavor bottles
in each Year 1 through 5. The first carbonator is free, but the second costs $30 in Year 1. Two
are needed ($60) in every subsequent year. Thus the present cost (rounded off to 2 significant
digits with present discounting factors of 0.952, 0.907, 0.864, 0.823, and 0.784) is:
Present Value of Cost = $75 + $75/1.05 + $105/1.05^2 + $105/1.05^3 + $105/1.05^4 + $105/1.05^5
= $75 + $71 + $95 + $91 + $86 + $82 = about $500
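As a check on the arithmetic in Example 3-3, the discounted total can be computed in a couple of lines:

```python
def discounted_total(costs_by_year, r):
    """Sum a cost stream discounted to year 0 (Equation 3-1 applied per year);
    costs_by_year[0] is the first cost, incurred in year 0."""
    return sum(cost / (1 + r) ** year
               for year, cost in enumerate(costs_by_year))

soda_costs = [75, 75, 105, 105, 105, 105]   # Year 0-5 totals from the table
total = discounted_total(soda_costs, r=0.05)
print(round(total))  # 501, i.e., about $500
```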


Now that we have a slightly more rigorous way of dealing with costs over time, let us
consider the advanced Example 3-4 comparing life cycle costs of two different new cars.
Example 3-4: Consider a new car purchase decision for someone deciding between a small
sedan or a small hybrid electric vehicle. A key part of such a comparison is to assume that the
buyer is considering otherwise equivalent vehicles in terms of features, size, and specifications.
Given this constraint we compare a 2013 Toyota Corolla with a 2013 Toyota Prius. While the
engines are different, the seating capacity and other functional characteristics are quite similar.
Question:
What are the total costs over 5 years for the two cars assuming 15,000 miles
driven per year?
Answer:
Edmunds.com (2013) has a "True Cost to Own" calculator tool that makes this
comparison trivial. Note that the site assumes the equivalent of financing the car, and the
values are not discounted. Selecting just those vehicles and entering a zip code gives values that
should look approximately like the values listed below. Even driving 15,000 miles per year, the
Prius would be $5,000 more expensive than just buying a small, fuel-efficient Corolla.
2013 Toyota Corolla

                    Year 1   Year 2   Year 3   Year 4   Year 5     Total
Depreciation        $2,766   $1,558   $1,370   $1,215   $1,090    $7,999
Taxes & Fees        $1,075      $36      $36      $36      $36    $1,219
Financing             $572     $455     $333     $205      $74    $1,639
Fuel                $1,892   $1,949   $2,007   $2,068   $2,130   $10,046
Insurance           $2,288   $2,368   $2,451   $2,537   $2,626   $12,270
Maintenance            $39     $410     $361     $798   $1,041    $2,649
Repairs                 $0       $0      $89     $215     $314      $618
Tax Credit              $0                                            $0
True Cost to Own    $8,632   $6,776   $6,647   $7,074   $7,311   $36,440

2013 Toyota Prius

                    Year 1   Year 2   Year 3   Year 4   Year 5     Total
Depreciation        $7,035   $2,820   $2,481   $2,200   $1,973   $16,509
Taxes & Fees        $1,919      $36      $36      $36      $36    $2,063
Financing           $1,045     $830     $608     $375     $134    $2,992
Fuel                $1,098   $1,131   $1,165   $1,200   $1,236    $5,830
Insurance           $1,920   $1,987   $2,057   $2,129   $2,203   $10,296
Maintenance            $39     $423     $381     $786   $1,784    $3,413
Repairs                 $0       $0      $89     $215     $314      $618
Tax Credit              $0                                            $0
True Cost to Own   $13,056   $7,227   $6,817   $6,941   $7,680   $41,721


Life Cycle Cost Analysis for Public Projects


The examples above were all centered on life cycle costing for personal or individual
decisions. However, as introduced, generally LCCA is applied to public projects such as
buildings or infrastructure. Life cycle stages of infrastructure systems are similar to those we
discussed in Chapter 1. They also rely on resource extraction and assembly, although
infrastructure is generally constructed rather than manufactured. The use phase is
occupation or use by the public. The use phase also involves maintenance, repair, or
rehabilitation activities. The end of life phase is when it is demolished, either because it is no
longer needed or is being replaced.
LCCA is a useful tool to help assess how various decisions will affect cost. For example, a
particular design may be adjusted, resulting in increased initial cost, but as a means to reduce
planned maintenance costs. The design change could take the form of a planned increase in
the expected time until rehabilitation, reduction in the actual expenditure at time of
maintenance, or by changing the cost structure.
LCCA also has a fairly large scope of stakeholder costs to include, accounting for both
owner costs and user costs over the whole life cycle, as shown in Equation 3-2.
Life Cycle Costs (LCC) = Σ Owner Costs_t + Σ User Costs_t        (3-2)

where the sums are taken over each year t of the life cycle (with future-year costs discounted
to the present per Equation 3-1).

Owner costs are those incurred by the party responsible for the product or project, while
user costs are incurred by the stakeholders who make use of it. For example, a state or local
department of transportation may own a highway, but local citizens will be the users. The
owner costs are straightforward to consider: they are the cost of planning and building the
highway. The user costs might include the value of drivers' time spent waiting in traffic (and
thus, we have incentive to choose options which would minimize this cost). User costs may
be quite substantial and a multiple or order of magnitude higher than the owner costs.
Figure 3-1 organizes the various types of life cycle costs in rows and columns and shows
example costs for a highway project. For products purchased by private parties, owner and
user costs are the same category and do not need to be distinguished.


Category   First (Year 0)     Recurring (Year 1)   Recurring (Year 2)   ...   Recurring (Year n)

Owner      Design             Financing            Financing                  Financing
           Construction       Maintenance          Maintenance                Rehabilitation

User                          Vehicle Use          Vehicle Use                Vehicle Use
                              Tolls                Tolls                      Tolls
                              Cost of Time         Cost of Time               Cost of Time
                              Driving              Driving                    Driving

Figure 3-1: Example Life Cycle Cost Table for Highway Project

LCCA focuses only on costs. While differences in costs between two alternatives may be
considered "benefits", true benefit measures are not used. In the end, LCCA generally seeks
to find the least costly project alternative over the life cycle considering both owner and user
costs. But since agency decision makers are responsible for the project over the long run,
they could be biased towards selecting projects with minimum owner life cycle costs
regardless of user costs since user costs are not part of the agency budget. This stakeholder
difference will also manifest itself when we discuss LCA later in the book, as there may be
limited benefits to a company making their product have a lower environmental impact if
the consumer is the one who will benefit from it (e.g., if it costs more for the company to
produce and perhaps reduces profits but uses less electricity in the use phase).
Various government agencies suggest and expect LCCA practices to be part of the standard
toolbox for engineers, planners, and decision makers. The US Federal Highway
Administration (FHWA) has promoted LCCA since 1991 and the US Department of
Transportation (DOT) created a Life Cycle Cost Analysis Primer (2002) to formalize their
intentions to have engineers and planners use the tool in their practice. In this document
they describe the following steps in LCCA:
1. Establish design alternatives, including status quo
2. Determine activity timing
3. Estimate costs (agency and user)
4. Compute life-cycle costs (LCC)
5. Analyze the results
While we have described most of these steps already, we emphasize them to demonstrate
that LCCA does not end with simply determining the life cycle costs of the various
alternatives. It is a multi-step process and it ends with an expected conclusion and analysis,
building upon the three-step method we introduced in Chapter 2. Such an analysis may reveal
that the LCC of one of the alternatives is merely 1% less than the next best alternative, or
that it is 50% less. It might also indicate that there is too much uncertainty in the results to
make any conclusion. In the end, the analyst's result may not be the one chosen by the
decision maker due to other factors such as budgets, politics, or different assessments of the
relative worth of the various cost categories. Regardless, the act of analyzing the results is a
critical component of any analytical framework.

Deterministic and Probabilistic LCCA


Our examples so far, as well as many LCCAs (and LCAs, as we will see later) are
deterministic. That means they are based on single, fixed values of assumptions and
parameters but more importantly it suggests that there is no chance of risk or uncertainty
that the result might be different. Of course it is very rare that there would be any big
decision we might want to make that lacks risk or uncertainty. Probabilistic or stochastic
models are built based on some expected uncertainty, variability, or chance.
Let us first consider a hypothetical example of a deterministic LCCA as done in DOT
(2002). Figure 3-2 shows two project alternatives (A and B) over a 35-year timeline.
Included in the timeline are cost estimates for the life cycle stages of initial construction,
rehabilitation, and end of use. An important difference between the two alternatives is that
Alternative B has more work zones, which have a shorter duration but that cause
inconvenience for users, leading to higher user costs as valued by their productive time lost.
Following the five-step method outlined above, DOT developed the values summarized in Figure 3-2.


Figure 3-2: Deterministic LCCA for Construction Project

Without discounting, we could scan the data and see that Alternative A has fewer periods of
disruption and fairly compact project costs in three time periods. Alternative B's cost
structure (for both agency and user costs) is distributed across the analysis period of 35
years. Given the time value of money, however, it is not obvious which might be preferred.
At a 4% rate, the discounting factors using Equation 3-1 for years 12, 20, 28, and 35 are
0.6246, 0.4564, 0.3335, and 0.2534, respectively. Thus for Alternative A the discounted life
cycle agency costs would be $31.9 million and user costs would be $22.8 million. For
Alternative B they would be $28.3 million and $30.0 million, respectively. As DOT (2002)
noted in their analysis, "Alternative A has the lowest combined agency and user costs,
whereas Alternative B has the lowest initial construction and total agency costs. Based on
this information alone, the decision-maker could lean toward either Alternative A (based on
overall cost) or Alternative B (due to its lower initial and total agency costs). However, more
analysis might prove beneficial. For instance, Alternative B might be revised to see if user
costs could be reduced through improved traffic management during construction and
rehabilitation."
Even though this was a hypothetical example created to demonstrate LCCA to the civil
engineering audience, presumably you are already wondering how robust these numbers are
to other factors and assumptions. DOT also noted "Sensitivity analysis could be performed
based on discount rates or key assumptions concerning construction and rehabilitation costs.
Finally, probabilistic analysis could help to capture the effects of uncertainty in estimates of
timing or magnitude of costs developed for either alternative."
While engineers have been collecting data on their products for as long as they have been
designing them, the types of data required to complete LCCA studies are generally quite
different from the data usually collected. LCCA can require planners to have estimates
of future construction or rehabilitation costs, potentially a decade or more from the time of
construction. These are obviously uncertain values (and further suggests the need for
probabilistic methods).
For big decisions like that in the DOT example, one would want to consider the ranges of
uncertainty possible to ensure against a poor decision. Building on DOT's recommendation,
we could consider various values of users' time, the lengths of time of work zone closures,
etc. If we had ranges of plausible values instead of simple deterministic values, that too
could be useful. Construction costs and work zone closure times, for example, are rarely
much below estimates (due to contracting issues) but in large projects have the potential to
go significantly higher. Thus, an asymmetric range of input values may be relevant for a
model.
We could also use probability distributions to represent the various cost and other
assumptions in our models. By doing this, and using tools like Monte Carlo simulation, we
could create output distributions of expected life cycle cost for use in LCCA studies. We
could then simulate costs of the alternatives, and choose the preferred alternative based on
combinations of factors such as the lowest mean value of cost and the lowest standard
deviation of cost. Finally, probabilistic methods support the ability to quantitatively assess
the likelihood that a particular value might be achieved. That means you might be able to
assess how likely an alternative's net value is to be greater than zero, or how likely it is that
the cost of Alternative A is less than that of Alternative B. It is by exploiting such probabilistic modeling
that we will be able to gain confidence that our analysis and recommendations are robust to
various measures of risk and uncertainty, and hopefully, support the right decisions. We will
revisit these concepts in Chapter 11 after we have learned a bit more about LCA models.
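The simulation approach described above can be sketched with nothing more than the standard library. Everything here is illustrative: the two cost streams, the 4% rate, and the right-skewed triangular multiplier (echoing the earlier observation that costs rarely come in much below estimates but can run significantly higher).

```python
import random

random.seed(42)  # fixed seed so runs are repeatable

def simulate_lcc(base_costs, r=0.04, n_trials=10_000):
    """Monte Carlo sketch: draw an uncertain cost multiplier for each year,
    discount to year 0, and return the distribution of total life cycle cost."""
    totals = []
    for _ in range(n_trials):
        total = 0.0
        for t, cost in enumerate(base_costs):
            overrun = random.triangular(0.95, 1.50, 1.0)  # low, high, mode
            total += cost * overrun / (1 + r) ** t
        totals.append(total)
    return totals

# Two hypothetical alternatives, costs in $M by year
a = simulate_lcc([30.0, 0.0, 0.0, 10.0])   # compact costs, like Alternative A
b = simulate_lcc([25.0, 5.0, 5.0, 5.0])    # spread-out costs, like Alternative B
prob_a_cheaper = sum(x < y for x, y in zip(a, b)) / len(a)
```

With the output distributions in hand, you can compare means and standard deviations of cost, or report the probability that one alternative is cheaper than another (prob_a_cheaper above) directly from the simulated trials.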

Life Cycle Assessment: Quantitative Approaches for Decisions That Matter lcatextbook.com


Chapter 3: Life Cycle Cost Analysis

Chapter Summary
As introduced in Chapter 1, sustainability involves social, economic, and environmental
factors. We can track costs over the life cycle of products or projects and use them as a basis
for making decisions regarding comparative economic performance. There are various
methods and applications for performing life cycle cost analysis (LCCA) in support of decisions
about basic products as well as infrastructure systems. Depending on the complexity of
the project, we may want to adjust for the time value of money by using discounting
methods that normalize all economic flows as if they occurred in the present. A benefit of
using such methods is that they allow incorporation of costs by both the owner as well as
other users. Beyond deterministic methods, LCCA can support probabilistic methods to
ensure we can make robust decisions that incorporate risk and uncertainty.
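The discounting idea mentioned in the summary can be captured in a one-line helper that normalizes a stream of future costs to the present. The $100-per-year stream is a hypothetical example, not a figure from the chapter.

```python
def present_value(annual_costs, rate):
    """Discount a stream of end-of-year costs to the present (year 0)."""
    return sum(c / (1 + rate) ** t
               for t, c in enumerate(annual_costs, start=1))

# Hypothetical example: $100 per year for 5 years at a 5% discount rate
pv = present_value([100] * 5, 0.05)
print(round(pv, 2))  # → 432.95, less than the undiscounted $500
```

The same helper can be applied to any of the end-of-chapter questions that call for discounted life cycle costs.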
Now that you have been exposed to the basics of LCCA, you can appreciate how building
on the straightforward idea of considering costs over the life cycle can broaden the scope
involved in life cycle modeling. As we move forward in this textbook to issues associated
with energy or environmental life cycle assessment, concepts of life cycle cost analysis should
remain a useful part of LCA studies.

References for this Chapter


Hendrickson, Chris T. and H. Scott Matthews, Civil Infrastructure Planning, Investment and
Pricing. http://cspbook.ce.cmu.edu/ (accessed July 2013).
Tung Au and Thomas P. Au, Engineering Economics for Capital Investment Analysis, 2nd
edition, Prentice-Hall, 1992. Available at http://engeconbook.ce.cmu.edu.
edmunds.com, website, www.edmunds.com, last accessed January 2, 2013.
US Department of Transportation Office of Asset Management, "Life-Cycle Cost Analysis
Primer," FHWA-IF-02-047, 2002. Available at
http://www.fhwa.dot.gov/infrastructure/asstmgmt/lcca.cfm

End of Chapter Questions


1. Building on Example 3-1, find the total cost of buying 2-liter bottles of soda over a
5-year period, with and without discounting at a rate of 5%.


2. How would your results change for Example 3-1 if you spent 3 minutes per
shopping trip buying soda, and that time spent had a cost of $20 per hour? What is
the breakeven cost of time per hour?
3. Building on Example 3-1, suppose you must drive 5 miles to the store in a vehicle
that gets 25 miles per gallon (at a gasoline price of $3.50 per gallon), whether to
buy the soda machine and flavor bottles or to purchase two-liter bottles every
time you want to drink soda. What are the total costs in the first year? What are the
total discounted costs over 5 years at a 5% rate? Discuss qualitatively how your
model results might change if you were buying other items on your shopping trips.
4. Combine the original Example 3-1 data and assumptions, as well as the additional
information from Questions 1 through 3 above. Calculate total life cycle costs over
5 years for each option and create a visual to summarize your results. Which
alternative should be chosen over 5 years, buying soda from a store or buying a soda
machine? Which should be chosen over 10 years?
5. Compared to the result in Example 3-2, does the breakeven price of soda bottles
change over a 5-year period if you do not consider discounting? Does the breakeven
price change over 5 years if you discount at 5%?
6. Generate a life cycle cost summary table (using Figure 3-1 as a template) for the
following:
a. A privately purchased computer
b. A public airport
c. A sports arena or stadium
7. How sensitive (quantitatively and qualitatively) is the decision in Example 3-4 to the
annual cost of fuel? Create a graphic to show your result.
8. What are the total costs to own for the two vehicles in Example 3-4 with a 5%
discount rate? Which vehicle would you choose? Does your decision ever change if
the discount rate varies from 0 to 20%?
9. A household is considering purchasing a washing machine and has narrowed the
choice to two alternatives. Machine 1 is a standard top-loading unit with a purchase
cost of $500. This machine uses 40 gallons of water and 2 kilowatt-hours of
electricity per load (assuming an electric water heater). The household would do
roughly 8 loads of laundry per week with this machine. Machine 2 is a front-loading
unit; it costs $1,000, but it can wash double the amount of clothes per load, and each
load uses half the water and electricity. Assume that electricity costs 8 cents/kWh
and water is $2 per 1,000 gals.
a. Generate a life cycle cost summary table for the two washing machines
b. Develop a life cycle cost comparison of the two machines over a 10-year life
period without discounting. Which machine should be chosen if considering
only cost?
c. Which would you choose over a 10-year period with a 3% discount rate?
10. How sensitive (quantitatively and qualitatively) is the choice of washing machines to
the discount rate, price of electricity, and price of water?
11. A recent and continuing concern of automobile manufacturers is to improve fuel
economy. One of the easiest ways to accomplish this is to make cars lighter. To do
this, vehicle manufacturers have substituted specially strengthened but lighter
aluminum for steel (they have also experimented with carbon fibers). Unfortunately,
processed aluminum is more expensive than steel - about $3,000 per ton instead of
$750 per ton for steel. Aluminum-intensive vehicles (AIVs) are expected to weigh
less by replacing half of the steel in the car with higher-strength aluminum on a 1 ton
of steel to 0.8 ton of aluminum basis. This is expected to reduce fuel use by 20%.
Assume:
- Current cars can travel 25 miles per gallon of gasoline and gasoline costs $3.50
per gallon
- Current [steel] cars cost $20,000 to produce, of which $1,000 is currently for
steel and $250 for aluminum
- AIVs are equivalent to current cars except for substitution of lighter aluminum
for steel
- All cars are driven 100,000 miles
- All tons are short tons (2,000 pounds)
a) Of current cars and AIVs, which is cheaper over the life cycle (given only the
information above)? Develop a useful visual aid to compare life cycle costs across
steel vehicles and AIVs.


b) How uncertain would our cost estimates for steel, aluminum, and gas have to be
to reverse your opinion on which car was cheaper over the life cycle?
c) Do your answers above give you enough information to determine whether we
should produce AIVs? What other issues might be important?


Photo of nuclear electricity generation facility in France prominently showing its
certification to the ISO 14001 Environmental Management Standard.
Photo credit: Pierre-alain dorange (Own work) [CC-BY-SA-3.0
(http://creativecommons.org/licenses/by-sa/3.0)], via Wikimedia Commons,
http://upload.wikimedia.org/wikipedia/commons/0/08/Centrale_Nucl%C3%A9aire_du_Blayais.jpg


Chapter 4: The ISO LCA Standard: Goal and Scope


We have discussed many of the skills that are necessary to complete a rigorous LCA. Now
we present the standard framework for planning and organizing such a study. In this
chapter, we supplement information found in the official ISO Standard for LCA. We only
summarize and expand on the most critical components; thus, this chapter is not intended to
be a substitute for reading and studying the entire ISO Standard (likely more than once to
gain sufficient understanding). The rationale for studying the ISO Standard is to build a
solid foundation on which to understand the specific terminology used in the LCA
community and to learn directly from the Standard and from our collective experience what
is required in an LCA, and what is optional. We use excerpts and examples from completed
LCA studies to highlight key considerations since examples are generally lacking in the
Standard. As such, the purpose of this chapter is not to re-define the terminology used but
to help you understand what the terms mean from a practical perspective.

Learning Objectives for the Chapter


At the end of this chapter, you should be able to:

Describe the four major phases of the ISO LCA Standard

List all of the ISO LCA Standard study design parameters (SDPs)

Review SDPs given for an LCA study and assess their appropriateness and anticipate
potential challenges in using them

Generate specific SDPs for an LCA study of your choosing

Overview of ISO and the Life Cycle Assessment Standard


Before we specifically discuss the LCA Standard, we review standards in general. Standards
are created to make some activity or process consistent, or at least to be done using common
guidelines or methods. They might also be created to level the playing field in a particular
market by ensuring that everyone does things the same way. Standards are made for a
variety of reasons, and exist at many levels, from local building codes all the way up to global
standards. In civil engineering and construction, there are standards for concrete; for
example, a request for proposals could require that the product meet "ASTM C94 concrete".
This means that any concrete used in the project must meet the testing standard defined in
ASTM C94, developed by the ASTM International organization.

There are many organizations around the world that work on developing and promoting the
use of standards. ASTM International, mentioned above, has been developing standards for
specific tests and materials for more than 100 years. ISO (the International Organization for
Standardization; the acronym makes sense in French!) is an international organization that
creates standards geared more towards safety, quality, and management standards, and
various companies and entities around the world follow these standards. The actual
processes used by each organization to create a standard vary, but for ISO the process has
the following components: it (1) responds to a market need; (2) is based on expert opinion;
(3) is developed by a multi-stakeholder team; and (4) is ratified by consensus. The actual
standard is drafted, edited, and revised by a technical committee of global experts based on
comments until consensus (75% agreement) is reached (ISO 2012).
There are various frameworks for performing life cycle assessment (LCA) but the primary
and globally accepted way of doing it follows the ISO LCA Standard (which comprises two
related standards, 14040:2006 and 14044:2006), which we assume you have
accessed and read separately. We will refer to both underlying standards as the ISO
Standard. The notation "14040:2006" means that the ISO LCA Standard is in the "ISO
14000" family of standards, which are global standards for environmental management and
encompass various other processes to track and monitor emissions and releases. The
version current as of the time of writing this book was most recently updated in 2006. The
first version of the ISO LCA Standard was published in 1997.
One thing that you may now realize is that many of the foundational LCA studies mentioned
in Chapter 1 (e.g., by Hocking, Lave, etc.) were completed before the LCA Standard was
formalized. That does not mean they were not legitimate studies; it just means that in
today's world these could not be referred to as "ISO compliant", where ISO compliant
means that the work conforms to the Standard as published. While it may seem trivial,
compliance with the many ISO standards is typically a goal of an entity looking for global
acceptance and recognition. This is not just in the LCA domain: firms in the automotive
supply chain seek "ISO 9000 compliance" to prove they have quality programs in place at
their companies that meet the standard set by ISO, so that they are able to do business in
that very large global market. Chapter 13 in this book will discuss more about peer review
and assessing ISO compliance for an LCA study.
It should be obvious why a standard for LCA is desirable. Without a formal set of
requirements and/or guidelines, anyone could do an LCA according to her own views of
how a study should be done and what methods would be appropriate to use. In the end, 10
different parties could each perform an LCA on the same product and generate 10 different
answers. The LCA Standard helps to normalize these efforts. However, as we will see
below, its rules and guidelines are not overly restrictive. Simply having 10 parties
conforming to the Standard does not guarantee you would not still generate 10 different
answers! One could alternatively argue that in a field like LCA, a diversity of thoughts and
approaches is desirable, and thus, that having a prescriptive standard stifles development of
methods or findings.
As you have read separately, the ISO LCA Standard formalizes the quantitative modeling
and accounting needs to implement life cycle thinking to support decisions. ISO 14040:2006
is the current "principles and framework" of the Standard and is written for a managerial
audience, while ISO 14044:2006 gives the "requirements and guidelines" for a practitioner audience.
Given that you have already read the Standard (and have their glossaries of defined terms to
help guide you), you are already familiar with the basic ideas of inputs, outputs, and flows.
At a high level, Figure 4-1 summarizes the ISO LCA Standard's 4 phases: goal and scope
definition, inventory analysis, impact assessment, and interpretation. The goal and scope
are statements of intent for your study, and part of what we will refer to as the study design
parameters (discussed below). They explicitly note the reason why you are doing the study,
as well as the study reach. In the inventory analysis phase, you collect and document the
data needed (e.g., energy use and emissions of greenhouse gases) to meet the stated goal and
scope. In the impact assessment phase you transition from tracking simple inventory
results like greenhouse gas emissions to impacts such as climate change. Finally, the
interpretation phase looks at the results of your study, puts them into perspective, and may
recommend improvements or other changes to reduce the impacts.

Figure 4-1: Overview of ISO LCA Framework (Source: ISO 14040:2006)



It is important to recognize that all of the double arrows mean that the four phases are
iterative, i.e., you might adjust the goal and scope after trying to collect inventory data and
realizing there are challenges in doing so. You may get to the interpretation phase and
realize the data collected does not help answer the questions you wanted and then revise the
earlier parts. You may get unexpected results that make reaching a conclusion difficult, and
need to add additional impact assessments. Thus, none of the phases are truly complete
until the entire study is complete. From experience, every study you do will be modified as
you go through it. This is not a sign of weakness or failure; it is the prescribed way of
improving the study as you learn more about the product system in question.
As ISO mentions, it is common that studies following the Standard do not include an impact
assessment phase, and these studies are simply called life cycle inventory studies (LCIs).
That is, their final results are only the accounting-like exercise of quantifying total inputs and
outputs without any consideration of impact. You could interpret this to mean that impact
assessment is not a required component, but more correctly it is required of an LCA study
but not an LCI. That said, we will generally use the phrase "LCA" to refer either to an LCA
or an LCI, as is common in the field.
The right-hand side of Figure 4-1 gives examples of how LCA might be used. The first two,
for product improvement and strategic planning, are common. In this book we focus more
on "big decisions" and refer to activities such as informing public policy (e.g., what types of
incentives might make paper recycling more efficient?) and assessing marketing claims. In
these domains the basis of the study might be in comparing between similar products or
technologies.
In the rest of this chapter, we focus on the goal and scope definition phase of LCA. Subsequent
chapters discuss the inventory, interpretation, and impact assessment phases in greater detail.

ISO LCA Study Design Parameters


As noted above, ISO requires a series of parameters to be qualitatively and quantitatively
described for an LCA study, which in this text we refer to as the study design parameters
(SDPs), listed in Figure 4-2. In this section we provide added detail and discussion about the
underlying needs of each of these parameters and discuss hypothetical parameter statements
and values in terms of their ISO conformance.

Goal

Scope Items:
  Product System
  System Boundary
  Functional Unit
  Inventory Inputs and Outputs
  LCIA Methods Used

Figure 4-2: Study Design Parameters (SDPs) of ISO LCA Framework

Think of the SDPs as a summary of the most important organizing aspects of an LCA. The
SDPs are a subset of the required elements in an LCA study, but are generally the most
critical considerations and thus those that at a glance would tell you nearly everything you
needed to know about what the study did and did not seek to do. Thus, these are items that
need to be chosen and documented very well so there is no confusion. In documenting each
in your studies, you should specifically use the keywords represented in the Standard (e.g.,
"the goal of this study is", "the functional unit is", etc.) Expanding on what is written in the
ISO LCA Standard, we discuss each of the items in the SDP below.
SDP 1. Goal
The goal of an LCA, like the goal of any study, must be clearly stated. ISO requires that the
goal statement include unambiguous statements about: (1) the intended application, (2) the
reasons for carrying out the study, (3) the audience, and (4) whether the results will be used
in comparative assertions released publicly. An easy way to think about the goal statement of
an LCA report is that it must fully answer two questions: "who might care about this and
why?" and "why we did it and what will we do with it?". As noted above, the main
components of an LCA are iterative. Thus, it is possible you start an LCA study with a goal,
and by going through the effort needed to complete it, the goal is changed because more or
less is possible than originally planned.
Below are excerpts of the goal statement from an LCA study comparing artificial and natural
Christmas trees bought in the US [2] (PE Americas 2010).
"The findings of the study are intended to be used as a basis for educated external
communication and marketing aimed at the American Christmas tree consumer."
"The goal of this LCA is to understand the environmental impacts of both the most
common artificial Christmas tree and the most common natural Christmas tree, and to
analyze how their environmental impacts compare."
"This comparative study is expected to be released to the public by the ACTA to refute
myths and misconceptions about the relative difference in environmental impact by real
and artificial trees."
[2] In the interest of full disclosure, one of the authors of this book (HSM) was a paid reviewer of this study.


From these three sentences, we can clearly understand all four of the ISO-required components
of the goal statement. The intended application is external marketing. The reasons are to
refute misconceptions. The audience is American tree consumers. Finally, the study was
noted to be planned for public release (and it is available on a website). We will discuss
further implications of public studies later in the book.
The excerpts above together constitute a good goal statement. It should be clear that skipping
any of the 4 required parts or trying to streamline the goal for readability could lead to an
inappropriate goal statement. For example, the sentence "This study seeks to find the
energy use of a power plant" is clear and simple but only addresses one of the four required
elements of a goal. It also never uses the word "goal", which could be perceived as stating no
goal.
Beyond the stated goals, we could consider what is not written in the goals. From the above
statements, there would be no obvious use of the study by a retailer, e.g., to decide whether
to stock one kind of tree over another. It is useful to consider what a reader or reviewer of
the study would think when considering your goal statement. A reviewer would be sensitive
to biases and conflicts, as well as creative use of assumptions in the SDPs that might favor
one alternative over others. Likewise, a reviewer may be sensitive to the types of conclusions that
may arise from your study given your chosen goals. You want to write so as to avoid such
interpretations.
One of the primary reasons that scientists seek to use LCA is to make a comparative
assertion, which is when you compare multiple products or systems, such as two different
types of packaging, to be able to conclude and state (specifically, to make a claim) that one is
better than the other (has lower impacts). As noted above, the ISO LCA Standard requires
that such an intention be noted in the goal statement.
Scope
Although ISO simply lists "goal and scope", a goal statement is just a few sentences while
the scope may be several pages. The study scope is not a single statement but a collection of
qualitative and quantitative information denoting what is included in the study, and key
parameters that describe how it is done. Most of the SDPs are part of the scope. There are
14 separate elements listed in ISO's scope requirements, but our focus is on the five that
are part of the SDPs: the product system studied, the system boundary, the functional
unit(s), the inventory inputs and outputs, and the impact assessment (LCIA) methods to be
used. The other nine are important (and required for ISO compliance) but are covered
sufficiently either in the ISO
Standard or elsewhere in this book.
While these five individual scope SDPs are discussed separately below, they are highly
dependent on each other and thus difficult to define separately. We acknowledge that this
interdependency of terminology typically confuses most readers, as every definition of one
of the scope SDPs contains another SDP term. However, a clear understanding of these
terms is crucial to the development of a rigorous study and we recommend you read the
following section, along with the ISO Standard, multiple times until you are comfortable
with the distinctions.
SDP 2. Functional Unit
While we list only the functional unit as an SDP, the ISO Standard requires a discussion of
the function of the product system as well. A product system (as defined in ISO 14040:2006
and expanded upon below) is a collection of processes that provide a certain function. The
function represents the performance characteristics of the product system, or in layman's
terms, "what does it do?" A power plant is a product system that has a function of
generating electricity. The function of a Christmas tree product system is presumably to
provide Christmas joy and celebrate a holiday. The function of a restroom hand dryer is
drying hands. The function of a light bulb is providing light. In short, describing the
function is pretty straightforward, but is done to clarify any possible confusions or
assumptions that one might make from otherwise only discussing the product system itself.
The functional unit, on the other hand, must be a clearly and quantitatively defined
measure relating the function to the inputs and outputs to be studied. Unfortunately, that is
all the description the ISO Standard provides. This ambiguity is partly the reason why the
expressed functional units of studies are often inappropriate. A functional unit should
quantify the function in a way that makes it possible to relate it to the relevant inputs and
outputs (imagine a ratio representation). As discussed in Chapter 1, inputs are items like
energy or resource use, and outputs are items like emissions or waste produced. You thus
need a functional unit that bridges the function and the inputs or outputs. Your functional
unit should explicitly state units (as discussed in Chapter 2) and the results of your study will
be normalized by your functional unit.
Building on the examples above, a functional unit for a coal-fired power plant might be "one
kilowatt-hour of electricity produced". Then, an input of coal could be described as
"kilograms of coal per one kilowatt-hour of electricity produced (kg coal/kWh)" and a
possible output could be stated as "kilograms of carbon dioxide emitted per kilowatt-hour of
electricity produced (kg CO2/kWh)." For a Christmas tree the functional unit might be "one
holiday season" because while one family may leave a tree up for a month and another family
for only a week, both trees fulfill the function of providing Christmas joy for the holiday
season. For a hand dryer it might be "one pair of hands dried". For a light bulb it might be
"providing 100 lumens of light for one hour (a.k.a. 100 lumen-hours)". All of these are
appropriate because they discuss the function quantitatively and can be linked to study
results. Figure 4-3 summarizes the bridge between function, functional units, and possible
LCI results for the four product systems discussed. While not as explicit about function, you could
have a study where your functional unit was "per widget produced" which would encompass
the cradle to gate system of making a product.
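The normalization step described above amounts to dividing inventory totals by the quantity of function delivered. A minimal sketch for the power plant example, using purely hypothetical totals:

```python
# Hypothetical inventory totals for a coal-fired plant over a study period
total_kwh_generated = 1_000_000   # functional units delivered (kWh)
total_coal_kg = 450_000           # input: coal consumed (kg)
total_co2_kg = 900_000            # output: CO2 emitted (kg)

# Normalize each input and output by the functional unit
coal_per_fu = total_coal_kg / total_kwh_generated   # kg coal per kWh
co2_per_fu = total_co2_kg / total_kwh_generated     # kg CO2 per kWh
print(f"{coal_per_fu:.2f} kg coal/kWh, {co2_per_fu:.2f} kg CO2/kWh")
```

The key point is that every reported result carries the functional unit in its denominator, which is what makes results from different studies of the same function comparable.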
Product System | Function | Functional Unit | Example LCI Results
Power Plant | Generating electricity | 1 kWh of electricity generated | kg CO2 per kWh
Christmas Tree | Providing holiday joy | 1 undecorated tree over 1 holiday season | MJ energy per undecorated tree per holiday season
Hand Dryer | Drying hands | 1 pair of hands dried | MJ energy per pair of hands dried
Light Bulb | Providing light | 100 lumens light for 1 hour (100 lumen-hrs) | g Mercury per 100 lumen-hrs

Figure 4-3: Linkages between Function, Functional Unit, and Example LCI Results
for hypothetical LCA studies

Now that we have provided some explicit discussion of functional units, we digress to
discuss common problems with statements of functional units in studies. One common
functional unit problem is failure to express the function quantitatively or without units.
Often, suggested functional units sound more like a function description, e.g., for a power
plant "the functional unit is generating electricity". This cannot be a viable functional unit
because it is not quantitative and also because no unit was stated. Note that the units do not
need to be SI-type units. The unit can be a unique unit relevant only for a particular product
system, as in "1 pair of hands dried".
Another common problem in defining a study's functional unit is confusing it with the
inputs and outputs to be studied. For example, "tons of CO2" may be what you intend to
use in your inventory analysis, but it is not an appropriate functional unit because it is not
measuring the function, it is measuring the greenhouse gas emission outputs of the product
system. Likewise, it is not appropriate to have a functional unit of "kg CO2 per kWh"
because the CO2 emissions, while a relevant output, have nothing to do with the expression
of the function. Further, since results will be normalized to the functional unit, subsequent
emissions of greenhouse gas emissions in such a study would be "kg CO2 per kg CO2 per
kWh", which makes no sense. Thus, product system inputs and outputs have no place in a
functional unit definition.
For LCA studies that involve comparisons of product systems, choices of functional units
are especially important because the functional unit of the study needs to be unique and
consistent across the alternatives. For example, an LCA comparing fuels needs to compare
functionally equivalent units. It would be misleading to compare a gallon of ethanol and a
gallon of gasoline (i.e., a functional unit of gallon of fuel), because the energy content of the
fuels is quite different (gasoline is about 115,000 BTU/gallon while ethanol (E100) is about
75,000 BTU/gallon). In terms of function or utility, you could drive much further with a
gallon of gasoline than with ethanol. You could convert to gallons of gasoline equivalent
(GGE) or perhaps use a functional unit based on energy content (such as BTU) of fuel.
Likewise, if comparing coal and natural gas to make electricity, an appropriate functional unit
would be per kWh or MWh, not per MJ of fuel.
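The gasoline-gallon-equivalent conversion described above is a simple energy-content ratio. The sketch below uses the energy contents stated in the text; the function and dictionary names are our own:

```python
# Energy content of fuels, from the figures given in the text (BTU per gallon)
BTU_PER_GALLON = {"gasoline": 115_000, "ethanol_e100": 75_000}

def to_gge(gallons, fuel):
    """Express a fuel volume in gallons of gasoline equivalent (by energy)."""
    return gallons * BTU_PER_GALLON[fuel] / BTU_PER_GALLON["gasoline"]

print(round(to_gge(1.0, "ethanol_e100"), 2))  # → 0.65, i.e., one gallon of
# E100 delivers about 65% of the energy of a gallon of gasoline
```

Comparing fuels per GGE (or per BTU) rather than per gallon keeps the comparison on a functionally equivalent basis.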
Hopefully it is clear that using an inappropriate function or functional unit could lead to lots
of wasted effort if a study were later reviewed and found to be faulty. If you were to use
functional units that, for example, had no actual units, you would create results that were not
normalized to anything. Having to go back and correct that after a study is done is
effectively an entirely new study.
SDPs 3 and 4. Product System and System Boundary
Before discussing an ISO LCA product system, we first discuss products, which can be any
kind of good or service. This could mean a physical object like a component part, or
software or services. Processes, similarly, are activities that transform inputs to outputs. As
already mentioned, an ISO LCA product system is the definition of the relevant processes
and flows related to the chosen product life cycle that lead to one or more functions. Even
virtual products like software (or cloud services) have many processes needed to create
them. Products are outputs of such systems, and a product flow represents the connection
of a product between product systems (where it may be an output of one and an input of
another). For example, the product output of a lumber mill process (wood planks) may be
an input to a furniture manufacturing process. Similarly, petroleum is the product output of
an oil extraction process and may be an input into a refinery process that has product
outputs like gasoline and diesel fuels.
A product system has various subcomponents, as defined below, but generally consists of
various processes and flows. The system boundary notes which subset of
the overall collection of processes and flows of the product system are part of the study, in
accordance with the stated study goals.
While not required, a diagram is crucial in helping the audience appreciate the complexity of
the product system and its defined system boundary. The diagram is created by the study
authors (although it may be generated by the LCA software used in the study). This diagram
should identify the major processes in the system and then explicitly note the system
boundary chosen, ideally with a named box "system boundary" around the processes
included in the study. Alternatively, some color-coded representation could be used to
identify the processes and flows contained within the boundary. Even with a great product
system diagram, the study should still discuss in detailed text the various processes and
flows. Figure 4-4 shows the generic product system and system boundary example provided
in ISO 14040:2006. If your study encompasses or compares multiple products, then you
have to define several product systems.

Figure 4-4: ISO 14040 Product System and System Boundary example

There are a few key components of a product system diagram (also called a process flow
diagram). Boxes in these diagrams represent various forms of processes, and arrows
represent flows, similar to what might be seen in a mass balance or materials flow analysis.
Boxes (or dashed lines) may represent system boundaries. At the highest level of generality
(as in Figure 4-4) the representation of a product system may be such that the process boxes
depicted correspond to entire aggregated life cycle stages (raw materials, production, use,
etc.) as discussed in Chapter 1. In reality each of these aggregated stages may be comprised
of many more processes, as we discuss below.
Before going further, it is worth discussing the art of setting a system boundary. Doing a
complete LCA (one that includes every process in the product system)
of a complicated product is impossible. An automobile has roughly 30,000 components.
Tens of thousands of processes are involved in mining the ores, making the ships, trucks,
and railcars used to transport the materials, refining the materials, making the components,
and assembling the vehicle. A "complete" ISO LCA requires information on the materials
and energy flows for each of these processes. Compiling and updating such detailed
information for all of these processes and flows is all but impossible. Furthermore, each of
the processes directly involved in producing the components requires inputs from other
processes.
LCA models are able to capture direct and indirect effects of systems. In general, direct
effects are those that happen directly as a result of activities in the process in question.
Indirect effects are those that happen as a result of the activities, but outside of the process
in question. For example, steel making requires iron ore and oxygen directly, but also
electricity, environmental consulting, natural gas exploration, production, and pipelines, real
estate services, and lawyers. Directly or indirectly, making cars involves the entire economy,
and getting detailed mass and energy flows for the entire economy is impossible.
Since conducting a complete LCA is impossible, what can we do? As we will see below, the
ISO Standard provides for ways of simplifying our analyses so as not to require us to track
every possible flow. But we still need to make key decisions (e.g., about stages to include)
that can eventually lead to model simplifications. Focusing on the product itself while
ignoring all other parts of the life cycle would lead to inaccurate and biased results, as shown
in the example of the battery-powered car in Chapter 1.
An LCA of a generic American passenger automobile was undertaken by representatives of
the three major automobile manufacturing companies, aka the "big three", in the US in the
mid-1990s. This study looked carefully at the processes for extracting ore and petroleum
and making steel, aluminum, and plastic for use in vehicles. It also looked carefully at making
the major components of a car and assembling the vehicle. Given the complexity described
above, the study was forced to compromise by selecting a few steel mills and plastics plants
as "representative" of all plants. Similarly, only a few component and assembly plants were
analyzed. Whether the selected facilities were really representative of all plants cannot be
known. Finally, many aspects of a vehicle were not studied, such as much of the
transportation of materials and fuels and "minor" components. Nonetheless, the study was
two years in duration (with more than 10 person-years of effort) and is estimated to have
cost millions of dollars.
Thus, system boundaries need to be justified. Beyond the visual display and description of
the boundary used in the study, the author should also explain choices and factors that led to
the boundary as finally chosen and used. As mentioned above, significant effort looking for
data could fail, and a process may have to be excluded from a study. Such an outcome
should be discussed when defining the boundary, and may lead otherwise skeptical readers
to realize that a broader boundary was originally attempted but found to be too challenging.
By justifying, you allow the audience to better appreciate some of the challenges faced and
tradeoffs made in the study. Other justifications for system boundary choices may include
statements about a process being assumed or found to have negligible impact, or in the case
of a comparative study, that identical processes existed in both product systems and thus
would not affect the comparison.
Process Flows
Product systems have elementary flows into and out of them. As defined by ISO 14044,
elementary flows are "material or energy entering the system being studied that has been
drawn from the environment without previous human transformation, or material or energy
leaving the system being studied that is released into the environment without subsequent
human transformation." In plain terms, elementary flows are pure flows that need no other
process to represent them on the input or output side of the model.
For the sake of discussion, assume that Figure 4-4 is the product system and boundary
diagram for a mobile phone. The figure shows that the product system for the mobile
phone as defined with its rectangular boundary has flows related to input products and
elementary flows. The input product (on the left side of the figure) is associated with
another product system and is outside of the system boundary. Likewise on the right side of
the figure, the mobile phone "product flow" is an input to another system. As an example,
the left side of the figure product flow could represent that the mobile phone comes with
paper instructions printed by a third party (but which are assumed to not be part of the
study) and on the right side could be noting that the mobile phone as a device can be used in
wireless voice and data network systems (the life cycles of such equipment also being outside
the scope of the study). That's not to say that no use of phones is modeled, as Figure 4-4
has a "use phase" process box inside the boundary, but which may only refer to recharging
of the device. The study author may have chosen the boundary as such because they are the
phone manufacturer and can only directly control the processes and flows within the
described boundary. As long as their goal and scope elements are otherwise consistent with
the boundary, there are no problems. However, if, for example, the study goal or scope
motivated the idea of using phones to make Internet based purchases for home delivery,
then the current system boundary may need to be modified to consider impacts in those
other systems, for example, by including the product system box on the right.
Figure 4-4 might be viewed as implying that the elementary flows are not part of the study
since they are outside of the system boundary. This is incorrect, however, because these
elementary flows while not part of the system are the inputs and outputs of interest that may
have motivated the study, such as energy inputs or greenhouse gas emission outputs. In
short, they are in the study but outside of the system.
Product system diagrams may be hierarchical. The high level diagram (e.g., Figure 4-4) may
have detailed sub-diagrams and explanations to describe how other lower-level processes
interact. These hierarchies can span multiple levels of aggregation. At the lowest such level,
a unit process is the smallest element considered in the analysis for which input and output
data are quantified. Figure 4-5 shows a generic interacting series of three unit processes that
may be a subcomponent of a product system.

Figure 4-5: Unit Processes (Source: ISO 14040:2006)

Figure 4-6 gives an example of how one might detail the high level "Waste Treatment"
process from Figure 4-4 in the manner of Figure 4-5, where the unit processes are the
three basic steps of collecting, disassembling, and sorting of e-waste. Additional unit
processes (not shown) could exist for disposition of outputs.

Figure 4-6: Process Diagram for E-waste treatment

It is at the unit process level, then, that inputs and outputs actually interact with the product
system. While already defined in Chapter 1, ISO specifically considers them as follows.
Inputs are "product, material or energy flows that enter a unit process" and may include raw
materials, intermediate products and co-products. Outputs are "products, material or energy
flows that leave a unit process" and may include raw materials, intermediate products,
products, and releases (e.g., emissions and waste). Raw materials are "primary or secondary
material that is used to produce a product" and waste is "substances or objects to be
disposed of". Intermediate products flow between unit processes (such as cumulatively
assembled components). Co-products are two or more products of the same process or
system.
The overall inputs and outputs to be measured by the study should be elementary flows.
This is why "electricity" is not typically viewed as an input, i.e., it has not been drawn from
the environment without transformation. Electricity represents coal, natural gas, sunlight, or
water that has been transformed by generation processes. "MJ of energy" on the other hand
could represent untransformed energy inputs.
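The electricity example can be made concrete by expanding 1 kWh of electricity back into the primary (elementary) energy drawn from nature. In this sketch, the grid mix shares and conversion efficiencies are invented for illustration; only the 3.6 MJ-per-kWh conversion is exact.

```python
# Sketch of why "electricity" is not an elementary flow: 1 kWh of electricity
# can be expanded into primary energy drawn from nature. The grid mix and
# conversion efficiencies below are invented for illustration.
MJ_PER_KWH = 3.6  # exact unit conversion

grid_mix = {"coal": 0.4, "natural gas": 0.3, "hydro": 0.3}       # assumed shares
efficiency = {"coal": 0.35, "natural gas": 0.50, "hydro": 0.90}  # assumed

# Each generation source must draw (share * 3.6 / efficiency) MJ from nature
# to contribute its share of one delivered kWh.
primary_mj_per_kwh = sum(
    share * MJ_PER_KWH / efficiency[fuel] for fuel, share in grid_mix.items()
)
print(f"1 kWh of electricity ~ {primary_mj_per_kwh:.1f} MJ of primary energy")
```

The point of the sketch is that the 3.6 MJ of delivered electricity represents a larger quantity of untransformed energy at the system boundary, so electricity must be traced back through its generation processes rather than treated as an elementary flow.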
In the Christmas tree LCA mentioned above, which compares artificial and natural trees, the
following text was used (in addition to a diagram): "For the artificial tree the system
boundary includes: (1) cradle-to-gate material environmental impacts; (2) the production of
the artificial tree with tree stand in China; (3) transportation of the tree and stand to a US
retailer, and subsequently a customer's home; and (4) disposal of the tree and all packaging."
SDP 5. Inventory Inputs and Outputs
The definition of your study needs to explicitly note the inputs and/or outputs you will be
focusing on in your analysis. That is because your analysis does not need to consider the
universe of all potential inputs and outputs. It could consider only inputs (e.g., an energy use
footprint), only outputs (e.g., a carbon emissions footprint), or both. The input and output
specification part of the scope is not explicitly defined in the ISO Standard. It is presumably
intended to be encompassed by the full product system diagram with labeled input and
output flows. Following the example above, your mobile phone study could choose to track
inputs of water, energy, or both, but needs to specify them. By explicitly noting which
inputs and/or outputs you will focus on, it helps the audience better understand why you
might have chosen the selected system boundary, product system, functional unit, etc. If
you fail to explicitly note which quantified inputs and outputs you will consider in your study
(or, for example, draw a generic product system diagram with only the words "inputs" and
"outputs") then the audience is left to consider or assume for themselves which are
appropriate for your system, which could be different than your intended or actual inputs
and outputs. Chapter 5 discusses the inventory analysis component of LCA in more detail.
SDP 6. Impact Assessment
ISO 14040 requires you to explicitly list "the impact categories selected and methodology of
impact assessment, and subsequent interpretation to be used". While we save more detailed
discussion of impact assessment for Chapter 12, we offer some brief discussion and
examples here so as to help motivate how and why your choice of impact assessment could
affect your other SDP choices.
As we discussed in Chapter 1, there is a big difference between an inventory (accounting) of
inputs and outputs and the types of impacts they can have. While we may track input and
output use of energy and/or greenhouse gas emissions, the impacts of these activities across
our life cycle could be resource depletion, global warming, or climate change. In impact
assessment we focus on the latter issues. Doing so will require us to use other methods that
have been developed in conjunction with LCA to help assess impacts. Specifically, there are
impact assessment methods to consider cumulative energy demand (CED) and to assess the
global warming potential (GWP) of emissions of various greenhouse gases. If we chose to
consider these impacts in our study, then we explicitly state them and the underlying
methods in the SDP. Again, the point of doing so explicitly is to ensure that at a glance a
reader can appreciate decisions that you have made up front before having to see all of your
study results.
There are other required elements for the goal and scope, as noted above, but the SDPs are
the most important and time consuming. They are the scope elements that need to be most
carefully worded and evaluated.
A Final Word On Comparative Assertions And Public Studies
Comparative studies can only be done if the life cycle models created for each compared
product use the same study design parameters, such as the same goal and scope, functional
unit, and system boundary. The ISO Standard in various places emphasizes what needs to
be done if you are going to make comparative assertions. By making such assertions you are
saying that applying the ISO Standard has allowed you to make the claim. For example, ISO
requires that for comparative assertions, the study must be an LCA and not simply an LCI,
and that a special sensitivity analysis is done. The additional rules related to when you intend
to make comparative assertions are in place both to ensure high quality work and to protect
the credibility of the Standard. If several high visibility studies were done without all of
these special considerations, and the results were deemed to be suspicious, the Standard
itself might be vulnerable to criticism.
Similarly, ISO requires an LCA to be peer reviewed if the comparative results are intended
for public release. This means that a team of experts (typically three) needs to review the
study, write a report of its merits, and assess whether it is compliant with the ISO Standard
(i.e., whether all of the goal, scope, etc., elements have been done in accordance with what is
written in the Standard). The vast majority of completed LCAs are not seen by the public,
and therefore have not been peer-reviewed. That does not mean they are not ISO
compliant, just that they have not been reviewed as such and designated as compliant. We
will discuss more issues about peer review in Chapter 13.
E-resource: On the www.lcatextbook.com website, in the Chapter 4 folder, is a spreadsheet
listing publicly available LCA studies from around the world for many different
products. Amongst other aspects, this spreadsheet shows whether studies were peer
reviewed (which is interesting because they have all been "released to the public" but not all
have been peer reviewed). PDF files of most of the studies listed are also available. The
icon to the left will be used in the remainder of the book to designate resources available on
the textbook website. Readers are urged to read one or more of these public studies that are
of interest to them as a means of becoming familiar with LCA studies.

Chapter Summary
The ISO LCA Standard is an internationally recognized framework for performing life cycle
assessment, and has been developed and revised over time to guide practitioners towards
making high-quality LCA studies. Any LCA practitioner should first read and know the
requirements of the Standard. This chapter has focused on a subset of the Standard, namely
the so-called study design parameters (SDPs) which comprise the main high level variables
for a study and which when presented allow the audience to quickly appreciate the goals and
scope of the study. The chapter focused on practical examples of SDPs from actual studies
and seeks to demonstrate the importance of the bridge between product systems and their
functional units and LCI results. When the integrity of this bridge is maintained, and
common mistakes avoided, high-quality results can be expected.

References for this Chapter


ISO 2013, http://www.iso.org/iso/home/standards_development.htm, last accessed
February 1, 2013.

PE Americas, "Comparative Life Cycle Assessment of an Artificial Christmas Tree and a
Natural Christmas Tree", November 2010.
http://www.christmastreeassociation.org/pdf/ACTA%20Christmas%20Tree%20LCA%20
Final%20Report%20November%202010.pdf
Life Cycle Assessment: Principles And Practice, United States Environmental Protection
Agency, EPA/600/R-06/060, May 2006.

End of Chapter Questions


1. Consider the following examples of goal statements for three different hypothetical
LCA studies. Answer the questions (a-b) for each goal statement below.

"The goal of this study is to find the energy use of making ice cream."

"The goal of this study is to produce an LCA for internal purposes."

"This study seeks to do a life cycle assessment of a computer to be used for
future design efforts."

a. Briefly discuss the ISO compliance of the stated goal as written.

b. Propose revisions if needed for the hypothetical goal statement to meet ISO
requirements.
2. Consider the examples of study design parameters (SDPs) for four hypothetical LCA
studies in the table below.
Assess the partially provided entries in the table, and fill in or correct the rest of the
columns for each product system with examples of relevant SDPs that bridge the
various elements of the study using appropriate values (i.e., correct a functional unit
that seems inappropriate).

Product System              | Function                          | Functional Unit     | LCI Results
Printed book                | Collect 100 pages of printed text |                     |
Portable flash memory drive | Storing electronic content        | energy per gigabyte |
E-book reader               |                                   |                     | GHG per reader bought
Automobile                  |                                   | 1 mile driven       |

3. Draw a product system diagram for a paper clip labeling inputs, outputs,
intermediate flows, etc., as in Figure 4-4.

4. Draw a product system diagram for the purchase of an airplane ticket via an
electronic commerce website, labeling inputs, outputs, intermediate flows, etc., as in
Figure 4-4.
5. Read one of the LCA studies found by using the E-resource link at the end of the
chapter. Summarize the study design parameters of the chosen study, and discuss
any discrepancies or problems found, and how they could be improved.

Chapter 5: Data Acquisition and Management for Life Cycle Inventory Analysis
Now that the most important elements of the LCA Standard are better understood, we can
begin to think about the work needed to get data for your study. In this chapter, we
introduce the inventory analysis phase of the LCA Standard, as well as acquiring and using data
needed for the inventory phase of an LCA or an LCI study. As data collection,
management, and modeling are typically the most time-consuming components of an LCA,
understanding how to work with data is a critical skill. We build on concepts from Chapter
2 in terms of referencing and quantitative modeling. Improving your qualitative and
quantitative skills for data management will enhance your ability to perform great LCAs.
While sequentially this chapter is part of the content on process-based life cycle assessment,
much of the discussion is relevant to LCA studies in general.

Learning Objectives for the Chapter


At the end of this chapter, you should be able to:

- Describe the workflow of the life cycle inventory phase
- Recognize how challenges in data collection may lead to changes in study design
parameters (SDPs), and vice versa
- Map information from LCI data modules into a unit process framework
- Explain the difference between primary and secondary data, and when each might be
appropriate in a study
- Document the use of primary and secondary data in a study
- Create and assess data quality requirements for a study
- Perform an interpretation analysis on LCI results
- Extract data and metadata from LCI data modules and use them in support of a
product system analysis

ISO Life Cycle Inventory Analysis


After reviewing the ISO LCA Standard and its terminology in Chapter 4, you should be able
to envision the level and type of effort needed to perform an inventory analysis of a chosen
product system. Every study using the ISO Standard has an inventory analysis phase, but as
discussed above, many studies end at this phase and are called LCI studies. Those that
continue on to impact assessment are LCAs. That does not mean that LCI studies have
better inventory analyses than LCAs; in fact, LCAs may require more comprehensive
inventory analyses to support the necessary impact assessment.
Figure 5-1, developed by the US EPA, highlights the types of high-level inputs and outputs
that we might care to track in our inventory analysis. As originally mentioned in Chapter 1,
we may be concerned with accounting for material, energy, or other resource inputs, and
product, intermediate, co-product, or release outputs. Recall that based on how you define
your goal, scope, and system boundary, you may be concerned with all or some of the inputs
and outputs defined in Figure 5-1.

Figure 5-1: Overview of Life Cycle Assessment (Source: US EPA 1993)

Inventory analysis follows a straightforward and repeating workflow, which involves the
following steps (as taken from ISO 14044:2006), done as needed until the inventory analysis
matches the then-current goal and scope:

1. Preparation for data collection based on goal and scope
2. Data Collection
3. Data Validation (do this even if reusing someone else's data)
4. Data Allocation (if needed)
5. Translating Data to the Unit Process
6. Translating Data to the Functional Unit
7. Data Aggregation

As the inventory analysis process is iterated, the system boundary and/or goal and scope
may be changed (recall the two-way arrows in Figure 4-1). The procedure is as simple as
needed, and gets more complex as additional processes and flows are added. Each of the
inventory analysis steps is discussed in more detail below, with brief examples. Several
more detailed examples are shown later in the chapter.
Step 1 - Preparation for data collection based on goal and scope
The goal and scope definition guides which data need to be collected (noting that the goal
and scope may change iteratively during the course of your study and thus may cause
additional data collection effort or previously collected data to be discarded). A key
consideration is the product system diagram and the chosen system boundary. The
boundary shows which processes are in the study and which are not. For every unit process
in the system boundary, you will need to describe the unit process and collect quantitative
data representing its transformation of inputs to outputs. For the most fundamental unit
processes that interface at the system boundary, you will need to ensure that the inputs and
outputs are those elementary flows that pass through the system boundary. For other unit
processes (which may not be connected to those elementary flow inputs and outputs) you
will need to ensure they are connected to each other through non-elementary flows such as
intermediate products or co-products.
When planning your data collection activities, keep in mind that you are trying to represent
as many flows as possible in the unit process shown in Figure 5-2. Where a flow is placed at
the top, bottom, left, or right of such a diagram does not matter; what matters is that inputs
flow into, and outputs flow out of, the unit process box. You want to
quantitatively represent all inputs, either from nature or from the technosphere (defined as
the human-altered environment, and thus including flows like products from other processes). By covering
all natural and human-affected inputs, you have covered all possible inputs. You want to
quantitatively represent outputs, either as products, wastes, emissions, or other releases.
Inputs from nature will come from resources in the ground or water. Outputs to nature will
be in the form of emissions or releases to "compartments" in the ground, air, or water.

Figure 5-2: Generalized Unit Process Diagram

As a tangible example, imagine a product system like the mobile phone example in Chapter 4
where we have decided that the study should track water use as an input. Any of the unit
processes within the system boundary that directly uses water will need a unit process
representation with a quantity of water as an input and some quantitative measure of output
of the process. For mobile phones, such processes that use water as a direct input from
nature may include plastic production, energy production, and semiconductor
manufacturing. Other unit processes within the boundary may not directly consume water,
but may tie to each other through flows of plastic parts or energy. They themselves will not
have water inputs, but by connecting them all together, in the end, the water use of those
relevant sectors will still be represented. The final overall accounting of inventory inputs
and/or outputs across the life cycle within the system boundary is called a life cycle
inventory result (or LCI result).
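The rollup just described can be sketched in a few lines of Python. All process names and quantities here are invented for illustration; a real study would use measured or database values for each unit process within the boundary.

```python
# Hypothetical sketch of rolling up a water-use LCI result for one mobile phone.
# All process names and quantities are invented for illustration.
processes = {
    # direct water use per unit of each process's output (liters)
    "plastic production": {"water_l_per_kg": 50.0},
    "semiconductor mfg":  {"water_l_per_chip": 30.0},
    "electricity":        {"water_l_per_kwh": 2.0},
}

# Amounts of each intermediate flow required per functional unit (one phone),
# found by following the product flows through the system boundary.
per_phone = {"plastic_kg": 0.1, "chips": 2, "electricity_kwh": 5.0}

# Scale each process's direct water use by the amount its output is needed,
# then sum across the boundary to get the LCI result.
lci_water = (
    per_phone["plastic_kg"] * processes["plastic production"]["water_l_per_kg"]
    + per_phone["chips"] * processes["semiconductor mfg"]["water_l_per_chip"]
    + per_phone["electricity_kwh"] * processes["electricity"]["water_l_per_kwh"]
)
print(f"LCI result: {lci_water:.1f} liters of water per phone")
```

Note that the assembly processes linking these flows need no water inputs of their own; the water use of the upstream processes is still captured once everything is connected and scaled to the functional unit.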
The unit process focus of LCA drives the need for data to quantitatively describe the
processes. If data are unavailable or inaccessible, then the product system, system boundary,
or goal may need to be modified. Data may be available but found not to fit the study. For
example, an initial system boundary may include a waste management phase, but months of
effort could fail to find relevant disposition data for a specific product of the process. In
this case, the system boundary may need to be adjusted (made smaller) and other SDPs
edited to represent this lack of data in the study. On the other hand, data that is assumed to
not be available at first may later be found, which would allow an expansion of the system
boundary. In general, system boundaries are made smaller, not larger, over the course of a
study.
Step 2 - Data Collection
For each process within the system boundary, ISO requires you to "measure, calculate, or
estimate" data to quantitatively represent the process in your product system model. In
LCA, the "gold standard" is to collect your own data for the specific processes needed, called
primary data collection. This means directly measuring inputs and outputs of the process
on-site for the specific machinery use or transformation that occurs. For example, if you
required primary data for energy use of a process in an automobile assembly line that fastens
a component on to the vehicle with a screw, you might attach an electricity meter to the
piece of machinery that attaches the screw. If you were trying to determine the quantity of
fuel or material used in an injection molding process, you could measure those quantities as
they enter the machine. If you were trying to determine the quantity of emissions you could
place a sensor near the exhaust stack.
If you collect data with methods like this, intended to inventory per-unit use of inputs or
outputs, you need to use statistical sampling and other methods to ensure you generate
statistically sound results. That means not simply attaching the electricity meter one time, or
measuring fuel use or emissions during one production cycle (one unit produced). You
should repeat the same measurement multiple times, and perhaps on multiple pieces of
identical equipment, to ensure that you have a reasonable representation of the process and
to guard against the possibility that you happened to sample a production cycle that was
overly efficient or inefficient with respect to the inputs and outputs. The ISO Standard gives
no specific guidance or rules for how to conduct repeated samples or the number of samples
to find, but general statistical principles can be used for these purposes. Your data collection
summary should then report the mean, median, standard deviation, and other statistical
properties of your measurements. In your inventory analysis you can then choose whether
to use the mean, median, or a percentile range of values.
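The summary statistics described above can be computed with Python's standard library. The eight electricity readings below are invented values for the fastening-station example, including one deliberately high cycle.

```python
import statistics

# Hypothetical repeated measurements of electricity use (kWh) for one
# fastening operation, sampled across production cycles; values are invented.
samples = [0.52, 0.49, 0.55, 0.51, 0.48, 0.53, 0.50, 0.62]

mean = statistics.mean(samples)
median = statistics.median(samples)
stdev = statistics.stdev(samples)  # sample standard deviation

print(f"mean={mean:.3f} kWh, median={median:.3f} kWh, stdev={stdev:.3f} kWh")
```

Here the one inefficient cycle (0.62 kWh) pulls the mean above the median; reporting mean, median, and standard deviation together lets readers judge how representative any single value is.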
Note that many primary data collection activities cannot be completed as described above.
It may not be possible to gain access to the input lines of a machine to measure input use on
a per-item processed basis. You thus may need to collect data over the course of time and
then use total production during that time to normalize the unit process inventory. For the
examples in the previous paragraph, you might collect electricity use for a piece of machinery
over a month and then divide by the total number of vehicles that were assembled. Or you
may track the total amount of fuel and material used as input to the molding machine over
the course of a year. In either case, you would end up with an averaged set of inputs and/or
outputs as a function of the product(s) of the unit process. The same general principles
discussed above apply here with respect to collecting multiple samples. In this case you could
gather several monthly or yearly values to compute an average, median, or range.
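Sketching that approach with hypothetical numbers, each period's total can be normalized by that period's production, and the normalized values then summarized across periods:

```python
import statistics

def per_unit_values(period_totals_kwh, period_units):
    """Normalize each period's total electricity use by that period's production."""
    return [kwh / units for kwh, units in zip(period_totals_kwh, period_units)]

# Hypothetical monthly electricity totals (kWh) and vehicles assembled per month
monthly_kwh = [120000.0, 126000.0, 118800.0]
monthly_units = [1000, 1050, 990]

per_unit = per_unit_values(monthly_kwh, monthly_units)
average_kwh_per_vehicle = statistics.mean(per_unit)
```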
The ISO Standard (14044:2006, Annex A) gives examples of "data collection sheets" that
can support your primary data collection activities. Note that these are only examples, and
that your sheets may look different. The examples are provided to ensure, among other
things, that you are recording quantities and units, dates and locations of record keeping, and
descriptions of sampling done. The most likely scenario is that you will create electronic
data collection sheets by recording all information in a spreadsheet. This is a fair choice
because, from our perspective, Microsoft Excel is the most widely used software tool in
support of LCA. Even practitioners using other advanced LCA software packages still
typically use Microsoft Excel for data management, intermediate analysis, and graphing.
Life Cycle Assessment: Quantitative Approaches for Decisions That Matter lcatextbook.com

Chapter 5: Data Acquisition and Management for Life Cycle Inventory Analysis


Collecting primary data can be difficult or impossible if you do not own all the equipment or
do not have direct access to it either due to geographical or organizational barriers. This is
often the case for an LCA consultant who may be tasked with performing a study for a client
but who is given no special privileges or access to company facilities. Further, you may need
to collect data from processes that are deemed proprietary or confidential by the owner.
This is possible in the case of a comparative analysis with some long-established industry
practice versus a new technology being proposed by your client or employer. In these cases,
the underlying data collection sheets may be confidential. Your analysis may in these cases
only "internally use" the average data points without publicly stating the quantities found in
any subsequent reports. If the study is making comparative assertions, then it may be
necessary to grant to third-party reviewers (who have signed non-disclosure agreements)
access to the data collection sheets to appreciate the quality of the data and to assess the
inventory analysis done while maintaining overall confidentiality.
Beyond issues of access, while primary data is considered the "gold standard," there are
various reasons why the result may not be as good as expected in the context of an LCA
study. First, the data is only as good as the measurement device (see accuracy and precision
discussion in Chapter 2). Second, if you are not able to measure it yourself then you
outsource the measurement, verification, and validation to someone else and trust them to
do exactly as you require. Various problems may occur, including issues with translation
(e.g., when measuring quantities for foreign-owned or contracted production) or not finding
contacts with sufficient technical expertise to assist you. Third, you must collect data on
every input and output of the process relevant to your study. If you are using only an
electric meter to measure a process that also emits various volatile organic compounds, your
collected data will be incomplete with respect to the full litany of inputs and outputs of the
process. Your inventory for that process would undercount any other inputs or outputs.
This is important because if other processes in your system boundary track volatile organics
(or other inputs and outputs) your primary data will undercount the LCI results.
The alternative to primary data collection is to use secondary data (the "calculating and
estimating" referenced above). Broadly defined, secondary data comes from life cycle
databases, literature sources (e.g., from searches of results in published papers), and other
past work. It is possible you will find data closely, but not exactly, matching the required
unit process. Typical tradeoffs to accessibility are that the secondary data identified is for a
different country, a slightly different process, or averaged across similar machinery. That
does not mean you cannot use it; you just need to carefully document the differences
between the process data you are using and the specific process needed in your study. While
deemed inferior given the use of the word secondary, in some cases secondary data may be
of comparable or higher quality than primary data. Secondary data is typically discoverable
because it has been published by the original author who generated it as primary data for
their own study (and thus is typically of good quality). In short, one analyst's primary data
may be another's secondary data. Again, the "secondary" designation is simply recognition
that it is being "reused" from a previously existing source and not collected new in your own
study. Many credible and peer reviewed studies are constructed mostly or entirely of
secondary data. More detail on identifying and using secondary data sources like LCI
databases is below.
For secondary data, you should give details about the secondary source (including a full
reference), the timestamp of the data record, and when you accessed it. In both cases you
must quantitatively maintain the correct units for the inputs and outputs of the unit process.
While not required, it is convenient to make tables that neatly summarize all of this
information.
Regardless of whether your data for a particular process comes from a primary or secondary
source, the ISO Standard requires you to document the data collection process, give details
on when data have been collected, and other information about data quality. Data quality
requirements (DQRs) are required scope items that we did not discuss in Chapter 4 as part
of the SDP, but characterize the fundamental expectations of data that you will use in your
study. As specified by ISO 14044:2006, these include statements about your intentions with
respect to age of data, geographical reach, completeness, sources, etc. Data quality
indicators are summary metrics used to assess the data quality requirements.
For example, you may have a data quality requirement that says that all data will be primary,
or at least secondary but from peer-reviewed sources. For each unit process, you can have a
data quality indicator noting whether it is primary or secondary, and whether it has been
peer-reviewed. Likewise, you may have a DQR that says all data will be from the same
geographical region (e.g., a particular country like the US or a whole region like North
America). It is convenient to summarize the DQRs in a standardized tabular form. The first
two columns of Figure 5-3 show a hypothetical DQR table partly based on text from the
2010 Christmas tree study mentioned previously. The final column represents how the
requirements might be indicated as a summary in a completed study. The indicated values
are generally aligned with the requirements (as they should be!).
Data Quality Category | Requirement | Data Quality Indicator
Temporal | Data within 10 years of study | Artificial trees: 2009 data; Natural trees: 2002-2009 data
Geographical | Data matches local production | Artificial trees: China; Natural trees: US
Technological | Most common production process basis | All processes used in study are representative of most common practices

Figure 5-3: Sample Data Quality Requirements (DQR) Table
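One way to operationalize such a table is to check each unit process record against a stated requirement. This sketch applies the temporal requirement from Figure 5-3 to a set of hypothetical records (the study year, field names, and process names are all illustrative assumptions):

```python
STUDY_YEAR = 2010       # hypothetical study year
MAX_AGE_YEARS = 10      # DQR: data within 10 years of study

def meets_temporal_dqr(record, study_year=STUDY_YEAR, max_age=MAX_AGE_YEARS):
    """Indicator: True if the record's data year falls within the allowed age."""
    return 0 <= study_year - record["data_year"] <= max_age

# Hypothetical unit process records
records = [
    {"process": "artificial tree production", "data_year": 2009},
    {"process": "natural tree farming",       "data_year": 2002},
    {"process": "legacy disposal process",    "data_year": 1995},
]

indicators = {r["process"]: meets_temporal_dqr(r) for r in records}
```

Similar boolean checks could be written for geographical and technological requirements, giving a compact indicator column for the DQR summary table.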


Beyond using primary or secondary data, you might need to estimate the parameters for
some or all of the input and outputs of a unit process using methods as introduced in
Chapter 2. Your estimates may be based on data for similar unit processes (but which you
deem to be too dissimilar to use directly), simple transformations based on rules of thumb,
or triangulated averages of several unit processes. From a third-party perspective, estimated
data is perceived as lower quality than primary or secondary sources. However, when those
sources cannot be found, estimating may be the only viable alternative.
Example 5-1: Estimating energy use for a service
Question:
Consider that you are trying to generate a unit process associated with an
internal corporate design function as part of the life cycle "overhead" of a particular product
and given the scope of your study need to create an input use of electricity. Your company is all
located in one building. There is no obvious output unit for such a process, so you could define
it to be per 1 product designed, per 1 square foot of design space, etc., as convenient for your
study.
Answer:
You could estimate the input electricity use for a design office over the course of
a year and then try to normalize the output. If you only had annual electricity use for the entire
building (10,000 kWh), and no special knowledge about the energy intensity of any particular
part of the building as subdivided into different functions, you could find the ratio of the total
design space in square feet (2,000 sf) as compared to the total square feet of the building
(50,000 sf), and use that ratio (2/50) to scale down the total consumption to an amount used
for design over the course of a year (400 kWh). If your output was per product, you could then
further normalize the electricity used for the design space by the unique number of products
designed by the staff in that space during the year.
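The arithmetic in this example can be laid out explicitly. The building figures come from the example itself; the count of products designed per year is a hypothetical addition for the final normalization step:

```python
building_kwh_per_year = 10_000   # annual electricity use for the whole building
design_area_sf = 2_000           # design office floor space (sf)
building_area_sf = 50_000        # total building floor space (sf)

# Scale total consumption by the design office's share of floor area (2/50)
design_kwh_per_year = building_kwh_per_year * design_area_sf / building_area_sf

# Hypothetical: unique products designed in that space during the year
products_designed = 8
kwh_per_product_designed = design_kwh_per_year / products_designed
```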

You could add consideration of non-electricity use of energy (e.g., for heating or cooling)
with a similar method. Note that such ancillary support services like design, research and
development, etc., generally have been found to have negligible impacts, and thus many
studies exclude these services from their system boundaries.
Step 3 - Data Validation
Chapter 2 provided some general guidance on validating research results. With respect to
validating LCI data, you generally need to consider the quantitative methods used and ensure
that the resulting inventories meet your stated DQRs. Data validation should be done after
data is collected but before you move on to the actual inventory modeling activities of your
LCA.


As an example of validation, it may be useful to validate energy or mass balances of your
processes. Using the injection molding process example from Step 2, one would expect
the total input mass of material to be greater than (but approximately equal to) the output
mass of molded plastic. You can ensure that the total mass input of plastic resin, fuels, etc.,
is roughly comparable to the mass of molded plastic (subject to reasonable losses). If the
balances are deemed uneven, you can assess whether the measured process is merely
inefficient or whether there is a problem in your data collection, and thus resample.
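A minimal sketch of such a mass-balance check might look like the following; the masses and the 5% loss tolerance are hypothetical assumptions, to be replaced by values appropriate for the process under study:

```python
def mass_balance_ok(input_kg, output_kg, max_loss_fraction=0.05):
    """Check that output does not exceed input and that losses stay within tolerance."""
    if output_kg > input_kg:
        return False  # the process cannot create mass; data is suspect
    loss_fraction = (input_kg - output_kg) / input_kg
    return loss_fraction <= max_loss_fraction

# Hypothetical injection molding data: resin and fuel in vs. molded plastic out (kg)
plausible = mass_balance_ok(102.0, 100.0)   # about 2% loss
suspect = mass_balance_ok(150.0, 100.0)     # about 33% loss: resample or investigate
```

A failed check does not by itself prove the data is wrong; as the text notes, the process may simply be inefficient, but the flag tells you where to look.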
You can use available secondary data to validate primary data collection. If you have chosen
to collect your own data for a process that is similar to processes for which there is already
secondary data available, you can quantitatively compare your measured results with the
published data. Again, if there are significant differences then you will need to determine the
source of the discrepancy. You can validate secondary data that you have chosen to use
against other sources in similar ways.
The results of validation efforts can be included in the main text of your report or in an
included Appendix, depending on the level of detail and explanation needed. If you
collected primary data and compared it to similar data from the same industry, the following
text might be included to show this:
"Collected data from the year 2012 on the technology-specific process used in this study
was compared to secondary data on the similar injection molding process from 2005
(Reference). The mean of collected data was about 10% lower than the secondary data.
This difference is not significant, and so the collected data is used as the basis for the
process in the study."
If validation suggests the differences are more substantial, that does not automatically mean
that the data is invalid. It is possible that there are no good similar data sources to compare
against, or that the technology has changed substantially. That too could be noted in the
study, such as:
"Collected data from the year 2012 on the technology-specific process used in this study
was compared to secondary data on the similar injection molding process from 2005
(Reference). The mean of collected data was about 50% lower than the secondary data.
This difference is large and significant, but is attributed to the significant improvements
in the industry since 2005, and so the collected data is still chosen as the basis for the
process in the study."
As noted above, the validation step is where you re-assess whether the quantitatively sound
data you want to use also is within the scope of your DQRs. Many studies state DQRs to
use all primary data at the outset, but subsequently realize it is not possible. Likewise, studies
may not be able to find sufficient geographically focused data. In both cases, the DQRs
would need to be iteratively adjusted as the study continues. This constant refining of the
initial goal and scope may sound like "cheating", but the purpose of the DQRs is to provide a single
and convenient summary of the study goals for data quality. It allows a reader to quickly get
a sense of how relevant the study results are given the final DQRs. While not required, you
can state initial goal DQRs alongside final DQRs upon completion of the study.
Step 4 - Data Allocation (if needed)
Allocation will be discussed more in Chapter 6, but in short, allocation is the quantitative
process done by the study analyst to assign specific quantities of inputs and outputs to the
various products of a process based on some mathematical relation between the products.
For example, you may have a process that produces multiple outputs, such as a petroleum
refinery process that produces gasoline, diesel, and other fuels and oils. Refineries use a
significant amount of energy. Allocation is needed to quantitatively connect the energy input
to each of the refined products. Without specified allocation procedures, the connections
between those inputs and the various products could be made haphazardly. The ISO
Standard suggests that the method you use to perform the allocation should be based on
underlying physical relationships (such as the share of mass or energy in the products) when
possible. For example, if your product of interest is gasoline, you will need to determine
how much of the total refinery energy was used to make the gasoline. For a mass allocation,
you could calculate it by using the ratio of the mass of the gasoline produced to the total
mass of all of the products. You may have to further research the energetics of the process
to determine what allocation method is most appropriate.
If physical relationships cannot be established, then other methods, such as economic allocation
by eventual sale price, could be used. ISO also says that you should consistently choose
allocation methods as much as possible across your product system, meaning that you
should try not to use a mass-based allocation most of the time and an energy-based
allocation some of the time. This is because mixing allocation methods could be viewed by
your audience or reviewers as a way of artificially biasing the results by picking allocations
that would provide low or high results. Allocation is conceptually similar to the design-space
electricity estimate in Example 5-1. Most allocations are just linear transformations of effects.
When performing allocation, the most important considerations are to fully document the
allocation method chosen (including underlying allocation factors) and to ensure that total
inputs and outputs are equal to the sum of the allocated inputs and outputs. It is possible
that none of your unit processes have multiple products, and thus you do not need to
perform allocation. You might also be able to avoid allocation entirely, as we will see later.
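A mass-based allocation like the refinery example can be sketched as follows. The energy total and product masses are hypothetical; the final check enforces the requirement that allocated inputs sum back to the total input:

```python
def allocate_by_mass(total_input, product_masses):
    """Allocate a total input to co-products in proportion to their masses."""
    total_mass = sum(product_masses.values())
    return {name: total_input * mass / total_mass
            for name, mass in product_masses.items()}

# Hypothetical refinery: total energy input (MJ) shared across co-products (kg)
energy_mj = 1_000.0
masses = {"gasoline": 450.0, "diesel": 300.0, "other": 250.0}
allocated = allocate_by_mass(energy_mj, masses)

# The allocated inputs must sum to the total input (no energy gained or lost)
assert abs(sum(allocated.values()) - energy_mj) < 1e-9
```

An economic allocation would have the same structure, with sale-price shares replacing mass shares; keeping the structure identical makes it easy to apply one method consistently across the product system.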
Step 5 - Translating Data to the Unit Process
In this step you convert the various collected data into a representation of the output of the
unit process. Regardless of how you have defined the study overall, this step requires you to
collect all of the inputs and outputs as needed for 1 unit output from that process. From


Example 5-1, you would ensure that the electricity input matched the unit basis of your
product flow (e.g., per 1 product designed). This result also needs to be validated.
Step 6 - Translating Data to the Functional Unit
The reason why this step is included in the ISO LCA Standard is to remind you that you are
doing an overall study on the basis of 1 functional unit of product output. Either during the
data collection phase, or in subsequent analysis, you will need to do a conversion so that the
relative amount of product or intermediate output of the unit process is related to the
amount needed per functional unit. Eventually, all of your unit process flows will need to be
converted to a per-functional unit basis. If all unit processes have been so modified, then
finding the total LCI results per functional unit is a trivial procedure. From Example 5-1,
the design may be used to eventually produce 1 million of the widgets. The electricity use for
one product design must be distributed to the 1 million widgets so that you will then have
the electricity use for a single widget in the design phase (a very small amount). This result
also needs to be validated.
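Continuing Example 5-1, the conversion to a per-functional-unit basis is a simple division. The 400 kWh design estimate and the one million widgets come from the text; the code simply makes the step explicit:

```python
design_kwh = 400.0             # electricity for one product design (Example 5-1)
widgets_produced = 1_000_000   # widgets eventually produced from that design

# Per-functional-unit flow: design-phase electricity attributable to one widget
kwh_per_widget = design_kwh / widgets_produced
```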
Step 7 - Data Aggregation
In this step, all unit process data in the product system diagram are combined into a single
result for the modeled life cycle of the system. What this typically means is summing all
quantities of all inputs and outputs into a single total result on a functional unit basis.
Aggregation occurs at multiple levels. Figure 4-4 showed the various life cycle stages within
the view of the product system diagram. A first level of aggregation may add all inputs and
outputs under each of the categories of raw material acquisition, use, etc. A second level of
aggregation may occur across all of these stages into a final total life cycle estimate of inputs
and outputs per functional unit. Aggregated results are often reported in a table showing
total inputs and outputs as per-process or per-stage values, and then a sum for the entire
product system. Example 5-2 shows aggregated results for a published study on wool from
sheep in New Zealand. The purpose of such tables is to emphasize category level results,
such as that half of the life cycle energy use occurs on farm. Results could also be graphed.
Example 5-2: Aggregation Table for Published LCA on Energy to Make Wool
(Source: The AgriBusiness Group, 2006)

Life Cycle Stage | Energy Use (GJ per tonne wool)
On Farm | 22.6
Processing | 21.7
Transportation | 1.5
Total | 45.7
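Aggregation across stages is then just a sum of per-functional-unit values. This sketch aggregates two flows across hypothetical stage-level results (the stage names follow the text; the numbers are illustrative, not from the wool study):

```python
# Hypothetical per-stage inventory results, per functional unit
stage_results = {
    "raw material acquisition": {"energy_MJ": 12.0, "co2_kg": 0.9},
    "manufacturing":            {"energy_MJ": 30.0, "co2_kg": 2.1},
    "use":                      {"energy_MJ": 55.0, "co2_kg": 4.0},
    "end of life":              {"energy_MJ": 3.0,  "co2_kg": 0.5},
}

def aggregate(stages):
    """Sum each input/output flow across all life cycle stages."""
    totals = {}
    for flows in stages.values():
        for flow, amount in flows.items():
            totals[flow] = totals.get(flow, 0.0) + amount
    return totals

totals = aggregate(stage_results)
```

The bookkeeping concern raised later in the text applies here: the sum is only meaningful if every stage value is already on the same per-functional-unit basis.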


Beyond such tables, product system diagrams may be annotated with values for different
levels of aggregation by adding quantities per functional unit. Example 5-3 shows a diagram
for a published study on life cycle effects of bottled water and other beverage systems
performed for Nestle Waters. Such values can then be aggregated into summary results.
Example 5-3: Aggregation Diagram for Bottled Water (Source: Quantis, 2010)

We have implied above that aggregation of results occurs over a relatively small number of
subcomponents. However, a product system diagram may be decomposed into multiple sets
of tens or hundreds of constituent pieces that need to be aggregated. If all values for these
subcomponents are on a functional unit basis, the summation is not difficult, but the
bookkeeping of quantities per subcomponent remains an issue. If the underlying
subcomponent values are not consistently on a per functional unit basis, units of analysis
should be double checked to ensure they can be reliably aggregated.


Life Cycle Interpretation


Because some studies only include an inventory (LCI), we discuss Interpretation, the final
step for all LCAs and LCIs, now. For those studies (LCAs) that also include an impact
assessment, the procedures for the assessment will be discussed in Chapter 10. There is
little detail provided in the ISO Standard on what must be done in this phase, but in short,
interpretation is similar to the last step of the "three step" method introduced in Chapter 2.
The interpretation phase refers to studying the results of the goal and scope, inventory
analysis, and impact assessment, in order to make conclusions and recommendations that
can be reported. As shown in Figure 4-1, interpretation is iterative with the three other
phases. As this chapter is focused on inventory analysis, much of the discussion and
examples provided relate to interpreting inventory results, but the same types of
interpretation can be done with impact assessment results (to be discussed in Chapter 10).
A typical first task in interpretation is to study your results to determine whether conclusions
can be made based on the inventory results that are consistent with the goal and scope. One
of the most common and important interpretation tasks involves discussing which life cycle
stage leads to the largest share of LCI results. A high-level summary helps to set the stage
for subsequent analyses. For example, an LCA of a vehicle will likely show that the use
phase (driving the car) is the largest energy user, as compared to manufacturing and
recycling. An interpretation task could involve creating a tabular or graphical summary
showing the energy use contributions for each of the stages.
Part of your goal statement may have been to do a comparison between two types of
products and assess whether the life cycle energy use of one is significantly less than the
other. If your inventory results for the two products are nearly identical (say only 1%
different) then it may be difficult to scientifically conclude that one is better than the other
given the various uncertainties involved. Such an interpretation result could cause you to
directly state that no appreciable difference exists, or it may cause you to change the system
boundary in a way that ends up making them significantly different.
A key part of interpretation is performing relevant sensitivity analyses on your results. The
ISO Standard does not require specific sensitivity analysis scenarios as part of interpretation,
but some consideration of how alternative parameters for inputs, outputs, and methods used
(e.g., allocation) would affect the final results is necessary. As discussed in Chapter 2, a main
purpose of sensitivity analysis is to help assess whether a qualitative conclusion is affected by
quantitative changes in the parameters of the study. For example, if your general qualitative
conclusion is that product A uses significantly less energy than product B, the sensitivity
analysis may test whether different quantitative assumptions related to A or B lead to results
where energy use of A is roughly equal to B, or where A is greater than B. Either of the latter
two outcomes is qualitatively different from the initial conclusion, and it would be important
to state the sensitivity results so that it is clear there is a variable that, if credibly changed by
a specified amount, has the potential to alter the study conclusions.
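A sensitivity test of that kind can be sketched by recomputing the comparison while one parameter is varied. The model and all values below are hypothetical; the point is the structure of the test, not the numbers:

```python
def lifetime_energy(mj_per_use, uses_per_year, years):
    """Simple hypothetical use-phase energy model for a product."""
    return mj_per_use * uses_per_year * years

# Base case: product A appears clearly lower than product B
a_base = lifetime_energy(2.0, 100, 10)   # 2,000 MJ
b_base = lifetime_energy(3.0, 100, 10)   # 3,000 MJ
base_conclusion_a_lower = a_base < b_base

# Sensitivity: suppose product A's per-use energy were 60% higher than assumed
a_high = lifetime_energy(2.0 * 1.6, 100, 10)
conclusion_flips = a_high > b_base  # the qualitative conclusion would reverse
```

If a credible parameter change of this size flips the conclusion, the study should say so explicitly.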


While on the subject of assessing comparative differences, it is becoming common for
practitioners in LCA to use a "25% rule" when testing for significant differences. The 25%
rule means that the difference between two LCI results, such as for two competing products,
must be more than 25% different for the results to be deemed significantly different, and
thus for one to be declared as lower than the other. While there is not a large quantitative
framework behind the choice of 25% specifically, this heuristic is common because it
roughly expresses the fact that all data used in such studies is inherently uncertain; by
requiring differences of more than 25%, relatively small differences are deemed too small to be
noted in study conclusions. We will talk more about modeling and assessing uncertainties in
Chapter 11 on uncertainty.
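The heuristic can be expressed directly in code. This sketch computes the relative difference against the larger of the two results (one reasonable convention among several) and flags significance only above the 25% threshold; the LCI values are hypothetical:

```python
def significantly_different(a, b, threshold=0.25):
    """Apply the '25% rule': results differ only if the relative gap exceeds the threshold."""
    relative_difference = abs(a - b) / max(a, b)  # normalize by the larger result
    return relative_difference > threshold

# Hypothetical LCI energy results (MJ per functional unit) for two products
close = significantly_different(100.0, 90.0)   # 10% apart: not significant
clear = significantly_different(100.0, 60.0)   # 40% apart: significant
```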
Interpretation can also serve as an additional check on the goal and scope parameters. This
is where you could assess whether a system boundary is appropriate. As an example, while
the ISO Standard encourages full life cycle stage coverage within system boundaries, it does
not require that every LCA encompass all stages. One could try to defend the validity of a
life cycle study of an automobile that focused only on manufacturing, or only on the use
stage. The results of the interpretation phase could then internally weigh in on whether such
a decision was appropriate given the study goal. If a (qualified) conclusion can be drawn, the
study could be left as-is; if not, a broader system boundary could be chosen, with or without
preliminary LCI results.
Regardless, the real purpose of interpretation is to improve the quality of your study,
especially the quality of the written conclusions and recommendations that arise from your
quantitative work. As with other quantitative analysis methods, you will need to also
improve your qualitative skills, including documentation, to ensure that your interpretation
efforts are respected.

Identifying and Using Life Cycle Data Sources


In support of modeling the inputs and outputs associated with unit processes, you will need
a substantial amount of data. Even studies of simple product systems may require data on
10 different unit processes. While this may sound like a small amount of effort, as you will
see below, the task of finding, documenting, manipulating, validating and using life cycle data
is time consuming. The text above gave a fair amount of additional detail related to
developing your own primary data via collection and sampling efforts. This section is related
to the identification and use of secondary data.
One prominent source of secondary data is the thousands of peer-reviewed journal papers
done over time by the scientific community, also known as literature sources. Some of
these papers have been explicitly written to be a source of secondary data, while authors of
other papers developed useful data in the course of research (potentially on another topic)


and made the process-level details available as part of the paper or in its supporting
information. Sometimes the study authors are not just teams of researchers, but industry
associations or trade groups (e.g., those trying to disseminate the environmental benefits of
their products). Around the world, industry groups like Plastics Europe, the American
Chemistry Council, and the Portland Cement Association have sponsored projects to make
process-based data available via publications. It is common to see study authors citing
literature sources, and doing so requires you to simply use a standard referencing format like
you would for any source. Unfortunately, data from such sources is typically not available in
electronic form, and thus there is potential for data entry or transcription errors as you try
to make use of the published data. It is due to issues like these that literature sources
constitute a relatively small share of secondary data used in LCA studies.
There is a substantial amount of secondary data available to support LCAs in various life
cycle databases. These databases are the main source of convenient and easy to access
secondary data. Some of the data represented in these databases are from the literature
sources mentioned above. Since the first studies mentioned in Chapter 1, various databases
comprised of life cycle inventory data have been developed. The original databases were
sold by Ecobilan and others in the mid-1990s. Nowadays the most popular and rigorously
constructed database is from ecoinvent, developed by teams of researchers in Switzerland
and available either by paying directly for access to their data website or by an add-on fee to
popular LCA system tools such as SimaPro and GaBi (which in turn have their own
databases). None of these databases are free, and a license must be obtained to use them.
On the other hand, there are a variety of globally available and publicly accessible (free) life
cycle databases. In the US, LCI data from the National Renewable Energy Laboratory
(NREL)'s LCI database and the USDA's LCA Digital Commons are popular and free.3
Figure 5-4 summarizes the major free and paid life cycle databases (of secondary data) in the
world that provide data at the unit process level for use in life cycle studies. Beyond the
individual databases, there is also an "LCA Data Search Engine," managed by the United
Nations Environmental Programme (UNEP), that can assist in finding available free and
commercial unit process data (LCA-DATA 2013). All of the databases have their own user's
guides that you should familiarize yourself with before searching or using the data in your
own studies.

3 Data from the US NREL LCI Database has been transferred over to the USDA LCA Digital Commons as of 2012. Both datasets can now be accessed from that single web database.


Database | Approximate Cost | Number of processes | Notes
ecoinvent | 2,500 Euros ($3,000 USD) | 4,000+ | Has data from around the world, but majority is from Europe. Available directly, or embedded within LCA software.
US NREL LCI Database | Free (companies, agencies pay to publish data) | 600+ | US focused. Now hosted by USDA LCA Digital Commons.
USDA LCA Digital Commons | Free (manufacturers and agencies pay to publish data) | 300+ | Focused on agricultural products and processes. Geographically specific unit processes for specific US states.
ELCD | Free | 300+ | Relatively few processes, spread across various sectors. Additional data being added rapidly.
BEES | Free | — | Focused on building and construction materials.
GaBi | $3,000 USD | 5,000+ | Database made by PE International. Global, but heavily focused on European data.

Figure 5-4: Summary of Data Availability for Free and Licensed LCA Databases
(Sources provided at end of chapter)

These databases can be very comprehensive, with each containing data on hundreds to
thousands of unique processes, with each process comprised of details for potentially
hundreds of input or output flows. Collecting the various details of inputs and outputs for a
particular unit process (which we refer to as an LCI data module but which are referred to
as "datasets" or "processes" by various sources) requires a substantial amount of time and
effort. This embedded level of effort matters because, even though a unit process represents
a secondary data source, creating a superior set of primary data for a study might require you
to collect data for 100 or more input and output flows for the process. Of
course your study may have a significantly smaller scope that includes only 5 flows, and thus
your data collection activities would only need to measure those. The databases do highlight
an ongoing conundrum in the LCA community: the naïve stated preference for primary
data when substantial high-quality secondary data is pervasive. Another benefit of these
databases is that subsets of the data modules are created and maintained consistently; thus, a
common set of assumptions or methods would be associated with hundreds of processes.
This is yet another difference from primary data, which could have a set of ad hoc
assumptions used in its creation.


Now that the availability of vast secondary data sources has been introduced, we discuss the
data structures typical of these LCI data modules. As with many facets of LCA, there is a
global standard for storing information in LCI data modules, known as EcoSpold. The
EcoSpold format is a structured way of storing and exchanging LCI data, where details such
as flows and allocation methods are classified for each process. There is no requirement that
LCA tools use the EcoSpold format, but given its popularity and the trend that all of the
database sources in Figure 5-4 use this format, it is worth knowing. Instead of giving details
on the format (which is fairly technical and generally only useful for personnel involved in
creating LCA software) we instead will demonstrate the way in which LCI data modules are
typically represented in the database and allow you to think about the necessary data
structures separately.
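As a rough illustration of the kind of data structure involved (a simplified sketch, not the actual EcoSpold schema; the class and field names here are our own), an LCI data module can be thought of as a named process with lists of input and output flows, each flow carrying a name, category, type, unit, and amount:

```python
from dataclasses import dataclass, field

# Simplified, illustrative stand-in for an LCI data module -- not the real
# EcoSpold schema, just the fields shown in the data module tables below.
@dataclass
class Flow:
    name: str        # e.g., "carbon dioxide, fossil"
    category: str    # e.g., "air/unspecified"
    flow_type: str   # "ProductFlow" or "ElementaryFlow"
    unit: str        # e.g., "kg", "kWh", "t*km"
    amount: float    # quantity per unit of the reference product

@dataclass
class LCIDataModule:
    name: str
    inputs: list = field(default_factory=list)
    outputs: list = field(default_factory=list)

    def reference_product(self) -> Flow:
        # The functional-unit basis: the first product flow among the outputs
        return next(f for f in self.outputs if f.flow_type == "ProductFlow")

# Example: the abridged coal-fired electricity module discussed in this chapter
elec = LCIDataModule("Electricity, bituminous coal, at power plant")
elec.inputs.append(Flow("bituminous coal, at mine", "root/flows", "ProductFlow", "kg", 0.442))
elec.outputs.append(Flow("carbon dioxide, fossil", "air/unspecified", "ElementaryFlow", "kg", 0.994))
elec.outputs.append(Flow("electricity, bituminous coal, at power plant", "root/flows", "ProductFlow", "kWh", 1.0))
print(elec.reference_product().unit)
```

Here the reference product's unit (kWh) identifies the functional-unit basis against which every other amount in the module is normalized.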
In the rest of this chapter we consider an LCI of the CO2 emitted to generate 1 kWh of coal-fired electricity in the United States. Our system boundary for this example (as in Figure 5-5)
has only three unit processes: mining coal, transporting it by rail, and burning it at a power
plant. The refinery process that produces diesel fuel, an input for rail, is outside of our
boundary, but the effects of using diesel as a fuel are included. We can assume, beyond the
fact that this is an academic example, that such a tight boundary is realistic because these are
known to be significant parts of the supply chain of making coal-fired power. We will
discuss the use of screening methods to help us set such boundaries in Chapter 8.

Figure 5-5: Product System Diagram for Coal-Fired Electricity LCI Example

To achieve our goal of the CO2 emitted per kWh, we will need to find process-level data for
coal mining, rail transportation, and electricity generation. In the end, we will combine the


results from these three unit processes into a single estimate of total CO2 per kWh. This way
of performing process-level LCA is called the process flow diagram approach.
We will focus on the US NREL LCI database (2013) in support of this relatively simple
example. This database has a built-in search feature such that typing in a process name or
browsing amongst categories will show a list of available LCI data modules (see the
Advanced Material at the end of this chapter for brief tutorials on using the LCA Digital
Commons website, which hosts the US LCI data, as well as other databases and tools).
Searching for "electricity" yields a list of hundreds of processes, including these LCI data
modules:

Electricity, diesel, at power plant

Electricity, lignite coal, at power plant

Electricity, natural gas, at power plant

Electricity, anthracite coal, at power plant

Electricity, bituminous coal, at power plant

The nomenclature used may be confusing, but is somewhat consistent across databases. The
constituents of the module name can be deciphered as representing (1) the product, (2) the
primary input, and (3) the boundary of the analysis. In each of the cases above, the unit
process is for making electricity. The inputs are various types of fuels. Finally, the boundary
is such that it represents electricity leaving the power plant (as opposed to at the grid, or at a
point of use like a building). Once you know this nomenclature, it is easier to browse the
databases to find what you are looking for specifically.
Given the above choices, we want to use one of the three coal-fueled electricity generation
unit processes in our example. Lignite and anthracite represent small shares of the
generation mix, so we choose bituminous coal as the most likely representative process and
use the last data module in the list above (alternatively, we could develop a weighted-average
process across the three coal types, which would also be useful). Using similar brief search methods in
the US NREL website we would find the following unit processes as relevant for the other
two pieces of our system:

Bituminous coal, at mine

Transport, train, diesel powered

These two processes represent mining of bituminous coal and the transportation of generic
product by diesel-powered train.


Figure 5-6 shows an abridged excerpt of the US NREL LCI data module for Electricity,
bituminous coal, at power plant. The entire data module is available publicly4. Within the US
NREL LCI database website, such data is found by browsing or searching for the process
name and then viewing the "Exchanges". These data modules give valuable information
about the specific process chosen as well as other processes they are linked to. While here
we discuss viewing the data on the website, it can also be downloaded to a Microsoft Excel
spreadsheet or as XML.
Note that this is an abridged view of the LCI data module. The complete LCI data
module consists of quantitative data for 7 inputs and about 60 outputs. For the sake of the
example in this section, we assume the abridged inventory and ignore the rest of the details.
Most of the data modules in databases have far more inputs and outputs than this
abridged module; it is not uncommon to find data modules with hundreds of outputs (e.g.,
for emissions of combustion processes). If you have a narrow scope that focuses on a few
air emission outputs, many of the other outputs can be ignored in your analysis. However, if
you plan to do life cycle impact assessment, the data in the hundreds of inputs and/or
outputs may be useful in the impact assessment. If your study seeks to do a broad impact
assessment, collecting your own primary data can be problematic, as your impact assessment
will all but require you to broadly consider the potential flows of your process. If you focus
instead on just a few flows you deem to be important, then the eventual impact assessment
could underestimate the impacts of your process. This is yet another danger of primary data
collection (undercounting flows).

4. Data from the NREL US LCI database in this chapter are as of July 20, 2014. Values may change in revisions to the database that are not reflected here.


Flow | Category | Type | Unit | Amount | Comment
Inputs:
bituminous coal, at mine | root/flows | ProductFlow | kg | 4.42e-01 |
transport, train, diesel powered | root/flows | ProductFlow | t*km | 4.61e-01 | Transport from mine to power plant
Outputs:
*electricity, bituminous coal, at power plant* | root/flows | ProductFlow | kWh | 1.00 |
carbon dioxide, fossil | air/unspecified | ElementaryFlow | kg | 9.94e-01 |

Figure 5-6: Abridged LCI data module from US NREL LCI Database for bituminous coal-fired
electricity generation. Output for functional unit italicized. (Source: US LCI Database 2012)

Figure 5-6 is organized into sections of data for inputs and outputs. At the top, we see the
abridged input flows into the process for generating electric power via bituminous coal.
Recalling the discussion of direct and indirect effects from Chapter 4, the direct inputs listed
are bituminous coal and train transport. The direct outputs listed are fossil CO2 emissions
(which is what results when you burn a fossil fuel) and electricity. Before discussing all of
the inputs and outputs, we briefly focus on the output section to identify a critical
component of the data module: the electricity output is listed as a product flow, with units
of 1 kWh. Every LCI process will have one or more outputs, and potentially have one or
more product flows as outputs, but this module has only one. That means that the
functional unit basis for this unit process is per (1) kWh of electricity. All other inputs and
outputs in Figure 5-6, representing the US NREL LCI data module for Electricity, bituminous
coal, at power plant are presented as normalized per 1 kWh. You could think of this module as
providing energy intensities or emissions factors per kWh. Thinking back to the discussion
above on data collection, it's unlikely that the study done to generate this LCI data module
actually measured the inputs and outputs needed to make just 1 kWh of electricity at a power
plant; that is too small a quantity. In reality, it is likely that the inputs and outputs were
measured over the course of a month or year, and then normalized by the total electricity
generation in kWh to find these normalized values. It is the same process you would do if
you were making the LCI data module yourself. We will discuss how to see the assumptions
and boundaries for the data modules later in this chapter.
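A minimal sketch of that normalization step (the annual totals below are invented for illustration, not NREL data; they are simply chosen so the normalized factors resemble those in the data module) divides each annual flow total by annual generation to get per-kWh factors:

```python
# Hypothetical annual totals for one plant -- illustrative values only.
annual_generation_kwh = 3.5e9          # assumed annual net generation
annual_totals = {
    "coal_in_kg": 1.55e9,              # assumed coal consumed per year
    "co2_out_kg": 3.48e9,              # assumed fossil CO2 emitted per year
}

# Normalize: divide each annual total by annual generation in kWh
per_kwh = {flow: total / annual_generation_kwh
           for flow, total in annual_totals.items()}

for flow, value in per_kwh.items():
    print(f"{flow}: {value:.3f} per kWh")
```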
We now consider the abridged data module in more detail. In Figure 5-6, each of the input
flows are a product flow from another process (namely, the product of bituminous coal
mining and the product of train transportation). The unit basis assumption for those inputs
is also given: kg for the coal and ton-kilometers (t*km) for the transportation. A ton-kilometer
is a compound unit (like a kilowatt-hour) that expresses the movement of 1 ton of
material over the distance of 1 kilometer. Both are compound units in common use. Finally, the amount of
input required is presented in scientific notation and can be translated into 0.442 kg of coal
and 0.461 ton-km of train transport. Likewise, the output CO2 emissions to air are estimated
at 0.994 kg. All of these quantities are normalized on a per-kWh generated basis. The
comment column in Figure 5-6 (and which appears in many data modules) gives brief but
important notes about specific inputs and outputs. For example, the input of train
transportation is specified as being a potential transportation route from mine to power
plant, which reminds us that the unit process for generating electricity from coal is already
linked to a requirement of a train from the mine.5
Now that we have seen our first example of a secondary source LCI data module, Figure 5-7
presents a graphical representation of the abridged unit process similar to the generic
diagram of Figure 5-2. The direct inputs, which are product flows from other man-made
processes, are on the left side as inputs from the technosphere. The abridged unit process
has no direct inputs from nature. The direct CO2 emissions are at the top. The output
product, and functional unit basis of the process, of electricity is shown on the right. All
quantitative values are representative of the functional unit basis of the unit process.

Figure 5-7: Unit Process Diagram for abridged electricity generation unit process

Returning to our example LCA problem, we now have our first needed data point, that the
direct CO2 emissions are 0.994 kg / kWh generated. Given that we have only three unit
processes in our simple product system, we can work backwards from this initial point to get
estimated CO2 emissions values from mining and train transport. Again using the NREL
LCI database, Figure 5-8 shows abridged data for the data module bituminous coal, at mine.
5. The unabridged version of the module has several other averaged transport inputs in ton-km, such as truck, barge, etc. Overall, the module gives a "weighted average" transport input to get the coal from the mine to the power plant. Since we are only using the abridged (and unedited) version, we will undercount the upstream CO2 emissions from delivering coal, since we are skipping the weighted effects from those other modes.


The output and functional unit is 1 kg of bituminous coal as it leaves the mine. Two
important inputs are diesel fuel needed to run equipment, and coal. It may seem odd to see
coal listed as an input into a coal mining process, but note it is listed as a resource and as an
elementary flow. As discussed in Chapter 4, elementary flows are flows that have not been
transformed by humans. Coal trapped in the earth for millions of years certainly qualifies as
an elementary flow by that definition! Further, it reminds us that there is an elementary flow
input within our system boundary, not just many product flows. This particular resource is
also specified as being of a certain quality, i.e., with energy content of about 25 MJ per kg.
Finally, we can see from a mass balance perspective that there is some amount of loss in the
process, i.e., that every 1.24 kg of coal in the ground leads to only 1 kg of coal leaving the
mine.
Flow | Category | Type | Unit | Amount | Comment
Inputs:
Coal, bituminous, 24.8 MJ per kg | resource/ground | ElementaryFlow | kg | 1.24 |
Diesel, combusted in industrial boiler | root/flows | ProductFlow | | 8.8e-03 |
Outputs:
*Bituminous coal, at mine* | root/flows | ProductFlow | kg | 1.00 |

Figure 5-8: Abridged LCI data module from US NREL LCI Database for bituminous coal mining.
Output for functional unit italicized. (Source: US LCI Database 2012)

Figure 5-9 shows the abridged NREL LCI data module for rail transport (transport, train, diesel
powered). The output / functional unit of the process is 1 ton-km of rail transportation
service provided. Providing that service requires 0.00648 liters of diesel fuel and emits 0.0189
kg of CO2, both per ton-km.


Flow | Category | Type | Unit | Amount | Comment
Inputs:
Diesel, at refinery | root/flows | ProductFlow | l | 6.48e-03 |
Outputs:
Carbon dioxide, fossil | air/unspecified | ElementaryFlow | kg | 1.89e-02 |
*transport, train, diesel powered* | root/flows | ProductFlow | t*km | 1 |

Figure 5-9: Abridged LCI data module from US NREL LCI Database for rail transportation. Output
for functional unit italicized. (Source: US LCI Database 2012)

To then find the total CO2 emissions across these three processes, we can work backwards
from the initial process. We already know there are 0.994 kg/kWh of CO2 emissions at the
power plant. But we also need to mine the coal and deliver it by train for each final kWh of
electricity. The emissions for those activities are easy to associate, since Figure 5-6 provides
us with the needed connecting units to estimate the emissions per kWh. Namely, that 0.442
kg of coal needs to be mined and 0.461 ton-km of rail transport needs to be used per kWh
of electricity generated. We can then just use those unit bases to estimate the CO2 emissions
from those previous processes. Figure 5-8 does not list direct CO2 emissions from coal
mining, although it does list an input of diesel used in a boiler6. If we want to assume that
we are only considering direct emissions from each process, we can assume the CO2
emissions from coal mining to be zero7, or we could expand our boundary and acquire the
LCI data module for the diesel, combusted in industrial boiler process. Our discussion below
follows the assumption that direct emissions are zero.
Figure 5-9 notes that there are 0.0189 kg of CO2 emissions per ton-km of rail transported.
Equation 5-1 summarizes how to calculate CO2 emissions per kWh for our simplistic
product system. Other than managing the compound units, it is a simple solution: about 1
kg CO2 per kWh. If we were interpreting this result, we would note that the combustion of
coal at the power plant is about 99% of the total emissions.
0.994 kg CO2/kWh + (0.442 kg coal/kWh)(0 kg CO2/kg coal) + (0.461 ton-km/kWh)(0.0189 kg CO2/ton-km)
= 0.994 kg CO2/kWh + 0 + 0.0087 kg CO2/kWh = 1.003 kg CO2/kWh    (5-1)
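The arithmetic in Equation 5-1 is simple enough to script; this sketch multiplies each upstream requirement per kWh by that process's direct CO2 emission factor, using the values from the abridged data modules above:

```python
# Direct requirements per kWh of electricity (from the abridged modules)
coal_per_kwh = 0.442       # kg of coal mined per kWh generated
rail_per_kwh = 0.461       # ton-km of rail transport per kWh generated

# Direct CO2 emission factors for each unit process
co2_plant = 0.994          # kg CO2 per kWh at the power plant
co2_mine = 0.0             # kg CO2 per kg coal (assumed zero, as in the text)
co2_rail = 0.0189          # kg CO2 per ton-km of rail transport

total_co2_per_kwh = (co2_plant
                     + coal_per_kwh * co2_mine
                     + rail_per_kwh * co2_rail)

print(round(total_co2_per_kwh, 3))   # prints 1.003 (kg CO2 per kWh)
```

Chaining more processes works the same way: each upstream flow per functional unit is multiplied by that process's emission factor and summed.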

The estimated CO2 emissions for coal-fired electricity of 1 kg / kWh was obtained relatively
easily, requiring only three steps and queries to a single database (US NREL LCI). As always
6. This particular input of "diesel, combusted in industrial boiler" may not be what you would expect to find in an LCI data module, since it is a description of how an input of diesel is used. Such flows are fairly common, though.
7. Also, the unabridged LCI data modules list emissions of methane to air, which could have been converted to equivalent CO2 emissions. Doing so would only change the result above by about 10%.


one of our first questions should be "is it right?" We can attempt to validate this value by
looking at external references. Whitaker et al. (2012) reviewed 100 LCA studies of coal-fired
electricity generation and found the median value to be 1 kg of CO2 per kWh, thus we
should have reasonable faith that the simple model we built leads to a useful result. Of
course we can add other processes to our system boundary (such as other potential
transportation modes) but we would not appreciably change our simple result of 1 kg/kWh.
Note that, anecdotally, experts often refer to the emissions from coal-fired power plants as being
2 pounds per kWh, which is a one-significant-digit equivalent of our 1 kg/kWh result.
Process-based life cycle models are constructed in this way. For each unit process within the
system boundary, data (primary or secondary) is gathered and flows between unit processes
are modeled. Since you must find data for each process, such methods are often referred to
as "bottom up" studies because you are building them up from nothing, as you might
construct a building on empty land.
Beyond validating LCI results, you should also try to validate the values found in any unit
process you decide to use, even if sourced from a well-known database. That is because
errors can and do exist in these databases. It is easy to accidentally put a decimal in the
wrong place when creating a digital database. As an example, the US NREL LCI database
had an error in the CO2 emissions of its air transportation process, of 53 kg per 1000 ton-km
(0.053 kg per ton-km) for several years before it was fixed. This error was brought to their
attention because observant users noted that this value was less than the per-ton-km
emissions for truck transportation, which went against common sense. Major releases of
popular databases are also imperfect. It is common to have errors found and fixed, but this
may happen months after licenses have been purchased, or worse, after studies have been
completed. These are additional reasons why, even when sources are of high quality, you need to
validate your data sources.

Details for Other Databases


The discussion above was focused on the US NREL LCI Database, which contains only
process data for US-based production, yet there are other considerations both for data
access and metadata for the other databases. As noted in Figure 5-4, the ecoinvent database
is far more geographically diverse. While generally focused on Europe, data can be found in
ecoinvent for other regions of the world as well. This fact creates a new challenge in
interpreting available process data modules, namely, determining the country of production
basis assumption for the data. While examining the metadata can be useful, ecoinvent and
other databases typically summarize the country used within the process naming convention.
For example, a process you might find within ecoinvent might be called electricity, hard coal, at
power plant, DE, where the first part is the process name formatted similar to the NREL
database, and at the end is an abbreviated term for the country or region to which that


process is representative. Figure 5-10 summarizes some of the popular abbreviations used
for country basis within ecoinvent.
Country or Region | Abbreviation
Norway | NO
Japan | JP
Australia | AU
Canada | CA
India | IN
Global | GLO
China | CN
Europe | RER
Germany | DE
Africa | RAF
United States | US
Asia | RAS
Netherlands | NL
Russian Federation | RU
Hong Kong | HK
France | FR
Latin America and the Caribbean | RLA
North America | RNA
United Kingdom | GB
Middle East | RME

Figure 5-10: Summary of abbreviations for countries and regions in ecoinvent
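A hedged sketch of working with that naming convention (the helper below is our own, and it assumes the location code is always the final comma-separated token, as in the example name above):

```python
# Subset of the ecoinvent location abbreviations from Figure 5-10
LOCATIONS = {
    "DE": "Germany", "US": "United States", "GLO": "Global",
    "RER": "Europe", "CN": "China", "JP": "Japan",
}

def split_location(process_name: str):
    """Split an ecoinvent-style name into (base name, location code).

    Assumes the location code, if present, is the final comma-separated
    token, e.g. 'electricity, hard coal, at power plant, DE'."""
    base, _, suffix = process_name.rpartition(",")
    code = suffix.strip()
    if code in LOCATIONS:
        return base.strip(), code
    return process_name, None   # no recognized location suffix

name, code = split_location("electricity, hard coal, at power plant, DE")
print(name, "->", LOCATIONS[code])   # prints the base name and "Germany"
```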

Ecoinvent has substantially more available metadata for its data modules, including primary
sources, representative years, and names of individuals who audited the datasets. While
ecoinvent data are not free, the metadata is freely accessible via the database website. Thus,
you could do a substantial amount of background work verifying that ecoinvent has the data
you want before deciding to purchase a license.
A particular feature of ecoinvent data is its availability at either the unit process or system
process level. Viewing and using ecoinvent system processes is like using already rolled-up
information (and computations would be faster), while using unit processes will be more
computationally intensive. This will be discussed more in Chapter 9.

LCI Data Module Metadata


Our example using actual LCI data modules from the US NREL LCI database jumped
straight into extracting and looking at the quantitative data. However, all LCI data modules
provide some level of metadata, which is information regarding how the data was collected,
how the modules were constructed, etc. Metadata is also referred to as "data about data".
The metadata that we care about for our unit processes are elements such as the year the
data was collected, where it was collected, whether the values are single measurements or
averages, and whether it was peer reviewed. To understand metadata more, we can look at
the metadata available for the processes we used above. The US NREL LCI Database has
three different metadata categories as well as the Exchanges information shown above.


Figure 5-11 shows metadata from the Activity metadata portion of the US NREL LCI
database for the Electricity, bituminous coal, at power plant process used above. This metadata
notes that the process falls into the Utilities subcategory (used for browsing on the website)
and that it has not yet been fully validated. It applies to the US, and thus it is most
appropriate for use in studies looking to estimate impacts of coal-fired electricity generation
done within the United States. Note that this does not mean that you can only use it for that
geographical region. A process like coal-fired generation is quite similar around the world,
although factors such as pollution controls may differ greatly by region. However, since
capture of carbon is basically non-existent, if we wanted to use this process to estimate CO2
emissions from coal-fired generation in other regions it might still be quite useful.
The metadata field for "infrastructure process" notes whether the process includes estimated
infrastructure effects. For example, one could imagine two parallel unit processes for
electricity generation, where one includes estimated flows from needing to build the power
plant and one does not (such as the one referenced above). In general, infrastructure
processes are fairly rare, and most LCA study scopes exclude consideration of infrastructure
for simplicity.
Name | Electricity, bituminous coal, at power plant
Category | Utilities - Fossil Fuel Electric Power Generation
Description | Important note: although most of the data in the US LCI database has undergone some sort of review, the database as a whole has not yet undergone a formal validation process. Please email comments to lci@nrel.gov.
Location | US
Geography Comment | United States
Infrastructure Process | False
Quantitative Reference | Electricity, bituminous coal, at power plant

Figure 5-11: Activity metadata for Electricity, bituminous coal, at power plant process

Figure 5-12 shows the Modeling metadata for the coal-fired generation unit process. There is
no metadata provided for the first nine fields of this category, but there are ten
references provided to show the source data used to make the unit process. While a specific
"data year" is not dictated by the metadata, by looking at the underlying data sources, the
source data came from the period 1998-2003. Thus, the unit process data would be most
useful for analyses done with other data from that time period. If we wanted to use this
process data for a more recent year, we would either have to look for an LCI data module


that was newer, or verify that the technologies have not changed much since the 1998-2003
period.
LCI Method |
Modelling constants |
Data completeness |
Data selection |
Data treatment |
Sampling procedure |
Data collection period |
Reviewer |
Other evaluation |
Sources |
  U.S. EPA 1998 Emis. Factor AP-42 Section 1.1, Bituminus and Subbituminus Utility Combustion
  U.S. Energy Information Administration 2000 Electric Power Annual 2000
  Energy Information Administration 2000 Cost and Quality of Fuels for Electric Utility Plants 2000
  Energy Information Administration 2000 Electric Power Annual 2000
  U.S. EPA 1998 Study of Haz Air Pol Emis from Elec Utility Steam Gen Units V1 EPA-453/R-98-004a
  U.S. EPA 1999 EPA 530-R-99-010
  unspecified 2002 Code of Federal Regulations. Title 40, Part 423
  Energy Information Administration 9999 Annual Steam-Electric Plant Operation and Design Data
  Franklin Associates 2003 Data Details for Bituminous Utility Combustion

Figure 5-12: Modeling metadata for Electricity, bituminous coal, at power plant process

Finally, Figure 5-13 shows the Administrative metadata for the Electricity, bituminous coal, at
power plant process. There are no explicitly-defined intended applications (or suggested
restrictions on such applications), suggesting that it is broadly useful in studies. The data are


not copyrighted, are publicly available, and were generated by Franklin Associates, a
subsidiary of ERG, one of the most respected life cycle consulting businesses in the US. The
"Data Generator" is a significant piece of information. You may opt to use or not use a data
source based on who created it. A reputable firm has a high level of credibility. A listed
individual with no obvious affiliation or reputation might be less credible. Finally, the
metadata notes that it was created and last updated in October 2011, meaning that perhaps it
was last checked for errors on this date, not that the data is confirmed to still be valid for the
technology as of this date.
Intended Applications |
Copyright | false
Restrictions | All information can be accessed by everybody.
Data Owner |
Data Generator | Franklin Associates
Data Documentor | Franklin Associates
Project |
Version |
Created | 2011-10-24
Last Update | 2011-10-24

Figure 5-13: Administrative metadata for Electricity, bituminous coal, at power plant process

Our metadata examples have focused on the publicly available US NREL LCI Database, but
other databases like ELCD and ecoinvent have similar metadata formats. These other
databases typically have more substantive detail, in terms of additional fields and more
consistent entries in these fields. Since these other data sources are not public, we have not
used examples here.
You should browse through the available metadata for any of the databases that you have
access to, so that you can better appreciate the information that may exist within various
metadata records. Remember that the reason for better appreciating the value of the
metadata is to help you decide which secondary data sources to use, and how
compatible they are with your intended goal and scope.


Referencing Secondary Data


When you use secondary data as part of your study it must be appropriately referenced, as
with any other source. Referencing data sources was first mentioned in Chapter 2, but here
we discuss several important additions for referencing data from LCA databases. As an
example, the US NREL LCI database explicitly suggests the following referencing style for
use of its data modules:
When referencing the USLCI Database, please use the following format: U.S. Life Cycle
Inventory Database. (2012). National Renewable Energy Laboratory, 2012. Accessed
November 19, 2012: https://www.lcacommons.gov/nrel/search
However, this is the minimum referencing you should provide for process data. First of all,
you cannot simply reference the database. You need to ensure that the specific unit process
from which you have used data is clear to the reader, for example if they would like to
validate your work. That means you need to explicitly reference the name of the process
(either obviously in the text or in the reference section). In the US NREL database and
other sources, there may be hundreds of LCI data modules for electricity. Thus, the danger
is that in the report you loosely reference data for coal-fired electricity generation as being
from "the NREL database", but do not provide enough detail for the reader to know which
electricity process was used. Unfortunately, this is a common occurrence in LCA reports.
This situation can be avoided by explicitly noting the name of the process used in the
reference, such as:
U.S. Life Cycle Inventory Database. Electricity, bituminous coal, at power plant unit process
(2012). National Renewable Energy Laboratory, 2012. Accessed Nov. 19, 2012:
https://www.lcacommons.gov/nrel/search
A generic reference to the database, as given at the top of this section, may be acceptable if
the report separately lists all of the specific processes used in the study, such as in an
inventory data source table listing all of the processes used.
You will likely use multiple unit processes from the same database. You can either create
additional references like the one above for each process, or use a combined reference that
lists all processes as part of the reference, such as:
U.S. Life Cycle Inventory Database. Electricity, bituminous coal, at power plant; bituminous coal,
at mine; transport, train, diesel powered unit processes (2012). National Renewable Energy
Laboratory, 2012. Accessed Nov. 19, 2012: https://www.lcacommons.gov/nrel/search
The greater the number of similar processes, the greater the need to specify which specific
data module you used in your analysis. This becomes especially important if you are using
LCI data modules from several databases.


A final note about referencing is that LCA databases are generally not primary sources;
they are secondary sources. Ideally, sources would credit the original author, not the
database owner who is just providing access. If the LCI data module is taken wholesale
from another source (i.e., if a single source were listed in the metadata), it may make sense to
also reference the primary source, or to add the primary source to the database reference. In
this case the reference might look like one of the following:
RPPG of the American Chemistry Council, 2011. Life Cycle Inventory of Plastic Fabrication
Processes: Injection Molding and Thermoforming.
http://plastics.americanchemistry.com/Education-Resources/Publications/LCI-of-Plastic-Fabrication-Processes-Injection-Molding-and-Thermoforming.pdf. via U.S. Life
Cycle Inventory Database. Injection molding, rigid polypropylene part, at plant unit process
(2012). National Renewable Energy Laboratory, 2012. Accessed November 19, 2012:
https://www.lcacommons.gov/nrel/search
U.S. Life Cycle Inventory Database. Injection molding, rigid polypropylene part, at plant unit
process (2012). National Renewable Energy Laboratory, 2012. Accessed November 19,
2012: https://www.lcacommons.gov/nrel/search (Primary source: RPPG of the
American Chemistry Council, 2011. Life Cycle Inventory of Plastic Fabrication
Processes: Injection Molding and Thermoforming.
http://plastics.americanchemistry.com/Education-Resources/Publications/LCI-of-Plastic-Fabrication-Processes-Injection-Molding-and-Thermoforming.pdf)
As noted in Chapter 2, ideally you would identify multiple data sources (i.e., multiple LCI
data modules) for a given task. This is especially useful when using secondary data because
you are not collecting data from your own controlled processes. Since the data is secondary,
it is likely that the assumptions or boundaries differ slightly from what you would
have used if collecting primary data. By using multiple sources, and finding averages and/or
standard deviations, you could build a more robust quantitative model of the LCI results.
We will discuss such uncertainty analysis for inventories in Chapter 10.
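The averaging idea above can be sketched in a few lines of code. This is a hypothetical illustration: the three source names and CO2 emission factors below are invented for the example, not taken from any actual database.

```python
# Sketch: combining several secondary LCI modules for the same task
# into a mean, standard deviation, and range. All values are hypothetical.
from statistics import mean, stdev

# Hypothetical CO2 emission factors (kg CO2 / kWh) for coal-fired
# electricity, drawn from three different secondary sources.
co2_per_kwh = {
    "Database A, bituminous coal, at power plant": 0.95,
    "Database B, hard coal power plant": 1.02,
    "Database C, coal electricity, national average": 0.98,
}

values = list(co2_per_kwh.values())
print(f"mean  = {mean(values):.3f} kg CO2/kWh")
print(f"stdev = {stdev(values):.3f} kg CO2/kWh")
print(f"range = {min(values):.2f}-{max(values):.2f} kg CO2/kWh")
```

Reporting the mean alongside the range or standard deviation keeps the reader aware of how much the secondary sources disagree.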

Additional Considerations about Secondary Data and Metadata


Given the types and classes of data we are likely to find in life cycle studies, we introduce in
this subsection a few more considerations to ensure you are finding and using appropriate
types of data to match the needs of your study. These considerations are in support of the
data quality requirements.


Temporal Issues
In creating temporal data quality requirements, you will set a target year (or years) for data
used in your study. For example, you might have a DQR of "2005 data" or "data from
2005-2007" or "data within 5 years of today". After setting target year(s) you then must do
your best to find and use data that most closely matches the target. It is likely that you will
not be able to match all data with the target year(s). When setting and evaluating temporal
DQRs, the following issues need to be understood.
You may need to do some additional work to guarantee you know the basis year of the data
you find, but this is time well spent to ensure compatibility of the models you will build.
You will need to distinguish between the year of data collection and year of publication. In
our CBECS example in Chapter 2, the data were collected in the year 2003 but the study was
not published by DOE until December 2006 (or, almost 2007). It is easy to accidentally
consider the data as being for 2006 because the publication year is shown throughout the
reports. But the data were representative of the year 2003. If your temporal DQR was set at
"2005", you might still be able to justify using the 2003 CBECS data, but would need to
assess whether the electricity intensity of buildings likely changed significantly between 2003
and 2005. The same types of issues arise when using sources such as US EPA's AP-42 data,
which are compilations of (generally old) previously estimated emissions factors. Other
aspects of your DQRs may further help decide the appropriateness of data newer or older
than your target year.
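As a sketch of evaluating a temporal DQR, the check below tests whether a source's data-collection window (not its publication year) falls acceptably close to a target year. The CBECS years come from the text; the other candidate sources and the 2-year tolerance are assumptions for illustration.

```python
# Sketch: screening candidate data sources against a temporal DQR.
# Each candidate records its years of data collection, not publication.
candidates = {
    "CBECS building energy data": (2003, 2003),  # collected 2003, published 2006
    "Hypothetical source B": (2005, 2007),
    "Hypothetical source C": (1996, 1998),
}

def meets_dqr(data_years, target=2005, tolerance=2):
    """True if the collection window overlaps target +/- tolerance years."""
    start, end = data_years
    return (target - tolerance) <= end and start <= (target + tolerance)

for name, years in candidates.items():
    verdict = "meets DQR" if meets_dqr(years) else "needs justification"
    print(f"{name}: {verdict}")
```

Sources that fail the screen are not automatically unusable; as discussed above, you would need to justify whether the underlying process changed significantly between the data years and your target.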
The same is true of dates given in the metadata of LCI data modules. You don't care about
when you accessed the database, or when it was published in the database. You care about
the primary source's years of analysis. Figure 5-12 showed metadata on the coal-fired
electricity generation process where the underlying data was from 1998-2003, and which was
put in the US LCI database in 2011. An appropriate "timestamp" for this process would be
1998-2003.
While on the topic of temporal issues, we revisit the point about age of data in databases.
The US LCI database project started in the mid-2000s. Looking at the search function in
that database, you can find a distribution of the "basis year" of all of the posted data
modules. This date is not visible within the metadata, but is available for
downloaded data modules and is summarized on the web server. Figure 5-14 shows a graph of
the distribution of the years. In short, there is a substantial amount of relatively old data,
and a substantial amount of data where this basis year is not recorded (value given as '9999').
Half of the 200 data modules updated in 2010 are from an update to the freight
transportation datasets. These could be key considerations when considering the suitability
of data in a particular database.
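A tally like the one shown in Figure 5-14 can be reproduced from the basis years of downloaded modules. The list of years below is illustrative; 9999 follows the database's convention for an unrecorded basis year.

```python
# Sketch: frequency distribution of basis years across downloaded LCI
# data modules. The year list is illustrative; 9999 means "not recorded".
from collections import Counter

basis_years = [2003, 2003, 2010, 2010, 2010, 9999, 1998, 9999, 2007]

counts = Counter(basis_years)
for year in sorted(counts):
    label = "not recorded" if year == 9999 else str(year)
    print(f"{label:>12}: {counts[year]}")
```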


Figure 5-14: Frequency Distribution of Data Years in US NREL LCI Database (as of August 15, 2013)

Geographical Issues
You must try to ensure that you are using data with the right geographical scope to fit your
needs. If you are doing a study where you want to consider the emissions associated with
producing an amount of electricity, then you will find many potential data sources to use.
The EIA has data that can give you the average emissions factors for electricity generation
across the US. E-GRID (a DOE-EPA partnership) can give you emissions factors at fairly
local levels, reflecting the types of power generation used within a given region. The
question is the context of your study. Are you doing a study that inevitably deals with
national average electricity? Then the EIA data is likely suitable. Or are you doing a study
that needs to know the impact of electricity from a particular factory's production? In that
case you likely want a fairly local data source, e.g., from E-GRID. An alternative is to
leverage the idea of ranges, presented in Chapter 2, to represent the whole realm of possible
values for electricity generation, including various local or regional averages all the way up to
the national average.
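The range idea can be sketched as follows. The emission factors below are placeholders, not actual EIA or eGRID values; the point is only that carrying a low-high range through the calculation bounds the result instead of committing to one geographic scope.

```python
# Sketch: representing the electricity emission factor as a range spanning
# local through national scopes. All factor values are illustrative.
factors_kg_co2_per_kwh = {
    "eGRID subregion (hypothetical)": 0.85,
    "State average (hypothetical)": 0.62,
    "US national average (hypothetical)": 0.55,
}

low = min(factors_kg_co2_per_kwh.values())
high = max(factors_kg_co2_per_kwh.values())
kwh = 1000  # electricity demand of the study's functional unit (assumed)

print(f"CO2 estimate: {kwh * low:.0f}-{kwh * high:.0f} kg "
      f"({kwh} kWh at {low}-{high} kg CO2/kWh)")
```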


Uncertainty and Variability


Sadly, in the field of LCA there are many practitioners who actively or passively ignore the
effects of uncertainty or variability in their studies. They treat all model inputs as single
values and generate only a single result. Any sense of uncertainty or variability is lost in
their model, and thus is typically lost on the reader of the study as well.
How can we support a big decision (e.g., paper vs. plastic?) if there is much uncertainty in
the data but we have completely ignored it? We are likely to end up supporting poor
decisions if we do so. We devote Chapter 11 to methods of overcoming and structuring
uncertainty in LCA models.
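As a minimal contrast to a single-value model, the sketch below treats the emission factor as a distribution and propagates it by Monte Carlo sampling, a method treated properly in Chapter 11. The mean, standard deviation, and electricity demand are assumed values for illustration only.

```python
# Sketch: propagating input uncertainty rather than using a single value.
# A point model reports one number; sampling a distribution for the
# emission factor yields a distribution of results. Parameters assumed.
import random
from statistics import mean, stdev

random.seed(0)  # reproducible demonstration

kwh = 500  # functional unit electricity demand (assumed)
samples = [kwh * random.normalvariate(mu=0.95, sigma=0.08)  # kg CO2/kWh
           for _ in range(10_000)]

print(f"point estimate : {kwh * 0.95:.0f} kg CO2")
print(f"sampled result : {mean(samples):.0f} +/- {stdev(samples):.0f} kg CO2")
```

Even this crude sketch shows the reader something a single number cannot: how wide the plausible band around the result is.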

Chapter Summary
Typically, the most time-consuming aspect of an LCA (or LCI) study relates to the data
collection and management phase. While the LCA Standard encourages practitioners to
collect primary data for the product systems being studied, typically secondary data is used
from prior published studies and databases. Using secondary data requires being
knowledgeable and cognizant of issues relating to the sources of data presented and also
requires accurate referencing. Data quality requirements help to manage expectations of the
study team as well as external audiences pertaining to the goals of your data management
efforts. Robust LCI data management methods lead to excellent and well-received studies.

References for this Chapter


BEES LCA Tool, website, http://ws680.nist.gov/Bees/Default.aspx, last accessed August
12, 2013.
ecoinvent website, www.ecoinvent.ch, last accessed August 12, 2013.
ELCD LCA Database, website, http://lca.jrc.ec.europa.eu/lcainfohub/, last accessed
August 12, 2013.
Environmental Protection Agency. 1993. Life Cycle Assessment: Inventory Guidelines and
Principles. EPA/600/R-92/245. Office of Research and Development. Cincinnati, Ohio,
USA.
Gabi Software, website, http://www.gabi-software.com/, last accessed August 12, 2013.


LCA-DATA, UNEP, website, http://lca-data.org:8080/lcasearch, last accessed August 12, 2013.
Quantis, "Environmental Life Cycle Assessment of Drinking Water Alternatives and
Consumer Beverage Consumption in North America", LCA Study completed for Nestle
Waters North America, 2010,
http://www.beveragelcafootprint.com/wp-content/uploads/2010/PDF/Report_NWNA_Final_2010Feb04.pdf,
last accessed September 9, 2013.
The Agribusiness Group, "Life Cycle Assessment: New Zealand Merino Industry Merino
Wool Total Energy Use and Carbon Dioxide Emissions", 2006,
http://www.agrilink.co.nz/Portals/Agrilink/Files/LCA_NZ_Merino_Wool.pdf,
last accessed September 1, 2013.
US NREL LCI Database, website, http://www.nrel.gov/lci/, last accessed August 12, 2013.
U.S. Life Cycle Inventory Database. Electricity, bituminous coal, at power plant, bituminous coal, at
mine, and transport, train, diesel powered unit processes (2012). National Renewable Energy
Laboratory, 2012. Accessed August 15, 2013: https://www.lcacommons.gov/nrel/search
USDA LCA Digital Commons, website, http://www.lcacommons.gov, last accessed August
12, 2013.
Whitaker, Michael, Heath, Garvin A., O'Donoughue, Patrick, and Vorum, Martin, "Life
Cycle Greenhouse Gas Emissions of Coal-Fired Electricity Generation: Systematic Review
and Harmonization", Journal of Industrial Ecology, 2012.
DOI: 10.1111/j.1530-9290.2012.00465.x

Questions for Chapter 5


1. Using the US NREL LCI Database (from the USDA Digital Commons) or another
LCI database, search or browse amongst the available categories. For each of the
following broadly defined processes in the list below, discuss how many different
LCI data modules are available and qualitatively discuss what different assumptions
have been used to generate the data modules.
a. Refining of petroleum
b. Generating electricity from fossil fuel
c. Truck transportation


2. If you had data quality requirements stating that you wanted data that was national
(US) in scope, and from within 5 years of today, how many of the LCI data modules
from Question 1 would be available? Which others might still be relevant? Justify
your answer.
3. The data identified in part 1c above would be secondary data if you were to use it in
a study. If you instead wanted primary data for a study on trucking, discuss what
methods you might use in order to get the data.
4. Using an LCI database available to you, search for one LCI data module in each of
the following broad categories - energy, agriculture, and transportation. For each of
the three, do the following:
a. List the name of the process.
b. Identify the functional unit.
c. Draw a unit process diagram.
d. Try to do a brief validation of the data reported.
e. Comment briefly on an example LCA study that this process might be
appropriate for, and one where it would not be appropriate.
f. Show how to appropriately reference the LCI data module in a study.
5. Redo the Figure 5-5 example but include the diesel, combusted in industrial boiler process
within the system boundary. What is your revised estimate of CO2 emissions per
kWh? How different is your estimate compared to Equation 5-1?
6. Redo the Figure 5-5 example but include within the system boundary refining of the
diesel used in the coal mining and rail transportation processes (and assume you have
LCI flow data that there are 2.5 E-04 kg CO2 emissions per liter of diesel fuel). How
is your revised estimate of CO2 emissions per kWh compared to Equation 5-1?


Advanced Material for Chapter 5


The advanced material in this chapter will demonstrate how to find and access LCI data
modules from various popular databases and software tools, and how to use the data to
build simple models like the main model presented in the chapter related to coal-fired
electricity.
Not all databases and software tools are discussed; however, access methods are generally
very similar across tools. For consistency, we will demonstrate how to find the same process
data as used in the chapter so that you can learn about the different options and selections
needed to find equivalent data and metadata across tools. Specifically, we will demonstrate
how to find data from the US LCI database by using the LCA Digital Commons Website,
SimaPro (a commercial LCA tool) and openLCA (a free LCA tool).
The databases and tools use different terminology, categories, etc., to organize LCI data, but
can all lead to the same data. Seeing how each of the tools categorizes and refers to the data
is an important concept to understand.

Section 1 - Accessing Data via the US LCA Digital Commons


The LCA Digital Commons is a free, US government-sponsored and hosted web-based data
resource. Given that all of its data are publicly available, it is a popular choice for
practitioners. Thus, it is also a great resource for learning about what LCI data looks like,
how to access it, and how to build models.
The main purpose of the Digital Commons is to act as a resource for US Department of
Agriculture (USDA) agricultural data and, as a result, accessing the home page (at
https://www.lcacommons.gov/discovery) will filter access to those datasets. However, the
US LCI database previously hosted by NREL (at http://www.nrel.gov/lci/), and mentioned
extensively in Chapter 5, is also hosted via the Digital Commons website (at
https://www.lcacommons.gov/nrel/search). Given its comprehensiveness, most of the
discussion in this book is related to use of the NREL data. The examples provided below are
for accessing the NREL data source, which has slightly different metadata and contents than
the USDA data but a similar method for searching and viewing.
The LCI data modules on the Digital Commons website can be accessed via searching or
browsing. Brief overviews are provided for both options, followed by how to view and
download selected modules. Before following the tutorial below, you should consider
registering for an account on the Digital Commons website (you will need separate accounts
for the USDA and NREL data). While an account is not required to view all of the data, it is
required if you wish to download the data. You can copy and paste the data from a web
browser instead of downloading but this sometimes leads to formatting errors.


Browsing for LCI Data Modules on the Digital Commons (NREL)


Figure 5-15 shows the NREL Digital Commons home page, where the left hand side shows
how the data modules are organized, including dataset type (elementary flows or unit
processes), high-level categories (like transportation and utilities), and year of data.8

Figure 5-15: Excerpt of LCA Digital Commons Website Home Page

Clicking on the + icon next to the categories generally reveals one or more additional subcategories. For example, under the Utilities category there are fossil-fired and other
generation types. Clicking on any of the dataset type, category/subcategory or year
checkboxes will filter the overall data available. The "order by" box will sort the resulting
modules. Filtering by (checking) Unit processes and the Fossil fuel electric power generation category
under Utilities, and ordering by description will display a subset of LCI data modules, as
shown in Figure 5-16. A resulting process module can be selected (see below for how to do
this and download the data).

Figure 5-16: Abridged View of LCA Digital Commons Browsing Example Results
8 The examples of the NREL US LCI Database in this section are as of July 2014, and may change in the future.


Searching for an LCI data module via keyword


The homepage has a search feature, and entering a keyword such as electricity and pressing the
Go button on the right hand side, as shown in Figure 5-17, will return a list of data modules
within the NREL LCI database that have that word in the title or category, as shown in
Figure 5-18.

Figure 5-17: Keyword search entry on homepage of NREL engine of LCA Digital Commons Website

Figure 5-18: Abridged Results of electricity keyword search

Figure 5-18 indicates that the search engine returns more than 100 LCI data modules
(records) that may be relevant to "electricity". Some were returned because electricity is in
the name of the process and others because they are in the Electric power distribution data
category. When searching, you can order results by relevance, description, or year. Once a


set of search results is obtained, results can be narrowed by filtering via the options on the
left side of the screen. For example, you could choose a subset of years to be included in the
search results, which can help ensure you use fairly recent instead of old data (as discussed
along with Figure 5-14). You can also filter based on the LCI data categories available, in
this case by clicking on the + icon next to the high-level category for Utilities, which brings
up all of the subcategories under utilities. Figure 5-19 shows the result of a keyword search
for 'electricity', ordered by relevance, and filtered by the Utilities subcategory of Fossil fuel
electric power generation and by data for year 2003. The fifth search result listed is the same one
mentioned in the chapter that forms the basis of the process flow diagram example.

Figure 5-19: Abridged Results of electricity keyword search, ordered and filtered

Selecting and viewing an LCI data module


When you have searched or browsed for a module and selected by clicking on it, the module
detail summary is displayed, as in Figure 5-20.


Figure 5-20: Details for Electricity, bituminous coal process on LCA digital commons

The default result is a view of the Activity tab, which was shown in Figure 5-11. The
information available under the Modeling and Administrative tabs was presented in Figure 5-12
and Figure 5-13. Finally, an abridged view of the information available on the Exchanges tab
was also shown in Figure 5-6. Not previously mentioned is that the module can be
downloaded by first clicking on the shopping cart icon in the top right (adjacent to the
"Next item" tag). This adds it to your download cart. Once you have identified all of the
data you are interested in, you can view your current cart (menu option shown in Figure
5-21) and request them all to be downloaded (Figure 5-22).

Figure 5-21: Selection of Current Cart Download Option on LCA Digital Commons


Figure 5-22: Cart Download Screen on LCA Digital Commons

After clicking download, you will be sent a link via the e-mail in your account registration.
As noted, the format will be an Ecospold XML file. For novices, viewing XML files can be
cumbersome, especially if just trying to look at flow information. While less convenient, the
download menu (All LCI datasets submenu) will allow you to receive a link to a ZIP file
archive containing all of the NREL modules in Microsoft Excel spreadsheet format (or you
can receive all of the modules as Ecospold XML files). You can also download a list of all
of the flows and processes used across the entire set of about 600 modules.
A spreadsheet of all flows and unit processes in the US LCI database (and their
categories) is on the www.lcatextbook.com website in the Chapter 5 folder.
When uncompressed the Electricity, bituminous coal, at power plant module file has four
worksheets, providing the same information as seen in the tabs of the Digital
Commons/NREL website above. The benefit of the spreadsheet file, though, is the ability
to copy and paste the values into a model you may be building. We will discuss building
spreadsheet models with such data in Section 4 of this advanced material.
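As a preview of such a model, the sketch below chains three unit processes (generation, coal mining, and rail transport) in the same way as the chapter's coal-fired electricity example. Every flow value here is an illustrative placeholder, not the actual US LCI module data.

```python
# Sketch: chaining unit-process flows copied from downloaded spreadsheets
# into a simple coal-fired electricity model. All values are placeholders.

# Per-functional-unit flows copied from (hypothetical) module sheets:
coal_per_kwh = 0.5          # kg bituminous coal input per kWh generated
rail_tkm_per_kg_coal = 0.8  # tonne-km of rail transport per kg coal

co2_power_plant = 0.95      # kg CO2 per kWh, at power plant
co2_coal_mine = 0.02        # kg CO2 per kg coal, at mine
co2_rail = 0.03             # kg CO2 per tonne-km, diesel-powered train

kwh = 1.0  # functional unit
total_co2 = (kwh * co2_power_plant
             + kwh * coal_per_kwh * co2_coal_mine
             + kwh * coal_per_kwh * rail_tkm_per_kg_coal * co2_rail)
print(f"{total_co2:.3f} kg CO2 per kWh")  # generation + mining + transport
```

The same scaling logic (each upstream flow multiplied by the quantity demanded by the downstream process) is what the spreadsheet models in Section 4 implement.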

Section 2 - Accessing LCI Data Modules in SimaPro


As mentioned in the chapter, SimaPro is a popular commercial software program specifically
aimed at building quantitative LCA models. Its value lies both in these model-building
support activities as well as in being able to access various datasets from within the program.
Commercial installations of SimaPro cost thousands of dollars, but users may choose
commercial databases (e.g., ecoinvent) to include in the purchase price. Regardless of which
databases are chosen, SimaPro has the ability to use various other free datasets (e.g., US
NREL, ELCD, etc.). This tutorial assumes that such databases have already been installed
and will demonstrate how to find the same US NREL-based LCI data as in Section 1.


This tutorial also does not describe any of the initial steps needed to purchase a license for
or install SimaPro on your Windows computer or server. It will only briefly mention the
login and database selection steps, which are otherwise well covered in the SimaPro guides
provided with the software.
Note that SimaPro refers to the overall modeling environment of data available as a
"database" and individual LCI data sources (e.g., ecoinvent) as "libraries". After starting
SimaPro, selecting the database (typically called "Professional"), and opening or creating a
new project of your choice, you will be presented with the screen in Figure 5-23. On the left
side of the screen are various options used in creating an LCA in the tool. By default the
"processes" view is selected, showing the names and hierarchy of all processes in the
currently selected libraries of the database. This list shows thousands of processes (and
many of those will be from the ecoinvent database given its large size).

Figure 5-23: Default View of Processes in Libraries When Starting SimaPro

You can narrow the processes displayed by clicking on "Libraries" on the left hand side
menu, which will display Figure 5-24. Here you can select a subset of the available libraries
for use in browsing (or searching) for process data. You can choose "Deselect all" and then
to follow along with this tutorial, click just the "US LCI" database library in order to access
only the US NREL LCI data.


Figure 5-24: List of Various Available Libraries in SimaPro

If you then click the "Processes" option on the left hand side, you return to the original
screen but now SimaPro filters and shows only processes from the selected libraries, as in
Figure 5-25. Many of the previously displayed processes are no longer displayed.

Figure 5-25: View of Processes and Data Hierarchy for US-LCI Library in SimaPro


Now that you have prepared SimaPro to look for the processes in a specific database library,
you can browse or search for data.
Browsing for LCI Data Modules in SimaPro
Looking more closely at Figure 5-25, the middle pane of the window shows the categorized
hierarchy of data modules (similar to the expandable hierarchy list in the Digital Commons
tool). However, these are not the same categories used on the NREL LCA Digital
Commons website. Instead, they are the standard categories used in SimaPro for processes
in any library. Clicking on the + icon next to any of the categories will expand it and show
its subcategories. To find the Electricity, bituminous coal process, expand the Energy category
then expand Electricity by fuel, then expand coal, resulting in a screen like Figure 5-26.
Several of the other processes burning coal to make electricity and mentioned in the chapter
would also be visible.

Figure 5-26: Processes Shown by Expanding Hierarchy of Coal-Sourced Electricity in SimaPro

The bottom pane shows some of the metadata detail for the selected process. By browsing
throughout the categories (and collapsing or expanding as needed) and reading the metadata
you can find a suitable process for your model. The tutorial will demonstrate how to view or
download such data after briefly describing how to search for the same process.


Searching for a process in SimaPro


Once libraries have been specified as noted above, clicking on the magnifying glass icon in
the toolbar brings up the search interface as shown in Figure 5-27. You enter your search
term in the top box, and then choose from several search options. If you are just looking
for process data (as in this tutorial) then you would want to restrict your choice of where to
look for the data to only libraries you have currently chosen (i.e., via the interface in Figure
5-24) rather than all libraries. This will also make your search return results more quickly.
Note the default search only looks in the names of processes, not in the metadata (the "all
fields" option changes this behavior).

Figure 5-27: Search Interface in SimaPro

Figure 5-28 shows the result of a narrowed search on the word "electricity" in the name of
processes only in "Current project and libraries" and sorted by the results column "Name".
Since we have already selected only the US LCI database in libraries, the results will not
include those from ecoinvent, etc. One of the results is the same Electricity, bituminous coal, at
power plant process previously discussed.


Figure 5-28: Results of Modified Search for Electricity in SimaPro

By clicking "Go to" in the upper right corner of the search results box, SimaPro "goes to"
the same place in the drill-down hierarchy as shown in Figure 5-26.
Viewing process data in SimaPro
To view process data, choose a process by clicking on it (e.g., as in Figure 5-26) and then
click the View button on the right hand side. This returns the process data and metadata
overview shown in Figure 5-29. Similar to the Digital Commons website, the default screen
shows high-level summary information for the process. Full information is found in the
documentation and system description tabs.

Figure 5-29: Process Data and Metadata Overview in SimaPro


Clicking on the input-output tab displays the flow data in Figure 5-30, which for this process
is now quite familiar. If you need to download this data, you can do so by choosing
"Export" in the File menu, and choosing to export as a Microsoft Excel file.

Figure 5-30: View of Process Flow Data (Inputs and Outputs) in SimaPro

Section 3 - Accessing LCI Data Modules in openLCA


openLCA is a free LCA modeling environment (http://www.openlca.org/), available
for Windows, Mac, and Linux operating systems. While installation and
configuration can be quite complicated (and is not detailed here), various datasets are
available. The tutorial assumes you have access to a working openLCA installation with the
US LCI database, and discusses how to find the same US NREL-based LCI data as in
Section 1.
After launching openLCA and connecting to your data source you should see a list of all of
your databases, as shown in Figure 5-31. If you do not see the search and navigation tabs,
you may add them via the "Window menu -> Show views" option. If you
have installed the US LCI database, it should be one of the options available.


Figure 5-31: List of Data Connections in openLCA

Browsing for process data in openLCA


Clicking on the triangle to the left of the folder allows you to open it and see the standard
hierarchy of information for all data sources in openLCA, like in Figure 5-32. This is where
you could see the process data, types of flows, and units.

Figure 5-32: Hierarchical Organization of Information for openLCA Databases


If you double click on the "Processes" folder it will display the same sub-hierarchy of
processes (not shown here) that we saw in the NREL/Digital Commons website in Section
1. All of the data for unit processes are contained under that folder. If you click on the
"Utilities" subcategory folder, then the "Fossil Fuel Electric Power Generation" folder, you
will see the Electricity, bituminous coal, at power plant seen above, as shown in Figure 5-33.
Several of the other processes burning coal to make electricity and mentioned in the chapter
would also be visible.

Figure 5-33: Expanded View of Electricity Processes in Fossil Fuel Generation Category

Searching for a process in openLCA


Instead of using the Navigation tab, a search for process data can be done using the Search
tab. Clicking on the search tab brings up the search interface, as shown in Figure 5-34.

Figure 5-34: Default Search Interface in openLCA

In the first search option, you may search in all databases or narrow the scope of your search
to only a single database (e.g., to the US-LCI database). In the second option, you may
search all object types, or narrow the scope of your search to just "Processes", etc. Finally,
you can enter a search term, such as "electricity". If you choose to search for "electricity"
only in your US LCI database (note you may have named it something different), and only in
processes, and click Search, you will be presented with the results as in Figure 5-35. Note
that these results have been manually scrolled down to show the same Electricity, bituminous
coal, at power plant process previously identified.

Figure 5-35: Search Results for Electricity in US-LCI Database in openLCA

Unlike the other tools, there is no quick and easy way to skim metadata to confirm which
process you want to use.

Viewing process data in openLCA


To view process data, choose a process by double-clicking on it from either the browse or
search interface. This opens a new pane of the openLCA environment and returns the
process data and metadata overview, as shown in Figure 5-36. Similar to the Digital
Commons website, the default screen shows high-level summary information for the process
(not all of the information is shown in the Figure). Additional information is available in the
Inputs/Outputs, Administrative information, and other tabs at the bottom of this pane.

Figure 5-36: Process Data and Metadata Overview in openLCA

Clicking on the Inputs/Outputs tab displays the flow data in Figure 5-37, which for this
process is now quite familiar.

Figure 5-37: View of Process Flow Data (Inputs and Outputs) in openLCA

If you need to download this data, you can do so by choosing "Export" in the File menu,
but you cannot export it as a Microsoft Excel file.

Section 4 Spreadsheet-based Process Flow Diagram Models


Now that process data has been identified, quantitative process flow diagram-based LCI
models can be built. Amongst the many tools to build such models, Microsoft Excel is one
of the most popular. Excel has many built-in features that are useful for organizing LCI data
and calculating results, and is already familiar to most computer users.
To make these examples easy to follow, we repeat the core example from Chapter 5 (and
shown in Figure 5-5) involving the production of coal-fired electricity via three unit
processes in the US LCI database. The US LCI database is used since it is freely available
and indicative of many other databases (e.g., ELCD). To replicate the structure of the core
model from Chapter 5, we need to manage our process data in support of our process flow
diagram. The following steps illustrate the quantitative structure behind a process-flow
diagram based LCI model.
1) Find all required process data
In the first few sections of the advanced material for this chapter, we showed how to find
the required process data from the US LCI database via several different tools. Using similar
browse and search methods, you can find the LCI data for the other two processes so that
you have found US LCI data for these three core processes:

- Electricity, bituminous coal, at power plant
- Bituminous coal, at mine
- Transport, train, diesel powered

Depending on which tool you used to find the US LCI process data, it may be easy to export
the input and output flows for the functional unit of each process into Excel. If not, you
may need to either copy/paste, or manually enter, the data. Recall that accessing the US LCI
data directly from the LCA Digital Commons can yield Microsoft Excel spreadsheet files.
2) Organize the data into separate worksheets
A single Microsoft Excel spreadsheet file can contain many underlying worksheets, as shown
in the tabs at the bottom of the spreadsheet window. For each of the downloaded or
exported data modules, copy / paste the input/output flows into a separate Microsoft Excel
worksheet. If you downloaded the US LCI process data directly from the lcacommons.gov
website, the input/output flow information is on the "X-Exchange" worksheet of the
downloaded file (the US LCI data in other sources would be formatted in a similar way).
The Transport, train, diesel powered process has 1 input and 9 outputs (including the product
output), as shown in Figure 5-38.
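To make the structure of this extracted exchange data concrete, the Python sketch below mirrors one such worksheet as a small in-memory record. Only the functional unit basis of 1 and the rounded fossil CO2 value cited with Figure 5-38 come from the text; the input flow name and quantity are hypothetical placeholders, and only one of the nine outputs is sketched.

```python
# A minimal in-memory mirror of one extracted "X-Exchange" worksheet.
# The 0.019 kg fossil CO2 value is the rounded figure from Figure 5-38;
# the diesel input row is a HYPOTHETICAL placeholder.
transport_train = {
    "name": "Transport, train, diesel powered",
    "functional_unit": (1.0, "ton-km"),
    # (flow name, quantity, unit) -- the process's single input
    "inputs": [("Diesel, combusted in industrial equipment", 0.005, "kg")],  # hypothetical
    # only one of the nine outputs is sketched here
    "outputs": [("Carbon dioxide, fossil", 0.019, "kg")],
}

n_inputs = len(transport_train["inputs"])   # the process has 1 input
co2_out = transport_train["outputs"][0][1]  # kg CO2 per ton-km
```

Organizing each process this way makes the later scaling step a simple multiplication over the flow quantities.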

Figure 5-38: Display of Extracted flows for Transport, train, diesel powered process from US LCI

3) Create a separate "Model" worksheet in the Microsoft Excel file


This Model worksheet will serve as the primary workspace to keep track of the relevant
flows for the process flow diagram. This sheet uses cell formulas to reference the flows on
the other worksheets that you created from the process LCI datasets.
Beyond just referencing the flows in the other worksheets, the Model worksheet must scale
the functional unit-based results as needed based on the process flow diagram. For example,
in Equation 5-1, results were combined for 1 kWh of electricity from bituminous coal, 0.46
ton-km of train transportation, and 0.44 kg of coal mining. Since the process LCI data
modules are generally normalized on a basis of a functional unit of 1, we need to multiply
these LCI results by 1, 0.46, or 0.44.
Basic LCI Spreadsheet Example
In this example, a basic cell formula is created on the Model worksheet to add the output
flows of CO2 from the three separate process worksheets. We first make a summary output
result cell for each of the three processes where we multiply the CO2 emissions value from
each worksheet (e.g., the rounded value 0.019 in cell G8 of Figure 5-38) by the functional
unit scale factor listed above. Then we find the sum of CO2 emissions across the three
processes by typing = into an empty cell and then successively clicking on the three scaled
process emissions values.
The Chapter 5 folder has a "Simple and Complex LCI Models from US LCI"
spreadsheet file following the example as shown in the Chapter (which only tracked
emissions of fossil CO2). Figure 5-39 shows an excerpt of the "Simple Model" worksheet in
the file. The same result as shown in the chapter (not rounded off) is visible in cell E8, with
the cell formula =B8+C8+D8.
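The same scaled-sum arithmetic can be sketched outside of Excel. In this Python sketch, the transport CO2 intensity is the rounded 0.019 kg/t-km figure from Figure 5-38, while the electricity and coal-mining values are hypothetical placeholders, not actual US LCI data:

```python
# Scale factors from the process flow diagram (per Equation 5-1):
# 1 kWh electricity, 0.46 t-km rail transport, 0.44 kg coal mined.
scale_factors = {"electricity": 1.0, "transport": 0.46, "mining": 0.44}

# Fossil CO2 per functional unit of each process. The transport value is the
# rounded figure from Figure 5-38; the other two are HYPOTHETICAL placeholders.
co2_per_unit = {"electricity": 0.95, "transport": 0.019, "mining": 0.005}  # kg

# Equivalent of the worksheet formula =B8+C8+D8 applied to the scaled values
total_co2 = sum(scale_factors[p] * co2_per_unit[p] for p in scale_factors)
```

With the placeholder values above, total_co2 is simply 1.0*0.95 + 0.46*0.019 + 0.44*0.005 kg, the same three-term sum as the worksheet.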

Figure 5-39: Simple Spreadsheet-Based Process LCI Model

This simple LCI model shows a minimal-effort result; using a spreadsheet for it is
perhaps overkill. Tracking only CO2 emissions means that we only have to add three scaled
values, which could be accomplished by hand or on a calculator. However, this spreadsheet
motivates the possibility that a slightly more complex spreadsheet could be created that
tracks all flows, not just emissions of CO2.
Complex LCI Spreadsheet Example
Beyond the assumptions made in the simple model above, in LCA we often are concerned
with many (or all) potential flows through our product system. Using the same underlying
worksheets from the simple spreadsheet example, we can track flows of all of the outputs
listed in the various process LCI data modules (or across all potential environmental flows).
This not only allows us a more complete representation of flows, but better prepares us for
next steps such as impact assessment.
In this complex example, we use the same three underlying input/output flow worksheets,
but our Model worksheet more comprehensively organizes and calculates all tracked flows
from within a dataset. Instead of creating cell formulas to sum flows for each output (e.g.,
CO2) by clicking on individual cells in other worksheets, we can use some of Excel's other
built-in functions to pull data from all listed flows of the unit processes into the summary
Model worksheet. An example file is provided, but the remaining text in this section
describes in a bit more detail how to use Excel's SUMPRODUCT function for this task.
The SUMPRODUCT function in Microsoft Excel, named as such because it finds the sum
of a series of multiplied values, is typically used as a built-in way of finding a weighted
average: corresponding entries of its range arguments are multiplied together, and the
products are summed. For example, instead of
the method shown in the Simple LCI spreadsheet above, we could have copied the CO2
emissions values from the three underlying worksheets into the row of cells B8 through D8,
and then used the function =SUMPRODUCT(B4:D4*B8:D8) to generate the same result.
The "Simple and Complex LCI Models" file has a worksheet "Simple Model (with
SUMPRODUCT)" showing this example in cell E8, yielding the same result as above.
However, the SUMPRODUCT function can be more generally useful because of how Excel
handles TRUE and FALSE values and the fact that the "terms" of SUMPRODUCT are
multiplied together. In Excel arithmetic, TRUE is treated as 1 and FALSE as 0. So if we
construct "terms" in the SUMPRODUCT that evaluate to 1 or 0, the SUMPRODUCT will
only yield results when all expressions are TRUE, and return 0 otherwise. This achieves
the mathematical equivalent of if-then statements on a range of cells.
The magic of this SUMPRODUCT function for our LCI purposes is that if we have a
master list of all possible flows, compartments, and sub-compartments, we can find whether
flow values exist for any or all of them. On the US LCI Digital Commons website, a text file
can be downloaded with all of the nearly 3,000 unique compartment flows present in the US
LCI database. This master list of flows can be pasted into a Model worksheet and then used
to "look up" whether numerical quantities exist for any of them.
A representative cell value in the complex Model worksheet, which has similar cell formulas
in the 3,000 rows of unique flows, looks like this (where cells A9, B9, and C9 are the flow,
compartment, and subcompartment values we are trying to match in the process data):
=E$4*SUMPRODUCT((Electricity_Bitum_Coal_Short!$A$14:$A$65=A9)*(Electricity_Bitum_Coal_Short!$C$14:$C$65=B9)*(Electricity_Bitum_Coal_Short!$D$14:$D$65=C9)*Electricity_Bitum_Coal_Short!$G$14:$G$65)
This cell formula multiplies the functional unit scale factor in cell E4 by the
SUMPRODUCT value of:

- whether the flow name, compartment, and subcompartment in the unit flows for the coal-fired electricity process match each item in the master list of flows, and
- if the flow/compartment/subcompartment values match, the inventory value for the matched flow.

Within the SUMPRODUCT, if the flow/compartment/subcompartment in the unit process
data doesn't match the flow/compartment/subcompartment on the row of the Model
worksheet, the Boolean values are all 0's and the result is 0. If they all match, the Boolean
results are 1, and the final part of the SUMPRODUCT expression (the actual flow quantity)
is returned.
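The Boolean-matching logic of this cell formula can be expressed in a few lines of Python. The helper below is a sketch (the function name and the sample rows are invented for illustration); only the matching idea comes from the cell formula above:

```python
# A Python sketch of the SUMPRODUCT matching logic. Function name and sample
# rows are HYPOTHETICAL; only the matching idea comes from the cell formula.
def lookup_flow(master_row, process_rows, scale):
    """Sum quantities of process rows whose (flow, compartment,
    subcompartment) triple matches master_row, scaled by the functional-unit
    factor. Each equality test plays the role of a TRUE/FALSE (1/0) term."""
    flow, comp, subcomp = master_row
    return scale * sum(
        qty
        for (f, c, s, qty) in process_rows
        if f == flow and c == comp and s == subcomp
    )

# Hypothetical exchange rows: (flow, compartment, subcompartment, quantity)
rows = [
    ("Carbon dioxide, fossil", "air", "unspecified", 0.019),
    ("Methane", "air", "unspecified", 4e-05),
]
co2 = lookup_flow(("Carbon dioxide, fossil", "air", "unspecified"), rows, 0.46)
no_match = lookup_flow(("Lead", "water", "unspecified"), rows, 0.46)  # no match -> 0
```

As in the spreadsheet, a master-list row with no matching exchange simply yields zero rather than an error.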

Figure 5-40: Complex Spreadsheet-Based Process LCI Model

The Chapter 5 folder on the textbook website has spreadsheets with all of the flows and
processes in the US LCI database, as downloaded from the LCA Digital Commons website.
The "Simple and Complex LCI Models" file has a worksheet "Complex Model" which
shows how to use the SUMPRODUCT function to track all 3,000 flows present in the
US LCI database (from the flow file above). Of course the results are generally zero for
each flow due to data gaps, but this example model expresses how to broadly track all
possible flows. You should be able to follow how this spreadsheet was made and, if needed,
add additional processes to this spreadsheet model.

Photo Credit: Chris Goldberg, 2009, via Creative Commons license (CC BY-NC 2.0)

Chapter 6 : Analyzing Multifunctional Product Systems


In Chapter 5, we showed the relatively simple steps of building a process flow diagram-based
LCA model where there was only one product in the system. However, product systems in
LCA studies may have multiple products, providing multiple functions. Analyzing these
systems introduces new complexities, and this chapter demonstrates various methods
(referenced in the Standard) for overcoming or addressing these challenges. The methods
described herein modify either the systems studied or the input and output flow values so
that the multifunction systems can be quantitatively assessed.

Learning Objectives for the Chapter


At the end of this chapter, you should be able to:
1. Discuss the challenges presented by processes and systems with multiple
products and functions.
2. Perform allocation of flows to co-products from unallocated data.
3. Replicate results from database modules containing unallocated and allocated
flows.
4. Estimate inventory flows from a product system that has avoided allocation via
system expansion.

Multifunction Processes and Systems


Many processes and product systems are simple enough that they have only a single product
output that provides a single function. However, even when tightly scoped, there are also
many processes and systems that will have multiple products that each provide their own
function. A good example is a petroleum refinery that has outputs of gasoline, diesel, and
other products. LCA studies typically have function and functional unit definitions related to
the life cycle effects of only one product. As such, a method is needed to connect input and
output flow data with a desired functional unit, subject to the data associated with multiple
products. The method chosen can have a significant effect on the results, and how best to
deal with such systems remains a subject of much debate.
Building on the example figures and discussion in Chapters 4 and 5, Figure 6-1 shows a
generic view of a unit process with multiple product outputs, each of which provides its own
function. In this case, there are three Products, A, B, and C, associated with functions 1, 2,
and 3, respectively.

Figure 6-1: Generic Illustration of a Unit Process with Multiple Products or Co-Products

Co-products exist when a process has more than one product output, which is a fairly
common outcome given the complexity of many industrial processes. If the goal of our
study is to assess the effects associated with Product A, which provides Function 1, we need
to find a way to deal with the provision of Products B and C, which provide Functions 2 and
3. In the context of a particular study, typically the product of primary interest (i.e., Product
A above) is referred to as the product, and any other products (i.e., Products B and C above)
as co-products, but this is not a standard terminology.
The Standard suggests two ways of approaching this problem: either by partitioning the
process so that a set of quantitative connections are derived between the inputs and outputs
and the various products (known as allocation), or by changing the way in which we have
defined our system so that we can clearly show just the effects associated with Product A
and its associated function (known as system expansion). While system expansion is the
preferred method, we discuss allocation first because it is simpler to understand and also
helps to frame the broader discussion.

Allocation of Flows for Processes with Multiple Products


For a unit process, the goal of allocation is to assign a portion of each input and
non-product (e.g., emission) output to each of the various products, such that the sum of all
product shares equals the total input and output flows for the process. Allocation is also
referred to as partitioning.
In Chapter 5, we accessed the US LCI database information and directly used all of the data
without modification in our model for the bituminous coal-fired electricity process flow
diagram. We even used some of the information in the process data to decide the
multipliers needed in using our other process data sources. We could directly use all of the
data because the Electricity, bituminous coal, at power plant process listed only one
product: electricity (see Figure 5-6).
For other LCA models, we may have to manipulate the data in some way to make it fit the
needs of our study. However, one could envision an alternative process where aside from
generating electricity, the process also produced heat (e.g., a combined heat and power, or
CHP system). Such a process has multiple products, heat and power, each of which has a
different function. Furthermore, we might want to derive a mathematical means of
associating a relevant portion of the quantified inputs and outputs to each of the products
(i.e., to know how much pollution we associate to each product of the system). This
association is called allocation.
The data associated with processes or systems having multiple product outputs may be
organized in several ways. Their most raw form (i.e., as collected) will be an inventory of
unallocated inputs and outputs, and relative quantities of co-product outputs. These
unallocated flows represent a high level view of the process, representing all flows as
measured but without concern for how those flows may connect to specific co-products.
An example would be process data for an entire refinery that tracks all inputs (crude oil,
energy, etc.) and quantifies all outputs (e.g., diesel, gasoline, etc.). Alternatively, process data
may consist of already allocated flow estimates of inputs and outputs for each co-product.
For instance, the refinery process data would contain estimates of crude oil and energy
inputs used for each unit of gasoline, diesel, etc.
In allocation, the key concern is determining the appropriate mathematical relationship to
transform the unallocated flows to allocated flows. The Standard gives specific, yet
somewhat vague directions on the appropriate methods of allocation. First off, ISO says
that, if possible, allocation should be avoided, which we will assume has been deemed not
possible. But, for the sake of discussion, if allocation is needed, then the Standard says that
the inputs and outputs of the process should be partitioned between its products based on a
physical relationship. It states that the physical relationship "should reflect the way in which the
inputs and outputs are changed by quantitative changes in the products or functions
delivered by the system." Further, if the physical relationship alone is not sufficient to
perform the allocation, then other relationships should be used, such as economic factors.
Commonly used allocation methods include mass-based, energy-based, and economic-based
methods. Not all systems can be allocated in all of these ways: some products have no mass,
products may differ in the utility of their energy (e.g., heat versus electricity), and economic
measures like market value may fluctuate widely.
It is important to observe that the Standard does not prescribe which allocation method to
use, i.e., it does not say to always use a mass-basis or an energy-basis for allocation, or to
never use an economic basis. The only specifications provided pertain to reuse and
recycling, where the Standard gives an ordering of preference for allocation methods,
specifically, physical properties (e.g., mass), economic values, and number of uses.
Practitioners often use this same ordering for allocating processes other than reuse and
recycling, which may be a useful heuristic, but it is not prescribed by the Standard.
As with other work, choices and methods behind allocation should be justified and
documented. In addition, the Standard requires that when several possible allocations seem
reasonable, sensitivity analysis should be performed to show the relative effects of the
methods on the results (see section at the end of the chapter). For example, we might
compare the results of choosing a mass-based versus an energy-based allocation method.
The following example does not use a traditional production process with various co-products, but it will help to motivate and explain allocation. In this example, consider a
truck transporting different fruits and vegetables. The truck is driven from a farm to a
market, as in the photo at the beginning of this chapter. For this one-way trip, the truck
consumes 5 liters of diesel fuel, and it emits various pollutants (not quantified in this
example). If apples, watermelons, and lettuce were the only three produce items delivered,
the collected data might show that the truck delivered produce in your measured trip with
the values shown in Figure 6-2. "Per type" values are the per item values multiplied by the
number of items for each type of fruit or vegetable.

                 Items                Mass                   Market Value
              Number    Per Item    Per Type      Per Item    Per Type
Apples          100      0.2 kg       20 kg        $0.40        $40
Watermelon       25        2 kg       50 kg        $4           $100
Lettuce          50      0.4 kg       20 kg        $1           $50
Total       175 items      -          90 kg         -           $190

Figure 6-2: Summary Information of Fruits and Vegetables Delivered (per item and per type)

If we focus on determining how to allocate the diesel fuel, our LCA-supported question
becomes, "how much of the diesel fuel use is associated with each item of fruit and
vegetable?" To answer this, we need to do an allocation, which requires only simple math.
The allocation process involves determining the allocation factor, or the quantitative share
of flows to be associated with each unit of product, and then multiplying the unallocated
flow (in this case, 5 liters of diesel fuel) by these allocation factors to find the allocated
flows. Before we show the general equation for doing this, we continue with the produce
delivery truck example. We use significant digits casually here to help follow the methods.
If the allocation method chosen was based on number of items in the truck, from Figure
6-2, there are 175 total items, so the flow per unit (items) is 1/175 items. Each item of
produce in the truck, regardless of the type of produce, would be allocated (1/175 items)*(1
item)*(5 liters) of diesel, or 0.029 liters. The value (1/175 items)*(1 item) is the allocation
factor, 5 liters is the unallocated flow, and 0.029 liters is the allocated flow. Alternatively, in
a mass-based allocation, the total mass transported was 90 kg. The flow per unit of mass is
1/90 kg, and each apple would be allocated 0.2 kg*(1/90 kg)*5 liters, or 0.011 liters of diesel.
Finally, the total market value of all produce on the truck is $190. The flow per dollar is
1/$190, and each apple would be allocated $0.4*(1/$190)*5 liters, or 0.01 liters of diesel.
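These three allocations can be checked with a short Python calculation. The sketch below (the function name allocate is ours) reproduces the per-item figures using the quantities in Figure 6-2:

```python
# Worked allocation for the produce delivery truck (Figure 6-2 quantities).
items = {"apples": 100, "watermelon": 25, "lettuce": 50}
mass_per_item = {"apples": 0.2, "watermelon": 2.0, "lettuce": 0.4}    # kg
value_per_item = {"apples": 0.40, "watermelon": 4.0, "lettuce": 1.0}  # $
diesel = 5.0  # liters, the unallocated flow

def allocate(per_item_param):
    """Allocated diesel per single item of each produce type."""
    total = sum(items[p] * per_item_param[p] for p in items)
    return {p: per_item_param[p] / total * diesel for p in items}

by_count = allocate({p: 1.0 for p in items})  # every item: 5/175, about 0.029 L
by_mass = allocate(mass_per_item)             # apple: 0.2/90 * 5, about 0.011 L
by_value = allocate(value_per_item)           # apple: 0.40/190 * 5, about 0.01 L
```

Summing each set of allocated flows over all 175 items recovers the original 5 liters, the same validity check performed below.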
The allocation factors and allocated flows of diesel fuel for each fruit and vegetable are
shown in Figure 6-3. The results show that the diesel fuel allocated is quite sensitive to the
type of allocation chosen: for apples, the diesel fuel allocated varies from 0.01 to 0.029 liters
(a factor of 3), and for watermelon, from 0.029 to 0.11 liters (a factor of 4). The allocated
flow of diesel for lettuce is much less sensitive, varying from 0.022 to 0.029 liters (only
about 30%).
                  Item                        Mass                           Economic
             Allocation   Allocated    Allocation         Allocated   Allocation        Allocated
             factor       flow (L)     factor             flow (L)    factor            flow (L)
Apples       1 * 1/175     0.029       0.2 kg * 1/90 kg    0.011      $0.40 * 1/$190     0.01
Watermelon   1 * 1/175     0.029       2 kg * 1/90 kg      0.11       $4 * 1/$190        0.11
Lettuce      1 * 1/175     0.029       0.4 kg * 1/90 kg    0.022      $1 * 1/$190        0.026

Figure 6-3: Allocation Factors and Allocated Flows Of Diesel Fuel per Type of Produce

To validate that our math is correct, we check that the sum of the allocated flows equals the
unallocated value (5 liters). For allocation by items, 0.029 l/item * 175 items = 5.075 liters.
By mass, the check is 0.011*100 + 0.11*25 + 0.022*50 = 4.95 liters. For price, the check is
0.01*100 + 0.11*25 + 0.026*50 = 1+2.75+1.3 = 5.05 liters. The allocations appear correct,
and the slight discrepancies from 5 liters are due to rounding.
The estimates from Figure 6-3 could be used to support a cradle to consumer LCI of energy
use for bringing fruit to market. If you had process data on energy use for producing
(growing) an apple, for instance, you could expand the scope of your study by adding one of
the allocated flows, i.e., 0.029, 0.011, or 0.01 liters of diesel fuel for transport. As noted
above, a key concern is the choice of allocation method (or methods) in support of such a
study. While the Standard says the first priority is to use a physical relationship-based factor,
the larger issue is whether any of the allocation methods would individually lead to a
different result. If in the apple LCI, for instance, you chose the economic allocation over
the item-based allocation, you would be choosing a factor that represents roughly one-third
of the estimated transport energy. If the energy used to grow the apple was otherwise comparable
in magnitude to the energy required for transport, then the choice of transportation
allocation method could have a significant effect on the overall result. In this case, the choice
of allocation may be construed as biasing the overall answer. Since the Standard suggests
using sensitivity analyses, the best option may be to show the cradle to consumer results
using all three types of allocation.
The same math could be used to allocate other flows if available, such as data on an
unallocated output flow of 10 kg of CO2 emissions from the truck. Since the allocation
factors represent flow shares of individual pieces of fruit, we use the same allocation factors
as we did for diesel fuel to allocate the CO2 emissions. In this case, the item-based allocation
flow would be 1/175th of the 10 kg of CO2, or 0.057 kg of CO2 for any type of produce. A
mass-based allocation for each apple would distribute 0.2 kg/90 kg * 10kg = 0.022 kg of
CO2. Of course, all of the allocated flows of CO2 would have a value exactly double those
of diesel (since there are 10 kg versus 5 liters of unallocated flow). The relative sensitivities
of the various allocation choices would be the same.
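A short calculation confirms that allocation factors are independent of the flow being allocated; the numbers below come from this example (10 kg of CO2 and the mass data of Figure 6-2):

```python
# Allocation factors depend only on the allocation basis, not on the flow
# being allocated: the same mass-based factors redistribute the 10 kg of CO2.
mass_factor = {"apples": 0.2 / 90, "watermelon": 2.0 / 90, "lettuce": 0.4 / 90}
co2_total = 10.0  # kg, unallocated output flow

co2_per_item = {p: factor * co2_total for p, factor in mass_factor.items()}
# apples: 0.2/90 * 10 = 0.022 kg -- exactly double the diesel allocation,
# since 10 kg of CO2 is double the 5 liters of unallocated diesel.
```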
Note that in the delivery truck example, it was implicitly assumed that all of the produce
were sold at market - we expected to get $190 in revenue. Aside from being a convenient
assumption, it also implies that the truck would return back to the farm with no produce.
One could argue that this empty return trip (referred to as backhaul in the transportation
industry) requires additional consumption of fuel and generates additional air emissions that
should be allocated to the produce sold at market. Given the weight of the fruit compared
to the total weight of the truck, it's likely the backhaul consumed a similar amount of fuel,
and thus, adding the backhaul process might double the allocated flows of diesel for delivery
in an updated cradle to consumer LCA. For larger trucks or ocean freighters, an empty
backhaul may consume significantly less fuel. Regardless, these considerations represent
potential additions to the system boundary compared to the delivery alone.
The delivery truck example is not just an example chosen to simplify the discussion of
allocation. Indeed, similar process data would have to be allocated to support different LCIs
and LCA studies. For example, a study on the LCA of making purchases online versus in
retail stores might allocate the energy required for driving a UPS or FedEx delivery truck
amongst the packages delivered that day. It is not obvious which of the allocation methods
is best, nor is it obvious how the allocated results might change with the change of the
method. The mass of the boxed products is potentially a bigger factor in how much fuel is
used, and the variation in the value of the boxes is likely much higher (especially on a per
unit mass basis!) than in our simple produce delivery truck example.

Now that the general quantitative methods of allocation have been discussed, Equation 6-1
represents a general allocation equation, recalling that unit processes generally have multiple
unallocated flows (in this case, indexed by i). Every allocated flow can be associated with
each of the n co-products (indexed by j) using the product of the unallocated flow and the
allocation factor for each co-product. The fraction on the right hand side of Equation 6-1 is
the previously defined allocation factor, which divides the unit parameter of the co-product
j, wj (e.g., mass per unit), for the chosen allocation method by the product of the number of
units (m) and the unit parameters for all n co-products (with the sum indexed by k).
e_{i,j} = e_i \cdot \frac{w_j}{\sum_{k=1}^{n} m_k w_k}    (6-1)

where e_i is unallocated flow i and e_{i,j} is the portion of that flow allocated to co-product j.

Applying this equation to the truck delivery example, the unallocated flow of diesel fuel is 5
liters, the mass-based allocation factor for apples is the mass per apple divided by the sum of
mass of all of the produce in the truck, or 0.2 kg / (0.2 kg*100+2 kg*25+0.4 kg*50) =
0.00222, so the allocated flow per apple is 0.011 liters. Using these values in Equation 6-1
generates all of the results in Figure 6-3.
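As a minimal sketch (in Python, which is not part of the text), Equation 6-1 can be applied to the delivery truck numbers quoted above; the two non-apple produce items are entered only by their counts and unit masses, since that is all the excerpt gives:

```python
# Sketch of Equation 6-1 for the produce delivery truck example.
def allocation_factor(w_j, units_and_weights):
    """w_j: unit parameter (e.g., kg per unit) of co-product j.
    units_and_weights: list of (m_k, w_k) pairs for all n co-products."""
    return w_j / sum(m_k * w_k for m_k, w_k in units_and_weights)

def allocated_flow(e_i, w_j, units_and_weights):
    """Equation 6-1: e_ij = e_i * w_j / sum_k(m_k * w_k)."""
    return e_i * allocation_factor(w_j, units_and_weights)

produce = [(100, 0.2), (25, 2.0), (50, 0.4)]  # (units, kg/unit): apples and two other items
diesel = 5.0                                  # liters of unallocated diesel for the delivery

af_apple = allocation_factor(0.2, produce)
print(round(af_apple, 5))                              # 0.00222, matching the text
print(round(allocated_flow(diesel, 0.2, produce), 4))  # 0.0111 liters per apple
```

Repeating the calls with the other (m_k, w_k) pairs reproduces the remaining rows of Figure 6-3.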
While this transportation example is useful for explaining allocations, the equation and
general method are useful in deriving allocations for other unit processes. In a subsequent
section, we discuss the ways in which allocated flows are implemented and documented in
existing LCI data modules by looking at actual data modules.
Allocation methods within the scope of a study should be as consistent as possible, and
comparative studies should use the same allocation methods (e.g., allocating all flows on a
mass basis) for each system or process. Due to challenges associated with data acquisition,
this may prove difficult, so at least analogous processes should be allocated in the same way
(e.g., all refining processes on a mass basis). Regardless, all allocation methods and
deviations from common allocation assumptions should be documented.

Allocation in Resource Recovery Systems


The Standard provides additional detail for allocation in processes and systems where
resources are recovered. Resource recovery processes are those where resources are reused,
recycled, composted, etc. Additional detail is provided in the Standard because recovery
systems have inputs and output flows that are shared across multiple systems. For example,
virgin plastic may be manufactured and then recovered and used in various iterations of
recycled plastic products, such as plastic fasteners, bottles, and packaging. This view may
imply that the flows from the virgin production of plastic are allocated across all subsequent
product systems. However, resource recovery systems lead to various effects, such as
potential changes in material properties (e.g., the durability of the material may be adversely
affected due to reprocessing) that need to be considered.
One way that recovery systems can be characterized is by whether the resource is
recovered into the same kind of product system or not, i.e., whether open or closed loop
recovery occurs. In closed loop systems, the recovered resource is recovered into the same
product system (e.g., aluminum cans are recycled into aluminum cans). In open loop
systems, the initial material or resource is subsequently used in alternative systems that, over
time, are different than the initial system (e.g., plastic is recovered into different products, or
in various recovery systems related to paper and paperboard products). Such systems are
often associated with changes in material properties, as motivated above. A popular form of
open loop recovery is cascade recycling, where an initial virgin material is continually
used in various processes, losing some quality in each process, until it eventually can no
longer be recovered into another system. At the end of life, such materials must be
landfilled, burned for energy, etc.
The Standard provides guidance for both open and closed loop systems. For closed loop
systems (or open loop systems where material properties do not change), allocation is not
needed because there are no processes whose flows must be shared across the systems in the
extended life cycle. For open loop systems, on the other hand, allocation is needed, and the
set of processes whose flows need to be shared are explicitly identified and allocated (e.g.,
the material recovery process from the initial product life to the second use of the material).
As mentioned above, allocation is needed in open-loop systems, and in order of preference
the method should consider mass, economic value, or the number of subsequent uses as the basis.
Recall that the Standard defines a product as, "any good or service," which implies each
product has value, else it might be classified more generally as a release or waste. There are
interesting implications when the perspective changes regarding an output of a process from
a product to a waste, or vice versa. For example, fly ash releases from electricity generation
have historically been impounded but now may have value as a feedstock for alternative
materials like cement or wallboard.
Another example pertains to zinc production. Historically, the heavy metals lead and
cadmium have been co-products of zinc mining, and LCA studies allocated flows across the
three products. Increasing global regulation has suppressed demand for lead and cadmium,
and they are now generally waste products. Results of past LCAs would need to be updated
to assign no flows to the wastes, since only products are subject to allocation.


Allocation Example from LCI Databases


Existing LCI databases have information on many unit processes with co-product outputs.
When used with search engines and LCA software, the unallocated and allocated flows may
be found, and comparing them helps to understand the link between the unallocated and
allocated data.
The US LCI database provides various process data modules where there is already available
information on products and co-products, and thus excellent examples to learn about
allocation. Two prime examples are for the two refinery processes in the US LCI database,
named Petroleum refining, at refinery and Crude oil, in refinery.9 These two unit processes provide
very similar, yet different, LCIs for a refinery. If you search for "refinery" or "refining" in
the Digital Commons/NREL website (as demonstrated in Chapter 5), various unit processes
and allocated co-products are returned, including:

Petroleum refining, at refinery
Diesel, at refinery (Petroleum refining, at refinery)
Gasoline, at refinery (Petroleum refining, at refinery)
Crude oil, in refinery
Diesel, at refinery (Crude oil, in refinery)
Gasoline, at refinery (Crude oil, in refinery)

The co-products in the returned result can be identified because the name of the co-product
as well as the name of the unallocated refinery process model (in parentheses) is given. The
connection between these two types of data is discussed in more detail below.
Following the format of Chapter 5, Figure 6-4 shows the data available for the US LCI unit
process Petroleum refining, at refinery. The table has been abridged by removing various
elementary flow inputs to save space and reduced to 3 significant figures. The 'category' and
'comment' fields have also been removed. This crude oil refining process shows 9 italicized
product flows representing various fuels and related refinery outputs, e.g., diesel fuel,
bitumen and refinery gas. The last row also shows a functional unit basis for the unit
process, i.e., per 1 kg of petroleum refining, which is just a bookkeeping reference entry and
does not represent an additional product. Unlike other process data modules we discovered
in Chapter 5, the outputs of this crude oil process are not all in a singular unit such as 1
gallon or 1 kg. Instead, the product flow quantities are 0.252 liters of diesel, 0.57 liters of
gasoline, etc. The reason for this difference should be clear: there are multiple products! It
is not possible to have a single set of raw inputs and outputs all be normalized to '1 unit of
product' when more than one product exists. This further motivates the notion that the
refinery inputs and outputs would need to be allocated.

9 Other quite accessible examples in the US LCI database include agricultural and forestry products.

Flow | Type | Unit | Amount

Inputs
Electricity, at grid, US, 2008 | ProductFlow | kWh | 0.143
Natural gas, combusted in industrial boiler | ProductFlow | m3 | 0.011
Residual fuel oil, combusted in industrial boiler | ProductFlow | | 0.027
Liquefied petroleum gas, combusted in industrial boiler | ProductFlow | | 0.001
Transport, barge, diesel powered | ProductFlow | tkm | 0.000
Transport, barge, residual fuel oil powered | ProductFlow | tkm | 0.001
Transport, ocean freighter, diesel powered | ProductFlow | tkm | 0.490
Transport, ocean freighter, residual fuel oil powered | ProductFlow | tkm | 4.409
Transport, pipeline, unspecified petroleum products | ProductFlow | tkm | 0.652
Crude oil, extracted | ProductFlow | kg | 1.018
Dummy_Disposal, solid waste, unspecified, to sanitary landfill | ProductFlow | kg | 0.006

Outputs
Benzene | ElementaryFlow | kg | 1.08E-06
Carbon dioxide, fossil | ElementaryFlow | kg | 2.51E-04
Carbon monoxide | ElementaryFlow | kg | 4.24E-04
Methane, chlorotrifluoro-, CFC-13 | ElementaryFlow | kg | 2.18E-08
Methane, fossil | ElementaryFlow | kg | 3.70E-05
Methane, tetrachloro-, CFC-10 | ElementaryFlow | kg | 1.36E-09
Particulates, < 10 um | ElementaryFlow | kg | 3.15E-05
Particulates, < 2.5 um | ElementaryFlow | kg | 2.31E-05
SO2 | ElementaryFlow | kg | 2.47E-04
Diesel, at refinery | ProductFlow | l | 0.252
Liquefied petroleum gas, at refinery | ProductFlow | l | 0.049
Gasoline, at refinery | ProductFlow | l | 0.57
Residual fuel oil, at refinery | ProductFlow | l | 0.052
Bitumen, at refinery | ProductFlow | kg | 0.037
Kerosene, at refinery | ProductFlow | l | 0.112
Petroleum coke, at refinery | ProductFlow | kg | 0.060
Refinery gas, at refinery | ProductFlow | m3 | 0.061
Petroleum refining coproduct, unspecified, at refinery | ProductFlow | kg | 0.051
Petroleum refining, at refinery | ProductFlow | kg | 1

Figure 6-4: US LCI Database Module for Petroleum refining, at refinery


The comment fields in the Petroleum refining, at refinery process, which were removed in Figure
6-4, are excerpted in Figure 6-5. They contain notes related to how the input and output
flows could be allocated to the co-products, and how the creators of this data module
derived the converted allocated values for the specific co-products on a 'per unit of product'
basis. The result of this procedure is the various co-product based LCI modules listed
above.
Product | Comment
Diesel, at refinery | Mass (0.2188 kg/kg output) used for allocation.
Liquefied petroleum gas, at refinery | Mass (0.0266 kg/kg output) used for allocation.
Gasoline, at refinery | Mass (0.4213 kg/kg output) used for allocation.
Residual fuel oil, at refinery | Mass (0.0489 kg/kg output) used for allocation.
Bitumen, at refinery | Mass (0.0372 kg/kg output) used for allocation.
Kerosene, at refinery | Mass (0.0910 kg/kg output) used for allocation.
Petroleum coke, at refinery | Mass (0.0596 kg/kg output) used for allocation.
Refinery gas, at refinery | Mass (0.0451 kg/kg output) used for allocation.
Petroleum refining co-product, at refinery | Mass (0.0515 kg/kg output) used for allocation.

Figure 6-5: Comments Related to Allocation for Co-Products of Petroleum refining, at refinery

The available US LCI database Microsoft Excel spreadsheet for the Petroleum refining, at
refinery process module shows these allocation factors in the X-Exchange worksheet, in
columns to the right of the comment fields.
The summary product flow of "per kg of petroleum refining" and the comment fields make
clear that allocation is based on the physical relationship of mass of the various products.
For example, the first row of Figure 6-5 shows the data needed to create an allocation factor
for the diesel co-product. The comment in Row 1 says that 0.2188 kg diesel is produced per
kg refinery output, or in other words, 21.88% of the refinery product represented in the data
for this unit process becomes diesel on a mass basis. The value 21.88% is the allocation
factor for diesel. Likewise, 2.66% by mass becomes LPG, and 42.13% becomes gasoline.
The sum of all of the mass fractions provided is 1 kg, or 100% of the mass of total refinery
output. With these values, you could use the information in Figure 6-5 to transform the
unallocated inputs and outputs into allocated flows for your desired co-product.
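As a sketch of that transformation (hypothetical Python, not part of the text; the mass fractions are those listed in Figure 6-5), the factors can be checked for completeness and used to scale any unallocated flow:

```python
# Mass-based allocation factors from Figure 6-5
# (kg of each co-product per kg of total refinery output).
factors = {
    "diesel": 0.2188, "lpg": 0.0266, "gasoline": 0.4213,
    "residual fuel oil": 0.0489, "bitumen": 0.0372, "kerosene": 0.0910,
    "petroleum coke": 0.0596, "refinery gas": 0.0451, "co-product": 0.0515,
}
print(round(sum(factors.values()), 4))  # 1.0: the fractions cover all refinery mass

# Allocating any unallocated flow is then a simple scaling, e.g. the 1.018 kg
# of crude oil input per kg of refinery output:
crude_in = 1.018
crude_for_diesel = factors["diesel"] * crude_in  # kg crude assigned to the diesel output
print(round(crude_for_diesel, 4))
```

The same scaling applies to every input and output row of Figure 6-4, one co-product at a time.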
Management of different units and conversions can be a complicating factor in transforming
from unallocated to allocated values in data modules. Figure 6-4 shows all of the inputs and
outputs connected to the refinery on a basis of a functional unit of 1 kg of refined petroleum
product (which we noted was just a book-keeping entry). Note that the co-products have
multiple units: liters, cubic meters, and kilograms. But the product flow of diesel given in
the unit process is 0.252 liter, rather than 1 liter, or rather than 1 kg. We are likely interested
in an allocated flow per 1 unit of co-product, which means our previous allocation equation
needs an additional term, as in the generalized Equation 6-2.

e_{i,j} = e_i \cdot \frac{w_j}{\sum_{k=1}^{n} m_k w_k} \cdot \frac{1}{q_j}    (6-2)

where q_j is the quantity of co-product j produced by the unit process (e.g., 0.252 liters of diesel).

Since the allocation factor for diesel in the refinery on a mass basis is 0.2188 (21.88%), that
amount of each of the inputs of the refinery process would be associated with producing
0.252 liters of diesel. Using Equation 6-2, the allocated flow of crude oil (row 10 of Figure
6-4) needed to produce 1 liter of diesel fuel is:
!"#$%,!"#$#% =

1.018
1 0.884
0.2188
=
1
0.252

Other comments in the US LCI data module (not shown here) note that the assumed density
of diesel is 1.153 liter/kg, so this allocated result means 0.884 kg crude oil is needed to
produce 1 liter /(1.153 liter/kg) = 0.867 kg of diesel. This mass-based ratio is comparable to
the 1.018 kg crude oil needed to produce the overall 1 kg of refinery product.10 While the
1.153 liter/kg value may or may not be consistent with a unit conversion factor you might
find on your own, it is important to use the same ones as used in the study, else you may
derive odd results, such as requiring less than 1 kg of crude to produce 1 kg of diesel.
Similarly, the amount of crude oil needed to make 1 liter of gasoline would be
e_{crude,gasoline} = 1.018 \cdot 0.4213 \cdot \frac{1}{0.57} = 0.752

or, using the US LCI module's assumed density of 1.353 liter/kg, we need 0.752 kg of crude
oil to produce 1/1.353 = 0.739 kg of gasoline. The same allocation factors are used to
transform the other unallocated flows into allocated flows (e.g., the many other inputs and
outputs listed in Figure 6-4) per unit of fuel.
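The per-unit conversion in Equation 6-2 can be sketched as follows (Python used for illustration; the function name is our own, and the numbers are the refinery values quoted above):

```python
def allocated_flow_per_unit(e_i, af_j, q_j):
    """Equation 6-2 (sketch): allocated flow per 1 unit of co-product j,
    where e_i is the unallocated flow, af_j the allocation factor for j,
    and q_j the quantity of j produced by the unit process."""
    return e_i * af_j / q_j

crude_in = 1.018  # kg crude oil input per kg of refinery output (Figure 6-4)

# kg crude per liter of diesel (0.2188 allocation factor, 0.252 l output)
print(round(allocated_flow_per_unit(crude_in, 0.2188, 0.252), 3))  # 0.884
# kg crude per liter of gasoline (0.4213 allocation factor, 0.57 l output)
print(round(allocated_flow_per_unit(crude_in, 0.4213, 0.57), 3))   # 0.752
```

Applying the same function to the other rows of Figure 6-4 yields the remaining allocated flows per unit of fuel.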
Some of the results above may be unintuitive, e.g., that you started with a process making
0.252 liters of diesel, but ended up needing 0.884 kg of crude petroleum to produce 1 liter of
diesel. Or, if the refinery has multiple products, why is more than 1 unit of crude oil (rather
than just a fraction of 1) needed to produce a unit of refined fuel?
Figure 6-6 tries to rationalize the potential sources of confusion above by showing how
much crude oil is needed to produce varying quantities and units of the 9 co-products. This
information is based on the input flow of crude oil into the unit process (1.018 kg crude / kg
refinery product), the allocation factors from Figure 6-5, and the unit conversions provided
in the US LCI data module. The results show the allocated mass of crude oil per the flows
given in the original unit process from Figure 6-4 (e.g., per 0.252 liters of diesel as above),
per the varying unitized flow of each product (e.g., per 1 liter of diesel or 1 kg of bitumen),
and with all flows converted on a per kg of co-product basis.

10 This could also be represented as adding yet another unit conversion at the end of Equation 6-2, from liters to kg of diesel.
Allocated crude oil (kg) per:

Co-product (units given in US LCI module) | Process units | Norm process units | kg product
Diesel (liters) | 0.219 / .252 l | 0.884 / l | 1.018
Liquefied petroleum gas (liters) | 0.027 / .049 l | 0.552 / l | 1.016
Gasoline (liters) | 0.421 / .57 l | 0.752 / l | 1.018
Residual fuel oil (liters) | 0.049 / .052 l | 0.962 / l | 1.019
Bitumen (kg) | 0.037 / .037 kg | 1.018 / kg | 1.018
Kerosene (liters) | 0.091 / .112 l | 0.824 / l | 1.018
Petroleum coke (kg) | 0.060 / .060 kg | 1.018 / kg | 1.018
Refinery gas (m3) | 0.045 / .061 m3 | 0.751 / m3 | 1.019
Petroleum refining coproduct, unspec. (kg) | 0.051 / .051 kg | 1.018 / kg | 1.018

Figure 6-6: Comparison of Allocated Quantities of Crude Oil for Nine Co-products of Petroleum
refining, at refinery Process, for Various Co-product Units (l = liter).

The first two result columns summarize results using the methods just demonstrated for
gasoline and diesel. The main difference between them is whether the unit basis of the
co-product is the fractional unit value given in the process data, or whether it has been
converted to a per 1 unit of co-product basis. There were three different units of product
presented (liter, kg, and m3), which may otherwise distort a consistent view of the effects of
allocation. The final column of Figure 6-6 may be surprising, as all of the co-products have
the same requirement of crude oil per kg of product (about 1.018, the original unit process
flow). If the whole point of allocation was to assign flows to the different products, why
does it appear that they all have the same allocation value? In this case, it is because a
mass-based allocation was used, so the crude oil per kg is constant (the same effect would
have been seen in Figure 6-3: the mass-based allocated flow was 0.056 liters per kg for all
produce). Since we were looking at an energy system where the default units were liters or
m3, this result was disguised. It is pervasive, though, in LCA, and is an expected result. If,
for instance, we performed an economic-based allocation, the column 'crude oil per dollar
of product' would have constant values.
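A quick numerical check of this constancy (a Python sketch; only diesel and gasoline are shown because only their liter-per-kg conversions are quoted in the text):

```python
# Allocated crude oil per liter (from Equation 6-2) converted to a per-kg basis
# using the US LCI unit conversions quoted in the text.
liters_per_kg = {"diesel": 1.153, "gasoline": 1.353}
crude_per_liter = {"diesel": 0.884, "gasoline": 0.752}  # kg crude per liter of fuel

for fuel in ("diesel", "gasoline"):
    crude_per_kg = crude_per_liter[fuel] * liters_per_kg[fuel]
    print(fuel, round(crude_per_kg, 3))  # both ~1.018 kg crude per kg of co-product
```

Both values collapse (within rounding) to the 1.018 kg of crude oil per kg of refinery output, as the final column of Figure 6-6 shows.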
Regardless, you can follow the use of allocated input and output flows for a co-product
based on data modules in the US LCI model by exploring their data and metadata in
openLCA, SimaPro, etc., using the same methods shown in the Advanced Material for
Chapter 5.


Avoiding Allocation
When allocation was introduced, we noted that the Standard says the main goal
should be to avoid it. More specifically, the Standard says that allocation should be avoided
in one of two ways, each of which has the goal of making more direct connections between
the input and output flows of a process and its products, so as to remove the need for
allocating those flows.
The first recommended alternative to allocation is to sub-divide or disaggregate the unit
process into a sufficient number of smaller sub-processes, until none of these processes have
multiple products. Thus the link can be made between inputs and outputs and a single
product for each process (and thus the system). While this solution is very attractive, it
requires being able to collect additional data for all of the new sub-processes. It may also be
a puzzling suggestion because the Standard elsewhere defines a unit process as "the smallest
element in the inventory analysis for which input and output data are quantified." If a unit
process can be broken into sub-processes, then it was not the smallest possible element in
the first place. However, this reminds the analyst to create processes as distinct and small in
boundary as possible given access to data, so as to avoid allocation issues. In this case, it
means trying to collect data at sufficient resolution as to be able to have processes and
systems with singular product outputs.
Figure 6-7 illustrates a simple case of disaggregation, where a process similar to Figure 6-1
with multiple product outputs is subdivided into multiple processes (in this case, 1 and 2),
with distinct Products A and B. In reality, the disaggregated system may need to have more
than 2 processes, and some processes may have only intermediate flows leading to an
eventual product output. Generally, creating process models at a lower level (alternatively
called a higher level of resolution) is no different than creating a higher-level process model;
it simply requires more effort and data. In short, the goal of this 'divide and conquer' style
approach is to ensure the result is a set of unit processes with single outputs and no co-products.


Figure 6-7: Disaggregation into Sub-processes

There are other possible ways of avoiding allocation without disaggregating
processes, such as by drawing different explicit boundaries around the system, so that
co-products are not included. For example, an overall oil refinery process has hundreds of
processes and many products. If only a particular product is of interest, then it may be
possible to draw a smaller and more explicit boundary around the inputs, outputs, and
processes in a refinery needed just for that product (and that has no connection to other
products). Thus, allocation may be avoided. Such an exercise might be impossible in a
complex system such as a refinery, since simultaneous production of outputs is common.
The second alternative to allocation recommended in the Standard is system expansion, or,
"expanding the product system to include the additional functions related to the coproducts." The Standard offers little detail on this method, so practitioners and researchers
have developed various interpretations and demonstrations of system expansion, such as
Weidema (2000). System expansion leverages the facts that systems with multiple product
outputs are typically multifunctional, and that LCA requires definition and comparison of
systems on a functional unit basis. System expansion adds production of outputs to product
systems so that they can be compared on the basis of having equivalent function.
Going back to the earlier example of a system producing heat and electric power, each of
these products provides a different function: the ability to provide warmth and the ability
to provide power. In a hypothetical analysis based only on a functional unit of electricity,
comparing a CHP system with a process producing only electricity would be unequal. Figure
6-8 generalizes the unequal comparison of the two different systems, one with a single
product and function and one with two products and functions. Expansions would be
similar for more than two products.

Figure 6-8: Initial Comparison of Multifunctional Product Systems (Source: Tillman 1994)

Note that while the two systems provide the identical Function 1 (providing electricity in our
example), the technological process behind providing that function is not generally assumed
to be identical (one is Product A and the other is Product C). If the functional unit of a study
is, for instance, 'moving a truck 1 km', the product to support that function might be
gasoline or diesel fuel. Likewise, the product could be the same, but the technology behind
making it could differ. In such an example, the product could be electricity, but generated
using renewable or non-renewable generation technology.
In a comparative LCA, consideration of functions is important. Product systems with a
different number of functions can be analyzed by adding functions and products across
systems as needed until the systems are equal (i.e., by 'expanding the system boundary' for
systems that do not have enough function outputs). Considering Function 1 to be providing
power and Function 2 to be providing heat, for instance, system expansion allows the
product systems in Figure 6-8 to be compared by adding processes representing the
production of heat to the system (making Product C). Figure 6-9 shows the result of such a
system expansion.


Figure 6-9: System Expansion by Adding Processes (Source: Tillman 1994)

The system expansion focuses on ensuring that various systems provide the same functions.
However, the additional functions may be provided via various products, and thus a variety
of alternative technologies or production processes. When considering how to model the
additional function in the expanded process, it may be done by modeling the identical
product as in the multifunctional process (e.g., natural gas-fired electricity), or with an
alternative product and/or technology providing the same function (e.g., gasoline or diesel,
renewable or non-renewable electricity). Of course, using different production technologies
in system expansion may lead to significantly different results, but the reality of markets
and practice may support this case. Alternative technology assumptions for the expanded
system should be considered in a sensitivity analysis.
Alternative technologies may be hard to determine or justify. They should be reasonable,
typical in a market, and not overly bias the results. For example, in expanding a system,
electricity produced as a co-product by burning lignin in a biofuel production plant may be
alternatively produced by US average electricity. Likewise, solar cells may be a poor choice
for expanding a system in a comparison with a fossil-based process that produces electricity.
The general example presented in Figure 6-9 is straightforward, and system expansion is not
necessarily more complex than disaggregation; in fact, it can often be quite simple. One
should not view either disaggregation or system expansion as the preferred alternative to
allocation, and likewise not generalize one or the other as being more difficult. In a
particular study, performing disaggregation could be time and/or data prohibitive and thus
system expansion is the only alternative to allocation. On the other hand, system expansion
may be hard to motivate or justify given challenges in identifying alternative technologies for
the expanded system, which leads to spending time and effort in disaggregating the
processes. The refinery discussed earlier is a useful example. The high-level
refinery process data module shown is highly aggregated (and as a result it has many product
outputs). Far more detailed process models of refineries have been developed and could be
used if needed for a study in lieu of using the allocated refinery module presented. While
this may take substantial effort, it avoids the need for the system expansion approach that
would require creating larger comparative product systems with added production for many
more refinery outputs.
It is hopefully intuitive that the system expansion math in Figure 6-9
that 'adds to process 2' would lead to the same relative result as 'subtracting' (i.e., crediting)
the same process data from the results of Process 1. This equivalent method is shown in
Figure 6-10, and it is referred to as the avoided burden approach. This is still considered
system expansion because the boundary was expanded for one of the systems (but the
numerical results are credited instead of added).

Figure 6-10: System Expansion of a Multifunction Process via Subtraction

Either the allocation or system expansion method may cause other issues in your
study, since modifying the original model may mean your study scope changes (and
potentially your goal as well). In the heat and power example, a study originally seeking to
compare only the effects of 'generating 1 kWh of electricity' in two different systems (i.e.,
Product A vs. Product C) would need to have its study design parameters adjusted to
consider systems providing power and an equivalent amount of heat (via Product B), e.g.,
'generating 1 kWh of electricity and producing 100 MJ of heat'.
For the heat and power example, the US LCI database has various data to support a system
expansion effort. Figure 6-11 shows abridged US LCI data for CHP.


Flow | Type | Unit | Amount

Inputs
Wood, NE-NC hardwood | ProductFlow | kg | 4E-02
Natural gas, combusted in industrial boiler | ProductFlow | m3 | 0.004

Outputs
Carbon dioxide, fossil | air/unspecified | kg | 8E-03
Carbon dioxide, biogenic | air/unspecified | kg | 7E-02
Heat, onsite boiler, hardwood mill average, NE-NC | ProductFlow | MJ | 1
Electricity, onsite boiler, hardwood mill average, NE-NC | ProductFlow | kWh | 5.0E-05

Figure 6-11: Process Data for Hardwood Combined Heat and Power (CHP)
(abridged, adapted from various US LCI database modules)

The US LCI database also has process data for gas-fired electricity, as in Figure 6-12.
Flow | Type | Unit | Amount

Inputs
Natural gas, processed, at plant | ProductFlow | m3 | 0.3

Outputs
Carbon dioxide, fossil | air/unspecified | kg | 0.6
Electricity, natural gas, at power plant | ProductFlow | kWh | 1

Figure 6-12: Process Data for Gas-Fired Electricity Generation
(abridged, adapted from US LCI database module Electricity, natural gas, at power plant)

Biogenic emissions occur as a result of burning or decomposing bio-based products. The
'biogenic' CO2 emissions in the CHP process data arise from burning the wood, and the
fossil CO2 emissions come from burning the gas. The carbon in the wood comes during its
growth cycle via natural uptake of carbon (in this case through photosynthesis). In lieu of
crediting the natural product for this same uptake of carbon, which would lead to a net of
zero emissions, the biogenic emissions of CO2 are considered to be neutral. As such,
biogenic carbon emissions are accounted for but not generally added to fossil or other
sources.
Example 6-1 demonstrates how to use the US LCI data for a system expansion involving
CHP.


Example 6-1: Compare the direct CO2 emissions of the electricity from a hardwood mill
combined heat and power (CHP) process with US natural gas-fired electricity.
Answer: As motivated above, the requested comparison is problematic, as the CHP process
has two outputs with two functions and the gas-fired process has only one output with one
function. This is a good case for using system expansion: alternative process data for producing
heat is needed to add to the existing gas-fired electricity process, as in Figure 6-9. Using an alternative
technology assumption, we can say that heat is often provided by burning natural gas, e.g., in a
furnace, using abridged US LCI data in Figure 6-13.
Flow | Type | Unit | Amount

Inputs
Natural gas, processed, at plant | ProductFlow | m3 |

Outputs
Carbon dioxide, fossil | air/unspecified | kg | 2
Heat, natural gas, in boiler | ProductFlow | MJ | 30

Figure 6-13: Process Data for Generating Heat (from US LCI Heat, natural gas, in boiler)

Using the hardwood mill boiler as the baseline, our goal would be to compare the two systems
based on a common functional unit of 'producing 1 MJ of heat and 5E-05 kWh of electricity' (or,
alternatively, to the scaled functional unit 'producing 1 kWh of electricity and 20,000 MJ of
heat').
Tracking the CO2 emissions, the CHP process emits 78 grams of CO2 for the 1 MJ heat/5E-05
kWh electricity functional unit, of which 70 g are biogenic emissions and 8 g are fossil-based.
For the system expanded electricity and heat processes, generating 5E-05 kWh of electricity
would emit 0.6 kg CO2 / kWh * 5E-05 kWh = 0.03 g CO2, and producing 1 MJ of heat would
emit 2 kg/30 MJ = 67 g CO2. However, all of the CO2 in this expanded system is fossil-based.
In fact, the systems are roughly comparable in total CO2 emissions, but the CHP unit using
wood input has 59 g, or about 90%, less fossil CO2 emissions (8 g versus 67.03 g).
Instead of adding the heat process to the electricity process, we could credit the CHP process
for avoided production of heat, as in Figure 6-10. The 2 kg/30 MJ = 67 g of fossil CO2 from
Figure 6-13 would be subtracted from the CHP system. In this case, it means the CHP system
has 8 g - 67 g = -59 g of fossil CO2 emissions per 5E-05 kWh of electricity! The relative
difference between the systems is the same (59 g less fossil CO2).
It is worth noting that by using the alternative process data for heat production, the CHP
system effectively has the same emissions factor (2 kg fossil CO2 / 30 MJ) as the alternate
process. While we have not 'allocated' the CHP system's emissions, we have assigned the same
share of emissions as in the alternate process.
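The arithmetic in Example 6-1 can be reproduced in a few lines. This sketch uses only the emission factors quoted in the example (0.6 kg CO2/kWh for gas-fired electricity, 2 kg CO2 per 30 MJ of boiler heat); small differences from the rounded figures in the text arise because the text rounds 2 kg/30 MJ to 67 g.

```python
# Fossil CO2 comparison from Example 6-1.
# Functional unit: 1 MJ of heat plus 5e-5 kWh of electricity.
chp_fossil_g = 8.0              # g fossil CO2 from the CHP process (plus 70 g biogenic)
ef_elec_kg_per_kwh = 0.6        # gas-fired electricity emission factor
ef_heat_kg_per_mj = 2.0 / 30.0  # natural gas boiler emission factor

# System expansion: gas-fired electricity plus a gas boiler for the heat.
expanded_g = (ef_elec_kg_per_kwh * 5e-5 + ef_heat_kg_per_mj * 1.0) * 1000

# Avoided-production variant: credit the CHP process for the heat it provides.
credited_g = chp_fossil_g - ef_heat_kg_per_mj * 1.0 * 1000

print(round(expanded_g, 1))  # roughly 67 g for the expanded system
print(round(credited_g, 1))  # roughly -59 g for the credited CHP system
```

Either way the comparison is framed, the CHP system shows roughly 59 g less fossil CO2 than the all-gas alternative.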


The 'negative emissions' result of the subtraction method, as shown in Example 6-1, has
led some stakeholders to suggest avoiding this method, but again, the relative difference is
the important result, regardless of the sign convention used. A negative result is an odd
outcome, especially as compared to allocation, which only leads to positive flows. But the
Standard suggests avoiding allocation.
If, for the sake of discussion, an energy-based allocation was done instead in Example 6-1,
then the CHP system produces 1 MJ of heat and 0.00018 MJ of electricity (at 3.6 MJ / kWh)
for a total of 1.00018 MJ. Thus the fossil CO2 would have been allocated 99.98% (1
MJ/1.00018 MJ) to the 1 MJ heat and 0.02% (0.00018 MJ/1.00018 MJ) to the 5E-05 kWh of
electricity produced. Comparing the allocated CHP process and its explicit estimate of
emissions for electricity to the natural-gas fired electricity process would be 0.0016 g CO2
versus 0.03 g per 5E-05 kWh, or 95% less, a larger difference than before but not
qualitatively different.
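The allocation shares above can be verified with a short calculation; all of the numbers come from the passage, and the small gap versus the quoted 0.0016 g arises from rounding the electricity share up to 0.02%.

```python
# Energy-based allocation of the CHP process's 8 g of fossil CO2.
heat_mj = 1.0
elec_mj = 5e-5 * 3.6          # 5e-5 kWh at 3.6 MJ/kWh = 0.00018 MJ
total_mj = heat_mj + elec_mj  # 1.00018 MJ

share_elec = elec_mj / total_mj  # about 0.018% of flows go to electricity
co2_elec_g = 8.0 * share_elec    # about 0.0014 g allocated to electricity

# Gas-fired electricity emits 0.6 kg/kWh * 5e-5 kWh = 0.03 g for comparison.
reduction = 1 - co2_elec_g / 0.03  # about 95% less fossil CO2
print(share_elec, co2_elec_g, reduction)
```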
The CHP example raises the question of which alternate production process is used in
the expansion, and additionally whether the alternative production process chosen is
representative of the average, some best or worst case, or merely based on the only data
point available. While there is no explicit suggestion of which process should be chosen, the
choice should be sufficiently documented. Further discussion on using average or other
processes follows in the next section.

Expanding Systems in the Broader Context


So far, in discussing the additional production added via system expansion to make
product systems equivalent, our examples have addressed what is, on average, an
appropriate addition. This is because only an attributional, or descriptive, approach has
been discussed for LCA studies. Attributional LCAs seek to
determine the effects now, or in the past, which inevitably means that our concerns are
restricted to average effects. However, emerging practice and need in LCA often seeks to
consider the consequences of product systems or changes to them. In consequential LCA
studies, marginal, instead of average, effects are considered (Finnveden et al. 2009).
Marginal effects are those effects that happen 'at the margin', and in economics refer to
effects associated with the next additional unit of production. Furthermore, consequential
analyses seek to determine what would change or need to change given the influence of
changing product systems on markets. Using the CHP example, heat is currently, on
average, produced by burning average domestic natural gas. But on the margin, it is likely
that such gas may be shale gas from unconventional wells. In the future, perhaps some other
alternative fuel would be the marginal source. Certain resources, products, and flows may be
quite scarce, and a significant demand for one resource in a product system could, on the
margin, lead to an alternative that is radically different. Note that the average and marginal
technologies used in an analysis could be identical (they do not need to be different), but
when they differ, the effects associated with them can differ substantially.
Consequential studies often use separate economic models to aid in assessing changes. For
example, increased exploration and production of unconventional natural gas (e.g., shale gas)
has been claimed in attributional LCA studies to lead to 50% reductions in the carbon
intensity of electricity generation, since natural gas, which emits about 50% less CO2 per
kWh, would on average replace coal-fired electricity generation. Some of these studies were
done when coal was the source of 50% of our electricity generation. But on the margin,
increasing the supply of low-priced natural gas has the prospect of replacing not just coal-fired
generation but also electricity from nuclear power plants, which have very low carbon
intensities. Using such marginal assumptions, and an economically driven electricity dispatch
model, Venkatesh et al. (2012) suggest that a consequence of cheap natural gas on regional
electricity CO2 emissions could be reductions of only 7-15%, far lower than the 50%
expected on average.
Another consequential effect seen in LCA studies relates to how land is used and managed.
Historically, studies related to activities such as biofuel production modeled only the effects
of plowing and managing existing cropland for corn or other crops (known as direct land
use). That means that the studies assumed that production would, on average, occur in
places where the same, or similar, crop was already being grown, and thus, the impacts of
continuing to do so are relatively modest. Recent studies (Searchinger et al. 2008, Fargione et al. 2008)
highlighted the fact that increased use of land for crops used to make bio-based fuels in one
part of the world can lead to conversion of other types of land, e.g., forests, into cropland in
other parts of the world (a phenomenon called indirect land use change). In such cases,
the carbon emissions and other effects of converting land into cropland are far higher. This
consequence of increasing use of cropland for biofuels has been quantitatively estimated and
added to the other LCA effects, and leads to results substantially different than those that do
not consider this effect.
In LCA, considering the market-based effects of a substitution typically leads to a discussion
of displacement of products. Displacement occurs when the production of a co-product of
a system displaces (offsets) production of another product in the market. The quantitative
effect of displacement is that the flows from what would occur when producing the
alternative product are 'credited' to the main product system because it is assumed that the
displacement results in less production of the alternative product. A traditional example of
displacement is for a system where electricity is produced as a co-product. In such
situations, usually the electricity co-product is assumed to displace alternative production of
electricity, typically assumed to be grid electricity. The effect of displacement is thus
crediting (subtracting from) the inventory of the product system with the inventory of an
equivalent amount of grid electricity. More practically, a co-product will often displace a
different product. An often-stated displacement assumption in the biofuels domain is that
the ethanol co-product dried distillers grains with solubles (DDGS) displaces animal feed.
This is because DDGS are assumed to be a viable alternative food for livestock, and so the
inventory flows of producing a functionally equivalent amount of animal feed are credited to the
product system. Here the displacement is not one-to-one; instead, DDGS typically displaces
only about 60% of animal feed on a mass basis, given differences in protein content.
Production of heat may not be functionally equivalent either, and a displacement ratio may
be needed to make a relevant comparison in those cases. Both are examples of the
displacement ratio (or displacement efficiency) that would be expected, which may be
greater than, less than, or equal to one based on cost, quality, or other factors.
Displacement is a consequence of production and market availability of a co-product;
however, even attributional LCAs can consider effects from displacement. Consideration of
the price and quantity differences resulting from displacement would be an appropriate
addition for a consequential LCA.
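The quantitative effect of a displacement credit is a simple product of the co-product quantity, the displacement ratio, and the displaced product's inventory. In this sketch, only the 60% mass-basis ratio for DDGS comes from the text; the DDGS quantity and the feed emission factor are hypothetical values for illustration.

```python
# Crediting an ethanol system for animal feed displaced by its DDGS co-product.
ddgs_kg = 100.0           # hypothetical DDGS co-product output
displacement_ratio = 0.6  # kg of feed displaced per kg of DDGS (mass basis)
feed_co2_kg_per_kg = 0.5  # hypothetical emission factor for feed production

# The credit subtracted from the ethanol system's inventory:
credit_kg_co2 = ddgs_kg * displacement_ratio * feed_co2_kg_per_kg
print(credit_kg_co2)  # 30.0
```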
These "ripple-through" market considerations are at the heart of marginal analysis and
economics, and thus consequential LCA. Whether a study is attributional or consequential
in nature should be specified along with other parameters of a study. Of course, a study
could be interested in both average and marginal effects, so as to consider the relative
implications of the introduction of a product system to the market. Considering
consequences or marginal effects is not exclusively the domain of consequential LCA. It
could be useful in an attributionally-scoped LCA, e.g., by assuming an offset associated with
marginal electricity production, or one that requires system expansion to consider alternative
and substitute production of both average and marginal products. In such a study, both the
average and marginal results could be presented.
The core of this textbook will continue to be aligned with attributional analysis and methods,
and relevant notes and guides towards consequential methods will be included where
applicable. Additional sources on consequential LCA and differences from attributional
LCA are provided in the references section at the end of this chapter.

Sensitivity Analysis for Allocation and System Expansion


Previous chapters have defined sensitivity analysis and discussed its place within the
Standard. While the Standard clearly states that allocation should be avoided in favor of
system expansion, as consistent with other LCA study issues, a primary concern is whether a
particular assumption has a significant effect on the study results. In this case, we are
concerned as to whether the qualitative and quantitative conclusions change based on our
choice of allocation method, and/or whether we should perform system expansion instead
of allocation. The study results should explicitly note whether the specific choice of
allocation method, system expansion, etc. has a significant effect on the results (i.e., whether
other choices would have led to quantitatively and qualitatively different results).
To perform a sensitivity analysis of allocation methods, the results are found using each of
the alternative allocations. It is common to see a table or graph comparing the effect at the
process or whole product system level. Figure 6-14 shows a graphical comparison of the
three allocation methods for the three products in the fruit delivery truck example at the
beginning of the chapter.

[Bar chart: allocated flow in liters (0 to 0.12) for apples, watermelon, and lettuce under the item, mass, and economic allocation methods.]
Figure 6-14: Sensitivity Comparison of Allocation Methods

As discussed earlier, but perhaps made more visually clear above, the choice of allocation
method has a fairly substantial effect on how much fuel is allocated to each item. The
item-based method gives exactly the same allocation to each, while the mass- and
economic-based methods allocate far more to the watermelon than to the others. In such
cases, the choice of allocation has a significant effect on the overall results, and thus the
implications should be noted in the study.
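A comparison like Figure 6-14 can be generated by computing each allocation basis from the same underlying data. The masses and prices below are placeholder values, not the chapter's actual fruit-truck numbers; the point is the pattern of equal item shares versus the watermelon dominating the mass and economic bases.

```python
# Allocating a shared fuel flow by item count, mass, and economic value.
fuel_liters = 0.18  # hypothetical total fuel use to be allocated

# (count, mass in kg, value in $) per delivered item -- assumed values
items = {
    "apples":     (1, 0.2, 0.50),
    "watermelon": (1, 5.0, 4.00),
    "lettuce":    (1, 0.5, 1.50),
}

def allocate(basis):
    """Share out the fuel in proportion to the chosen basis column."""
    total = sum(props[basis] for props in items.values())
    return {name: fuel_liters * props[basis] / total
            for name, props in items.items()}

by_item, by_mass, by_value = allocate(0), allocate(1), allocate(2)
print(by_item)   # equal shares per item
print(by_mass)   # watermelon receives most of the fuel
print(by_value)  # economic basis also favors watermelon
```

Whatever the basis, the allocated shares always sum to the original unallocated flow, which is a useful check on any allocation calculation.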
Beyond the example above, other life cycle studies have performed similar comparisons.
Jaramillo et al (2009) studied the life cycle carbon emissions of injecting CO2 from power
plants underground to enhance oil recovery operations. The study showed the intermediate
carbon emission factors for each of the products (oil and electricity) depending on various
allocation and/or system expansion assumptions. Figure 6-15 shows the various emission
factors for electricity. Again, such an analysis suggests that the choice of allocation method
and/or the assumptions made in support of system expansion significantly affect the results.

Figure 6-15: Sensitivity Analysis of Allocation Methods and System Expansion for Electricity Related
Emissions from Enhanced Oil Recovery (Source: Jaramillo et al 2009)

These examples demonstrate ways in which studies can be documented in order to support
allocation or system expansion choices, and that the choices made can lead to results that are
significantly different.

Chapter Summary
Managing the data and assumptions associated with multifunctional systems is a general
challenge in LCA. While allocation is an often-used method, the Standard recommends
avoiding it by using system expansion. Allocation is a straightforward quantitative exercise
that partitions flows across products based on established relationships between them.
System expansion avoids allocation by creatively expanding the boundaries of analysis to
include alternate production and functions. As with other elements of LCA, choices in
performing either method need to be sufficiently documented, and where relevant,
supported by sensitivity analysis. Managing such systems well ensures studies will be of high
quality and can be more readily reviewed and compared to other studies.


References for this Chapter


Fargione, J., Hill, J., Tilman, D., Polasky, S., Hawthorne, P., 2008. Land Clearing and the
Biofuel Carbon Debt. Science 319, 1235.
Finnveden, G., Hauschild, M.Z., Ekvall, T., Guinée, J., Heijungs, R., Hellweg, S., Koehler, A.,
Pennington, D., and Suh, S., Recent developments in Life Cycle Assessment, Journal of
Environmental Management, 2009, 91(1), pp. 1-21.
Goldberg, Chris, license - https://creativecommons.org/licenses/by-nc/2.0/legalcode
Searchinger, T., Heimlich, R., Houghton, R.A., Dong, F., Elobeid, A., Fabiosa, J., Tokgoz,
S., Hayes, D., Yu, T.H., 2008. Use of US Croplands for Biofuels Increases Greenhouse
Gases Through Emissions from Land-Use Change. Science 319, 1238.
Tillman, Anne-Marie, Ekvall, Tomas, Baumann, Henrikke, and Rydberg, Tomas, Choice of
system boundaries in life cycle assessment, Journal of Cleaner Production, Vol. 2, Issue 1,
pp. 21-29, 1994.
Venkatesh, Aranya; Jaramillo, Paulina; Griffin, W.; Matthews, H. S., Implications of
changing natural gas prices in the United States electricity sector for SO2, NOX and life
cycle GHG emissions, Environmental Research Letters, 7, 034018, 2012
Weidema, Bo, Avoiding Co-Product Allocation in Life-Cycle Assessment, Journal of Industrial
Ecology, Volume 4, Issue 3, pp. 11-33, July 2000.

Further Reading
Wang, M.; Lee, H.; Molburg, J. Allocation of energy use in petroleum refineries to petroleum
products. The International Journal of Life Cycle Assessment 2004, 9, 34-44.


Questions for Chapter 6


1. Using the fruit truck allocation example from the chapter as a basis, derive the allocated
flows for each of the following unallocated flows not shown in the inventory for the fruit
delivery truck process for each of the three allocations (per item, by mass, and by economic
value):

10 kg fossil CO2 emissions to air

MORE HERE

2. Redo Figure 6-3 if


2. Redo refinery allocation but on XX basis (volume? Energy?) give parameters needed.
Practice allocation calculations using mass, energy content, and/or price of fuels. (Have
students determine whether selecting the lower or higher heating value has an effect on
allocation results if used consistently for all fuels?)
Bitumen from refinery?
Tweak sys expansion example to do.. make it consequential, etc?
Qualitative discuss potential products to be displaced..
FedEX delivery truck?
Advanced Material
Allocation in the software programs?


Chapter 7: TBA

Chapter 7 Another chapter TBA?


Other topics for this chapter or leave them for later?
Add stuff in chapter 9 on doing allocation that way (matrix math)?
Boundaries
Circularity
Cut-off criteria? Brief mention before going in to more depth in Chapter 9?

Inputs in the Product Life-Cycle Inventory Analysis


The decision on which raw/intermediate material requirements to include in a life-cycle
inventory is complex, but several options are available:
- Incorporate all requirements, no matter how minor, on the assumption that it is not possible a priori to decide to exclude anything.
- Within the defined scope of the study, exclude inputs of less than a predetermined and clearly stated threshold.
- Within the defined scope of the study, exclude inputs determined likely to be negligible, relative to the intended use of the information, on the basis of a sensitivity analysis.
- Within the defined scope, consistently exclude certain classes or types of inputs, such as capital equipment replacement.


Chapter 8: LCA Screening via Economic Input-Output Models

Classroom Exercise To Motivate Input-Output and Matrix Based Methods


The next two chapters motivate quantitatively-driven methods to support life cycle studies.
Both input-output and process matrix methods rely on linear algebra and matrix-based
methods to assist computational efforts. Armed with the introduction to life cycles and process
flow diagram approaches, the reader is prepared to learn about these matrix-based methods.
A team of researchers at Carnegie Mellon University has created a game-like simulation to
assist with this effort. It involves small groups simulating the production of four goods, each of
which has a small number of input and output flows. However, the four goods have
interdependent flows.
A key learning objective of the simulation is to realize how
interdependent flows lead to process flow diagrams which are dependent on each other, and
how addressing that dependency requires additional demand and estimation of effects
upstream. Through the exercise, the underpinnings of the matrix approach are revealed. In the
end, using matrices to solve such problems is shown to be much more straightforward, and
avoids various potential math errors.
This exercise has been designed and tested with audiences ranging from middle school students
through corporate executives, all of whom are in the process of learning about LCA. Ideal group
sizes are about 4 persons, although slightly smaller or larger groups work well. We strongly
suggest that this exercise be done, ideally in a classroom or small group setting,
before proceeding with subsequent chapters.
Post-assessments have shown a
significant increase in understanding of these topics amongst those exposed to this simulation.
Photos of kids doing it?

The whole exercise, including the introduction and motivation, as well as blank copies of the
various purchase order and tracking forms, was previously published (Hawkins 2009) and has
been made freely available through the generous support of the Journal of Industrial Ecology.
A direct link is available via the textbook website, under E-resources for Chapter 8.


Chapter 8: LCA Screening via Economic Input-Output Models
This chapter introduces the construction and use of economic input-output based LCA
models and the powerful screening capabilities they provide. As described in Chapter 5,
process-based LCA models are "bottom up" in type, and are defined by the scope laid out by
the analyst. Economic input-output LCA models can be thought of as being a "top-down"
type because they give a holistic estimate of resources needed for a product across the
economy. We recommend our separately published classroom simulation on input-output
LCA models (Hawkins 2009). This simulation exercise has been developed explicitly to
make these kinds of models understandable and to help you appreciate their strengths and
limitations. We will also describe the mathematical structure of the EIO-LCA model. Those
intending to use economic input-output tables in their own LCA studies are highly
encouraged to read the Advanced Material at the end of this chapter.
Learning Objectives for the Chapter
At the end of this chapter, you should be able to:
1. Describe how economic sector data are organized into input-output tables
2. Compute direct, indirect, and total effects using an input-output model
3. Assess how a process-based model system boundary might be adjusted by using
an input-output based screening tool.
Input-Output Tables and Models
In the 1930s, economist Wassily Leontief developed an
economic input-output table of the United States
economy and a system of equations to use them in models
(Leontief, 1986). His model represented the various inputs
required to produce a unit of output in each economic
sector based on surveyed census data of purchases and
sales of industries. By assembling a table describing all of
the major economic sectors, he was able to trace all of the
economic purchases needed to produce outputs in each
sector, all the way back to the beginning when raw
materials are extracted. The result was a comprehensive
model of the U.S. economy. For this work, Leontief
received the Nobel Prize in Economics in 1973.
An economic input-output (EIO, or just IO) table divides

What are economic sectors?

Sectors are groups of companies with similar products. Given the model, there may be a
single sector for all manufactured goods, or many separate sectors for everything from
electricity (large economic output) to tortillas (relatively small output). There are various
national and global systems for categorizing sectors, leading to differences in the number of
sectors used and reported in the IO tables.

an entire economy into distinct economic sectors. The tables can represent total sales from
one sector to others, purchases from one sector, or the amount of purchases from one
sector to produce a dollar of output.
Input-output models were popular in the mid-20th century for high-level economic planning
purposes. They were used so that governments could better understand the requirements
of, and plan for, activities like war planning, procurement, effects of disarmament, or
economic requirements for building infrastructure such as roads. As will be seen below,
economic input-output models are vital for developing national economic accounts, and can
also be used for environmental life cycle assessment.
Vectors, matrices, and notation
A vector is a one-dimensional set of values (called elements) referenced by an index. If a vector
is named X then X1 is the first element, X2 is the second element, ... , and Xn is the last element.
If implemented in a spreadsheet, a vector could be arranged as elements in a row or in a
column. In this book, we use upper case italicized letters to represent vectors. Indexes of rows
and columns are italicized. Individual elements (a row/column entry) are upper case and italic.
A matrix is a two-dimensional array of values referenced by both a row and column index. (The
plural of matrix is matrices.) In a spreadsheet, a matrix would have rows and columns. One of
the most popular matrices is an identity matrix (I), which has the number one for all elements
on the diagonal (where the row and column indices are equal), and zeroes in all other cells. We
use upper case bold letters to represent matrices. A two-dimensional identity matrix is defined
as having these elements:

I = | 1  0 |
    | 0  1 |

Note that in equations multiplying vectors and matrices together, the multiplication sign is omitted.

Figure 8-1 shows the structure of an IO transactions table. Each entry Zij represents the
input to column sector j's production process from row sector i. The final column, total
output of each sector X, has n elements, each the sum of the sector's sales as inputs to the
other sectors (a.k.a. the intermediate output, Oi) plus the output supplied by the sector
directly to the final demand Yi of consumers. To help explain these different components, the
intermediate outputs O are being sold to other producers to make other goods, while final
demand is sales to users of the product as is. For example, a tire manufacturer might sell
some of its output as intermediate product to an automobile producer for new cars and as
final product to consumers as replacement tires. For each of the n columns of the
transactions table, the column sum (intermediate inputs plus value added) represents the total
inputs X to each sector. The two X values for each sector are equal. A transactions table is like a
spreadsheet of all purchases, thus values for a large economy could be in billions or trillions
of currency units.
                          Input to sectors                  Intermediate   Final      Total
                          1      2      3     ...    n      output O       demand Y   output X
Output from sectors:  1   Z11    Z12    Z13   ...    Z1n    O1             Y1         X1
                      2   Z21    Z22    Z23   ...    Z2n    O2             Y2         X2
                      3   Z31    Z32    Z33   ...    Z3n    O3             Y3         X3
                      n   Zn1    Zn2    Zn3   ...    Znn    On             Yn         Xn
Intermediate input I      I1     I2     I3    ...    In
Value added V             V1     V2     V3    ...    Vn                    GDP
Total output X            X1     X2     X3    ...    Xn

Figure 8-1: Example Structure of an Economic Input-Output Transactions Table


Notes: Matrix entries Zij are the input to economic sector j from sector i. Total (row) output for each sector i,
Xi, is the sum of intermediate outputs used by other sectors, Oi, and final demand by consumers. Total
(column) output can also be defined as the sum of intermediate input purchases and value added.

Intermediate input, I, in an IO table is the sum of the inputs coming from other sectors (and
is distinct from the identity matrix I). Value Added, V, is defined by economists as the
increase in value as a result of a process. In an IO model, it is the increase associated with
taking inputs valued in total at I and making output valued at X. Beyond ensuring
consistency between total inputs and total output, value added captures real aspects of
industrial production such as direct labor expenses, profits, and taxes. While not shown here,
some IO frameworks have a sector representing household activities.
The typical process of making a transactions table involves acquiring data on transactions of
an economy through periodic surveys of companies in the various sectors to assess how
much economic output they are producing, and which other companies (and from which
sectors) they buy from. As you might imagine, these data collection activities can be very
time and resource intensive. The methods involve surveying only a sample of companies
rather than every company. As with methods used for counting a population,
additional statistical analyses are done to check the results and to ensure representative totals
have been estimated. A fundamental concern thus relates to deciding how many sectors to
divide the economy into. With fairly little effort one could develop a very coarse model of an
economy, e.g., with 10 sectors where agriculture, mining, manufacturing, etc. represent
various very aggregated sectors of activity. But such models have very low resolution and
answer only a limited number of questions. Thus, all parties have incentive to invest
resources in generating tables with higher numbers of sectors so that more detailed analyses
are possible.
Since the effort needed to make a table is measured in person-years, often the highest
resolution tables (greater number of sectors) of an economy are not made annually. The fact
that economic production does not change quickly, i.e., the production recipe for the sectors
does not change much from year to year, further supports only periodic need for detailed
tables. There are typically annual and benchmark tables, where lower resolution (fewer
sectors) tables are made annually and higher resolution (more sectors) benchmark tables are
made less frequently, such as every 5 years as done in the U.S.
Economic input-output models are developed for
all major countries, usually by government agencies
such as the US Department of Commerce's Bureau
of Economic Analysis (BEA). Their primary use is
to help develop national accounts used to estimate
economic data such as the Gross Domestic
Product (GDP). Most nations routinely develop
input-output models with 50-100 sectors, although
few are as detailed as the current benchmark year
2002 428-sector model of the United States. (Given
the processing time required, the IO tables
reporting 2002 values were released in 2007. Tables
with data collected in 2007 were published in 2013.)
A recurring criticism of using EIO-based models is
that they rely on relatively old data due to these lags
in release of the economic data. However, as
shown in Figure 5-14, available process data tends
to be fairly old as well. Not all IO tables are made
by government employees. In various developing
countries, where expertise in government agencies
may not exist to do such work, these same activities could be done by other parties like
academic researchers in the home country or abroad.
Gross Domestic Product (GDP) is an indicator of output of an economy, measured by the
sum of final demands or value added across sectors. Consider the alternative: if final
demands and intermediate outputs were both components of the output of the economy,
then much of that output would be "double counted". Extending the example above, such
an economic measure would count both the value of production of intermediate tires in new
cars as well as the value of the new cars that came with those tires. Such an outcome would
be undesirable, thus only final demands are counted.

For calculation purposes, it is helpful to normalize the IO table to represent the proportional
input from each sector for a single dollar of output. This table is calculated by dividing each
Zij entry by the total (column) output of that sector, Xj. We denote the resulting table - with
all entries between zero and one - as matrix A showing the requirements of other sectors
directly required to produce a dollar of output for each sector. When done in this way, A is
called the direct requirements table (or matrix). It is called "direct" because these
purchases happen at the highest level of decision making, e.g., the direct purchases needed
to produce automobiles include windshields, tires, and engines.

Chapter 8: LCA Screening via Economic Input-Output Models


Example 8-1 illustrates the transformation of an IO table into its corresponding A matrix.
An economic input-output model is linear, so the effects of a $1,000 purchase from a sector
will be exactly ten times greater than the effects of a $100 purchase from the same sector.

Example 8-1: In this example, we will use the methods defined above to develop an A matrix.
Assume a transactions table for a simple 2-sector economy (values in billions), where rows
show sales from each sector (plus value added, V) and columns show purchases by each
sector (plus final demand, Y, and total output, X):

            1       2       Y       X
1          150     500     350    1000
2          200     100    1700    2000
V          650    1400
X         1000    2000

Question:
What is the direct requirements matrix (A) for this economy?

Answer:
We use the Zij/Xj formulation as described above. For example, the 150 and 200 values in
column 1 are divided by 1000 (the X value in column 1) and the 500 and 100 are normalized by 2000. Thus:

A = | 0.15  0.25 |
    | 0.20  0.05 |
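The column normalization in Example 8-1 is easy to reproduce in a few lines of pure Python. This is a minimal sketch; the 2×2 values are taken from the example above and the variable names are illustrative:

```python
# Build the direct requirements matrix A from a transactions table Z:
# each column j of Z (purchases by sector j) is divided by the total
# output X[j] of that sector. Values in billions, from Example 8-1.

Z = [[150, 500],   # row i holds sales from sector i to sectors 1 and 2
     [200, 100]]
X = [1000, 2000]   # total output of each purchasing (column) sector

A = [[Z[i][j] / X[j] for j in range(2)] for i in range(2)]

print(A)  # [[0.15, 0.25], [0.2, 0.05]]
```

The same column-wise division applies to a table of any size; only the two `range(2)` bounds would change.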
The A matrix (and a Leontief model in general) thus represents a series of "production
recipes" for all of the sectors in the economy. A production recipe is just like a recipe for
cooking food, where you are told all of the ingredients needed to prepare a meal and in
which numerical amounts. Soon after Leontief won the Nobel Prize, he was quoted as
saying "When you make bread, you need eggs, flour, and milk. And if you want more bread,
you must use more eggs. There are cooking recipes for all the industries in the economy."
In A matrix terms, since all values in a column are normalized by the total sector output,
each of the coefficients in the production recipe is fractional. As a hypothetical and simple
example, imagine a small economy with only two sectors, like in Example 8-1, which are for
electricity generation and coal mining. The production recipe for making $1 worth of
electricity would involve purchasing a fraction of a dollar from the coal mining sector (as
well as some electricity). Likewise the production recipe for making $1 worth of coal would
involve purchasing some electricity and coal. This interdependence between sectors is
common, and a critical reason why EIO models are so useful in representing systems.
A key benefit of using EIO models is not the organization of the economy into tabular
form. It is that the direct requirements table can be used to trace out everything needed in
the manufacture of a product going back to the very beginning of the life cycle of that
product. One can envision this by considering what direct purchases are needed, then what
purchases are needed to produce those direct purchases, and continuing back through the
purchasing levels to the initial raw materials obtained through mining or farming (n levels
back). If considering the manufacture of windows, a window manufacturing column may
show purchases of glass and wood (or metal) framing pieces. The glass manufacturing
column would show purchases of sand and other minerals, and the wood framing sector
would show purchases of forestry products. If given enough time, one could piece together
the total requirements by iteratively looking up the A matrix chain for such information.
Algebraically, any desired output Y is simply a vector of the various sectors' final
demands. Thus, the total purchases, X, needed to generate output Y can be calculated as:

X = [I + A + A² + A³ + …] Y = IY + AY + A²Y + A³Y + …     (8-1)

where X is the vector (or list) of required production, I is the identity matrix, A is the direct
requirements matrix (with each column representing the required inputs from all other sectors to
make a unit of output for that column's sector), and Y is the vector of desired output. For
example, this model might be applied to represent the various requirements for producing
electricity purchased by residences. In Equation 8-1, the summed terms represent the
production of the desired output itself, electricity (IY), as well as contributions from the
first tier suppliers, e.g., coal or natural gas (AY), the second tier suppliers, e.g., equipment
used at coal mines (A²Y), etc. In input-output terminology we refer generally to the IY and
AY terms (i.e., [I + A]Y) as the direct purchases (because those are everything related
directly to the decisions made by the operators of the final production facility) and all other
A²Y, A³Y, etc., terms as indirect purchases (since those production decisions are made
beyond the direct operators). The sum of the direct and indirect purchases is the total
purchases. Note that the terminology used in IO models may differ from that of other
modeling domains (and as introduced in Chapter 4) but is emphasized for consistency. In
other domains, direct purchases may only refer to IY.
How are economic input-output and process-based methods similar? Think of each of the
sectors of the IO model as a process. In each sector's process, the "inputs" are the
economic inputs (as purchased) from all of the other sectors and the "output" is the
product of the sector. An IO model is thus a linear system of individual economic process
models.

IO models that estimate direct and indirect purchases use Equation 8-1 to combine the
various production recipes across the supply chain into a total supply chain. That is, a
final demand of $20,000 into the automobile manufacturing sector will determine all of the
direct ingredients (in dollars) needed to produce the car. One of these direct requirements
may be a $2,500 engine. Therefore, the IO model also then (in the A²Y term) estimates the
ingredients needed to produce


the $2,500 engine. In the end, the thousands of overall ingredients needed to produce the
car are all included in the total purchases estimate. And all are aggregated into the relevant
sectors, even if they occur at different tiers of the supply chain (i.e., purchases of any direct
or indirect electricity are all added into a single sectoral total for purchases of electricity).
Since IO models by default represent flows across the entire economy, they can be classified
as a "top-down" approach. Such methods give high-level perspectives that can subsequently
be decomposed into pieces.
All of the linear algebra or matrix math needed is easily done in Microsoft Excel for small
models and there are many resources available on linear algebra as well as Excel matrix
arrays and functions should you decide to use them. In the Advanced Material at the end of
this chapter (Section 5), we describe how to do these operations in Excel and MATLAB,¹¹
a popular scientific analysis tool used by many academics and researchers that specializes in
matrix-based computation.
For those of you familiar with the mathematics of infinite series (or matrix math), you will
recognize that the series in Equation 8-1 can be replaced by [I − A]⁻¹, where the −1 exponent
indicates the multiplicative matrix inverse, following an infinite geometric series approximation.
Thus, Equation 8-1 can be simplified to Equation 8-2 (see Advanced Material Section 1 at the end
of this chapter for more detail):

X = [I − A]⁻¹ Y     (8-2)

An important observation can be made from the use of Equations 8-1 and 8-2 above: since
summing all direct and indirect purchases results in all of the required production needed,
[I − A]⁻¹ is called the total requirements table (or matrix).
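This equivalence is easy to verify numerically. The sketch below (pure Python, using the 2×2 A matrix from Example 8-1) sums the first twenty terms of the series in Equation 8-1 and confirms the result matches the exact inverse of Equation 8-2:

```python
# Verify that I + A + A^2 + ... converges to [I - A]^-1 for the
# Example 8-1 economy (all 2x2 matrices as nested lists).

A = [[0.15, 0.25], [0.20, 0.05]]
I = [[1.0, 0.0], [0.0, 1.0]]

def matmul(M, N):  # 2x2 matrix product
    return [[sum(M[i][k] * N[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def inv2(M):  # closed-form inverse of a 2x2 matrix
    det = M[0][0] * M[1][1] - M[0][1] * M[1][0]
    return [[ M[1][1] / det, -M[0][1] / det],
            [-M[1][0] / det,  M[0][0] / det]]

# Truncated series: I + A + A^2 + ... + A^20
total, term = [row[:] for row in I], I
for _ in range(20):
    term = matmul(term, A)
    total = [[total[i][j] + term[i][j] for j in range(2)] for i in range(2)]

# Exact total requirements matrix [I - A]^-1
exact = inv2([[I[i][j] - A[i][j] for j in range(2)] for i in range(2)])

assert all(abs(total[i][j] - exact[i][j]) < 1e-6
           for i in range(2) for j in range(2))
print([[round(v, 3) for v in row] for row in exact])
# → [[1.254, 0.33], [0.264, 1.122]]
```

The series converges quickly here because the entries of A are small fractions of a dollar; twenty terms already agree with the inverse to better than one part in a million.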
Using the model in Equations 8-1 or 8-2, we can estimate all of the economic outputs
required throughout an economy to produce a specified set of products or services. The
total of these outputs is often called the "supply chain" for the product or service, where
the chain is the sequence of suppliers. For example, an iron ore mine supplies a blast furnace
to make steel, a steel mill supplies a fabricator that in turn ships their product to a motor
vehicle assembly plant. To make an automobile with its 20,000–30,000 components,
numerous chains of this sort are required. An input-output model includes all such chains
within the linear model in Equation 8-1.
In the Advanced Material at the end of this chapter (Section 2), we further describe the
underlying data sources for the production recipes and transactions tables discussed here.

¹¹ MATLAB is a registered trademark of The MathWorks, Inc., and will be referred to as MATLAB in the remainder of the book.


Example 8-2: Building on Example 8-1, we calculate direct and total requirement vectors.

Question:
Find the direct and total requirements of a $100 billion final demand into each
of the two sectors separately.

Answer:
Using the direct requirements matrix (A) found above, we know that:

A = | 0.15  0.25 |,  I = | 1  0 |,  Y1 = | 100 |,  Y2 = |   0 |
    | 0.20  0.05 |       | 0  1 |        |   0 |        | 100 |

By definition, the direct requirements are [I + A]Y, and the total requirements are [I − A]⁻¹Y. Using
Excel or a similar tool we can find the (rounded off) inverse matrix, which is:

[I − A]⁻¹ = | 1.254  0.33 |
            | 0.264  1.12 |

Thus, the direct requirements for Y1 and Y2 are:

[I + A]Y1 = | 115 |  and  [I + A]Y2 = |  25 |,
            |  20 |                   | 105 |

meaning that a $100 billion demand from sector 1 requires $115 billion in purchases from sector
1 and $20 billion in purchases from sector 2. Similarly, the total requirements are:

[I − A]⁻¹Y1 = | 125.4 |  and  [I − A]⁻¹Y2 = |  33.0 |, considering significant figures.
              |  26.4 |                     | 112.2 |
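The matrix arithmetic of Example 8-2 can be checked with a minimal sketch in pure Python (the numbers and the 2×2 helper functions are specific to this example):

```python
# Direct requirements [I + A]Y and total requirements [I - A]^-1 Y for
# a $100 billion final demand into each sector (Example 8-2 values).

A = [[0.15, 0.25], [0.20, 0.05]]

def matvec(M, v):  # 2x2 matrix times a 2-vector
    return [sum(M[i][k] * v[k] for k in range(2)) for i in range(2)]

def inv2(M):  # closed-form inverse of a 2x2 matrix
    det = M[0][0] * M[1][1] - M[0][1] * M[1][0]
    return [[ M[1][1] / det, -M[0][1] / det],
            [-M[1][0] / det,  M[0][0] / det]]

IplusA = [[1 + A[0][0], A[0][1]], [A[1][0], 1 + A[1][1]]]
Linv = inv2([[1 - A[0][0], -A[0][1]], [-A[1][0], 1 - A[1][1]]])

Y1, Y2 = [100, 0], [0, 100]
print([round(v) for v in matvec(IplusA, Y1)])   # direct: [115, 20]
print([round(v) for v in matvec(IplusA, Y2)])   # direct: [25, 105]
print([round(v, 1) for v in matvec(Linv, Y1)])  # total: [125.4, 26.4]
print([round(v, 1) for v in matvec(Linv, Y2)])  # total: [33.0, 112.2]
```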

The supply chain perspective gives a basis for considering effects that happen before or after
a product is manufactured. We typically refer to points and decisions made before a product
is manufactured as upstream and those made after a product is manufactured as
downstream. Building on the previous example, from the perspective of the blast furnace,
the iron ore mine is upstream and the vehicle assembly plant is downstream. The upstream
and downstream terminology also can apply to process-based models.
Compared to process-based methods, IO methods take a more aggregate view of the sectors
producing all of the goods and services in the U.S. economy. IO models are quick and
efficient but are not perfect. Some of their key assumptions and limitations are listed below.
These lead to various uncertainties, and will be discussed in Chapter 11.
Sectors represent average production. All production facilities in the country that make products
and provide services are aggregated into a fixed number of sectors (in the US economy
models discussed in this book, approximately 400 sectors). Similar production facilities are
all assigned by definition into the same sector, and the model assumes identical production
in all facilities of these sectors. In short, no facility in a sector produces any differently than
any other in the model (even if in fact this is not true). This is the so-called "average
production" assumption. You can get a sense of why this is true by referring back to Figure
8-1, which simply aggregates all transactions from all of the facilities into the various Z
values, then normalizes by the total output of the entire sector, creating average A values.
Input-output models are completely linear. That is, if 10% more output from a particular factory is
needed, each of the inputs will have to increase exactly 10%. This of course is not generally
true, as there could be economies of scale that allow use of inputs to increase less than 10%.
However, this assumption is also common in process-based models.
Manufacturing impacts only. Given the data sources available and used, IO models generally
estimate total expenditures only up to the point of manufacture; that is, they do not estimate
downstream effects from product use (e.g., consideration of the gasoline needed to run the
consumer's car) or end-of-life (e.g., disposal costs and impacts). We will describe below ways
to use IO models to estimate impacts beyond the manufacturing phase.
Capital investments excluded. Capital inputs to manufacturing are not included in most IO
tables. In the US, such transactions are available in a supplemental transactions table and
could be added. Exclusion of capital investments is also a typical assumption for process-based models.
Domestic production. An IO model for a single economy is limited to estimating effects within
that country. Despite the fact that many inputs are likely sourced (imported) from other
countries in today's global economy, imported inputs are assumed to be produced in the
same way as in the model's home country. For certain sectors this may present a problem
because there is so little production done within the home country that the data and/or
environmental flows represented are not robust, but the model will still treat that production
as if done wholly within the home country and with the associated domestic impacts.
Models that move beyond this assumption are possible but beyond the scope of this chapter.
Circularity is inherent and incorporated into the model. In the previous chapters, we noted that all
interdependent systems have circularities such as steel needed to make steel, etc., and that
this complicated the ability to build process models. IO models embrace the existence of
circularity, and the effects are included within the basic model and matrix inversion.
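The Example 8-1 economy illustrates this: sector 1 uses some of its own output as an input (A11 = 0.15), and the matrix inverse absorbs that circularity automatically. A minimal sketch in pure Python, with values from Example 8-1:

```python
# Circularity in the Example 8-1 economy: $1 of final demand for
# sector 1 requires more than $1 of total sector-1 production,
# because sector 1 consumes some of its own output (and sector 2's,
# which in turn consumes sector 1's). The inverse captures every loop.

A = [[0.15, 0.25], [0.20, 0.05]]
det = (1 - A[0][0]) * (1 - A[1][1]) - A[0][1] * A[1][0]  # det of I - A
total_req_11 = (1 - A[1][1]) / det  # entry (1,1) of [I - A]^-1

print(round(total_req_11, 3))  # 1.254 > 1: circular flows included
```

No special treatment of the loops is needed; the diagonal entry of the total requirements matrix exceeding 1 is exactly the circular, self-referential production resolved by the inversion.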

Input-Output Models Applied to Life Cycle Assessment


Now that the underlying economic input-output models have been introduced, we can
discuss how they are applied to support decisions about LCA. By appending data on energy,
environmental, and other flows to the input-output table, non-economic impacts can be
predicted. The resultant models are referred to as environmentally extended input-output models (EEIO).
Here we differentiate between using economic input-output methods generally to support
LCA as IO-LCA and the specific method as implemented with our colleagues in the Green
Design Institute of Carnegie Mellon as EIO-LCA (discussed below). This differentiation
seeks to emphasize general IO-LCA methods and practices, as well as an additional specific
set of data sources and assumptions for the EIO-LCA model. Within the US there are
various tables and resources related to IO-LCA. Aside from EIO-LCA there are the CEDA
database and OpenIO. There are other similar IO-LCA models outside the US.
The advantage of the process-based approach as described in Chapters 4 and 5 is that it can
answer as detailed a question as desired concerning the materials and energy balances of each
facility studied, assuming that adequate resources exist to collect and analyze data on the
relevant flows. In practice, getting these balances is sufficiently difficult that they rarely are
able to answer detailed questions. The disadvantage of the process-based approach is that
the required expense and time means that generally a tight boundary must be drawn that
excludes many of the facilities relevant to activities within the overall system.
The advantage of the IO-LCA approach is that a specific boundary decision is not required,
because by default the boundary is the entire economy of production-related effects,
including all the material and energy inputs. Another major advantage is that it is quick and
inexpensive. Results can be generated in seconds at no cost other than the time involved.
Note, though, that with respect to modeling in support of an ISO-compliant LCA, IO-LCA
methods are generally most useful as a screening tool rather than as the core model needed
to answer the necessary goals of the LCA task. We introduce IO-LCA methods so that the
overall LCA task can be improved by gaining an appreciation for where the greatest
systemwide impacts occur. Such knowledge can inform choices of scope, boundaries, and
data sources for process-based models. They can also be used to help validate results from
process-based methods, as the more comprehensive IO-LCA boundaries will generally lead
to higher estimates of impacts (upper bounds), which can be used to assess whether the
process-based results seem reasonable.
The IO-LCA approach has a major disadvantage: it uses aggregate and average data for a
sector rather than detailed data for a specific process. For example, an IO-LCA model will
yield results for the average production from, say, the sector "iron and steel mills," rather
than from producing particular steel alloys required for an automobile (which one could find
in process data). As another example, the US IO table does not distinguish between
generating electricity using a 50-year-old coal plant and using a new combined-cycle gas
turbine. The former emits much more pollution per kWh than the latter. A process model
could compare the different processes to the degree desired. The process models can be
specific to particular materials or processes, rather than the output from a sector of the
economy. Even with more than 400 sectors available, analysts would often like to
disaggregate IO-LCA models, such as dividing the plastics sector into production of
different types of plastic. Process models can also handle nonlinear effects.
While we focus on the use of IO-LCA in this chapter, we also describe "hybrid" models in
which IO-LCA and process models are combined to exploit the advantages of both (see
Chapter 9). In hybrid models, the results from production of a chemical might be from a
process model, while the effects of the inputs to the process might be assessed with IO-LCA. With a hybrid model, the reliance on process models can vary from slight (such as one
model) to very extensive (IO-LCA might be used only for a single input such as electricity).
IO-LCA (or EEIO) models work by following the flow chart shown in Figure 8-2.
Economic activity generates environmental impacts. Production of steel generates solid
waste (slags, air pollution, wastewater) and consumes energy that results in greenhouse gas
emissions. These environmental impacts can be assumed to be linear in their magnitude and
can also be described as vectors.
Estimate final demand (Y) → Assess direct and indirect economic requirements (X) →
Assess overall environmental or energy impacts per sector (E) → Sum sector-level impacts
for overall impacts

Figure 8-2: Flow chart for IO-LCA models

We have already described above the process needed to complete the first two steps. In this
section we describe the last two steps that allow use of the models for LCA screening
purposes. In our discussion, we use "dollars" as a currency given our own biases, but IO-LCA models can and have been derived around the world in many currencies. Our use of
dollars is meant to merely provide a consistent terminology for expression.
Once economic output for each sector (X) is known, a vector of total environmental
effects (i.e., the sum of direct and indirect environmental effects) for each sector can be
obtained by multiplying the output by the environmental impact per dollar of output:
E = RX = R [I − A]⁻¹ Y     (8-3)

where E is the vector of environmental burdens (such as toxic emissions or electricity use
for each production sector), and R is a matrix with diagonal elements representing the
impact per dollar of output for each sector for a particular energy or environmental burden
(Lave 1995, Hendrickson 1998, Leontief 1970). A variety of environmental burdens may be
included in this calculation. For example, from estimates of resource inputs (electricity, fuels,
ores, and fertilizers), we can further estimate multiple environmental outputs (toxic
emissions by media, hazardous waste generation and management, conventional air pollutant
emissions, global warming potential, and ozone depleting substances). We find direct and
indirect environmental burdens by multiplying the R matrix by the direct and indirect
purchases. As before, the total environmental burdens are the sum of direct and indirect.
While the matrix math is trivial, the data needs are worth discussing. The R matrix has units
of burdens per dollar of output (e.g., kg CO2/$). Multiplying R by a vector (X) with unit of
dollars for each sector will yield E, with units of burdens by sector (e.g., kg CO2), removing
cost dependence from the final impact tally. Deriving R (in units of burdens per dollar)
merits additional discussion; understanding the process and its limitations is critical to
understanding how and why IO-LCA models can be used to support LCA tasks.
IO-LCA model results are better understood by example. Like the purely economic IO
models on which they are built, IO-LCA models can estimate environmental burdens across
the supply chain. Revisiting our example of a final demand of $20,000 of automobiles, if our
R matrix was for emissions, then the E = RX vector would represent total emissions.
Included in these estimated emissions would be not only the emissions from the automobile
factory, but also the emissions from the tire factory, the rubber factory, and all other
upstream processes (including transportation) that supported production of that $20,000 car.
All of this is possible since the simple assumption of the R matrix is that it contains
emissions per dollar for each sector, and the X vector has already estimated the necessary
economic outputs for all of the sectors. As mentioned in the beginning of this chapter, in
addition to the inputs from each sector, the output, X, includes value added such as labor or
profits. As a result, emissions will be indirectly associated with these activities as well.

Example 8-3: Build on methods above to estimate direct and total environmental burdens by sector.

Question:
What are the direct and total emissions of waste for the inputs specified in Example 8-2?
Assume emissions of waste per billion dollars of output of sector 1 are 50 g and sector 2 are 5 g.

Answer:
The direct emissions are E = R [I + A] Y and total emissions are E = RX = R [I − A]⁻¹ Y.
Example 8-2 derived [I + A]Y and [I − A]⁻¹Y for each of the two sectors. As given above,

R = | 50  0 |
    |  0  5 |

For Y1 and Y2 the direct emissions are:

R[I + A]Y1 = | (50 × 115) + (0 × 20) | = | 5750 |  and  R[I + A]Y2 = | 1250 |
             | (0 × 115) + (5 × 20)  |   |  100 |                   |  525 |

Thus, the sum of direct emissions for Y1 is 5,850 g (5.9 kg) and for Y2 is 1,775 g (1.8 kg). Similarly, the
total emissions are 6,403 g (6.4 kg) and 2,211 g (2.2 kg), respectively, with consideration for significant figures.
The direct emissions in general are a fairly large share of the total emissions in both cases.
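The emissions arithmetic of Example 8-3 can be checked with a short sketch in pure Python. The requirement vectors are the rounded ones from Example 8-2; note that the rounded vectors give 6,402 g for the first total, while carrying the unrounded inverse yields the 6,403 g reported above:

```python
# E = R X for Example 8-3: R is diagonal with 50 g (sector 1) and 5 g
# (sector 2) of waste per $billion of output, applied to the direct
# and total requirement vectors found in Example 8-2.

r = [50, 5]  # diagonal of R, in g per $billion of sector output

direct = {"Y1": [115, 20], "Y2": [25, 105]}
total = {"Y1": [125.4, 26.4], "Y2": [33.0, 112.2]}

def burdens(x):  # element-wise R times an output vector
    return [r[i] * x[i] for i in range(2)]

print(burdens(direct["Y1"]), sum(burdens(direct["Y1"])))  # [5750, 100] 5850
print(burdens(direct["Y2"]), sum(burdens(direct["Y2"])))  # [1250, 525] 1775
print(round(sum(burdens(total["Y1"]))))  # 6402
print(round(sum(burdens(total["Y2"]))))  # 2211
```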


Producing an R matrix for a burden, e.g., sulfur dioxide emissions, requires a comprehensive
data source of such emissions on a sector level. The sector classification of relevance is that
of the associated IO model. As mentioned above, IO models typically follow existing
classification schemes (e.g., the US 2002 benchmark model generally follows the NAICS
classification system used throughout North America). Thus, a data source of total sulfur
dioxide emissions broken up by NAICS sector is required. Such a data source is ideally
already available and in the US, can be obtained from the Environmental Protection Agency.
However, some work may need to be done to translate, convert, or re-classify existing data
into a NAICS or IO sector basis to use in an IO-LCA model (see the Advanced Material for
this Chapter, Section 4). Once total sulfur dioxide emissions in an economy in a given year
are found (ideally the data source would provide emissions in the same year as the IO table
used) and allocated to the various IO sectors, each of the sector emissions values is divided
by the total output (Xi) of the sector. The result is the R matrix of burdens per dollar of
output. Typically, IO models are used at the scale of "millions of dollars" (as opposed to
dollars or thousands of dollars), so the normalization factor Xi would be scaled to millions
(i.e., instead of dividing by $120 million, one would divide by 120). This same process would
be repeated for all burdens of interest in the IO-LCA model building process.
Example 8-4: In this example, we show how to derive an R matrix value for a particular sector.
Question:
What is the R matrix value for the electric power sector for sulfur dioxide emissions in
2002 (in short tons per million dollars)?
Answer:
EPA data for 2002 suggests that the total sulfur dioxide emissions from power generation
was 10.7 million short tons. In 2002, the sector output of the power generation and supply sector was
$250 billion. Thus the R matrix value for the power generation sector would be (10.7 million short tons) /
($250 billion) = 42.8 short tons per million dollars.
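The division behind Example 8-4 can be sketched as follows (the variable names are illustrative only):

```python
# R matrix entry for the power generation sector: total SO2 emissions
# divided by sector output, with output scaled to millions of dollars.

so2_short_tons = 10.7e6      # 10.7 million short tons of SO2 (2002)
output_millions = 250_000.0  # $250 billion, expressed in $ millions

r_entry = so2_short_tons / output_millions
print(r_entry)  # 42.8 short tons of SO2 per $ million of output
```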

Introduction to the EIO-LCA Input-Output LCA Model


In this section, we provide specific information about the EIO-LCA model and specific
illustrations to aid in your understanding of how such models are built. EIO-LCA was
developed by researchers at the Green Design Institute at Carnegie Mellon University and is
available both as an Internet web tool and as an underlying MATLAB model environment at
http://www.eiolca.net/. Online tutorials are available on how to use the web model and are
not repeated here. For EIO-LCA MATLAB information, see the Advanced Material for this
Chapter, Section 5. EIO-LCA is available free for non-commercial use on the Internet
(although even corporate users are able to generate results to get a sense of how it works).
Process model databases and software sold by consulting companies can be quite expensive,
generally ranging on the order of thousands of dollars, as shown in Chapter 5. Other IO-LCA models likely use very similar processes, but if you intend to use them you should read
their documentation to understand their similarities and limitations.
The calculations required for Equations 8-1 through 8-3 are well within the capabilities of
personal computers and servers. The result is a quick and inexpensive way to trace out the
supply chain impacts of any purchase. The EIO-LCA website is able to assist in generating
IO-LCA estimates for various countries and with various levels of detail. In this chapter, we
will use the 428-sector 2002 US economy model to explore a variety of design and purchase
decisions. The EIO-LCA model traces the various economic transactions, resource
requirements, and environmental emissions required to provide a particular product or
service. The model captures all the various manufacturing, transportation, mining, and
related requirements to produce it. For example, it is possible to trace out the upstream
implications of purchasing $50,000 of reinforcing steel and $100,000 of concrete for a
kilometer of roadway pavement. Environmental impacts of these purchases can be estimated
using EIO-LCA. Converting such values into the relevant benchmark model year dollar
values is discussed in Section 3 of the Advanced Material.
We discuss the various data sources for the model, give an example application of the
software, provide a numerical example of the inputoutput calculations, and provide some
sample problems.
The data in the EIO-LCA software is derived from a variety of public datasets and
assembled for the various commodity sectors. For the most part, the data is self-reported
and is subject to measurement error and reporting requirement gaps. For example,
automotive repair shops do not have to report to the Toxics Release Inventory. The level of
quality assurance of the public data used varies. The major datasets include:
Direct and Total InputOutput Tables: The EIO-LCA website provides models for the
US, Germany, Spain, Canada, and China. US models available include those for the
benchmark years 1992, 1997, and 2002. Several of those years have multiple levels of
sector detail available. The 428-sector 2002 industry by commodity inputoutput (IO)
matrix of the US economy as developed by the U.S. Department of Commerce
Bureau of Economic Analysis is the default model. Economic Impacts are computed
from the IO matrix and the user-input change in final demand. While the remaining
data sets below are generally available for multiple country-year models, the specific
details provided are for the 428-sector 2002 model.
R matrices:
Energy use: Estimates of energy use for the 428 sectors come from a number of
sources. Energy use of manufacturing sectors (roughly 270 of 428) is developed from
the Manufacturing Energy Consumption Survey (MECS), while energy use for mining
sectors is calculated from the 2002 Economic Census (USCB 1997). Service sector electricity
use is estimated using the IO table purchases and average electricity prices for these
sectors.
Conventional pollutant emissions are from the US Environmental Protection Agency,
primarily the National Emissions Inventory (NEI) and onroad/nonroad data sources.
Greenhouse gas emissions are calculated by applying emissions factors to fuel use for
fossil-based emissions and allocating top-down estimates of agricultural, chemical
process, waste management, and other practices that generate non-fossil carbon
emissions to economic sectors.
Toxic releases and emissions are derived from EPA's 2002 Toxics Release Inventory
(TRI).
Hazardous waste: RCRA (Resource Conservation and Recovery Act) Subtitle C
hazardous waste generation, management, and shipment was derived from EPA's
National Biannual RCRA Hazardous Waste Report.
Water use: Data come from various sources, as published in Blackhurst (2010).
Detailed information on how the underlying data sources are used to generate the R matrices
in EIO-LCA are available on the EIO-LCA website (at http://www.eiolca.net/docs/).
The EIO-LCA website follows the same workflow as any IO model, as shown in Figure 8-2.
As a user, all you need to do is to enter a value of final demand, select a sector that must
produce that final demand, choose whether you want to see economic (X) or energy-environmental results (R), and click a button. All of the matrix math, data management, etc.,
is done by the web server and results are shown in tabular form within seconds. With this
basic kind of IO model, you can only enter a final demand for a single sector (i.e., you can
only enter a value for a single Yi and all other elements of Y are assumed to be zero). It is
possible to build a custom model in the EIO-LCA model where simultaneous purchases can
be made from multiple sectors; however, such a model also has limitations on its
meaningfulness. If you have not used EIO-LCA before, there are various tutorials,
screencasts, and other resources available on the website.
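The button-click workflow just described boils down to a few lines of matrix algebra. The sketch below uses a hypothetical 3-sector economy (the A matrix and R vector values are invented for illustration, not EIO-LCA data); the web server performs the same computation with the 428-sector matrices:

```python
import numpy as np

# Hypothetical direct requirements matrix A: entry (i, j) is the $ of
# input from sector i needed per $ of output from sector j.
A = np.array([
    [0.10, 0.20, 0.05],
    [0.30, 0.05, 0.10],
    [0.02, 0.10, 0.15],
])

# Hypothetical R vector: environmental burden per $ of sector output
# (e.g., MJ of energy per $ of output).
R = np.array([5.0, 30.0, 2.0])

# Final demand Y with a single nonzero element, as in the basic web model.
Y = np.zeros(3)
Y[0] = 1_000_000  # $1 million of final demand placed on sector 0

# Total economic output across all tiers of the supply chain: X = (I - A)^-1 Y
X = np.linalg.solve(np.eye(3) - A, Y)

# Environmental results: per-sector burdens and the economy-wide total.
burdens = R * X
print(X)              # elements of X, as in a "Total Economic" column
print(burdens.sum())  # total supply-chain burden
```

Solving the linear system directly (np.linalg.solve) is preferred to forming the Leontief inverse explicitly when results for only one final demand vector are needed.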

EIO-LCA Example: Automobile Manufacturing


As a specific demonstration of the utility of IO-LCA models (specifically EIO-LCA), this
section examines the manufacture of automobiles. As defined by the US Department of
Commerce, the automobile manufacturing sector for the US 2002 benchmark EIO model is
composed of the following NAICS sector:
336111 Automobile Manufacturing
This U.S. industry comprises establishments primarily engaged in one or more of the
following manufacturing activities:
* complete automobiles (i.e., body and chassis or unibody) or
* automobile chassis only.
Note that the EIO-LCA server shows the information above when browsing or searching
for sectors in which to place final demand. For example, choosing the related light truck
manufacturing sector would produce similar, but not identical, results to those shown below.
We can trace the supply chain for the production of $1 million of automobiles in 2002 using
EIO-LCA. This production of $1 million would represent the effects of making roughly 40
automobiles (given an approximate average price of $25,000 each in the year 2002). Figure
8-3 shows the total and direct (including percentage direct) economic contributions of the
largest 20 supply sectors within the supply chain for making automobiles in the US.
First, consider the economic results. From Figure 8-3, a $1 million final demand for
automobiles requires total economic activity in the supply chain of $2.71 million. The
total economic output column contains the elements of X; EIO-LCA also sums across all of X to
present the total. Results for the other 408 sectors are available on the website but are not
shown here.


Sector                                                      Total       Direct      Direct
                                                            Economic    Economic    Economic
                                                            $mill       $mill       %
Total for all sectors                                        2.71        1.74        64.2
Automobile manufacturing                                     0.849       0.849      100.0
Motor vehicle parts manufacturing                            0.506       0.446       88.1
Light truck and utility vehicle manufacturing                0.150       0.150       99.9
Wholesale trade                                              0.124       0.057       46.1
Management of companies and enterprises                      0.108       0.033       30.9
Iron and steel mills                                         0.038       0.000        1.60
Semiconductor and related device manufacturing               0.026       0.014       54.7
Truck transportation                                         0.025       0.009       34.2
Other plastics product manufacturing                         0.021       0.010       48.5
Power generation and supply                                  0.020       0.002       10.7
Real estate                                                  0.020       0.001        5.97
Turned product and screw, nut, and bolt manufacturing        0.017       0.005       30.1
Ferrous metal foundries                                      0.015       0.000        1.68
Nonferrous foundries                                         0.015       0.000        2.14
Glass product manufacturing made of purchased glass          0.015       0.012       84.4
Other engine equipment manufacturing                         0.015       0.012       79.8
Machine shops                                                0.014       0.002       15.9
Oil and gas extraction                                       0.013       0.000        0.085
Monetary authorities and depository credit intermediation    0.013       0.000        2.18
Lessors of nonfinancial intangible assets                    0.013       0.000        3.65

Figure 8-3: Supply chain economic transactions for production of $1 million of automobiles
in the US, $2002. Top 20 sectors. Results sorted by total economic output.

As discussed above, the change in GDP as a result of this economic activity would be only
$1 million, since GDP measures only changes in final output, not all purchases of
intermediate goods (i.e., not $2,710,000). The largest activity is in the automobile manufacturing
sector itself: $849,000. This includes purchases by the company that assembles vehicles from
other companies within the automobile manufacturing industry, such as those that make steering
wheels, interior lighting systems, and seats.
The economic value of the supply chain is also shown in Figure 8-3. Direct purchases (from
the IO perspective, i.e., I+A) are $1.74 million, including the $1 million of final demand.
Not surprisingly, direct purchases are dominated by vehicle and parts manufacturing sectors.
The direct percentage compares the direct purchases for each sector to the total purchases
across the supply chain for each sector. Sectors with small direct purchase percentages (or,
alternatively, large indirect percentages) have most of their production feeding the indirect
supply chain of automobiles rather than the automobile assembly factories directly. Many of
the top 20 sectors have more than 50% of their total output as direct inputs into making
automobiles (e.g., the semiconductor manufacturing and glass manufacturing sectors). Others
primarily supply the other suppliers (e.g., iron and steel mills, power generation and supply).
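The total and direct columns of Figure 8-3 can be reproduced for any IO model: total output is (I - A)^-1 Y, direct purchases are (I + A) Y, and the direct percentage is their elementwise ratio. A sketch with a hypothetical 2-sector requirements matrix:

```python
import numpy as np

# Hypothetical 2-sector direct requirements matrix (not EIO-LCA data).
A = np.array([[0.15, 0.25],
              [0.20, 0.05]])
Y = np.array([1_000_000.0, 0.0])  # $1M of final demand on sector 0

total = np.linalg.solve(np.eye(2) - A, Y)  # all tiers: (I - A)^-1 Y
direct = (np.eye(2) + A) @ Y               # final demand plus first-tier purchases
direct_pct = 100 * direct / total          # analogous to the "Direct %" column

print(total)
print(direct)      # [1150000.  200000.]
print(direct_pct)
```

Because (I - A)^-1 = I + A + A^2 + ..., the direct values can never exceed the totals, so the direct percentage is always between 0 and 100.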
EIO-LCA also allows you to generate estimates of energy and environmental effects, using
the data sources identified above. Using the same final demand of $1 million, Figure 8-4
shows the energy use across the supply chain for producing automobiles for the top 10
energy-consuming sectors (results available but not shown for the other 418 sectors). We
remind you that IO models are linear. Any analysis you do for $1 million of automobiles can
be linearly scaled down per vehicle.
EIO-LCA estimates total supply chain energy use of 8.33 TJ per $1 million of automobiles
manufactured (or 167 GJ per vehicle assuming 50 vehicles produced at an average cost of
$20,000). About 25% of that energy use (2.19 TJ) comes from energy needed in the
electricity (power generation and supply) sector, and about 15% from iron and steel mills. Most of
the coal used in the supply chain is for generating power. Natural gas use is fairly evenly
split among the top sectors. Similarly, most of the petroleum used is in the various
transportation sectors, not all of which are shown in the top 10 list of Figure 8-4. Notice
that the top sectors in terms of economic output are not closely associated with the top
energy-consuming sectors! IO models will show that generally energy-intensive sectors are
an important part of the energy supply chain for every sector but are not always those that
have the largest economic input.
Note that specific fuels are not shown in Figure 8-4. Underlying data sources provide
information on consumption of diesel, gasoline, and other fuels that are aggregated into a
single estimate of "Petroleum" use.


Sector                                       Total      Coal    NatGas   Petrol   Bio/Waste   Non-Foss
                                             Energy TJ  TJ      TJ       TJ       TJ          Elec TJ
Total for all sectors                        8.33       2.56    2.63     1.29     0.435       1.41
Power generation and supply                  2.19       1.60    0.467    0.078    0.051
Iron and steel mills                         1.25       0.743   0.341    0.012    0.005       0.151
Motor vehicle parts manufacturing            0.460      0.005   0.190    0.014    0.024       0.228
Automobile manufacturing                     0.381      0.004   0.190    0.013    0.040       0.133
Truck transportation                         0.327                       0.324                0.003
Other basic organic chemical manufacturing   0.259      0.032   0.099    0.036    0.078       0.014
Petroleum refineries                         0.187      0.000   0.050    0.121    0.009       0.007
Alumina refining and primary
  aluminum production                        0.172      0.046   0.001    0.004                0.120
Plastics material and resin manufacturing    0.169      0.007   0.088    0.037    0.018       0.019
Paperboard mills                             0.161      0.015   0.033    0.007    0.095       0.011

Figure 8-4: Supply chain energy requirements for production of $1 million of automobiles in
2002, results for top 10 energy consuming sectors, sorted by total energy.

IO-LCA models must carefully manage fuel and energy data. Fuel use is tracked only in the
sector that directly uses it. Many sectors consume electricity, but only the power generation
sector consumes the coal, natural gas, petroleum, and biomaterial needed for generation.
Also, note that "non-fossil" electricity consumption is estimated in Figure 8-4. While
facilities within sectors are assumed to consume average electricity (generated from a mix of
fossil and non-fossil sources), if the model tracked total energy use of fossil and non-fossil
sourced electricity in TJ, and also tracked the coal and/or natural gas used to generate it, the
model would "double count" the energy in the fossil fuel and the electricity. Thus, we only
track an average amount of non-fossil electricity (which does not depend on fossil fuels to
generate it), avoiding the double counting of energy. To derive the non-fossil share,
Department of Energy data on percent non-fossil electricity generation in 2002 (31%) is
multiplied by the amount of electricity consumed by each sector.
The case study below helps to describe how an IO-LCA based screening assessment can be
used in a corporate setting.
Case Study: Bio-based Feedstocks in the Paint and Coatings Sector
A US company was considering acting on customer requests to provide an alternative
product made from bio-based feedstocks. These customers were looking to reduce
the "carbon footprint" of their products and their fossil fuel consumption, and
had read studies on the net carbon efficiency of bio-based as opposed to
petrochemical-based feedstocks. The question was whether such a conversion might be
a beneficial substitution for the producer and its customers.
An excerpted screening analysis using EIO-LCA for $100,000 of final demand in the
Paint and coatings sector demonstrated that the current mix of petroleum-based
feedstocks across the entire production chain of making paints and coatings is a
fairly small part of total purchases and only about 5% (6 / 107 tons) of the
carbon emissions. On the other hand, supply chain-wide purchases of electricity are
comparable in economic value ($2k versus $5k) but constitute about 25% (25 / 107 tons) of
the total carbon-equivalent emissions. This screening analysis suggests that the
switch to bio-based feedstocks would likely have a modest effect on the burden of the
product. In addition, it suggests that a corporate push for more renewable electricity
in its supply chain could have substantial benefits. It is this latter strategy that we
recommended to our corporate partner.
Figure 8-5: EIO-LCA economic and CO2 emissions results for the 'Paint and Coatings' Sector

Sector                            Total           CO2 equivalents
                                  ($ thousand)    (tons)
Total across all 428 sectors      266             107
Paint and coatings                100
Materials and resins              13
Organic chemicals                 12
Wholesale Trade                   10
Management of companies           10              <1
Dyes and pigments                                 17
Petroleum refineries
Truck transportation
Electricity                                       25


Beyond Cradle to Gate Analyses with IO-LCA


So far, IO tables and models have been represented as capturing the entire upstream supply
chain up to the point of manufacture, economically referred to as a producer price basis.
This means the context of final demand, as well as the structure of the production recipes,
was from the perspective of the producer at their place of business. In other words, the
relevant input into the model would be as if the producer were merely trying to recover the
costs they incurred. Thus the perspective (or boundary) of producer-basis models is "cradle
to gate": the effects estimated by the model end at the point of production. The
appropriate final demand input is as measured or "seen" by the producer.
IO models can also be created on a purchaser price basis. In such models, additional
stages beyond production are internalized into the production recipe for each sector. For
physical goods, typical activities internalized include transportation of product from the
producer as well as wholesale and retail margins. These models have a "cradle to consumer"
boundary. The relevant dollar input into a purchaser price model is the price that a consumer
(buyer) would expect to pay, which is generally easier to determine or derive than a producer
price. As a simple example, an automobile may have a $20,000 producer price basis, but
after transportation and dealer overhead are included may have a purchaser price of $25,000.
In such a case, the "production recipe" of the $25,000 car in a purchaser price model might
have $20,000 of the recipe be associated with automobile manufacturing, $2,000 for
transportation (e.g., by truck and rail) and $3,000 for retail overhead. If we were able to
perfectly separate the pieces, the purchaser price model would have all of the effects as
estimated as if a $20,000 input into a producer price model had been used, as well as the
additional effects from the $5,000 of other activities. Additional impacts that might be
estimated via the wholesale and retail margins are electricity use by computers at the store or
emissions from climate-controlled warehouses. On the other hand, if we entered the same
value ($20,000 or $25,000) into both models, this correspondence would not occur since the
models are linear and the recipes would not be fully inclusive. Additional detail about price
systems in producer and purchaser models is available in the Advanced Material (Section 3)
at the end of this chapter.
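As a sketch of the correspondence described above, the $25,000 purchaser-price automobile can be approximated in a producer-price model by spreading the price over a final demand vector covering the producer price and the margins. The 4-sector A matrix and the sector assignments here are invented for illustration:

```python
import numpy as np

# Hypothetical 4-sector producer-price model:
# 0 = automobile mfg, 1 = truck/rail transport, 2 = retail trade, 3 = steel
A = np.array([
    [0.05, 0.00, 0.00, 0.00],
    [0.03, 0.02, 0.01, 0.05],
    [0.00, 0.00, 0.00, 0.00],
    [0.20, 0.00, 0.00, 0.10],
])

# The $25,000 purchaser price split into producer price plus margins,
# following the chapter's example (a real model publishes margin tables).
Y = np.zeros(4)
Y[0] = 20_000  # producer price of the vehicle
Y[1] = 2_000   # transportation margin
Y[2] = 3_000   # retail margin

X = np.linalg.solve(np.eye(4) - A, Y)
print(X.sum())  # total supply-chain activity for the cradle-to-consumer demand
```

By linearity, this run is equivalent to three separate producer-price runs ($20,000, $2,000, and $3,000 into the respective sectors) summed together.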
Beyond producer and purchaser basis models, IO-LCA models can be used to estimate
effects of even broader life cycles. In the example above we consider what an EIO model
would estimate in terms of effects for a cradle to gate and cradle to consumer boundary for
an automobile. With those boundaries, the various requirements of using the vehicle (e.g.,
purchasing gasoline, insurance, maintenance, etc.) and managing it at end of life would not
be included in the model results.
However, such a scope can still be approximated by using a slightly more complicated IO-LCA
model. Instead of entering only a single element of final demand, multiple Yi elements
can be chosen. Alternatively, the model could be run multiple times and the individual
results aggregated to a single final result.
Building on the homework problem from Chapter 3, a comparison of the life cycle effects
associated with the manufacture and use of two washing machines might include, for each
machine, various elements of final demand (entered as a series of elements in Y or run
consecutively through a model as separate individual elements of final demand):

An input of final demand into the Household laundry equipment manufacturing sector

A final demand input for a lifetime of water used (at an assumed $/gallon cost) from
the Water, sewage, and other systems sector

A final demand input for a lifetime of electricity used (at an assumed $/kWh cost)
from the Power generation and supply sector

A full analysis would include differences pertaining to end-of-life disposal, although the
differences are likely to be relatively small for two washing machines.
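Because the model is linear, entering the three final demands as one Y vector and summing three separate single-demand runs give identical results. A sketch with hypothetical sector indices and dollar values:

```python
import numpy as np

# Hypothetical 3-sector model: 0 = laundry equipment, 1 = water, 2 = power.
A = np.array([[0.10, 0.05, 0.02],
              [0.04, 0.08, 0.03],
              [0.06, 0.10, 0.05]])
L = np.linalg.inv(np.eye(3) - A)  # Leontief inverse

# Hypothetical final demands: machine purchase, lifetime water, lifetime electricity.
Y_machine = np.array([500.0, 0.0, 0.0])
Y_water = np.array([0.0, 300.0, 0.0])
Y_power = np.array([0.0, 0.0, 1_200.0])

combined = L @ (Y_machine + Y_water + Y_power)        # one run, multiple Yi
separate = L @ Y_machine + L @ Y_water + L @ Y_power  # three runs, summed

print(np.allclose(combined, separate))  # True: the two approaches are equivalent
```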
What do these EIO-LCA results demonstrate? Keeping in mind that IO-LCA models are a
screening tool, they can help us to make and justify our LCA project design decisions. If we
were doing an LCA of the energy use of an automobile, our IO-LCA results suggest that the
boundary of the manufacturing processes should include the electricity, semiconductors,
trade, and chemicals needed. Many of the other processes could be ignored with little impact
on the results.

IO model frameworks produce results as shown in Figure 8-3 and Figure 8-4. The effects
from all facilities within a sector at many levels of the supply chain are "rolled up" into
these single value results. For example, from Figure 8-4, the 1.6 TJ of coal used in the
power generation sector comes from many individual power plants, some (about 10%)
directly, but mostly indirectly. These rolled up results do not allow us to see energy use at
specific tiers of the supply chain, or at particular facilities. From such frameworks, the best
analysis possible is a comparison of direct and indirect effects. Advanced methods such as
structural path analysis (Chapter 10) allow one to drill down into specific layers of the
supply chain to find specific pathways of connections of requirements between sectors.

As discussed in Chapters 2 and 5, referencing of data sources and models is critical in LCA.
If using an IO-LCA model in your study, you should be sure to note:

the name and location of the model,


its country of focus,

EIO table year, and

whether it is a producer or purchaser basis model.


Somewhere in your study you must also clearly state the input value of final demand, the
name of any sectors chosen for analysis, and R matrix datasets chosen. For example, the
EIO-LCA model suggests the following citation be used:
Carnegie Mellon University Green Design Institute. (2013) Economic Input-Output Life
Cycle Assessment (EIO-LCA) US 2002 (428 sectors) Producer model [Internet], Available from:
<http://www.eiolca.net/> [Accessed 25 Aug 2013].

And you could separately provide in the LCA study document a table of final demands,
detailed sectors, and impact categories.
So far we have only motivated economic input-output models. However, an IO framework
can be applied to any type of unit, for example, physical flows. If desired, one could derive a
linear system of equations that instead represented the mass quantities needed across an
economy to support production. Such models could also be built with multiple or mixed
units. All of the same matrix techniques can be used to estimate direct and total
requirements (see homework question 5).
Overall, tracing the supply chain requirements for production has yielded surprises that have
raised questions about sole reliance on process-based LCA. The most important suppliers in
one dimension (e.g., economic dependence) often are not the most important in another
(e.g., energy use). Figure 8-4 showed that some of the largest energy users in the supply chain
do not even appear among the top 20 economic supply sectors. A system-wide view is
critical in assessing life cycle effects. That said, IO-LCA models provide quick but coarse
and average estimates of LCI results, and cannot substitute for detailed process-based
analysis. IO-LCA methods can help you draw boundaries, assess which processes are
important, and can help validate process-based results. Given the very short time required
to generate results, there is little reason not to consult an IO-LCA model in support of an
LCA study when setting the SDPs. Your screening analysis could identify whether the
choice of sector was critical or not, and also generalize whether placing various processes
within the system boundary is critical or not.

Chapter Summary
We have shown several examples that highlight both the ease and utility, along with the
complications, of exploring the entire supply chain via IO and IO-LCA models. Process-based
models were shown to specifically estimate detailed mass and/or energy balances for
specific activities relevant to the life cycle of a product and to link many of these data
sources to yield a "bottom up" model. The process-based method is also generally expensive
and time-consuming, which leads to project design decisions that narrow the boundaries
around the problem, causing many supply chain aspects to be ignored. IO-LCA methods, on
the other hand, have a top-down system boundary of the entire supply chain up to the point
of production by default. The benefits of an economy-wide comprehensiveness in IO-LCA
models are traded off against the reality that the models are built upon average values for
sectors and environmental burdens. As such, the utility of IO-LCA models is primarily as a
screening tool rather than as a true alternative to a process-based model. For those wishing
to read further into the theory and practice of economic input-output models, we can
recommend two sources: Miller (2009) and Hendrickson (2006).

References for this Chapter


Hawkins, Troy R., and Matthews, Deanna H., 2009. A Classroom Simulation to Teach
Economic Input-Output Life Cycle Assessment. Journal of Industrial Ecology, 13(4): 622-637.
doi:10.1111/j.1530-9290.2009.00148.x

Hendrickson, Chris, Arpad Horvath, Satish Joshi, and Lester Lave, Introduction to the Use
of Economic Input-Output Models for Environmental Life Cycle Assessment. Environmental
Science and Technology, 32(7): 184A-191A, 1998.

Hendrickson, Chris T., Lave, Lester B., and Matthews, H. Scott, "Environmental Life Cycle
Assessment of Goods and Services: An Input-Output Approach", RFF Press, April 2006.

Lave, L., E. Cobas-Flores, C. Hendrickson, and F. McMichael, Generalizing Life Cycle
Analysis: Using Input-Output Analysis to Estimate Economy-Wide Discharges.
Environmental Science & Technology, 29(9): 420A-426A, 1995.

Leontief, W., Environmental Repercussions and the Economic Structure: An Input-Output
Approach. Review of Economics and Statistics, 1970.

Miller, Ronald E. and Blair, Peter D., Input-Output Analysis: Foundations and Extensions, 2nd
edition. Cambridge University Press, 2009.

NAICS 2013. United States Census Bureau, North American Industry Classification System,
http://www.census.gov/eos/www/naics/ (accessed July 10, 2013).


Homework Questions for Chapter 8


1. Use this transactions table (in millions of currency units) to answer the following
questions.
        1      2      Y      X
1     450    200    350   1000
2     100    600   1500   2200
V     450   1400
X    1000   2200

a. Describe in words what the highlighted values in the table represent.


b. Generate the direct requirements matrix
c. Generate the total requirements matrix
d. For a final demand of $50 million in sector 1, find the direct and total
requirements.
2. You want to do a screening assessment of the energy needed to manufacture two different
types of plain white cotton t-shirts, one from a discount store costing $5 and another from a
specialty clothing store costing $15. What would an IO-LCA model suggest about the
differences in their energy use for manufacture? What likely is the real difference in
manufacturing energy if you could measure it yourself?
3. Consider the case of a university looking to better manage its greenhouse gas emissions.
a. What are the greenhouse gas emissions associated with $1 million of university
services in 2002 using the EIO-LCA 2002 benchmark model?
b. Suppose the university purchases 8% of its electricity from wind power. How
could you adjust the emissions found in part (a) for this fact? As a simplification,
assume that no greenhouse gas emissions are associated with wind power
generation and the amount of wind power used in the estimate of part (a) is zero.
c. Could your method in part (b) be used for similar adjustments to the greenhouse
gas emissions of a university? If so, give an example. If not, explain why.
4. Reconsider the washing machine homework question from Chapter 3 using EIO-LCA.
Assume a 10-year lifetime of each machine without discounting. Ignore potential impacts
from a disposal phase.

a. Use the $500 and $1,000 purchaser prices as inputs into the 2002 EIO-LCA model
Household laundry equipment manufacturing sector to estimate the total energy
consumption and CO2 emissions to manufacture the two machines. Compare direct
and indirect effects. What do the results using these inputs suggest?
b. Use the assumptions about water use to estimate the use-phase energy and CO2
emissions via input to the Water, sewage, and other systems sector. Compare direct and
indirect effects.
c. Use the assumptions about electricity use to estimate the use-phase energy and CO2
emissions via input to the Power generation and supply sector. Compare direct and
indirect effects.
d. Create a table summarizing the results above and find a total energy and CO2
emissions for the two machines. Describe your results and how they can be used to
support a comparative assertion.
5. The NREL LCI database contains many unit process models relevant to energy systems
modeling in the United States. Many of these models are physical flow models as opposed to
the economic models we have been discussing lately. In this question, we will create a
streamlined "physical flow" input-output model of energy production - analogous to the
economic input-output models we have been making - that incorporates the production
from several processes in the NREL database (this was from a previous version of the
NREL database, in 2007, pre-SI units). This problem will really test whether you understand
IO model theory.
We will streamline by only focusing on distillate oil, electricity, gasoline, natural gas, residual
oil, and coal. All other inputs will be ignored. Summarized from the NREL database are the
following streamlined production functions:
0.375 pounds coal + 0.436 gallons of distillate oil + 9.61 kWh electricity + 0.0321 gallons
gasoline + 3.72 cubic feet natural gas + 0.161 gallons residual oil = 1000 pounds of coal
0.01 gallons distillate + 1.2 kWh electricity + 0.004 gallons gasoline + 49.6 cubic feet of
natural gas + 0.005 gallons residual oil = 1000 cubic feet of natural gas
13.7 kWh electricity + 32.1 ft3 natural gas + 0.589 gals residual oil = 219 pounds of distillate oil
26.4 kWh electricity + 61.7 ft3 natural gas + 1.13 gals residual oil = 421 pounds of gasoline
3.07 kWh electricity + 7.18 ft3 natural gas + 0.132 gals residual oil = 49 pounds residual fuel oil

0.75 pounds of coal + 2.56 ft3 natural gas + 0.003 gals of residual fuel oil = 1 kWh of
electricity [12]
a) Using the production equations above, make an input-output model to estimate
the total amount of energy (in BTU) needed to produce:
1) 5,000 kWh of electricity (equivalent for one US citizen in a year)
2) 230 gallons of gasoline (rough average per household driving for a year)
Be sure to report your A matrix (requirements matrix) in order of: coal, gas, distillate,
gasoline, residual fuel, and electricity. For consistency purposes, please only use
the conversion factors given at the end of this question.
b) Validate your results by comparing to data in the EIO-LCA 2002 Benchmark
model (the default model). Explain the differences you find.

Conversion Factors:

Distillate Oil: 138,700 BTU/gallon
Electricity: 3,412 BTU/kWh
Gasoline: 125,000 BTU/gallon
Natural Gas: 1,000 BTU/ft3
Residual: 150,000 BTU/gallon
Crude Oil: 18,600 BTU/pound
Coal: 12,000 BTU/pound
Distillate: 18,600 BTU/pound
Gasoline: 18,900 BTU/pound
Residual: 17,800 BTU/pound

[12] Note that since some renewable/other sources that make electricity in the US have been
excluded from this streamlined model, and since coal + natural gas + residual oil makes up only
71% of the US electricity fuel mix, each of the NREL LCI numbers was scaled up by 100/71 to
get the production equations above.


Advanced Material for Chapter 8 - Overview


As with the Advanced Material elsewhere in this book, these sections contain additional
detail about the methods and principles discussed in the chapter. They have been moved to
the back of the chapter because knowing about them is not vital to understanding the
chapter content, but may be necessary if you intend to more substantively use those
methods. It is generally expected that an undergraduate course (or casual learner of LCA)
would focus on the main chapters, and a graduate course (or advanced practitioner)
would incorporate elements from the advanced material.
In the advanced material of this chapter, you will find more in-depth discussion of the
theoretical framework of economic input-output models, how price systems change, how the
vectors and matrices of IO-LCA models (with specific examples from EIO-LCA) have been
constructed, and how to use software tools to develop IO-LCA models.

Section 1 - Linear Algebra Derivation of Leontief (Input-Output) Model Equations
In the chapter, we showed the format of the transactions table and a general derivation of
the round-by-round purchases and how they become the Leontief inverse equation. In this
section, more detail is provided about the system of linear equations that drives IO models.
If you will be doing matrix computations in your work using IO-LCA models, it is important
to understand the equations in this section. We repeat Figure 8-1 here.

                        Input to sectors                      Intermediate   Final      Total
Output from sectors     1      2      3     ...   n           output O       demand Y   output X
1                       Z11    Z12    Z13   ...   Z1n         O1             Y1         X1
2                       Z21    Z22    Z23   ...   Z2n         O2             Y2         X2
3                       Z31    Z32    Z33   ...   Z3n         O3             Y3         X3
n                       Zn1    Zn2    Zn3   ...   Znn         On             Yn         Xn
Intermediate input I    I1     I2     I3    ...   In
Value added V           V1     V2     V3    ...   Vn                         GDP
Total output X          X1     X2     X3    ...   Xn

Figure 8-1. Example Structure of an Economic Input-Output Transactions Table

Notes: Matrix entries Zij are the input to economic sector j from sector i. Total (row) output for each sector i,
Xi, is the sum of intermediate outputs used by other sectors, Oi, and final demand by consumers.
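The accounting identities embedded in Figure 8-1 (each row of the table sums to Xi, each column sums to Xj, and total final demand equals total value added, i.e., GDP) can be checked for any transactions table. A sketch with hypothetical 2-sector numbers:

```python
import numpy as np

# Hypothetical 2-sector transactions table in the Figure 8-1 layout.
Z = np.array([[150.0, 500.0],    # inter-industry flows Zij
              [200.0, 100.0]])
Y = np.array([350.0, 1700.0])    # final demand column
V = np.array([650.0, 1400.0])    # value added row
X = np.array([1000.0, 2000.0])   # total output

# Row identity: Xi = sum_j Zij + Yi (output = intermediate output + final demand)
assert np.allclose(Z.sum(axis=1) + Y, X)
# Column identity: Xj = sum_i Zij + Vj (output = intermediate input + value added)
assert np.allclose(Z.sum(axis=0) + V, X)
# GDP check: total final demand equals total value added
print(Y.sum(), V.sum())  # both 2050.0
```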


In a typical IO model, output across the rows of the transactions table (Figure 8-1), typically
commodity output, can be represented by the sum of each row's values. Thus for each of the
n commodities indexed by i, output Xi is:

Xi = Zi1 + Zi2 + ... + Zin + Yi    (8-5)

However, IO models are typically generalized instead by representing inter-industry flows
between sectors as a percentage of sectoral output. This flow is represented by dividing the
economically-valued (transaction) flow from sector i to sector j by the total output of sector
j. Namely,

Aij = Zij / Xj    (8-6)

In such a system, the Aij term is a unitless technical (or input-output) coefficient. For
example, if a flow of $250 of goods goes from sector 3 to sector 4 (Z34), and the total output
of sector 4 (X4) is $5,000, then A34 = 0.05. This says that 5 cents worth of inputs from sector
3 is in every dollar's worth of output from sector 4. As a substitution, we can also see from
Equation 8-6 that Zij = Aij Xj. This form is more common since the system of linear
equations corresponding to Equation 8-5 is typically represented as

Xi = Ai1 X1 + Ai2 X2 + ... + Ain Xn + Yi    (8-7)

It is straightforward to notice that each Xi term on the left has a corresponding term on the
right of Equation 8-7. Thus all X terms are typically moved to the left hand side of the
equation and the whole system of equations written as:
(1 − A11)X1 − A12X2 − … − A1nXn = Y1
−A21X1 + (1 − A22)X2 − … − A2nXn = Y2
…
−Ai1X1 − Ai2X2 − … + (1 − Aii)Xi − … − AinXn = Yi
…
−An1X1 − An2X2 − … + (1 − Ann)Xn = Yn

(8-8)

If we let the matrix A contain all of the technical coefficient Aij terms, vector X all the
output Xi terms, and vector Y the final demand Yi terms, then equation system 8-8 can be
written more compactly as Equation 8-9:
X − AX = Y
[I − A] X = Y

(8-9)

where I is the n×n identity matrix. This representation takes advantage of the fact that only the
diagonal entries in the system are (1 − Aii) terms, and all others are (−Aij) terms. Finally, we
typically want to calculate the total output, X, of the economy for various exogenous final
demands Y, taken as an input to the system. We can take the inverse of [I − A] and multiply
it on the left of each side of Equation 8-9 to yield the familiar solution
X = [I − A]⁻¹ Y

(8-10)

where [I − A]⁻¹ is the Leontief inverse matrix or, more simply, the Leontief matrix. As
discussed in the main part of the chapter, the creation of this inverse matrix transforms the
direct requirements matrix into a total requirements matrix. The total requirements matrix
mathematically represents all tiers or levels of upstream purchases (instead of just direct
purchases) associated with an input of final demand.
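As a concrete sketch of Equations 8-6 through 8-10, the short NumPy script below builds an A matrix from a hypothetical two-sector transactions table and solves for the total output required by a given final demand. All of the numbers are illustrative assumptions, not values from any actual benchmark table.

```python
import numpy as np

# Hypothetical two-sector transactions table: Z[i, j] is the flow from
# sector i to sector j, and X[j] is sector j's total output (in $).
Z = np.array([[150.0, 500.0],
              [200.0, 100.0]])
X = np.array([1000.0, 2000.0])

# Equation 8-6: technical coefficients Aij = Zij / Xj.
A = Z / X                                # broadcasting divides column j by X[j]

# Equation 8-10: total output needed to satisfy an exogenous final demand Y.
Y = np.array([100.0, 50.0])
leontief = np.linalg.inv(np.eye(2) - A)  # total requirements matrix [I - A]^-1
X_needed = leontief @ Y
print(np.round(X_needed, 2))             # direct plus all upstream output
```

Note that `np.linalg.solve(np.eye(2) - A, Y)` yields the same result without forming the inverse explicitly, which is generally preferable for large systems.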

Section 2 Commodities, Industries, and the Make-Use Framework of EIO Methods
The Leontief model described above is very general, describing output only in terms of the sectors to which it applies. For many readers and users, this is sufficient differentiation. In reality, IO tables and models are generally "commodity by industry", where commodity production sectors i are in the rows and industry sectors j are in the columns. The distinction between commodities and industries is subtle but important. The traditional definition of a commodity is a basic good that is produced widely but identically, for example, white rice; in this traditional definition, all companies and facilities make identical, non-distinct products. Industries, on the other hand, combine various commodity inputs and make a new product. This traditional view of commodities is obsolete, as the "commodity sectors" in modern IO tables categorize such complex and distinct products as computers and other electronics. The terminology is the only constant.
The simplified tables shown above (e.g., in Figure 8-1) are generally derived from "make and
use" tables. A make table organizes economic data related to which industry sectors make
which commodities, while a use table organizes economic data related to which industries
use which commodities. While a full mathematical description of converting from make-use
tables to transactions tables is beyond the scope of this chapter (and better left for the
teacher or student to implement), the make-use framework forms the foundation of the
commodity-by-industry transactions table introduced in the chapter. The matrix math
involved in transforming make-use tables into transactions tables internalizes and allocates
the multiple productions and uses of commodities into a unified set of production recipes.
Matrix math also allows us to generate different formats of transactions tables from the
original make-use format, e.g., an industry-industry or commodity-commodity table format.
The columns of a make table show the distribution of industries producing a commodity,
while the rows show the distribution of commodities produced by an industry. If you read
across the values of a row in the make table, you see all of the commodity outputs that each
sector makes. The make table might reveal that in fact several sectors are responsible for
"making" a particular commodity, e.g., a steel facility and a power plant may both produce
electricity. Figure 8A-1 shows an excerpt of actual data from the 2002 Make Table of the US
economy.13 It shows that the vast majority of farm commodities are produced by the farm
industry ($197 billion), and that billions of dollars of forestry commodities are produced in
both the farm and forestry sectors.

Industries / Commodities | Farms | Forestry, fishing, and related activities | Oil and gas extraction | Mining, except oil and gas | Support activities for mining | Utilities
Farms | 197,334 | 3,306 | ... | ... | ... | ...
Forestry, fishing, and related activities | 19 | 38,924 | ... | ... | ... | ...
Oil and gas extraction | ... | ... | 91,968 | 133 | 1,301 | ...
Mining, except oil and gas | ... | ... | ... | 47,270 | 163 | ...
Support activities for mining | ... | ... | 33 | 86 | 32,074 | ...
Utilities | ... | ... | 33 | ... | ... | 316,527

Figure 8A-1: Excerpted 2002 Make Table of the US economy ($millions)

13 The Make and Use Tables in Figures 8A-1 and 8A-2 are excerpted from aggregated tables with about 80 sectors of resolution, not the 428 sectors in the benchmark tables. Fewer than 10 commodity and industry sectors are excerpted, so 70 other columns of data are not shown.

Use Tables follow the "production recipe" style mentioned earlier in the chapter. If you read down the values of a column in the Use Table, you see all of the economic inputs needed from other sectors, i.e., you see how much the industry sector uses from the commodity sectors. The rows show where the commodity outputs of sectors are used. Figure 8A-2 shows excerpted but actual data from the 2002 Use Table of the US economy. It shows that the utilities industry, which includes power generation, uses billions of dollars of oil, gas, and mined (coal) commodities. It also shows that a large share of the production of wood products ($18 billion) was used by the construction industry in 2002.
[Figure 8A-2 appears here as a use table excerpt, with commodity rows (Farms; Oil and gas extraction; Mining, except oil and gas; Support activities for mining; Utilities; Construction; Wood products; Nonmetallic mineral products) and industry columns (Oil and gas extraction; Mining, except oil and gas; Utilities; Construction), including the roughly $18 billion of wood products used by the construction industry noted above.]

Figure 8A-2: Excerpted 2002 Use Table of the US economy ($millions)

Make and Use Tables often exist with classifications of "before and after redefinitions". The
Figures above are both before redefinitions. While the various methods of redefinition vary
by the agency creating the tables, typically the process of redefinition involves carefully
remapping secondary activities within established sectors to other sectors. As an example,
the hotel industry typically has restaurants and laundry services on site, which are
represented by separate sectors in tables. As the data available supports it, activities within
the hotel industry are re-mapped into those other sectors (e.g., food purchases are switched
from the hotel industry to the restaurants industry). This affects both the make and use
tables, and the industry outputs are different between the versions of the tables with and
without redefinitions. In the end, some sectors' production recipes are basically unchanged
by redefinitions, while others are substantially changed. Since such redefinitions lead to
better-linked representations of the activities that could lead to energy and environmental
impacts, they are typically the basis of IO-LCA models.

Section 3 Further Detail on Prices in IO-LCA Models


Adjusting Values to Match Basis Year of EIO Models
As represented in Figure 8-2, one of the critical inputs to an EIO model is an increment of
final demand to be studied. The appropriate "unit" of this final demand is a currency-valued
input the same year as that of the model. If using a 2002 US EIO model, then a final
demand in 2002 dollars is needed. If you are using the model to assess the impacts of
automobile production in 2013, then you need to find a method of adjusting from 2013 to
2002 dollars for the final demand, since it is likely that prices in sectors have changed
significantly since the year of the model. However since the intention is to perform a
screening-level analysis, you can exploit the fact that production recipes (technologies) do
not change quickly, and assume that the only relevant difference between a 2013 and 2002
vehicle is the price (producer or purchaser).
For such conversions, the appropriate type of tool is an economic price index or GDP
deflator for a particular sector. These are generally available from national economic
agencies (in the US they are provided by the BEA, the same agency that creates the input-output tables, which leads to consistent comparisons). Note that an overall national price
index or GDP deflator may be the only such conversion factor available. In this case it can
be used but the adjustment should be clearly documented as using this national average
rather than a sector-specific value.
A full discussion of price indices and deflators is beyond the scope of this book, but they are typically represented as values relative to a "base year" with an index value of 100, with values for years before and after the base year. Such values could be, e.g., 98 and 102, which, if before and after the base year, would suggest annual price changes of about 2% per year.
It is the percentage equivalent values of the index values that are useful when using indexes
to adjust values from present day back to the basis year of EIO models. Note that the base
year of the index does not need to match the year of the EIO model; as long as you can use
the index values to adjust dollar values back and forth, you can adjust current values back to
the appropriate final demand value for the right year (or for any other year you might care
about). Equation 8A-1 shows how you can convert values from one basis year to another
using a price index (or a GDP deflator represented with a base=100 format):
Value(year 1) / Value(year 2) = Index(year 1) / Index(year 2)

(8A-1)

For example, assume the average retail price of automobiles in 2011 is known to be $30,000,
and we want to find the corresponding retail price to be used as the final demand in a 2002
US EIO purchaser basis model so that we could try to estimate the effects of manufacturing
a single automobile in 2011. The BEA provides spreadsheets of various economic timeseries estimates, such as Gross Output, including price indices, by sector14. For example,
price index values for the Automobile manufacturing sector (#336111), of 98.75 for 2002 and
99.4 for 2011, are shown in Figure 8A-3.

14 As of 2014 the file, named GDPbyInd_GO_NAICS_1998-2011.xls, can be found at http://www.bea.gov/industry/gdpbyind_data.htm


Year | Index Value
1998 | 100.645
1999 | 100.451
2000 | 101.488
2001 | 100.902
2002 | 98.750
2003 | 98.748
2004 | 100.286
2005 | 100.000
2006 | 97.168
2007 | 96.280
2008 | 98.185
2009 | 99.918
2010 | 98.653
2011 | 99.400

Figure 8A-3: Price Index Values for Automobile Manufacturing Sector, 1998-2011 (Source: BEA)

Thus, the converted 2002 value can be found by applying Equation 8A-1, as shown in Equation 8A-2:

Value(2011) / Value(2002) = Index(2011) / Index(2002), i.e., $30,000 / Value(2002) = 99.4 / 98.75

(8A-2)

In this case, the adjusted value for 2002 is $29,800. It may be surprising that the price level
has been almost unchanged over those ten years! One could instead use the negligible (less
than 1%) price level change as the basis of an assumption to ignore the need for adjustment
and just use $30,000 directly as the input final demand into the model.
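The adjustment in Equations 8A-1 and 8A-2 is simple to script. The helper below is a sketch (the function name is our own), using the automobile manufacturing index values shown in Figure 8A-3:

```python
def adjust_value(value, index_from, index_to):
    """Rescale a currency value between years using a price index (Equation 8A-1)."""
    return value * index_to / index_from

# Adjust the $30,000 2011 retail price to 2002 dollars with the automobile
# manufacturing price index (99.400 in 2011, 98.750 in 2002).
price_2002 = adjust_value(30_000, index_from=99.400, index_to=98.750)
print(round(price_2002))  # 29804, i.e., roughly the $29,800 found above
```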


Differences between producer and purchaser models


It is important to understand the multiple ways in which prices can be defined in input-output systems. While we primarily discussed and assumed the producer basis in Chapter 8,
there are other ways as well, as defined by the UN System of National Accounts (UN 2009):
Basic prices are the amount received by a producer from a purchaser for a good or service,
minus any taxes, and plus any subsidies (this is referred to as net taxes). This basis typically
excludes transport charges that are separately invoiced by the producer. You might more
simply consider basic prices as the raw value of a product before taxes or subsidies are
considered. Producer prices are the amount received by a producer from a purchaser, plus
any taxes and minus any subsidies. Producer prices are equivalent to the sum of basic prices
and net taxes. Finally, Purchaser prices are the amount paid by the purchaser and include
the cost of delivery (e.g., transportation costs) as well as additional amounts paid to
wholesale and retail entities to make it available for sale. These transportation and
wholesale/retail components are referred to as margins.
Example: Basic, producer and purchaser prices
To illustrate these different ways of describing prices, consider this example from
Statistics New Zealand (2012). Generic currency units of $ are used.
Figure 8A-4: Composition of basic, producer, and purchaser prices
Item | Amount
Basic price | 12
+ Taxes on product, except sales tax or VAT |
− Subsidy on products |
= Producers price | 8
+ Sales tax or VAT |
+ Transport charges and trade margins paid by purchaser |
= Purchasers price | 13

Note: VAT = Value Added Tax (used in many parts of the developed world)

In this example, the seller is actually able to retain $12 for the product (basic price). The sales transaction takes place at $8 (producers price). The seller gets an additional $4 from the subsidy, less the tax. The purchaser has to pay $13 to take possession of the good (purchasers price), with $5 going to non-deductible taxes and transport charges and trade margins.


With respect to LCA, as we discussed, a producer basis is a cradle to gate perspective, while
a purchaser basis adds in transport and wholesale/retail operations and is thus cradle to
consumer (assuming pickup at store). In many cases, purchaser and producer prices would
be approximately the same. However, if transportation costs or retail markups needed to
bring the product to market are significant, there would be a difference.
Figure 8A-4 lists some typical differences between producer prices and purchaser prices in the 1997 US benchmark table. Note that these differences between producer and purchaser price basis values are not provided to help "convert" values between the models as done above with price indexes, but instead to emphasize the reasons why the prices and model results are different. Service sectors like barber shops have identical producer and purchaser prices. On the other hand, the purchaser price of furniture is roughly split 50-50 between manufacture and the wholesale/retail activities needed to market the product.

Item | Producer Price | Transportation Cost | Wholesale and Retail Trade Margin | Purchaser Price | Producers Price / Purchasers Price (%)
Shoes | 18,333 | 179 | 21,748 | 40,259 | 45
Barber shops | 31,246 | | | 31,246 | 100
Furniture | 28,078 | 235 | 27,648 | 55,960 | 50

Figure 8A-4: Differences in Producer and Purchaser Prices (Millions of 1997 Dollars in Sector Output)

When a purchaser price model is used, a dollar of final demand input to a single sector is converted behind the scenes into these shares of the various underlying sectors (creating a final demand vector with multiple entries instead of just a single value for the production sector). For example, $1,000 of final demand of furniture in a purchaser price model will actually have a final demand of about $500 to the furniture manufacturing sector, a small amount for transportation, and about $500 to the wholesale and retail trade sectors together. This will be discussed in more detail in Section 5. Depending on the energy or
environmental intensity of the various sectors, the results for a given final demand using a
producer versus purchaser model may be higher, lower, or about the same. For example,
truck transportation (one of the margin sectors included in a purchaser basis model) is fairly
carbon intensive. If the purchased product has significant transportation requirements, the
purchaser model might have higher emissions than the producer model.
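As an illustration of that behind-the-scenes conversion, the sketch below splits $1,000 of purchaser-price furniture demand into producer-price components, using the 1997 sector totals from Figure 8A-4 as proxy shares (a simplification for illustration; actual models apply commodity-specific margin data):

```python
# Furniture row of Figure 8A-4 ($millions): producer value, transportation
# cost, wholesale/retail margins, and the purchaser-price total.
producer, transport, margins, purchaser = 28_078, 235, 27_648, 55_960

demand = 1_000.0  # $1,000 of furniture in purchaser prices
y = {
    "furniture manufacturing": demand * producer / purchaser,
    "transportation": demand * transport / purchaser,
    "wholesale/retail trade": demand * margins / purchaser,
}
# Yields roughly $502 to manufacturing, $4 to transport, and $494 to trade.
```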
References

Statistics New Zealand, "Introduction to Price Indices", online course materials, http://unstats.un.org/unsd/EconStatKB/KnowledgebaseArticle10351.aspx, posted June 14, 2012. Last accessed January 10, 2014.

United Nations, System of National Accounts 2008. New York, 2009.


Section 4 Mapping Examples from Industry Classified Sectors to EIO Model Sectors
The organization of data for IO-LCA models is a substantial exercise requiring quality checking and assurance processes. Economic matrices (e.g., an A matrix) are typically provided directly by agencies and at worst require minimal conversion or preparation for use in EIO models. R matrices, on the other hand, require significant effort. In this section, we focus on explaining how the various industry classification methods map to each other in support of making these matrices.
In the US, the primary classification scheme for industries (and the businesses within them)
is the North American Industry Classification System (NAICS). While the US government
has officially decreed that all industry data collection efforts shall use NAICS, some data
sources have not yet completely converted to this system. NAICS is a hierarchical
classification system with values ranging from 2 to 6 digits. Sectors are broadly categorized
by the first two digits, and then sub-classified by appending additional digits. For example,
manufacturing sectors start with the first two digits 31-33. Three-digit sector numbers (e.g., 311, 312, …, up to 339) further classify manufacturing into activities like food manufacturing and miscellaneous manufacturing. The three-digit sector values can be similarly broken up into more specific manufacturing categories, which can be described with 4-digit sector numbers (e.g., 3111, 3112, etc.). Six-digit sectors are the most detailed (and
least aggregated) classifications of activity in the economy. For example, the Automobile
manufacturing sector discussed at various times in this chapter is classified hierarchically in the
NAICS system as follows:
NAICS 33      Manufacturing (note 31-33 are all classified in the same way)
NAICS 336     Transportation equipment manufacturing
NAICS 3361    Motor vehicle manufacturing
NAICS 33611   Automobile and light truck manufacturing
NAICS 336111  Automobile manufacturing

There are of course many other complementary manufacturing subsectors throughout that
hierarchy that are not shown. The full official US Census Bureau NAICS classification is
available on the Internet (at http://www.census.gov/eos/www/naics/).
While the Census Bureau (via BEA) is also the creator of the input-output tables, they do
not simply define the sectors of the input-output table to correspond precisely to 6-digit
NAICS industries or commodities. As mentioned in the chapter, they balance available
resources against the need to produce a sufficiently detailed input-output table. Thus of the
428 sectors in the 2002 US input-output model, relatively few correspond directly to 6-digit
NAICS codes (though these are mostly in the manufacturing sectors), many IO sectors map
to 5-digit NAICS codes, and a significant number map to 3- and 4-digit level NAICS codes.
Beyond the mapping of IO sectors to n-digit NAICS level, many IO sectors are not simple
one-to-one mappings, meaning the IO sectors represent aggregations of multiple underlying
NAICS codes. Figure 8A-5 shows a summary of how NAICS codes map into the first set of
sectors of the 2002 US IO detailed models from BEA. In the left hand column is a subset
of the hierarchical classifications of IO sectors.
I-O Industry Code and Title | Related 2002 NAICS Codes
11 AGRICULTURE, FORESTRY, FISHING AND HUNTING |
1110 Crop production |
1111A0 Oilseed farming | 11111-2
1111B0 Grain farming | 11113-6, 11119
111200 Vegetable and melon farming | 1112
1113A0 Fruit farming | 11131-2, 111331-4, 111336*, 111339
111335 Tree nut farming | 111335, 111336*
111400 Greenhouse, nursery, and floriculture production | 1114
111910 Tobacco farming | 11191
111920 Cotton farming | 11192
1119A0 Sugarcane and sugar beet farming | 11193, 111991
1119B0 All other crop farming | 11194, 111992, 111998

Figure 8A-5: Correspondence of Crop Production NAICS and IO Sectors, 2002 US Benchmark Model

(Source: Appendix A)

In the right hand column are the NAICS-level sectors that are mapped into each of the detailed IO sectors. For example, two 5-digit NAICS sectors (11111 and 11112) map into the Oilseed farming sector. A single 4-digit sector (1112) maps into the Vegetable and melon farming sector. Various 5- and 6-digit level NAICS sectors map into the Fruit farming IO sector. The asterisk next to 111336 notes that output from that sector is not 1:1 mapped into a single sector. As you can see, some of NAICS 111336's output is mapped into the Tree nut farming sector below it. Fortunately, the names of the IO sectors tend to be very similar or identical to the NAICS sector names (not shown above but available at the Census NAICS URL above), so following the mapping process is a bit easier.
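In code, one simple way to apply such a correspondence is a longest-prefix lookup. The dictionary below is a small excerpt of Figure 8A-5 (and ignores the asterisked split sectors, which require explicit allocation rather than a one-to-one lookup):

```python
# Excerpt of the NAICS-to-IO correspondence (NAICS prefix -> IO sector code).
NAICS_TO_IO = {
    "11111": "1111A0",  # oilseed farming
    "11112": "1111A0",
    "1112": "111200",   # vegetable and melon farming
    "1114": "111400",   # greenhouse, nursery, and floriculture production
}

def io_sector(naics_code):
    """Return the IO sector for a NAICS code via longest matching prefix."""
    for length in range(len(naics_code), 1, -1):
        io = NAICS_TO_IO.get(naics_code[:length])
        if io is not None:
            return io
    return None

print(io_sector("111219"))  # a 6-digit vegetable farming code -> 111200
```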
While the discussion above is motivated by how the IO transactions tables are created, it is
also critical to understand because of how the classifications and mappings affect the
creation of R matrices. Since each value of an R matrix is in units of effects per currency
unit of output for a sector, we need to ensure that data on energy and environmental effects

for a sector have been correctly mapped into IO sectors so that the R matrix values
(numerators and denominators) have been derived correctly.
Reconsider Example 8-4 from the chapter. Instead of trying to find SO2 emissions for the power generation sector, imagine you were deriving an R matrix of fuel use by sector. As defined in Figure 8A-5, the fuel use of the Oilseed farming IO sector (1111A0) would be found by finding data on the fuel use of NAICS sectors 11111 and 11112 and adding them together. Finally, this sum would be normalized by the output of the oilseed farming sector (from the Use Table), and the result would be the entry of the R matrix for that sector. As another example, the R matrix value for the Greenhouse, nursery, and floriculture production (111400) sector requires data on fuel use from just one 4-digit NAICS sector, 1114.
The mapping process for building R matrices seems simple, and conceptually it is. However, data for the required level of aggregation (4, 5, or 6-digit) is often unavailable. When you have data at one aggregation level, but need to modify it for use at another level, assumptions need to be made and documented.
If you have more detailed data but need more aggregated data, the process is generally simple: you can aggregate (sum) 6-digit NAICS data into a single 5-digit level. However, when the data only exists at an aggregate (e.g., 3- or 4-digit NAICS) level, you need to create ways of allocating the aggregate data into more disaggregated 4, 5, or 6-digit level sectors.
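For instance, rolling detailed data upward is a one-line sum, as in this sketch with hypothetical 6-digit fuel-use values (allocating downward, discussed next, is the harder direction):

```python
# Aggregating upward: hypothetical 6-digit NAICS fuel-use values
# (trillion BTU) summed to their parent 5-digit sector, NAICS 31111.
fuel_6digit = {"311111": 5.0, "311119": 7.0}
fuel_31111 = sum(fuel_6digit.values())
print(fuel_31111)  # 12.0
```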
The following example shows the challenges present in mapping available data to the
corresponding IO sectors. It represents actual data available for use in the 2002 US
benchmark IO model, at the 428 sector level. Figure 8A-6 shows an excerpt of the NAICS
to IO mapping for the 29 Food manufacturing sectors. For these 29 sectors, the required level
of aggregated NAICS data ranges from the 4 to 6-digit levels. Creating the R matrix for
every type of fuel would require at least 29 different energy values that would then be
divided by sectoral output. Figure 8A-7 shows available data on energy use from food
manufacturing sectors from the US Department of Energy (MECS 2002).


I-O Industry Code and Title | Related 2002 NAICS Codes
31 MANUFACTURING |
3110 Food manufacturing |
311111 Dog and cat food manufacturing | 311111
311119 Other animal food manufacturing | 311119
311210 Flour milling and malt manufacturing | 31121
311221 Wet corn milling | 311221
31122A Soybean and other oilseed processing | 311222-3
311225 Fats and oils refining and blending | 311225
311230 Breakfast cereal manufacturing | 311230
31131A Sugar cane mills and refining | 311311-2
311313 Beet sugar manufacturing | 311313
311320 Chocolate and confectionery manufacturing from cacao beans | 31132
311330 Confectionery manufacturing from purchased chocolate | 31133
311340 Nonchocolate confectionery manufacturing | 31134
311410 Frozen food manufacturing | 31141
311420 Fruit and vegetable canning, pickling, and drying | 31142
31151A Fluid milk and butter manufacturing | 311511-2
311513 Cheese manufacturing | 311513
311514 Dry, condensed, and evaporated dairy product manufacturing | 311514
311520 Ice cream and frozen dessert manufacturing | 311520
31161A Animal (except poultry) slaughtering, rendering, and processing | 311611-3
311615 Poultry processing | 311615
311700 Seafood product preparation and packaging | 3117
311810 Bread and bakery product manufacturing | 31181
311820 Cookie, cracker, and pasta manufacturing | 31182
311830 Tortilla manufacturing | 31183
311910 Snack food manufacturing | 31191
311920 Coffee and tea manufacturing | 31192
311930 Flavoring syrup and concentrate manufacturing | 31193
311940 Seasoning and dressing manufacturing | 31194
311990 All other food manufacturing | 31199

Figure 8A-6: Correspondence of Food Manufacturing NAICS and IO Sectors, 2002 US Benchmark Model (Source: Appendix A)


NAICS Code | Sector Name | Total | Net Electricity | Residual Fuel | Distillate Fuel | Natural Gas
311 | Food | 1,116 | 230 | 13 | 19 | 575
311221 | Wet Corn Milling | 217 | 23 | * | * | 61
31131 | Sugar | 111 | 2 | 2 | 1 | 22
311421 | Fruit and Vegetable Canning | 47 | 7 | 1 | 1 | 36

Figure 8A-7: NAICS Level Fuel Use Data From Manufacturing Energy Consumption Survey, units: trillion BTU (Source: MECS 2002)

As you can see, the immediate challenge is that only 4 different sectors of results (rows) are available from MECS, the best available data source. One of the sectors in MECS is a value of energy use for the entire 3-digit NAICS Food manufacturing sector (311). Estimates of energy use are provided for only three more detailed food manufacturing sectors: Wet corn milling (311221), Sugar (31131), and Fruit and vegetable canning (311421). The reason why only these sectors were estimated is not provided, but presumably the choice again seeks to balance data quality, budget resources, and resulting resolution. Regardless, the MECS data provided for 311221 maps perfectly into IO sector 311221. The MECS data for 31131 needs to be split into values for IO sectors 31131A and 311313. The MECS data for 311421 can be put into IO sector 311420 (but may be missing data for sectors 311422, etc.). So the 5 and 6-digit data from MECS can at best provide specifically mapped data for 4 of the 29 sectors. For the remaining 25 sectors, we need to find a method to allocate total energy use data from the 3-digit NAICS level for 311 as shown in the first row of Figure 8A-7. It is not as easy as taking these values for sector 311 and allocating, because the values provided (e.g., 1,116 trillion BTU of total energy use) already include the energy use of the three detailed sectors below it in the table. Thus the energy of the other 25 sectors to be allocated is the difference between the values provided in the NAICS 311 row and the three detailed rows. In this example, total energy use of the 25 other sectors is 1,116 − 217 − 111 − 47, or 741 trillion BTU (with similarly calculated values for the other fuels).
In EIO-LCA, the allocation method used to distribute 741 trillion BTU of energy use into
the other 25 food manufacturing sectors is to use the dollar amounts from the 2002 Use
Table as weighted-average proxies for consumption of each energy source, which assumes
that each sector within a sub-industry is paying the same price per unit of energy. For the
case of the 1997 and 2002 Benchmark IO models for the US, more complete documentation
of how various effects have been derived is available in the EIO-LCA documentation
(http://www.eiolca.net/docs). Other IO-LCA models may make different assumptions to
allocate the available data.
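The residual-and-allocate procedure just described can be sketched as follows, using the totals from Figure 8A-7. The two sector outputs shown are hypothetical placeholders for the actual Use Table values, and with only two of the 25 sectors listed, the full residual is split between them purely for illustration:

```python
# Energy for NAICS 311 minus the three detailed MECS sectors (Figure 8A-7)
# leaves the residual to be spread over the remaining 25 IO sectors.
residual = 1_116 - 217 - 111 - 47          # 741 trillion BTU

# Allocate the residual in proportion to sectoral dollar output
# (hypothetical $million outputs for two of the 25 sectors).
outputs = {"311230": 9_000.0, "311513": 21_000.0}
total_output = sum(outputs.values())
energy = {s: residual * x / total_output for s, x in outputs.items()}
# Each sector's share of the residual is proportional to its output.
```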
Hopefully this discussion of the inconsistent aggregation and organization of data for IO-based sectoral analysis helps you appreciate the complexity of creating models that are ultimately so simple to use!


Section 5 Spreadsheet and MATLAB Methods for Using EIO Models


In this section we provide an overview of using Microsoft Excel and MATLAB in support of the linear algebra / matrix math manipulations of IO models. This section is not intended to be an introduction to using these two software tools, or to linear algebra.
Modeling Using Microsoft Excel Software
Despite its cost, Microsoft Excel is a ubiquitous spreadsheet program already installed on many computers. Other spreadsheet programs (such as those from OpenOffice) use very similar methods to those described here. For relatively small projects, spreadsheets can be very useful in organizing LCA data and in assisting with matrix calculations.
In Excel, elements of vectors and matrices are easy to enter by hand (for small matrices) and
also by pasting in or importing data from other sources. A 1 row by 5 column or 5 row by 5
column area of a spreadsheet can be generated quickly. Returning to Example 8-2, the A
and I matrices, and vectors Y1 and Y2 could be entered into Excel as in Figure 8A-8.

Figure 8A-8: Data Entry in Microsoft Excel for Example 8-2

However, such entries, despite looking like a matrix, would not be treated as such in Excel.
All cells in Excel are by default treated individually. To be recognized as a vector or matrix,
Excel requires you to create arrays. This can be achieved in one of two ways. The most
convenient way of making re-usable arrays in Excel is to highlight the entire area of the
matrix (e.g., the 1 by 5 or 5 by 5 series of cells created above) and to use the built-in naming
feature of Excel. For example, we can highlight the cell range B2:C3 and then move the
cursor to the small box between the "Home" ribbon bar and cell A1 and type in "A", to
designate this set of cells as the A matrix, as shown in Figure 8A-9. The same can be done
for I, Y1, and Y2. Note, however, that you cannot use "Y1" and "Y2" as Excel names,
because those names already refer to actual cells (in column Y of the spreadsheet); you must
instead name them something like "Y_1" and "Y_2". These names of specific cells or
groups of cells help you to create more complex cell formulas as they act like aliases or
shorthand notations that refer to the underlying cell ranges. In practice, instead of having to
enter the cell range (e.g., B2:C3) and potentially making typos in formulas, you can instead
just use the name you have assigned. This is useful with matrix math because it is easier to
ensure you are multiplying the correct vectors and matrices by using their names instead of
cell ranges.

Figure 8A-9: Named A Matrix in Microsoft Excel for Example 8-2

Once you have made names for your data ranges, you can use built-in Excel matrix math
functions like multiplication and inversion. Addition and subtraction of vectors or matrices
of the same dimensions (m x n) can be done with the regular + and - operators. However, you
need to help Excel realize that you are making an array and set aside space for it to be
created based on knowing its dimensions. To find I+A as in Example 8-2, you need to first
select an unused cell range in your spreadsheet that is 2 x 2 (and optionally name it, e.g.,
IplusA, and press enter), then type the equal sign (=), enter the formula (I+A), and then
press CTRL-SHIFT-ENTER. This multi-step process tells Excel that you want the results of
the matrix operation I+A to be entered into your selected cell range, to add the previously
named references I and A, and to generate the result with array formulas (thus the
CTRL-SHIFT-ENTER at the end). The screenshots in Figures 8A-10 and 8A-11 show the
intermediate and final steps (before and after typing CTRL-SHIFT-ENTER) of this process
in Excel. Note that after CTRL-SHIFT-ENTER has been typed, Excel modifies the cell
formula such that curly brackets are placed around the formula, denoting the use of an array
function as applied to a cell in the named range.

Figure 8A-10: Entering Array Formula for Selected Area in Microsoft Excel for Example 8-2


Figure 8A-11: Result of Array Formula in Microsoft Excel for Example 8-2

Multiplication and inversion of matrices use the same multi-step process, but with the
built-in functions MMULT and MINVERSE. You can use the MMULT and MINVERSE
functions by typing them into the formula bar or by using the Excel "Insert->Function"
dialog box helper. As with the example shown above, as long as you first select the cell
range of the expected result (with the appropriate m x n dimensions), enter the formula, and
press CTRL-SHIFT-ENTER at the end, you will get the right results. You will see an error
(or a result in only one cell) if you skip one of the steps. While a bit cumbersome, using
array functions in Excel is straightforward and very useful for small vectors and matrices.
Figure 8A-12 shows a screenshot where [I-A]^-1 and [I+A]*Y1 have been created.
E-resource: A Microsoft Excel file solving Examples 8-1 through 8-3 is posted to the
textbook website.

Figure 8A-12: Result of Array Formula in Microsoft Excel for Example 8-2

Note that you can perform vector and matrix math without using the Excel name feature.
In this case, you would just continue using regular cell references (e.g., B2:C3 for the A
matrix in the screenshot above). All of the remaining instructions are the same.

Brief MATLAB Tutorial for IO-LCA Modeling


This short primer on using MathWorks MATLAB is no substitute for a more complete
lesson or lecture on the topic but will help you get up to speed quickly. It presumes you
have MATLAB installed on a local computer with the standard set of toolboxes (no special
ones required). MATLAB, unlike Microsoft Excel, is a high-end computation and
programming environment that is often used when working with large datasets and matrices.
It is typically available in academic and other research environments.
When MATLAB is run, the screen is split into various customizable windows. Generally
though, these windows show:

• the files within the current directory path,
• the command window interface for entering and viewing results of analysis,
• a workspace that shows a listing of all variables, vectors, and matrices defined in the
  current session, and
• a history of commands entered during the current session.

In this tutorial, we focus on the command line interface and the workspace. Despite the
brevity of the discussion included here, one could learn enough about MATLAB in an hour
to replicate all of the Excel work above.
MATLAB has many built-in commands and, given its scientific computing specialties, is
designed to operate on very large (thousands of rows and columns) matrices.
Some of the most useful commands and operators for use with EIO models in MATLAB
are shown in Figure 8A-11. Many commands have an (x,y) notation where x refers to rows
and y refers to columns. Others operate on whole matrices.
Working with EIO matrices in MATLAB involves defining matrices and using built-in
operators much the same way as was done in the Excel examples above. Matrices are
defined by choosing an unused name in the workspace and setting it equal to some other
matrix or the result of an operation involving commands on existing matrices. MATLAB
commands are entered at the command line prompt ( >> ) and executed by pressing
ENTER, or placed all in a text file (called a .m file) and run as a script. If commands are
entered without a semicolon at the end, then the results of each command are displayed on
the screen in the command window when ENTER is pressed. If the semicolon is added
before pressing ENTER, then the command is executed, but the results are not shown in
the command window. One could look in the workspace window to see the results.

Life Cycle Assessment: Quantitative Approaches for Decisions That Matter lcatextbook.com

234

Chapter 8: LCA Screening via Economic Input-Output Models

Command      Description of Result

zeros(x,y)   creates a matrix of zeros of size (x,y). This is also useful to "clear out" an
             existing matrix.
ones(x,y)    same as zeros, but creates a matrix of all ones.
eye(x)       creates an identity matrix of size (x,x). Note the command is not I(x), a
             common confusion.
inv(X)       returns the matrix inverse of X.
diag(X)      returns a diagonalized matrix from a vector X, i.e., where the elements of the
             vector are the diagonal entries of the matrix (like the identity matrix).
sum(X)       returns an array with the sum of each column of the input matrix. If X is a
             vector, the command returns the sum of its elements.
size(X)      tells you the size of a matrix, returning (number of rows, number of columns).
             This is useful if you want to verify the row and column sizes of a matrix
             before performing a matrix operation.
A'           performs a matrix transpose on A, inverting the row and column indices of all
             elements of the matrix.
A*B          multiplies matrices A and B in left-to-right order with the usual linear
             algebra rules.
A.*B         element-wise multiplication instead of matrix multiplication, i.e., A(1,1) is
             multiplied by B(1,1) and the result put into element (1,1) of the new matrix
             (A and B must be the same size).
[A,B]        concatenates A and B horizontally.
[A;B]        concatenates A and B vertically.
clear all    empties out the workspace and removes all vectors, matrices, etc. Like a reset.

Figure 8A-11: Summary of MATLAB Commands Relevant to EIO Modeling
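For readers following the MATLAB material in another language, rough pure-Python analogues of a few of the commands in Figure 8A-11 are sketched below. These helper names are our own, purely illustrative, and not part of any MATLAB or textbook distribution:

```python
# Illustrative pure-Python analogues of a few MATLAB commands from
# Figure 8A-11, for readers without MATLAB.
def zeros(x, y):
    # zeros(x,y): an x-by-y matrix of zeros
    return [[0.0] * y for _ in range(x)]

def eye(x):
    # eye(x): an x-by-x identity matrix
    return [[1.0 if r == c else 0.0 for c in range(x)] for r in range(x)]

def diag(v):
    # diag(X): square matrix with the vector's elements on the diagonal
    n = len(v)
    return [[v[r] if r == c else 0.0 for c in range(n)] for r in range(n)]

def colsums(M):
    # sum(X): the sum of each column of a matrix
    return [sum(row[c] for row in M) for c in range(len(M[0]))]

def size(M):
    # size(X): (number of rows, number of columns)
    return (len(M), len(M[0]))

print(eye(2))                      # [[1.0, 0.0], [0.0, 1.0]]
print(diag([50.0, 5.0]))           # a diagonalized impact-factor vector
print(colsums([[1, 2], [3, 4]]))   # [4, 6]
```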

In this section, courier font is used to show commands typed in to, or results returned
from, MATLAB. For example, the following commands, entered consecutively, would
"clear out" a matrix named "test_identity" and then populate its values as a 2x2 identity
matrix:
>> test_identity=zeros(2,2)
>> test_identity=eye(2)


and the results consecutively displayed would be:


test_identity =

     0     0
     0     0

test_identity =

     1     0
     0     1

The format of matrices displayed in MATLAB's command window is just as one would
write them in row and column format. Matrices are populated with values by either
importing data (not discussed here) or by entering values in rows and columns, where
columns are separated by a space and rows by a semicolon. For example, the following
command would create a 2x2 identity matrix:
identity_2 = [1 0; 0 1]
which would return the following result in the command window:
identity_2 =

     1     0
     0     1

The workspace window has a list of all vectors or matrices created in the session. All are
listed, and for small matrices individual values are shown. For larger matrices, only
dimensions (m x n) are shown. Display of the dimensions is useful to ensure that you do
not try to perform operations on matrices with the wrong number of rows and columns.
Double clicking on a vector or matrix in the workspace opens a new window with a tabbed
spreadsheet-like view of its elements (called the Variable Editor). It is far easier to diagnose
problems in this editor window than in scrolling through the results in the command
window, which can be overwhelming to read with many rows and columns.
As discussed above, commands can be run from a text file containing a list of commands.
Code is written into such files and saved to a filename with a .m extension. To run .m files,
you navigate within the current directory path window until your .m file is visible. Then in
the command window, you type in the name of the .m file (without the .m extension) and hit
ENTER. MATLAB then treats the entire list of commands in the file as a script and runs it
sequentially. Depending on your needs, you may or may not need semicolons at the end of
lines (but usually you will include semicolons so the command window does not become
cluttered as results speed by in the background). Any commands without semicolons will
have their results shown in the command window. If semicolons are always included, the
results can be viewed via the workspace.
As a demonstration, one possible sequence of commands to complete Example 8-1 (either
entered line by line or run as an entire .m file) is:
Z=[150 500; 200 100];
X=[1000 2000; 1000 2000];
A=Z./X;
A command sequence for Example 8-2 is (assuming commands above are already done):
y1=[100; 0];
y2= [0; 100];
direct=eye(2)+A;
L=inv(eye(2)-A);
directreq1=direct*y1;
directreq2=direct*y2;
totalreq1=L*y1;
totalreq2=L*y2;
where the final 4 commands create the direct and total requirements for Y1 and Y2. A
command sequence for Example 8-3 is (assuming commands above are already done):
R=[50 5];
R_diag=diag(R);
E_direct_Y1 = R_diag*directreq1;
E_direct_Y2 = R_diag*directreq2;
E_total_Y1=R_diag*totalreq1;
E_total_Y2=R_diag*totalreq2;
E_sum_Y1=sum(E_total_Y1);
E_sum_Y2=sum(E_total_Y2);
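These command sequences can also be replicated outside MATLAB as a sanity check on the numbers. A minimal pure-Python sketch of Examples 8-1 through 8-3 (same Z, X, R, and y1 data as above; the 2x2 inverse is written out via the determinant formula rather than calling a library):

```python
# Pure-Python replication of the MATLAB command sequences above.
# Example 8-1: A = Z ./ X (element-wise division)
Z = [[150.0, 500.0], [200.0, 100.0]]
X = [[1000.0, 2000.0], [1000.0, 2000.0]]
A = [[Z[r][c] / X[r][c] for c in range(2)] for r in range(2)]

# Example 8-2: L = inv(I - A), then total requirements for y1 = [100; 0]
IminusA = [[(1.0 if r == c else 0.0) - A[r][c] for c in range(2)]
           for r in range(2)]
det = IminusA[0][0] * IminusA[1][1] - IminusA[0][1] * IminusA[1][0]
L = [[ IminusA[1][1] / det, -IminusA[0][1] / det],
     [-IminusA[1][0] / det,  IminusA[0][0] / det]]
y1 = [100.0, 0.0]
totalreq1 = [L[r][0] * y1[0] + L[r][1] * y1[1] for r in range(2)]

# Example 8-3: E_total = R_diag * totalreq1, then sum over sectors
R = [50.0, 5.0]
E_total_Y1 = [R[r] * totalreq1[r] for r in range(2)]
E_sum_Y1 = sum(E_total_Y1)
print(totalreq1)   # about [125.4, 26.4]
print(E_sum_Y1)    # about 6402.6
```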

EIO-LCA in MATLAB


The EIO-LCA model in a MATLAB environment is available as a free download from the
website (www.eiolca.net). The full 1997 model in MATLAB is available directly for
download, as is a version of the 2002 model excluding energy and GHG data. The 2002
MATLAB model with energy and GHG data is available
for free for non-commercial use via a clickable license agreement on the www.eiolca.net
home page (teachers are encouraged to acquire this license and the MATLAB file for local
distribution but to make non-commercial license terms clear to students).
Within the downloaded material for each model are .mat files with the vectors and matrices
needed to replicate the results available on the www.eiolca.net website, and MATLAB code
to work with producer and purchaser models. MATLAB .m files named EIOLCA97.m and
EIOLCA02.m are scripts for the 1997 and 2002 models, respectively, to generate results
similar to what is available on the website.
For example, running the EIOLCA97.m file in the 1997 MATLAB model will successively
ask whether you want to use the producer or purchaser model, which vector (economic,
GHG, etc.) to display, and how many sectors of results (e.g., all 491 or just the top 10).
Before running this script, you need to enter a final demand into one or more of the 491
sectors in the SectorNumbers.xls spreadsheet file. Results will be saved into a file called
EIOLCAout.xls in the 1997 MATLAB workspace directory. Note: to run the
EIOLCA97.m script file, you must be running MATLAB under Windows, either directly or
on a Mac via Boot Camp or virtualization software such as Parallels, since it uses
Microsoft Excel read and write routines only available on Windows.
matrices in the 1997 model though are accessible to MATLAB on any platform. Due
to these limitations, and the age of the data in the 1997 model, this section focuses on
the 2002 MATLAB model (but similar examples and matrices exist in the 1997
model).
Similarly, running the EIOLCA02.m file in the 2002 MATLAB model files will successively
ask whether you want to use the producer (industry by commodity basis default) or
purchaser model, the name of the vector variable that contains your final demand (which
you will need to set before running the .m file), and what you would like to name the output
file. Note the 2002 MATLAB model can be run on any MATLAB platform (not just
Windows).
Before running this script, you need to create and enter a final demand into one or more of
the 428 sectors. The following MATLAB session shows how to use the EIOLCA02.m
script to model $1 million of final demand into the Oilseed farming sector. All lines beginning
with >> show user commands (and as noted above the user also needs to choose between
the producer and purchaser models, and give names for the final demand vector and a
named txt file for output highlighted in green). Before running this code, you will need to
change the current MATLAB directory to point to where you have unzipped the MATLAB
code.
>> y=zeros(428,1);
>> y(1,1)=1;
>> EIOLCA02
Welcome to EIO-LCA
This model can be run in 2002 $million producer or
purchaser prices.
For producer prices, select 1.  For retail (purchaser)
prices, select 2.
Producer or Purchaser prices?  1

Name of the 428 x 1 final demand vector  y

Output file name? (include a ".txt")
Filename  xout.txt

Total production input is:  1 $M2002, producer prices

The resulting xout.txt file shows the total supply chain results across all sectors for $1
million of final demand in all data vectors available in the MATLAB environment (which
would match those on the website), all in one place. This file can be imported into
Microsoft Excel as semicolon-delimited text for more readable output and for easier
comparison to the results on the website. An excerpt of rows and columns from this file is
shown in Figure 8A-12 (sorted by sector number):


Sector    Sector Name                          Total econ, $M   Total Energy, TJ   GHG Emissions, mt CO2e

          Total, All Sectors                   2.1              16.1               3029.8
1111A0    Oilseed farming                      1.1              8.4                2218.7
1111B0    Grain farming                        0.0              0.2                91.1
111200    Vegetable and melon farming          0.0              0.0                0.2
111335    Tree nut farming                     0.0              0.0                0.1
1113A0    Fruit farming                        0.0              0.0                0.3
111400    Greenhouse and nursery production    0.0              0.0                0.5
111910    Tobacco farming                      0.0              0.0                0.4
111920    Cotton farming                       0.0              0.2                43.7
1119A0    Sugarcane and sugar beet farming     0.0              0.0                0.3
1119B0    All other crop farming               0.0              0.0                3.5

Figure 8A-12: First 10 Sectors of Output from EIOLCA02.m script for $1M of oilseed farming

The script .m files have much useful information in them, should you care to follow the
code. For example, you can see the specific matrix math combinations used to generate the
producer and purchaser models and their direct and total requirements matrices used in
EIO-LCA.
Instead of using the provided .m script files, the MATLAB workspaces for 1997 and 2002
can be used on any MATLAB platform to do tailored modeling using the various vectors
and matrices. For example, you may want to generate just the total GHG emissions for $1
million of oilseed farming in the same EIO-LCA 2002 model. Total GHG emissions are in
the matrix called EIvect, in row 7 (rows 1-6 are the various energy vector values and rows
7-12 are the various GHG emission vector values):
>> clear all
>> load EIO02.mat
>> y=zeros(428,1);
>> y(1,1)=1;
>> x=L02ic*y;
>> E=EIvect(7,:)*x
which returns:
E = 3.0298e+03
This is the same value as in the first row of the last column of Figure 8A-12.

Likewise, you might be interested in generating the total GHG emissions across the supply
chain for $1 million into each of the 428 sectors:
>> allsects=EIvect(7,:)*L02ic;
which returns a 1x428 vector containing the requested 428 values (where the data in column
1 is the same as above for oilseed farming). This simple one-line MATLAB instruction
works because the 1x428 row vector chosen from EIvect (total GHG emissions factors per
$million for each of 428 sectors) is multiplied by the column entries in the total requirements
matrix for each of the sectors, and the result is the same as finding the total GHG emissions
across the supply chain as if done one at a time. The first four values in this vector (rounded)
are:
[3030 4470 1303 1329],
representing the total GHG emissions for $1 million of final demand into the first 4 (of 428)
sectors in EIO-LCA. Much more is possible with the available economic and
environmental/energy flow matrices than can be done on the website or with the included
script file. For example, you could do a similar analysis as above with the purchaser-based
model to find the results of $1 million of final demand in every sector.
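The equivalence behind this one-line form is easy to verify on a small system. A pure-Python sketch with toy 2x2 data (stand-ins for EIvect(7,:) and L02ic, not the real 428-sector matrices), comparing the row-vector form against sector-by-sector model runs:

```python
# Verify that a row of emission factors times the total requirements
# matrix equals running the model one sector at a time.
# Toy 2x2 stand-ins (not the real 428-sector data).
L = [[1.2541, 0.3300],
     [0.2640, 1.1221]]   # a small "total requirements" matrix
R_row = [50.0, 5.0]      # per-$M emission factors for one impact row
n = 2

# One-line form: allsects = R_row * L (a 1 x n row vector)
allsects = [sum(R_row[i] * L[i][j] for i in range(n)) for j in range(n)]

# Sector-by-sector form: x = L * y_j for unit demand y_j, then R_row . x
one_at_a_time = []
for j in range(n):
    y = [1.0 if k == j else 0.0 for k in range(n)]
    x = [sum(L[i][k] * y[k] for k in range(n)) for i in range(n)]
    one_at_a_time.append(sum(R_row[i] * x[i] for i in range(n)))

print(allsects)
print(one_at_a_time)   # identical to allsects
```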


Chapter 9: Advanced Life Cycle Models


In this chapter, we define alternative approaches for LCA using advanced methods such as
process matrices and hybrid analysis. Process matrices organize process-specific data into
linear systems of equations that can be solved with matrix algebra, and represent a significant
improvement over traditional process flow diagram approaches. Hybrid LCA models,
combining process and input-output based methods, offer ways to leverage the advantages
of the two methods while minimizing disadvantages. Three approaches to hybrid LCA
modeling are presented, with the common goal of combining types of LCA models to yield
improved results. The approaches vary in their theoretical basis, the ways in which the
submodels are combined, and how they have been used and tested.

Learning Objectives for the Chapter


At the end of this chapter, you should be able to:
1. Define, build, and use a process matrix LCI model from available process flow data.
2. Describe the advantages of a process matrix model as compared to a process flow
diagram based model and an input-output based model.
3. Describe the various advantages and disadvantages of process-based and IO-based
LCA models.
4. Classify the various types of hybrid models for LCA, and how they combine
advantages and disadvantages of process and IO-based LCA models.
5. Suggest an appropriate category of hybrid model to use for a given analysis, including
the types of data and process-IO model interaction needed.

Process Matrix Based Approach to LCA


In Chapters 5 and 8 we introduced process-based and IO-based methods as two approaches
to performing life cycle assessment. The bottom-up process method presented in Chapter 5
(which is more widely referred to as the process flow diagram approach) is a fairly limited
application of the process method. It requires time to iteratively find each needed set of
process data and to follow the connections between processes. We found results
by summing effects from each included process in the diagram in a bottom-up method. On
the other hand, IO-LCA methods presented a distinct benefit in terms of delivering quick
and easy top-down results by exploiting matrix math methods to invert and solve the entire
upstream chain.
We can merge the concepts of input-output analysis with the data from a process flow
diagram approach to create linear systems of equations that represent a comprehensive set of
process models known as a process matrix.
Now that you have seen both process and IO methods, you might have already considered a
process matrix-based model. Conceptually, a process matrix model incorporates all available
process data (whether explicitly part of the process flow diagram or not) into the system.
The process matrix approach yields results similar to what would be expected if we added
more and more processes to the process flow diagram. However, as we will see, the process
matrix approach is able to improve upon the bottom up process diagram approach, as it can
model the interconnections of all processes, and as in IO methods, will be able to fully
consider the environmental flows of all upstream interconnections. The process matrix
approach thus gives us some of the benefits of an IO model system but with data from
explicit (rather than average) processes.
Before going further, we use the linear algebra introduced in Chapter 8 to re-define process
data and models. Figure 9-1 shows a hypothetical system with two processes, one that
makes fuel and one that makes electricity.15 This example is similar to the main example of
Chapter 5 that discussed making electricity from coal.

Figure 9-1: Process Diagrams for Two-Process System

Focusing on the purely technical flow perspective, process 1 takes the raw input of 50 liters
of crude oil, and process 2 takes an input of 2 liters of fuel. Likewise, the output flow
arrows show production of 20 liters of fuel and 10 kWh of electricity, respectively, in the
two processes (the emissions shown in the figure will be discussed later). In this scenario,
the functional units are the outputs, 20 liters of fuel and 10 kWh of electricity, since all of the
process data corresponds to those normalized values. Without any analysis we know that
fuel (process 1's output) must be produced to produce electricity (process 2's output).

15 Thanks to Vikas Khanna of the University of Pittsburgh for this example system.
As with any linear system, but especially for the types of analysis of interest in LCA, we need
to consider alternative amounts of outputs needed, and thus create a general way of scaling
our production higher or lower than the functional unit values above. That is, we do not
merely have to be constrained to produce 20 liters of fuel or 10 kWh of electricity. Once a
scaling factor is established, the output for any input, or the input for any output can be
found.
Within our process system, we initially consider only flows of product outputs through the
processes, e.g., fuel and electricity, not elementary flows. Thus for now we ignore necessary
crude oil input and the various emissions (again, we will consider these later). In such a
linear system we define a scaling factor vector X with values for each of the two processes,
X1 and X2, and the total net production across the system for each of the two outputs, Y1
and Y2. Here, Y1 is the total net amount of fuel produced, in liters. Y2 is the total net
amount of electricity produced, in kWh.
We can define a sign convention for inputs and outputs such that positive values are for
outputs and negative values are for inputs (i.e., product output that is input to other
processes in the system). Given this framework and notation, we define the following linear
system of equations which act as a series of physical balances given our unit process data:
20 X1 -  2 X2 = Y1                                  (9-1)
 0 X1 + 10 X2 = Y2

where the first equation mathematically defines that the total amount of fuel produced is 20
liters for every scaled unit process 1, net of 2 liters needed for every scaled unit produced in
unit process 2. Likewise the second equation defines that the total amount of electricity
produced is zero per scaled unit of process 1 and 10 kWh per scaled unit process 2. To scale
our functional unit-based processes (up or down), we would insert values for X1 and X2. In
general, these values could be fractions or multiples of the unit. If X1 = 1 and X2 = 1, we
would generate the identical outputs in the processes shown in Figure 9-1. If we wanted to
make twice as much fuel, then X1 = 2, which, for example would require 100 liters of crude
oil. If we wanted to make twice as much electricity as in the unit process equation (20 kWh),
then X2 = 2, requiring 4 liters of fuel input.
Similar to what was shown in Chapter 8 (and its Appendices) we use the generalized matrix
notation AX = Y to describe the system of Equations 9-1, as demonstrated in Heijungs
(1994) and Suh (2005). Now the matrix A, which in the process matrix domain is called the
technology matrix, represents the technical coefficients of the processes linking the
product systems together. In the system describing Figure 9-1 above,
A = [ 20  -2
       0  10 ]

Note the structure of the matrix. The functional units, representing the measured outputs of
each of the processes, are along the diagonal of A. Uses of outputs from other processes
within the system appear as the off-diagonal entries, e.g., -2 shows the fuel (output 1) used in the
process for making electricity (output 2).
To solve for the required scaling factor to produce a certain net final production in the
system, the linear system AX = Y is solved as in Chapter 8, by rearranging the linear system
equation and finding the inverse of A:
AX = Y  =>  X = A^-1 Y                              (9-2)

In this example, the inverse of A is:


A^-1 = [ 0.05   0.01
         0      0.1  ]

thus if we want to produce Y2 = 1,000 kWh of electricity, we can use Equation 9-2 to
determine what the total production in the system needs to be. In this case it is:
X = A^-1 Y = [ 0.05   0.01 ] [    0 ]  =  [  10 ]
             [ 0      0.1  ] [ 1000 ]     [ 100 ]

which says that to make 1,000 kWh of electricity in our system of two processes, then from a
purely technological standpoint, we would need to scale unit process 1 (fuel production) by a
factor of 10 (200 liters total) and to scale the electricity generation process 2
by a factor of 100. Within the system, of course, we would be making 200 liters of fuel, all
of which would be consumed as the necessary (sole) input into making 1,000 kWh of
electricity. Figure 9-2 shows this scaled up sequence of processes, including the dotted line
"connection" of the two processes. The processes are defined identically as those in Figure
9-1, but with all values scaled by 10 for process 1 and by 100 for process 2.
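The scaling calculation just described can be reproduced in a few lines of code. A minimal pure-Python sketch of X = A^-1 Y for the two-process system (with the 2x2 inverse written out directly):

```python
# Solve A X = Y for the two-process system of Equation 9-1:
# A = [[20, -2], [0, 10]], Y = [0, 1000] (net fuel, net electricity).
A = [[20.0, -2.0],
     [ 0.0, 10.0]]
Y = [0.0, 1000.0]

# 2x2 inverse via the determinant formula: det = 20*10 - (-2)*0 = 200
det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
A_inv = [[ A[1][1] / det, -A[0][1] / det],
         [-A[1][0] / det,  A[0][0] / det]]   # [[0.05, 0.01], [0.0, 0.1]]

# X = A^-1 Y: scaling factors for the two unit processes
X = [A_inv[0][0] * Y[0] + A_inv[0][1] * Y[1],
     A_inv[1][0] * Y[0] + A_inv[1][1] * Y[1]]
print(X)   # X1 = 10, X2 = 100, matching the text
```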


Figure 9-2: Scaled-Up and Connected Two-Process System

We could also use AX = Y notation for the linear system if we instead wanted to determine
the total net output given a set of scaling factors. For example, if X = [10 ; 100] (our result
from the example above) then:

Y = AX = [ 20   -2 ] [  10 ]  =  [    0 ]
         [  0   10 ] [ 100 ]     [ 1000 ]

So if we want to make 10 times the unit production of process 1 (200 liters of fuel), and 100
times the unit production of process 2 (1,000 kWh of electricity), net production is only
Y2 = 1,000 kWh of electricity, since all of the 200 liters of fuel produced in process 1 are
consumed in process 2 to make electricity, resulting in a net of Y1 = 0 liters of fuel. This
result (i.e., Y = [0 ; 1000]) is the same as used in the previous example.

So far, we have motivated the purely technological aspects of the simple two-process
system. However, Figure 9-1 gives us additional information on the resource use and
emissions performance of the two processes. We create an environmental matrix, B,
analogous to Chapter 8's R matrix, to represent the direct per-functional-unit resource use
and emissions factors. The B matrix has a conceptually identical basis of flows per
functional unit as in Chapter 8, except that instead of consistently being flows per million
dollars, the units are flows per functional unit, which vary across the processes (e.g., per kg,
per MJ, etc.):

E = BX = BA^-1 Y                                    (9-3)

Again we use a sign convention where negative values are inputs and positive values are
outputs. From Figure 9-1, process 1 uses 50 liters of crude oil as an input, and emits 2 kg
SO2 and 10 kg CO2. Process 2 has no raw inputs (only the product input of fuel already
represented in the A matrix), and emits 0.1 kg of SO2 and 1 kg of CO2, respectively. Thus
the environmental matrix B for our system, where the rows represent the flows of crude oil,
SO2, and CO2, and the columns represent the two processes, can be represented as:

B = [ -50   0
        2   0.1
       10   1   ]

Of course, the linear system behind B reminds us of the connection of inputs, outputs, and
emissions shown in Figure 9-1:

E_crude = -50 X1 ;  E_SO2 = 2 X1 + 0.1 X2 ;  E_CO2 = 10 X1 + X2

Put another way, the elements of BA^-1 in Equation 9-3 represent total resource and
emissions factors across the process matrix system, analogous to the total economy-wide
emissions factors R[I-A]^-1 of an IO system (note that the A matrices differ between the
two systems).

Building on our example from above, we can estimate the environmental effects of
producing the required amount of outputs by multiplying the B matrix by our previously
found scaling vector X = [10 ; 100] (equivalently, computing BA^-1 Y). The resulting E is:

E = [ -500 ]
    [   30 ]
    [  200 ]

that is, 500 liters of crude oil consumed, 30 kg of SO2 emitted, and 200 kg of CO2 emitted.
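These environmental results can be verified numerically as well. A pure-Python sketch computing E = BX and confirming that the total-factors route BA^-1 Y gives the same answer (the -50 entry reflects the chapter's sign convention that inputs are negative):

```python
# E = B X for the two-process system; rows of B are crude oil (an input,
# hence negative under the sign convention), SO2, and CO2.
B = [[-50.0, 0.0],
     [  2.0, 0.1],
     [ 10.0, 1.0]]
X = [10.0, 100.0]            # scaling factors from X = A^-1 Y
A_inv = [[0.05, 0.01],       # inverse of the technology matrix A
         [0.0,  0.1]]
Y = [0.0, 1000.0]

E = [B[r][0] * X[0] + B[r][1] * X[1] for r in range(3)]

# Total resource/emissions factors BA^-1, multiplied by Y, must give
# the same E (the E = BX = BA^-1 Y relation).
BA_inv = [[sum(B[r][k] * A_inv[k][c] for k in range(2)) for c in range(2)]
          for r in range(3)]
E_check = [BA_inv[r][0] * Y[0] + BA_inv[r][1] * Y[1] for r in range(3)]
print(E)        # about [-500, 30, 200]
print(E_check)  # same values via the total-factors route
```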

While we have motivated this initial example with an intentionally small (two process)
model, it is easy to envision how adding additional processes would change the system. If
we added a third process, which potentially had interconnections to the original two
processes, we would merely be adding another dimension to the problem. The A matrix
would be 3x3, and the X and Y vectors would have an additional row as well. If we added
no environmental flows, B would also be 3x3. If we added flows (e.g., another fuel input or
emission) then there would be additional rows. The linear algebra does not become
significantly more difficult. Depending on the necessary scope, your system may end up
with 5, 10, or 50 processes.
We generally use integer scaling factors and achieve integer results in the chapter examples.
However, the linear algebra method can handle any real-valued inputs and outputs. Note that
some processes may in fact only be able to use integer inputs (or be able to produce integer
levels of output), in which case your results would need to be rounded accordingly.
E-resource: "Chapter 9 Excel Matrix Math" shows all of the examples in this chapter in a
Microsoft Excel spreadsheet.

Connection Between Process- and IO-Based Matrix Formulations


The most important aspect of the process matrix approach to recognize is its similarity to
how we solved EIO models. The matrix math (AX = Y ⇒ X = A⁻¹Y) is identical; the only
difference is that the role played by the process technology matrix A is played by I - A in the
EIO notation. If you look at the elements of the technology matrix A in the process matrix
domain, and think through its composition, a more distinct connection becomes clear. As
noted above, the diagonal entries of the technology A matrix summarize the functional units
of the processes collected within the system. If we were to think only about the inputs into
the process system, and/or collect an A matrix consisting only of the values of our own
technology matrix from available process data, the matrix would not have any of those
functional unit values; it would just contain data on the required inputs from all of the
processes in the system. We would have no need to specify a particular sign convention for
inputs, so we could include them as positive values. In the example above, the adjusted A
matrix with this perspective would be:
A* = | 0   2 |
     | 0   0 |

which would summarize the case where process 1 had no technological inputs from other
parts of the system (i.e., no input of fuel or electricity) and where process 2 had a
requirement of 2 liters of fuel. If we wanted to make productive use of this different
process, we would need to add in the functional unit basis of the system (otherwise we
would have no way of knowing how many units of output can be created from the listed
inputs). In doing that, we would need to create a diagonalized matrix containing the
functional unit values of each of the processes, which in this case is:
W = | 20   0  |
    |  0  10  |

and we would combine the information in these two matrices before inverting the result. W
is a matrix of positive values of the process outputs while A* is made of positive values for
the process inputs. The net flows are found as outputs minus inputs:
W + (-A*) = W - A*
And our modified matrix math system would be:
[W - A*]X = Y  ⇒  X = [W - A*]⁻¹Y
Of course, combining W and A* in this way gives exactly the original A process matrix,
which is then inverted to find the same results as above.
The key part to understand is that this is exactly what is done in IO systems, but since the
system is in general normalized by a constant unit of currency (e.g., "millions of dollars"), all
of the functional units are already "1", and thus the identity matrix I is what is needed as the
general W matrix above. Nonetheless, this exposition should help to reinforce the similarity
in derivation of the process based and IO-based systems.
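As an illustrative check, the W and A* decomposition can be confirmed numerically. This sketch uses the two-process example's values; it simply shows that W - A* reproduces the original process A matrix:

```python
# W holds the functional-unit outputs on its diagonal;
# A* holds the (positive) input requirements
W = [[20.0, 0.0],
     [0.0, 10.0]]
A_star = [[0.0, 2.0],   # process 2 needs 2 L fuel per 10 kWh
          [0.0, 0.0]]

# Element-wise subtraction W - A* rebuilds the process matrix
A = [[W[i][j] - A_star[i][j] for j in range(2)] for i in range(2)]
print(A)  # [[20.0, -2.0], [0.0, 10.0]] -- the original A matrix
```

In an IO system, where every sector is normalized to one unit of currency, W is simply the identity matrix I, and W - A* becomes the familiar I - A.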

Linear Systems from Process Databases


The simple two-process system above used hypothetical values for inputs and outputs. But
we may also build up our linear system with data from processes in available databases. We
could envision the process flow diagram from Chapter 5, where we made electricity from
burning bituminous coal that had been mined and delivered to the power plant by train.
This example used actual data from the US-LCI database. In Chapter 5, we already saw how
to build simple LCI models from this process flow diagram. We could find the same
answers by building a linear system. Using the same notation as in our two-process example
above, but using the US LCI database values from Chapter 5, and assuming that bituminous-fired electricity generation is process 1, bituminous coal mining is process 2, and rail
transport is process 3, we could define the linear system in 9-3:

1 X1+ 0 X2 + 0 X3 = Y1
-0.442 X1 + 1 X2 + 0 X3 = Y2
-0.461 X1 + 0 X2 + 1 X3 = Y3

(9-3)

For this system,

A = |  1      0  0 |
    | -0.442  1  0 |
    | -0.461  0  1 |

So if we want to produce 1 kWh of electricity, as found in Chapter 5, we would need to
produce the following in each of the three processes:

X = A⁻¹Y = | 1      0  0 | | 1 |   | 1     |
           | 0.442  1  0 | | 0 | = | 0.442 |
           | 0.461  0  1 | | 0 |   | 0.461 |

and considering only fossil CO2 emissions,

B = [ 0.994   0   0.0189 ]

so, using E = BX, E ≈ 1.003 kg CO2, the same result reached in Chapter 5 (Equation 5-1).
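The same calculation can be sketched in pure Python, using the coefficients of system 9-3 and the B row above. Since this A matrix is lower triangular, forward-substitution stands in for a full matrix inverse:

```python
# Three-process US LCI coal-electricity system (coefficients from system 9-3)
A = [[1.0, 0.0, 0.0],
     [-0.442, 1.0, 0.0],   # kg coal mined per MJ-scaled unit of electricity
     [-0.461, 0.0, 1.0]]   # tkm rail transport per unit of electricity
Y = [1.0, 0.0, 0.0]        # final demand: 1 kWh of electricity

# X = A^-1 Y via forward-substitution (A is lower triangular)
X = [0.0, 0.0, 0.0]
for i in range(3):
    X[i] = (Y[i] - sum(A[i][j] * X[j] for j in range(i))) / A[i][i]

B = [0.994, 0.0, 0.0189]   # kg fossil CO2 per unit output of each process
E = sum(B[j] * X[j] for j in range(3))
print(X)            # [1.0, 0.442, 0.461]
print(round(E, 3))  # 1.003
```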
This example shows us that we could build linear system models based on process data, and
could include as many processes as we have time or resources to consider. Of course we can
use software tools like Microsoft Excel or MATLAB to manage the data and matrix math.
From the example in Chapter 5, we could expand the boundary to add data from additional
processes like refining petroleum, so as to capture effects of diesel fuel inputs. As we add
processes (and flows) we are just adding rows and columns to the linear system above.
Beyond adding rows and columns as we expand our boundary, we also generally add
technical coefficients to the A matrix that were not previously present (e.g., if we had data
showing use of electricity by the mine). We would thus be adding upstream effects that
would likely not have been modeled in a simple process flow diagram approach. The three-process
example above does not shed light on this potential, because there are no upstream
coefficients in the system that were not in our process flow diagram example in Chapter 5.
If building linear system models from the bottom up, we would eventually decide that we
were unlikely to add significantly more information by adding data from additional processes
or flows. The dimensions of the technology matrix A of our linear system would be equal to
the number of processes included in our boundary, and the dimensions of the environmental
matrix B would be the number of processes and the number of flows.
However, if we have access to electronic versions of all the process LCI modules from
databases, we can use them to build large process matrix models. Since databases like US
LCI and ELCD are publicly available, matrices and spreadsheet models can be built that by
default encompass data for all of the many interlinked processes and flows provided in the
database. Many researchers and software tools incorporate external databases with the
process matrix approach (e.g., SimaPro). In the rest of this section, we explore construction
and use of these comprehensive process matrix models to facilitate rapid construction of
LCI models. While the ecoinvent database is not publicly available, its licensees can download or build
complete matrices representing all of its processes and flows.
Chapter 5 discussed the availability of unit process and system process level data in the
ecoinvent database and software tools. System processes are aggregated views of processes
with relatively little detail, and no connections to the other unit processes. Using them is like
using a snapshot of the process (i.e., where the matrix math has already been done and
saved). Using ecoinvent unit processes allows the full connection to all upstream unit
processes, and calculations involving them will "redo" the matrix math.
While the US LCI database as accessed on the LCA Digital Commons website does not
directly provide A and B matrices, the matrices can be either built by hand using the
downloadable spreadsheets (see Advanced Material for Chapter 5), or by exporting the entire
matrix representation of the US LCI database from SimaPro (choose the library after
launching the software, then choose "Export Matrix" from the File menu). The US LCI
database, as exported from SimaPro as of 2014, provides LCI data for 746 products and 949
flows. These 746 products are the outputs of the various processes available in US LCI.
Given that the US LCI database has information on 746 products, we could form an A
matrix of coefficients analogous to the linear system above, with 746 rows and 746 columns,
where each element gives the input of one product process required to produce the
functional unit of another product's process. For example, the coefficients of our three-process US LCI example above would
be amongst the cells of the 746 x 746 matrix. Of course, the A matrix will be very sparse
(i.e., have many blank or 0 elements) since many processes are only technologically
connected to a few other processes. This matrix would be similar to the IO direct
requirements matrix considered in Chapter 8. Likewise B can be made from the non-product
input and output flows of the US LCI database, resulting in a 949 x 746 matrix.
This matrix again will be quite sparse since the number of flows listed in a process is
generally on the order of 10 to 20.
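One common way to hold such a sparse matrix in code is a dictionary keyed by process, storing only the nonzero coefficients of each column. The sketch below is purely illustrative: the shortened process names and the `to_dense` helper are hypothetical, not identifiers from US LCI or any software tool:

```python
# Hypothetical sparse storage: each key is a process (a column of A);
# its value maps input processes (rows) to nonzero coefficients.
A_sparse = {
    "electricity, bituminous": {"electricity, bituminous": 1.0,
                                "coal, at mine": -0.442,
                                "rail transport": -0.461},
    "coal, at mine": {"coal, at mine": 1.0},
    "rail transport": {"rail transport": 1.0},
}

def to_dense(sparse, order):
    """Expand sparse columns into a dense row-major matrix, filling zeros."""
    return [[sparse[col].get(row, 0.0) for col in order] for row in order]

order = ["electricity, bituminous", "coal, at mine", "rail transport"]
A = to_dense(A_sparse, order)
print(A)  # [[1.0, 0.0, 0.0], [-0.442, 1.0, 0.0], [-0.461, 0.0, 1.0]]
```

For a real 746 x 746 system, a sparse structure like this (or a dedicated sparse matrix library) avoids storing the hundreds of thousands of zero entries.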
For small-scale LCA projects, a Microsoft Excel-based process matrix model could suffice
and provide the same quantitative results, yet will not provide the graphical and other
supporting interpretation afforded by other LCA software tools. The process matrix models
will, in general, represent more technological activity than bottom-up process flow diagrams
because of the addition of many upstream matrix coefficients, and thus will generally
estimate higher environmental flows.
E-resources: The Chapter 9 folder shows two forms of the US LCI database represented
as a complete matrix of 746 processes in Excel. The main (larger) spreadsheet shows all of
the intermediate steps of building the matrices, including the raw exported data from
SimaPro, matrices normalized by functional unit, transposed, inverted, etc., on separate
worksheets. It also has a "Model" worksheet where you can enter production values for
processes and see the "direct" (just the top-level technical requirements for the product
chosen as would be shown in the LCI module of the chosen process), total (X), and E
results. It is useful to look at this spreadsheet and its various cell formulas, especially those
involving array formulas, to see how such models can be built from the databases. Amongst
the many features of this larger spreadsheet are that the coefficients of the A and B matrices
can be modified, and changes would ripple through the model. A second spreadsheet
(filename appended with "smaller") is the same model, but with just the final resulting
matrices, without intermediate matrix math steps. It has the same functionality, but is
significantly smaller in size. It may be more appropriate to use for building models that will
not modify any of the A or B matrix coefficients.
These two spreadsheets work by entering a desired input of product Y into the blue column
of the Model worksheet to then estimate the direct and total technical requirements X and
the environmental effects E, shown in yellow cells. Be sure to enter values pertinent to the
listed functional unit of the product (i.e., check whether energy products have units of MJ or
kWh). The spreadsheet conditionally formats in red any results deemed "significant", in this
case greater than 0.000001, since imperfect matrix inversion results in many very small values
(less than 10⁻¹⁶) throughout the model which can be treated as negligible.
Example: Estimating the life cycle fossil CO2 emissions of bituminous coal-fired
electricity using a process matrix model of the US LCI database.
We can estimate the total fossil CO2 emissions of making coal-fired electricity in the US
using the Microsoft Excel US LCI e-resource spreadsheets. The first step is determining the
appropriate model unit process and input value to use. The same product used in other US
LCI examples, Electricity, bituminous coal, at power plant/US, (process number 416 of 747 US
LCI processes when sorted alphabetically) is chosen and an input value of 3.6 MJ (equal to 1
kWh) is used. Figure 9-3 shows the "direct" technical flows from this input, corresponding
to the seven direct inputs needed for this product in the US LCI data module for this
process.
Flows prepended with "Dummy" in the US LCI database were not discussed in Chapter 5.
In short, these are known technical flows for a process, but for which no LCI data are
included within the system. They thus act only as tracking entries in the model.
Product                                                                            Unit   Flow
Transport, train, diesel powered/US                                                tkm    0.461
Bituminous coal, at mine/US                                                        kg     0.442
Transport, barge, average fuel mix/US                                              tkm    0.056
Dummy_Disposal, solid waste, unspecified, to unspecified treatment/US              kg     0.044
Dummy_Disposal, ash and flue gas desulfurization sludge, to unspecified reuse/US   kg     0.014
Transport, combination truck, diesel powered/US                                    tkm    0.003
Dummy_Transport, pipeline, coal slurry/US                                          tkm    0.002

Figure 9-3: Direct technological flows from 3.6 MJ (1 kWh) of Electricity, bituminous coal, at power plant in US LCI Process Matrix

The values of X and E for all processes and flows are also shown in the spreadsheet. Figure
9-4 shows the elements of X with physical flows greater than 0.001 in magnitude. There are
18 more products upstream of electricity than in the "direct" needs. While, as expected, the
process matrix values for the same physical products are larger in Figure 9-4 than in Figure
9-3, the differences are generally small. We can also see that the additional amount of
bituminous coal-fired electricity needed across the upstream chain within the process matrix
model is small (0.037 MJ).
The Air, carbon dioxide, fossil, column (number 231 out of 949 flows) shows the estimate of
total emissions of fossil CO2 across the entire process matrix, 1.0334 kg CO2 (with apologies
for the abuse of significant figures). While larger, this result is only marginally more than the
result from the process flow diagram approach. This is not surprising, though, because it is
well known that the main contributor of CO2 emissions in fossil electricity generation is the
combustion of fuels at the power plant, which was included in the process flow diagram. If
we were to choose a different product for analysis, we may see substantially higher
environmental flows as a result of having the greater boundary within the process matrix.
Note that the US LCI data provides information on various other carbon dioxide flows
which have not been included in our scope. There are two "Raw" flows of CO2 (as inputs),
as well as 4 other air emissions. The only other notable one of these is the Air, carbon dioxide,
biogenic flow, which represents non-fossil emissions, such as from biomass management.

Process                                                                            Functional Unit   Output Value X
Electricity, bituminous coal, at power plant/US                                    MJ                3.637
Transport, train, diesel powered/US                                                tkm               0.466
Bituminous coal, at mine/US                                                        kg                0.447
Dummy_Disposal, solid waste, unspecified, to underground deposit/US                kg                0.105
Electricity, at grid, US/US                                                        MJ                0.068
Transport, barge, average fuel mix/US                                              tkm               0.057
Dummy_Disposal, solid waste, unspecified, to unspecified treatment/US              kg                0.044
Transport, barge, residual fuel oil powered/US                                     tkm               0.044
Transport, ocean freighter, average fuel mix/US                                    tkm               0.037
Transport, ocean freighter, residual fuel oil powered/US                           tkm               0.033
Electricity, nuclear, at power plant/US                                            MJ                0.015
Dummy_Disposal, ash and flue gas desulfurization sludge, to unspecified reuse/US   kg                0.014
Transport, barge, diesel powered/US                                                tkm               0.012
Electricity, natural gas, at power plant/US                                        MJ                0.012
Crude oil, at production/RNA                                                       kg                0.008
Dummy_Transport, pipeline, unspecified/US                                          tkm               0.007
Dummy_Electricity, hydropower, at power plant, unspecified/US                      MJ                0.005
Transport, ocean freighter, diesel powered/US                                      tkm               0.004
Transport, combination truck, diesel powered/US                                    tkm               0.003
Dummy_Transport, pipeline, coal slurry/US                                          tkm               0.002
Electricity, residual fuel oil, at power plant/US                                  MJ                0.002
Electricity, lignite coal, at power plant/US                                       MJ                0.002
Natural gas, at extraction site/US                                                 m3                0.001
Natural gas, processed, at plant/US                                                m3                0.001
Electricity, biomass, at power plant/US                                            MJ                0.001

Figure 9-4: Total technological flows (X) from 3.6 MJ (1 kWh) of Electricity, bituminous coal, at power plant in US LCI Process Matrix, abridged

Figure 9-5 shows the top products that emit CO2 in the upstream process chain of
bituminous coal-fired electricity. As previously discussed, the combustion of coal at the
power plant results in 97% of the total estimated emissions. The emissions from rail
transport by train are another 1%. Thus our original process flow diagram model from
Chapter 5, which we motivated as a simple example, ended up representing 98% of the CO2
emissions from coal-fired electricity found in the more complex process matrix model.
Process                                                    Emissions (kg)   Percent of Total
Total                                                      1.033
Electricity, bituminous coal, at power plant/US            1.004            97.2%
Diesel, combusted in industrial boiler/US                  0.011            1.0%
Transport, train, diesel powered/US                        0.009            0.9%
Electricity, natural gas, at power plant/US                0.002            0.2%
Residual fuel oil, combusted in industrial boiler/US       0.002            0.2%
Transport, barge, residual fuel oil powered/US             0.001            0.1%
Natural gas, combusted in industrial boiler/US             0.001            0.1%
Gasoline, combusted in equipment/US                        0.001            0.1%
Electricity, lignite coal, at power plant/US               0.001            0.1%
Transport, ocean freighter, residual fuel oil powered/US   0.001            0.1%
Bituminous coal, combusted in industrial boiler/US         0.001            0.0%

Figure 9-5: Top products contributing to emissions of fossil CO2 for 1 kWh of bituminous coal-fired
electricity. Those representing more than 1% are bolded.

One of the aspects of the ISO Standard that we did not discuss in previous chapters is the
use of cut-off criteria, which define a threshold of relevance to be included or excluded in a
study. For example, the cut-off may say to only include individual components that are 1%
or more of the total emissions. The Standard motivates the use of cut-off criteria for mass,
energy, and environmental significance, which would mean that we could define cut-off
values for these aspects for which a process is included within the boundary (and can be
excluded if not). As an example, if we set a cut-off criterion of 1% for environmental
significance, and our only inventory concern was for fossil CO2 to air, then we could choose
to only consider the first two processes (three if we rounded off conservatively) listed in
Figure 9-5. Likewise with a cut-off criterion of 0.1%, we would only need to consider the
first 10. Neither of these cut-off criteria significantly affects our overall estimate of CO2
emissions.
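A minimal sketch of applying such a cut-off criterion in code, using the contribution values from Figure 9-5 (the `within_cutoff` helper name is ours, purely illustrative, not from the ISO Standard or any tool):

```python
# Top contributions to fossil CO2 (kg per kWh), abridged from Figure 9-5
contributions = {
    "Electricity, bituminous coal, at power plant/US": 1.004,
    "Diesel, combusted in industrial boiler/US": 0.011,
    "Transport, train, diesel powered/US": 0.009,
    "Electricity, natural gas, at power plant/US": 0.002,
}
total = 1.033  # kg fossil CO2 per kWh, whole process matrix

def within_cutoff(contribs, total, cutoff):
    """Keep processes whose share of the total meets the cut-off criterion."""
    return [p for p, e in contribs.items() if e / total >= cutoff]

print(within_cutoff(contributions, total, 0.01))
# with a 1% cut-off, only the power plant and the diesel boiler remain
```

Note that the cut-off is applied as a share of the whole-system total, consistent with the discussion above; the train's 0.9% share falls just below the 1% threshold.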
Cut-off criteria could be similarly applied for energy via LCI results. Mass-based cut-off
criteria are evaluated separately from LCI results, e.g., by considering the mass of
subcomponents of a product. In all cases, the cut-off criteria apply to the effects of the
whole system boundary, not to each product or process in the system, so if the product
system were fairly large, it is possible that many initially scoped parts of the system could be
excluded based on the cut-off criteria. For example, if the system studied is the life cycle of
an automobile, and the scope is fossil CO2 emissions, then electricity use in production of
the vehicle may not be large enough to matter. In that case all the results in the example
above would be excluded.
The other side of the cut-off criteria decision is the issue of truncation error.
LCI models are inevitably truncated when arbitrary or small boundaries are used. Thus,
studies showing the effects of truncation can compare the results from within a selected
boundary as compared to those with a complete (e.g., process matrix or IO system)
boundary. In the example above, the truncation error is very small as long as the "at power
plant" effects are within the boundary, but such errors can sometimes be substantial as
analysts define boundaries based on what they think are the important components without
knowledge of which are the most important. In the end, the process matrix approach is yet
another valuable screening tool, albeit one with substantial process-based detail, to be used
in setting analysis boundaries.
As you have seen in this chapter, the process matrix approach provides an innovative way to
use our process data models. The results from using the matrix approach will generally be
larger and more comprehensive compared to the simpler process diagram approach, in the
same way as when we used IO models. This is because the process flow diagram approach
is inherently limited by what you include in the boundary. To some extent, a process flow
diagram approach assumes by default that everything excluded from the diagram doesn't
matter. But as we have seen with the process matrix (and IO) approaches, it is difficult to
determine what matters until you have considered these larger boundaries. We should
generally expect IO models to estimate the largest amount of flows, as they comprise the
whole economy within the boundary, including overhead and service sectors, which are
rarely included in LCI databases. It is for this reason that IO models are often used to pre-screen the needed effort for a more comprehensive process-based analysis.

Extending Process Matrix Methods to Post-Production Stages


In our examples above, the boundaries have only included the cradle to gate (including
upstream) effects. We can extend our linear systems methods to include downstream effects
as well, such as use and end-of-life management, since again this will merely involve adding
rows and columns, as well as additional coefficients, to the matrices.
In this case, let's build on the example from Figure 9-1, but with alternative
descriptions of the processes involved. Assume that we want to make a lamp, a product
requiring fuel and electricity to manufacture, and that it will be disposed of in a landfill by
a truck. We had previously referred to fuel production as process 1, and electricity
production as process 2. Now we add lamp production as process 3, and disposal as process
4. Specifically, let's assume that producing each lamp requires 10 litres of fuel and 300 kWh
of electricity (and emits 5 kg CO2), and finally that disposing of the lamp at the end of its life
consumes 2 litres of fuel (and emits 1 kg CO2).
Our linear system can be written as in system 9-4:

20 X1 - 2 X2 - 10 X3 - 2 X4 = Y1
 0 X1 + 10 X2 - 300 X3 + 0 X4 = Y2
 0 X1 + 0 X2 + 1 X3 + 0 X4 = Y3
 0 X1 + 0 X2 + 0 X3 + 1 X4 = Y4

(9-4)


where

A = | 20  -2  -10   -2 |
    |  0  10  -300   0 |
    |  0   0    1    0 |
    |  0   0    0    1 |

In this case, when we have an input (Y) to the system of 1 produced lamp and 1 disposed
lamp, the total production is:

X = A⁻¹Y = | 0.05  0.01  3.5  0.1 | | 0 |   | 3.6 |
           | 0     0.1   30   0   | | 0 | = | 30  |
           | 0     0     1    0   | | 1 |   | 1   |
           | 0     0     0    1   | | 1 |   | 1   |
Thus, in order to produce 1 lamp (the output of 1 lamp unit process), 3.6 units of fuel and
30 units of electricity are also needed. If we only model CO2 emissions, then

B = [ 10   1   5   1 ]

and E = 72 kg of CO2. While the system has few interrelationships, it may not be as easy to
validate this result as before. But if we think through our production, we need to make 72
litres of fuel (12 litres from producing the lamp and disposing of it, and 60 litres from
making the electricity), 300 kWh of electricity (all for making the lamp), and 1 lamp for every
lamp produced. The total emissions of 72 kg come from 36 kg of CO2 from fuel
production, 30 kg of CO2 from electricity production, 5 kg from lamp production, and 1 kg
from disposal. So our results make sense.
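The four-process lamp system can also be checked with a short script. This is an illustrative pure-Python sketch using the coefficients of system 9-4; because A is upper triangular, back-substitution recovers X without computing a general inverse:

```python
# Four-process lamp system (coefficients from system 9-4)
A = [[20.0, -2.0, -10.0, -2.0],    # fuel production and its uses
     [0.0, 10.0, -300.0, 0.0],     # electricity production and its use
     [0.0, 0.0, 1.0, 0.0],         # lamp production
     [0.0, 0.0, 0.0, 1.0]]         # lamp disposal
Y = [0.0, 0.0, 1.0, 1.0]           # one lamp produced, one lamp disposed

# Back-substitution from the last row upward
X = [0.0] * 4
for i in range(3, -1, -1):
    X[i] = (Y[i] - sum(A[i][j] * X[j] for j in range(i + 1, 4))) / A[i][i]

B = [10.0, 1.0, 5.0, 1.0]          # kg CO2 per unit of each process
E = sum(B[j] * X[j] for j in range(4))
print(X)  # [3.6, 30.0, 1.0, 1.0]
print(E)  # 72.0
```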
While not shown here, we could add another row and column to represent use of the lamp.

Advantages and Disadvantages of Process and IO Life Cycle Methods


Before considering other advanced LCA methods, we summarize the strengths and
weaknesses of process and IO methods. Both IO-LCA and process modeling have their
advantages and drawbacks. In principle, they can yield quite different results. If so, the
analyst would need to make a closer examination to determine the reasons for the
differences and which one or combination of the two gives the best estimate. Figure 9-6
compares the strengths and weaknesses of the two types of LCA models.

Process Models

Advantages:
- Detailed process-specific analyses
- Specific product comparisons
- Process improvements, weak point analyses
- Future product development assessments

Disadvantages:
- System boundary setting subjective
- Tend to be time intensive and costly
- New process design difficult
- Use of proprietary data
- Cannot be replicated if confidential data are used
- Uncertainty in data

Input-Output Models

Advantages:
- Economy-wide, comprehensive assessments (direct and indirect environmental effects included)
- System LCA: industries, products, services, national economy
- Sensitivity analyses, scenario planning
- Publicly available data, reproducible results
- Future product development assessments
- Information on every commodity in the economy

Disadvantages:
- Some product assessments contain aggregate data
- Process assessments difficult
- Difficulty in linking dollar values to physical units
- Economic and environmental data may reflect past practices
- Imports treated as U.S. products
- Difficult to apply to an open economy (with substantial non-comparable imports)
- Non-U.S. data availability a problem
- Uncertainty in data

Figure 9-6: Advantages and Disadvantages of Process and IO-Based Approaches

The main advantage of a process model is its ability to examine, in whatever detail is desired,
the inputs and discharges for a particular process or product. The main disadvantage is that
gathering the necessary data for each unit process can be time consuming and expensive. In
addition, process models require ongoing comparison of tradeoffs to ensure that sufficient
process detail is provided while realizing that many types of relevant processes may not have
available data. Even though process matrix methods are quick and comprehensive, their
boundaries still do not include all relevant activities. Process models improve and extend the
possibilities for analysis, but we often cannot rely wholly on process models.
The main advantage of an IO approach is its comprehensiveness: it will by default include all
production-based activities within the economy. Its main disadvantage is its aggregated and
average nature, where entire sectors are modeled as having equal impact (e.g., no
differentiation between types of electricity). IO-based models simplify our modeling effort
and avoid errors arising from the necessary truncation or boundary definition for the
network of process models. An IO model's operation at this aggregate level fails to provide
the detailed information required for some analyses.

Categories of Hybrid LCA Models


An inevitable goal is thus to develop hybrid LCA methods that combine the best features
of process-based and IO-based approaches. In general, hybrid approaches use either a process-based or IO model as the core model, but use elements of the other approach to extend the
utility of the overall model.
While Bullard (1978) was perhaps the first to discuss such hybrid methods, Suh (2004)
categorizes the types of hybrid models in LCA as follows: tiered, input-output based, and
integrated hybrid analysis.
In a tiered hybrid analysis, specific process data are used to model several key components
of the product system (such as direct and downstream effects like use phase and end of life),
while input-output analysis is used for the remaining components. If the process and IO
components of tiered hybrid analysis are not linked, the hybrid total results can be found by
summing the LCI results of the process and IO components without further adjustment (but
identified double counted results should be deducted). The point or boundary at which the
system switches from process to IO-based methods is arbitrary but can be affected by
resources and data available. This boundary should thus be selected carefully, to reduce
model errors. Note that unlike many of the other methods discussed in this book, there are
no standard rules for performing the various types of hybrid analysis.
In tiered hybrid models, the input-output matrix and its coefficients are generally not
modified. Thus, analysis can be performed rapidly, allowing integration with design
procedures and consideration of a wide range of alternatives. Process models can be
introduced wherever greater detail is needed or the IO model is inadequate. For example,
process models may be used to estimate environmental impacts from imported goods or
specialized production.
The most basic type of tiered hybrid model is one where the process and IO components
are not explicitly linked other than by the functional unit. An example of such a tiered
hybrid model could be in estimating the life cycle of a consumer product. In this case, one
could use an IO-based method, e.g., EIO-LCA, to estimate the production of the product
(and depending on which kind of IO model is used, the scope of this could be cradle to gate or

Life Cycle Assessment: Quantitative Approaches for Decisions That Matter lcatextbook.com

Chapter 9: Advanced Life Cycle Models

259

cradle to consumer). Process based methods could then be used to consider the use phase
and disposal of the product.
Example 9-1: Tiered separable hybrid LCA of a Washing Machine
To estimate the flows of a washing machine over its life cycle, we could assume that an EIO-LCA purchaser basis model is able to estimate the effects from cradle to the point at which the
consumer purchases the appliance. Likewise we could assume that process data can be used to
estimate the effects of powering the washing machine over its lifetime. We use the data shown in
Chapter 3 for "Washing Machine 1".
IO component: Assuming that the purchaser price of a new washing machine is $500, we could
estimate the fossil CO2 emissions from cradle to consumer via the 2002 US purchaser price basis
model in EIO-LCA (Household laundry equipment manufacturing sector) as 0.2 tons.
Process components: Using the US-LCI process matrix Excel spreadsheet, we can consider the
production and upstream effects of 10 years' worth (8,320 kWh) of electricity. Since the
functional unit is MJ, we convert by multiplying by 3.6 (30,000 MJ), and the resulting fossil CO2
emissions are 6,140 kg (6.1 tons).
Note: The US-LCI process matrix has no data on water production and landfilling so these stages
are excluded from this example.
Total: 6.3 tons fossil CO2.
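Because the two components of this tiered hybrid model cover non-overlapping life cycle stages, the total is a simple sum. A minimal sketch in Python (values are the ones derived above):

```python
# Tiered (separable) hybrid total for Example 9-1: the IO and process
# components cover non-overlapping stages, so they are simply summed.
io_co2_tons = 0.2            # cradle-to-consumer fossil CO2 from EIO-LCA ($500 purchaser price)
kwh_lifetime = 8320          # 10 years of washing machine electricity use
mj_lifetime = kwh_lifetime * 3.6   # US-LCI electricity functional unit is MJ
process_co2_kg = 6140        # fossil CO2 from the US-LCI process matrix for ~30,000 MJ
total_tons = io_co2_tons + process_co2_kg / 1000
print(round(mj_lifetime), round(total_tons, 1))  # ≈ 29952 MJ, 6.3 tons
```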

The description of tiered hybrid methods noted the possibility that the process and IO
components could both be estimating some of the same parts of the product system
diagram. In these cases, common elements should be dealt with by subtracting out results
found in the other part of the model. In Williams (2004), the author performed a hybrid LCA of a desktop computer. The three main subcomponents of the hybrid model
are shown in Figure 9-7, where most of the major pieces of a desktop computer system were
modeled via process-based methods, capital equipment and feedstocks were modeled with
IO methods, and the net "remaining value" of the computer systems not otherwise
considered in the two other pieces were also then modeled with IO methods.
The overall result from Williams is that the total production energy for a desktop computer
system was 6400 MJ, 3140 MJ from the process-sum components, 1100 MJ from the
additive IO component, and 2130 MJ from the remaining value. Common elements were subtracted out and so are not double counted in the values listed above.
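The additivity of the three subcomponents can be checked directly; a short sketch using the values reported above (the published 6,400 MJ total reflects rounding of the components):

```python
# Hybrid components from Williams (2004); after common elements are
# subtracted, the subcomponents are additive.
components_mj = {"process-sum": 3140, "additive IO": 1100, "remaining value": 2130}
total_mj = sum(components_mj.values())
print(total_mj)  # 6370 MJ, reported as ~6,400 MJ
```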


Figure 9-7: Subcomponents of Hybrid LCA of Desktop Computer (Source: Williams et al 2004)

Others have used tiered hybrid methods to consider the effects of goods and services produced in other countries when using IO models, which would otherwise assume that the impacts of foreign production are equal to those of domestic production.
In an input-output based hybrid analysis, sectors of an IO model are disaggregated into
multiple sectors based on available process data. A frequently mentioned early discussion of
such a method is Joshi (2000), where an IO sector was proposed to be disaggregated to
model steel and plastic fuel tanks for vehicles. In this type of hybrid model, the process level
data allows us to modify values in the rows or columns of existing sectoral matrices by
allocating their values into an existing and a disaggregated sector.
In Chapter 8 we discussed various aggregated sectors in the US input-output tables, such as
Power generation and supply, where all electricity generation types (as well as transmission and
distribution) activities are all in a single sector. If one could collect sufficient data, this single
sector could be first disaggregated into generation, transmission, and distribution sectors,
and then the generation sector further disaggregated into fossil and non-fossil, and then
perhaps into specific generation types like coal, gas, or wind. Another example of
disaggregation that could be accomplished with sufficient process data is the Oil and gas
extraction sector (which could be disaggregated into oil extraction and gas extraction). Any of
these would be possible with sufficient data, but only if the resulting models would be better
than what process-based methods could achieve.


Of course when disaggregating, all relevant matrices need to be disaggregated and updated,
and to use the disaggregated results in a model, the A matrix and R matrix values need to be
adjusted based on the process data. Since the A and R matrices are already normalized per
unit of currency, it is usually easier to modify and disaggregate make, use, or transaction
matrices for economic values (and then re-normalize them to A) and to disaggregate
matrices with total un-normalized flows to subsequently make R matrices.
Let us consider that we want to build a hybrid LCI model based on Example 8-3 in Chapter
8. At the time a two-sector economy was defined as follows (values in billions):
           1        2        Y        X
1        150      500      350     1000
2        200      100     1700     2000
V        650     1400
X       1000     2000

Assume that sector 1 is energy and sector 2 is manufacturing, and that we have process data
(not shown) to disaggregate sector 1 into sectors for fuel production (1a) and electricity (1b).
The data tells us that most of the $150 billion purchased by sector 1 from itself is for fuel to
make electricity, and how the value added and final demand is split between the fuel and
electricity subsectors. We verify that the X, V, and Y values for sector 1 in the original
example are equal to the sum of the values across both sectors in the revised IO table.
             1a: Fuel   1b: Elec   2: Manuf       Y       X
1a: Fuel          15        100        300       110     525
1b: Elec          10         25        200       240     475
2: Manuf         100        100        100      1700    2000
V                400        250       1400
X                525        475       2000

The direct requirements matrix for the updated system (rounded to 2 digits) is:

A = [ 0.03   0.21   0.15 ]
    [ 0.02   0.05   0.10 ]
    [ 0.19   0.21   0.05 ]
Finding values for the disaggregated R requires more thought. Emissions of waste per
$billion were 50 g in the original (aggregated) sector 1 and 5 g for sector 2. Thus, the total
emissions of sector 1 were originally (50g/$billion)($1,000 billion) = 50,000 g. If our
available process data suggests that emissions are 20% from fuel extraction and 80% from
electricity, then there are 10,000 g and 40,000 g, respectively. Given disaggregated sectoral
outputs of fuel and electricity of $525 and $475 billion, the waste factors for sectors 1a and 1b are 19 g and 84 g per $billion, respectively (the manufacturing sector's value is unchanged). The disaggregated R matrix is:
R = [ 19    0    0 ]
    [  0   84    0 ]
    [  0    0    5 ]

Following the same analysis as done in Example 8-3 (a final demand of $100 billion into each of the sectors, one at a time), the total waste generated for each of the (now 3) sectors is 2.5, 9.9, and 2.0 kg, respectively.
The new emissions for the disaggregated energy sectors are both quite different from the original aggregated sector 1's emissions of 6.4 kg. The emissions from sector 2 are slightly lower (2 kg compared to the previous 2.2 kg), since the revised A matrix from our hybrid
analysis splits the purchases of energy by sector 2 differently, with relatively less dependence
on the more polluting electricity sector.
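The whole disaggregation calculation can be reproduced in a few lines. The sketch below uses Python with numpy (an illustration only; the book's own tools are Excel and MATLAB), with the Z, x, and waste values taken from the tables above:

```python
# Sketch of the disaggregated hybrid example: build A and R from the revised
# IO table, then compute total waste for a $100 billion final demand into
# each sector. Monetary values in $billions; waste factors in g/$billion.
import numpy as np

Z = np.array([[15, 100, 300],     # inter-sector purchases (rows sell, columns buy)
              [10,  25, 200],
              [100, 100, 100]], dtype=float)
x = np.array([525, 475, 2000], dtype=float)    # total sector outputs
A = Z / x                          # normalize each column by that sector's output
L = np.linalg.inv(np.eye(3) - A)   # total requirements matrix (I - A)^-1
r = np.array([10000/525, 40000/475, 5.0])      # waste per $billion (19, 84, 5 g)

waste_kg = [r @ L[:, k] * 100 / 1000 for k in range(3)]  # $100B into each sector
print([round(w, 1) for w in waste_kg])  # ≈ [2.5, 9.9, 2.0] kg, matching the text
```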
This example is intended to demonstrate the method by which one could mathematically
disaggregate an A matrix with process data. Note that complete process data is not required,
and that even limited process data for some of the transactions, coupled with assumptions
on how to adjust other values, can still lead to interesting and relevant hybrid models. For
example, if disaggregating an electricity sector into generation, transmission, and distribution,
purchases of various services by the three disaggregated sectors may not be available.
Assumptions that purchases of services are equal (i.e., divide the original sector's value by 3),
or proportional to the outputs of the disaggregated sectors (i.e., distribute the original value
by weighted factors) are both reasonable. Given the ultimate purpose of estimating environmental impact, it is unlikely that any of these choices on how to redistribute the effects of a service sector would have a significant effect on the final results of the hybrid model.
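As a small illustration of these two allocation assumptions, consider a hypothetical $90 billion service purchase to be split across three disaggregated sectors with assumed outputs of $50, $30, and $20 billion:

```python
# Two reasonable ways to split one original purchase across disaggregated
# sectors when process data is missing (all values hypothetical).
purchase = 90.0                 # original sector's service purchase ($billion)
outputs = [50.0, 30.0, 20.0]    # outputs of the three disaggregated sectors
equal_split = [purchase / len(outputs)] * len(outputs)
proportional_split = [purchase * o / sum(outputs) for o in outputs]
print(equal_split, proportional_split)  # equal: 30 each; proportional: 45, 27, 18
```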
In the final category are integrated hybrid models, where there is a technology matrix that
represents physical flows between processes and an economic IO matrix representing
monetary flows between sectors. The general rationale for wanting to build an integrated
hybrid model is that IO data may be comprehensive but slightly dated, or too aggregated to
be used by itself. Use of process data can attempt to overcome both of these shortcomings.
Both the process and IO matrices in this kind of model use make-use frameworks (see
Advanced Material for Chapter 8), and are linked via flows at the border of both systems.
Unlike the tiered or IO-based approaches above, the models are called integrated because
the process level data is fully incorporated into the IO model. Double counting is avoided
by subtracting process-based commodity flows out of the IO framework. Integrated models
require substantially more effort than the other two types of hybrid models because of the
need to manage multiple unit systems (physical and monetary) as well as the need to avoid
double counting of flows through subtraction. They may also require sufficiently detailed
estimates of physical commodity prices across sectors. In general, the goal of an integrated
hybrid model is to form a complete linear system of equations, physical and monetary, that
comprehensively describe the system of interest.
The general structure of such a model, analogous to Equation 8-5 is:
x_1^m = z_11^m + z_12^m + ... + z_1j^m + ... + z_1n^m + y_1^m   (mass)
x_2^m = z_21^m + z_22^m + ... + z_2j^m + ... + z_2n^m + y_2^m   (mass)
x_3^$ = z_31^$ + z_32^$ + ... + z_3j^$ + ... + z_3n^$ + y_3^$   (dollar)
...
x_n^$ = z_n1^$ + z_n2^$ + ... + z_nj^$ + ... + z_nn^$ + y_n^$   (dollar)

which leads to an A matrix with mixed units which has four separate partitioned matrices
representing fully physical flows, fully monetary flows, and two connecting matrices for the
interacting flows from the physical to monetary and monetary to physical sectors. The
models can be "run" with inputs (Y) of physical and/or monetary flows, and outputs are
then physical and monetary.
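To make the mixed-unit structure concrete, here is a toy sketch with two mass sectors and two dollar sectors; all coefficient values are hypothetical, and the off-diagonal blocks carry the kg-per-$ and $-per-kg linkages:

```python
# Toy mixed-unit integrated model (hypothetical numbers): two physical
# sectors (mass) and two monetary sectors (dollars). Off-diagonal blocks
# hold the cross-unit coefficients (kg per $ and $ per kg).
import numpy as np

A = np.array([
    # phys1  phys2  mon1   mon2
    [0.05,  0.10,  2.0,   0.5 ],   # kg of good 1 per unit output of each sector
    [0.02,  0.05,  1.0,   0.2 ],   # kg of good 2
    [0.001, 0.002, 0.10,  0.15],   # $ of sector 3 per unit output
    [0.002, 0.001, 0.20,  0.05],   # $ of sector 4
])
y = np.array([0.0, 0.0, 10.0, 0.0])    # $10 of final demand on monetary sector 3
x = np.linalg.solve(np.eye(4) - A, y)  # outputs: first two in kg, last two in $
print(x)  # mixed-unit total outputs, all positive
```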
Hawkins et al (2007) built an integrated hybrid model to comprehensively consider the
implications of lead and cadmium flows in the US. Data on material (physical) flows were
from USGS process-based data. The economic model used was the US 1997 12-sector
input-output model (the most aggregated version of the benchmark model).
E-resource: The Microsoft Excel spreadsheet for the Hawkins (2007) lead and cadmium
models is available on the course website as a direct demonstration of the data needs and
calculation methods for an integrated hybrid model (see the paper for additional model
details).
From the Hawkins model, Figure 9-8 shows direct and indirect physical and monetary flows
from an input of final demand of $10 million into the manufacturing sector in 1997. Since
the focus of this model is on lead, the left side of the graph summarizes flows (kg) through
the physical lead-relevant sectors of the mixed-unit model while the right side of the graph
summarizes monetary flows ($ millions) through the economic model.
While the manufacturing sector is highly aggregated in this model, the figure shows the significant flows of lead through various physical sectors needed by the manufacturing sector. While a more disaggregated IO model might provide higher resolution insights into the specific flows of lead through manufacturing sectors, even this highly aggregated model required a person-year of effort to complete, and pushed the limits of the available USGS physical flow data. Given the specific data needs of these models, a more disaggregated model is likely not possible with currently available process data.


Figure 9-8: Physical Lead and Monetary Output Required for $10 Million Final Demand of Manufacturing Sector Output, 1997 (Source: Hawkins 2007). Output directly required is represented by the hashed areas while dotted gray regions depict the indirect output.

Chapter Summary
Mathematics allows us to organize generic process data into matrices that can be used to
create process matrix models. These process matrix models share some of the desirable
features of input-output based models such as fast computation and larger boundaries while
preserving the desirable process-specific detail. As LCA needs evolve, process and IO
models can be combined in various ways resulting in hybrid LCA models that leverage
advantages of the two model types while overcoming some of the disadvantages. Hybrid
LCA models vary with respect to the amount of resources and data needed, integration, and
model development involved. All will generally yield more useful results than a single
model. Now that we have introduced all of the core quantitative models behind LCA, we
can learn how to take the next step, impact assessment.

Note that the E-resources provided with this book do not provide spreadsheet forms of the
ecoinvent database, as it is a commercial product. However, if you have a license for
ecoinvent directly from the website you can request the files needed to construct a process
matrix. If you have an ecoinvent sublicense through purchase of SimaPro, you can use the
"Export Matrix" option mentioned above to create your own Microsoft Excel-based
ecoinvent process matrix. Note that the dimensions of the A matrix for ecoinvent 2.0 will be
roughly 4,000 x 4,000, and the spreadsheet files will quickly become large (the B matrix
dimensions will be 1,600 x 4,000). If trying to use ecoinvent data in a process matrix form, it
is better to use MATLAB or other robust tools given issues in working with matrices of that
size in Excel (see Advanced Material at the end of this chapter).
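A back-of-the-envelope calculation (assuming dense storage at 8 bytes per 64-bit floating point value) suggests why matrices of this size strain spreadsheet tools:

```python
# Rough memory footprint of dense ecoinvent 2.0 matrices at 8 bytes per value.
a_cells = 4000 * 4000        # A matrix, ~4,000 x 4,000
b_cells = 1600 * 4000        # B matrix, ~1,600 x 4,000
megabytes = (a_cells + b_cells) * 8 / 2**20
print(round(megabytes))  # ~171 MB for A and B alone, before any results
```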

References for this Chapter


Bullard C. W., Penner P. S., Pilati D.A., "Net energy analysis: handbook for combining
process and input-output analysis", Resources and Energy, 1978, Vol. 1, pp. 267-313.
Hawkins, T., Hendrickson, C., Higgins, C., Matthews, H. and Suh, S., "A Mixed-Unit Input-Output Model for Environmental Life-Cycle Assessment and Material Flow Analysis", Environmental Science and Technology, Vol. 41, No. 3, pp. 1024-1031, 2007.
Heijungs, R., "A generic method for the identification of options for cleaner products",
Ecological Economics, 1994, Vol. 10, pp. 69-81.
Joshi, S., "Product environmental life-cycle assessment using input-output techniques", The
Journal of Industrial Ecology, 2000, 3 (2, 3), pp. 95-120.
Suh, S., and Huppes, G., "Methods for Life Cycle Inventory of a Product", Journal of Cleaner Production, 13:7, June 2005, pp. 687-697.
Suh, S., Lenzen, M., Treloar, G., Hondo, H., Horvath, A., Huppes, G., Jolliet, O., Klann, U., Krewitt, W., Moriguchi, Y., Munksgaard, J., and Norris, G., "System Boundary Selection in Life-Cycle Inventories Using Hybrid Approaches", Environmental Science and Technology, 2004, 38 (3), pp. 657-664.
Williams, Eric, "Energy Intensity of Computer Manufacturing: Hybrid Assessment Combining Process and Economic Input-Output Methods", Environmental Science and Technology, 38 (22), pp. 6166-6174, 2004.

End of Chapter Questions

1. Modify the two-process example from the Chapter (equations 9-1), and estimate E, if
process 1 requires 1 kWh of electricity as an input.
2. The un-excerpted list of inputs in the US LCI database for the Bituminous coal, at mine
process is shown below. The electricity flow is currently outside of our three-process linear
system since we do not have an "at grid" electricity process.
Input                                                               Unit   Amount
Bituminous coal, combusted in industrial boiler                     kg     0.00043
Diesel, combusted in industrial boiler                              L      0.0088
Electricity, at grid, US, 2000                                      kWh    0.039
Gasoline, combusted in equipment                                    L      0.00084
Natural gas, combusted in industrial boiler                         m3     0.00016
Residual fuel oil, combusted in industrial boiler                   L      0.00087
Dummy, Disposal, solid waste, unspecified, to underground deposit   kg     0.24

Update the three-process example by assuming that the given flow of electricity is from
bituminous coal-fired electricity (not grid average). How different are X and E?
3. Redo question 2 by updating the US LCI process matrix (746 processes, 949 flows) found on the book website with the same electricity assumption. How different are X and E from the values in
Figures 9-3 through 9-5?
4. Expand the scope of the three-process example by including the Diesel, combusted in
industrial boiler process (an input at the mine) based on the US LCI data. What is your
updated estimate of total fossil CO2 emitted across the new four-process system?
5. What price of electricity is needed as a final demand in the 2002 EIO-LCA producer
price model to yield results comparable to the Microsoft Excel US LCI process matrix
spreadsheet for electricity? Discuss which of the methods is likely more relevant, and why
each model type is limited.


6. Compare the percentage contribution results of the 2002 EIO-LCA producer price model
Power generation and supply sector with the US LCI process matrix Electricity, at grid US process.
Generally describe the differences in results of these two models.
7. Estimate the fossil CO2 emissions of 1 kg alumina, at plant/US using the US-LCI process
matrix model. How does your estimate of fossil CO2 emissions change if you apply cut-off
criterion of 1%, 5%, and 10% to remove processes from the estimate? Given these findings
and continuing to only be concerned with fossil CO2, what might be an appropriate cut-off
criterion here?

Advanced Material for Chapter 9 Section 1: Process Matrix Models in MATLAB
In this section, we build on the material presented in the chapter about process matrix
models, which have already been demonstrated in Microsoft Excel, by showing how to
implement them in MATLAB.
One of the benefits of using MATLAB, if available, is that it is well-suited to performing a series of consecutive matrix operations quickly and interactively, without needing to save versions of normalized or inverted matrices for later use (as required for the Excel version of the model introduced in the chapter, leading to its large file size).
E-resource: In the online supplemental material for Chapter 9 is a zip file containing
matrix files and a .m file for the US LCI database (746 processes, 949 flows) that can be used
in MATLAB. The A and B matrices are identical to those used in the Excel version. A
subset of the code in the .m file is discussed below, which shows the MATLAB
implementation of the same US LCI process matrix model as in the US LCI process matrix
spreadsheets discussed in the chapter. Since the models were developed with the same
parameters exported from SimaPro and using the same algorithm, results are identical.
% matrices assumed to be in workspace (USLCI.mat):
%   USLCI_Atech_raw - technology matrix from exported matrix (A)
%   env_factors - environmental coefficients in exported matrix (B)
%   funct_units - row of functional units from exported matrix
clear all
load('USLCI.mat')
% makes a "repeated matrix" with funct_units down columns
funct_units_mat = repmat(funct_units,746,1);
% normalizes the A matrix by functional units
norm_Atech_raw = USLCI_Atech_raw./funct_units_mat;
L = inv(eye(746)-norm_Atech_raw);  % this is the I-A inverse matrix
y = zeros(746,1);
funct_units_env = repmat(funct_units,949,1);  % same as above except has 949 rows
env_factors_norm = env_factors./funct_units_env;
co2fossil = env_factors_norm(231,:);  % row vector for the fossil CO2 flow
% as example, enter a value into the y vector to be run through the model
% default example here is 1 kWh into the bituminous coal-fired electricity process
y(416,1) = 3.6;  % funct unit basis is in MJ, this is MJ per kWh (so 1 kWh)
out = L*y;  % equivalent to x = [I-A]inverse * y
co2out = co2fossil*out;
co2outcols = diag(co2fossil)*out;
% result of running this script will be the sum of fossil CO2
% emissions throughout the upstream process matrix
sum(co2outcols)

If we run the .m code either by double clicking it in the MATLAB window, or selecting it
and choosing the "Run" menu option, the result is 1.0334, which matches the Microsoft
Excel version of the model.
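For readers without MATLAB, the same normalize-invert-multiply pipeline can be sketched in Python with numpy; the 3-process technology and emission values below are toy numbers for illustration, not US LCI data:

```python
# The same normalize-invert-multiply pipeline as the MATLAB script, sketched
# with a tiny 3-process system (toy numbers, not US LCI data).
import numpy as np

funct_units = np.array([1.0, 3.6, 1.0])   # functional unit of each process
A_raw = np.array([[0.0, 0.0, 0.0],        # un-normalized technology flows
                  [0.4, 0.0, 0.1],
                  [0.0, 0.2, 0.0]])
B_raw = np.array([[0.9, 0.1, 0.05]])      # one emission flow (kg), un-normalized

A = A_raw / funct_units            # normalize columns by functional units
B = B_raw / funct_units
L = np.linalg.inv(np.eye(3) - A)   # total requirements (I - A)^-1
y = np.zeros(3); y[1] = 3.6        # demand 3.6 units (e.g., MJ) of process 2
e = B @ L @ y                      # total life cycle emission for this demand
print(e)
```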



Advanced Material for Chapter 9 Section 2: Process Matrix Models in SimaPro
In the Advanced Material for Chapter 5, demonstrations were provided on how to find
process data in SimaPro. Here we show how the process matrix LCI results of a particular
process can be viewed in SimaPro. Recall that SimaPro uses the same process matrix-based
approach as shown for Microsoft Excel in the chapter.
Using the same steps as shown in Chapter 5, find and select the US LCI database process for
Electricity, bituminous coal, at power plant. Click the analyze button (shown highlighted by the
cursor in Figure 9-9 below).

Figure 9-9: Analyze feature used in SimaPro to view data

The resulting window allows you to set some analysis options, as shown in Figure 9-10.

Figure 9-10: New Calculation Setup Window in SimaPro

If needed, change the amount from 1 to 3.6 MJ in the calculation setup window. Click the
Calculate button. In the resulting window (shown in Figure 9-11), click the "Process
contribution" tab. This shows the total technical flows from other products / processes
needed to make the chosen (3.6 MJ) amount of electricity. Note the default units checkbox
ensures the normally used units (e.g., kg) are displayed, otherwise SimaPro will try to
maintain 3 digits and move to a higher or lower order unit (e.g., grams).

Figure 9-11: Results of Process Contribution Analysis for Process in SimaPro

You will see that these values are the same as those presented by the Microsoft Excel
spreadsheets for the US LCI database (shown in Figure 9-4), but SimaPro tends to maintain
three significant digits so may be slightly different due to rounding. Clicking on the
inventory tab of the results window shows the E matrix results for all tracked flows (Figure
9-12). Substances, compartments, and units are presented. While not all are shown, the
results from the US LCI process matrix Excel spreadsheet would match those here.

Figure 9-12: Inventory Results in SimaPro

The final part to explore is the Network tab view. This tool creates a visualization of flows
for the entire network of connected processes (as summarized in the process contribution
tab). By default, SimaPro will truncate the Network display so as to reasonably draw the
network system without showing all flows. Figure 9-13 shows a default network diagram for
fossil CO2, with an assumed cut-off of about 0.09%. This cut-off can be increased or
decreased to see more or less of the network.


Figure 9-13: SimaPro Network view of process outputs (excerpt)

Further discussion of modeling with SimaPro is in a later chapter, but the analysis of LCI
and LCA results uses this same analyze feature.



Advanced Material for Chapter 9 Section 3: Process Matrix Models in openLCA
As with SimaPro, openLCA also uses a process matrix approach behind the scenes of the tool. Again, the focus of this section is on how to view LCI results.
To do this, start openLCA as described previously, and click on the "Product systems"
folder under the US LCI data folder. Choose the "Create a new product system" option.
You may name it whatever you like, and optionally give it a description. In the reference
process window, drill down through Utilities and then Fossil Fuel Electric Power Generation
to find the Electricity, bituminous coal, at power plant process. Keep both options at the bottom
of the window selected (as shown in Figure 9-14).

Figure 9-14: Creation of a New Product System in openLCA

You may then choose the calculate button at the top of the window (the green arrow with
x+y written on it) to do the analysis of the process, as shown in Figure 9-15.


Figure 9-15: Product System Information in openLCA

In the calculation properties dialog box, choose the "Analysis" calculation type, then click
the Calculate button as shown in Figure 9-16.


Figure 9-16: Calculation Properties Window in openLCA

The resulting window (not shown) has various tabs to display tables and graphs of the process matrix-calculated
upstream results for your product system (your selected process in this case). If you enable
it in the openLCA preferences, you can also download a spreadsheet export of analysis
results here. The "LCI Total" tab summarizes the inputs and output from the process
matrix calculation as shown in Figure 9-17. Again, these are very similar to the results from
the US LCI Excel spreadsheet or SimaPro. For our usual observation of the CO2 results,
openLCA seems to aggregate all carbon dioxide air emissions into a single value (the US LCI
database tracks 4 separate air emissions of CO2, including biogenic emissions).

Figure 9-17: LCI - Total Results for Product System in openLCA

The "Process results" tab shows the additional detail of the direct inputs and outputs from
the chosen process as well as the total upstream, as shown in Figure 9-18.


Figure 9-18: Process results view in openLCA



Chapter 10: Life Cycle Impact Assessment


In this chapter, we complete the discussion of the major phases of the LCA Standard by
defining and describing life cycle impact assessment (LCIA). This is the part of the standard
where we translate the inventory results already created into new information related to the
impacts of those flows, in order to help to assess their significance. These impacts may be
on ecosystems, humans, or resources. As with the previous discussions about quantitative
methods, life cycle impact assessment involves applying a series of factors to inventory
results to generate impact estimates. While many impact assessment models exist, we begin
by assessing some of the more common and simpler impact categories, such as those used
for energy use and greenhouse gases, and then move on to more comprehensive LCIA
methods used around the world. As always, our focus is on understanding the quantitative
fundamentals associated with these efforts.
Learning Objectives for the Chapter
At the end of this chapter, you should be able to:
1. Describe various impact categories of interest in LCA and the ways in which those
impacts can be informed by inventory flow information.
2. Describe in words the cause-effect chain linking inventory flows to impacts and
damages for various examples.
3. List and describe the various mandatory and optional elements of life cycle impact
assessment.
4. Select and justify LCIA methods for a study, and perform a classification and
characterization analysis using the cumulative energy demand (CED) and/or climate
change (IPCC) methods for a given set of inventory flows.

Why Impact Assessment?


To help motivate the general need to pursue LCIA, we create a hypothetical set of LCI
results that we will revisit throughout the chapter. These LCI results for two alternative
product systems, A and B, may have been generated either as part of a prior study intended
to only be an LCI (as opposed to an LCA), or as the LCI results to be subsequently used in
an LCA. Due to either data constraints, or explicitly chosen statements in the goal and
scope of the study, only a few flows have been tracked. As shown in Figure 10-1, a life cycle
interpretation analysis of these results based only on the inventory would be challenging.

Option A has more fossil CO2 emissions (5 kg) and use of crude oil (100 MJ), but fewer emissions of SO2 (2 kg), than Option B (2 kg, 80 MJ, and 5 kg, respectively). Aside from stating that obvious tradeoff, and asking how much of a compromise we would need to accept between the flows, it is not clear what else an interpretation may contribute towards the decision support for A versus B.
Flow                     Compartment   Units   Option A   Option B
Carbon dioxide, fossil   air           kg             5          2
Sulfur dioxide           air           kg             2          5
Crude oil                              MJ           100         80

Figure 10-1: Hypothetical Study LCI Results

The ideal case, of course, for the interpretation of LCI results is vector dominance, where one option's values are lower than the other's across all inventoried flows.
In such a case, we would always prefer the option with lower flows. In reality, vector
dominance in LCI results is rare, even with a small number of inventoried flows. As
inventory flows are added (i.e., more rows in Figure 10-1), the likelihood of vector
dominance nears zero, because more tradeoffs in flows are likely to appear across options.
It is the existence of tradeoffs, and the typical comparative use of LCA across multiple
product systems, that makes us seek an improved method to allow us to choose between
alternatives in LCA studies, and for that we need to use impact assessment.
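The dominance test is mechanical; a short sketch over the Figure 10-1 flows:

```python
# Vector dominance check for the hypothetical LCI results of Options A and B.
a = {"CO2, fossil (kg)": 5, "SO2 (kg)": 2, "Crude oil (MJ)": 100}
b = {"CO2, fossil (kg)": 2, "SO2 (kg)": 5, "Crude oil (MJ)": 80}
a_dominates = all(a[f] <= b[f] for f in a)   # A no worse on every flow
b_dominates = all(b[f] <= a[f] for f in b)
print(a_dominates, b_dominates)  # False False: a tradeoff exists, so LCIA is needed
```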

Overview of Impacts and Impact Assessment


In Chapter 1, we motivated the idea of thinking about impacts of product systems. We
showed that we might have concern for various impacts, and seek indicators to help us to
understand how to measure and assess those impacts. For example, we described how we
might measure our concern for fossil fuel depletion by tracking coal and natural gas use (in
MJ or BTU). Similarly we described how we might measure our concern for climate change
in terms of greenhouse gas emissions (in kg or tons). These indicators, which became our
LCI results, were intended to be short-term placeholders on our path to being able to circle
back and consider the eventual impacts. In this chapter, we take the next steps needed to
achieve this goal.
The idea of impact assessment is not new. Scientists have been performing impact
assessments for decades. Entire career domains exist for those interested in environmental
impact assessment, risk assessment, performance benchmarking, etc. A key difference
between life cycle impact assessment and other frameworks is its link to a particular
functional unit (and of course the entire life cycle as a boundary), which focuses our
attention on impacts as a function of that specific normalized quantity. Typically, risk or
environmental impact assessments are for entire projects or products, such as the
environmental impact of a highway expansion or a new commercial development. That said,

the methods we will use in life cycle impact assessment (LCIA) have in general been
informed and derived from activities in these other domains. Impact assessment is about
being able to consider the actual effects on humans, ecosystems, and resources, instead of
merely tracking quantities like tons of emissions or gallons of fuel consumed as a result of
production.
This chapter cannot fully describe the various methods needed to perform LCIA. Our focus
is on explaining what the ISO LCA Standard requires in terms of LCIA, and on the
qualitative and quantitative skills needed to document and complete this phase. Before
discussing the mandatory and optional elements for LCIA in the Standard, we reintroduce
the notion that there are various impacts that one might be concerned about, and discuss in
limited detail how we might frame our concerns for such impacts in an LCA study.

There are impact assessment categories for energy use and climate change, as we will see
later in the chapter. But before we get to the formal definitions of those methods, we will
reuse energy and climate examples along the way, as these two concepts are likely already
familiar to you. Many of the other impact categories and methods available are much more
complex, and we will save all discussion of those for later.

Figure 10-2 summarizes the different classes of issues of concern, called impact categories,
which are commonly used in LCA studies. Also included is the scale of impact (e.g., local or
global), and the typical kinds of LCI data results that can be used as inputs into methods
created to quantitatively assess these impacts. This list of impact categories is not intended
to be exhaustive in terms of listing all potential impact categories for which an individual
or a party might have concern, or in terms of the potentially connected LCI results.

Impact Category               | Scale                   | Examples of LCI Data (i.e., classification)
Global Warming                | Global                  | Carbon dioxide (CO2), Nitrous oxide (N2O), Methane (CH4), Chlorofluorocarbons (CFCs), Hydrochlorofluorocarbons (HCFCs), Methyl bromide (CH3Br)
Stratospheric Ozone Depletion | Global                  | Chlorofluorocarbons (CFCs), Hydrochlorofluorocarbons (HCFCs), Halons, Methyl bromide (CH3Br)
Acidification                 | Regional, Local         | Sulfur oxides (SOx), Nitrogen oxides (NOx), Hydrochloric acid (HCl), Hydrofluoric acid (HF), Ammonia (NH3)
Eutrophication                | Local                   | Phosphate (PO4), Nitrogen oxide (NO), Nitrogen dioxide (NO2), Nitrates, Ammonia (NH3)
Photochemical Smog            | Local                   | Non-methane hydrocarbons (NMHC)
Terrestrial Toxicity          | Local                   | Toxic chemicals with a reported lethal concentration to rodents
Aquatic Toxicity              | Local                   | Toxic chemicals with a reported lethal concentration to fish
Human Health                  | Global, Regional, Local | Total releases to air, water, and soil
Resource Depletion            | Global, Regional, Local | Quantity of minerals used, quantity of fossil fuels used
Land Use                      | Global, Regional, Local | Quantity disposed of in a landfill or other land modifications
Water Use                     | Regional, Local         | Water used or consumed

Figure 10-2: Summary of Impact Categories (US EPA 2006)

Impact Assessment Models for LCA


Figure 10-2 introduced various individual impact categories, but most of the attention and
examples so far have related to climate change and energy. While these continue to be the
most popular impact categories of interest in LCA (partly due to the relatively small amount
of uncertainty regarding their application and thus the large degree of scientific consensus on
their use), more comprehensive models of impacts exist that encompass multiple impact
categories and have been incorporated into LCA studies and software tools. Some of the
most frequently used LCIA methods are summarized and mapped to their available
characterization models in Figure 10-3. Some of these may already be familiar to those who
have reviewed existing studies.
As Figure 10-3 shows, some LCIA methods are focused on a single category, e.g.,
cumulative energy demand (CED), while others broadly encompass all of the listed
categories. Note that only the TRACI method is US-focused, with the remainder being
mostly Europe-focused.
[Figure: matrix of LCIA methods versus the impact categories each covers. Methods: CED,
CML2002, Eco-indicator 99, EDIP 2003/EDIP97, EPS 2000, Impact 2002+, IPCC, LIME, LUCAS,
MEEuP, ReCiPe, Swiss Ecoscarcity 07, TRACI, USEtox. Impact categories: climate change,
ozone depletion, respiratory inorganics, human toxicity, ionising radiation, ecotoxicity,
ozone formation, acidification, terrestrial eutrophication, aquatic eutrophication, land
use, resource consumption.]

Figure 10-3: Summary of Impact Categories (Characterization Models) Available in Popular
LCIA Methods (modified from ILCD 2010)

We will not be forced to choose a single impact category of concern. A study may set its
study design parameters to include several, all, or none of the impact categories from the list
in Figure 10-2, and thus may use one or more of the LCIA methods in Figure 10-3, with
varying comprehensiveness. Using a diverse set of impact categories could allow us to make
relevant comparisons across inventory flows so that we could credibly assess whether we
should prefer a product system that releases 3 kg less of CO2 to air or one that releases 3 kg
less SO2 to air (as in Figure 10-1). If our concerns are cross-media, such that some of our
releases are to air and some are to water or soil, the challenge is even greater because we
then need to balance concern for impacts in both ecosystems. Being able to take this next
step in our LCA studies beyond merely providing LCI results is significant. The degree of
difficulty and effort needed to successfully complete LCIA precludes some authors from
even attempting it (which, as discussed above, is a big driver for why so many studies end at
the LCI phase). As LCIA is perhaps most useful in support of comparative LCAs, it will not
typically be very interesting or useful to know the LCIA results of a single product system.

Beyond using multiple impact categories, multiple LCIA methods are often used in studies
to assess whether different approaches agree on the severity of the chosen impacts. Of
course, this is only useful when the LCIA methods use different underlying characterization
models. The outputs of LCIA methods will be discussed below.
In order to understand impact assessment, and thus LCIA, it is important to understand
how LCI results may eventually connect to impacts. Figure 10-4 shows the cause-effect
chain (also referred to as the environmental mechanism) for an emission category. Similar
chains exist for impact categories like resource depletion and land use. At the top are
emissions, sometimes referred to as stressors, so called because they are triggers for potential
impacts (the 'causes' in the cause-effect chain). While shown as single chains in the figure,
there may be various stressors all leading to the same potential impacts or damages.
Likewise, the same emissions may be the first link in the chain for multiple effects (not
shown).

Emissions → Concentrations → Impacts (midpoints) → Damages (endpoints)

Figure 10-4: General Cause-Effect Chain for Environmental Impacts
(Adapted from Finnveden 1992)

Next in the chain are concentrations, which in the case of air emissions are the resulting
contribution of increased emissions with respect to the rest of the natural and manmade
molecules in the atmosphere. A relatively small emission would have a negligible effect on
concentrations, while a large emission may have a noticeable effect on concentrations. In
the case of climate change impacts, increased emissions of greenhouse gases lead to
increased concentrations of greenhouse gases in the atmosphere.
As concentrations are changed in the environment, we would expect to see intermediate
impacts. For the case of climate change, increased concentrations of greenhouse gases are
expected to lead to increased warming (actually, radiative forcing). Emissions of
conventional pollutants lead to increased concentrations in the local atmosphere.
These intermediate points of the chain are also called midpoints, which are quantifiable

effects that can be linked back to the original emissions, but are not fully indicative of the
eventual effects in the chain.
Finally, damages arise from the impacts. These damages are also referred to as endpoints,
since they are the final part of the chain and represent the inevitable ending point with
respect to the original stressors. These damages or endpoints are the "effects" in the causeeffect chain. For global warming (or climate change), the damages/endpoints of concern
may be destruction of coral reefs, rising sea levels, etc. For conventional pollutants,
endpoints may be human health effects due to increased exposure to concentrations, like
increases in asthma cases or hospital admissions. For ozone depletion, we may be concerned
with increases in human cancer rates due to increased UV radiation. Note that LCIA will
not actually quantify these damages (i.e., it will not give an estimate of the number of coral
reefs destroyed or height of sea level change), but it will provide other useful and relevant
information that could subsequently allow us to consider them.
Fortunately, as we will learn below, the science behind impact assessment, while continuing
to be developed, is available for us to use without needing to build it ourselves. But using
the relevant methods still requires substantial understanding of how these methods work.
Getting to the idea of an endpoint is hard, and again, that is partly why people stop at the
inventory stage.
Along the way, we have seen how LCAs can yield potentially large lists of inventory results.
These are generally lists of inputs needed (e.g., fuels used) and outputs created (e.g., GHG
emissions) by our product systems. The prospect of impact assessment may create an
intimidating sense of "how will we pull together a coherent view of impact given this large
list of effects?" However, in reality, impact assessment methods are created exactly to deal
with using large inventories as inputs. Impact assessment methods will attempt to take the
detailed information in those inventories and create summary indicators of impacts from
them.

ISO Life Cycle Impact Assessment


In Chapter 4, we began our summary discussion of the ISO Standard. Figure 10-5 repeats
the original Figure 4-1 which overviews the major phases of the Standard. As we have
already discussed, the various phases are all iterative. We remind you that the text in this
chapter is not intended to replace a careful read of the ISO LCA Standard documents
specific to LCIA, as here we only summarize the information, link it to previously discussed
material, and show examples.


Figure 10-5: Overview of ISO LCA Framework (Source: ISO 14040:2006)

In life cycle impact assessment (LCIA), we associate our LCI results with various impact
categories and then apply other factors to these categorized results to give us information
about the relevant impacts of our results. We also then iteratively return to the life cycle
interpretation phase so that we can add to our interpretations made when only the LCI was
complete. LCIA also connects iteratively back to the LCI phase, so that if the LCIA results
do not help us in expected ways, we can refine the inventory analysis to try to improve our
study. While not shown as a direct connection in Figure 10-5, we may also iteratively decide
to adjust the study design parameters (i.e., goal and scope) if we interpret that our impact
assessment results are unable to meet our objectives.
As we will see below, some elements of LCIA may be subjective (i.e., influenced by our own
value judgments). As stated in the Standard, it is important to be as transparent as possible
about assumptions and intentions when documenting LCIA work so as to be clear about
these subjective qualities. Figure 10-6 shows the various steps in LCIA, which includes
several mandatory and several optional elements, each of which is discussed below. The
steps in LCIA are commonly referred to by the shorthand name in parentheses in the figure.


Figure 10-6: Overview of Elements of LCIA Phase (Source: ISO 14040:2006)

Mandatory Elements of LCIA


Selection
The first mandatory element of LCIA is the selection of impact categories, their indicators,
and the characterization models and LCIA methods to be used. In practice, this element also
involves sufficiently documenting the rationale behind these choices, which need to be
consistent with the stated goal and scope of the study. While we will discuss more of the
various possible impact categories later in this chapter (as well as indicators and
characterization models), we know from previous discussions that climate change is an
impact category. If we wanted to include climate change as one of our study's impact
categories, then we should justify why climate change is a relevant impact category given our
choice of study design parameters (goal and scope) and/or given the product system itself
(i.e., is it a product considered to be a major potential cause of climate change?).
The ISO Standard requires that the impact assessment performed encompass "a
comprehensive set of environmental issues" so that the study is not narrowly focused, for
example, on one particular hand-picked impact that might be chosen because it can easily show
low impacts. Thus, our LCIA should use multiple impact categories. Our justification
should include text for all of our chosen categories. A study's justification for the
selection of impact categories should not reflect only the author's personal preferences; it
should also consider those of the organization funding the study, or the organization
responsible for the product system. For example, if the organization requesting the study
has long-term goals of mitigating climate change in their actions, that would be an
appropriate justification for choosing climate change as an impact category when assessing
their products.
The LCIA methods selected should be relevant to the geographical area of the study. An
LCA on a product manufactured in a US factory would not be well-served by using an LCIA
method primarily developed in, and intended to be applied in, Europe. However, the
majority of LCIA models have been created only for the US and Europe, and thus, it can be
challenging to select a model if considering a product system in Asia. In such cases, it may
make sense to use multiple models outside of the relevant geographic region to consider
ranges of results and to try to generalize findings.
This step should also document and reference the studies on impacts used, i.e., the specific
scientific studies used to assess impacts of greenhouse gases. Of course, the vast majority of
LCA studies will be using well-established LCIA methods. Beyond this initial LCIA element
for justification of choices, the remaining mandatory elements involve the organization and
application of indicator model values to your previously generated inventory results.

Classification
Classification is the first quantitative element of LCIA, where the various inventory results
are organized such that they map into the frameworks of the relevant impact category
frameworks chosen for the study. Classification involves copying your inventory items into
a number of different piles, where each pile is associated with one of the impact categories
used by the selected LCIA methods.
Consider again the hypothetical inventory from Figure 10-1. If a study has selected climate
change as an impact category, then the carbon dioxide, fossil inventory flow would be classified
into that pile since it is a greenhouse gas (and the other two flows would not). If you chose
an impact category for energy, then the crude oil inventory flow would be classified there (and
the other two would not). If you chose no other impact categories, then the sulfur dioxide
flow would not be classified anywhere, and would have no effect on the impact assessment.
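Mechanically, classification is a set-membership lookup: each LCIA method carries a list of the flows it recognizes, and matching inventory flows are copied into that method's pile. A minimal sketch (the flow lists here are illustrative and deliberately abridged, not the methods' actual lists):

```python
# Illustrative, abridged classification lists for two LCIA methods.
ipcc_flows = {"Carbon dioxide, fossil", "Methane, fossil", "Dinitrogen monoxide"}
ced_flows = {"Crude oil", "Hard coal", "Natural gas", "Uranium"}

# The hypothetical inventory from Figure 10-1 (units omitted for brevity).
inventory = {"Carbon dioxide, fossil": 5, "Sulfur dioxide": 2, "Crude oil": 100}

def classify(inventory, method_flows):
    """Copy the inventory flows recognized by a method into that method's pile."""
    return {flow: qty for flow, qty in inventory.items() if flow in method_flows}

ipcc_pile = classify(inventory, ipcc_flows)  # {'Carbon dioxide, fossil': 5}
ced_pile = classify(inventory, ced_flows)    # {'Crude oil': 100}
# Sulfur dioxide matches neither list, so it would not affect this impact assessment.
```

Note that classification only arranges flows into piles; no quantities are changed.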
To be able to perform classification, each LCIA method must have a list of inventory flows
connected to that impact. As discussed in Chapter 5, LCI results can have hundreds or

thousands of flows. Thus, the list of relevant connected flows for LCIA methods can
likewise be substantial (hundreds or thousands of interconnections). Classification has no
quantitative effect on the inventory flows other than arranging and creating piles. It is
possible that the classified list of inventory flows relevant to a chosen LCIA method have
different underlying units (e.g., kg, g, etc.). These differences will be managed in subsequent
elements of the LCIA.
Amongst the most widely used impact categories are those for climate change and energy
use. Two specific underlying methods to support these are the Intergovernmental Panel for
Climate Change (IPCC) 100-year global warming potential method and the cumulative
energy demand (CED) method, respectively. We describe each below and use them to
illustrate the mechanics of the various LCIA elements through examples. Since the IPCC
and CED methods have all of the mandatory elements, they qualify as LCIA methods, but
they are fairly simplistic and singularly focused compared to some of the more advanced
LCIA methods in Figure 10-3. Studies that only consider energy and global warming
impacts are sometimes viewed as being narrowly focused with respect to impact assessment,
especially since energy and climate results tend to be very similar.
Figure 10-7 provides an abridged list of substance names and chemical formulas (generally
greenhouse gases) that are classified in the IPCC method introduced above. Thus, any of
the substances in this list that are in an LCI would be copied into the pile of classified
substances to be used in assessing the impacts of climate change.

Name                 | Chemical Formula
Carbon dioxide       | CO2
Methane              | CH4
Nitrous oxide        | N2O
CFC-11               | CCl3F
CFC-12               | CCl2F2
CFC-13               | CClF3
CFC-113              | CCl2FCClF2
CFC-114              | CClF2CClF2
CFC-115              | CClF2CF3
Halon-1301           | CBrF3
Halon-1211           | CBrClF2
Halon-2402           | CBrF2CBrF2
Carbon tetrachloride | CCl4
Methyl bromide       | CH3Br
Methyl chloroform    | CH3CCl3
HCFC-22              | CHClF2
HCFC-123             | CHCl2CF3
HCFC-124             | CHClFCF3
HCFC-141b            | CH3CCl2F
HCFC-142b            | CH3CClF2
HCFC-225ca           | CHCl2CF2CF3
HCFC-225cb           | CHClFCF2CClF2

Figure 10-7: (Abridged) List of Substances Classified into IPCC (2007) LCIA Method

Likewise, Figure 10-8 provides an example list of energy flows that would be classified into
the CED method. Note that the CED method further sub-classifies renewable and non-renewable energy, as well as particular subcategories of energy (e.g., fossil, solar). Also note
that the listings in Figure 10-8 are not specific to known flows in any of the databases. One
database might have a flow for a particular kind of coal or wood that is named something
different in another database.
Category                | Subcategory    | Included Energy Sources
Non-renewable resources | fossil         | hard coal, lignite, crude oil, natural gas, coal mining off-gas, peat
                        | nuclear        | uranium
                        | primary forest | wood and biomass from primary forests
Renewable resources     | biomass        | wood, food products, biomass from agriculture, e.g. straw
                        | wind           | wind energy
                        | solar          | solar energy (used for heat & electricity)
                        | geothermal     | geothermal energy (shallow: 100-300 m)
                        | water          | run-of-river hydro power, reservoir hydro power

Figure 10-8: Energy Sources Classified into Cumulative Energy Demand (CED) LCIA Method
(Source: Hischier 2010)

If classification is done manually (which is rare), then various quality control problems could
occur in creating the piles of classified inventory flows. For example, you would need to
look at each of your inventory results and then check every LCIA method's list of classified
substances to see whether it should be put into that pile, and to put it into the correct pile.
It would be easy to make errors in such a process, either by not noticing that certain

inventory flows are classified into a method, or by classifying the wrong flows (e.g., those in
an adjacent row number).
In practice, most classification is done via the use of software tools and/or matrix
manipulation. Even so, making the classification process work efficiently is not easy. There
are also potential problems associated with the computerized classification process (Hischier
2010). First, inventory flows reported in databases (or from primary data collection) may be
named inconsistently with scientific practice, and cause mismatches or inability to match
with LCIA methods. For example, one source may list CO2 and another carbon dioxide.
Behind the scenes of the software tools, many of the "matches" are done by using CAS
numbers to avoid such problems. CAS Numbers give unique identities to specific
chemicals (e.g., formaldehyde is 50-00-0). Beyond naming problems, an LCIA method may
have many listed flows that should be classified under it, but the inventory done may be so
streamlined that none of the classified flows have been estimated. Conversely, a relatively
substantial LCI may have no flows that can be classified into any of the selected LCIA
methods. In short, the connection between flows in an LCI and the classification list of
flows in the LCIA method is not one-to-one. Of course, should problems like these be
identified during the study, then changes should be made to the study's goal, scope, or
inventory results to ensure that relevant flows are identified so as to be able to make use of
the selected LCIA method (or, of course, the LCIA method should be adjusted).
This potential disconnect between the available and quantified inventory flows and the
inputs of LCIA methods is critical to understand. Since most LCIA
methods have a large list of classifiable LCI flows, it is critical that inventory efforts are
sufficiently robust so as to make full use of the methods. Thus, inventories must track a
sufficient number of flows needed for the classification step of the chosen method. A
significant risk is posed when doing primary data collection of a new process. Imagine the
case where the study author has chosen a climate change method for LCIA. If only CO2 is
inventoried as part of the boundary set in the data collection effort, then the potential
climate change effects due to non-CO2 GHGs (which are more potent) cannot be
considered in the LCIA. One could use IO-LCA screening as a guide to help explicitly
screen for inventory flows that should be measured or verified to be zero when using a
particular method. For example, if another round of data collection could measure
emissions of methane or other GHGs, it could have a substantial effect on the results.
It is possible that you could have chosen an impact category (or categories) such that none
of your quantified inventory flows are classified into the pile for that category, giving you a
zero impact result. While unlikely, this again would be a situation where you would want to
iterate back to the inventory stage and either redouble data collection efforts, or iterate all
the way back to goal and scope to change the parameters of the study.


It is likely that inventory flows will be classified into multiple impact category piles. For
example, various types of air emissions may be classified into a climate change impact
category, an acidification impact category, and others. In these cases, the entire flow is
classified into each pile (not assigned to only one of the piles, and not having its flows
allocated across the impact piles). Figure 10-9 shows the classification results for the
hypothetical inventory example for the IPCC and CED methods. Once the classification is
completed, the LCIA proceeds to the next required step, characterization.
Classification: Climate Change Impact Category (IPCC)

Flow                   | Compartment | Units | Option A | Option B
Carbon dioxide, fossil | air         | kg    | 5        | 2

Classification: Energy Impact Category (CED)

Flow                   | Compartment | Units | Option A | Option B
Crude oil              |             | kg    | 10       | 8

Figure 10-9: Classification of Hypothetical Inventory

Characterization
The characterization element of LCIA quantitatively transforms each set of classified
inventory flows via characterization factors (also called equivalency factors) to create
impact category indicators relevant to resources, ecosystems, and human health. The
purpose of characterization is to apply scientific knowledge of relative impacts such that all
classified flows for an impact can be converted into common units for comparison. The
characterization methods are pre-existing scientific studies that are leveraged in order to
create the common units. For example, in the climate change impact example we have been
using in this chapter, the characterization method is from IPCC (2007). This IPCC method
is well known for creating the global warming potential equivalency values for greenhouse
gases, where CO2 is by definition given a value of 1 and all other greenhouse gases have a
factor in equivalent kg of CO2, also abbreviated as CO2-equiv or CO2e. Similar to other
methods, this creates (in effect) a weighting factor adjustment for greenhouse gases.
Furthermore, since all characterized values are in equivalent kg of CO2, the values can be
aggregated and reported in the common unit of an impact category indicator. The IPCC
report actually provides several sets of characterization factors, for different time horizons of
greenhouse gases in the atmosphere. The factors typically used in LCA and other studies are
the IPCC 100-year time horizon values, but values for 20 and 500 years are also available.
Figure 10-10 shows the characterization (equivalency) factors for greenhouse gases in the
IPCC Fourth Assessment Report (2007) 100-year method. Thus, 1 kg of methane has the
warming potential of 25 kg of carbon dioxide. Any classified greenhouse gases (or other
substances appearing in the list of characterized flows) would then be multiplied by the "kg
CO2e/kg of substance" factors to create the characterized value for each inventory flow.

Name                 | Chemical Formula | Characterization Factor (kg CO2-eq / kg of substance)
Carbon dioxide       | CO2              | 1
Methane              | CH4              | 25
Nitrous oxide        | N2O              | 298
CFC-11               | CCl3F            | 4,750
CFC-12               | CCl2F2           | 10,900
CFC-13               | CClF3            | 14,400
CFC-113              | CCl2FCClF2       | 6,130
CFC-114              | CClF2CClF2       | 10,000
CFC-115              | CClF2CF3         | 7,370
Halon-1301           | CBrF3            | 7,140
Halon-1211           | CBrClF2          | 1,890
Halon-2402           | CBrF2CBrF2       | 1,640
Carbon tetrachloride | CCl4             | 1,400
Methyl bromide       | CH3Br            | 5
Methyl chloroform    | CH3CCl3          | 146
HCFC-22              | CHClF2           | 1,810
HCFC-123             | CHCl2CF3         | 77
HCFC-124             | CHClFCF3         | 609
HCFC-141b            | CH3CCl2F         | 725
HCFC-142b            | CH3CClF2         | 2,310
HCFC-225ca           | CHCl2CF2CF3      | 122
HCFC-225cb           | CHClFCF2CClF2    | 595

Figure 10-10: IPCC 2007 100-year Characterization Factors (abridged)
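Characterization is then a multiply-and-sum over each pile: every classified mass is scaled by its factor, and the results are aggregated into the category indicator. A sketch using a few of the IPCC factors above and a hypothetical classified pile of greenhouse gases:

```python
# IPCC (2007) 100-year factors from Figure 10-10, in kg CO2e per kg of substance.
GWP100 = {"Carbon dioxide": 1, "Methane": 25, "Nitrous oxide": 298, "HCFC-22": 1810}

# Hypothetical classified greenhouse gas pile, in kg.
classified = {"Carbon dioxide": 5.0, "Methane": 0.1, "Nitrous oxide": 0.01}

def climate_indicator(classified, factors):
    """Category indicator: sum of each gas's mass times its equivalency factor."""
    return sum(qty * factors[flow] for flow, qty in classified.items())

total = climate_indicator(classified, GWP100)
# 5*1 + 0.1*25 + 0.01*298, i.e. about 10.48 kg CO2e
```

Note how a small mass of a potent gas matters: the 0.01 kg of N2O contributes 2.98 kg CO2e, more than half as much as the 5 kg of CO2 itself.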

While similar in application, the Cumulative Energy Demand (CED) method introduced
above has multiple subcategories for which inventory flows are classified, and thus an
additional level of characterization factors by subcategory. The characterization factors
transform original physical units of energy from each source into overall MJ-equivalent
category indicator values. Category indicators for CED are typically reported by subcategory
(e.g., fossil, nuclear, solar, or wind), aggregated into the categories (e.g., non-renewable and
renewable), and then aggregated into a total (cumulative) energy demand. Figure 10-11
shows CED characterization values used in the ecoinvent model.

CED characterization factors (MJ-equivalent per unit)

Source                          | Unit | Category (subcategory)         | MJ-eq per unit
Coal, brown                     | kg   | non-renewable (fossil)         | 9.90
Coal, hard                      | kg   | non-renewable (fossil)         | 19.10
Natural gas                     | Nm3  | non-renewable (fossil)         | 38.29
Uranium                         | kg   | non-renewable (nuclear)        | 560,000
Crude oil                       | kg   | non-renewable (fossil)         | 45.80
Peat                            | kg   | non-renewable (fossil)         | 9.90
Energy, biomass, primary forest | MJ   | non-renewable (primary forest) | 1
Energy, in biomass              | MJ   | renewable (biomass)            | 1
Energy, wind (kinetic)          | MJ   | renewable (wind)               | 1
Energy, solar                   | MJ   | renewable (solar)              | 1
Energy, geothermal              | MJ   | renewable (geothermal)         | 1
Energy, hydropower (potential)  | MJ   | renewable (water)              | 1

Figure 10-11: Cumulative Energy Demand Values Used in Ecoinvent Model
(Abridged from Hischier 2010). Nm3 means normal cubic metre (normal temperature and pressure)

Models may internally change their mappings between inventory flows. For example,
ecoinvent maps hard and soft wood uses into the energy, biomass categories shown above.
Likewise, energy values are often pre-converted and adjusted to appropriate categories when
creating the processes in LCI databases (as in the kinetic and potential energy values in
Figure 10-11). Due to differences in naming inventory flows in different systems, CED
characterization factors often have to be tailored for different frameworks (i.e., CED values
used for the US LCI database may be different than those above). All of these issues make
comparisons of CED results across different databases and software tools problematic.
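The CED calculation works like the IPCC one, except characterized values are kept per subcategory before being rolled up into category and total demand. A sketch using a few of the factors from Figure 10-11 (the mapping structure here is our own illustration, not ecoinvent's internal format):

```python
# Abridged CED factors from Figure 10-11: flow -> (category, subcategory, MJ-eq per unit).
CED = {
    "Crude oil":     ("non-renewable", "fossil",  45.80),    # per kg
    "Coal, hard":    ("non-renewable", "fossil",  19.10),    # per kg
    "Uranium":       ("non-renewable", "nuclear", 560_000),  # per kg
    "Energy, solar": ("renewable",     "solar",   1.0),      # per MJ
}

def ced_profile(classified):
    """Aggregate characterized MJ-eq by (category, subcategory)."""
    profile = {}
    for flow, qty in classified.items():
        category, subcategory, factor = CED[flow]
        key = (category, subcategory)
        profile[key] = profile.get(key, 0.0) + qty * factor
    return profile

# Option A's classified energy pile from Figure 10-9: 10 kg of crude oil.
print(ced_profile({"Crude oil": 10}))  # {('non-renewable', 'fossil'): 458.0}
```

Subcategory totals can then be summed into non-renewable, renewable, and cumulative totals, as reported in the chapter's example.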
While not discussed in this book, the science behind the development of characterization
factors for use in LCIA methods is an extremely time consuming and comprehensive
research task. Activities involved include finding impact pathways, relating flows to each
other, and then inevitably the development of the equivalency factors. Such research takes
many person-years of effort, yet the provision of convenient equivalency factors as shown
above may make the level of rigor appear to be small.

Characterized flow = inventory flow (in raw flow units) × characterization factor
(in indicator units per unit of inventory flow)
Category Indicator Results or LCIA Results
The summary of all category indicator values used is referred to as the LCIA profile. Using
the IPCC and CED factors above, Figure 10-12 shows the LCIA profile associated with
Figure 10-9. Note that the CED values are the product of the raw values for kg of crude oil
(10 and 8 for Options A and B, respectively) with the CED characterization factor for crude
oil of 45.8 MJ-eq/kg.
Characterization: Climate Change (IPCC 2007)
Indicator                  Units            Option A   Option B
Equivalent releases CO2    kg CO2 equiv.       5          2

Characterization: Energy (CED)
Indicator                  Units            Option A   Option B
Non-renewable fossil       MJ-eq.            458        366
Non-renewable nuclear      MJ-eq.              0          0
Non-renewable forest       MJ-eq.              0          0
Non-renewable total        MJ-eq.            458        366
Renewable total            MJ-eq.              0          0

Figure 10-12: LCIA Profile of Hypothetical Example
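As a sketch of the arithmetic behind the CED rows of this profile, the characterization step can be reproduced in a few lines of Python. The 45.8 MJ-eq/kg factor for crude oil comes from Figure 10-11 and the inventory quantities (10 and 8 kg) from the hypothetical example; the function and dictionary names are our own illustration (the table above reports the Option B value rounded to 366).

```python
# Characterized flow = inventory flow * characterization factor.
# A minimal sketch of LCIA classification and characterization;
# only flows with a matching factor are characterized.

CED_FACTORS = {"crude oil": 45.8}  # MJ-eq per kg, from Figure 10-11

def characterize(inventory, factors):
    """Multiply each classified inventory flow by its characterization factor."""
    return {flow: round(qty * factors[flow], 1)
            for flow, qty in inventory.items() if flow in factors}

option_a = {"crude oil": 10}  # kg, hypothetical example
option_b = {"crude oil": 8}   # kg

print(characterize(option_a, CED_FACTORS))  # {'crude oil': 458.0}
print(characterize(option_b, CED_FACTORS))  # {'crude oil': 366.4}
```

The same pattern extends to any impact category: swap in the IPCC global warming potentials and the kg CO2-equivalent rows of the profile fall out identically.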

Characterization represents the last of the initial
mandatory elements in LCIA, as the remaining elements are optional, and many LCA studies
skip all optional elements. If it were to be the final step, one could next interpret the results
in Figure 10-12. Given that only energy and greenhouse gas related impacts were chosen for
the study, and that these impacts tend to be highly correlated, it is not a surprise to see that
the characterized LCIA results suggest the same result, i.e., that Option B has lower impacts
than Option A. The interpretation of course should still highlight the fact that this result
occurs because of the chosen impact assessment methods. If other impact categories were
selected, different answers could result, including tradeoffs between impacts.


There is a final step, evaluation and reporting, which is not officially in the Standard but is
described after the optional elements below. This step should be done after characterization
regardless of whether the optional elements are pursued.

Optional Elements of LCIA


The remaining text in this chapter discusses the optional elements of LCIA. Note that each
of the elements below is independently optional, i.e., one could extend the characterized
result by performing none, some, or all of them. The underlying concepts of the optional
elements are far simpler than those of the mandatory elements, and thus the explanations
are more concise. Part of the reason they are optional is that they build on the relatively
objective results of the mandatory elements and may introduce subjective components
(even if not perceived as subjective by the study authors) into the LCA. They also modify
the "pure" results produced by characterization, which rest on established scientific factors
used throughout the community. Once past the threshold between the mandatory and
optional elements, two parties provided with the same characterized LCIA results could
generate different final LCIA results. Beyond the subjectivity issues, taking the additional
optional steps can lead to results that are hard to validate or compare against in future
LCA studies. Because of these issues, as noted above, many studies end the LCIA phase of
the study at characterization.
Normalization
Normalization of LCIA results involves dividing them by a selected reference value. A
separate reference value is chosen for each impact. The rationale of normalization is both
to provide perspective or context for LCIA results and to help validate them. There is no
specified set of reference values to be used in all LCIA studies.
The Standard provides suggestions on useful reference values, such as dividing by total (or
total per-capita) known indicators in a region or system, total consumption of resources, etc.
Another useful normalization factor is an LCIA indicator result for one of the alternatives
studied (or the indicator value from a previously completed study of a similar product
system) as a baseline. The chosen reference value might be the largest or smallest of the
results. In this type of normalization, a key benefit is the creation of a ratio-like normalized
indicator for which alternatives can be compared. For example, any normalized result
greater than 1 has higher impact than the baseline, and any less than 1 has lower impact.
A potential downside of normalization is that the vast majority of product systems studied
will have negligible impacts compared to the total or even the per-capita values to be used as
the reference value. Normalized values thus tend to be extremely small, and their effect
can be viewed as irrelevant or negligible. This can be partially addressed by choosing
reference values at a similar scale; for instance, instead of using the total annual impact or
resource consumption, one may choose a daily value. It can also be addressed by assuming a
level of production for the product system and scaling up the functional unit of the study so
that the normalized values are larger. As an example, in a study considering the life cycle of
gasoline, the functional unit could be 100 billion gallons per year instead of 1 gallon.
Given the potential for comparability issues, it is often useful to develop multiple
normalization factors, and to perform sensitivity analyses on the normalization results.
Various LCA communities around the world have invested time and research effort in the
development and dissemination of normalization databases and factors to be used in support
of LCA studies. Such efforts are extremely valuable as they serve to provide a common set
of factors that can be cited and used broadly in studies of impacts in the relevant country. It
also removes the need for practitioners to independently create their own normalization
values, which can cause problems in comparing results across studies. In the US, the EPA
published a set of total and per-capita normalization factors to be used as relevant for the
year 1999 (Bare et al. 2006) in support of the TRACI US LCIA model as shown in Figure
10-13 (yearly) and Figure 10-14 (yearly, per capita). The "NA" values in the tables represent
normalization factors that are unnecessary, such as for greenhouse gas emissions to water, or
fossil fuel depletion from air or water. Note that while these values were created as specific
to the year 1999, various practitioners have continued to use them as-is for the past
decade. The population assumed in deriving the per-capita estimates was 280 million, so
one could update the per-capita factors with the current population if desired while retaining
the 1999 baselines.


Impact category          Air         Water       Total normalized value   Normalized unit
Acidification            2.08 E+12   NA          2.08 E+12                H+ equiv/yr
Ecotoxicity              2.03 E+10   2.58 E+08   2.06 E+10                2,4-D equiv/yr
Eutrophication           1.44 E+09   3.58 E+09   5.02 E+09                N equiv/yr
Global warming           6.85 E+12   NA          6.85 E+12                CO2 equiv/yr
Human health cancer      7.03 E+07   1.76 E+06   7.21 E+07                benzene equiv/yr
Human health noncancer   3.69 E+11   4.24 E+10   4.11 E+11                toluene equiv/yr
Human health criteria    2.13 E+10   NA          2.13 E+10                PM2.5 equiv/yr
Ozone depletion          8.69 E+07   NA          8.69 E+07                CFC-11 equiv/yr
Photochemical smog       3.38 E+10   NA          3.38 E+10                NOx equiv/yr
Fossil fuel depletion    NA          NA          1.14 E+07                surplus MJ of energy/yr

Figure 10-13: Summary of Total Annual Normalization Factors for US, 1999 (Source: Bare et al. 2006)

Impact category          Air         Water       Total normalized value per capita   Normalized unit per capita
Acidification            7.44 E+03   NA          7.44 E+03                           H+ equiv/yr/capita
Ecotoxicity              7.29 E+01   9.24 E-01   7.38 E+01                           2,4-D equiv/yr/capita
Eutrophication           5.15 E+00   1.28 E+01   1.80 E+01                           N equiv/yr/capita
Global warming           2.45 E+04   NA          2.45 E+04                           CO2 equiv/yr/capita
Human health cancer      2.52 E-01   6.30 E-03   2.58 E-01                           benzene equiv/yr/capita
Human health noncancer   1.32 E+03   1.52 E+02   1.47 E+03                           toluene equiv/yr/capita
Human health criteria    7.63 E+01   NA          7.63 E+01                           PM2.5 equiv/yr/capita
Ozone depletion          3.11 E-01   NA          3.11 E-01                           CFC-11 equiv/yr/capita
Photochemical smog       1.21 E+02   NA          1.21 E+02                           NOx equiv/yr/capita
Fossil fuel depletion    NA          NA          4.08 E-02                           surplus MJ of energy/yr/capita

Figure 10-14: Summary of Per-Capita Normalization Factors for US, 1999 (Source: Bare et al. 2006)

Given our example above, we could create normalized values for Figure 10-12 by dividing
the equivalent CO2 releases by the total factor of 6.85E+12 and/or the per-capita value of
2.45E+04, and the energy values by 1.14E+07 and 4.08E-02, respectively.
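The normalization arithmetic just described can be sketched as follows, using the total US factors from Figure 10-13. The dictionary and function names are our own, and note the caveat that TRACI's "surplus MJ" fossil fuel depletion unit is not strictly the same quantity as a CED MJ-equivalent; the division is shown only to illustrate the mechanics (and the extremely small values that result, as discussed above).

```python
# Normalization: divide each characterized result by a reference value.
# Total US 1999 reference values from Figure 10-13 (Bare et al. 2006).
REFERENCE = {
    "global warming": 6.85e12,          # kg CO2 equiv/yr
    "fossil fuel depletion": 1.14e7,    # surplus MJ of energy/yr
}

def normalize(results, reference):
    """Divide each impact category result by its chosen reference value."""
    return {cat: val / reference[cat] for cat, val in results.items()}

# Characterized results for Option A from Figure 10-12
option_a = {"global warming": 5.0, "fossil fuel depletion": 458.0}

for cat, val in normalize(option_a, REFERENCE).items():
    # A single product's share of a national total is, unsurprisingly, tiny.
    print(f"{cat}: {val:.2e}")
```

Choosing a different reference, such as one of the study alternatives as a baseline, turns the same division into a ratio where values above 1 indicate higher impact than the baseline.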
Grouping

Grouping of LCIA results is achieved by combining LCIA results to meet objectives stated
in the goal and scope. If a study includes only one or two preselected impacts, then the
benefits of grouping are not apparent. However, if more than a handful of impacts have been
selected, then grouping them together for reporting and presentation can help to guide the
reader through the results.

Grouping is accomplished by sorting and/or ranking the characterized or normalized LCIA
results. The Standard allows sorting of the results along dimensions such as the values, the
spatial scales, etc. Ranking, on the other hand, is done by creating a hierarchy, such as a
subjectively defined high-medium-low impact priority, to place the impacts in context with
each other. Since it involves deciding how to prioritize impacts, grouping should be done
carefully, and should acknowledge that other parties might create different rankings based
on different priorities.
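A sort-then-rank grouping can be sketched in a few lines. The normalized values and the high/medium/low thresholds below are invented for illustration; as the text notes, another party could legitimately choose different thresholds and obtain a different hierarchy.

```python
# Grouping: sort normalized LCIA results, then rank them into a
# subjectively defined high/medium/low priority hierarchy.
results = {"global warming": 2.0e-4, "acidification": 6.7e-4, "smog": 8.3e-6}

# Sorting along the dimension of the normalized values (largest first)
ordered = sorted(results.items(), key=lambda kv: kv[1], reverse=True)

def rank(value, high=1e-4, medium=1e-5):
    """Arbitrary thresholds; a different party might choose others."""
    if value >= high:
        return "high"
    return "medium" if value >= medium else "low"

for category, value in ordered:
    print(f"{category}: {value:.1e} ({rank(value)})")
```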

Weighting
Weighting of LCIA results is the most subjective of the optional elements. In weighting, a
set of factors is developed, one for each of the chosen impact categories, and the results are
multiplied by the weighting factors to create a set of weighted impacts. Weighting factors
may be derived with stakeholder involvement. As with grouping, the practice of weighting is
subjective and could lead to different results for different authors or parties; the weights
chosen in the study may differ from those the reader would choose. Regardless, the method
used to generate the weighting factors, and the weighting factors themselves, need to be
documented. Results should be shown both with and without weighting factors applied.
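As a sketch, the weighting step is a per-category multiplication. The weights and results below are purely illustrative values of our own, not from any real study, and following the guidance above the code reports results both with and without the weights applied.

```python
# Weighting: multiply each impact category result by a weighting factor.
# These weights are invented for illustration; a real set would be derived
# with stakeholder involvement and documented in the study.
weights = {"global warming": 0.6, "acidification": 0.3, "smog": 0.1}
results = {"global warming": 5.0, "acidification": 1.2, "smog": 0.4}

weighted = {cat: results[cat] * weights[cat] for cat in results}

# Report results both with and without the weights applied.
for cat in results:
    print(f"{cat}: unweighted={results[cat]}, weighted={weighted[cat]:.2f}")
```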
Beyond subjectivity concerns, a study may have been motivated by a particular set of
potential impacts, such as local emissions of hazardous substances at a factory. In such a
case, weighting those impacts more heavily than others can be deemed credible and fit well
within the goal and scope considerations. It also means that a separate study of the same
product system but with a different production location (or a different set of weights) could
lead to a different perceived impact.


Last Step - Evaluation and Reporting


While not listed as an element in the Standard, a final step of LCIA is to evaluate and report
on results from the various elements. It is important that intermediate LCIA profile results
from the individual mandatory (and optional) elements be shown. This prevents the study
from, for example, providing only final results that have been normalized and/or grouped
and/or weighted, at the expense of not showing what the characterized results would have
been. Showing the pure impact assessment results also gives your study greater utility as it
will be relevant for comparison to a larger number of other studies.


Chapter Summary
As first introduced in Chapter 4, life cycle impact assessment (LCIA) is the final quantitative
phase of LCA. LCIA allows us to transform the basic inventory flows created from the
inventory phase of the LCA and to attempt to draw conclusions related to the expected
impacts of these flows for product systems. While climate change and cumulative energy
demand tend to dominate LCA studies, other impact categories of broad interest have
characterization models that are scientifically credible and available for use. Despite the
availability of these credible models and tools, many LCA studies continue to focus just on
generating inventory results, or at most, use only climate and energy impact models.
Now that we have reviewed all of the important phases of LCA, in the next few chapters we
focus on ways in which we can create robust analyses that will serve our intended goals of
building quantitatively sound and rigorous methods.

References for this Chapter


Bare, Jane, Gloria, Thomas, and Norris, Gregory, "Development of the Method and U.S.
Normalization Database for Life Cycle Impact Assessment and Sustainability Metrics",
Environmental Science and Technology, 2006, Vol. 40, pp. 5108-5115.
Finnveden, G., Andersson-Sköld, Y., Samuelsson, M-O., Zetterberg, L., and Lindfors, L-G.,
"Classification (impact analysis) in connection with life cycle assessments: a preliminary
study", in Product Life Cycle Assessment: Principles and Methodology, Nord 1992:9, Nordic
Council of Ministers, Copenhagen, 1992.
Hischier, Roland and Weidema, Bo (Editors), "Implementation of Life Cycle Impact
Assessment Methods", Data v2.2 (2010), ecoinvent report No. 3, St. Gallen, July 2010.
"ILCD Handbook: Analysis of existing Environmental Impact Assessment methodologies
for use in Life Cycle Assessment", First edition, European Union, 2010.
IPCC Fourth Assessment Report: Climate Change 2007. Available at www.ipcc.ch, last
accessed October 30, 2013.
"Life Cycle Assessment: Principles and Practice", United States Environmental Protection
Agency, EPA/600/R-06/060, May 2006.
Homework Questions for Chapter 10
TBA


Advanced Material: LCIA in SimaPro


Follow along with lecture notes


Chapter 11: Uncertainty and Variability Assessment in LCA


Every number we measure or estimate is uncertain. In this chapter, we discuss issues related
to uncertainty and variability in life cycle data, as well as in LCA and LCIA results, and the
implications of uncertainty and variability for interpreting study results. These implications
are perhaps most critical when doing comparative assessments, where our qualitative
conclusions may depend upon the quantitative strength of our data and results.
As already motivated in several chapters, uncertainty and variability play a big role in the use
of data and models in LCA, and they should be addressed when using data, creating models,
and interpreting results. The ways of addressing them range from qualitative identification
of uncertainty and variability, through sensitivity analysis and the use of quantitative ranges,
up to probabilistic definitions of data and results.

Chapter Quote: "A decision made without taking uncertainty into account is barely worth
calling a decision." (Wilson 1985)

Learning Objectives for the Chapter


At the end of this chapter, you should be able to:
1. Describe why uncertainty and variability affect LCA model results
2. Describe the various sources and types of uncertainty and variability for data and
methods
3. Develop methods that incorporate uncertainty into LCA Models



Why Uncertainty Matters


To help frame where our work so far has led us, Figure XX shows results from the actual
Hocking (XXX) study, which continues to be the typical result of a generic comparative
LCA. In typical LCA studies, a long sequence of assumptions and citations leads to a
"one-off" LCA model (and results) that can be expressed in either table or figure form.
Such results are typically the total LCI (or LCIA, if we are lucky) results expressed as a
single value. When "A is less than B" for a particular LCI result or impact, we say that, in
comparison, A is better. Our threshold for making such an observation is generally not
stated, but typically is a simple less-than comparison. It does not matter whether A is less
than B only in the third or fourth significant digit (as shown in Figure XX); it is lower, so it
wins the comparison test and is concluded to perform better than B.

Chapter 1 introduced several of the landmark studies in the field of LCA. Notable amongst
these were those associated with the "paper versus plastic" debate of the 1990s. These
debates raged in terms of trying to promote paper or plastic as the material of choice for
items such as cups and shopping bags. As summarized there, the general answer, both then
and for many years since, to the question of "which is better for the environment, paper or
plastic?" has been a resounding "it depends", i.e., the results have been inconclusive.
Similar "it depends" conclusions have resulted for comparisons of cloth and
disposable diapers, internal combustion and hybrid-electric engine vehicles, as well as
petroleum-based or bio-based fuels. While the particular reasons these comparisons failed
to reach a specific conclusion "depend" on many things, in this chapter we begin by
discussing, more substantively and quantitatively, how the broad issues of uncertainty and
variability (and the various methods for appreciating them) raised in earlier chapters affect
our studies, in the hope that we can both better appreciate the causes so as to build better
models, and better interpret our results.


Before diving deeper into uncertainty and variability, we refocus on the practical reasons why
they are important in LCA. Studies of many different product systems, not just those
mentioned above, have led to inconclusive results. In other cases results that should have
been deemed inconclusive have been touted as showing a significant difference. In a field
that seeks to perform accounting of impacts, inconclusive results can be seen as a failure of a
method. Put bluntly, the fact that the answer presented in LCA studies continues to be "it
depends" in so many cases has led to observations that the domain of LCA is either unable
to answer significant questions, or more seriously, that when faced with significant questions,
the methods are not strong enough to help support these important decisions.
But even the question of "paper vs. plastic", important from a scale perspective given the
massive quantities of each material used in the technosphere, pales in comparison to some
of the more important and timely questions on which LCA has been asked to offer advice.
These questions have been related to biofuels (as in "gasoline
versus ethanol"), hybrid-electric vehicles (as compared to internal combustion engine
vehicles), and others whose motivations lie beyond merely the environmental issues. These
latter examples are policy questions for which society needs answers so as to more effectively
decide how to allocate resources to incentivize investments that we believe will have far-reaching benefits. At these levels, the importance is far greater than just promoting an eco-friendly drinking cup. These are the "decisions that matter" implied in the title of this book.
In reality, it is rarely a failure of the LCA Standard when studies are unable to produce
conclusive answers. The underlying failure is more often a lack of substantive and
quantitative attention by the study authors to the details of the data and methods. The
main goal of this chapter is to better understand how we can leverage existing practices and
methods to confidently inform hard decisions: the so-called decisions that matter. We seek
more robust methods with which we can feel comfortable with our stated conclusions about
the performance of a product system, especially when comparing it to other systems. We
seek methods that show more specifically what our conclusions might depend on. Doing so
will require that we revisit our introductory mentions of uncertainty and variability from
earlier in the book, and also look at the wealth of available data so that our results can be
informed by 'all of the data' rather than just data from a single known source chosen for
the study. In the remainder of this chapter, we will simply refer to 'uncertainty' as it
pertains to both uncertainty and variability, and will explicitly call out issues related only to
variability. Note that in many domains, uncertainties are also referred to as sources of error.
Sadly, in the field of LCA there are many practitioners who actively or passively ignore the
effects of uncertainty or variability in their studies. They treat all model inputs as single
values and generate only a single result. The prospect of uncertainty or variability is lost in
their model, and those effects are then typically lost on the reader of the study.

How can we support a big decision (e.g., paper vs. plastic?) if there is much uncertainty in
the data but we have completely ignored it? We are likely to end up supporting poor
decisions if we do so.

Measurement vs. Accounting


One of the challenges in performing the accounting task of an LCI is that the input and
output flows for a product system cannot be independently measured in the same way as
scientists are accustomed to in other fields.
Before discussing the specific underlying issues relevant to uncertainty and variability in
LCA, let's consider how scientific methods are used in other domains to produce useful
quantitative results that support analyses and comparisons.

Consider the case of XX, where scientific instrumentation is available to measure the XX by
using a machine called an X. This machine leverages the known science of X to
quantitatively measure X, producing a value with several significant figures and an
uncertainty range given in the machine's technical specifications. When considering the
overall amount of X in the product, a standard could be developed for repeatedly testing
the X in the X. Each measurement would represent an independent attempt, and the test
procedure could dictate that the test result is the average of N repeated measurements. If
we wanted to compare the X in multiple products, we might use the test procedure and
compare the scientifically and quantitatively derived averages to decide which one was best.
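The repeated-measurement logic of such a test procedure can be sketched generically. The readings below are invented placeholders, since the actual instrument in the example is unspecified; the point is only that a measurement-based result carries its own quantified spread.

```python
# A measurement standard: report the average of N independent readings,
# along with its spread, as an instrument-based test procedure might.
from statistics import mean, stdev

readings = [10.2, 9.8, 10.1, 10.0, 9.9]  # hypothetical instrument readings

n = len(readings)
avg = mean(readings)
spread = stdev(readings)        # sample standard deviation of the readings
std_error = spread / n ** 0.5   # uncertainty of the reported average

print(f"result: {avg:.2f} +/- {std_error:.2f} (N={n})")  # result: 10.00 +/- 0.07 (N=5)
```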
In LCA, ideal measurement devices do not exist - in many cases, primary data for
key underlying processes do not even exist. Our own internal goals quickly revert to
merely producing the best possible study given the realities of data availability.

In LCA, an array of techniques may be used to create data used in databases or modules.
Transportation process data may use no directly measured data. For example, the data for a
truck transportation process may use similar methods as used in the hypothetical fruit
delivery truck in Chapter 6. That example used average load and fuel economy assumptions
to derive an estimate of the fuel needed to deliver a certain amount of product over a certain
distance. The result was an estimate of diesel use (xx) per ton-km. By applying an emissions
factor of approximately 20 pounds of CO2 per gallon, an estimate of CO2 emissions (xx) per
ton-km was derived. When looking at the metadata for process data, it is often difficult to
tell whether any input or output flows were directly measured rather than estimated, but if
the sources provided are all reports, then typically the values were estimated using methods
as described in Chapter 2, as shown above for trucks. For the sake of typical LCI studies, such
effort is sufficient. However, it is not the same as attaching a measuring device to a truck's
fuel tank to see the exact amount of diesel used, or measuring the emissions rate through the
truck's tailpipe. This inevitably means that all LCA data, especially data not produced via
measurements, has an appreciable uncertainty factor that should be considered when looking
at the results of models. Given that the complexity of product systems in LCA could
include tens, hundreds, or more processes, there could be substantial uncertainty associated
with the final result. This is a much different modeling outcome than might be typical in
environmental studies based on measurement.
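The estimation style behind such transportation process data can be sketched as follows. Only the 20 lb CO2/gallon emissions factor comes from the text; the fuel economy and payload figures are assumptions of our own, standing in for the elided values, and changing them directly changes the "data" a module would report.

```python
# Estimating truck transport flows per ton-km with no direct measurement,
# in the style of the Chapter 6 delivery truck example.
# ASSUMED inputs (illustrative only, not the book's actual values):
fuel_economy_km_per_gal = 10.0   # assumed loaded-truck fuel economy
payload_tons = 10.0              # assumed average payload
CO2_LB_PER_GALLON = 20.0         # approximate diesel factor cited in the text
LB_TO_KG = 0.4536

# Allocate fuel use over the service delivered (ton-km), then apply the factor.
gallons_per_ton_km = 1 / (fuel_economy_km_per_gal * payload_tons)
co2_kg_per_ton_km = gallons_per_ton_km * CO2_LB_PER_GALLON * LB_TO_KG

print(f"diesel: {gallons_per_ton_km:.3f} gal/ton-km")
print(f"CO2:    {co2_kg_per_ton_km:.3f} kg/ton-km")
```

Every line of this sketch is an assumption or a derived value rather than a measurement, which is exactly why such process data carry appreciable uncertainty.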
Problems arising from such methods would not be about measurement; they would relate
to imperfectly applied assumptions or calculations in the method used to create the data
module values.
Before developing the uncertain nature of LCA further, consider the emerging development
of a breakthrough product that rapidly provides measurements relevant to LCA
practitioners.


Emerging Development: The Carbon Calorimeter


Imagine a scientific device, a "carbon calorimeter", that can measure the embodied fossil
CO2 emissions of a manufactured product to four significant figures. The user opens the lid
of this device and puts in a computer, or a smartphone, and after several minutes it returns a
precise measure of the fossil CO2 needed to produce all of its subcomponents, raw material
and energy inputs, and the transportation of all of these components through the global
supply chain until received by the customer. Imagine that, as additional value-added
features, it could quantify specific greenhouse gases emitted (e.g., methane and non-fossil
CO2) and could measure emissions of the use phase and disposal. This would be an
amazing device, and even with some modest measurement error rates, it would render the
LCA Standard and all of its uncertainties obsolete.
Of course, you probably realize that such a device
does not exist, and further, is impossible to create. It
is impossible to measure embodied CO2, as there is
insufficient residual carbon left in the product to link
that to the carbon needed to make it. Likewise it is
not possible to know a product's journey through the global supply chain by analyzing just
the final product in its current location.
Inevitably, the challenge faced in the LCA world is that various stakeholders assume that
such a device not only exists (or at least, the underlying science needed to create it), but can
be regularly used to inform a range of questions. Some of these stakeholders think LCA is
such a device. In the absence of a calorimeter, we instead use LCA to 'measure' values like
embodied fossil CO2 in a product. Of course, our methods are far more primitive than what
the idealized calorimeter might do.
As such, uninformed stakeholders and critics of LCA have unrealistically high expectations
of the scientific and quantitative results of our studies: they expect perfect numbers with
little or no uncertainty. Every time an LCA is done that does not sufficiently address issues
of uncertainty and variability and does not report such factors, for example by ignoring
uncertainty and providing only point-estimate results, a disservice is done to the LCA world,
and opportunities are lost to educate the various stakeholders about the practical reality of
LCA specifically and environmental systems modeling in general.


This "lack of true measurement tools" and the thought example above are important aspects
to consider regarding how rigorously results can be compared across multiple product
systems, and how an audience may interpret your results.
The calorimeter example hopefully inspires reflection on the feasibility of meeting goals of
an LCA, as well as highlighting the inevitable limitations of such a Standard. Results
comparable to those available from measurement systems are impossible. An appropriate
second-best goal is to seek results that are robust enough to overcome known sources of
uncertainty and variability. That is also the goal of this chapter.
Now that a distinction has been drawn between measurement methods and approaches, and
accounting methods, such as LCA, the next section discusses the topics of uncertainty and
variability to be addressed in this context.

Types of Uncertainty and Variability Relevant to LCA


Prior chapters have in several places already introduced the concepts of uncertainty and
variability and also discussed examples where uncertainty and variability issues affect results,
whether through issues associated with LCA data or models. These prior discussions were
intentionally terse, seeking only to introduce and motivate important connections when
discussing other concepts. In this section, uncertainty and variability as related to data and
modeling are discussed in more depth.
In Chapter 2, variability was defined as related to diversity and heterogeneity of systems, and
uncertainty as resulting from a lack of information or an inability to measure. But having
seen examples of data and methods used in LCA, we can more specifically define these
terms in the context of LCA.
Uncertainty in LCA occurs as a result of using data or methods that imperfectly capture
the effects in the product system.
Variability in LCA occurs as a result of diversity or heterogeneity in the product system
that prevents the data or methods used from producing a single consistent result.
As noted in Chapter 2, uncertainty can generally be reduced by performing additional work
or research, while variability cannot be similarly reduced. These statements remain true in
the context of LCA, but we should seek ways to either manage or represent uncertainty and
variability in our work.


Figure 11-1 shows a general representation of a model, where data are the inputs into
methods, and the combination of data and methods produce results. All three of these
components can be uncertain or variable, and the forms of uncertainty can be similar across
the three. Heijungs and Huijbregts (2004), Williams, et al (2009) and Lloyd and Ries (2007)
provide excellent descriptions of the various uncertainties relevant to LCA. From these
sources the following generalizations are made in the remainder of this section. Uncertainty-related
issues are always relevant in LCA studies, but are perhaps most important when building
comparative models (especially those that will lead to comparative assertions). The
discussion begins with uncertainty related to data.

Figure 11-1: General Model Flow Diagram

Data Uncertainty
We start with several general definitions that refer largely to data uncertainty.
Measurement uncertainty generally refers to the case where a 'ground truth' or perfect
measurement is possible using a particular technology, and measurement using an alternative
technology will lead to differing degrees of imperfect results. This is analogous to the
graduated cylinder example in Chapter 2: if it were possible to produce a cylinder with
finer gradations on it, we would expect to be able to produce measurements that were less
uncertain. In LCA terms, the 'measurements' in question are the flows reported in inventory
data: emissions or releases, and quantities of energy and resource use. The problems may be
more than just determining the appropriate number of significant figures to report. Even
though it was noted above that LCA data may not come from measured sources,
measurement problems in the underlying source data can still have an effect on LCA results.


Parameter uncertainty exists when the parameters used in a model are uncertain. Typically
all parameters in a model have some degree of uncertainty, except for physical constants. In
LCA, parameters include quantities such as the flow coefficients in unit process data,
characterization factors, and prices, all of which carry some uncertainty.
Beyond these two underlying definitional relations about uncertainty, more specific types of
uncertainty are relevant in LCA, as summarized in Figure 11-2. Each of these types are
discussed in more detail below.

| Uncertainty Type | Brief Description |
| Data | Due to errors or imperfections in model inputs |
| Cutoff | Due to choices in modeled product system boundaries |
| Aggregation | Due to similar higher- or lower-level process data being used as a proxy for the desired process |
| Geographical | Due to variations in where processes occur as related to (potentially uncertain) data available and used to model the processes |
| Temporal | Due to technological progress not being fully able to be represented in (potentially old) data |

Figure 11-2: Categorization of Sources of Uncertainty in LCA (Modified from Williams et al 2009)

Although Figure 11-2 refers to process data, these categories do not apply only to
process-based LCA. Unfortunately, many of these types of errors are inherent in any LCA
study, whether process- or IO-based. The remainder of this section describes the
uncertainty types shown in Figure 11-2.
More on Sources of Data Uncertainty Specific to LCA
Survey Errors: Separate from measurement uncertainty, uncertainties in basic source data can
result from sampling and reporting errors. Data sources used in government reports (as
inputs into process data modules or IO models) often come from surveys of firms or
individual facilities. Surveys are not sent to (and responses are not required of) all relevant
firms or facilities in an industry; statistical sampling methods are used instead. These sampling
methods may lead to unrepresentative samples of facilities. Survey questions can be
misinterpreted, and data requested can be incorrectly provided. Minimizing survey errors
depends upon the actions of the firms surveyed and the data compilers in the government,
so users cannot reduce this uncertainty.
Incomplete and missing data: The databases used as sources are limited by their accuracy and
completeness of coverage (aside from survey errors). For example, US EPA's Toxic Release
Inventory is not collected for some industrial sectors or for plants below a specified size or
threshold level of releases. As a result, estimates of toxic emissions tend to be
underestimated. The largest problem with missing data comes from data that are not
collected.
Incomplete and missing data certainly contribute to uncertainty in results, but reducing this
uncertainty requires considerable effort.

Price uncertainty: In IO-based models, an additional source of data uncertainty is price. The
models work from sector-average prices per unit of output, which may differ substantially
from the price of the individual product being modeled.

Cutoff Uncertainty
The process model approach is particularly vulnerable to missing data resulting from choices
of analytical boundaries, sometimes called the truncation error. Lenzen (2000) reports that
truncation errors on the boundaries will vary with the type of product or process considered,
but can be on the order of 50%. To make process-oriented LCA possible at all, truncation of
some sort is essential.
Cutoff uncertainty is also linked to goal and scope definition: uncertainty can, to some
extent, be defined away by narrowing the goal and scope. Product category rules (PCRs)
play this role; if the goal and scope follow a PCR that fixes the system boundary and
allocation schemes, what remains is largely parameter uncertainty.

Aggregation Uncertainty
Aggregation errors arise primarily in IO-based models and occur because of heterogeneous
producers and products within any one input-output sector.
Aggregation: Even the nearly 500 sectors do not give us detailed information on particular
products or processes. For example, we might like data on lead-acid or nickel-metal hydride
batteries, but have to be content with a single rechargeable battery sector.

For example, the battery sector contains tiny hearing aid batteries as well as massive ones
used to protect against electricity blackouts in telephone exchanges. $1 million spent on
hearing aid batteries will use quite different materials and create different environmental
discharges than $1 million spent on huge batteries. But the EIO-LCA model assumes that
the products within a sector are uniform. Some sectors group a large number of products
together, such as all industrial inorganic and organic chemicals in one sector. Again, the
environmental impact of producing different types of chemicals may vary.
The best means to compensate for these types of uncertainty is to use detailed information
about particular products. Formally, this can be accomplished with the hybrid approach, as
described in Chapter 2. Less formally, the environmental impacts or input requirements
calculated by the EIO-LCA model may be adjusted to reflect actual information about
specific products.

Geographic Uncertainty

Imports: EIO-LCA represents typical U.S. production within each sector, even though some
products and services might be produced outside the United States and imported.
Leather shoes imported from China were probably produced with different processes,
chemicals, and environmental discharges than leather shoes produced in the United States.
Lacking data on the production of goods in other countries, the Department of Commerce
assumes that the production is the same as in the United States. The magnitude of this
problem is diminished by the fact that many imports are actually produced by processes
comparable to those in the United States and with similar environmental discharge
standards.
As a general note, the uncertainty associated with imports is larger as the fraction of
imported goods increases within the economy generally or within a particular sector.
International trade as a percentage of Gross Domestic Product ranges from nearly 200%
(Singapore) to 2% (Russia) (Economist 1990). The U.S. ratio is 12%. Smaller countries tend to
have higher ratios. Sectors that have very little domestic production deserve particular
attention since the EIO-LCA tables may be inaccurate.

Temporal Issues


Old data: The latest economic input-output table developed by the Department of
Commerce may be up to seven years old. Similarly, there is a lag in receiving environmental
data. The lag matters little for some data, such as the economic input-output coefficients for
existing industries, but will be more important for other data, such as air emissions from
vehicles. Fortunately, modern information technology is speeding up the process of
compiling input-output tables, so the lag in data is getting shorter, particularly for annual
models, which are based upon aggregate updates to the benchmark models estimated every
five years.
It takes considerable time to assemble the data required for national input-output tables and
the various environmental impacts. During this time, many elements of the economy may
change. There may be technological change in some sectors, as new techniques of
production are introduced; replacing human labor with robotics is a typical example. There
may be changes in the demand for certain sectors, resulting in capacity constraints and
changes in the production mix. New products may be invented and introduced. Relative
price changes may occur which lead manufacturers to change their production process.
LCA analysts compound the problems of change over time by extrapolating into the future.
LCA users are usually most interested in impacts in the future, after the introduction of new
designs and products.
While the national economy is dynamic, there is considerable consistency over time. For
example, an electricity generation plant lasts 30 to 50 years and variations from year to year
are small. Input-output coefficients are relatively stable over time. Carter (1970) calculated
the total intermediate output in the economy to satisfy 1961 final demand using five
different U.S. national input-output tables. The results varied by only 3.8% over the
preceding 22 years:

Using 1939 Coefficients: $324,288
Using 1947 Coefficients: $336,296
Using 1958 Coefficients: $336,941
Actual 1961 Output: $334,160


Environmental discharges change more rapidly. Table 4-1 shows several impacts for the
generation of $1 million of electricity as calculated by EIO-LCA using the 1992 and 1997
benchmark models. The economic transactions expected in the supply chain are
comparable for the two periods, with only a 3% difference, even though the sector
definitions changed over time. The 1997 benchmark separated out corporate headquarters
operations into its own economic sector; energy use and greenhouse gas emissions each
declined from 1992 to 1997 by about 30%, suggesting that the sector and its supply chain
became somewhat more efficient and cleaner over time. However, the generation of
hazardous wastes and the emission of toxic materials increased. Both of these effects may be
due to changes in reporting requirements rather than changes in the performance of
different sectors. In particular, the electric utility industry was not required to report toxic
releases in 1992. Users of the EIO-LCA model can use the different model dates to assess
these types of changes over time for sectors of interest.

[Table 4-1] INSERT from EIOLCA Book?

Model or Method Uncertainty


Most of the discussion in this chapter concerns parameter uncertainty, which is well handled
by Monte Carlo methods; model (or method) uncertainty is different in kind, since assessing
it requires constructing alternative model structures.
Beyond issues associated with uncertainty in data, LCA can also be affected by uncertainties
related to the methods used.
Input-output based models assume proportionality in production (e.g., the effects of
producing $1000 from a sector are exactly 10 times more than producing $100 from the
same sector), implying that there are no capacity constraints or scale economies. Likewise,
the environmental impact vectors use average impacts per dollar of output, even though the
incremental or marginal effects of a production change might be different. For example, a
new product might be produced in a plant with advanced pollution control equipment.
The linear assumptions of the input-output model may be thought of as providing a first-order,
linear approximation to more complicated nonlinear functions. If these underlying
nonlinear functions are relatively flat and continuous over the relevant range, then the
first-order approximation is relatively good. If the functions are changing rapidly, are
discontinuous, or are used over large changes, then the first-order approximations will be
relatively poor.
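The proportionality assumption can be made concrete with a small numerical sketch. The two-sector direct requirements matrix, emissions factors, and final demands below are hypothetical, not drawn from EIO-LCA:

```python
def leontief_impacts(a11, a12, a21, a22, r1, r2, y1, y2):
    """Total impacts for a two-sector input-output economy.
    Solves (I - A) x = y by Cramer's rule, then applies emissions
    factors r (kg CO2 per dollar of sector output)."""
    m11, m12 = 1.0 - a11, -a12
    m21, m22 = -a21, 1.0 - a22
    det = m11 * m22 - m12 * m21
    x1 = (y1 * m22 - m12 * y2) / det   # total output, sector 1
    x2 = (m11 * y2 - y1 * m21) / det   # total output, sector 2
    return r1 * x1 + r2 * x2

# Hypothetical coefficients: e.g., sector 1 uses $0.30 of sector 2 inputs
# per dollar of output, and emits 0.5 kg CO2 per dollar of output.
small = leontief_impacts(0.1, 0.3, 0.2, 0.1, 0.5, 1.2, 100.0, 0.0)
large = leontief_impacts(0.1, 0.3, 0.2, 0.1, 0.5, 1.2, 1000.0, 0.0)

# Strict proportionality: 10x the final demand gives exactly 10x the
# impact, with no capacity constraints or scale economies.
print(large / small)
```

Whatever coefficients are used, the ratio of results always equals the ratio of final demands; that rigidity is precisely the first-order approximation being described.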

Reducing errors of this kind requires more effort on the part of the LCA practitioner. A
simple approach is to alter the parameters of the EIO-LCA model to reflect the user's beliefs
about their actual values. Thus, estimates of marginal changes in environmental impact
vectors may be substituted for the average values provided in the standard EIO-LCA model.
This requires substitution of the average emissions in particular sectors with the marginal
emissions.
An analytically elegant approach to such updating is available through Bayes' Theorem, in
which posterior parameter estimates are obtained by combining the existing EIO-LCA
parameters with a user's beliefs about the actual parameter values (Morgan 1990).
Unfortunately the formal use of Bayes' Theorem requires information about the distribution
of the existing parameter estimates, which is not generally available from the underlying
sources (as described in Chapter 5).
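As a sketch of the idea: if both the existing parameter estimate and the user's belief could be expressed as normal distributions, the posterior follows from a standard precision-weighted update. All numbers below are hypothetical:

```python
def normal_update(prior_mean, prior_sd, belief_mean, belief_sd):
    """Posterior mean and standard deviation when combining two normal
    estimates via Bayes' Theorem (precision = 1/variance weighting)."""
    w_prior = 1.0 / prior_sd ** 2
    w_belief = 1.0 / belief_sd ** 2
    post_mean = (w_prior * prior_mean + w_belief * belief_mean) / (w_prior + w_belief)
    post_sd = (1.0 / (w_prior + w_belief)) ** 0.5
    return post_mean, post_sd

# Existing model parameter: 0.80 kg CO2/$ with sd 0.20 (hypothetical);
# user's belief from plant-level data: 0.60 kg CO2/$ with sd 0.10.
mean, sd = normal_update(0.80, 0.20, 0.60, 0.10)
print(mean, sd)
```

The posterior mean (0.64) is pulled toward the more precise belief, and the posterior standard deviation is smaller than either input. As noted above, the practical obstacle is that the distribution of the existing parameter estimate is rarely published.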
A second approach is to combine detailed analysis of processes not expected to follow the
EIO-LCA model format with EIO-LCA for other inputs. For example, suppose a new
product will be assembled in a new manufacturing plant. The expected air emissions from a
new manufacturing plant may be used to replace the average emissions for the entire sector.
This is more easily accomplished using a hybrid approach in which the manufacturing, use,
and disposal of the new product is considered on its own, and the EIO-LCA model is used
to estimate the environmental impacts of the inputs to manufacturing, use, and disposal.
Relevant methods are described in Chapter 2.

Result Uncertainty
The final main area where uncertainty affects LCA studies is through the results generated
(the interaction of data and method). Estimates of effects, or results, developed in LCA
studies inevitably have considerable uncertainty.
Computers know nothing about uncertainty and will print estimates to as many significant
figures as desired. We generally limit reports of impacts to two significant digits, and rarely
believe that the estimates are this accurate.
As one indication of the degree of uncertainty, Lenzen (2000) estimates that the average total
relative standard error of input-output coefficients is about 85%. However, because
numerous individual errors in input-output coefficients cancel out in the process of
calculating economic requirements and environmental impacts, the overall relative standard
errors of economic requirements are only about 10-20%.


Fortunately, deciding which of two products or processes is more environmentally desirable
will be less uncertain than characterizing the impact of any single product. Because two
competing alternatives share many characteristics, their associated uncertainty is usually
positively correlated (varying together), so that the impact differences between the two
alternatives will be known with greater certainty than either impact separately. Suppose one
product uses slightly less electricity than another. There is considerable uncertainty in the
environmental impact of producing the electricity for either product. However, since the
electricity use is less for the more energy efficient product, we can more confidently predict
that the environmental impact due to electricity use is better as well.
As a numerical example, Cano-Ruiz (2000) compared the environmental impact of
chlor-alkali processes using a mercury cell, a membrane cell, and a diaphragm cell. If correlations
among errors were ignored, then the difference in estimated impacts for the three methods
was not statistically significant. However, if the positive correlations were considered, two
alternatives were still similar in performance, but the mercury cell alternative had only an
8% chance of being better than the others.
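A small Monte Carlo sketch shows the effect. The electricity quantities and the lognormal emission factor below are hypothetical: when both products draw the same uncertain factor, the ranking is certain; when the correlation is ignored, it is not.

```python
import random

random.seed(42)

KWH_A, KWH_B = 10.0, 9.0   # electricity use of two products (hypothetical)

def grid_factor():
    # Uncertain grid emission factor, kg CO2 per kWh (assumed lognormal)
    return random.lognormvariate(-0.7, 0.4)

trials = 10_000
b_wins_shared = 0       # both products use the SAME draw (correlated)
b_wins_independent = 0  # each product gets its own draw (correlation ignored)
for _ in range(trials):
    f = grid_factor()
    if KWH_B * f < KWH_A * f:
        b_wins_shared += 1
    if KWH_B * grid_factor() < KWH_A * grid_factor():
        b_wins_independent += 1

# With the shared factor, B is lower in every trial; ignoring the
# correlation, the ranking flips in a substantial fraction of trials.
print(b_wins_shared / trials, b_wins_independent / trials)
```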
As LCA study results are typically the most featured component of a study, with prominent
graphics and references in the summary, much of the focus on managing uncertainty in LCA
in this chapter is centered around the effects of uncertainty on study results, and how to
manage them.
Now that the types of uncertainty have been categorized, qualitative and quantitative
methods to manage them are discussed.

Methods to Address Uncertainty and Variability


As introduced at the beginning of the chapter, our primary reason for caring about the
implications of uncertainty is because we need to be aware of whether the uncertainties
affect our results, and more specifically, the conclusions written as interpretation of the
results. Two prominent motivations for LCA studies are 1) studies that seek to identify 'hot
spots' of a single product system in support of improvements, and 2) studies that seek to
compare alternate technologies, processes, or approaches. These two prominent types of
studies are also critically connected to consideration of uncertainty. We would want to
ensure our interpretation of hot spots is focused on the appropriate components, or that we
are able to confidently assess which of multiple systems is expected to have the lowest
impact. If uncertainty is substantial, a particular process could be wrongly tagged a hot spot
(or not tagged when it should be). Likewise, in comparative studies we are concerned about
the robustness of the model or the level of confidence possible for a conclusion of whether
A can be said to be better than B in the face of the uncertainty.

There are a variety of alternative approaches, ranging from qualitative to quantitative, that
can be used in studies.

Qualitative Methods
As defined in Chapter 2, qualitative methods are those that qualify, rather than quantify,
results. They do not generally use numerical values as part of the uncertainty analysis, but
instead focus on discussion and text-based representations of uncertainty.

Discussion of Sources of Uncertainty Specific to a Study


An initial example of a qualitative assessment of uncertainty in an LCA study would be to
describe in words the expected effect of the various kinds of uncertainty (e.g., those in
Figure 11-2). This could include separate discussions pertaining to data, geographical, and
other uncertainties. A specific description might build on the concept of data quality
indicators introduced earlier.
While such a description cannot give specific quantitative support to uncertainty assessment,
it is useful in ensuring that the reader is aware that the study was done with knowledge of the
stated uncertainties (as opposed to being ignorant of them).
As mentioned earlier, any study should discuss data quality issues, including aspects
of uncertainty and variability of data.


Figure 11-3: Comparison of LCI Effects for Two Options

Semi-quantitative methods
As listed here, semi-quantitative methods are those that use numerical values in support of
uncertainty assessment, but do not incorporate the quantified values into the LCA modeling.

Pedigree matrix
One semi-quantitative development over time has been the pedigree matrix approach, in
which data are scored against qualitative criteria and the scores are converted into
quantitative uncertainty factors. We caution against relying on this approach uncritically:
the conversion of qualitative scores into numerical uncertainty values is largely arbitrary.

Significance Heuristics
Many studies create internally consistent rules that define "significance" in
the context of comparing alternatives. These rules of thumb are rooted in the types of
significance testing done for statistical analyses, but such tests are generally not usable given
the small number of data points used in these studies. Often-used rules will suggest that the
uncertainty of values such as energy and carbon emissions is at least 20%, with even higher
percentages for other metrics. When implemented, that means our values for Alternatives A
and B would need to be at least 20% different for one to consider the difference as being
meaningful or significant. The comparative results would be "inconclusive" for energy use
under such a study's rules of thumb.
In the absence of study rules of thumb for significance, what would we recommend?
Returning to our discussion above an LCA practitioner should seek to minimize the use of
significant digits. We generally recommend reporting no more than 3 digits (and, ideally,
only 2 given the potential for a 20% consideration of uncertainty). In the example of the
previous paragraph, that would mean comparing two alternatives with identical energy use,
i.e., 7.6 kWh. The comparison would thus have the appropriate outcome that the
alternatives are equivalent.



While on the subject of assessing comparative differences, it is becoming common for
practitioners in LCA to use a "25% rule" when testing for significant differences. The 25%
rule means that the difference between two LCI results, such as for two competing products,
must be more than 25% different for the results to be deemed significantly different, and
thus for one to be declared as lower than the other. While there is not a large quantitative
framework behind the choice of 25% specifically, this heuristic is common because it
roughly expresses the fact that all data used in such studies are inherently uncertain; by
requiring 25% differences, relatively small differences are deemed too small to be noted in
study conclusions.
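A minimum-difference heuristic like the 25% rule is simple to encode; the threshold below is a study-specific assumption, not a universal constant:

```python
def compare_with_threshold(result_a, result_b, threshold=0.25):
    """Declare a winner only if the relative difference between two LCI
    results exceeds the threshold (e.g., the '25% rule'); otherwise the
    comparison is inconclusive."""
    base = max(abs(result_a), abs(result_b))
    if base == 0 or abs(result_a - result_b) / base <= threshold:
        return "inconclusive"
    return "A lower" if result_a < result_b else "B lower"

print(compare_with_threshold(7.6, 8.0))   # 5% apart: inconclusive
print(compare_with_threshold(5.0, 8.0))   # 37.5% apart: "A lower"
```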



Now that we have come this far in our discussions about LCA, hopefully this simple
less-than test demonstrates how short-sighted such a comparison criterion is. Our first
suggestion for an improved metric might be to require that any pronouncement of improved
performance first pass a simple difference threshold. Many LCA consulting firms, who
perform LCA work under contract for sponsors, now have a series of such thresholds
pre-set and documented, so that comparative results must exceed the relevant threshold to
be considered sufficiently different to merit a distinction of one being lower than the other.
For example, a study might document a summary of threshold differences, meaning that the
comparative emissions between A and B must be more than 20% different for the study to
conclude that one is better than the other. Any case where the difference is less than 20%
yields an inconclusive result, which translates into "the uncertainty or variability in the
underlying model or data is too great to permit a clear conclusion on which is better".
Going into a study armed with such a list is a useful practice, and is in line with the goals of
this chapter: making sure that we are sufficiently aware of the uncertainties in our work
when preparing our results.
Conceptually, we would read such a graph as follows: if we were to put +/- 20% error bars
on each of our results, we would require that the bars not overlap. If the top of one error
bar does not touch the bottom of another bar even with uncertainties considered, we would
feel comfortable that one was better than the other.

This minimum-difference idea is becoming mainstream in LCA practice.

Alternative Model Approaches


Another useful approach to assessing uncertainty in LCA studies is to compare alternative
approaches and assumptions. For example, in estimating the impact of nanotechnology on
automobile catalysts (Chapter 8), we compare EIO-LCA results to those available from a
commercial, process-based LCA tool, GaBi. The results are fairly close for each scenario
analyzed, increasing our confidence in the conclusion.
But such agreement alone is still not sufficiently robust to be the quantitative support we
need when trying to use LCA for our goal: supporting big decisions (or decisions that matter).
While the qualitative approaches above can help convey considerations of uncertainty,
quantitative methods are needed to fully represent the uncertainty in studies.

Quantitative Methods to Address Uncertainty and Variability


In each of the quantitative approaches discussed in this section, attention is paid to
developing visual aids that serve to express the quantified uncertainty in inputs, model, or
results, so that the audience is better able to quickly appreciate it.
Throughout, keep the qualitative goal in view: how robust is the model, and does the answer change?

Figure 11-4: General Flow for Model with Parameter Ranges


Ranges

Figure 11-4 shows the general flow for a model with ranges, which is otherwise similar to
Figure 11-1 except that the inputs and results are expressed with ranges (here shown with a
box-whisker plot like representation).
When introduced in Chapter 2, ranges were suggested as a simple way of representing
multiple estimates or sources instead of reporting only a single value. Ranges can also be
used in the LCA context to express values from different sources or data modules. In cases
where multiple data points or values exist, ranges can be useful. They can pertain to just
inputs, or to both inputs and outputs of LCA models.
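For instance, if several secondary sources report values for the same flow (the numbers below are hypothetical stand-ins for real source data), the range and summary statistics can be reported alongside any single point estimate:

```python
import statistics

# Hypothetical emission factors (kg CO2 per kWh) for the same process,
# gathered from several secondary data sources.
source_values = [0.94, 0.99, 1.02, 1.05, 0.97]

low, high = min(source_values), max(source_values)
mean = statistics.mean(source_values)
sd = statistics.stdev(source_values)

print(f"range {low}-{high}, mean {mean:.3f}, sd {sd:.3f} kg CO2/kWh")
```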

Figure 11-5: Comparison of Results with Ranges


Process Flow Diagram-based Example with Ranges

As noted in Chapter 2, ideally you would identify multiple data sources
(i.e., multiple LCI data modules) for a given task. This is especially useful when using
secondary data because you are not collecting data from your own controlled processes.
Since the data is secondary, it is likely that there are slight differences in assumptions or
boundaries than what you would have used if collecting primary data. By using multiple

sources, and finding averages and/or standard deviations, you could build a more robust
quantitative model of the LCI results, which is the subject of this chapter.
Considering again the initial process flow diagram example (as shown in Figure 5-5), the
study boundary included mining coal, shipping it by rail, and burning it at a coal-fired power
plant. That initial example provided only single sources from the NREL US LCI database,
and since the GWP characterization factors for greenhouse gases had not been discussed,
methane emissions were excluded. As a first demonstration of using ranges in LCA, Figure
11-6 shows CO2e values from the NREL US LCI database including only fossil CO2 as well
as including the previously excluded methane emissions using the IPCC 2007 method (with
a 100-year CO2e characterization factor of 25 for methane). The columns show the LCI
factor for each process (the quantity of that process required per kWh of electricity) and
the resulting emissions per functional unit.
| Process (functional unit) | LCI factor | Fossil CO2e / funct unit | Fossil CO2 and Methane |
| Coal-fired electricity generation (kWh) | 1 kWh / kWh | 0.994 | 0.994 + 8.31e-06*25 = 0.994 |
| Coal mining (kg) | 0.442 kg coal / kWh | | 0.00399*25 = 0.1 |
| Rail transport (ton-km) | 0.461 ton-km / kWh | 0.0189 | 0.0189 + 9.05e-07*25 = 0.0189 |
| Total | | 1.003 | 1.04 |

Figure 11-6: CO2e per kWh for the Coal-Fired Electricity Example, With and Without Methane

Thus if our study scope were broadened to alternately include the effect of methane
emissions, our model would estimate CO2e emissions using the IPCC 2007 method of
(1.003-1.04) kg CO2e / kWh. This is not a very dramatic example of how ranges could be
used, both because the difference is only 4% and because we have just expanded the scope
to include more CO2e emissions, but it demonstrates the mechanics of carrying a range
through a model.
A more complex example could consider multiple data sources for the three main processes,
as summarized in Figure 11-7. The benefit of using ranges in this way is that the (relatively
basic) process flow diagram model can be used to structure the comparison of sources. The
individual values shown are real, but not attributed directly to specific studies.


Process (CO2e per functional unit)           | Flow factor         | Fossil CO2 and methane           | Range
Coal-fired electricity generation (kg / kWh) | 1 kWh / kWh         | 0.994 + 8.31e-06 × 25 = 0.994    |
Coal mining (kg / kg)                        | 0.442 kg coal / kWh | 0.00399 × 25 = 0.1               |
Rail transport (kg / ton-km)                 | 0.461 ton-km / kWh  | 0.0189 + 9.05e-07 × 25 = 0.0189  |
Total                                        |                     | 1.04                             |

Figure 11-7: CO2e Results Structured for Multiple Data Sources per Process

Process Matrix-based Example with Ranges

While the examples above demonstrate how to use ranges, they have been limited to
process flow diagrams, as opposed to the process matrix approaches shown in Chapter 9 to
be more comprehensive.

IO-LCA-based Example with Ranges

For more advanced perturbations in either process- or IO-based matrix approaches,
see the Advanced Material for Chapter 11, Section 1.

Visualizing Ranges
The results of the range-based assessments above were presented in tables. The appropriate
graphical representation of ranges is a chart with error bars, which show the low and high
values around a central estimate. Reporting choices, such as the number of significant
figures and the use of error bars, affect how readers interpret the precision of the results.
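As a sketch (using hypothetical low/nominal/high values), the asymmetric error-bar lengths for such a chart can be computed from min-nominal-max triples:

```python
# Hypothetical range results (kg CO2e per functional unit): (low, nominal, high)
results = {
    "Product A": (0.95, 1.00, 1.10),
    "Product B": (0.90, 1.05, 1.30),
}

nominal = [mid for (_, mid, _) in results.values()]
lower = [mid - lo for (lo, mid, _) in results.values()]  # bar length below the point
upper = [hi - mid for (_, mid, hi) in results.values()]  # bar length above the point

for name, y, lo, hi in zip(results, nominal, lower, upper):
    print(f"{name}: {y:.2f} (-{lo:.2f} / +{hi:.2f})")
```

The `lower` and `upper` arrays are exactly the asymmetric `yerr=[lower, upper]` form accepted by plotting libraries such as matplotlib's `Axes.errorbar`.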

Sensitivity Analysis

As mentioned earlier in the book, sensitivity analysis is a means of assessing the effect on
model outputs (results) from a percentage change in a single input variable. Chapter X
noted that sensitivity analysis is explicitly called for in the LCA Standard as a means of
considering uncertainty.
Beyond LCA models, sensitivity analysis is generally used (and automated by software) to
check the sensitivity of model outputs to each of the modeled input variables.
Software such as the DecisionTools Suite has Microsoft Excel plug-ins to automate sensitivity
analysis in spreadsheet-based models. The key feature of these tools is that they vary one
input at a time, holding all other inputs at their base values, and record the resulting
change in outputs.

Beyond just input parameters, sensitivity analysis can also be done for different assumptions.
These assumptions could relate to choices amongst alternative allocation or system
expansion schemes, assumptions about the use or availability of renewable electricity in a
process, etc.
If the sensitivity of all inputs or assumptions cannot be assessed in an automated fashion,
then various key parameters should be selected from the study and tested for their
quantitative effect.

As an example of input variability, consider the grid electricity processes in the US LCI
database. The CO2 emission factor for 'Electricity, eastern, at grid' is about 0.2 kg per MJ,
with a similar value for Texas, while the western grid factor is about 0.15 kg per MJ. A
study built on a single grid factor could test the sensitivity of its results by substituting
these alternative regional values; similar checks could be made for transport processes
(e.g., train versus barge) or for the uncertainty ranges of GWP characterization factors
themselves.
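A one-at-a-time sensitivity check of this kind can be sketched in a few lines. The electricity use and non-electricity emissions below are hypothetical placeholders; the grid factor is the approximate eastern US value quoted above.

```python
# Simple LCA model: impact = electricity use * grid factor + other life cycle emissions
base = {
    "electricity_MJ": 100.0,     # electricity consumed per functional unit (assumed)
    "grid_kgCO2_per_MJ": 0.20,   # approximate eastern US grid factor
    "other_kgCO2": 5.0,          # all non-electricity emissions (assumed)
}

def impact(params):
    return params["electricity_MJ"] * params["grid_kgCO2_per_MJ"] + params["other_kgCO2"]

base_result = impact(base)
for name in base:
    for change in (-0.10, 0.10):          # vary one input at a time by +/- 10%
        p = dict(base)                    # all other inputs stay at base values
        p[name] = base[name] * (1 + change)
        delta = (impact(p) - base_result) / base_result
        print(f"{name} {change:+.0%}: output changes {delta:+.1%}")
```

Here a 10% change in the grid factor moves the result by 8%, while a 10% change in the non-electricity emissions moves it by only 2%, identifying the grid factor as the more sensitive input.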

Probabilistic Methods and Simulation

Finally, we discuss the most complex approach to managing uncertainty: using data from
various sources to generate probability distributions for inputs, and then using spreadsheets
or other techniques to propagate these distributions through the model to generate results
that also have probability distributions.

If we are trying to convince a stakeholder that the results of our LCA are "good enough to
support their decision," then the final possible step is to represent the available data as
probability distributions instead of point estimates or ranges. By doing this, we explicitly
aim to create models whose output can support a probabilistic assessment of the magnitude
of a hot spot, or of the percentage likelihood that Product A has less impact than Product B.
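For example, suppose the CO2e results for two products are each represented by a probability distribution (the normal distributions below are hypothetical). The likelihood that Product A has less impact than Product B can then be estimated directly by sampling:

```python
import random

random.seed(1)

# Hypothetical output distributions (kg CO2e per functional unit)
N = 100_000
a = [random.gauss(10.0, 1.5) for _ in range(N)]  # Product A
b = [random.gauss(11.0, 2.0) for _ in range(N)]  # Product B

p_a_less = sum(x < y for x, y in zip(a, b)) / N
print(f"P(A < B) ~ {p_a_less:.2f}")
```

Here roughly two-thirds of the draws favor Product A; whether that constitutes a robust enough conclusion depends on the decision context.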
Probabilistic comparisons are closely related to classical hypothesis testing, including the
risks of Type I errors (concluding a difference exists when it does not) and Type II errors
(failing to detect a real difference). Two situations illustrate when probabilistic treatment
matters:
1. When the uncertainty is very large and spans an order of magnitude, as is common in
biofuel LCAs. Uncertainty becomes important when comparing LCA results to other
baselines, and you need a good understanding of what drives the underlying uncertainties
and variabilities, and whether they can be reduced either through technical improvements
or policy decisions.
2. When the uncertainty is not as significant, but the difference between life cycle
results and a baseline is much smaller, as in comparisons among fossil fuels. The
difference between CNG and gasoline life cycle emissions for transport is small on
average, about 5%, but this is not a robust result if the difference is not statistically
significant, so it is not a good basis for decisions.

The most commonly used approach is Monte Carlo simulation, which includes key
uncertainties in input parameters to the LCA model. This method can also be extended to
decision-support frameworks. For example, if an optimization model includes an
environmental life cycle metric as an objective or constraint, these uncertainties can be
incorporated to identify robust 'optimal' solutions wherever possible, i.e., solutions that
work across all or most of the scenarios that characterize uncertainty in the LCA 'space'.

Figure 11-8: General Diagram for Model with Probabilistic Inputs and Results

To numerically estimate the uncertainty of environmental impacts, Monte Carlo simulation is
usually employed. Several steps are involved:
1. The underlying distributions, correlations, and distribution parameters are estimated
for each input-output coefficient, environmental impact vector, and required sector output.
Correlations refer to the interaction of the uncertainty for the various coefficients.
2. Random draws are made for each of the coefficients in the EIO-LCA model.
3. The environmental impacts are calculated based on the random draws.
4. Steps 2 and 3 are repeated numerous times. Each repetition represents another
observation of a realized environmental impact. Eventually, the distribution of
environmental impacts can be reasonably characterized.
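These steps can be sketched for a tiny two-sector model. The coefficients, impact factors, and the assumption of uniform plus-or-minus 10% uncertainty on each direct requirements coefficient (with no correlations) are illustrative only:

```python
import random

random.seed(42)

# Step 1: nominal direct requirements A and impact factors R (illustrative values)
A_nom = [[0.10, 0.30],
         [0.20, 0.05]]
R = [0.5, 1.2]   # e.g., kg CO2e per dollar of sector output
y = [1.0, 0.0]   # final demand vector

def solve_2x2(A, y):
    """Solve x = (I - A)^-1 y for the 2x2 case by Cramer's rule."""
    m = [[1 - A[0][0], -A[0][1]], [-A[1][0], 1 - A[1][1]]]
    det = m[0][0] * m[1][1] - m[0][1] * m[1][0]
    x0 = (y[0] * m[1][1] - m[0][1] * y[1]) / det
    x1 = (m[0][0] * y[1] - y[0] * m[1][0]) / det
    return [x0, x1]

impacts = []
for _ in range(10_000):
    # Step 2: random draw for each coefficient (uniform within +/- 10%)
    A = [[a * random.uniform(0.9, 1.1) for a in row] for row in A_nom]
    # Step 3: impact for this realization
    x = solve_2x2(A, y)
    impacts.append(sum(r * xi for r, xi in zip(R, x)))

# Step 4: characterize the output distribution
impacts.sort()
mean = sum(impacts) / len(impacts)
print(f"mean = {mean:.3f}, 90% interval = ({impacts[500]:.3f}, {impacts[9500]:.3f})")
```

With this illustrative input uncertainty, the output distribution is narrow and centered near the deterministic value of about 0.9 kg CO2e.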

Steps 2-4 in this process rely heavily upon modern information technology, as they involve
considerable computation. Still, the computations are easily accomplished on modern
personal computers. Appendix IV illustrates some simulations of this type on a small model
for interested readers.
Step 1 is the difficulty in applying this method. We simply do not have good information
about distributions, correlations, and the associated parameters for input-output matrices or
environmental impact vectors. Over time, one hopes that research results will accumulate in
this area.
A simpler approach to the analysis of uncertainty in comparing two alternatives is to conduct
a sensitivity analysis; that is, to consider what level some parameter would have to attain in
order to change the preference for one alternative.
Probabilistic results can also be used to estimate the likelihood of drawing incorrect
conclusions about the sign (positive or negative) of results, or about comparisons between
alternatives.

Finally, the field appears to be moving toward more detailed models based on stock
datasets built up over time. These will reduce some sources of variability within LCA
results and facilitate the calculation of endpoint metrics that practitioners tend not to use
very often at present, e.g., DALYs (disability-adjusted life years).




Unfortunately, the software tools for LCA do not make it easy for students to practice
uncertainty assessment. The course license version of SimaPro does not include uncertainty
analysis features (although a "student research" version of the program does). OpenLCA
does include uncertainty assessment tools. The Advanced Material Section X shows
examples of these capabilities.


Deterministic and Probabilistic LCCA


Our examples so far, as well as many LCCAs (and LCAs, as we will see later), are
deterministic. That means they are based on single, fixed values of assumptions and
parameters; more importantly, it suggests that there is no chance of risk or uncertainty
that the result might be different. Of course, it is very rare that any big decision we might
want to make lacks risk or uncertainty. Probabilistic or stochastic models are built based
on some expected uncertainty, variability, or chance.
Let us first consider a hypothetical example of a deterministic LCCA as done in DOT
(2002). The example considers two project alternatives (A and B) over a 35-year timeline.
Included in the timeline are cost estimates for the life cycle stages of initial construction,
rehabilitation, and end of use. An important difference between the two alternatives is that
Alternative B has more work zones, which have a shorter duration but cause
inconvenience for users, leading to higher user costs as valued by their productive time lost.
Following the five-step method outlined above, DOT estimated agency and user costs for
each alternative.
Without discounting, we could scan the data and see that Alternative A has fewer periods of
disruption and fairly compact project costs in three time periods. Alternative B's cost
structure (for both agency and user costs) is distributed across the analysis period of 35
years. Given the time value of money, however, it is not obvious which might be preferred.
At a 4% rate, the discounting factors using Equation 3-1 for years 12, 20, 28, and 35 are
0.6246, 0.4564, 0.3335, and 0.2534, respectively. Thus, for Alternative A, the discounted life
cycle agency costs would be $31.9 million and user costs would be $22.8 million. For
Alternative B, they would be $28.3 million and $30.0 million, respectively. As DOT (2002)
noted in their analysis, "Alternative A has the lowest combined agency and user costs,
whereas Alternative B has the lowest initial construction and total agency costs. Based on
this information alone, the decision-maker could lean toward either Alternative A (based on
overall cost) or Alternative B (due to its lower initial and total agency costs). However, more
analysis might prove beneficial. For instance, Alternative B might be revised to see if user
costs could be reduced through improved traffic management during construction and
rehabilitation."
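The discount factors quoted above follow directly from Equation 3-1, the present-value factor 1/(1+i)^t; a quick check:

```python
def discount_factor(rate, year):
    """Present-value factor for a cost incurred in a given future year."""
    return 1.0 / (1.0 + rate) ** year

for year in (12, 20, 28, 35):
    print(year, round(discount_factor(0.04, year), 4))
# -> 0.6246, 0.4564, 0.3335, 0.2534 (matching the values above)
```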
For big decisions like that in the DOT example, one would want to consider the ranges of
uncertainty possible to ensure against a poor decision. Building on DOT's recommendation,
we could consider various values of users' time, the lengths of time of work zone closures,
etc. If we had ranges of plausible values instead of simple deterministic values, that too
could be useful. Construction costs and work zone closure times, for example, are rarely
much below estimates (due to contracting issues) but in large projects have the potential to
go significantly higher. Thus, an asymmetric range of input values may be relevant for a
model.
We could also use probability distributions to represent the various cost and other
assumptions in our models. By doing this, and using tools like Monte Carlo simulation, we
could create output distributions of expected life cycle cost for use in LCCA studies.
The use of such methods to aid in uncertainty assessment is discussed in the Advanced
Material at the end of this chapter.

Chapter Summary
The practice of Life Cycle Assessment inevitably involves considerable uncertainty. This
uncertainty can be reduced with careful analysis of the underlying data and production
processes. Fortunately, the relative desirability of design alternatives can be assessed with
greater confidence than the overall impact of a single alternative because of the typical
positive correlation between the impacts of alternatives. In general, we recommend that
users of the EIO-LCA model use no more than two significant digits in considering results
from the model.

As first introduced in Chapter 4, life cycle impact assessment (LCIA) is the final quantitative
phase of LCA. LCIA allows us to transform the basic inventory flows created during the
inventory phase of the LCA and to attempt to draw conclusions related to the expected
impacts of these flows in our product systems. While climate change and cumulative energy
demand tend to dominate LCA studies, various other impact categories of broad interest
have scientifically credible characterization models available for use. Despite the
availability of these credible models and tools, many LCA studies continue to focus just on
generating inventory results, or at most, use only climate and energy impact models.
Now that we have reviewed all of the important phases of LCA, in the next few chapters we
focus on ways in which we can create robust analyses that will serve our intended goals of
building quantitatively sound and rigorous methods.

References for this Chapter


Williams, Hawkins, et al (xxx) and Lloyd (xxxx)
Morgan and Henrion
Homework Questions for Chapter 11


Advanced Material for Chapter 11 Section 1


Uncertainty in Leontief Input-Output Equations: Some Numerical Examples
In this Advanced Material section, issues associated with the propagation of uncertainty
through matrix-based methods are demonstrated by perturbing values in an IO transactions
matrix. Such perturbations would be useful when considering the effect of structured
uncertainty ranges in a product system, to assess whether changes have significant 'ripple
through' effects. In general, this method shows that making small changes in these matrices
leads to varying levels of effects.
Tables of direct requirements and total requirements are often reported to six significant
figures. It is said that these tables allow one to calculate to a single dollar the effect on a
sector of a one million dollar demand. It should go without saying that we do not believe
that we have information that permits us to do this kind of arithmetic. Little advice is given
to the novice EIO analyst about how many significant figures merit attention. We have
advised our colleagues and students to be careful when going beyond two significant figures,
and in this book we restrict virtually all of our impact estimates to two significant digits. This
section explores some ways to investigate systematically the uncertainty in Leontief
input-output analysis.
We can judge the results of using the input-output estimates by considering the effects of
errors or uncertainties in the requirements matrix on the solution of the Leontief system.
Here, we will generally assume that we know the elements of the final demand vector Y
without error. Uncertainty in the values of the elements of the total output vector X will
result from uncertainty in the elements of the requirements matrix A. Uncertainties in X will
result from the propagation of errors through the nonlinear operation of calculating the
Leontief inverse matrix.
The Hawkins-Simon conditions require that all elements in the A matrix are nonnegative and
less than one, and at least one column sum in the A matrix must be less than 1. The
determinant of the [I - A] matrix must be greater than zero for a solution to exist. For
empirical requirements matrices, such as those reported by the Bureau of Economic Analysis
(BEA) for the United States, values of the determinant of [I - A] are very small (on the
order of 10^-12). Values of the determinants of the associated Leontief inverse matrices are
correspondingly greater than 1.
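These conditions are straightforward to check numerically. The sketch below uses an illustrative 2×2 matrix, where checking det(I - A) > 0 along with nonnegativity suffices; for larger matrices, the full Hawkins-Simon test examines all leading principal minors of [I - A].

```python
A = [[0.10, 0.30],
     [0.20, 0.05]]
n = len(A)

# All direct requirements nonnegative and less than one
assert all(0 <= A[i][j] < 1 for i in range(n) for j in range(n))

# At least one column sum below 1 (here, all of them are)
col_sums = [sum(A[i][j] for i in range(n)) for j in range(n)]
print("column sums:", col_sums)

# det(I - A) > 0 for the 2x2 case
det = (1 - A[0][0]) * (1 - A[1][1]) - A[0][1] * A[1][0]
print("det(I - A) =", det)
assert det > 0
```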
We will use the U.S. 1998 BEA 9×9 tables for the direct requirements matrix A and the
calculated Leontief inverse matrix to illustrate the meaning of the sector elements (see Table
IV-1). The two tables are given below. Of the 81 terms in A, five are zero. The numbers in
any column of the A matrix represent the amount, or direct requirement, that is purchased
in cents from the sector-named rows to produce one dollar of output from the sector-named
column. For example, for a one dollar output from the construction sector, 29.8 cents of
input is required from the manufacturing sector. The column sum for the construction
sector is 53.8 cents for all nine input sectors; this says that 1.00 - 0.538 = 0.462, or 46.2
cents, is the value added by the construction sector for one dollar of output. The values in
the [I - A]^-1 matrix are the total requirements, or the sum of the direct and the indirect
requirements. Hence, from [I - A]^-1 we see that a one dollar direct demand from the
construction sector requires a total of 52.4 cents to be purchased from the manufacturing
sector to cover both the direct and indirect requirements. The sum of the [I - A]^-1 values
in the construction column shows that one dollar of demand for construction results in $2.09
of economic transactions for the whole economy.
[Table IV-1] - INSERT
[Table IV-1] - INSERT

Deterministic Changes in the [I - A] and [I - A]^-1 Matrices


Sherman and Morrison (1950) provide a simple algebraic method for calculating the
adjustment of the inverse matrix corresponding to a change in one element of a given matrix.
Their method shows how the change in a single element in A results in changes to all
the elements of the Leontief inverse, and that there is a limit to changing an element in A:
the change must not lead to [I - A] becoming singular. We use the 1998 9×9 A
and Leontief inverse matrices to demonstrate the numerical effect of changing a value in
A on the Leontief inverse matrix. We do not use the Sherman-Morrison equation for our
calculation, but instead use the built-in matrix inverse function in the
spreadsheet program Excel.
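The same experiment can be reproduced outside of a spreadsheet. The sketch below perturbs one element of an illustrative 2×2 A matrix and recomputes the Leontief inverse directly; a full 9×9 replication would follow the same pattern.

```python
def leontief_inverse_2x2(A):
    """Compute (I - A)^-1 for a 2x2 A via the adjugate formula."""
    m = [[1 - A[0][0], -A[0][1]], [-A[1][0], 1 - A[1][1]]]
    det = m[0][0] * m[1][1] - m[0][1] * m[1][0]
    return [[m[1][1] / det, -m[0][1] / det],
            [-m[1][0] / det, m[0][0] / det]]

A = [[0.10, 0.30],
     [0.20, 0.05]]
L0 = leontief_inverse_2x2(A)

A[1][0] *= 1.25   # increase one direct requirement by 25%
L1 = leontief_inverse_2x2(A)

# Every element of the Leontief inverse changes, but by differing amounts
for i in range(2):
    for j in range(2):
        pct = 100 * (L1[i][j] - L0[i][j]) / L0[i][j]
        print(f"L[{i}][{j}]: {L0[i][j]:.4f} -> {L1[i][j]:.4f} ({pct:+.2f}%)")
```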

Example 1. The effect of changing one element in the A matrix on the elements in the
Leontief matrix.
If element A_{4,3} is increased by 25%, we want to know the magnitude of the changes in the
Leontief inverse matrix. The original A_{4,3} = 0.298 in the 1998 table, and the increased value
is A_{4,3} = 1.25 × 0.298 = 0.3725. All other elements in A are unchanged. All calculated
elements in the new Leontief matrix show some change, but the amount of change varies. For
an increase of 25% in A_{4,3}, the inverse element [I - A]^-1_{4,3} increases by nearly the
same amount. The manufacturing column and the construction row elements all change by the
same percentage. In the Leontief matrix system, the sum of a column is called the backward
linkage and the sum of a row is called the forward linkage. As a result of the 25% increase
in the direct requirement of the manufacturing sector on the construction sector, each
backward element in the construction sector increases by 0.17%. There is a similar increase
in each element in the forward linkage of the manufacturing sector. Other sectors change by
different amounts. In percentage terms, the changes for the new Leontief matrix compared
to the original 1998 matrix are shown in Table IV-2.
[Table IV-2]- INSERT
Example 2. The effect of a small change in a single cell of A on the Leontief matrix.
The value of A_{4,3} is rounded from three decimal places to two, from 0.298 to 0.30. The
relative change in the new Leontief matrix is small for all cells, and no cell has a positive
change (see Table IV-3). The largest change, 0.6%, is in [I - A]^-1_{4,3}.

[Table IV-3]- INSERT

Example 3. The effect of rounding all cells of A from three decimal places to two.
The changes in the new Leontief matrix are both positive and negative, and are larger than
for the single cell rounding change illustrated in the previous example (see Table IV-4).
[I - A]^-1_{4,3} changes by 1.7% when all cells in A are rounded down. Rounding all the
cells in A to two decimal places results in large changes in many cells of the Leontief matrix.
The largest negative change is 71.1% in [I - A]^-1_{7,1}, and the largest positive change is
54.1% in [I - A]^-1_{9,2}.

[Table IV-4]- INSERT

Modeling Changes in the [I - A] and Leontief Matrices with Probabilistic Methods

The literature dealing with uncertainty in the Leontief equations is not extensive. The earliest
work that we have found dealing with probabilistic errors is the PhD thesis of
R.E. Quandt (1956). Quandt's (1958, 1959) analysis of probabilistic errors in the Leontief
system was limited by the computing facilities of the late 1950s. His numerical experiments
were confined to small (3×3) matrices. He developed equations to calculate expected values
for means and variances of the Leontief matrix based on estimates of these parameters for
the A matrix.
Quandt investigated changes in the Leontief equations by examining them in this form:

[I - A - E]^-1 y = x    (IV-3)

Quandt specified conditions on his errors E such that each element A_{ij} + E_{ij} > 0 and
that, for each column j, the sum of all elements A_{ij} + E_{ij} must be less than 1; that is,
the uncertain A elements satisfied the Hawkins-Simon conditions. His work examined eleven
discrete distributions for E (eight were centered at the origin and two were skewed about the
origin). The probabilities of these errors were also modeled discretely with choices of
uniform, symmetric, and asymmetric distributions.
For each distribution, Quandt selected a sample of 100 3×3 matrices. Each sample set
represented about 0.5% of the total population of 3^9 = 19,683 matrices. From this set of
experiments he calculated the variance and the third and fourth moments of the error
distributions and the resulting vector x. Quandt used a constant demand vector y for all his
experiments. The mean values of x were little changed from the deterministic values, and the
variance of A had little effect on the mean values of x.
Quandt concluded the following:
1. The skewness of the errors in A is transmitted to the skewness of the errors in the x
vector.
2. The lognormal distribution provides a fairly adequate description of the distribution
of the x vector elements, irrespective of the distribution of the errors in A.
3. One can use the approximate lognormal distribution to establish confidence limits
for the elements in the solution x.

West (1986) performed a stochastic analysis of the Leontief equations with the
assumption that elements in A could be represented by continuous normal distributions. He
presents equations for calculating the moments of the elements in A. West's work is
critically examined by Raa and Steel (1994), who point out shortcomings in his choice of
normality for the A elements (mainly that elements in A cannot be less than zero) and
suggest using a beta distribution limited to the interval between 0 and 1 to
keep elements in A positive.

Some Numerical Experiments with Stochastic Input A + E matrices


The examples presented in this section are constructed using Microsoft Excel spreadsheets
and @Risk software. They illustrate the ease by which we can study the effects of changes in
the form of elements of A on the Leontief matrix and some of the multipliers and linkages
commonly used in Leontief analysis. Numerical simulation is easy, and results from more
than a thousand iterations are obtained quickly. Still, the critical issues are the formulation of
good questions and the interpretation of the results of numerical experiments. As we have
pointed out before, the lack of a detailed empirical database to support our assumptions
about the statistical properties of the elements in the direct requirements A matrix is the
most important limitation of this analysis.
For each of our numerical experiments, we compare the properties of the input A + E direct
requirements matrix and the output [I - A - E]^-1 total requirements matrix, where E is an
introduced perturbation. The results of stochastic simulations for the [I - A - E] inverses are
compared to the deterministic calculation of the [I - A] inverse. We report some representative
results for four scenarios.
1) In each scenario, the means of A are the 1998 values reported to three decimal
places.
2) Four types of input distributions are examined: a uniform distribution and two
triangular distributions with both positive and negative skewness, and a
symmetric triangular distribution.
An Excel spreadsheet is constructed with the 1998 9×9 matrix A and used to calculate the
Leontief inverse. @Risk uses the Excel spreadsheet as a basis for defining input and output
cells. The chosen probabilistic distribution functions can be selected from a menu in @Risk;
the number of iterations can be set, or one may let the program automatically choose the
number of iterations to reach closure. We expect changes from the results of the
deterministic calculation of the Leontief matrix from A; Simonovits (1975) showed that

Exp[(I - A)^-1] > (I - Exp[A])^-1    (IV-4)


where Exp is the expected value operator. We use the software to numerically simulate the
values of the Leontief matrix given an assumed distribution for A.
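This inequality is a consequence of Jensen's inequality, since the Leontief inverse is convex in A over the admissible range. It is easy to verify by simulation even in the one-sector case, where the Leontief inverse reduces to 1/(1 - a); the uniform distribution below is illustrative:

```python
import random

random.seed(0)

# One-sector "economy": a is uniform on (0.2, 0.4), so Exp[a] = 0.3
draws = [random.uniform(0.2, 0.4) for _ in range(100_000)]

exp_of_inverse = sum(1 / (1 - a) for a in draws) / len(draws)
inverse_of_exp = 1 / (1 - 0.3)

print(f"Exp[(1-a)^-1] = {exp_of_inverse:.4f} > (1-Exp[a])^-1 = {inverse_of_exp:.4f}")
```

The simulated expectation of the inverse exceeds the inverse of the expectation, as equation IV-4 predicts.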
Table IV-5 shows the results of our simulations for the manufacturing:construction element
in A, namely A_{4,3}. Each of the 81 values in A is iterated over 1,000 times for each
simulation. Here only the distribution of values for one cell, the manufacturing:construction
intersection, is reported.

[Table IV-5]- INSERT

The numerical simulations show that the mean value of [I - A]^-1_{4,3} for the symmetric
uniform distribution is identical to the deterministic value for this sector pair of
manufacturing:construction. The mean values of [I - A]^-1_{4,3} for the two skewed
distributions are both lower than the deterministic value, as is the mean value for the
symmetric triangular distribution. The coefficient of variation (COV: the ratio of
the standard deviation to the mean) of [I - A]^-1_{4,3} is smaller than the COV of A_{4,3}
for all simulations except the uniform input distribution. Consistent with Quandt's
conjecture, the skewness of every [I - A]^-1_{4,3} increases except for the uniform
distribution input. Additional work remains to show the patterns for the entire distribution
of cells in A.

Energy Analysis with Stochastic Leontief Equations


The following table is representative of the 9-sector U.S. economy, with nearly $15 trillion of
total transactions and total value added, or GDP, of more than $8 trillion. We have
included a column showing the percentage of value added for this economy. If we think in
terms of $100 million of value added, or final demand, for the U.S. economy, we can also
think of this demand in disaggregated sector demands of nearly $18 million for manufactured
products, more than $5 million of construction, etc.
For this energy analysis, we show three columns of energy data in mixed units, energy per $
million of total sector output. Hence, the manufactured products sector uses nearly 3.5 TJ of
total energy per $ million of output, 0.24 million kWh of electricity per $ million of output,
and 0.44 TJ of coal per $ million of output.

[Table IV-6]

The total energy use and the direct energy use for each sector are given by:

r = [R diagonal] x = [R diagonal] [I - A]^-1 y    (IV-5)

and

r_direct = [R diagonal] [I + A] y    (IV-6)

where r is a vector of energy use by sector, [R diagonal] is a matrix with diagonal cells equal
to the energy use per dollar of sector output and off-diagonal terms equal to zero, and A is
the requirements matrix.
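As a sketch with an illustrative two-sector model, equations IV-5 and IV-6 can be compared directly. Since (I - A)^-1 = I + A + A^2 + ... for a productive economy, total energy use is always at least the direct (first-tier) energy use:

```python
# Illustrative two-sector example of equations IV-5 and IV-6
A = [[0.10, 0.30],
     [0.20, 0.05]]
R = [3.5, 0.8]   # energy intensity, e.g., TJ per $ million of sector output
y = [18.0, 5.0]  # final demand, $ million

def matvec(M, v):
    return [sum(M[i][j] * v[j] for j in range(len(v))) for i in range(len(M))]

# Total: r = R_diag (I - A)^-1 y, with the 2x2 inverse computed explicitly
det = (1 - A[0][0]) * (1 - A[1][1]) - A[0][1] * A[1][0]
Linv = [[(1 - A[1][1]) / det, A[0][1] / det],
        [A[1][0] / det, (1 - A[0][0]) / det]]
x = matvec(Linv, y)
r_total = [R[i] * x[i] for i in range(2)]

# Direct (first-tier) approximation: r_direct = R_diag (I + A) y
IplusA = [[(1 if i == j else 0) + A[i][j] for j in range(2)] for i in range(2)]
x_direct = matvec(IplusA, y)
r_direct = [R[i] * x_direct[i] for i in range(2)]

print("total: ", [round(v, 1) for v in r_total])
print("direct:", [round(v, 1) for v in r_direct])
```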

Example 1. For this example, we use Excel and @Risk to build a model to calculate the
uncertainty, in physical units, of both the direct and the total energy use for $100 million of
final demand for the U.S. economy. The demand is distributed among the nine sectors
proportionally to the distribution of value added in the economy. We assume we know the
demand vector with certainty, and that we know the physical energy use with certainty. All
uncertainty for this example is in A.
Assume that the entries in A may be represented by a symmetric triangular distribution with
a low limit of zero, a mode equal to the three decimal place value reported by BEA, and a
high limit of two times the mode. The mean of this distribution is equal to the mode. This is
equivalent to saying that the coefficient of variation is constant for all entries, with a value of
0.41. Previously, we presented the results of simulations for this triangular distribution on
the Leontief matrix. In this example, we examine the distribution of r_direct and the total
r.
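The claimed coefficient of variation can be checked by sampling: a symmetric triangular distribution with a low of 0, mode m, and high of 2m has mean m and standard deviation m/sqrt(6), i.e., a COV of about 0.41 regardless of m. A quick sketch (the mode value is one of the coefficients discussed earlier):

```python
import random

random.seed(7)

mode = 0.298  # e.g., the manufacturing:construction coefficient A_{4,3}
# Note Python's argument order is triangular(low, high, mode)
draws = [random.triangular(0.0, 2 * mode, mode) for _ in range(200_000)]

mean = sum(draws) / len(draws)
var = sum((d - mean) ** 2 for d in draws) / len(draws)
cov = var ** 0.5 / mean

print(f"mean = {mean:.4f} (mode = {mode}), COV = {cov:.3f}")  # COV near 0.41
```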
@Risk performed 5,000 iterations to calculate the mean value and the standard deviation of
the energy use for each of the nine sectors for a $100 million increment of GDP
proportionally distributed across the economy. The uncertainty in the energy output for each
sector is shown by the standard deviation and the COV. The sum of the total energy use for
the whole economy is 730 TJ, and the direct energy use is 563 TJ. The sector values for
r_direct are smaller than the total r values, and the r values have more uncertainty than the
r_direct values, as shown by the COVs. The COVs for all sectors are smaller than the
constant COV of 0.41 assumed for A. Direct energy use is lowest as a percentage of the
total energy use for the agricultural products and minerals sectors. For all other sectors, the
direct energy use is more than 70% of the total energy use.
[Table IV-7]

Summary
Uncertain values in the cells of the requirements matrix generate uncertain values in the cells
of the total requirements, or Leontief, matrix. Three cases have been studied. Two
deterministic cases are presented: in one case only a single value in A is modified, and in the
second case all the values in A are changed. For a set of probabilistic examples, we used
Excel and @Risk to calculate the Leontief matrix for a uniform and three triangular
distributions of A as input. The simulations show small effects on the mean values of the
Leontief matrix and larger changes in the second and third moments.
An example of an energy analysis for a 9-sector model of the U.S. economy shows the
effect of uncertainty from A on the total energy use r and the direct energy use r_direct for
each sector.


Advanced Material: How does SimaPro do Uncertainty?


Chapter 12: Advanced Hybrid Hotspot and Path Analysis


In this chapter, we describe advanced LCA methods that consider all potential paths of a
modeled system as separate entities, instead of summarizing aggregated results. For process
matrix-based methods, these methods are often referred to as network analyses, and for IO-based methods, as structural path analyses. These methods are helpful in support of
screening level analyses, as well as in helping to identify specific processes and sources of
interest. In addition, we discuss hybrid methods that allow us to consider changes to the
parameters used in the default network or structural path analysis in order to estimate the
effects of alternative designs, use of processes, or other assumptions.

Learning Objectives for the Chapter


At the end of this chapter, you should be able to:
1. Explain the limitations of aggregated LCA results in terms of creating detailed
assessments of product systems.
2. Express and describe interdependent systems as a hierarchical tree with nodes and
paths through a tree.
3. Describe how structural path analysis methods provide disaggregated LCA estimates
of nodes, paths, and trees.
4. Interpret the results of a structural path analysis to support improved LCA decision
making.
5. Explain and apply the path-exchange method to update SPA results for a scenario of
interest, and describe how the results can support changes in design or procurement.

Results of Aggregated LCA Methods


While the analytical methods described in the previous chapters are useful in terms of
providing results in LCA studies, these results have generally been aggregated. By
aggregated results, we mean those that have been 'rolled up' in a way so as to ignore
additional detail that may be available within the system.
In process-based methods, aggregated results express totals across all processes of the same
name that are modeled within the system boundary (or within the boundary of the process
matrix). For example, Figure 9-5 (revisited here as Figure 12-1) showed aggregated fossil
CO2 emissions to air across the entire inverted US LCI process matrix system by process in
the production of bituminous coal-fired electricity.
Process                                                    Emissions (kg)   Percent of Total
Total                                                      1.033
Electricity, bituminous coal, at power plant/US            1.004            97.2%
Diesel, combusted in industrial boiler/US                  0.011            1.0%
Transport, train, diesel powered/US                        0.009            0.9%
Electricity, natural gas, at power plant/US                0.002            0.2%
Residual fuel oil, combusted in industrial boiler/US       0.002            0.2%
Transport, barge, residual fuel oil powered/US             0.001            0.1%
Natural gas, combusted in industrial boiler/US             0.001            0.1%
Gasoline, combusted in equipment/US                        0.001            0.1%
Electricity, lignite coal, at power plant/US               0.001            0.1%
Transport, ocean freighter, residual fuel oil powered/US   0.001            0.1%
Bituminous coal, combusted in industrial boiler/US         0.001            0.0%

Figure 12-1: Top products contributing to emissions of fossil CO2 for 1 kWh of bituminous coal-fired
electricity. Those representing more than 1% are bolded.

The total emissions across all processes are 1.033 kg per kWh of electricity, with the top
processes contributing 1.004 kg CO2 to that value from producing coal-fired electricity and
0.011 kg CO2 from producing diesel fuel that is combusted in an industrial boiler. These
two values generally include all use of coal-fired electricity and diesel in boilers through the
system, not just direct use.
Similarly, in IO-based methods, the aggregated results expressed as the output of using an
IO model with the Leontief equation provides totals across all of the economic sectors
within the IO model. Figure 8-5 (revisited here as Figure 12-5) showed total and sectoral
energy use for producing $100,000 in the Paint and coatings sector. The total CO2 emissions
across the supply chain are 107 tons, with the top sectoral sources being electricity (25 tons)
and dyes and pigments (17 tons).
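The rolled-up totals behind a tabular result like this come directly from the Leontief calculation. As a hedged sketch in Python/NumPy, using a small hypothetical two-sector system (the same numbers as the Chapter 8 example revisited later in this chapter) rather than the 428-sector model:

```python
import numpy as np

# Hypothetical 2-sector stand-in for the 428-sector model
A = np.array([[0.15, 0.25],
              [0.20, 0.05]])   # direct requirements matrix
R = np.array([50.0, 5.0])      # emissions per unit of sector output
Y = np.array([100.0, 0.0])     # final demand

x = np.linalg.solve(np.eye(2) - A, Y)   # total output by sector: x = (I-A)^-1 y
by_sector = R * x                       # the 'rolled-up' per-sector emissions
print(x.sum())                          # total economic activity
print(by_sector, by_sector.sum())       # aggregated emissions, by sector and total
```

Each entry of `by_sector` is a total across every appearance of that sector anywhere in the supply chain, which is exactly the aggregation this chapter sets out to unroll.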


Sector                          Total ($ thousand)   CO2 equivalents (tons)
Total across all 428 sectors    266                  107
Paint and coatings              100
Materials and resins            13
Organic chemicals               12
Wholesale Trade                 10
Management of companies         10                   <1
Dyes and pigments                                    17
Petroleum refineries
Truck transportation
Electricity                                          25

Figure 12-2: EIO-LCA economic and CO2 emissions results for the 'Paint and Coatings' Sector

While these results may seem detailed, and to some extent they are, they fail to represent
additional detail that exists within the models, i.e., the specific processes and sectors at the
various tiers that lead to these emissions. In the process-based results, multiple processes
may contribute to the CO2 emissions from coal-fired electricity or diesel fuel, but the
aggregated totals provided do not show which of these underlying processes lead to the
most emissions. It may be important to know whether most of the emissions are from final
production or from a specific upstream product or process. Instead, only the aggregated
totals across all instances of production of coal-fired electricity and diesel are shown.
The IO-based results are also aggregated, e.g., across all sectors throughout the economy
where electricity or dyes are produced. There are likely thousands of sectors in the economy
that use electricity that lead to the provided estimate of emissions, and we may be interested
in knowing which of them lead to the most. Knowing that a handful of specific facilities
within the supply chain purchase the most electricity and thus lead to the most emissions
may be an important finding. But the aggregated results cannot show this; they can at
most provide estimates of direct and indirect effects. Supply chain management could be
greatly informed by better information at the upstream facility level, and knowledge of
impacts of specific facilities could augment IO-based screening (as will be seen later in the
chapter).
Aggregated results are neither problematic nor inappropriate to use. In many studies,
aggregated results may be sufficient. Previous chapters have shown that acquiring the
aggregate results can be done quickly. Disaggregating into additional detail will require more
time and effort. "Unrolling" aggregated results from either process-based or IO-based
methods into the underlying flows (i.e., for a specific individual connected process or sector)
is the goal of this chapter.


A Disaggregated Two-Unit Example


Before demonstrating the methods to provide the additional detail sought, consider a simple
example with two 'units', as shown in Figure 12-3. We use 'unit' to generally refer to
processes or sectors that may exist in process-based or IO-based systems. The circles, 1 and
2, represent activities of the two units of the system. This is equivalent to a process matrix
model with only 2 processes, or an IO model with only 2 sectors. The directional arrow at
the top of Figure 12-3(a) shows that unit 2 requires a certain amount of output from unit 1
(A12), and the circular arrow on the right represents the amount of output required from
itself (A22). Unit 1 requires output from unit 2 and itself. These four flows (represented as
directional arrows) would be the elements of the A matrix in this two-unit example.

Figure 12-3: Overview of Hierarchical Supply Chain for Two-Unit Economy. (a) represents the flows
between the two sectors as would be represented in a 2 x 2 A matrix. (b) represents the flows as an
interdependent two-sector economy. Adapted from Peters 2006.

Figure 12-3(b) is an alternative representation of the economy, expressed as a hierarchical


tree of the supply chain. At the top is the final demand vector Y. Producing that final
demand will require some amount of production in the two sectors (there are two separate
values Y1 and Y2, potentially zero valued). Using the nomenclature introduced in Chapter 8,
this initial demand for production is classified as Tier 0 in the hierarchy. Producing the Tier
0 output in each unit may require some amount of production from units 1 and 2 (as
expressed in Figure 12-3(a)) in Tier 1. Likewise, the Tier 1 production requires amounts of
product from units 1 and 2, etc. The arrows at the bottom of Figure 12-3(b) remind us that
this tiered production graph would continue on and on beyond what is shown. The
hierarchical tree continues to double in complexity since there are two units in the economy,
and at each tier the relative amount of product needed from the other unit is found by the
relationships in Figure 12-3(a), a.k.a., the direct requirements matrix A.
Even in the abridged Figure 12-3(b), we see how the supply chain looks like a tree, and how
at each tier the flows are branches and the circles are nodes - or apples on the tree. Going
further down the tree to subsequent tiers, we see more and more apples. This is a far more
comprehensive view of the product-system. If we generally extended Figure 12-3 to many
more levels, we could represent the entire supply chain. Of course quantitatively many of
the apples would have zero values, because either there was no interlinked demand or
because the needed flow was zero. But if we added up all of the individual apples' values, we
would get the same aggregated results we get from using the inverted process or IO-based
model, which otherwise sums all effects for each of the two units. In IO terms, summing all
of the node 1's would give us the IO result for Sector 1 and summing all of the node 2's
would yield the rolled-up total for Sector 2 (as in the Tabular results in Chapter 8).
Continuing with the simple two-unit system example, imagine that unit 2 represents
electricity production. If we were considering the impacts of electricity across our supply
chain, then every time there was a non-zero value for output in unit 2, we could find the
amount of electricity needed (and/or use the R matrix values to estimate greenhouse gas or
other emissions effects of that production) for each of the apples on the tree. Continuing to
go down Figure 12-3(b) would allow us to estimate electricity use for the entire supply chain.
The sum of all of them would, as described above, provide the aggregated result available for
the entire electricity sector using the Leontief IO equation. There is great benefit, though, in
being able to separately assess the various amounts of electricity of each of the apples on the
tree. The largest apples (in terms of values of effects like emissions) could be very
important. It is likely that there are "electricity apples" in lower tiers that are bigger than
some in higher tiers.
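The claim that summing all of the apples reproduces the aggregated result can be checked numerically. Below is a hedged Python/NumPy sketch for a hypothetical two-unit system (the matrix values are the Chapter 8 example used later in this chapter):

```python
import numpy as np

A = np.array([[0.15, 0.25],
              [0.20, 0.05]])   # direct requirements for the two units
Y = np.array([100.0, 0.0])     # final demand

# Tier 0 output is Y itself; each lower tier requires A times the tier above it.
tier_totals = []
v = Y.copy()
for tier in range(30):          # 30 tiers is plenty for convergence here
    tier_totals.append(v)
    v = A @ v
summed = np.sum(tier_totals, axis=0)

# The same totals from the closed-form Leontief inverse
leontief = np.linalg.solve(np.eye(2) - A, Y)
print(summed, leontief)   # the tier-by-tier sum matches the Leontief result
```

The per-tier values shrink geometrically, which is why the tree can be truncated after a modest number of tiers without losing much of the total.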
Let's now extend our vision of the system beyond two units. While not drawn here, we
know from earlier discussion that the 2002 US Benchmark IO Model has 428 sectors. The
equivalent Figure 12-3 for this model would have a final demand Y with 428 potential
values. Tier 0 would have 428 potential node values (apples), Tier 1 would have 428
squared, etc. The total number of apples quickly reaches into the billions. That means if we wanted
to separately estimate the economic or emissions values for all of the apples, we would need
to do a substantial amount of work.

Structural Path and Network Analysis


The goal of an advanced hotspot analysis is being able to identify the 'large apples' in our
hierarchical tree. In reality, we would want to know more than just the size of the apples, we
would also want to know the branches of the tree that lead to the apples. In a more
traditional hierarchical tree (or graph theory) discussion, the collection of the branches from
Y down to a specific node (apple) would be referred to as a path. In a 428-sector economy,
a path could represent, for example, the electricity consumed by the engine manufacturer of
a final demand automobile, or the production of rubber in tires needed by a new automobile.
The path (as expressed in the directional arrows of Figure 12-3) "leads" from top to bottom,
or from final demand down to a specific apple. It could be expressed from bottom to top as
well; the important aspect of the path is the chain of nodes between the final demand and
the final node. Depending on our goals, we may be interested in the relevance of all paths,
or a specific path (e.g., the path represented by Y-1-2-1 in Figure 12-3).
A structural path analysis (SPA) is a method that helps to explain the embedded effects in
a system studied by an input-output model.16 It was originally described for economic
models (see Crama 1984 and Defourny 1984) and later proposed for use in LCA by Treloar
(1997). An SPA is generally described by a list of paths that show how energy or
environmental effects are connected in the upstream supply chain. An SPA will be
comprised of a variety of paths with varying lengths, e.g., there may be paths of length 1 that
go only from Y to a node in Tier 1, or paths of length 2 that go from Y down to Tier 2.
Similarly, an impact at "path length 0" is associated with the final production of the
demanded good. When visualized, SPAs are usually written as hierarchical trees with the
product at the top of the tree. Network analysis, which is typically done for process matrix
methods, is analogous to SPA and was shown as a visualization feature of SimaPro in
Chapter 9.
The value in performing SPA is in identifying the most significant effects embodied (perhaps
deeply) within the supply chain of a product. Whereas a typical aggregated IO-LCA, for
example, could estimate the total effect of all electricity consumed across the supply chain,
SPA can show the specific sites where electricity is used (e.g., electricity used by particular
suppliers or nodes).
To help understand the mechanics behind SPA, we explore further the two sector economic
system used throughout Chapter 8, where the units of R were waste in grams per $billion:

A = | 0.15  0.25 |        R = | 50  0 |
    | 0.20  0.05 |            |  0  5 |

In our analysis of that system, we estimated that for Y1= $100 billion (a final demand of
$100 billion in sector 1) the total, or aggregated, requirements were ~$152 billion and the
total waste emissions were 6.4 kg. But what if we wanted to know which particular
interconnected paths led to the largest economic flows and/or waste? SPA works by

16. This kind of analysis is also referred to as a structural decomposition, but for consistency will be referred to as SPA in this chapter.

looking for all of the effects from individual nodes and then creating paths connecting them
to the top of the tree. Returning to the Leontief system expansion Equation 8-1,
X = [I + A + AA + AAA + ...]Y = IY + AY + A2Y + A3Y + ...    (8-1)

The economic-only contribution of any particular node in the hierarchical tree can be
decomposed from the elements on the right hand side of Equation 8-1. The node with path
length 0, a.k.a., from the IY part of Equation 8-1, has a site value of Y1 or $100 billion.

Figure 12-4: Hierarchical Supply Chain Emphasizing Path to Specific Tier 2 Node

The value of the leftmost node 1 in Tier 1 of Figure 12-3, from unit 1 into unit 1, is
represented by the product Y1A11 = $100 billion*0.15 = $15 billion. Figure 12-4 is a
modified form of Figure 12-3(b) with the path to a specific node highlighted. The value of
the highlighted node in Figure 12-4, which goes from unit 1 in Tier 0 to unit 2 in Tier 1 to
unit 1 in Tier 2, is represented by the product Y1A21A12 = $100 billion*0.2*0.25 = $5 billion.
Generally, the economic value of a node in the supply chain is represented by:

Economic node value = Yk Ajk Aij    (12-1)

where i, j, and k represent various industries in the system, and the subscripts follow the path
back to the top as noted in the description above (e.g., A21 is the value representing a path
going from a node for sector 2 to a node for sector 1). Figure 12-5 shows the economic
values of various nodes in Figure 12-3 using Equation 12-1. Note that the node values are
not found by applying the general A2 and A3 matrices as implied from Equation 8-1. The
economic path values are found by using the relevant economic coefficients of the A matrix
of each step in the path from top to bottom.


Node                         Equation    Economic Value ($billions)
Y1 in Tier 0                 Y1          $100
Leftmost 1 in Tier 1         Y1A11       $15
Rightmost 2 in Tier 1        Y1A21       $20
Leftmost 1 in Tier 2         Y1A11A11    $2.25
Highlighted Node in Tier 2   Y1A21A12    $5
Rightmost 2 in Tier 2        Y1A21A22    $1

Figure 12-5: Economic Node Values for Two-by-Two System (as referenced in Figure 12-3)

As motivated above, we might care about the nodes that generate the most waste. These are
easy to find because we know the R matrix values of 50 and 5 (units of grams/$billion). If
the R matrix values are put into a vector R (i.e., [50 5]), we can multiply Equation 12-1 (or
the values in Figure 12-5) by the appropriate R value:

Effect node value = Ri Yk Ajk Aij    (12-2)

where Ri is the value for the sector at the bottom node of the path. Using Equation 12-2,
the waste from direct production is 50 g per $billion * $100 billion = 5,000 g = 5 kg.
Likewise, Figure 12-6 shows the waste output values for the same nodes of our two by two
system.
Node                    Equation     Waste Value (grams)
Y1 in Tier 0            R1Y1         5,000
Leftmost 1 in Tier 1    R1Y1A11      750
Rightmost 2 in Tier 1   R2Y1A21      100
Leftmost 1 in Tier 2    R1Y1A11A11   112.5
Rightmost 2 in Tier 2   R2Y1A21A22   5

Figure 12-6: Waste Node Values for Two-by-Two System (as referenced in Figure 12-3)
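Equations 12-1 and 12-2 can be checked numerically. The sketch below (Python/NumPy; the helper names node_value and node_effect are our own, not from the chapter's MATLAB code) reproduces the values in Figures 12-5 and 12-6:

```python
import numpy as np

A = np.array([[0.15, 0.25],
              [0.20, 0.05]])   # direct requirements
R = np.array([50.0, 5.0])      # waste, grams per $billion of output
Y1 = 100.0                     # final demand in sector 1, $billion

def node_value(path):
    """Economic value of a node (Equation 12-1).
    `path` lists 0-based sector indices from the top down,
    e.g. [0, 1, 0] is the highlighted Y1-1-2-1 node."""
    value = Y1
    for upper, lower in zip(path, path[1:]):
        value *= A[lower, upper]   # A[i, j]: input from sector i per output of j
    return value

def node_effect(path):
    """Waste at a node (Equation 12-2): R of the bottom sector times its value."""
    return R[path[-1]] * node_value(path)

print(node_value([0]), node_value([0, 0]), node_value([0, 1, 0]))  # 100, 15, 5
print(node_effect([0]), node_effect([0, 0]))                       # 5000, 750
```

Following the path coefficients one multiplication at a time, rather than forming the A2 and A3 matrices, is what lets an SPA attribute a value to each individual apple.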

These generalized equations help to illustrate the mathematics that are foundational to
performing SPA. We can use these equations to find the most important nodes for a variety
of effects, so long as we have the needed data. One additional part of SPA that we will
explore later is that SPA routines typically also provide 'LCI' values for each node, i.e., the
total (cumulative) economic and/or environmental effects under a node. The LCI value at
the top of a tree is equal to the total result found with the Leontief inverse. More generally,
a node's LCI value aggregates all nodes underneath it (including the node itself).
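A node's LCI value can be computed as its economic value times the total (supply-chain-wide) effect intensity of its sector. A hedged sketch for the two-by-two system (lci_at_node is our own helper name, not from the chapter's code):

```python
import numpy as np

A = np.array([[0.15, 0.25],
              [0.20, 0.05]])
R = np.array([50.0, 5.0])   # waste, grams per $billion

# Total waste intensity per $billion of each sector's output: R (I - A)^-1
total_intensity = R @ np.linalg.inv(np.eye(2) - A)

def lci_at_node(node_value, sector):
    """Cumulative waste of the node plus everything beneath it."""
    return node_value * total_intensity[sector]

print(lci_at_node(100.0, 0))  # top of the tree: the aggregated result (~6,400 g)
print(lci_at_node(5.0, 0))    # LCI under the highlighted $5 billion Tier 2 node
```

The first value matches the roughly 6.4 kg aggregated waste quoted for this system, confirming that the top-of-tree LCI equals the Leontief result.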
Before presenting the typical outputs of an SPA, we reconsider the typical level of detail
provided in a standard aggregated IO-LCA analysis. Figure 12-7 shows the aggregated
results via EIO-LCA to estimate the various greenhouse gas emissions of producing $1
million of soft drinks (from the Soft drink and ice manufacturing sector).


tons CO2 equivalent emissions

Process                                                  Total   Fossil CO2   Process CO2   CH4     N2O    Other
Total for all sectors                                    940     721          63.2          48.8    75     32.6
Power generation and supply                              314     309          0             0.85    1.92   1.99
Wet corn milling                                         57.7    57.7         0             0       0      0
Alumina refining and primary aluminum production         55      12.5         19.5          0       0      23
Grain farming                                            46      6.78         0             3.76    35.5   0
Oil and gas extraction                                   37      10.4         6.79          19.8    0      0
Soft drink and ice manufacturing                         35.7    35.7         0             0       0      0
Other basic organic chemical manufacturing               33.8    30.3         0             0       3.47   0
Truck transportation                                     33      33           0             0       0      0
Aluminum product manufacturing from purchased aluminum   25.4    25.4         0             0       0      0
Iron and steel mills                                     21.1    7.95         13            0.128   0      0

Figure 12-7: Aggregated EIO-LCA Results for $1 Million of Soft drink and ice manufacturing,
2002 EIO-LCA Producer Price Model

From Figure 12-7, the total CO2-equivalent emissions from producing $1 million of soft
drinks are 940 tons. Of that total, 721 tons are from fossil-based sources (such as burning
fuels), and of those fossil CO2 emissions, the largest sectoral contributor is from electricity
use. As previously noted, these aggregated values are not very helpful in understanding
where the 'hot spots' are in the system. We might be interested in knowing which of the
various factories across the supply chain of making soft drinks lead to the most electricity
use (and thus the most CO2 emissions). To answer such questions, we need to use SPA.
MATLAB code is available (see Advanced Material at end of this Chapter) to perform a
complete, economy-wide SPA on a particular input-output system. This code takes as inputs
a final demand Y, a direct requirements matrix A, an R matrix, and a series of truncation
criteria, and produces a sorted list of the paths (ordered by their path through the system)
with the highest impact. The truncation criteria ensure that the path analysis performed is
not infinitely complex.
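The MATLAB code itself is in the Advanced Material; as a hedged illustration of the underlying routine, here is a minimal recursive SPA in Python for the two-sector system used earlier in this chapter (function and parameter names are our own, and a 1% cutoff is used instead of 0.01% so the toy output stays short):

```python
import numpy as np

A = np.array([[0.15, 0.25],
              [0.20, 0.05]])
R = np.array([50.0, 5.0])                            # waste, grams per $billion
total_intensity = R @ np.linalg.inv(np.eye(2) - A)   # for LCI-based pruning

def spa(value, sector, path, max_len, cutoff, out):
    """Depth-first enumeration of paths, pruning on each node's LCI value."""
    lci = value * total_intensity[sector]
    if lci < cutoff:                 # truncation criterion 1: small LCI
        return
    out.append((R[sector] * value, lci, path))
    if len(path) <= max_len:         # truncation criterion 2: path length
        for supplier in range(len(A)):
            spa(value * A[supplier, sector], supplier,
                path + [supplier], max_len, cutoff, out)

total = 100.0 * total_intensity[0]       # aggregated result for Y1 = $100 billion
paths = []
spa(100.0, 0, [0], max_len=3, cutoff=0.01 * total, out=paths)
paths.sort(key=lambda p: -p[0])          # sort by site value, as in Figure 12-8
for site, lci, path in paths:
    print(f"site={site:7.1f} g  LCI={lci:7.1f} g  path={path}")
```

Pruning on the LCI value is what keeps the enumeration finite: once a node's cumulative effect falls below the cutoff, every node beneath it must be smaller still, so the whole branch can be skipped.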
Figure 12-8 shows abridged SPA results, only for the top 15 paths of the 2002 benchmark
US IO model, using the MATLAB code estimating the total greenhouse gas emissions of $1
million of soda (from the Soft drink and ice manufacturing sector) and a truncation parameter of
0.01% of the total tons CO2e in the SPA for a path's LCI value.17

17. These results were found using the MATLAB code provided as of the time of writing, and may differ from those found using the most
current data. The values presented herein should be considered as assumed values for the discussion and diagrams in this chapter.


Length | GWP Site | LCI @ Site | S1 | S2 | S3 | S4
1 | 79.6 | 83.6 | 70 SDM | 31 Power generation and supply
1 | 52.1 | 119.4 | 70 SDM | 44 Wet corn milling
0 | 36.0 | 940.4 | 70 SDM
2 | 32.4 | 48.1 | 70 SDM | 174 Aluminum product manufacturing from purchased aluminum | 173 Alumina refining and primary aluminum production
2 | 32.0 | 33.6 | 70 SDM | 148 Plastics bottle manufacturing | 31 Power generation and supply
2 | 29.8 | 42.9 | 70 SDM | 44 Wet corn milling | Grain farming
1 | 20.0 | 166.0 | 70 SDM | 174 Aluminum product manufacturing from purchased aluminum
1 | 15.2 | 22.8 | 70 SDM | 11 Milk Production
1 | 15.0 | 21.7 | 70 SDM | 324 Truck transportation
2 | 14.7 | 15.5 | 70 SDM | 174 Aluminum product manufacturing from purchased aluminum | 172 Secondary smelting and alloying of aluminum
3 | 13.2 | 13.9 | 70 SDM | 174 Aluminum product manufacturing from purchased aluminum | 173 Alumina refining and primary aluminum production | 31 Power generation and supply
2 | 11.6 | 44.5 | 70 SDM | 148 Plastics bottle manufacturing | 127 Plastics material and resin manufacturing
2 | 11.0 | 11.6 | 70 SDM | 44 Wet corn milling | 31 Power generation and supply
3 | 9.3 | 9.7 | 70 SDM | 107 Paperboard container manufacturing | 106 Paperboard Mills | 31 Power generation and supply
1 | 8.8 | 17.1 | 70 SDM | 107 Paperboard container manufacturing

Figure 12-8: Example SPA of Total Greenhouse Gas Emissions for $1 Million of Soft drink and ice
manufacturing Sector. GWP Site and LCI Values in tons CO2e. SDM = Soft Drink Manufacturing

Full results for $1 million of Soft drink and ice manufacturing have about 1,100 paths and are
in the web-based resources for Chapter 12 (as SPA_SoftDrinks_GHG_Tmax4.xls).
SPA spreadsheets for several other sectors using the same parameters are also posted there.
It is worth browsing them to see how the same parameters lead to vastly different numbers
of paths (e.g., tens versus thousands) and thus how concentrated the hot spots are.


Figure 12-8 is sorted by the results for the nodes or 'apples on the tree', which in this case
are the GHG emissions from any particular apple in the economic system. This is column 2,
noted as the 'GWP site', referring to the greenhouse gas emissions of each apple (emissions
at the facility represented by the node). The next column shows the total LCI value at the
node (i.e., the emissions of the apple and all of the branches and apples beneath it). The
remaining columns describe the path from Y down to the node for these top GHG
emissions paths. All values in S1 are for sector 70, soft drink and ice manufacturing (SDM).
Sector numbers and names for S2-S4 are also listed for paths of length higher than zero.
Across all of the many apples, the first row of Figure 12-8 shows that the discrete activity in
the supply chain of making soda with the highest GHG emissions comes from the path for
producing the electricity purchased to make soda. It is in Tier 1 (path has length 1 from
final demand Y) and the site emissions are 79.6 tons CO2e. The entire LCI (including the
site) for this path is 83.6 tons CO2e. That means there are about 4 tons CO2e more GHG
emissions further down that chain (e.g., from mining or transporting the coal needed to
make electricity). The second largest apple is from the path for wet corn milling needed to
produce soda (also path length 1). The third biggest discrete source of GHG emissions
would be the emissions at the soda manufacturing facility itself (path length 0, about 36 tons
CO2e). The LCI value for this row estimates that there are about 940 tons CO2e below this
node. Since it is path 0, this LCI value comprises all of the other apples in the SPA, and so
it equals the rolled-up aggregated results for $1 million of soft drinks from Figure 12-7.
Each SPA has only one path length 0 entry, and its site value is commonly not the largest
value in the sorted SPA. The final row we highlight is the eleventh largest, which is one of
only two paths of length 3 in the abridged SPA (top 15) summary. It is the electricity that
goes into alumina refining that is needed to make aluminum! The sum across the abridged
top 15 GWP site values is only 380 tons CO2e, which is only 40% of the total 940 tons
CO2e. Summing site values ensures we do not double count emissions represented in the
LCI values.

It is common that the percent contributions of paths will diminish quickly. With an overall
LCI for $1 million of soda of approximately 940 tons CO2e, by the 15th highest path, the
contribution to the total is already less than 1% (8.8 / 940). This further reinforces our
discussion from Chapter 8, which showed that aggregated results also diminish quickly.
Thus, an SPA sorted with the top 15 or 20 paths will typically yield most of the "hot spots"
for discrete activities with the highest impact in the supply chain.
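These percentages can be verified directly from the site values quoted above (a quick Python check using the rounded figures):

```python
# Figures quoted in the chapter for the $1M soft drink SPA (tons CO2e)
total_lci = 940.4
top15_sites = [79.6, 52.1, 36.0, 32.4, 32.0, 29.8, 20.0, 15.2,
               15.0, 14.7, 13.2, 11.6, 11.0, 9.3, 8.8]

coverage = sum(top15_sites) / total_lci
print(f"sum of top 15 site values: {sum(top15_sites):.1f} tons")   # ~380
print(f"share of the total: {coverage:.0%}")                       # ~40%
print(f"15th path's share: {top15_sites[-1] / total_lci:.2%}")     # under 1%
```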
An unabridged SPA for an activity in a 428-sector economy would be massive, with billions
of rows representing all of the apples in the hierarchical tree. But, if we had an unabridged
SPA with all site values for all paths, the sum of all of the GWP site values would equal 940
tons. However, there are many nodes with zero or very small values, many of them deep
into the tiers, and these nodes can be ignored, reducing the size of the SPA. While the
complete method to create the SPA excerpted in Figure 12-8 is beyond the scope of the
main chapter (but is discussed in the Advanced Material for this chapter, Section 1), it was
generated by truncating the SPA with a cutoff of 0.01% of total GHG emissions and by
following paths only down to Tier 4 (i.e., including up to nodes with path lengths of 3).
The cutoff means that the minimum LCI value at a node required for it to be included in
the results had to be greater than 0.01% * 940, or 0.094 tons CO2e. Even this truncated
SPA, however, returns about 1,100 paths (i.e., imagine 1,100 rows in Figure 12-8).
Regardless, using SPA software with truncation parameters will under-represent the total
aggregate results because of the excluded paths. Not shown here (but visible in the posted
spreadsheet) is the sum of all site values in the resulting 1,100-path SPA, which still
represents only about 75% of the entire 940 tons CO2e in the supply chain. Thus, 25% of
the total greenhouse gas emissions are comprised of many, many very small site and LCI
values not large enough to surpass the threshold or with path lengths greater than 3.
One caveat is that despite the more disaggregated results that SPA provides, it still does not
necessarily give facility-level information. The estimate provided in Figure 12-8 for truck
transportation is for all contracts and purchases of truck transportation by the soft drink
manufacturer, which are likely from many different individual trucking companies.
The format of Figure 12-8 arises from the output used by the MATLAB code available to
generate SPA results from an economic system. A generally used description of paths is to
list them from beginning to end (bottom to top in a hierarchical tree), e.g., the first one in
Figure 12-8 would be written as 'Power generation and supply > Soft drink and ice
manufacturing'. This nomenclature will also be used in this chapter.


Figure 12-9: Hierarchical Tree View of Structural Path Analysis for Soft Drink and Ice Manufacturing

The PowerPoint template used to make Figure 12-9 is in the Chapter 12 folder.
Despite providing significant detail, SPA results expressed in tabular form can be difficult to
interpret. Figure 12-9 shows a truncated graphical representation of most of the SPA results
shown in Figure 12-8 (several of the values come from the underlying SPA, not shown).
The values in the blue rectangles represent the sector names of the activities and the GHG
emissions at the site, similar to Figure 12-3(b), but streamlined to only include a subset of
them at each tier. The numerical values above the rectangles are the GHG LCI values from
Figure 12-8. For example, at the very top of the hierarchy is the rectangle representing the
initial (Tier 0) effects of Soft drink and ice manufacturing, which Figure 12-8 shows has a site
value of 36 tons CO2e and LCI emissions of, rounded off, 940 tons. Likewise, the Tier 1 site
emissions of Wet corn milling are 52 tons, and the LCI is 119 tons CO2e.
As promised, SPA, unlike aggregate methods, shows a far richer view of where flows occur
in the product system. By using SPA, we could improve our understanding of our product,
or the design of our own LCA study. For example, in the soft drink example above, we
could ensure that key processes such as wet corn milling and other high emissions nodes are
within our system boundary. If such nodes were excluded, we would be ignoring significant
sources of emissions.


Web-based Tool for SPA


Visual structural path analyses similar to those shown above can be generated online via the
EIO-LCA SPA tool (accessed at http://www.eiolca.net/abhoopat/componentviz/ ).
The SPA tool has four elements, which display results of the 2002 benchmark model:
1) A search or browse interface to find the input-output sector you want to analyze
2) A pull down menu for selecting the effect you want to analyze (e.g., energy, carbon
emissions, water use, etc.)
3) A sector hierarchy with categories of products you want to browse amongst to
choose a sector to analyze
4) A Structural Path Analysis graphic displaying the results (shown after the three
above are chosen)
A key component of the structural path visualization is the ability to 'cut off' the many small
paths in the supply chain (e.g., all paths with impacts less than 0.1%) in order to more
effectively and efficiently focus on visualizing the results of the larger paths for decision
making. The SPA tool allows the user to select any of the 428 IO sectors in the 2002 model,
and then visualize the hierarchy or chain of effects that lead to significant impacts. Figure
12-10 shows the initial screen displayed when using the SPA tool to select a sector.


Figure 12-10: Home Screen of the Online SPA Tool

Once a sector is chosen in the online interface, either by searching (i.e., by starting to type it
in words and then selecting from a set of auto-completed options) or browsing (i.e., using
the categorical drill down + symbols), the SPA display begins. Figure 12-11 shows the initial
SPA screen for energy use (by default) associated with Electronic computer manufacturing18. The
user can click the 'Change Metric' button to instead display SPA results for greenhouse gas
emissions or other indicators of interest.

Figure 12-11: Initial SPA Interface Screen for Electronic Computer Manufacturing
(Showing Energy, by default)

The tool also provides in-line help when the cursor is moved over screen elements. For
example, if the metric is changed to 'Greenhouse Gases' and the cursor is hovered over any
of the elements in the top row (the top 5 sources), Figure 12-12 shows how the tool
summarizes why those values are shown, i.e., they are the top sectors across the supply chain
of producing computers that emit greenhouse gases.

18 Note: For consistency, a revision of this web tool example will be updated to show the same soft drink example described above.


Figure 12-12: SPA Interface for Greenhouse Gas Emissions, with Help Tip Displayed

Likewise, Figure 12-13 shows how moving the cursor over elements in the first row of the
structural pathway display will spell out the acronyms of the sectors chosen (which are
abbreviated to fit on the display). It also explains the concept of depth introduced above,
which is relevant to how deep in the supply chain the path is being displayed. At the first
level (or depth) of the visualization, all of the top-level activities that go into final assembly
of a computer are represented as boxes in the large horizontal rectangle of activities. On the
left hand side of this top level is always the sector chosen (in this case, computers), and
sorted to the right of that choice are the top sectors that result in emissions associated with
the highest level of the supply chain. These include computer storage devices,
semiconductors (shown below), etc.

Figure 12-13: SPA Interface Showing Detail of Elements in First Tier of Display

Each of the boxes in the lower (pathway) portion of the SPA tool shows the respective
percentage of effects from that sector in the overall path. Of all of the highest-level
processes in the path of a computer, 17% of the emissions come from computer storage
device manufacturing (CSDM), 16% from semiconductors (S&RDM), 13% from printed
circuit assembly (PCAAM), etc. But each of those activities itself also has an upstream
supply chain.


The red bar at the bottom of each of the grey process boxes denotes that it has further
upstream processes that may contribute effects in the overall SPA. By clicking on any
of these boxes in the top level, the visualization drills down to all of the activities associated
with that specific pathway. Figure 12-14 shows the SPA visual that would result from
choosing computers, and then subsequently choosing the semiconductor manufacturing
process at the first level.

Figure 12-14: SPA Interface with Second Tier Detail Shown

In this case, the largest emitting activity in the upstream supply chain of semiconductors is
power generation, showing 20% of the relevant upstream emissions from that process. All
other basic inorganic chemicals (AOBICM) would be next highest at 14%. Selecting the
power generation box at this level would again drill down to the next level of the SPA,
resulting in the display in Figure 12-15. At the third path level, the result is that almost all of
the emissions come from the generation of electric power itself, with a few smaller upstream
processes like coal mines, etc., to the right.


Figure 12-15: SPA Interface with Third Tier Detail Shown

Finally, the SPA visual can connect the top impacts with the results shown in the SPA
display at the bottom. By moving the mouse over any of the top 5 sectors on the top of the
screen, the SPA will highlight in the same color all of the processes in your selected
structural path that are associated with that sector. In Figure 12-16, all power generation
boxes are shown. The point of this feature is to visually reinforce the importance of these
top 5 sectors.

Figure 12-16: SPA Interface with top sources highlighted as nodes in levels

Note that the example drill-down of the SPA shown above is just one of thousands of
combinations that could be explored for any particular top-level sector. For example, instead
of choosing semiconductors in the first row, we could have elected to follow computer
storage devices, etc. The resulting visualizations would be different and are not shown here.
The discussion and demonstration above hopefully provide further motivation as to why
one might be interested in path specific results in LCA. In the next section, methods are
presented that incorporate additional data to update the structural path results available from
IO models. These methods represent the most detailed hybrid methods available to support
detailed questions at the level of specific nodes in the supply chain.

The Structural Path Exchange Method


While the results of an SPA may be inherently tied to sectoral average data and methods, the
path and node-specific information may be useful when considering the effects of alternative
design or procurement decisions on the relative impacts of product systems. For example,
we may use generic IO-based SPA results to develop a baseline for a product system, and
replace various baseline results with our own data or assumptions as they relate to alternative
designs or purchases.
At the simplest level, path exchange (PXC) is an advanced hybrid LCA method conceived
by Treloar and summarized theoretically in Lenzen (2009) that 'exchanges' path values from
a baseline SPA, e.g., from a national level model, with data related to alternate processes, i.e., that
differ from those modeled by average and aggregate IO data. The alternate data may be
from primary data, supplier information, assumptions, or locally-available data on specific
paths. The values exchanged may be only for a specific node, an entire subtree of the node,
or more. The main purpose of path exchange is to create an improved estimate of effects
across a network / supply chain, with the alternate exchanged data exploiting the
comprehensiveness of the SPA. The PXC method targets specific nodes in the supply chain.
Baboulet (2010) provides an excellent practical demonstration of path exchange in support
of decision making and policy setting for a university seeking to reduce its carbon footprint.
The steps of path exchange can thus be summarized as:
(1) Perform a general national IO-based SPA for a sector and effect of interest to
develop a baseline estimate.
(2) Identify paths where alternate process data would be used (e.g., paths with relatively
high values in the baseline, or where process values are significantly different than
averages), and where data is available to replace the baseline path values in the SPA.
For each of these exchanged paths, do the following steps:

- Develop a quantitative connection between the alternative process data and the
nature of the relationship of the chosen paths, including potential unit change
differences (e.g., mass to dollars).

- Normalize available process data to replace information in the default path.

- Update the path value.

(3) Re-calculate the SPA results with path exchanges and compare the new results to the
baseline SPA.
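Sketched in code, the three steps might look like the following. This is a minimal illustration in Python rather than the book's MATLAB resources; the baseline numbers are the soda path values from Figure 12-8, and the single exchange assumes zero-emission green power for the direct electricity path, the scenario worked through in Example 12-1.

```python
# A minimal sketch of the path exchange (PXC) workflow.
# Baseline path values (tons CO2e per $1 million of soft drink and ice
# manufacturing, SDM) are taken from the soda SPA in Figure 12-8.

# Step 1: baseline SPA results, as {path: (site value, LCI-at-site value)}
baseline = {
    "SDM": (36.0, 940.4),                               # path length 0
    "Power generation and supply > SDM": (79.6, 83.6),  # path length 1
    "Wet corn milling > SDM": (52.1, 119.4),            # path length 1
}
total_baseline = 940.4   # the LCI of the length-0 path is the system total

# Step 2: choose paths to exchange and supply alternate site values.
# Assumption: green power means zero direct emissions for this path.
exchanges = {"Power generation and supply > SDM": 0.0}

# Step 3: recalculate the total, applying each site-value change.
total_exchanged = total_baseline
for path, new_site in exchanges.items():
    old_site = baseline[path][0]
    total_exchanged += new_site - old_site

print(round(total_exchanged, 1))   # 860.8 tons CO2e, about an 8% reduction
```

In practice the worksheet in Figure 12-17 plays the role of the `exchanges` dictionary, recording the reason and share exchanged alongside each new value.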
As a motivating example, consider trying to estimate the carbon footprint of a formulation
of soda where renewable electricity has been purchased in key places of the supply chain
(instead of using national average grid electricity everywhere). You would (1) run a baseline
SPA on the soda manufacturing sector, (2) look for nodes in the SPA where electricity is
used and has a large impact, and use alternate data on renewables to derive alternate path values,
and (3) change the path values and recalculate and compare to the baseline SPA to see the
overall effect of the green power. Example 12-1 shows a brief example to inspire how PXC
works before we dive into more details and scenarios.


Example 12-1: Use the path exchange method to estimate the GHG emissions reductions of using
renewable electricity on site for soft drink and ice manufacturing (SDM) in the US.
Answer:
Consider that an estimate is needed for the total GHG emissions of a physical amount of
soda that, when converted to dollars, is $1 million (maybe this is one month's worth of physical
production from a facility). Further, the facility making the soda buys wind power.
The results from Figure 12-8 can be used as the baseline since they were generated for $1 million of soda
manufacturing. Figure 12-17 shows an abridged version of Figure 12-8, sorted by site CO2e emissions, that
includes only the top three paths and excludes several unused columns. Recall that the path in the third row
(path length 0) shows that the site emissions of the soda manufacturing facility are 36 tons CO2e and the
total LCI (for the whole supply chain below it, including the site) is 940 tons CO2e. Row 1 shows that the
path for electricity directly used by the soda factory (path length 1) represents 79.6 tons of CO2e (83.6
tons considering the whole supply chain below this node).
Baseline SPA Results                                        |  Path Exchanges
Length  Path Description                   GWP Site  LCI @  |  Reasons      Share       GWP Site  LCI @
                                           (tons)    Site   |               Exch'd      (tons)    Site
                                                     (tons) |                                     (tons)
1       Power generation and supply > SDM  79.6      83.6   |  Green Power  -100% site  0         4.0
1       Wet corn milling > SDM             52.1      119.4  |
0       SDM                                36.0      940.4  |
Total                                                940.4  |                                     860.8

Figure 12-17: Abridged PXC Worksheet (Top Three Site Values only) for $1 million of Soda Manufacturing

The right hand side of Figure 12-17 shows a worksheet for annotating path exchanges. If we assume that
the green power purchased by the soda factory has no direct greenhouse gas emissions, we note that 100%
of site emissions would be reduced, and record a path-exchanged value of 0 for the site CO2e emissions.
The upstream (LCI) portion of the renewable energy system may or may not have 0 emissions. The
existing upstream LCI value of 4 tons CO2e is for average US electricity, involving a weighted average for
the upstream emissions of extracting and delivering fuel to power plants, equipment manufacture, etc. If
we did not have specific information on the generation type and/or upstream CO2e value for our green
power, we could choose to maintain the 4 tons CO2e LCI value from the baseline SPA. Of course, if we
did have specific information, we could justify an alternate value, like an assumption of 0 in the LCI
category as well.
If we made no other changes to the baseline SPA results, our path-exchanged total system would be 860.8
(940.4 - 79.6) tons CO2e emissions; the extra significant figures are shown to ensure the math is clear. This
is a fairly significant effect for only one path exchange: an 8% reduction from the baseline SPA results.


The basic path exchange in Example 12-1 also lays the foundation for the general PXC
method. PXC does not manipulate the underlying A or R matrices of the IO model used for
the SPA, and thus does not make broad and consistent changes to the entire economic
system. Following Example 12-1, if PXC changed the R matrix for GHG emissions of
electricity (in this case, made them 0), then all purchases of electricity by all facilities in the
entire economy would be exchanged. Such a change would, of course, overestimate the effect
of the decision by a single facility. Instead, PXC adjusts specific uses of A or R matrix values
for nodes of a specific path (e.g., those used in Equation 12-2). This is a benefit of SPA and
path exchanges: we can target very specific places in the network.
Equation 12-2 helps to motivate that there are only two general kinds of exchanges: to the
transaction coefficients (e.g., Aij) or to the intensity coefficients (Ri) underlying the SPA results
used to generate the site and LCI values for specific paths. Transaction coefficient-based
exchanges are those rooted in a change in the level of purchases made. If we remember
what the values in an IO Use Table look like that eventually become elements of an A
matrix, then we can consider that the 'production recipe' for a particular path can be
presented as a value in units like cents per dollar. In the drill-down generated by an SPA, we
might be able to assess that the economic value of a particular node is 10 cents per dollar. If
we make a decision to change consumption of this particular node in a future design or
decision, then we would edit the underlying 10 cents per dollar value to something more or
less than 10. Buying 50% less would change this transaction coefficient to 5 cents per dollar.
On the other hand, changing intensity coefficients is done to represent different decisions or
opportunities where the degree of effect is different. The waste example at the beginning of
the chapter had intensities of 50 and 5 grams per $billion. Again, a path exchange could
increase or decrease these values.
Finally, an exchange can involve both transaction and intensity changes. Regardless of the
type of exchange, and depending on the depth of the path you are trying to exchange, you
may need to perform significant conversions so that you can determine the appropriate
coefficients to use in the exchange. This could take the form of estimation problems (see
Chapter 2), dealing with several physical to monetary (or vice versa) unit conversions, or
other issues.
In the end, what you will be exchanging is the path value from the baseline to the exchanged
value (e.g., from 79.6 to 0 in Figure 12-17). You may be able to determine the appropriate
exchanged path value without describing all of the transaction or intensity conversions
(Example 12-1 exemplifies this in showing the exchange to 0).
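To make the two kinds of exchange concrete, here is a minimal Python sketch of a single length-1 path value. The `path_site_value` helper is hypothetical, and the numbers reuse the chapter's waste example: a transaction coefficient A11 = 0.15, $100 billion of final demand in Sector 1, and an intensity of 50 waste units per $billion.

```python
# Sketch of the two kinds of path exchanges on a single length-1 path.
# A path's site value is the intensity of the final node, times the
# transaction coefficients along the chain, times final demand.

def path_site_value(intensity, transaction_coeffs, demand):
    """Site value of a path: intensity x product of A coefficients x demand."""
    value = intensity * demand
    for a in transaction_coeffs:
        value *= a
    return value

base = path_site_value(50, [0.15], 100)          # 750 waste units

# Transaction-based exchange: buy 50% less from this node (A11: 0.15 -> 0.075)
after_trans = path_site_value(50, [0.075], 100)

# Intensity-based exchange: same purchases, but a process emitting 50%
# less per dollar (R1: 50 -> 25)
after_intensity = path_site_value(25, [0.15], 100)

# Both exchanges halve this path's site value (to 375), but they represent
# different real-world decisions.
```

The numerical result is the same here, which is the point of tracking the reason for each exchange in the worksheet: identical path values can encode very different decisions.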
Building on the prior examples in the chapter about soda manufacturing, Example 12-2
shows how to use SPA to consider the effects of reducing the amount of corn syrup used in
soda, in support of a more natural product.

Example 12-2: Use the path exchange method to estimate the GHG emissions reductions of using 50%
less corn syrup on site for $1 million of soft drink and ice manufacturing (SDM) in the US.
Answer:
Corn syrup (e.g., high fructose corn syrup) is one of the primary ingredients of soda and
is the product of wet corn milling processes. The results from Figure 12-8 can again be used as the SPA
baseline. Figure 12-18 shows our PXC worksheet, which includes separate columns to track transaction or
intensity coefficient changes. If we assume that the second row of the table represents all of our direct
purchases of corn syrup, then the values we choose to exchange will fully represent the effect. Behind the
scenes, this would be equivalent to reducing our Aij cell value by 50%. A 50% reduction would reduce the
GHG site and LCI values by 50%. Of course, this would have the same effect as finding an
alternative corn syrup supplier with 50% lower site and LCI emissions.
Baseline SPA Results                                        |  Path Exchanges
Length  Path Description                   GWP Site  LCI @  |  Reasons       Trans Share     Intensity Share  GWP Site  LCI @
                                           (tons)    Site   |                Exch'd          Exch'd           (tons)    Site
                                                     (tons) |                                                           (tons)
1       Power generation and supply > SDM  79.6      83.6   |
1       Wet corn milling > SDM             52.1      119.4  |  Reduce syrup  -50% site, LCI                   26.0      59.7
0       SDM                                36.0      940.4  |
Total                                                940.4  |                                                           880.7

Figure 12-18: Abridged PXC Worksheet (Top Three Site Values Only) for $1 million of Soda Manufacturing

If we made no other changes to the baseline SPA results, our path-exchanged total system would be 880.7
(940.4 - 59.7) tons CO2e emissions; the extra significant figures are shown to ensure the math is clear. This
is a fairly significant effect for only one path exchange: a 6% reduction from the baseline SPA results.
The same result occurs if the syrup is purchased from a supplier able to make the same amount of syrup with
50% lower emissions. The exchange would instead be entered in the intensity share exchange column.

Example 12-2 shows a GHG reduction comparable to shifting our soda factory to 100%
green power. An important part of a decision to pursue one or the other alternative would
be the relative costs (not included here). This further demonstrates why SPA and PXC are
such powerful tools: the ability to do these kinds of 'what if' analyses to compare alternative
strategies to reduce impact.
When performing PXC, it is important to be careful in tracking the site and LCI values.
While the examples above show both site and LCI values for the abridged baseline SPA for
soda, recall that the full SPA has about 1,100 paths, including separate row entries for nodes
upstream of some of the nodes with large LCI values. Tracking and managing effects in
upstream nodes may be more difficult than these examples imply. In Examples 12-1 and 12-2,
this was done by showing the resulting exchanged values for site and LCI in the same row.
It may be easier to track site and LCI changes separately. For example, Figure 12-8 shows
the top 15 paths of the soda SPA. Row 2 shows the path 'Wet corn milling > SDM', and
Row 6 shows the path 'Grain farming > wet corn milling > SDM'. The LCI value for row 2
is 119 tons, while the site value for row 6 (which falls under the tree of the node in row 2) is
30 tons. Row 6 represents a significant share of row 2's LCI value. When Example 12-2
reduced the purchase of syrup by 50%, we also reduced the LCI value by the same amount,
which makes sense given the transactional nature of the choice. However, there may be
other path exchanges where we want to independently adjust these connected site and LCI
values (i.e., separately edit site values in the PXC worksheet for rows 2 and 6). Example 12-3
shows how to represent multiple, offsetting exchanges via PXC.
Example 12-3: Use the path exchange method to estimate the GHG emissions reductions of shifting
50% of direct truck delivery of soda to rail.
Answer:
Results from Figure 12-8 can again be used as the SPA baseline. Figure 12-19 shows
our PXC worksheet for path length 0 (the entire LCI of the system), and the paths of length 1 for truck and
rail transportation (the latter not previously shown in Figure 12-8 but available in the supplemental
resources).
To reduce 50% of direct deliveries by truck, we exchange a value in Row 2 of the worksheet. This 50%
transactional reduction would reduce the site and LCI values, or about 10.9 tons total. The offsetting
increase in rail may not be simple, as the baseline amount of soda shipped by rail is not given, and the
underlying physical units are not known (e.g., tons/$). Physical or monetary unit factors for the two
transport modes are needed to adjust the rail value. If we assume that truck and rail emit 0.17 kg and 0.1
kg of CO2 per ton-mile, respectively, then the original 15 tons of CO2 from delivery by truck (row 2)
equates to (15,000 kg / 0.17 kg CO2 per ton-mile), or 88,200 ton-miles. A 50% diversion is 44,100 ton-miles,
which at 0.1 kg CO2 per ton-mile of rail emits 4.4 more tons CO2 than what is already shipped by
rail in the baseline. Relative to the SPA site baseline of 2.5 tons (row 3), this is a factor of 2.76 (a 176%)
increase, which we could apply to both the site and LCI values for rail.
Baseline SPA Results                                 |  Path Exchanges
Length  Path Description            GWP Site  LCI @  |  Reasons       Trans Share      Intensity Share  GWP Site  LCI @
                                    (tons)    Site   |                Exch'd           Exch'd           (tons)    Site
                                              (tons) |                                                           (tons)
0       SDM                         36.0      940.4  |
1       Truck Transportation > SDM  15.0      21.7   |  Divert truck  -50% site, LCI                    7.5       10.9
1       Rail Transportation > SDM   2.5       3.2    |  to rail       +176% site, LCI                   6.9       8.8
Total                                         940.4  |                                                           935.1

Figure 12-19: Abridged PXC Worksheet for $1 million of Soda Manufacturing

If we made no other exchanges, our path-exchanged total system would have 935.1 tons CO2e emissions,
a fairly insignificant effect for what is likely a large amount of logistical planning. From an economic
perspective, though, it is likely much cheaper.
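The unit conversions in Example 12-3 can be checked with a few lines of code. This is a sketch in Python; the 0.17 and 0.10 kg CO2 per ton-mile emission factors are the assumptions stated in the example, and the small difference from the 935.1 tons in Figure 12-19 is rounding in the intermediate values.

```python
# Reproducing the truck-to-rail diversion arithmetic in Example 12-3.

truck_site = 15.0   # baseline site CO2 of 'Truck Transportation > SDM' (tons)
rail_site = 2.5     # baseline site CO2 of 'Rail Transportation > SDM' (tons)
truck_lci = 21.7    # baseline LCI of the truck path (tons)
rail_lci = 3.2      # baseline LCI of the rail path (tons)
truck_ef, rail_ef = 0.17, 0.10   # assumed kg CO2 per ton-mile

ton_miles = truck_site * 1000 / truck_ef    # ~88,200 ton-miles by truck
diverted = 0.5 * ton_miles                  # half shifted to rail
added_rail = diverted * rail_ef / 1000      # ~4.4 extra tons CO2 by rail

factor = (rail_site + added_rail) / rail_site   # ~2.76, a 176% increase

# Apply both exchanges to the whole-system LCI of 940.4 tons: halve the
# truck path's LCI and scale the rail path's LCI by the same factor.
total = 940.4 - 0.5 * truck_lci + (factor - 1) * rail_lci
print(round(total, 1))   # ~935.2, matching Figure 12-19's 935.1 after rounding
```

This kind of script also makes it easy to test sensitivity to the assumed emission factors, which drive the (small) net benefit here.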

Software and code exist to help with PXC activities, for example the University of Sydney's
BottomLine. These provide detailed interfaces showing path summaries, transaction and
intensity coefficients, etc., to be edited for path exchanges. Without such software, PXC
must be done with exchange worksheets (potentially maintained in Microsoft Excel) as in Figure 12-17.
Exchanges will sometimes involve more than one path. A substitution in a design may
involve a reduction of transaction or intensity from one path and an increase in another.
For example, if our company elected not to use direct truck transportation for soda, we
could not reasonably deliver our product, and could not capture the full effects of such an
exchange by zeroing out truck transportation. We would need to increase the use of some
other mode of transportation (e.g., rail, which was not shown in Figure 12-8).
While this chapter has focused on IO-based structural path analysis, network analysis of
process matrix models is analogous. The same matrix definitions and techniques are used,
and the main inputs needed are the raw matrices used for the process model. See the
Advanced Material for additional help on network analysis of process matrices.

Chapter Summary
Structural Path Analysis (SPA) is a rigorous quantitative method that provides a way to
disaggregate IO-LCA results to provide insights that are otherwise not possible. These
disaggregated results can be very useful in terms of helping to set our study design
parameters to ensure a high quality result. Path exchange is a hybrid method that allows
replacement of results from specific paths in an SPA based on available monetary or physical
data. These advanced hot spot analysis methods provide significant power, but remain
critically dependent on our data sources.

References for this Chapter


Baboulet, O., and Lenzen, M. Evaluating the environmental performance of a university.
Journal of Cleaner Production, Volume 18, Issue 12, August 2010, Pages 1134-1141. DOI:
http://dx.doi.org/10.1016/j.jclepro.2010.04.006
Crama, Y.; Defourny, J.; Gazon, J. Structural decomposition of multipliers in input-output or
social accounting matrix analysis. Econ. Appl. 1984, 37 (1), 215-222.
Defourny, J.; Thorbecke, E. Structural path analysis and multiplier decomposition within a
social accounting matrix framework. Econ. J. 1984, 94 (373), 111-136.

Lenzen, M. and Crawford, R. The Path Exchange Method for Hybrid LCA. Environ. Sci.
Technol. 2009, 43, 8251-8256.
Peters, G. P. and Hertwich, E. G. Structural analysis of international trade: Environmental
impacts of Norway. Economic Systems Research 2006, 18, 155-181.
Treloar, G. Extracting embodied energy paths from input-output tables: towards an
input-output-based hybrid energy analysis method. Economic Systems Research 1997, 9 (4), 375-391.

Homework Questions for Chapter 12


For questions 1-4, use the Microsoft Excel file 'SPA_Automobiles_1million_GHG.xls'
posted in the Homework files folder for Chapter 12 to answer the questions. This file
shows the results of a baseline SPA for $1 million of Automobile manufacturing in the 2002
EIO-LCA producer model with respect to total GHG emissions (units of tons CO2e).
1. Draw a hierarchical tree (either by hand or by modifying the posted PowerPoint
template for soft drinks) for the top 10 paths that is similar in layout to Figure 12-9.
2. Find the percent of the total emissions in the system specified by the path analysis,
and describe in words what the path analysis results tell you about the GHG hot
spots in the supply chain for automobiles.
3. Use the path exchange method to estimate the net CO2e effects of each of the
following adjustments to the 2002 baseline SPA. Without doing a cost analysis,
discuss the relative feasibility of each of the alternatives.
a. Use renewable electricity at automobile assembly factory
b. Use renewable electricity at all factories producing motor vehicle parts
c. Reduce use of carbon-based fuels by 50% at the automobile assembly factory
(assume all site GHG emissions are from use of fuels)
d. As done in Chapter 3, consider substituting aluminum (top path 17) for steel
(top path 1) in 50% of all motor vehicle parts. Assume $17,000 of steel and
$1,000 of aluminum per $million in parts currently, and that prices are $450
per ton of steel and $1,000 per ton of aluminum. Aluminum can substitute
for steel at an 80% rate.
4. Discuss the limitations of using SPA and path exchanges to model the life cycle of an
automobile.

Advanced Material for Chapter 12 Section 1 - MATLAB Code for SPA


The MATLAB code used to generate structural path analysis (SPA) results throughout
Chapter 12 is available in the Web-based resources for Chapter 12 on the lcatextbook.com
website (SPAEIO.zip). The core code was originally developed by Glen Peters and is
provided with his permission. Use of alternative SPA tools or code could lead to different
path analysis results than those presented in the chapter. To use the code, unzip the file into
a local directory.
The specific .m file in SPAEIO.zip that is used to generate the results for the waste example
in Chapter 12 is called RunSPAChap12Waste.m, and uses the code below to generate the
values for Figure 12-5 and Figure 12-6:
clear all
F = [1 1];                  % for econ SPA paths ('1' values just return L matrix)
%F = [50 5];                % for waste paths
A = [0.15 0.25; 0.2 0.05];
filename = 'chap12example';
% code to make default sector names if needed (comment out if not)
[rows, cols] = size(A);
sectornames = cell(rows,1);
for i = 1:rows
    sectornames{i} = ['Sector' num2str(i)];
end
L = inv([eye(2)-A]);
F_total = F*L;
y = zeros(2,1);
y(1,1) = 100;               % The $100 billion of final demand
percent = 0.01;             % 'cut-off' of upstream LCI (as % of total emissions)
T_max = 4;                  % Max tiers to search
thresh_banner = 1;          % this prints the T_max, percent, etc. params in the file
                            % (change to 0 or comment it out if not needed)
% this last command runs a function in another .m file in the zip file
% parameters of the function are the data matrices and threshold parameters
SPAEIO02(F, A, y, F_total, T_max, percent, filename, sectornames, thresh_banner);

The rest of the .m files in the provided ZIP folder build the hierarchical tree, sort it, traverse
it across the various paths, and return results, printing only those that meet the threshold
criteria (e.g., if T_max=4, paths at tier 4 or deeper are not output). The other .m
files should not generally need to be modified19. To use the code, edit the
matrices and parameters in RunSPAChap12Waste.m, and then run it in MATLAB. It will
generate a CSV text file (named chap12example here) with the economic path results
below, whose intermediate calculations are summarized in Figure 12-5. You may want to
check the math for some of the paths below and ensure you see which nodes they
correspond to.
Paths = 15, T_max = 4, percent = 0.01000, Total Effects = 1.518152e+02
1:  0:100.0000:151.8152 : 1 ; Sector1
2:  1:20.0000:29.0429 : 1 ; Sector1 : 2 ; Sector2
3:  1:15.0000:22.7723 : 1 ; Sector1 : 1 ; Sector1
4:  2:5.0000:7.5908 : 1 ; Sector1 : 2 ; Sector2 : 1 ; Sector1
5:  2:3.0000:4.3564 : 1 ; Sector1 : 1 ; Sector1 : 2 ; Sector2
6:  2:2.2500:3.4158 : 1 ; Sector1 : 1 ; Sector1 : 1 ; Sector1
7:  3:1.0000:1.4521 : 1 ; Sector1 : 2 ; Sector2 : 1 ; Sector1 : 2 ; Sector2
8:  2:1.0000:1.4521 : 1 ; Sector1 : 2 ; Sector2 : 2 ; Sector2
9:  3:0.7500:1.1386 : 1 ; Sector1 : 2 ; Sector2 : 1 ; Sector1 : 1 ; Sector1
10: 3:0.7500:1.1386 : 1 ; Sector1 : 1 ; Sector1 : 2 ; Sector2 : 1 ; Sector1
11: 3:0.4500:0.6535 : 1 ; Sector1 : 1 ; Sector1 : 1 ; Sector1 : 2 ; Sector2
12: 3:0.3375:0.5124 : 1 ; Sector1 : 1 ; Sector1 : 1 ; Sector1 : 1 ; Sector1
13: 3:0.2500:0.3795 : 1 ; Sector1 : 2 ; Sector2 : 2 ; Sector2 : 1 ; Sector1
14: 3:0.1500:0.2178 : 1 ; Sector1 : 1 ; Sector1 : 2 ; Sector2 : 2 ; Sector2
15: 3:0.0500:0.0726 : 1 ; Sector1 : 2 ; Sector2 : 2 ; Sector2 : 2 ; Sector2

The format of this output is as follows: the first row displays all of the threshold parameters,
total paths given the thresholds, and the total effects - in this case, economic results in
billions. The rows below this row show results for each path (sorted by the site effect value):
the path number (here 1-15), the path length, the site and LCI effects (here, $billions), then
the ordered path, e.g., path #1 is the top level purchases from sector 1 of the final demand,
and path #15 is the purchases in the path Sector 2 > Sector 2 > Sector 2 > Sector 1). The
Print_sorted_EIO2.m optionally displays the threshold criteria in the output file, printing of the sector names,
and number of significant digits to display. These could all be edited if desired.
19

Life Cycle Assessment: Quantitative Approaches for Decisions That Matter lcatextbook.com

Chapter 12: Advanced Hotspot and Path Analysis

369

The extraneous digits in the site and LCI values come from the SPA code (which, by default, prints 4 post-decimal digits). The CSV text file results generated by MATLAB can be imported into Microsoft Excel by opening the text file in Excel and using the import wizard with colons and semicolons as delimiters (a colon needs to be typed into the 'Other' field).
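If a spreadsheet is not handy, the same split can be done programmatically. Below is a minimal Python sketch (illustrative only, not part of the book's MATLAB code) that parses one output row on both delimiters; the sample line follows the output format described above.

```python
import re

# One data row of the SPA output: length, site effect, LCI effect,
# then the ordered path (sector numbers and names)
line = "1:20.0000:29.0429 : 1 ; Sector1 : 2 ; Sector2"

# Split on either delimiter, then strip stray whitespace
fields = [f.strip() for f in re.split("[:;]", line)]

length, site, lci = int(fields[0]), float(fields[1]), float(fields[2])
path = fields[3:]   # alternating sector numbers and sector names
```

After parsing, `length` is 1, `site` is 20.0, `lci` is 29.0429, and `path` holds `['1', 'Sector1', '2', 'Sector2']`.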
The economic SPA results represent 99% of all economic effects throughout the supply chain in only 15 paths. Since the variable percent is 0.01, the SPA code searches for paths up to T_max tiers whose LCI values are greater than 0.01% of $151.8 billion, or $0.0152 billion. If a 16th path had been identified, it was ignored because its LCI value was below that amount (but path #15 was not ignored).
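The mechanics behind this listing can be sketched in a few lines of code. The snippet below is an illustrative Python/NumPy re-implementation (the book's SPA code is in MATLAB); the A matrix and final demand y are the 2-sector values implied by the listed results, inferred here rather than stated in this excerpt.

```python
import numpy as np
from itertools import product

# 2-sector example; A and y are inferred from the listed results
# (they are defined in RunSPAChap12Waste.m, not shown in this excerpt)
A = np.array([[0.15, 0.25],
              [0.20, 0.05]])   # direct requirements matrix
F = np.array([1.0, 1.0])       # economic "effect" per dollar of output
y = np.array([100.0, 0.0])     # $100 billion of Sector 1 final demand

L = np.linalg.inv(np.eye(2) - A)
F_total = F @ L                # total (LCI) multipliers by sector
total = F_total @ y            # total effects: ~151.8152
cutoff = (0.01 / 100) * total  # percent = 0.01 -> 0.01% of total effects

paths = []
for length in range(4):                        # T_max = 4: path lengths 0..3
    for seq in product(range(2), repeat=length + 1):
        node = y[seq[0]]                       # node value: y times coefficients
        for up, down in zip(seq[1:], seq[:-1]):
            node *= A[up, down]                # purchase from 'up' to supply 'down'
        site = F[seq[-1]] * node               # site (direct) effect of the node
        lci = F_total[seq[-1]] * node          # node effect plus all its upstream
        if lci > cutoff:                       # keep only paths above the threshold
            paths.append((site, lci, seq))

paths.sort(key=lambda p: -p[0])                # sort by site effect value
print(f"Paths = {len(paths)}, Total Effects = {total:.4f}")
```

Under these assumed inputs, the enumeration keeps exactly 15 paths, with the length-0 path ($100 billion site value) first and the Sector 1 > Sector 2 purchase ($20 billion site, $29.04 billion LCI) second.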
If you comment out the second line of code in RunSPAChap12Waste.m (F = [1 1];), uncomment the third line (F = [50 5];), and re-run the .m file, it will instead return the waste path results (as summarized in Figure 12-6):
Paths = 15, T_max = 4, percent = 0.01000, Total Effects = 6.402640e+03
1:
0:5000.0000:6402.6403 : 1 ; Sector1
2:
1:750.0000:960.3960 : 1 ; Sector1 : 1 ; Sector1
3:
2:250.0000:320.1320 : 1 ; Sector1 : 2 ; Sector2 : 1 ; Sector1
4:
1:100.0000:442.2442 : 1 ; Sector1 : 2 ; Sector2
5:
2:112.5000:144.0594 : 1 ; Sector1 : 1 ; Sector1 : 1 ; Sector1
6:
2:15.0000:66.3366 : 1 ; Sector1 : 1 ; Sector1 : 2 ; Sector2
7:
3:37.5000:48.0198 : 1 ; Sector1 : 1 ; Sector1 : 2 ; Sector2 : 1 ; Sector1
8:
3:37.5000:48.0198 : 1 ; Sector1 : 2 ; Sector2 : 1 ; Sector1 : 1 ; Sector1
9:
3:16.8750:21.6089 : 1 ; Sector1 : 1 ; Sector1 : 1 ; Sector1 : 1 ; Sector1
10:
3:5.0000:22.1122 : 1 ; Sector1 : 2 ; Sector2 : 1 ; Sector1 : 2 ; Sector2
11:
2:5.0000:22.1122 : 1 ; Sector1 : 2 ; Sector2 : 2 ; Sector2
12:
3:12.5000:16.0066 : 1 ; Sector1 : 2 ; Sector2 : 2 ; Sector2 : 1 ; Sector1
13:
3:2.2500:9.9505 : 1 ; Sector1 : 1 ; Sector1 : 1 ; Sector1 : 2 ; Sector2
14:
3:0.7500:3.3168 : 1 ; Sector1 : 1 ; Sector1 : 2 ; Sector2 : 2 ; Sector2
15:
3:0.2500:1.1056 : 1 ; Sector1 : 2 ; Sector2 : 2 ; Sector2 : 2 ; Sector2

Note that the path numbers in the economic and waste path results are different, as they are sorted based on total economic and waste effects, respectively. Path #1 in both happens to be the path of length 0, but paths #2-15 do not refer to the same paths. For example, economic path #2 (Sector 2 > Sector 1) corresponds to waste path #4.
Connecting back to the chapter discussion on the coefficients to be changed in the path exchange method, the economic and waste path results above provide the node values, which are products of the underlying coefficients (Equations 12-1 and 12-2). The path results do not show the individual coefficients themselves. For example, the node value for economic path #3 is $15 billion (second row of Figure 12-5), and the effect node value for the corresponding waste path #2 is 750 g (second row of Figure 12-6).
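As a quick numeric check of that relationship, one node value and its site effects can be recomputed directly. The sketch below is illustrative Python (the book's code is MATLAB), and the coefficient a11 = 0.15 and final demand y1 = $100 billion are values inferred from the listings above, not stated in this excerpt.

```python
import numpy as np

# Coefficients inferred from the 2-sector path listings (assumptions)
A = np.array([[0.15, 0.25],
              [0.20, 0.05]])
y1 = 100.0                       # $B of Sector 1 final demand
F_econ = np.array([1.0, 1.0])    # F = [1 1], economic run
F_waste = np.array([50.0, 5.0])  # F = [50 5], waste run

node = A[0, 0] * y1              # path Sector1 : Sector1 -> $15B node value
econ_site = F_econ[0] * node     # economic path #3 site value: 15
waste_site = F_waste[0] * node   # waste path #2 site value: 750
```

The same $15 billion node value thus produces both the economic site value of 15 and the waste site value of 750 g, which is why the path exchange method operates on the coefficients rather than on the effect values.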

Using the SPA Code for Other Models


In terms of edits to the code shown in RunSPAChap12Waste.m, the y vector and/or the F, A, and L matrix assignments can be modified. For example, to use the SPA code in conjunction with the 2002 US benchmark EIO-LCA model (described in the Advanced Material for Chapter 8, Section 5), a load command can be added to load the .mat file containing the EIO-LCA F, A, and L matrices, and the lines of code defining F, A, and L can be edited to point to the specific matrices in that model. RunEIO02SPA.m, also included in the SPAEIO.zip file, does a path analysis of sector #70, Soft drink and ice manufacturing, in the 2002 EIO-LCA producer model (code different from RunSPAChap12Waste.m is in bold):
clear all
load('Web-030613/EIO02.mat')      % relative path to 2002 EIOLCA .mat file
F = EIvect(7,:);                  % matrix of energy & GHG results, 7 is total GHGs
A = A02ic;                        % from the industry by commodity matrix
L = L02ic;                        % industry by commodity total reqs
F_total = F*L;
y = zeros(428,1);
y(70,1) = 1;                      % sector 70 is soft drink mfg (soda), $1 million
percent = 0.01;                   % 'cut-off' of upstream LCI (% of total effects)
T_max = 4;                        % Max tiers to search
filename = 'softdrinks_1million';
sectornames = EIOsecnames;        % sector names in the external .mat file
thresh_banner = 1;                % this prints the T_max, percent, etc. params in the file
                                  % change to 0 or comment it out if not needed

% this last command runs two other .m files in the zip file
% the parameters on the right hand side are the threshold parameters
SPAEIO02(F, A, y, F_total, T_max, percent, filename, sectornames, thresh_banner);

The load command looks for the path to the EIO02.mat file, which in the code above is in a directory within the same directory as the .m file; you would need to edit this to point to wherever you put it. The next few lines of code set the inputs to various components of the EIO02.mat file. The F vector is set to row 7 of the matrix EIvect in the EIO02.mat file, which, as stated in Chapter 8, contains all of the energy and GHG multipliers for the R matrix (row 7 holds the total GHG emissions factors across the 428 sectors). A points to A02ic (the 2002 industry by commodity direct requirements matrix), L points to the already-inverted L02ic, sectornames is set to the vector of sector names (EIOsecnames), and y has a 1 in row 70 and 0's in all other 427 rows. The RunEIO02SPA.m code is run the same way as the RunSPAChap12Waste.m code, and yields the following excerpted results (only the first 10 paths are shown; these were used to make Figure 12-8):
Paths = 1095, T_max = 4, percent = 0.01000, Total Effects = 9.403932e+02
1:
1:79.6233:83.5978: 70 ; SDM : 31 ; Power generation and supply
2:
1:52.0585:119.4311: 70 ; SDM : 44 ; Wet corn milling
3:
0:36.0017:940.3932: 70 ; SDM
4:
2:32.4451:48.0935: 70 ; SDM : 174 ; Aluminum product manufacturing from purchased aluminum : 173 ; Alumina refining and primary aluminum production
5:
2:32.0096:33.6074: 70 ; SDM : 148 ; Plastics bottle manufacturing : 31 ; Power generation and supply
6:
2:29.7694:42.8620: 70 ; SDM : 44 ; Wet corn milling : 2 ; Grain farming
7:
1:19.9944:166.0256: 70 ; SDM : 174 ; Aluminum product manufacturing from purchased aluminum
8:
1:15.1703:22.8200: 70 ; SDM : 11 ; Milk Production
9:
1:14.9953:21.7487: 70 ; SDM : 324 ; Truck transportation
10:
2:14.7441:15.4801: 70 ; SDM : 174 ; Aluminum product manufacturing from purchased aluminum : 31 ; Power generation and supply
Since the format was discussed above, we note only that the first line of results shows that the total LCI for $1 million of soft drink manufacturing is 940.4 tons CO2e (as shown in Figure 12-7 and Figure 12-8). The other 10 rows show the path-specific results, which were reformatted and rounded to one decimal digit for Figure 12-8. Soft drink and ice manufacturing has again been abbreviated SDM to conserve space. Summing all of the site values in the 1,095 paths would give a value of 684 tons CO2e, which is 73% of the total 940.4 tons CO2e. As motivated earlier, this is an expected outcome when using threshold parameters to limit the runtime of the code and the number of paths produced. Increasing T_max and/or reducing the percent parameter in the SPA code will always increase the number of paths and the total site emissions in the paths, and consequently the percentage coverage of the SPA compared to the total will increase.
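This coverage behavior can be verified on the small 2-sector waste example. The sketch below is illustrative Python/NumPy (the book's code is MATLAB), with the A, F, and y values inferred from the chapter example rather than stated in this excerpt; it holds percent at 0.01 while varying T_max and sums the site effects of the paths that pass the threshold.

```python
import numpy as np
from itertools import product

# 2-sector waste model; A, F, and y inferred from the chapter example
A = np.array([[0.15, 0.25], [0.20, 0.05]])
F = np.array([50.0, 5.0])                     # waste factors, g per $
y = np.array([100.0, 0.0])
F_total = F @ np.linalg.inv(np.eye(2) - A)
total = F_total @ y                           # ~6402.64 g
cutoff = (0.01 / 100) * total                 # percent = 0.01

coverage = {}
for T_max in (2, 3, 4, 5):
    site_sum = 0.0
    for length in range(T_max):               # path lengths 0 .. T_max-1
        for seq in product(range(2), repeat=length + 1):
            node = y[seq[0]]
            for up, down in zip(seq[1:], seq[:-1]):
                node *= A[up, down]
            if F_total[seq[-1]] * node > cutoff:   # threshold test on path LCI
                site_sum += F[seq[-1]] * node      # accumulate site effects found
    coverage[T_max] = site_sum
    print(f"T_max = {T_max}: {site_sum:.1f} g ({100 * site_sum / total:.1f}% of total)")
```

Under these assumed inputs, coverage rises monotonically with T_max (about 5,850 g at T_max = 2 versus about 6,345 g of the 6,402.6 g total at T_max = 4), illustrating the claim above.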
Homework Questions for this Section:
1. Using the provided RunSPAChap12Waste.m file for the 2-sector economy example from
the chapter, fill in the table below for the total waste effects across paths as T_max ranges
from 2 to 5 and as percent ranges across 0.01, 0.1, and 1. The total waste effect found above
is already entered into the table. Describe in words what the results in the table tell you
about the SPA of this 2-sector economy.
                     percent
T_max        0.01        0.1         1
  2
  3
  4        6,403 g
  5

2. Perform the same analysis as in question 1, but using the RunEIO02SPA.m file and the
same soft drink and ice manufacturing sector, and tracking total CO2e emissions. Describe
in words what the results in the table tell you about the SPA for this sector.
                     percent
T_max        0.01        0.1         1
  2
  3
  4         940.4
  5

3. Change the provided RunEIO02SPA.m code so that it uses the 2002 commodity by
commodity A and L matrices (keeping all other values the same). How different are the
GHG results as compared to the 940.4 tons CO2e with the industry by commodity values?
Discuss why they change.
4. Use the 2002 EIO-LCA producer model to create an expanded SPA for $1 million of automobiles that includes lifetime gasoline purchases with respect to total GHG emissions. Assume year 2002 cars cost $25,000, have a fuel economy of 25 mpg, and are driven 100,000 miles; assume the 2002 gasoline price was $1.30 per gallon. Discuss how this SPA differs from the SPA for $1 million of automobile manufacturing only. Also discuss the limitations of this model for the life cycle emissions of gasoline-powered vehicles. (Hint: all of the MATLAB SPA code discussed so far has entered just a single value into y.)
5. Modify the provided MATLAB code to use the US LCI process matrix (from Chapter 9)
and present the processes with the top 10 site LCI values for fossil CO2.

Reporting, Peer Review, Claims, etc.


Role of chair, members
Typical questions asked of a peer reviewer
How to do the review, what to look at, etc.
ISO 14071
From LCA framework slides:
Goal: (1) Intended application, (2) audience
Are the intended applications or audience relevant (or the most relevant possible)? Is it missing the point?
Does the chosen goal/audience preclude use by others?
Example from an actual study: "..The findings of the study are intended to be used as a basis for educated external communication and marketing aimed at the American Christmas tree consumer."
Is it useful for supporting actual purchasing decisions? Is it useful for non-consumers (retailers)?
When reviewing studies, who did the work, who provided data, and who paid for it are critically important.
Everyone has biases and conflicts of interest; the question is whether they affect the work done.
Always useful to be skeptical and force the study to convince you.
ACTA is a trade association of artificial tree manufacturers.


Haven't shown final results, but what do you predict they show?
Don't panic. Sometimes sponsors want a single credible study out there when the result is well known (i.e., maybe artificial is better!).
Not required to stipulate any of this in the goal statement.
(3) Purpose, (4) whether it will make comparisons (a.k.a. "why we did it and what will we do with it?")
Can we expect the audience to use it to make these decisions? (I don't choose Xmas trees based on this.)
Example from an actual study: "The goal of this LCA is to understand the environmental impacts of both the most common artificial Christmas tree and the most common natural Christmas tree.."
Inevitably, results will be shortened/generalized in secondary sources (e.g., "artificial trees are better").
Are those the study authors' fault? What could be done to ensure the best possible attribution of results?


Glossary?
