An Unprecedented Transformation of the Geospatial Industry

This month's interviewee, Geoff Zeiss, is a man of broad geospatial and IT knowledge with a great talent for transforming it into clear and inspiring thoughts. As a chief technology guru at Autodesk, his thoughts and ideas reach far beyond the limits of his company. In the interview we discussed several critical trends, giving you an excellent first-hand insight into present and future developments in the geospatial industry.

By Joc Triglav

July/August 2007
Interview

Geoff Zeiss

The developments of the last few years show that spatial is not as special as it used to be. Are we finally reaching the long-expected point of entering the mainstream?

In the last two years the geospatial industry has undergone an unprecedented transformation. I see this as a point of inflection for several reasons, one of which is the widespread recognition that geospatial is no longer special because geospatial has joined the IT mainstream. What this means in reality is that geospatial has become one of the core enabling technologies that is available to everyone in IT, not just to GIS specialists. An important example is relational database management systems (RDBMSs). RDBMSs used to be restricted to numeric and text data types. Now virtually every RDBMS, including Oracle, supports spatial data types. This trend also applies to architectural and engineering design, where the trend is toward designing buildings and infrastructure in their geographic environment. A number that has been bandied about in the geospatial industry for many years is that 80% of IT applications could benefit from spatial enabling. I think the mass market geospatial phenomenon has confirmed the geospatial industry's 80% estimate and illustrated the tremendous benefits of integrating spatial data and functionality with general IT.

What this means in the design world is that there is much greater demand for design data incorporating location. In other words, you can't design buildings, highways and roads, and other infrastructure any longer in isolation from their location. Since Autodesk's products are used for creating most of the world's building, roads, and facilities design data, the requirement for location is one dimension of the trend to converged design applications. For the past decade Autodesk has been investing in the technologies enabling this fundamental business transformation, such as 3D, model-driven design, geospatial, and 3D visualization and gaming, which I am convinced is going to change how architects, engineers, utility and telecommunications companies, local government, urban planners, emergency responders and others model and design our urban worlds.

How does so-called mass market geospatial technology, like Google Maps/Earth, Microsoft Virtual Earth and others, influence Autodesk's behaviour and visions in the geospatial business? Do you see this technology more as competition, as some kind of booster of Autodesk's development, or as something else?

Many more people are using spatial data and software in their day-to-day life than ever before in history, though most of these people wouldn't recognize what the letters GIS stand for. Well-known examples include MapQuest, Google Earth and Maps, Yahoo Maps, and Microsoft Virtual Earth. Google Earth has had 200 million downloads, an incredible statistic.

Which are the strategic and operational strengths of Autodesk in the field of geospatial convergence? Where does Autodesk's product line already fit completely into convergence, and which are the main steps that still have to be made?

Several years ago the National Institute of Standards and Technology (NIST) commissioned a study to attempt to quantify the efficiency losses in the U.S. capital facilities industry resulting from inadequate interoperability, including design, engineering, facilities management, business processes, software systems and redundant paper records management across the entire facility life-cycle. NIST estimated that poor interoperability cost the capital facilities industry $15.8 billion in 2002.

Islands of Information

Traditionally, disciplines such as architecture, engineering, and construction, civil engineering, and GIS have been classic information silos. Each has maintained its own island of design applications and data. To break down these barriers, many people see convergence as a key part of the solution. Another dimension of convergence is being able to experience a building, road, or facility before building it. 3D visualization is a critical component of converged solutions. An implication of convergence for emergency responders and urban planners is that they will have access to comprehensive data about the facility inside, outside, and under. For example, when a first responder enters a building, he or she will have at his or her fingertips seamless access to all of the existing architectural, engineering, infrastructure and geospatial data, including structural, HVAC, plumbing, underground cables and pipes, aerial photography, and roads and highways for that structure. Autodesk is uniquely positioned to lead this transformation of the construction industry for several reasons. First, we provide design tools in all of the major design disciplines, including architectural, mechanical, and civil. Secondly, spatial technology is a well-developed platform technology at Autodesk. Thirdly, we made the decision a decade ago to begin investing in 3D technologies and model-driven design. In the area of architectural design, Building Information Modeling (BIM) is a model-driven design technology that was introduced by Autodesk. In the area of mechanical design, Inventor has been a leader in solid modeling, and in civil engineering Civil 3D has introduced the concepts of model-driven design and 3D. Fourthly, Autodesk's Media and Entertainment Division is a major player in the gaming and special effects market for films and television. For example, 3ds Max is a de facto industry standard gaming engine. Because Autodesk has access to these technologies in-house, including architectural and engineering design, geospatial, and gaming and 3D visualization, we are uniquely positioned to provide the desktop and web-based design and digital prototyping tools that will be required in a converged world.

With Autodesk's strong position in the architectural, engineering, and construction (AEC) and 3D geospatial business, is it reasonable to expect that Building Information Modelling/Management (BIM) solutions are one of Autodesk's key paths of development?

The annual spend of the world's construction industry is estimated to be US$ 2.3 trillion. The construction industry is highly competitive, and firms must continually improve their productivity to remain competitive. This challenge of continual productivity improvement has reached crisis proportions in the US, where statistics published by the US Bureau of Labor Statistics show that the productivity of the construction industry has actually declined in the last 40 years, while non-farm productivity has increased by over 200% in the same period. To improve productivity we need to change how we design and construct buildings. In addition, we have to address the challenges of global climate change, more efficient use of energy, and minimizing environmental impact. Traditional CAD is used to produce pieces of paper, often called construction drawings or blueprints. Traditional CAD drawings are not very intelligent because they lack a model or intelligent simulation of real-world objects. Models make it possible to do intelligent things like a downstream trace from a failed transformer to determine the customers affected, changing the footprint of a 50-story building without having to redesign every floor, or designing an engine that can be animated to visualize the moving parts. In the context of architectural design this is called a Building Information Model or BIM, and many people in the industry believe that BIMs not only reduce the costs of design and construction for new structures, but also significantly reduce the downstream costs associated with operation and maintenance. In a nutshell, the business drivers for this transformative technology advance are productivity and efficiency in the construction and facilities management industry, achieved by improving the performance of facilities over their full life-cycle. Because these are key business drivers for our customers, Autodesk has been at the forefront in introducing building information modeling into the AEC market.
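The downstream trace Zeiss mentions is the kind of query that only a network model, not a paper drawing, can answer. As a minimal sketch (the network topology and node names here are invented for illustration, not Autodesk's data model), "which customers are affected by this failed transformer?" reduces to a breadth-first traversal of a connectivity graph:

```python
from collections import deque

# Toy connectivity model: each node lists the nodes it feeds downstream.
# Node names and topology are invented for illustration only.
network = {
    "substation": ["transformer_T1", "transformer_T2"],
    "transformer_T1": ["service_A", "service_B"],
    "transformer_T2": ["service_C"],
    "service_A": ["customer_1", "customer_2"],
    "service_B": ["customer_3"],
    "service_C": ["customer_4"],
}

def downstream_trace(network, failed_node):
    """Breadth-first trace of everything fed by the failed node."""
    affected, queue = set(), deque([failed_node])
    while queue:
        node = queue.popleft()
        for fed in network.get(node, []):
            if fed not in affected:
                affected.add(fed)
                queue.append(fed)
    return affected

# Everything downstream of transformer T1 loses power:
affected = downstream_trace(network, "transformer_T1")
```

Customers fed by transformer T2 are untouched by the trace, which is exactly the intelligence a flat CAD drawing cannot provide.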
Which are the necessary near-future geospatial developments in the world of network infrastructure, like power lines, telecommunications, water, waste water and other public utilities?

Throughout the world, utilities and telecommunications firms manage infrastructure in basically the same way and are facing similar challenges. If you look at the information flow in these organizations, the most obvious thing that strikes you is the problem of silos or islands of information. The second thing is that the information flow in these organizations is for the most part based on paper. For example, the Engineering group uses CAD tools, the Records or Network Documentation group uses GIS tools, and the flow of information between these two groups is paper. The result is redundant processes and backlogs. In addition, the aging of the workforce exacerbates what is already a critical problem, because there is no effective mechanism for transferring the knowledge in the heads of experienced workers to the facilities database, where it can be accessed by younger, less experienced workers. The challenge for IT is how to help organizations make progress in solving these business problems. Most IT people would agree that technology is no longer the excuse. From a technical perspective, the critical components required to address the problems that I've outlined are a spatially-enabled relational database management system (RDBMS), CAD/GIS integration, and Web 2.0 technology to enable field force participation. I see Web 2.0 as a key enabling technology, because it enables remote field staff to participate directly in the maintenance of spatial facilities data. The result is greater productivity and improved data quality, which will make it more feasible for utilities and telecommunications firms to employ younger and less experienced field staff to replace more experienced staff as they retire. MapGuide Open Source and FDO provide a Web 2.0 platform enabling participation of the field force in maintaining and improving the quality of facilities data.

Is the fast-growing field of building cadastres in many European countries influencing your technology development plans? The right product in this field, functionally exploiting the AEC, geospatial, BIM and database synergies, is still missing on the market. Based on the huge 3D knowledge base at your company, do you feel that Autodesk will grab the opportunity and fill the gap?

I see automating the management of cadastres and land registry as an important priority for Autodesk, especially in Eastern Europe because of the legacy of a political system where land was state owned, South America, and Asia. For example, Budapest is a city of approximately two million people. It occupies 52,000 hectares, and there are 230,000 separate land parcels registered in the city in addition to 750,000 other types of properties, such as condominiums, for which the Budapest District Land Office maintains data and legal registry information. The Land Office has just completed a pilot in three districts of Budapest using Autodesk Topobase. Among the features of Topobase that attracted the Land Office was that all data is stored in Oracle Spatial, a vendor-neutral, spatially-enabled relational database management system, so that Budapest's cadastre and land registry is not locked into a single vendor and is open and accessible to other applications. Secondly, Topobase was also attractive because its desktop client is based on AutoCAD, which is the de facto industry standard precision data creation desktop. The Land Office was already familiar with AutoCAD, so this minimized the training required. The combination of the de facto industry standards, Oracle Spatial and AutoCAD, simplifies automating a cadastre and land registry system. In the future I also see increased interest in 3D cadastres, especially in the Asia Pacific region, and Autodesk's investment in 3D technologies such as Building Information Modeling (BIM) provides a foundation for creating and maintaining this new form of cadastre.

Which further developments can we expect in the field of geospatial intellectual property rights? What do you think of GeoCommons, a recently announced repository for geodata available for mashups, with regard to the Web 2.0 world of mapping and Creative Commons licensing?

First of all, I would like to clarify an important point: intellectual property rights (copyright and licensing) and price are separate issues, and what I will discuss here is IP, copyrighting and licensing of spatial data, which has been getting a lot of attention recently. In the UK, where spatial data collected by the Ordnance Survey (which in the UK is called a trading fund and is expected to generate a financial return for the Government) is copyrighted, access to public sector information is perceived to be so restrictive financially that there is an initiative, Free Our Data, supported by The Guardian (freeourdata.org.uk). I was recently in Tasmania, where I heard a very interesting presentation by two members of the Government of the State of Queensland, a Crown lawyer and a statistician. Their objective is to develop a standard set of licenses for the Government of Queensland to be used for digital spatial data. What is being recommended in Queensland is that all
3D technologies
government spatial data would be available under Creative Commons (CC) licenses. This is possible in Australia because copyright is automatic and because government data is covered by Crown copyright. I believe this is also the case in Canada. In the US, in contrast, data emanating from the Federal Government is not covered by copyright. I think that their ground-breaking research and recommendations will get a lot of attention nationally in Australia and even more attention worldwide. There are several sites that offer data collected voluntarily by participants, including OpenStreetMap (www.openstreetmap.org) and GeoCommons (www.geocommons.org), under a Creative Commons share-alike with attribution license (creativecommons.org). Another interesting site is www.malsingmaps.com, where you will find street maps of Singapore and Malaysia, which are collected voluntarily but which are also copyrighted, and which can be freely downloaded for personal use.

What is your opinion on geospatial standards? We are lost without standards and we need them simple, effective and efficient, but why do they seem increasingly diversified and complicated to the geospatial layman and professional alike?

A famous saying we've all heard is that the best thing about standards is that there are so many to choose from. In the context of the geospatial industry, I believe that over the past two years we have seen a strong trend toward recognizing the importance of standards and standards bodies such as the Open Geospatial Consortium (OGC) and ISO, and toward adopting geospatial standards. I would suggest that the Simple Feature Specification for SQL (SFS) has had a tremendous and positive impact in opening up access to spatial data. Most of the world's relational databases (RDBMSs) have implemented the SFS in some form, and all of the major geospatial vendors support one or more spatial RDBMSs. Another spatial standard that has been widely adopted is the OGC's open web services (OWS), for example WMS and WFS. Again, both of these are supported by all major geospatial vendors. So there has been tremendous progress. There are two kinds of standards, de iure or open standards and de facto standards. Geography Markup Language (GML) is an example of the former, and Google Earth's KML of the latter. Many people believe that for standards to be widely adopted, they must not only address real-world problems, but they must be simple. The 80:20 rule is relevant here. The consumer market has made clear that KML has the right level of complexity to address 80% of the world's spatial problems. The good news is that KML has already been adopted as an OGC Best Practice and appears to be well on its way to becoming an OGC standard.

Building Information Modeling (BIM) is a model-driven design technology that was introduced by Autodesk.

In various parts of the world an interest in commercial opportunities using open source geospatial technology is growing substantially. In your opinion, what should one consider most, be it benefits or dangers, when entering the open source community?

Before February of last year, open source geospatial was a "quiet reality". Most people were surprised by how extensive and widespread the use of open source geospatial software had become. The formation of the Open Source Geospatial Foundation (www.osgeo.org) with the support of Autodesk reflects the maturity of open source geospatial software and is contributing to bringing open source geospatial software into markets where it has had limited penetration in the past. There are two key requirements for successful open source projects: a grass-roots developer community and a thriving business sector which relies on the technology. Both of these conditions have been realized in the open source geospatial community. A common misconception about open source software is that open source software is the opposite of commercial software. The reality is that there are two types of commercial software, open source and closed source (often called proprietary). Many commercial companies such as Red Hat base their business entirely around open source software. As a rule of thumb, open source does better in areas where software is being commoditized, where the opportunity for different vendors to differentiate their software is limited. Well-known examples are operating systems (Linux), web servers (Apache), relational database management systems (MySQL), and scripting languages (PHP, Perl, and Python). Another rule of thumb is that where you find well-developed standards, such as POSIX, SQL, HTTP, HTML, POP, and SMTP, you will often find commoditization. For example, MapServer has been among the leaders in supporting Open Geospatial Consortium (OGC) open web services standards (OWS) such as WMS and WFS. Another common misconception is that you're left to your own devices when it comes to support. The reality is that there are many companies that provide support for open source software. Perhaps the best known is Red Hat, whose primary business is providing support for Linux. The last time we checked, Red Hat's market capitalization was $4.3B. Similarly, in the geospatial arena, firms such as DMSolutions and Orkney are providing support for open source geospatial products.

Which are Autodesk's experiences so far with open source technologies? Will the company expand its open source initiatives and, if so, in which directions or products?

I see an analogy between the current situation in web mapping and the early days of the web, when the initial web servers were being developed. In the mid-90s, eight core contributors supporting the NCSA HTTP Server got together for the purpose of coordinating their fixes and "patches" to the HTTP Server and formed the original Apache Group, which was little more than a shared mailing list. The industry, including major IT players like IBM, Sun, HP and others, had to decide whether to develop and support their own proprietary web servers and compete in this arena. In 1999, with IBM encouragement, the members of the Apache
Group formed the Apache Software Foundation, a legal entity, to provide organizational, legal, and financial support for the Apache web server. Since then the Apache Web Server has been adopted by IBM and others and is running over 70% of the world's web servers. I foresee that the future of open source web mapping will be similar to what happened with web servers. In the short period since MapGuide was donated to OSGeo, we have seen over 25,000 downloads of MapGuide Open Source and over 4,000 of the Feature Data Object (FDO) API. We are also seeing non-Autodesk developers actively contributing to both projects. For example, most of the FDO data providers currently available were developed by non-Autodesk developers.
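The OWS standards mentioned throughout this discussion are what let clients like MapGuide and servers like MapServer interoperate across vendors. As a minimal sketch (the server URL and layer names below are placeholders, not real endpoints), a WMS GetMap request is nothing more than a parameterized HTTP URL whose parameter names come from the OGC WMS specification:

```python
from urllib.parse import urlencode

def getmap_url(base_url, layers, bbox, width, height, srs="EPSG:4326"):
    """Build a WMS 1.1.1 GetMap request URL.

    Parameter names (SERVICE, VERSION, REQUEST, LAYERS, BBOX, ...) follow the
    OGC WMS specification; base_url is a placeholder for a real WMS endpoint.
    """
    params = {
        "SERVICE": "WMS",
        "VERSION": "1.1.1",
        "REQUEST": "GetMap",
        "LAYERS": ",".join(layers),          # comma-separated layer list
        "SRS": srs,                          # spatial reference system
        "BBOX": ",".join(str(v) for v in bbox),  # minx,miny,maxx,maxy
        "WIDTH": width,
        "HEIGHT": height,
        "FORMAT": "image/png",
    }
    return base_url + "?" + urlencode(params)

url = getmap_url("http://example.com/wms", ["roads", "parcels"],
                 (-180, -90, 180, 90), 800, 400)
```

Because every compliant server understands the same request, the client never needs to know whose software is answering it — which is precisely the commoditization argument made above.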
At the end, please share with us your vision of the grand picture of the global geospatial market five or ten years from now.

The single most important trend that I see in the future is convergence: breaking down islands of information based on traditional disciplines or professional categories and those created by the traditional organization of the construction, transportation, and utility and telecommunications industries. This is going to create points of inflection in the construction industry, the utility and telecommunications industries, urban planning, and the emergency response and disaster management sectors. For example, I foresee the replacement of much of our existing infrastructure, which in the US is in dire condition (www.asce.org/reportcard/2005/index.cfm), requiring trillions of dollars of investment, by more environmentally friendly, more energy-efficient, and more efficiently maintainable infrastructure. I expect these will be exciting times, especially for the much more digitally savvy younger generation, because I am convinced that gaming technology is going to be a critical technology in enabling the new converged world. We are going to realize what Roger Tomlinson, often referred to as the Father of GIS, foresaw in 1975: being able to simulate a model of the earth on a computer that functions like the earth, including all people, places, things and processes. "We've got a tool that allows us to describe the world with much greater facility than we ever have before. And, by definition, that's going to change what we understand about it." In other words, SimCity(tm) but with real data. Secondly, I remember attending the last European Oracle OpenWorld conference in London several years ago and listening to the presentation by Lars Wahlstrom, head of Oracle's EMEA Telecommunications Industry Group, on the impact of IT in telecommunications. He said that compared to the banking industry (ATMs replacing tellers), the low-cost airline industry segment (like Ryanair and Southwest), and the automobile industry (SAP), the telecommunications industry hadn't even begun to scratch the surface in applying IT to streamline its business processes. I also remember a very experienced director of operations at a major US telecommunications company who was convinced that by breaking down islands of information and streamlining the lifecycle involved in maintaining facilities data, he could reduce costs by 70-90%, make his company much more agile in deploying new services, keep the regulator happy, and be much more responsive to his customers. In the intervening five years or so, compared to what is achievable, the progress has been slow.

But the aging workforce issue, the difficulty that utilities and telecommunications companies are having in finding planners, field staff, and others to replace staff who are retiring, and the aging of the infrastructure itself are coming to a head, and I foresee that utilities, telecommunications companies and local governments are going to be forced to address the fundamental issue of having to do a lot more with less. I foresee utilities, telecommunications firms and others investing in IT like the banking industry, low-cost airline industry, and German automobile industry did years ago. In the immediate future I see Tim O'Reilly's concept of Web 2.0, harnessing collective intelligence, as a key technology enabling participation of the field force in maintaining infrastructure data and a key to addressing the aging workforce challenge. Thirdly, I foresee that government regulation and legislation are going to force the capture of a lot more digital data about infrastructure, including buildings, roads and highways, and other critical infrastructure. The Traffic Management Act in the UK is a harbinger of what is to come. I also foresee that homeland security, global warming, human population trends, and the cost of energy will force the digital earth (www.isde5.org). There is a history of utilities, telecommunications, and local government having to be nudged by government legislation or regulation, but realizing tremendous business value from digitalizing their business processes after the initial nudge from government. Legislation will force standards like INSPIRE in the EU and standardized data models like the FGDC's Geospatial Data Model (www.fgdc.gov/dhsgdm) in the US. Finally, I believe that the semantic web, which has been championed by Tim Berners-Lee and others, is going to start delivering real business value. In the building, highways and roads, and utilities and telecommunications sectors, I foresee that the semantic web will help these organizations reduce the cost of maintaining their infrastructure and optimize the build-out of new infrastructure and new technologies.

The economic driver for this is clear: as a rule of thumb, 90% of the cost of facilities, whether buildings, highways and roads, or network infrastructure like telecommunications, power, gas, water, and waste and storm water, is incurred in the operating and maintenance phase, and we are going to have to increasingly design infrastructure to reduce operating and maintenance costs, both in terms of dollars and environmental impact.

Joc Triglav (jtriglav@geoinformatics.com) is editor and columnist of GeoInformatics. For additional information: www.autodesk.com.
European SII

Clearly, Bentley is committed to advancing GIS for infrastructure and invites others to get involved in the process through its annual Geospatial Research Seminar, which encourages open, in-depth technical discussions. The hosts of the 2007 seminar were Oscar Custers from Bentley and Peter van Oosterom of the Delft University of Technology. The theme this year was "Creating Spatial Information Infrastructures: towards the spatial semantic web". Agreeing on the syntax and formats of spatial data, and developing systems that handle them, is the first step towards Spatial Information Infrastructures (SII). Recently, there have been a number of large initiatives encompassing spatial information infrastructures, for example INSPIRE and the U.S. Department of Homeland Security (DHS) Geospatial Data Model. These harmonised models can be used for the implementation of information systems according to the model-driven architecture approach. The same model can be the basis of a database schema (SQL Data Definition Language) and the exchange format (XML schema). This model can also define the user interface and associated behaviour in an edit environment. Within Bentley, the XFM technology is an important example of this development.
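The model-driven idea described above — one harmonised model generating both the database schema and the exchange format — can be sketched very simply. In this illustration the "Road" feature class, its attributes, and the type mappings are all invented; this is not Bentley's XFM format, just the general pattern:

```python
# One feature-class model drives both the SQL DDL and the XML Schema.
# The model, attribute names, and type mappings are invented for illustration.
model = {
    "name": "Road",
    "attributes": [("name", "string"), ("lanes", "integer"), ("length_m", "double")],
}

SQL_TYPES = {"string": "VARCHAR(255)", "integer": "INTEGER", "double": "DOUBLE PRECISION"}
XSD_TYPES = {"string": "xs:string", "integer": "xs:integer", "double": "xs:double"}

def to_ddl(model):
    """Generate a CREATE TABLE statement from the feature-class model."""
    cols = ",\n  ".join(f"{n} {SQL_TYPES[t]}" for n, t in model["attributes"])
    return f"CREATE TABLE {model['name']} (\n  {cols}\n);"

def to_xsd(model):
    """Generate an XML Schema complex type from the same model."""
    elems = "".join(f'<xs:element name="{n}" type="{XSD_TYPES[t]}"/>'
                    for n, t in model["attributes"])
    return (f'<xs:complexType name="{model["name"]}Type">'
            f"<xs:sequence>{elems}</xs:sequence></xs:complexType>")

ddl = to_ddl(model)
xsd = to_xsd(model)
```

Because both outputs derive from one model, the storage schema and the exchange format can never drift apart — which is the point of the model-driven architecture approach.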
All in All
Of course, there was some overlap in the content at BE Conference Europe in London with that of BE Conference 2007, which took place five weeks earlier in Los Angeles. However, there was certainly enough to be seen and heard that was new and different, especially considering the information presented during the Geospatial Research Seminar. Not unimportantly, London is only a couple of hours away from most places in Europe.
Remco Takken (rtakken@geoinformatics.com) is editor of GeoInformatics. Have a look at www.bentley.com for more information on the topics discussed in this article.
Article
Galileo
GNSS Update
Probably everybody knows by now that Galileo has hit rough weather. There is still much discussion about financing, although by now it is clear that it will be financed using public money. Technically, however, progress is being made.
By Huibert-Jan Lekkerkerk
Galileo
In the previous update I reported that EU transport commissioner Jacques Barrot had set a deadline for the consortium of eight companies with respect to progress of the project. The problem is the contract the private consortium was to sign with the EU for exclusive exploitation of Galileo. In return for a 20-year concession, the consortium would finance two-thirds of the costs of Galileo, in total about 2.5 billion dollars. In November 2006 a partial deal was closed. The March 14, 2007 ultimatum from the EU declared that the contract should be finalized before May 10, 2007. This was not achieved, resulting in the EU ending negotiations and investigating the possibility of completing the project on its own, preferably with public funding.
The consortium, in turn, has responded that it does not believe the market for fee-based Galileo services is large enough, especially with the well-established GPS system freely available.
Financing

On June 8, the European Parliament decided that Galileo should be funded with public money. The total amount needed to finish the project is estimated at 2.4 billion euro. It is still unclear where this money is to come from. Three options have been mentioned:
- Additional funding from the EU member states.
- The ESA budget, in return for which member states would get compensatory orders for the national space industries.
- Financing within the current EU budget.
No decision has yet been made on the method of funding; even partial private funding is not ruled out, although project control will remain with the EU parliament from now on.

Rubidium atomic clock (source: www.esa.int).
operational in January 2006 the signals emitted were test signals only. Those signals caused some commotion as they did not conform to the interface control document. The signals transmitted now conform to the interface control document and can be used by manufacturers for testing receivers. Finally, the first results from the Rubidium clocks on GIOVE-A have been made public. The clocks function as expected and will probably exceed their design life of 12 years. A comparison with identical clocks on the ground shows that the clocks behave exactly as expected. This is important, since the clock is the most important part of the satellite and will eventually determine the accuracy of Galileo.
citizens support the conclusions of the commission. They say Galileo should proceed and should be funded with public money. One of the consequences of the current financial status is that the deadline, earlier shifted from 2008 to 2011, will shift again to the end of 2012. At the end of May the European space policy document was published. This document states that all space activities have an important defense and security role as well as a public role. Earlier there were rumors that Galileo would fulfill military purposes and, without explicitly mentioning Galileo, this policy document seems to confirm those rumors.
Glonass
In the previous update we reported that there was a possibility Glonass would switch to the same radio technique (CDMA) used in GPS. A recent press release, however, stated that no decisions would be made before the end of 2007. Switching to CDMA is costly, and it is rumored that Russian military circles consider the change unnecessary given the robustness of the current FDMA technique. President Putin has indicated that Glonass will become fully usable for civilian purposes in the near future, although exactly what this means is not completely clear. Communication about Glonass in the English language has, however, improved lately.
Egnos
Test aircraft during an Egnos-supported landing at Limoges (source: www.esa.int).
On May 16, ESA and the GNSS Supervisory Authority (GSA) signed an agreement concerning further cooperation between Egnos and Galileo. Although the Egnos system currently augments GPS, it is envisioned that in the future Egnos will be included completely within the Galileo infrastructure. Tests with Egnos during an aircraft landing at the French airport of Limoges were successful. As with the American WAAS, an important application of Egnos is aircraft navigation. The results of these tests have brought the system one step closer to official use in 2008.
Beidou / Compass
On April 13 China launched the fifth Beidou / Compass satellite. The previous four satellites were geostationary; this fifth satellite, however, was put into a medium earth orbit, as is the case with the other GNSS systems. The fourth satellite, which was discussed in the previous update, appears to have problems with its solar panel and has not yet been activated, as far as can be determined from the information available from China.
GPS
Tests with the GPS satellite broadcasting PRN32 have been going on for a while. In December 2006 the satellite was activated, and as of April 1, 2007 it transmits on a continuous basis. The satellite should, however, not be used for navigation and is set to unhealthy. Starting June 27, this satellite is also included in the almanac, thereby increasing the chance that a receiver will try to lock onto it. It should, however, be excluded from any position calculation and is essentially meant to test the (future) expansion of the GPS constellation to 32 or more satellites. Further, Lockheed Martin has been awarded a six-million-dollar contract for building a module that will temporarily transmit on the L5 frequency from a Block IIR-M satellite. Earlier, the plan was to install this frequency on the Block IIF satellites. This frequency is important for aviation purposes but will also enhance survey applications.
Huibert-Jan Lekkerkerk (hlekkerkerk@geoinformatics.com) is editor of GeoInformatics and a freelance writer and trainer in the field of positioning and hydrography.
By Sam Bacharach
Virtual 3D city model of Ettenheim in Germany, automatically derived from an IFC dataset and manually enriched with respect to the employed CityGML feature types. www.citygml.org
SDI advocates promote SDI as public infrastructure, like roads and telephone systems. Like other public infrastructure, SDI provides a reliable, shared, supporting environment that makes individuals more effective in the world, businesses more profitable, and governments more efficient.
ease of arranging face-to-face meetings for these purposes should make it easier for advocates of local data sharing to cooperatively apply their leadership skills and authority. Logically, it would appear that larger, regional SDIs ought to be much more difficult to establish, because distance discourages face-to-face meetings, and because many more people must be brought into agreement. In the real world, however, regional SDIs are appearing at a rapid rate. Sometimes region means a world region, or group of nations. Sometimes region means a group of cities, counties, states, or provinces. Sometimes region means a group of institutions working within a particular domain, such as oceanography (e.g. www.openioos.org), and also within a geographic region. There are examples of each of these types of regional
Technically
To understand or advocate SDI development, it helps to think of SDI as an entirely social phenomenon. To a technically-minded person, an SDI appears as a data-sharing network with many nodes, each comprised of computing devices that can produce, transmit, receive and/or process spatial data. The technical interoperability that is a prerequisite for Web-based, real-time access to multiple data and processing resources may appear to the technical person or the SDI user as merely a set of software features. But interoperability is in fact obtained through social processes.
Across the information technology (IT) industry, consensus standards have made steady progress in the last 15 years in dethroning proprietary standards. Previously, in any subdomain of information technology, a single dominant vendor usually set the standard. No longer. The Internet and the Web are only the most prominent of the examples that have shown technology users and providers the commercial advantages of a more democratic and global approach to standard setting. Now, agreements on software interfaces, data encodings and best practices are increasingly the result of formal social processes, usually global, involving technical committees that include both users and providers.
In the geospatial domain, the OGC and ISO TC/211 are the most visible facilitators of these formal social processes, but their work builds on the work of standards organizations in the broader IT domain. Their work also involves coordination - often face-to-face - with standards organizations in neighboring domains such as transportation, emergency response, 3D animation, databases, computer-aided design (CAD), and location-based services. SDI depends on a sequence of social processes that begins with the social processes that produce technical interoperability.
After everyone's computer systems work together to share geospatial data, the remaining policies and institutional arrangements are much easier to implement. As we can see from the rollout of regional SDIs, the mutual benefits are, in a growing number of cases, sufficient to overcome institutional obstacles to implementing new data-sharing policies and institutional arrangements.
(http://inspire.jrc.it/), so a decentralized approach is favored. The project uses a service-oriented architecture based on components that implement OGC standards.
Specification (for Web-based query and delivery of vector-based data) and the OpenGIS Web Coverage Service (WCS) Specification (for Web-based query and delivery of raster-based data) are also active. This services framework is offered to other institutions and organizations as a platform to which others can add value, sharing and reusing the services for specific applications. The IDEC strategy has been to promote SDI-based Catalonian themes such as environment, coastal information, transportation, etc. This thematic approach, based on the IDEC platform, has had a clear impact on the models upon which other projects have been planned. Some important initiatives have changed their initial conceptualization: from a centralized model to an open and distributed architecture; from a proprietary system to a standardized one based on interoperable technologies. One example is the EUROSION Project, a European initiative funded by the EC to promote better management of the coastal zones. Others include UNIVERS, a regional initiative in the framework of an INTERREG European Project to connect WMS of the university departments in Catalonia to share land information and other geospatial information; and LOCAL, a recently-launched project that aims to incorporate the municipalities in the
Regional SDI. All are clear samples of a new era in managing GI technologies. The open SDI paradigm demonstrates the importance of interoperability concepts and technologies. A regional approach helped the IDEC developers to set up and more easily promote projects based on SDI concepts and technologies, because of its intermediate position between the large scale of the State and the smaller scale of local government.
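The interoperability these projects rely on is concrete: OGC services such as WMS, WFS and WCS are queried over plain HTTP with key-value parameters, so independently operated servers can plug into a shared infrastructure. As a minimal sketch - the endpoint and layer name below are hypothetical, not actual IDEC services - a standard WFS GetFeature request can be assembled like this:

```python
from urllib.parse import urlencode

def wfs_getfeature_url(endpoint, type_name, bbox, version="1.1.0"):
    """Build an OGC WFS GetFeature request URL (key-value-pair encoding).

    bbox is (minx, miny, maxx, maxy) in the layer's default CRS.
    """
    params = {
        "service": "WFS",            # which OGC service is addressed
        "version": version,          # protocol version
        "request": "GetFeature",     # the operation being invoked
        "typeName": type_name,       # the feature type (layer) to query
        "bbox": ",".join(str(c) for c in bbox),  # spatial filter
    }
    return endpoint + "?" + urlencode(params)

# Hypothetical endpoint and layer name, for illustration only.
url = wfs_getfeature_url(
    "https://example.org/geoserver/wfs",
    "topo:roads",
    (420000, 4570000, 460000, 4610000),
)
```

Because any OGC-compliant server answers the same request format regardless of vendor, this is the technical basis of the shift described above from proprietary, centralized systems to an open and distributed architecture.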
References
Spatial Information Management in the Context of SDI and e-Government - The German Approach, Dr. Jens Riecken, Germany.
The SDI of Catalonia (IDEC): Geo-interoperability at a regional level, Jordi Guimet, Project leader. GIM, June 2005.
INSPIRE Technical Architecture: http://inspire.jrc.it/reports/position_papers/inspire_ast_pp_v4_3_en.pdf
Australian Spatial Interoperability Demonstration Project Reference Model: www.sidp.com.au/
Development of user-friendly services is a process of continuous evolution based on user statistics and feedback. Therefore, ambitious and well-functioning solutions are developed through long-term commitments rather than short-lived projects.
support their establishment. The INSPIRE directive of the European Union identifies a range of services that each EU country should provide. Shared metadata, viewing and downloading services are consequently being planned and considered in each member state. This implementation process includes much more than just technological decisions, as novel solutions are needed in conceptual, administrative and societal terms.
al Communication
Lounaispaikka Portal and Map Service
A regional portal for GI network and services. Lounaispaikka Map Service contains an extensive variety of information and easy-to-use map interfaces for members of the public and environmental professionals. It has a distributed architecture and a shared metadata service. Special attention has been paid to the organization and cartographic presentation of the vast array of data sets. www.lounaispaikka.fi/kartta/
attention and the most users, and attract volunteer developers to take them further. Google Earth was an innovation of that kind. The package is brilliant: free of charge, an easy-to-use interface, and content with enough detail to fulfil local needs. From the decision-makers' viewpoint, however, the package is unsatisfactory: after the initial excitement, no tools are provided to evaluate the quality of the contents.
The work done in the Envifacilitate project aimed to contribute content-driven development based on established spatial data infrastructures and services (textbox). The primary focus was on facilitating access to high-quality geographic information at various steps of the information-processing chain. The project presented end-user solutions for easier access to metadata, downloading and viewing. As the work was carried out by
companies of providing demo versions of their programs for test use for a given period. Accordingly, spatial data from volunteer data producers is lent through the Paikkatietolainaamo facility to registered users to use free of charge for a year. This mechanism has proven to be successful, as it increases the availability of spatial data and supports empirical work on it. More details about this facility and its operational mechanisms will be provided in a later issue of GeoInformatics. Another mechanism to increase the availability of spatial data is the University of Turku's Spatial Data Archive, a repository that increases opportunities to reuse spatial data produced by individual researchers and projects. Its development stems from the fact that beyond the large spatial data sets produced by big actors, there are also a number of smaller spatial data registers that should not be allowed to lie dormant. Data producers deposit their data into the archive and sign a contract covering usage rights, after which the data producer does not need to intervene in its further dissemination. Through browsing the archive's metadata register, users may find
interesting data content which they can acquire after signing a usage contract with the archive. The service does not involve costs for either the data producer or the users. Both of the above facilities are examples of rather simple service concepts that add to the amount and variety of spatial data available to users. The potential of such mechanisms is obvious, as they lower barriers to availability and widen the use of geographical information overall. Enhanced data access will in turn attract more usage and promote new spatial service innovations.
Lounaispaikka Map Service combines local views with the utilization of national information resources. Distributed architecture supports the currency of frequently changing data, such as bird observations. Photo: Sampo Kunttu.
thus combines a vast collection of nationally and locally-produced spatial information content in a single map service. Additionally, this service presents some new ways of viewing the abundant spatial data using an ordinary Internet browser. Several independent map engines operate in a joint interface. Different map viewers are organized under separate tabs which the user may change on the fly. Each tab provides a collection of data layers and the appropriate tools to view and study them. There are dozens of pre-arranged combinations of data layers, visualized cartographically as clearly and intuitively as possible. Advanced users may also compose their own layer combinations. The Lounaispaikka Map Service has succeeded in overcoming some of the typical problems facing map service designers, such as the limited size of the computer screen and the contradiction between a high degree of freedom for users and the clarity of the user interface. Ordinary users typically expect to see easy-to-understand map presentations, while professional users may desire tools to create their own map compositions, even at the cost of sacrificing cartographic clarity.
are just about to emerge, and the process has its unique characteristics in each country. In many cases, different parties within a region look forward to increased collaboration in general terms, which also involves the sharing of their spatial data, but unclear data access and usage rights prevent the transformation of these intentions into practical measures. This hesitation may lead to stagnation that hinders innovative work with spatial data. The mechanisms piloted in the Envifacilitate project revealed new ways to overcome restrictions. When the conditions for collaboration are clear and mutually agreed upon, intentions may indeed be transformed into concrete collaboration that produces joint spatial information services. It is important to acknowledge that the process does not involve technological performance issues only; it is also very much a matter of cartography, communication and user interaction. In advanced services the users should be given enhanced options such as saving workspaces or defining personal user profiles. Semi-automatic GIS analysis tools, map printing options and many other functions would further tailor the services to users' needs.
Longevity of Services
The establishment of useful spatial data services is, at the practical level, a complex iterative process that involves many consecutive work phases. Each time new operations are launched, feedback is gained from their users. In order to reach the corresponding target group, marketing and networking are needed. When conditions are right the service will also market itself, but more user requests will also follow. The developer of an information facility on the Internet thus has to be a kind of juggler who is able to keep many simultaneous processes moving. To be able to do this, the developer has to maintain ongoing contact with the entire process, be innovative and, rather than being afraid to make mistakes, accept risks with an aim to providing ever-improving performance. User feedback provides the key to continuous improvement of the service.
Risto Kalliola (risto.kalliola@utu.fi) is a professor in the geography department of the University of Turku, Finland. Tuuli Toivonen (tuuli.k.toivonen@helsinki.fi) is a university lecturer in geoinformatics in the geography department at the University of Helsinki, Finland. Both have taken part in building Finland's SDI and its expert panels, and they coordinated the EU-LIFE funded project, Envifacilitate. The project provides background for the present article. It is based on the project's Lessons Learned document, available at:
Synergy as Ground
Practical work with spatial data infrastructures and services involves simultaneous integrated actions in different spatial scales from subnational through national to international. In many countries, national level organizations
Different user groups require different processing levels of geographic information. SDIs should support services ranging from simple download options and database queries to advanced citizen services that also utilize the best data available.
envifacilitate.utu.fi/deliverables/ENVIFACILITATE_lessons_learnt.pdf
Open Skies
Over the last few years, there has been a continuous stream of reports and discussions in the media about Open Skies treaties and agreements. Within this context, much of the media attention is focused on the long and tortuous negotiations between the United States and the European Union (EU) regarding the liberalization of transatlantic air travel and the vexed question of take-off and landing rights for airlines on both sides of the Atlantic. However, there is what some people would regard as an even more important Open Skies Treaty that receives much less publicity - even though its remit extends far beyond the U.S.A. and the EU bloc of countries.
By Gordon Petrie & Hartwig Spitzer
Fig. 1 - The huge land area - from Vancouver to Vladivostock - that is covered by the Open Skies Treaty is shown in blue.
This particular Treaty is concerned with the monitoring of military sites from the air and has the following three main objectives: (I) First of all, it is designed to enhance openness and transparency with regard to the military activities being carried out in Europe, North America and parts of Asia. (II) Its second objective is to help support the verification of the many international arms control agreements that have been reached in recent years. (III) The third objective of the Treaty is to strengthen the international capacity for the prevention of military conflicts and for the management of political crises and disputes, with a view to ensuring greater stability and peace over the vast land area of the northern part of the Northern Hemisphere lying between Vancouver and Vladivostock (Fig. 1).
The technology that has been adopted to try to achieve these ambitious objectives is aerial observation - using unarmed manned aircraft equipped only with cameras, scanners and SAR imagers to monitor military activities and sites on a cooperative basis between participating countries. The Open Skies Treaty was originally signed in 1992. However, the ratification of the Treaty proved to be difficult in some countries, so the Treaty did not come into actual operation until January 2002. Indeed it has only become fully operational, with the full set of permitted imagers, since 1st January 2006. It is interesting therefore to report on the implementation, operation and achievements of the Open Skies Treaty to date and, in particular, to discuss the airborne imaging aspects of the Treaty.
Background
The original idea of having an Open Skies programme to make mutual use of aerial photography to monitor the weapons arsenals and military dispositions of the U.S.A. and Soviet Union during the Cold War - with the object of preventing surprise attacks by either side - was set out by President Eisenhower in 1955. This purely bi-lateral proposal was quickly rejected by the Soviet Union. However, the idea was revived by President George Bush Sr. in 1989. On this occasion, the proposal was to carry out multi-lateral monitoring of all the countries in the NATO and Warsaw Pact blocs on an equitable and strictly controlled basis. Times and attitudes had changed, the Cold War was coming to an end and the new proposal met with a much better response. After a long period of detailed negotiation starting in February 1990 and a series of conferences held in Ottawa, Budapest and Vienna, the terms of an acceptable treaty were reached. The Open Skies Treaty was formally signed by the foreign ministers of the 26 countries of the two blocs at a meeting held in Helsinki on 24th March 1992. Over the next two or three years, the Treaty was ratified by all of these countries except Russia, Ukraine and Belarus. Eventually the Ukrainian parliament formally ratified the Treaty in 2000, followed by Russia and Belarus in 2001 - which allowed the Treaty to actually come into force on 1st January 2002. Since then, a further nine European countries have signed up to the Treaty. The detailed coordination of the Treaty's implementation and the resolving of any disputes, procedural issues or technical issues is carried out by the Open Skies Consultative Committee (OSCC), which is based in Vienna.
Preparation
During the years between 1992 and 2001, a great deal of work was carried out in preparation for the coming into force of the Treaty. This included the setting up of Open Skies units in each country, the training of the appropriate personnel and the preparations for the certification of suitable aircraft and imagers that would fall within the strictly defined terms of the Treaty. Furthermore, over 350 trial overflights were conducted during this period before the Treaty actually came into operation. This provided much practical experience to the newly formed Open Skies units in all of the countries that had signed the Treaty. It also resulted in a real spirit of cooperation and a
great deal of confidence and trust being built up between all the participants engaged in the Open Skies programme before the Treaty did formally come into force.
Fig. 2 - In the U.S.A., several airfields are designated for use by Open Skies observation aircraft. There are two Points of Entry (POEs) - in Washington, D.C. and California; three Open Skies Airfields (OSAs) where observation flights can start and finish; and four airfields where refuelling can take place. (Source: DTRA)
I - Treaty Rules & Requirements
(a) Quotas
The Open Skies Treaty operates on the basis of active and passive quotas of overflights for each participating country. These quotas have been set largely on the basis of the geographic dimensions and the military capabilities and strategic importance of each country that adheres to the Treaty. The so-called passive quota defines the number of overflights that a country (or state party) is obliged to receive from other countries. The active quota is the number of overflights that each country (or state party) has the right to conduct over other countries. The active and passive quotas of permitted overflights are usually equal in number for each country. Both the U.S.A. and the Russia/Belarus state party each have an annual quota of 42 overflights; Germany, France, Italy, Turkey, Ukraine and the U.K. each have a quota of 12 overflights; Sweden and Norway each have 7; and so on, with smaller quotas for each of the remaining signatory countries.
(b) Distances
In close association with the number of permitted overflights, there are also limits regarding the maximum distance that can be flown during a single individual overflight. The specific restriction that applies to a particular country is largely related to its size - the largest distances being 7,200 km in the case of the Russia/Belarus combination and between 5,000 and 6,000 km each in the case of Canada and the U.S.A. Each country has one or more airfields designated as its point of entry for the aircraft carrying out an overflight. Further airfields are designated as refuelling stops (Fig. 2).
(c) Missions
The rules and procedures for the conduct of each individual mission are also set out in detail by the Treaty. Each country wishing to conduct an overflight over another country must give a minimum notice of 72 hours before the arrival of its observation aircraft at the designated point of entry. Besides which, a mission plan must be submitted 24 hours before the intended flight, giving details of the intended route, distance and estimated flight time. Each overflight must then be completed within a period of 96 hours from the time of arrival of the aircraft at the point of entry. There are no territorial restrictions on the overflights. Thus any part of the full territory of each country can be overflown, except for a 10 km zone adjacent to the country's borders with a state that has not signed the Treaty.
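To make the interplay of these rules concrete, the headline constraints can be sketched as a small validation routine. This is purely illustrative - the data structure and function names are our own invention, not anything used operationally by Open Skies units - and it covers only the timing and distance rules quoted above:

```python
from datetime import datetime, timedelta

# Maximum single-flight distances (km) for a few observed parties,
# as quoted in the article; other signatories have their own limits.
MAX_DISTANCE_KM = {"Russia/Belarus": 7200, "U.S.A.": 6000, "Canada": 5000}

def mission_ok(notice_at, arrival_at, plan_at, flight_at, done_at,
               observed_party, distance_km):
    """Check an overflight against the Treaty's headline rules."""
    if arrival_at - notice_at < timedelta(hours=72):
        return False   # less than 72 hours' notice before arrival
    if flight_at - plan_at < timedelta(hours=24):
        return False   # mission plan filed less than 24 hours ahead
    if done_at - arrival_at > timedelta(hours=96):
        return False   # overflight not completed within 96 hours
    if distance_km > MAX_DISTANCE_KM[observed_party]:
        return False   # exceeds the observed party's distance cap
    return True

# A compliant (hypothetical) mission over Canada:
ok = mission_ok(
    notice_at=datetime(2007, 5, 1, 8, 0),    # 76 h before arrival
    arrival_at=datetime(2007, 5, 4, 12, 0),
    plan_at=datetime(2007, 5, 4, 14, 0),     # 26 h before the flight
    flight_at=datetime(2007, 5, 5, 16, 0),
    done_at=datetime(2007, 5, 7, 10, 0),     # 70 h after arrival
    observed_party="Canada",
    distance_km=4800,
)
```

The same sketch makes clear why the rules are checkable by both sides: every constraint is a plain numeric comparison on facts that both the observing and observed parties can verify independently.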
Fig. 3 (a) - The coverage of the full suite of permitted imaging devices - a panoramic film camera (yellow); 3 photogrammetric film cameras (red/blue); 3 video cameras (green); an infra-red line scanner (purple); and a SAR imager (blue) - as deployed on the German Tupolev Tu-154M aircraft. (Source: IGI)
Fig. 3 (b) - The three Zeiss Jena LMK photogrammetric film cameras and three Zeiss VOS-60 video cameras (pushbroom scanners) mounted in the German Tupolev Tu-154M aircraft . (Source: IGI)
Fig. 4 - A U.S. Open Skies team in action with (a) an operator working at an imaging control station; and (b) a film magazine being changed - on-board the Boeing OC-135B aircraft - and (c) the inspection of a processed film being carried out at the Open Skies Media Processing Facility (OSMPF) in Dayton, Ohio. (Source: OSMPF)
(d) Allowable Imaging Devices
The imaging devices that are allowed under the Treaty for use in the overflights are also strictly regulated. Optical photographic film cameras can be used in either a vertical or oblique mode of operation, provided that the ground resolution of the resulting image is not finer (smaller) than 30 cm. Up to three frame cameras (one vertical and two oblique) and a single panoramic camera can be used during a specific overflight. Video cameras giving a real-time display of the ground on-board the aircraft may also be used, again with the same ground resolution limit of 30 cm. With the full implementation of the Treaty from 2006 onwards, infra-red and SAR imagers may also be used during overflights, with minimum ground resolution values of 50 cm and 3 m respectively for the resulting images. It should
be said that only Russia plans to operate its Open Skies aircraft with the full suite of permitted imaging devices (Fig. 3). Indeed many of the Treaty countries operate their observation aircraft fitted with only one or two film cameras and a video camera.
(e) Certification
A very important matter for all Open Skies flights is the certification of the whole of the observing system that is being used to collect the imagery. This involves the validation of both the aircraft and the imaging devices that it carries. This is achieved in the first instance through the detailed inspection of the aircraft and its imaging devices that is carried out at the certification site by a wide-ranging team drawn from many of the Treaty countries. This procedure is then followed by flights over a test field of calibration (bar) targets from a designated flying height, in order to demonstrate that the ground resolution values of the resulting images do not exceed the limits defined by the
Treaty. This is checked through the subsequent analysis of the image data that has been collected in-flight over the test field. Besides this overall certification carried out at the certification site, prior to each individual operational flight, the observation aircraft and its imaging systems are inspected thoroughly by a team from the country being observed to ensure that they are in exactly the same condition as when they were certified. A team from the observed country is also present in the observation aircraft during the actual flight to ensure that the criteria and procedures laid down in the Treaty are indeed being followed. The exposed films and magnetic tapes (the latter from the infrared and SAR imagers) are certified in-flight by both parties. The films and tapes are then processed and duplicated at a laboratory on the ground (Fig. 4). These operations are carried out in the presence of both teams with certified copies being handed over to each team. All the Treaty countries receive a report on each mission. If requested, further copies of the films and tapes resulting from the flight can be supplied (at an agreed cost) to any other Treaty country. Thus the Treaty embodies both equity and transparency: every state can see what every other state has observed.
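What the certification flights over the bar targets verify is, numerically, a simple comparison: the measured ground resolution of each imaging device must be no finer than the Treaty floor for its sensor class (30 cm for optical and video cameras, 50 cm for infra-red, 3 m for SAR). A toy version of that check - illustrative only, not part of any actual certification software - might look like:

```python
# Treaty ground-resolution floors in metres: images may not be
# finer (i.e. have a smaller ground resolution value) than these.
TREATY_LIMIT_M = {
    "optical": 0.30,   # frame, panoramic and video cameras
    "infrared": 0.50,  # infra-red line scanners
    "sar": 3.00,       # synthetic aperture radar
}

def within_treaty_limit(sensor_class, measured_resolution_m):
    """True if the measured ground resolution is no finer than allowed."""
    return measured_resolution_m >= TREATY_LIMIT_M[sensor_class]

# A 25 cm optical image would be too sharp; a 35 cm one is acceptable.
```

This inversion - certifying that imagery is *not too sharp* - is the opposite of normal remote-sensing practice, and it is why some cameras need deliberate image-degrading measures, as described below for the panoramic cameras.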
Fig. 5 (a) - A Boeing OC-135B Open Skies observation aircraft in flight. (b) - A Boeing OC-135B aircraft being prepared for flight - the "canoe" or bulge on the underside of the fuselage behind the front wheel of the aircraft undercarriage houses the SAR antenna. However the radome and antenna have since been removed from the aircraft. (Source: DTRA)
Fig. 6 (a) - The Russian Tupolev Tu-154M Open Skies jet aircraft. (b) - The Tupolev Tu-154M aircraft about to be boarded by a Canadian and U.S. inspection team prior to its Open Skies overflights over North America. (Source: Canadian Forces)
II - Observation Aircraft
The various types of aircraft that are used in military aerial reconnaissance operations play no part in Open Skies overflights. The use of high-flying U-2 "Dragon Lady" aircraft or high-speed, low-flying Tornado aircraft would give everyone quite the wrong impression about an Open Skies flight. Besides which, none of these aircraft could accommodate the monitoring teams from both the observed and observing countries as well as the flight crew. Similar remarks about accommodating these teams can be made about the small single- or twin-engined photographic aircraft that are used in civilian aerial mapping operations. So, based on purely practical considerations, the operational Open Skies aircraft are all multi-engined transport aircraft that have been modified to act as platforms for the imaging systems, with seating for a minimum of 14 to 16 persons.
(a) Jet Aircraft
The two major powers - the U.S.A. and Russia - together with Germany all opted to use long-range jet aircraft. In the case of the U.S.A., two Boeing 707 four-engined jet aircraft - labelled
as type OC-135B - were modified for the purpose (Fig. 5), while Russia and Germany each opted to utilize a single Tupolev Tu-154 tri-engined jet aircraft (Fig. 6). Russia is preparing to bring a new Tupolev Tu-214 twin-engined jet aircraft into service quite soon. All of these aircraft are capable of flying the Atlantic Ocean without refuelling and of undertaking long-duration flights across the vast lands of Russia and North America. Unfortunately the German Tupolev aircraft was lost in a mid-air collision with an American C-141 Starlifter cargo aircraft off the coast of south-west Africa in 1997, with the loss of both crews.
(b) Turbo-prop Aircraft
Turning next to propeller-driven turbo-prop aircraft: since most NATO countries operate the Lockheed C-130 Hercules long-range military transport aircraft, a group of ten of them - Belgium, Canada, France, Greece, Italy, Luxembourg, Netherlands, Norway, Portugal, Spain - formed the so-called "Pod Group". In this context, they share a single "pod", which is a modified C-130 fuel tank converted by Lockheed to accommodate a suite of frame, video and panoramic film cameras (Fig. 7). The "pod" is mounted under the wing of the
C-130 four-engined turbo-prop aircraft, which has a range of up to 5,000 km. The other Open Skies aircraft are all twin-engined turbo-prop types with a much shorter range. Several of the former Warsaw Pact countries - Bulgaria, Czech Republic, Hungary, Romania, Russia and Ukraine - have all used Antonov An-26 and An-30 survey aircraft (Fig. 8 (a)). Sweden uses a modified Saab 340 airliner and Turkey a CASA CN-235 transport aircraft (Fig. 8 (b)). After the loss of its Tupolev jet aircraft, Germany has used the Swedish Saab 340 and various aircraft from other countries to undertake its overflights. The U.K. uses a modified HS Andover military transport aircraft. The remaining Treaty countries do not operate their own observation aircraft. Instead they hire or lease a certified aircraft and imaging system from one of the other countries, or they make suitable arrangements with the country that will be overflown - the so-called "taxi" option that is permitted by the Treaty. Thus, for example, the U.K.'s Andover aircraft has been used for flights over Russia on behalf of Georgia (which does not possess a suitable aircraft) and for flights over Georgia on behalf of Russia. The Andover has even been used to fly over the U.K. on behalf of Russia, with a Russian observing team on board executing the Russian mission plan (Fig. 9)!
Fig. 7 (a) - A French C-130 Hercules turbo-prop aircraft about to undertake an observation flight over Bosnia. (Source: NATO-SFOR) (b) - The SAMSON "Pod" that is attached to the wing of C-130 aircraft. A video camera is mounted in the nose of the "Pod". Behind this, on the underside of the "Pod" are the windows for the KS-116A panoramic camera and the nadir-pointing KS-87B frame camera, followed by the two windows for the two KS-87B frame cameras pointing obliquely to the left and right of the flight line. (Source: Canadian Forces)
July/August 2007
27
Article
Fig. 8 (a) - The Antonov An-30 twin-engined turbo-prop survey aircraft used by Romania for Open Skies observation flights. (Source: Temesvari Archiv) (b) - The Turkish CASA CN-235 twin-engined Open Skies aircraft in flight. (Source: Turkish Air Force)
combining high ground resolution (at least around the nadir) with very wide-angle coverage of the ground. Quite a number of Open Skies aircraft carry these cameras, most of them manufactured in the United States by Recon/Optical, Fairchild, etc. Thus each of the American OC-135B aircraft has a KA-91 camera fitted, while both the SAMSON "Pod" and the Turkish CN-235 aircraft each utilize a KS-116A camera. The U.K. Andover aircraft has a KA-95B camera (Fig. 10 (a)). The Bulgarian An-30 aircraft has a British-made Vinten 900B panoramic camera. If these panoramic cameras are fitted with long focal length lenses and operated from low altitudes to get below the cloud cover, they will generate images with very high ground resolution
- much higher than the 30 cm limit that is set by the Open Skies Treaty. Thus these cameras may have to be fitted with special optical image-degrading filters to ensure that the resulting images do fall within the prescribed limits (Fig. 10 (b)).

(c) Non-Photographic Imagers
With regard to the non-photographic imagers that have been allowed by the Treaty to come into operation from January 2006 onwards, these have not proven to be attractive to most users. SAR imagers, with their all-weather and day/night capabilities, would appear to be very suitable for Open Skies operation. However, they are expensive, complicated and power-hungry. Furthermore, the ground resolution of SAR imagers is set by the Treaty at 3 m, which is regarded by most users as being too low to be useful. Indeed SAR images with a 1 m ground resolution will be available from the newly launched TerraSAR-X satellite. The U.S.A. developed its SAROS (Synthetic Aperture Radar for Open Skies) and fitted these to its OC-135B aircraft as early as 1994. The work was undertaken jointly by the Loral company and Sandia National Laboratories. However, according to the DTRA Web site, they have not been used to any great extent till now. Russia is also developing a suitable SAR, but this has not been certified so far. The situation regarding infra-red scanners is rather similar. Till now, few countries have opted to deploy these devices. Turkey is using the old Honeywell AN-AA5 infra-red line scanner. Russia is also developing its own IR scanner, but again this has not been certified as yet.

(d) Positioning & Navigation Systems
Besides the actual cameras, the aircraft are fitted with positioning and navigation systems to ensure that the Open Skies flights follow the planned paths and the photography is taken in the correct positions. Thus the American OC-135B aircraft are fitted with twin
Fig. 9 (a) - The U.K.'s Andover aircraft about to undertake a flight over Georgia on behalf of the Russian Federation. (b) - The operation of the cameras on-board the Andover aircraft is being monitored by Russian observers. (c) - The pre-flight inspection of the underside of the Andover aircraft being carried out in Lithuania - note the camera lens protruding just below the belly of the aircraft. (Source: U.K. Open Skies Unit)
Fig. 10 (a) - The KA-95B panoramic film camera used in the U.K. Andover aircraft, mounted on a cradle with its film magazine lying on the floor to the left. (b) - The image degradation filter used with the KA-95B panoramic film camera to ensure that the resolution of the resulting frame images falls within the 30 cm ground resolution limits of the Open Skies Treaty. (Source: U.K. Open Skies Unit)
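The relationship between flying height, focal length and ground resolution that makes these degrading filters necessary can be sketched roughly as follows. This is a simplification with hypothetical numbers, ignoring film type, motion blur and atmospheric effects; none of the values are taken from the Treaty documents:

```python
def ground_resolution_m(film_lp_per_mm, flying_height_m, focal_length_mm):
    """Approximate ground resolution of a film camera.

    Photo scale is focal_length / flying_height, so one resolvable
    line pair on the film (1 / film_lp_per_mm millimetres) covers
    (flying_height / focal_length) times that distance on the ground.
    """
    mm_on_film = 1.0 / film_lp_per_mm
    scale_factor = (flying_height_m * 1000.0) / focal_length_mm
    return mm_on_film * scale_factor / 1000.0  # metres on the ground

# A long lens flown at low altitude resolves far better than the
# Treaty's 30 cm limit, hence the optical degrading filters:
print(ground_resolution_m(50, 2000, 600))  # roughly 0.067 m
```

With a short lens, or from a higher altitude, the same film would stay well above the 30 cm threshold without any filter, which is why the problem arises specifically for long focal lengths at low level.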
Litton LN-92 integrated GPS/IMU systems for this purpose, together with a radar altimeter to provide precise measurements of the height of the aircraft above the ground. The U.K. Andover aircraft is fitted with a similar LN-92 unit and a pair of Garmin 420 GPS receivers. However, these are of the single-frequency C/A type rather than the more accurate dual-frequency type.

(e) Digital Imaging Devices
Currently a very important issue for the Open Skies Consultative Commission (OSCC) and its sensor working group is that of trying to accommodate, within the Treaty rules, the airborne digital imaging devices that have become so well established within the civilian mapping community over the last five years. In dealing with this matter, one should remember that the Treaty was negotiated during the period 1990-92, when digital imaging devices were in their infancy and photographic film cameras were used almost universally for aerial reconnaissance purposes.
Only digital video cameras were recognized for the purposes of the Treaty - including, quite remarkably, the Zeiss VOS-60 pushbroom line scanner under this heading! Besides these video devices, the infra-red and SAR imagers that gained approval could record their images on magnetic tape. However, the current types of digital frame cameras and pushbroom line scanners do not appear on the list of airborne imagers approved for use under the Treaty. In the meantime, during the period that has elapsed since the Treaty was signed, the manufacture of airborne photogrammetric and reconnaissance film cameras has almost ceased and their use is now in sharp decline. As a result of this development, some types of aerial photographic film are no longer made. Furthermore, the availability of spare parts and the lack of the technical knowledge and expertise needed to keep film cameras in service are beginning to be a problem. At a recent Open Skies seminar held in Berlin, at which the present authors made presentations about these new airborne
digital imaging systems, there seemed to be a general acceptance among the participants of the need to add them to the list of approved imaging devices. The difficulty lies in converting this overall consensus into the formal decisions that need to be made by the OSCC. Tough negotiations about the detailed proposals, approvals and certification of acceptable airborne digital imaging systems lie ahead!
Gordon Petrie is Emeritus Professor in the Dept. of Geographical & Earth Sciences of the University of Glasgow, Scotland, U.K. E-mail: Gordon.Petrie@ges.gla.ac.uk. Hartwig Spitzer is Emeritus Professor of Physics and Co-founder of the Center for Science & International Security (CENSIS) at the University of Hamburg, Germany. E-mail: hartwig.spitzer@desy.de. CENSIS Web site: http://censis.informatik.uni-hamburg.de
Practical Geodesy
In previous articles we have seen how to describe the shape of the earth and how to identify coordinates and heights. This is usually enough for the automated processing and storage of geographic data. But humans are a visual species and we want to see a map on paper or on a computer screen.

Projection
For day-to-day use we need a method to project the sphere (ellipsoid) onto a flat surface. We have already seen that this cannot be done without creating some kind of distortion. The trick is to keep the distortions to a minimum. Depending on the purpose, and therefore on which distortion needs to be minimal, there are three main types of projection: conformal (true-angle) projections, equivalent (true-area) projections and equidistant (true-length) projections.
By Huibert-Jan Lekkerkerk
Conformal Projections
In this type of projection, directions and angles are projected undistorted onto the map. Meridians and parallels will therefore cut each other at right angles. The most common conformal projection is the Mercator projection. Almost all sea charts are based on this projection, making it possible to plot compass courses directly onto the chart and vice versa.

Equivalent Projections
These projections display areas correctly. This does not necessarily mean that the length and width are displayed correctly; only that their product is displayed correctly. Geographers often use this type of projection since the (area) scale of the chart is constant throughout the chart and, as such, countries are displayed in their correct size, although not their correct shape. Examples of popular equivalent projections are the Albers and the Mollweide projections.

Equidistant Projections
In this projection, the distance between two points along one or more lines remains undistorted. Examples of equidistant projections are the Plate Carrée and the equidistant polar projection for displaying areas around the North and South Poles.
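As a concrete sketch, the forward equations of the normal Mercator projection can be written down in a few lines. This is a spherical-earth simplification; real sea charts use ellipsoidal formulas:

```python
import math

def mercator(lat_deg, lon_deg, radius=6371000.0):
    """Forward normal Mercator projection on a sphere.

    x grows linearly with longitude; y stretches toward the poles as
    R * ln(tan(pi/4 + lat/2)), which is what keeps angles undistorted
    (the conformal property). Returns (x, y) in metres.
    """
    lam = math.radians(lon_deg)
    phi = math.radians(lat_deg)
    x = radius * lam
    y = radius * math.log(math.tan(math.pi / 4.0 + phi / 2.0))
    return x, y

# The equator maps to y = 0 and distortion grows toward the poles,
# which is why polar regions look huge on Mercator charts:
print(mercator(0.0, 0.0))  # approximately (0.0, 0.0)
```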
The largest globe in the world, the Unisphere in New York (source: www.bretl.com).
Globe
The only correct image of the earth is obtained on a spherical surface such as a globe. This is the main reason why this method was the most popular way of portraying the earth until well after the Middle Ages. The advantage of the globe is that it represents the world in the
correct context; there are no distortions, distances are displayed correctly and continents have their true shape. There are, however, some disadvantages to using a globe. Distances are displayed correctly but are hard to measure. Determining an area is even harder due to the curvature of the globe, but perhaps the greatest disadvantage is that we need a huge globe to display small states correctly. Even on the largest globe in the world, the Unisphere in New York City, which has a diameter of 120 feet or 36.58 meters, a small state is only a few decimeters in length. This means that even a globe whose size is fairly impractical for day-to-day use can show no more detail than the average road map.
Projection Type
Another method for classifying projections is based upon the surface on which the projection is made. Again, there are three main
types of projection surface: the cylinder, the cone and the flat surface. A projection is constructed by drawing an imaginary line from the center of the earth towards the point to be projected on the earth's surface. By extending this line until it intersects the projection surface, the point is projected. When all the points have been projected onto, for example, a cylinder, the projection surface is cut open and spread out, thus creating the chart.
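The construction just described can be sketched for a cylinder tangent at the equator. This "central cylindrical" projection is shown only to illustrate the mechanism; the named projections discussed in this article use refined mathematics rather than this literal ray construction:

```python
import math

def central_cylindrical(lat_deg, lon_deg, radius=6371.0):
    """Project a point by extending the ray from the earth's centre
    until it hits a cylinder tangent at the equator, then unrolling
    ("cutting open") the cylinder into a flat chart. Units: km."""
    if abs(lat_deg) >= 90.0:
        raise ValueError("rays to the poles never intersect the cylinder")
    x = radius * math.radians(lon_deg)            # arc length along the unrolled cylinder
    y = radius * math.tan(math.radians(lat_deg))  # where the ray meets the cylinder wall
    return x, y

# Distortion grows rapidly away from the equator; at 45 degrees the
# point already sits a full earth radius up the cylinder wall:
print(central_cylindrical(45.0, 0.0))
```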
Cylindrical Projections
With cylindrical projections, a tube or cylinder is placed around the earth. Depending on the attitude of the cylinder with respect to the earth's axis, the projection is called a longitudinal (cylinder axis parallel to the earth's axis), transversal (axis parallel to the equator) or oblique projection (axis at a certain angle). The most important cylindrical projection is the Mercator projection. The longitudinal or normal Mercator projection is used for sea charts. The transverse Mercator is often used for countries, such as Germany, that are elongated in the north-south direction. The oblique version is used for elongated countries, such as Indonesia, that are not aligned fully north-south or east-west.

Universal Transverse Mercator
A special form of the transverse Mercator is the Universal Transverse Mercator (UTM) projection. This is a transverse Mercator projection with a number of fixed parameters. The earth is, for example, divided into zones that are each 6° wide and 8° high. The first longitudinal zone, number 1, starts at the date line (180° east/west) and has a so-called central meridian at 177° west. The first latitude zone starts at the South Pole and is assigned the letter A. Coordinates within a longitudinal zone are defined by the number of meters the point is east of the central meridian and north of the equator on a certain ellipsoid. The North Sea oil and gas industry uses this projection quite often in combination with the European Datum 1950 (ED50). A projection that is closely related to the UTM projection is the Military Grid Reference System (MGRS), used by armies worldwide. With this projection, coordinates are only given within a latitude zone.

Different projection methods and the resulting chart images. A: Longitudinal Mercator; B: Lambert conformal; C: Azimuthal polar projection.
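The fixed UTM zone parameters just described reduce to a few lines of arithmetic. This is a sketch of the longitudinal zone numbering only, not of the full transverse Mercator mapping:

```python
def utm_zone(lon_deg):
    """Longitudinal UTM zone number (1-60) for a longitude in degrees.

    Zone 1 starts at the date line (180 degrees west) and each zone
    is 6 degrees wide, so shifting by +180 and dividing by 6 counts
    how many zones lie east of the date line."""
    return int((lon_deg + 180.0) // 6) % 60 + 1

def central_meridian(zone):
    """Central meridian (degrees east, negative = west) of a UTM zone."""
    return -180 + 6 * zone - 3

# Zone 1 runs from 180 W to 174 W, with its central meridian at 177 W,
# matching the description in the text:
print(utm_zone(-179.0), central_meridian(1))  # 1 -177
```

The Netherlands and the U.K. North Sea sector, for instance, fall largely in zones 31 and 30, which is why North Sea coordinates are so often quoted as "UTM 31, ED50".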
Conical Projections
Conical projections use a cone as the projection surface. Depending on the actual projection, this cone touches the earth (one parallel) or cuts through it (two parallels). Conical projections are often used for countries that are elongated in an east-west direction, such as Belgium, which uses the Lambert projection. Another conical projection that is often used is the Albers projection.

Azimuthal Projections
Azimuthal projections use a flat projection surface where the projection is done from a central projection point. The projection point usually lies at the center of the earth but, for example, the stereographic projection uses a projection point that lies at the opposite side of the earth to the center of the projection. This type of projection is ideal for portraying areas where there is no preference for either a north-south or east-west direction, such as the area around the poles.
Huibert-Jan Lekkerkerk (hlekkerkerk@geoinformatics.com) is editor of GeoInformatics and a freelance writer and trainer in the field of positioning and hydrography.
Column
With its MSS sensor, Landsat produced digital image data with an 80 m footprint. The stage was thus set for digital image processing using single images for many years to come. In contrast, in the 1970s photogrammetry was still largely working with analogue and analytical devices and analogue film-based images, but preferably in stereo mode or even, especially in close-range applications, in multi-image modes. This situation changed dramatically when digital cameras, based on CCD sensor technology, became available at affordable prices. In parallel, computer technology continuously improved its performance in terms of CPU times, storage capacity and data-transfer bandwidth. So from the early 1980s photogrammetry went fully digital, slowly at first but steadily in the close-range field and later, from the late 1980s, also in the aerial imaging domain. However, until recently the processing platforms for satellite imagery, aerial and close-range images developed independently from each other. They even differed from each other to such an extent that images from one domain could not be processed with software of the others. This situation has changed. Nowadays we see the first systems which are capable of dealing with images of all domains, at least with respect to some functions. It is rather surprising that this trend towards unified software platforms has not emerged earlier, as we have been able to observe for quite some time a convergence of processing paradigms and methodologies, especially concerning the following factors: use of digital images, strict sensor models, multi-image acquisition and processing, 3D processing, sensor and data integration, and postprocessing functions. With the new generation of high-resolution satellite sensors (SPOT 5, ALOS PRISM, Cartosat, IKONOS, QuickBird), and more planned for the near future (GeoEye-1 with 41 cm spatial resolution, etc.), the issue of 3D
modeling is gaining much more prominence in satellite remote sensing as well. Photogrammetric techniques provide appropriate processing tools to achieve these tasks. On the other hand, radiometric analyses are also attracting more attention in photogrammetry. We observe that the originally different techniques in optical remote sensing and photogrammetry are converging strongly towards a unified concept. Photogrammetry and optical remote sensing have expanded their techniques greatly in recent years, mostly towards the joint goal of precise georeferencing and 3D modeling. This has opened many new fields of application. The pressing need for modeling and monitoring our 3D environment from terrestrial, aerial and high-resolution satellite images will have a tremendous impact on natural and man-made hazard monitoring, risk analysis, car navigation, location-based services, virtual tourism and many more novel applications.
Prof. Dr. Armin Gruen has been Professor and Head of the Chair of Photogrammetry and Remote Sensing at the Federal Institute of Technology (ETH) in Zurich, Switzerland, since 1984. He graduated in 1968 as Dipl.-Ing. in Geodetic Science and obtained his doctorate in 1974, both from the Technical University Munich, Germany. From 1981 to 1984 he was Associate Professor at the Ohio State University, USA. He has lectured at university level since 1969, with photogrammetry and remote sensing as major subjects. He has served on several commissions, committees and councils, and is currently Chairman of the ISPRS International Scientific Advisory Committee (ISAC).
Google Earth, Microsoft Virtual Earth, the latest and future mapping and robot rover missions on the Moon and Mars, and other activities with great publicity have helped tremendously in making the public aware of the discipline of Geomatics, which is generating these datasets. The issue of Digital Earth is on many people's agendas. Already now, but even more in the near future,
we are and will be overwhelmed by huge amounts of images, emerging from satellite, aerial and terrestrial platforms. A good deal of these images will have to be processed quantitatively using photogrammetric techniques. This is why we see a very bright future for a joint approach to photogrammetry and optical remote sensing in R&D, in education and with respect to business opportunities. After some years of stagnation and even recession, we diagnose a widespread shortage of experts in the geo-related imaging sciences, be it at university level, in government agencies or with system developers. Even China, a country with seemingly unlimited personnel resources, which has provided the world with capable young researchers and professors in the past, is now in urgent need of qualified experts. For photogrammetry and remote sensing it is time to emerge from a period of transition into an integrated technology, providing support and solutions for many problems of our time.
Michael F. Goodchild
Granular Computing
The opening keynote speaker, Professor Lotfi Zadeh from the University of California, Berkeley in the USA, spoke about fuzzy logic and especially granular computing. Note that
it is not the logic that is fuzzy, but the data the logic deals with. Zadeh: Granular computing is computing with uncertain, imprecise and partially true data. In granular computing, the objects of computation are not the values of variables but data on the values of variables. Examples include a category or a matter of degree. As an illustration, say we want to know the age of a woman named Vera. We know she has a son who is about 25 years old and a daughter who is about 35. Furthermore, we know that the child-bearing age varies from about 16 to about 42. Given this information we can estimate Vera's age. Imprecision, uncertainty and partiality of truth are characteristics of the real world. The need
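Zadeh's example can be imitated with crisp interval arithmetic, a crude stand-in for true granular computing: real granules carry fuzzy membership functions rather than hard bounds, and the widths chosen below for "about 25" and "about 35" are arbitrary assumptions:

```python
def granule_sum(a, b):
    """Interval sum: if x lies in a and y lies in b, then x + y lies here."""
    return (a[0] + b[0], a[1] + b[1])

def granule_intersect(a, b):
    """Combine two independent constraints on the same quantity."""
    return (max(a[0], b[0]), min(a[1], b[1]))

child_bearing = (16, 42)   # age at which either child could have been born
son = (23, 27)             # "about 25", hardened into an interval (assumption)
daughter = (33, 37)        # "about 35" (assumption)

# Vera's age must satisfy both child-derived constraints at once:
vera = granule_intersect(granule_sum(son, child_bearing),
                         granule_sum(daughter, child_bearing))
print(vera)  # (49, 69)
```

The son alone only bounds Vera's age to 39-69 and the daughter to 49-79; combining the two granules narrows the estimate to 49-69 without any single precise input.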
Semantics
There were three papers on semantics, a subject not generally thought of in connection with spatial data. But not all data are numeric or digital. Words and sentences also have to be translated for different communities that require reference systems for geospatial information. There is an increasing number of applications that rely on 3D geoinformation, including noise mapping, disaster management, architecture and city planning. These applications frequently require complex semantic information from many different sources, often thematically and spatially fragmented. They differ in quality and semantic aspects. Prominent examples involving inconsistencies are flying or drowning houses, where a digital terrain model and 3D models from different sources have been combined. The integration of spatial data with complex object descriptions is a challenge and a chance at the same time. The more information is provided by the semantic layer, the fewer ambiguities remain for possible geometric integrations. When a building and a terrain are joined, the lower edge of a polygon marked "door" needs a surface to step on, either the terrain or a staircase.

Consumers
Semantics is also used for communicating with consumers about data and the quality of data. As Anna T. Boin and Gary J. Hunter from the University of Melbourne in Australia put it, spatial databases are more readily accessible to the general public, and every day more people use the World Wide Web to decide whether a dataset is suitable for them. Findings from consumer opinion show that if they use a website to choose data, and if the website provides the information clearly, they are more interested in determining the content of a dataset. In essence, the word quality may not consciously come to mind at all. When using the data and finding mismatches with the real world or another dataset, quality is important. There is therefore a requirement to communicate more explicitly what a dataset contains and to raise awareness of internal quality. This should influence the way datasets are presented on the internet and thus make data more accessible, more understandable and more valuable to the everyday person looking for information.

Metadata on Quality
One of the five keynote speakers was Michael F. Goodchild from the University of California, Santa Barbara, in the USA. Goodchild: Worldwide, data quality statements are now entrenched in metadata standards. I contrast the needs of the user with the production-control mechanisms of the producer, and argue that metadata standards are producer-centric. To the user, the ability of data sets to interoperate is of major concern. The experience of prior users and the accessibility of quality statements are of interest. And the user wants the quality information to be handled easily in local software. The focus of discussion has been on data quality; however, the increasing sharing of data is prompting a demand for more comprehensive and thus more complex metadata. Goodchild: Metadata have the potential to exceed data in sheer volume, so it is reasonable to expect that as much effort will be spent documenting data as in compiling them. Whether data or metadata, numeric or semantic, fuzzy or precise, it is clear that we are and should be concerned about quality.

Job van Haaften (jvanhaaften@geoinformatics.com) is editor of GeoInformatics. For more information about the conference: www.itc.nl/issdq2007

Lotfi Zadeh
The book also reminded me of how people can be lured into a false sense of security by today's mainstream, location-based technology. The apparently simple yet intelligent devices hide away all the complexity, including geodesy. The risks are understood by only a few people. But often it is the simplest mistakes that have the greatest consequences. Recently a squadron of brand-new F-22 jet fighters lost their avionics systems en route to Japan from Hawaii. Rumour has it that the planes had suddenly accelerated to the speed of light, crashing all onboard computers. It turned out that somewhere near the date line, an unknown force had instantaneously propelled the jets from -180° to +180° longitude. This was a phenomenon unknown to the onboard navigation systems, so the $100 million planes had to be escorted back to base by old tankers which did not have this problem. A couple of years ago, an Asian oil company dispatched a rig to the Indian Ocean to drill a well at a given location near the equator. It should be noted that these are high-tech operations that push the envelopes of engineering, logistics and economics. But somehow along the way, latitude north turned into south, and they ended up drilling 400 kilometers off-target. An expensive mistake: they could have bought an F-22 for the same money. Not that it would have made any difference. There are many more examples like these, and the fun really begins when datum shifts get involved. These mysteriously move targets by tens or hundreds of meters - a difference often small enough to remain undetected, yet big enough to cause disaster. Tourists standing on the Prime Meridian in Greenwich might only suffer bemused head-scratching when their GPS tells them they are 100 meters off 0° longitude. But it
becomes a different story altogether when you fall off a cliff which strangely misplaced itself in thick fog. Or you find yourself at the wrong side of a sensitive maritime boundary, or the victim of a smart bomb targeting the right coordinates in the wrong reference system. The list is endless.
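The date-line jump behind the F-22 story has a well-known remedy: normalize longitudes, and longitude differences, into a single wrap-around range instead of subtracting raw values. A minimal sketch:

```python
def normalize_lon(lon_deg):
    """Wrap any longitude into the range (-180, 180]."""
    lon = lon_deg % 360.0
    return lon - 360.0 if lon > 180.0 else lon

def lon_difference(a_deg, b_deg):
    """Smallest signed east-west difference between two longitudes.

    Naive subtraction across the date line reports a jump of almost
    360 degrees; wrapping the difference removes the discontinuity."""
    return normalize_lon(a_deg - b_deg)

# Crossing the date line: raw subtraction says 359.8 degrees,
# the wrapped difference correctly says 0.2 degrees west:
print(round(lon_difference(179.9, -179.9), 6))  # -0.2
```

Navigation code that fails to do this at every comparison point is exactly the kind of "simplest mistake" the column describes.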
Thierry Gregorius (Thierry.Gregorius@shell.com) is Head of Data Management and Geomatics for Shell's global exploration directorate, based in the Netherlands. The views in this column are entirely personal.
With GIS and GPS now hidden in mainstream applications, and interoperability growing exponentially, a small geodetic mistake can produce huge consequences. For example, if left unchecked, the INSPIRE directive could turn Europe into the world's tectonically most active region, with geological fault lines suspiciously coincident with national boundaries. This brings back early memories from when I had to make a map of North Sea operations with data from all the adjoining countries. To my relief, the pipelines did connect at national borders. But only if I didn't zoom in too much. Most geodetic mistakes, one would assume, have so far been committed in good faith. It may only be a matter of time until someone with less honorable intentions goes to the trouble of actually understanding geodesy. Simple: deliberately insert a few subtle mistakes at the right places and right times, and wait for the chain reaction to happen. Intelligent mis-positioning by a geodetic terrorist. Make sure you don't become a suspect.
Part 2: Metadata
Metadata
What it is for: Describing the content and origin of datasets with respect to searching, discovering and using these datasets.
Standards in Practice
When building a geo-information infrastructure, specialists tell us that careful attention should be given to metadata. But what is metadata, how does it work, and what standards apply?
Relevant standards: ISO 19115; INSPIRE Implementing Rules for metadata; Dublin Core.
Technical implementation: XML as exchange format; ISO 19139 when using metadata in geo-services.
Legal basis: INSPIRE Directive (for specific datasets managed by governments).
By Huibert-Jan Lekkerkerk
Use of metadata for searching, discovery and use of (geographic) datasets in a geo-information infrastructure.

There is much discussion about the exact definition of metadata. The most common remark is that metadata is information about information. This, however, does not solve the problem. Say we want to describe a building. To the land surveyor who is contracted to survey the building, the main body of data he collects are the coordinates. All other data, such as the quality, specifications etc., are metadata. But the project developer will probably need additional data such as build quality, and will consider this the main body of data.
Metadata is usually collected with a specific purpose in mind. Common purposes are the discovery and use of datasets. The metadata is then used to answer questions such as the who, what, where, when and why of a dataset. This article will only consider metadata used for the discovery and use of (geographic) datasets. It does not consider metadata for single objects; these will be discussed later in the series.

Metadata in Geographic Practice
The use of metadata is as old as the publication of geographic data. As long as charts have been published, they have had extensive legends. With the introduction of digital data, the metadata concept was forgotten. Many organizations switched to automated systems in which it was not possible to store metadata with the dataset. The metadata was then added to the legend in the publishing phase, when the paper charts were printed. With the introduction of GIS, this changed again. Information was now acquired digitally and, after conversion, stored in the GIS as a digital product, usually with a limited set of metadata. When, after a period of time, the dataset then needs to be used or published, it is difficult to obtain information about the purpose of the original dataset. A number of programs are available to create and manage metadata, but they are usually only used for the central archive and not for the local, specific datasets within an organization.

Metadata Standards
If missing metadata poses a problem within a single organization, then imagine what it is like within a geo-information infrastructure. Within an organization one can usually obtain the metadata, albeit with some difficulty, but when publishing to the outside world this is no longer feasible. For this reason the EU INSPIRE directive has laid down a set of implementing rules for the use of metadata. In these implementing rules an obligatory set of core metadata is defined for specified government-managed datasets. Individual organizations can then choose to extend this minimum set to cover the needs within a specific organization or country.
The INSPIRE metadata core set and most national core sets are based on the ISO 19115 metadata standard. This standard consists of an extensive library of metadata elements of which a few are marked as obligatory. The INSPIRE
implementing rules have selected additional elements from the ISO 19115 library and made these obligatory as well, thus creating a new core INSPIRE set. Besides the ISO 19115 metadata standard for geographic information, there is another important metadata standard, Dublin Core (DC). This is a limited, general set of metadata that can be used for a myriad of datasets and information sources. Dublin Core is generally used within governments and is not exactly equal to the ISO 19115 elements. The two standards can, however, be mapped onto each other in such a way that both can be used in a combined environment.
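Such a mapping between the two standards can be pictured as a simple crosswalk table. The element paths below are illustrative simplifications, not the normative Dublin Core to ISO 19115 mapping:

```python
# Illustrative crosswalk between a few Dublin Core elements and
# simplified ISO 19115 element paths (assumed names, not normative).
DC_TO_ISO19115 = {
    "title":       "identificationInfo.citation.title",
    "description": "identificationInfo.abstract",
    "creator":     "identificationInfo.pointOfContact",
    "date":        "dateStamp",
    "coverage":    "identificationInfo.extent",
}

def to_iso(dc_record):
    """Re-key a Dublin Core record with ISO 19115 element paths,
    keeping unmapped elements under their original names."""
    return {DC_TO_ISO19115.get(key, key): value
            for key, value in dc_record.items()}

print(to_iso({"title": "North Sea pipelines", "date": "2007-07-01"}))
```

A real combined environment would also need the reverse mapping and rules for elements that exist in only one of the two standards.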
How it Works
The principle behind a metadata standard is very simple. Users use specific software to enter the necessary list of metadata for a specific dataset. Examples of metadata elements from the INSPIRE core set are:
- Title of the dataset
- Temporal elements, for example the acquisition and publication dates of the dataset
- Geographic boundary of the dataset
- Subject of the dataset
- Keywords
- Contact details for the responsible organization
- Abstract of the contents
- Details about the web service where the dataset can be obtained / used
If the metadata software is integrated with the GIS system, some elements, such as the geographic boundary, are automatically generated from the dataset. Other elements will remain constant within an organization and only need to be filled out once.
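A record containing such elements might be generated along the following lines. The element names here are simplified placeholders, not the actual ISO 19139 tags and namespaces, and the dataset values are invented for the example:

```python
import xml.etree.ElementTree as ET

def make_metadata(title, abstract, bbox, keywords):
    """Build a minimal, simplified metadata record as an XML tree.

    bbox is (west, east, south, north) in decimal degrees. The tag
    names are illustrative only; a real record must follow the
    ISO 19139 schema so that it can be validated for integrity,
    structure and missing elements.
    """
    record = ET.Element("metadata")
    ET.SubElement(record, "title").text = title
    ET.SubElement(record, "abstract").text = abstract
    extent = ET.SubElement(record, "geographicBoundingBox")
    for name, value in zip(("west", "east", "south", "north"), bbox):
        ET.SubElement(extent, name).text = str(value)
    kw = ET.SubElement(record, "keywords")
    for word in keywords:
        ET.SubElement(kw, "keyword").text = word
    return record

rec = make_metadata("North Sea pipelines", "Pipeline centrelines, 2007.",
                    (2.0, 8.0, 51.0, 62.0), ["pipelines", "North Sea"])
print(ET.tostring(rec, encoding="unicode"))
```

Because the result is well-formed XML, a catalogue can parse it back, check it against a schema and index the bounding box for geographic searches.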
Metadata Catalogue
After being created, the metadata can be published to a so-called catalogue. Users can search through the catalogue for datasets that suit their purpose. The searching process is not unlike that of an Internet search engine, the main difference being that one can define geographic search parameters as well. Once a dataset is discovered, the information on the web service is used to get the dataset. Most geographic portals can display the information in a viewer, so that users think they have direct access to the data from the portal. The service can, however, also be used to view the dataset in one's own GIS if the publisher permits.

Technical Format
In order to publish metadata in a standardized way, it is important not only to define the content of the metadata file but also to define the technical format. Both Dublin Core and ISO 19115 use XML as the technical format. The advantage of using XML is that the metadata files can be checked for integrity, structure and missing elements. XML is furthermore a W3C (Internet) standard and can be read easily by a number of software packages. The exact way in which ISO 19115 metadata elements need to be put into an XML format suitable for web services can be found in another standard: ISO 19139. Although XML can be generated with any generic text editor, it is advisable to use specific management software that takes information from the internal geographic database and converts it to XML.

Relationships between the various standards and sets of metadata for geographic information (source: Geonovum; adapted).

Legal Basis
For a number of government-managed data categories, INSPIRE states that metadata for new datasets has to be available by 2011. Beyond this legal basis, it is always necessary to have metadata available if datasets are published to the outside world. The mandatory (core) set of metadata elements is relatively small. As such, implementation is not a matter of difficult and extensive software development but more one of backlogged datasets. In the past, many organizations have invested in numerous datasets that lack adequate metadata. Bringing these up to date will cost a lot of time (and therefore money).

Huibert-Jan Lekkerkerk (hlekkerkerk@geoinformatics.com) is editor of GeoInformatics and a freelance writer and trainer in the field of positioning and hydrography.

38
July/August 2007
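The workflow described above (fill in a small core set of elements, then serialize them to XML so that structure and missing elements can be checked) can be sketched in a few lines. The element names below are simplified for illustration only; they mirror the INSPIRE core elements but do not follow the actual ISO 19139 schema, which has its own namespaces and nesting.

```python
# Sketch of serializing a core metadata record to XML.
# Element names are illustrative, NOT the real ISO 19139 schema.
import xml.etree.ElementTree as ET

def build_metadata(record):
    root = ET.Element("metadata")
    for key in ("title", "abstract", "keywords", "contact", "service_url"):
        ET.SubElement(root, key).text = record.get(key, "")
    # The geographic boundary would, in practice, be derived
    # automatically from the dataset by the GIS.
    bbox = ET.SubElement(root, "boundary")
    for corner, value in record["bbox"].items():
        bbox.set(corner, str(value))
    return ET.tostring(root, encoding="unicode")

xml_doc = build_metadata({
    "title": "Example dataset",
    "abstract": "Demonstration record",
    "keywords": "inspire;example",
    "contact": "info@example.org",
    "service_url": "http://example.org/wms",
    "bbox": {"west": 3.3, "east": 7.2, "south": 50.7, "north": 53.6},
})
print(xml_doc)
```

Because the result is well-formed XML, a catalogue can validate it and flag missing elements, which is exactly the advantage the article attributes to the XML-based formats.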
Conference Programme
The conference programme has attracted well-known industry experts from organisations such as ESRI, Pitney Bowes MapInfo, Oracle, Bentley, Infoterra, Navteq and many others. Delegates can also attend hands-on workshops led by GI professionals, participate in open debate sessions, network with peers and view the latest product and service offerings at the Sponsor Showcase Exhibition. Two plenary keynote speakers will present at the beginning of each
Special
Comparing
A view inside a 3D model of a house from a 1970s science fiction cartoon. Printed by a ZPrinter 450.
Latest Technologies
Comparing printers is certainly difficult. Cost and speed are not the only things that matter; the reasons for choosing a certain printer can be complex. With this information we hope to assist our readers in choosing the printer that best suits their requirements and circumstances. Every specification can matter: paper width, the media that can be used, supported print languages, drivers, minimum line width, print resolution and compatibility with, for instance, scanners and folders. For Canon and Océ you can read about the experiences of a user of one of their printers. HP's worldwide product manager for the Designjet T-series, Carles Magrinyà, tells about the company's latest technologies and developments in the market. Also included is a table with specifications of two KIP large format printers, which use an LED laser technology that is quite different from the technology used in inkjet printers. Nevertheless, they are large format printers and we very much wanted to include them in our article.
instance a roof on a building. It enables the user to make larger models than the printer can handle. The printer can also print multiple models at the same time by stacking and nesting parts. The model is built in layers using a high performance composite powder, a liquid binder and ink. It is fast, and that reduces costs. The printer has no problems with parts that hang over, protrusions or chains. The printer can even produce a structure like medieval chainmail.
Contents
Large Format Printers — 42
People Want to Get Their Hands on It — 46
Make Three from One — 49
3D Model Helps Coastal Management — 52
Thematic Maps on Request — 56
Interview
HP's Graphic Arts Summit took place in Rome last May to present new technologies in their new printer series, which include the HP Designjet T610 and T1100, and the Designjet Z6100. HP employees, managers and engineers came from all over the world to explain the new large-format printers to resellers, dealers and the press. GeoInformatics spoke to Carles Magrinyà, worldwide product manager for the HP Designjet T-series, based in Barcelona, Spain, about the company's latest technology. Magrinyà: "The world is interested in our innovations, and we had hundreds of resellers in Rome."
Technical Developments
"In the near future I expect developments in inks. We're checking on the technologies we are using, improving the good things and discarding the bad things. We now produce inks that are more resistant than before: they have a better level of water resistance and protection against fading. At the moment we are searching for pigments to improve the quality of printing on coated papers. The performance on coated papers was not close enough to what we wanted, although we had already established good colours on plain paper. Improvements in durability and in performance on coated papers: that's what we expect in the near future. There are also advances in design, workflow management and better paper quality."
Evolution
Magrinyà continued: There is quite a difference in evolution between architecture and civil engineering on the one side and mechanical design on the other. In civil engineering and architecture the software itself develops; vendors make more powerful software and users want information on the parts they use. More and more, the components are being assigned to libraries, as they are in mechanical design, where this change has already happened. Another
Overlap
The overlap of GIS with CAE (Computer Aided Engineering) and CAD (Computer Aided Design) is increasing. There is a lot of blurring or overlap with GIS, as most of the topography in GIS is used for constructing infrastructure or buildings. This is particularly evident between
CAE and GIS. CAD and GIS have less overlap. Users want more than a map that shows just altitude, land or sea, rock or sand. They prefer an aerial-photograph-like presentation with a layer on top that combines various kinds of information. That requires photorealistic output, so you can recognize familiar parts of the area and structures like buildings and bridges. This kind of output is a lot more recognizable than a map. This is a choice that didn't exist before, but now, with the improvement in printers, we can deliver the goods. This in turn creates the need for the delivery of still better output.
Controlled Centrally
Consequently, officials at the State Office defined the special requirements for the new solution. According to Volker Karnahl, divisional head at the Thuringian State Office for Surveying and Geographical Information in Erfurt, a multi-roll system with three paper rolls was required in order to cover all standard paper formats in accordance with the German Institute for Standardisation (DIN). In addition, the printing process had to be controlled centrally by print management software to ensure maximum output efficiency, coupled with the flexibility to handle numerous file formats. The State Office placed particular importance on low ink consumption. The solution also had to be able to process different print sizes, from A0 to A4, with little administrative overhead. "We wanted to guarantee the automatic selection of paper sizes in order to minimise waste, as well as to offer a central application for the preview and printing of all customised plotter and image formats," says Volker Karnahl. With the integration of a colour scanner, the entire solution became a modern reproduction and archive system.
…manager at Microbox. "Thanks to its client-server structure, the CAD Station, developed by Microbox, transmits all printable file formats generated by a single application to the respective printer, including print preview. The server automatically defines the correct output device, thereby minimising paper waste and rework times. The printer will be connected to the CAD Station and will subsequently function as a virtual multi-roll system which, in many respects, outclasses conventional multi-roll systems," says Michael Holtij. Initially, officials at the State Office of Surveying had the opportunity to test the efficiency of the solution at Microbox GmbH's annual in-house exhibition in Bad Nauheim at the end of September 2006. "Subsequently, we linked a Canon iPF500 system via the CAD Station as an experiment in Erfurt. All locations could directly produce test prints via the test system," recalls Holtij. By the end of November, the State Office had decided in favour of the Canon and Microbox solution. By the end of 2006, all ten locations of the State Office for Surveying had been equipped with a total of 30 systems. Today, each location has access to two Canon imagePrograf iPF700 systems and to one iPF500 system. In each case, the systems are connected to a printing server with a Microbox CAD Station. Two of these locations were additionally equipped with a large format colour scanner which allows the output plans to be scanned into an archive system in colour.
Advantages of the virtual print solution at the Thuringian State Office for Surveying and Geographical Information in Erfurt, Germany.
- Print capacity of up to 350 mixed documents in formats A4 to A0 per hour
- Automatic format rotation and distribution to the corresponding output device
- Resolution up to a maximum of 2400 dpi
- Print tools process the file formats HPGL/HPGL2, HPRTL, TIFF, JPEG, CALS, BMP, PDF/PS and DWG/DXF
- No separate viewer required
- Standard RAM of 1 gigabyte; 2 gigabytes optional
- Further application options due to Canon's 5-colour system
- Extremely high operating efficiency due to very low ink consumption
- High print-head durability
- Germany-wide service provided by Microbox GmbH
…controlled by the Microbox CAD Station. Nowadays, experts at the State Office are able to process mixed print jobs in a single seamless process. All print jobs generated by the diverse applications are automatically distributed to the corresponding printer; the user can also select the method of output manually. In addition, the Microbox CAD Station employs a rotation system, which allows each plan to be printed continuously on the rolls in the correct positional arrangement. The software also provides a standard cost analysis by means of detailed accounting information. As well as reducing paper consumption, the multi-print solution accelerates the print process, which currently works three times faster than before.
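The routing behaviour described here (each job automatically sent to the output device that matches its format, with a manual override possible) boils down to a dispatch table. The device names and the format-to-device mapping below are invented for illustration; the real CAD Station derives its rules from the rolls actually installed.

```python
# Sketch of format-based print routing, as in a virtual multi-roll system.
# ROUTES is a hypothetical mapping; real print management software would
# build it from the rolls and trays actually installed.
ROUTES = {
    "A4": "iPF500",    # small formats to the desktop-class device
    "A3": "iPF500",
    "A2": "iPF700-1",  # large formats to the roll printers
    "A1": "iPF700-1",
    "A0": "iPF700-2",
}

def route_job(pages):
    """Group the pages of a mixed job per output device, keeping order."""
    per_device = {}
    for fmt in pages:
        device = ROUTES[fmt]  # an unknown format would raise KeyError
        per_device.setdefault(device, []).append(fmt)
    return per_device

print(route_job(["A4", "A0", "A1", "A4"]))
```

Grouping pages per device is what lets a mixed job print without anyone swapping rolls in between, which is the gain the article reports.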
Thanks to this virtualisation and the operation of three printers at each location, many processes at the State Office for Surveying and Geographical Information have been optimised. "Never before has printing been as simple as it is today," confirms divisional head Volker Karnahl. This is proven in the everyday processes: as a result of the virtual print solution implemented in Erfurt, the user no longer has to wait during a mixed print job until a format has finished printing in order to exchange the various roll formats and restart the printing process. "Of course, this would be the case when running a single multi-roll system. And it's a relatively laborious procedure," explains Wilko van Oostrum, product manager for LFP systems at Canon Germany.
In Public Consultations
There are really no limits to the number of tiles that can be used to produce the final model.

Sarah Clark and Graeme Smith of the Teign Estuary Partnership with a physical model of the Teign Estuary.
estuary, balancing different interests, protecting natural resources and pursuing opportunities for improvement.
Composite Powder
The physical model created by BlueSky was generated by combining aerial photography with ground measurements to produce a 3D computer model of the estuary. Just as a standard desktop printer produces a hard copy
replica of a document, the Contex 3D printer produces a physical model of the computer-generated design. Proprietary software slices the computer design into thousands of ultra-fine layers that are then individually printed by spreading a sub-millimetre-thin layer of composite powder onto a base. The model is then built up with subsequent layers of powder that are fixed together using a liquid binder.
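The slicing step can be illustrated with the simplest possible model, a terrain stored as a grid of heights: a grid cell receives binder in a given layer if the terrain at that cell reaches that layer's elevation. This is only a sketch under simplifying assumptions (real slicers work on arbitrary watertight meshes, and the 0.1 mm layer thickness is chosen purely for illustration):

```python
# Sketch: slicing a height-grid model into printable layers.
# Heights are converted to whole layer counts first, which avoids
# floating-point edge cases at layer boundaries.
def slice_heightfield(heights_mm, layer_mm=0.1):
    counts = [[round(h / layer_mm) for h in row] for row in heights_mm]
    n_layers = max(max(row) for row in counts)
    # Layer k (bottom to top) contains every cell taller than k layers.
    return [[[c > k for c in row] for row in counts]
            for k in range(n_layers)]

# A tiny 2 x 3 "terrain" with heights in millimetres:
masks = slice_heightfield([[0.3, 0.1, 0.0],
                           [0.2, 0.3, 0.1]])
print(len(masks))  # 3 layers for a 0.3 mm tall model
```

Each boolean mask corresponds to one pass of the printer: powder is spread everywhere, but binder is printed only where the mask is true.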
ZPrinter 450
The model is built in layers consisting of a high-performance composite powder, a liquid binder and ink. It is fast, and that reduces costs. The printer has no problems with overhanging parts, protrusions or chains. A vacuum and vibration system automatically removes as much as 80 per cent of the loose powder and recycles it for future use. The remaining powder can be removed with lightly compressed air in a fully enclosed chamber that vacuums away the powder. Unlike many other systems, there are no physical support structures to remove with scraping tools. The Z450 produces realistic colour models without paint. The model is finished and strengthened by dipping it in Z-Bond sealant, dripping Z-Bond over the model or lightly brushing it with Z-Max.
Some Specifications
The Z450 has two print heads, one tricolour and one clear. Colours can be printed at a maximum resolution of 300 x 450 dpi. Files in the STL, VRML and PLY formats can be printed directly. The printer is compatible with Windows 2000 Professional and Windows XP Professional. The minimum layer thickness is 0.089 to 0.102 millimeters; certain data may have to be modified when this minimum thickness is not attained. The printer produces two to four layers per minute at a thickness of 0.089 to 0.102 millimeters. The size of a model can be up to 203 x 254 x 203 millimeters (8 x 10 x 8 inches); a larger model can be split into adjacent parts.
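The specifications above translate directly into rough build-time and tiling estimates. The helper below simply reuses the Z450 numbers quoted in the text (0.089 to 0.102 mm layers, two to four layers per minute, a 203 x 254 x 203 mm build volume); treat the results as back-of-the-envelope bounds, not vendor data.

```python
import math

# Z450 figures as quoted in the article.
LAYER_MM = (0.089, 0.102)    # thinnest / thickest layer
LAYERS_PER_MIN = (2, 4)      # slowest / fastest build rate
BUILD_MM = (203, 254, 203)   # build volume (x, y, z)

def build_time_minutes(height_mm):
    """Rough (worst, best) print time for a model of the given height."""
    worst = math.ceil(height_mm / LAYER_MM[0]) / LAYERS_PER_MIN[0]
    best = math.ceil(height_mm / LAYER_MM[1]) / LAYERS_PER_MIN[1]
    return worst, best

def tiles_needed(model_mm):
    """Into how many adjacent parts must a larger model be split?"""
    return math.prod(math.ceil(m / b) for m, b in zip(model_mm, BUILD_MM))

print(build_time_minutes(100))        # 100 mm tall model
print(tiles_needed((400, 500, 100)))  # 2 x 2 x 1 = 4 adjacent parts
```

Even the fast bound runs to hours for a full-height model, which is why splitting a large model into tiles and printing several parts at once matters in practice.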
Spectrum Z510
The Spectrum Z510 from Z Corporation goes further. The model size increases to 254 x 356 x 203 millimeters (10 x 14 x 8 inches). The material used to produce the model is a flexible composite (elastomer). The maximum resolution is 600 x 540 dpi, and the printer has four print heads. Its 24-bit colour accurately reflects the original data. The software provides texture mapping, feature colouring, annotation and labelling capabilities.
This technology was used mainly to produce mechanical prototype parts, allowing the rapid production of quality parts in relatively small numbers.

How has 3D printing evolved? 3D printers are generally faster, more affordable and easier to use than other rapid prototyping technologies. With cost reductions and faster processing, automated 3D model production is becoming viable for a much wider market.

How does it work? A 3D printer uses a gypsum- and glycerine-based composite (gypsum is the mineral that plaster is based on). Layers of the fine powder are selectively bonded by printing a water-based adhesive from the inkjet printhead in the shape of each cross-section, as defined in the 3D image file. This technology also allows full-colour printing, with ink applied layer by layer as soon as the powder is laid.

What software is required? Like conventional printers, 3D printers are designed to print using industry-standard outputs, in this case from CAD, GIS and 3D visualization software. It is essential for the models to be 100% complete and volumetrically sealed in order for 3D prints to be successful.

What if I just want a model of an existing geographical area? Aerial mapping exists for many areas, and some aerial photography companies offer a 3D visualization service. Here the aerial photography is basically draped over a digital elevation model to create a 3D model that is a true representation of the landscape or cityscape.

Can additional information be printed on the model? Yes, additional information can be added prior to printing. A good example is contour data, which can be generated automatically from the elevation data. Symbols, zones and proposed roads or developments, for example, could also be added.

Can you alter the 3D models from the actual view of the landscape? Yes, the original data itself can be modelled by software. This would allow a sequence of models to be produced showing change over time, for example a quarry or landfill with extraction stages and final reinstatement, or an expanding property development.

How big can models be? The size of an individual 3D model depends on the 3D printer used. Each model can, however, be part of a larger model, in effect forming a tile, and there are really no limits to the number of tiles that can be used to produce the final model.

How accurate are the models? The 3D print will be as accurate as the digital model provided and, with a resolution similar to that of a laser printer, every detail can be printed. The scale will, of course, have a big impact on exactly how much detail can be seen.

How durable are the models? As soon as printing is finished, the models are set in resin and oven-baked so they handle like baked clay or pottery. They can be picked up and handled in small groups, but it is recommended that they are fixed down when on public display.

Is it best to buy a printer or to use a service bureau? 3D printers are still relatively costly, and their operation requires care and fully trained operators with 3D modelling experience. It makes sense to take advantage of the 3D printing services on offer. However, if you have the resources and the demand, purchasing a 3D printer is certainly worth considering.

Job van Haaften (jvanhaaften@geoinformatics.com) is editor of GeoInformatics. With special thanks to BlueSky, Contex and Z Corporation. Listed below are several URLs for additional information: http://en.wikipedia.org/wiki/3D_printing www.bluesky-world.com www.zcorp.com www.teignbridge.gov.uk www.contex.com/3dprint
From left to right: Lex Rietbroek, Ben van Eijk and Jeroen Rijkse next to a recently printed city map.
Lex Rietbroek: "The predecessors of this Océ printer were written off and we were keen to get the newest Océ machine. The municipality no longer has a manned repro department, and it therefore became necessary for us to purchase a very user-friendly machine which can also be used by people with little knowledge of it. We can of course outsource our repro work, but that can easily take up to a week, while we often have to react quickly to a request by an alderman or a project manager, as the drawing is often needed the same day. This is why being able to operate the printer unmanned is an important condition. Everyone here can now print something from their own workstation and does not have to be an expert in order to replace a roll or an ink cartridge. Another important condition is the high reliability of the system."
In a Multiple of Six
One of the parts of this branch is the department of construction and housing supervision, continues Rietbroek. Drawings often have to be printed in multiples of six; this is a legal requirement. One copy goes to the archive and the others go to various specific people and departments. We still receive many drawings on paper (analogue, therefore), which are then scanned in order to be digitally manipulated and saved. Rietbroek: Another department, for example Area Development, has to prepare a new watercourse. The design is scanned and saved, but each participant will eventually still want a copy on A0 or A1. That means that the printer can get to work. In the future we will receive all drawings digitally, even notes and changes. Properly filing away all documents is now possible with this Océ printer and scanner. The only thing that is not yet possible with this device is scanning a page from a book.
Archive Scanned
Ben van Eijk: For GIS activities we also use Cyclomedia. This company produces 360-degree panoramic photographs. We already have photographs at every twenty metres of the municipality. This will now become every 10
metres. Together that is around 24,000 files. In addition we have digital aerial photographs, which are for example used to place a design on top of an aerial photograph. The result is printed and gives a good idea of what the design will look like in its actual environment. Moreover it is very detailed, allowing you to see actual cables and connections. Jeroen Rijkse: Aerial photographs are also made for the provinces, the Ministry of Defence and other government agencies. Discussions are already underway between the various agencies to cooperate in this. When each invests for itself you have fewer possibilities, and it is unnecessarily expensive compared with doing it together. Van Eijk: Right now we want to move away from our analogue maps. They will be scanned and stored. All provisional designs (V.O.s) and definitive designs (D.O.s) made now are already digital. All old drawings that need to remain available are currently being scanned and digitally stored.
The printer is used for public information nights, such as that of the redevelopment of the industrial estate Goudse Poort, as well. These public information meetings require visual material, preferably large enough to be seen from anywhere in the room. Van Eijk: "We are now able to produce that in no time at all, even in A0 format." Rijkse: "For the legibility of a map, an A4 or an A3 will not suffice. People prefer to see the entire city on one A4, and that with legible street names. This is of course not possible. Details are inevitably omitted, and whether a road passes by or in fact through a house is of course rather important to the inhabitant."
with GIS to improve the service to citizens, for example via the internet, where citizens can access data. Van Eijk: "Citizens already know the possibilities well; via Google they can access information about their own street, almost down to the tiles of the pavement." Rietbroek: "The registration of a complaint, such as for small maintenance, can be processed directly via the complaints check point. If it is a matter of a larger issue, then it goes to one of three area managers, who work on a supra-district level. They can see whether maintenance has been planned, and if it is too big for regular maintenance it goes to large maintenance. The printer goes to work as soon as all data are processed into a starting memo and the large maintenance has been scheduled; the starting memo is then distributed together with maps. The work is then outsourced, and the maps have to go to the contractors. The actual specifications are printed, and of course the occasional alterations are made, necessitating the reprinting of the series. The same process applies to civil engineering and to construction and housing supervision."
Authorized Personnel
Rietbroek: "The printer cannot be used by everyone. All users of the departments Beheer Openbare Ruimte (Public Space Management) and Projectbureau Openbare Ruimte (Project Office Public Spaces), staff of project management and the like are authorized to use the printer. This is to prevent a report for a board meeting from accidentally being printed in A4 format. The scanner can, however, be used by all members of staff. A recent assignment can easily be printed again from the controller situated next to the printer, which stores recent print jobs for some time. Océ has programmed this kind of thing completely according to our wishes."
Job van Haaften (jvanhaaften@geoinformatics.com) is editor of GeoInformatics. For additional information: www.oce.com and www.gouda.nl.
Gouda Municipality
Some key figures:
- Situated in the province of Zuid-Holland
- 1,811 hectares
- 71,382 inhabitants
- 39.4 inhabitants per hectare
- 16.6 residences per hectare
On June 18, Jack Dangermond, president of ESRI, opened the International User Conference in San Diego by explaining his GIS vision and the theme of five days of workshops, lectures and meetings: The Geographic Approach. Dangermond reminded his audience of the fact that Arc/INFO was launched exactly 25 years ago, when computers were a million times more expensive and slower. "If we could look into the future: in ten years' time, our machines will be a million times faster, or cheaper, take your pick. Or they'll have become as big as one blood cell." Addressing today's problems, like a growing population, global warming, social conflicts, resource shortages, loss of biodiversity and security, Dangermond pointed out: "We need a change. We need to build on our understanding of nature and man as a living whole, as a network of integrated systems, and its place in the evolution of it. GIS is being applied all around the world, and is becoming an instrument of evolution."
New Versions
Of course plenty of time was spent explaining the upcoming new versions, among others ArcGIS 9.3. Only a few changes are to be expected. Dangermond said: 9.3 improves quality, but that doesn't change the structure much. Think about service parts, refinements,
little tools and hundreds of little things. Just a bit later, ESRI's Nick Frunzi talked about problems with tech support helpdesk capacity concerning the current ArcGIS 9.2. Apart from an increased helpdesk staff, there will be diagnostic reporting in 9.3, as well as access to an internal ESRI knowledge base: "Our bug list," said Frunzi. The number of improvements in ArcGIS Server will be striking. The list was long, and it raised the question of whether the current version hadn't been launched too early. The improvements concern documentation, scalability, performance and interoperability, plus new possibilities for mashups. There will also be support for PostgreSQL, Oracle Express and DB2.
Wangari Maathai was assisted by GIS-specialist Peter Ndunda, who took care of the visualisations during the keynote.
Naturally, you would find usual suspects like Leica Geosystems, Hewlett-Packard and Océ, and of course the enormous ArcGIS islands and the wonderfully equipped ESRI Press shop that ESRI had reserved for itself. Most of the people, though, especially those who had seen Jack Dangermond's keynote opening speech on Monday, were after some cool new gadgets by three smaller vendors. On the hardware front, the Image Server Appliance by ESRI partner Inline Corporation boasts no less than 12 terabytes of internal storage in an optimized and tuned server appliance, including the ArcGIS Server application. Arc Science Simulations presented an innovative visualization monitor: their 1-meter diameter globe projector is able to show all kinds of map data in a realistic view as the world turns.

AdapX, however, got high marks with a digital pen and a piece of magic paper, which had been shown earlier in the year at Bentley's BE Conference in Los Angeles. In a quick demo, one could sketch and write over a simple A4-size printed map, which only seconds later would turn up on the computer screen, complete with your own scribbling. A great example of working digitally with an analog feel. The location-aware pen recognises a watermark in the paper, and using a docking station, it transfers its information to the original digital map on the PC. Great for simple tasks in the field.

Originally scheduled for a one-hour signing session of her book Unbowed, Nobel Prize winner Professor Wangari Maathai from Kenya drew a huge crowd in the map gallery section of the convention center; it would keep her busy for the best part of the afternoon and the early evening.

Technical Keynotes
For many people, Mr Jack Dangermond is ESRI, but a lot of folks who have been using GIS software for the last ten years will also know Scott Morehouse and Clint Brown. Their technical keynotes during ESRI's annual conference in San Diego gave some deep insights into the history and the technical developments within ESRI and the geospatial community in general. Morehouse addressed the concept and history of modelling geographic information, while Brown sketched the evolution and future role of GIS in digital mapping. Of course, the huge impact of the success of Google Earth on the visibility of geo-information was discussed. Brown: "It supports getting answers to a focused and limited set of questions. But GIS users often need a lot more than roads and imagery in their base maps. For analysts, that's not enough. They might want to know where all the schools are, and how many people are living in a certain area."

User Ideas
The exodus of GIS people started early Friday morning. Some attendees hadn't had a proper night of sleep after the big Thursday Night Celebration at the Marriott hotel, so they made sure they could catch some sleep at the airport or in the plane home. This reporter had a plane to catch to Europe at noon, and thus missed the closing session, an open forum for questions and answers with Jack Dangermond and senior ESRI staff. Clearly, ESRI puts real effort into user perspectives. During the whole week, selected members of the ESRI staff had been running around with little attachments on their badges saying "User Ideas". Thousands of people had received an e-mail questionnaire earlier in the week. Just before the conference had started, an e-mail came in which stated that Jack Dangermond had personally answered their questions, to be found at http://events.esri.com/uc/. Looking back on the conference, it's amazing how ESRI once again succeeded in organising a smooth event for more than 14,000 people. Loaded with new ideas after every day of the conference, the first thing one would invariably think of when leaving San Diego's Convention Center was: how come nobody thought of a nice pedestrian tunnel under the busy (and dangerous) freight train and trolley crossing? If, as in the US, pedestrians are clearly not visible in any planning or building concept, someone just has to step up with a map visualisation of the situation at the Convention Center and, in close proximity, Petco Park baseball stadium. Let's check if, next year, things have changed for the better.

Remco Takken (rtakken@geoinformatics.com) is a contributing editor to GeoInformatics. For more information visit www.esri.com.
Column
For a quick introduction to the subject, just take a look at the list of national mapping and cadastral organizations that are members of the EuroGeographics association, whose main mission and goal is to achieve interoperability of geographic information in Europe. You will find a real Babylonian mixture of names in that list. Of course, almost every country in Europe has its own language, or even several languages in one country, so the different names are all understandable. But when you look closer, you can see that the organizations differ not only in organisational form (institutes, committees, administrations, amts, etc.). They also differ in their content: they may have geodesy or surveying in their names or not, and the same goes for land cadastre, cartography, real estate, etc. But remember, the common mission and goal of all of them is to achieve interoperability of geographic information in Europe.

So, why don't we all look at a common denominator of the activities and services of all these organizations and put it into one common name, which could then be used across the continent and be recognizable by every citizen? Why don't we choose a name of essence, which would tell everyone from Iceland to Turkey and from Portugal to Russia what these organizations are and what they do? My suggestion is to choose a name that consists of just two syllables, geo and reg, which draw their roots from Latin. This way the possibility of hurting any national pride is minimized. Geo is obvious, with so many words that start with it: besides geodesy there are many closely related terms which have long belonged to the common field of geosciences, and also many newer terms which are entering the mainstream or are already firmly in it, like geoinformatics, geomatics, geolocating, georeferencing, geocoding, geostatistics, geodemographics, geosemantics, geotopology, etc. Reg represents the other half of the potential new common name, starting numerous Latin words related to regulating, renewing and maintaining cadastres and registries, like rega, regio, regionalis, regeneratio, regere, regerere, regestum, registrum, registratio, registrator, registrare, regula, regulare, regularis and regulatio. The exact meaning of all these chosen exemplary terms can be found in dictionaries, but we need only a little imagination to connect them with the activities and services of the national mapping and cadastral organizations. So, let it be GeoReg across the continent: let's, for example, call the present Geodetska uprava Republike Slovenije simply GeoReg.si (www.georeg.si), the present Kort og Matrikelstyrelsen simply GeoReg.dk (www.georeg.dk), the present Kadaster en Openbare Registers simply GeoReg.nl (www.georeg.nl), etc. Then, one day, when we Europeans finally decide to form a common EU mapping and cadastral agency, we will be able to name it simply GeoReg.eu (www.georeg.eu). The same philosophy could be used in product naming, giving the products a common root name, georec, to let everybody know that any GeoReg agency is offering standard-quality interoperable georeferenced records of all kinds.

When GeoReg agencies achieve this one day, they will become the real kings of the geospatial world, and their users will appropriately call them GeoRegis! Does this writing perhaps look like a fairy tale to you? It is for now, but it can become true. Let's start making it true by starting at the roots. It's all in the name. If we can agree on it, it will be much easier to agree on everything else, even on the matters we don't dare to dream about yet!
July/August 2007
Product News
Adapx Introduces Software that Enables Digital Pen and Paper with ESRI GIS Mapping Software
Adapx revealed a new software product called Mapx, a fully integrated software solution that enables digital pen and paper-based data collection with ESRI ArcGIS mapping software. As an ESRI ArcGIS Engine 9.2 extension, Mapx can create, import, edit, and share paper-based Geographic Information System (GIS) data. Mapx integrates seamlessly with ArcGIS Desktop applications to provide an end-to-end data collection process that results in accurate, real-time digital information. With Mapx, the user retains all the flexibility of paper maps in an easy-to-use data collection solution. A pattern of dots embedded in the printed document enables these fully geo-registered maps to be used with Penx, Adapx's field-ready digital pens supporting Anoto functionality. The user is able to mark and annotate maps, and forms associated with the map, containing geodatabase features and attributes, and collect or report on these features and attributes in real time. Adapx Penx, based on Anoto technology, looks and feels like a regular ballpoint pen but contains an integrated digital camera, an advanced image microprocessor and a mobile communications device for wireless connection. Internet: www.adapx.com
Trimble Makes it Easy for GIS Professionals to Update Maps Electronically from the Field
Trimble introduced field revision management and Geographic Information System (GIS) redlining capabilities for its Trimble Fieldport software. Trimble Fieldport is a Web-based, wireless software suite for utility field service management and location-based mobile mapping. Trimble acquired Spacient Technologies, Inc. and its modular Fieldport software in November 2006; the acquisition has enabled Trimble to further expand its Mapping and GIS solutions for the field and mobile workforce. Trimble Fieldport software integrates GIS and Global Positioning System (GPS) positioning data with customer and asset information for mobile workers in the utility, government and corporate sectors. With the addition of a field revision management and mobile GIS redlining tool, workers can now use text and graphics to note field conditions that differ from those indicated on maps in the GIS. Comments are immediately available to other field crews accessing the GIS, and the annotations are submitted electronically to those responsible for maintaining the accuracy of the mapping system. The new features also include advanced workflow management capabilities: work orders can now be generated automatically to streamline the process of repairing and maintaining field assets, and the identity of the user submitting a change, along with the time, date and other details, is tracked automatically. Internet: www.trimble.com
Autodesk Unveils Autodesk MapGuide Enterprise 2008 and Autodesk Topobase 2008
Autodesk MapGuide Enterprise 2008 builds on Autodesk's support for open source software. Valuable contributions from the open source community reflected market need and fostered many innovations within Autodesk MapGuide Enterprise 2008. Autodesk Topobase 2008 software incorporates all of the functionality of Autodesk MapGuide Enterprise 2008 and AutoCAD Map 3D 2008 products, as well as Oracle Spatial database technology, for unparalleled performance, scalability, reliability and security. Topobase software brings together design, spatial and enterprise data and bridges the communication gaps that separate people from the knowledge and insight they need to design, build and service infrastructure assets more effectively. Topobase 2008 offers additional features that take advantage of the Web for project workflow management, job creation and editing, and visibility into existing infrastructure and new projects. With the 2008 updates to the Topobase Web component, remote users, such as customer service agents, managers, field operations and external contractors, can securely access asset information to make better decisions and improve operational effectiveness. In conjunction with Topobase 2008, Autodesk introduced a new module developed expressly to address the needs of gas utility operators. Autodesk MapGuide Enterprise 2008 is now available in English. German, Japanese, French and Italian versions will be released in the near future. Autodesk Topobase 2008 will be available in English later this month and in German, Italian and French shortly thereafter. Internet: www.autodesk.com
Industry News
Leica Geosystems Geospatial Imaging Acquires IONIC
Leica Geosystems Geospatial Imaging has acquired all outstanding shares of IONIC Software, a geospatial software company headquartered in Liège, Belgium, as well as all shares of the related American company IONIC Enterprise. Together, these companies are referred to as IONIC (www.ionicsoft.com). IONIC provides state-of-the-art enterprise geospatial technology with the most advanced service-oriented mapping available for web-based and distributed systems. IONIC's strength in the defense, space, government and commercial enterprise sectors complements Leica Geosystems' market presence and existing product portfolio.
Microsoft Enhances Virtual Earth Product in the UK with Data from Intermap
Microsoft and Intermap announced the launch of Microsoft's enhanced Virtual Earth 3-D viewing platform based on Intermap Technologies' highly accurate and up-to-date elevation data for all of Great Britain. Intermap is also creating similar maps for all of Western Europe and the continental United States. The resulting product delivers a more seamless and accurate 3-D experience for all Internet users visiting Microsoft's Live Search Maps for England, Scotland and Wales.
www.gi.leica-geosystems.com
www.intermap.com
Geokosmos has successfully completed a LiDAR survey of the French national railways. Using combined airborne laser scanning and digital aerial photography, Geokosmos surveyed the section of railroad connecting Amiens and Bouillon, a stretch of 83 km. The captured data was used to produce a large-scale digital topographic map (scale 1:500) and a digital orthophoto (GSD 5 cm) that will help the French national railways service assess the technical condition of the railways and their infrastructure.
www.geokosmos.ru
EVC specializes in custom GIS data production using data from a variety of in-house or client sources, and in helping clients find and manage the right mapping and GIS data production resources for their projects.
www.cartographic.com
Ordnance Survey has become the first organisation to offer oblique imagery for the whole of London, offering a revolutionary new perspective on the capital. Over 3,000 km² of data covering all the London boroughs is available for immediate supply. Data for all towns and cities across Great Britain with populations greater than 50,000 will be available by the end of 2007, with over 70% of the data captured to date. Pictometry oblique imagery fits well with Ordnance Survey's core datasets. It complements the entire OS MasterMap intelligent data portfolio of Topography, Imagery, Address and Integrated Transport Network layers and can be used in conjunction with Ordnance Survey's Points of Interest data.
www.ordnancesurvey.co.uk/products/pictometry
Leica Geosystems won the prestigious Gold Partner Award at Ordnance Survey's biggest ever partner conference earlier this month, and was also nominated for an Innovation Award at Ordnance Survey's annual conference in London. With a membership of over 500 organisations, from individual entrepreneurs to multinational companies across a wide range of markets, Ordnance Survey's Partner Programme is a world-renowned commercial alliance programme. Leica Geosystems entered into partnership with Ordnance Survey in 2005 to offer Britain's first commercial RTK network, SmartNet. SmartNet offers an RTK and DGPS correction service that uses signals from Ordnance Survey's network of base stations, OS Net.
Norwegian Air Traffic Control Services Company Awards Contract to ESRI Affiliates
A consortium led by ESRI business partner CGx AERO in SYS of France, and including ENAC (France) and Geodata (Norway), has won an international tender launched by Avinor AS for its Procedures for Air Navigation, Design and Aeronautical (PANDA) charting system. Avinor is a state-owned limited company responsible for air traffic control services in Norway. The company owns and operates 46 airports throughout the country, including 14 operated in association with the armed forces.
www.leica-geosystems.co.uk
www.esri.com