The prepared text of Schell's remarks can be found below, or you can listen to the interview, in which only selected questions were addressed; to listen, go to the end of the prepared remarks.
Directions Magazine (DM): Most people don't give much thought to standards. What's the main thing about geospatial standards that you would like our website visitors and listeners to think about?
David Schell (DS): This is what I was just getting at. I believe there is a basic difference between geospatial standards and the generality of IT standards, just as there is a fundamental difference between geoprocessing and traditional information technology. This is because the standards and technologies that facilitate geoprocessing are rooted in the physical reality of geodetic reference, which means that they are not simply abstract engineering prescriptions for manipulating information.
Geoprocessing actually represents an aspect of real world measurement fundamental to the scientific method that structures academic research, as well as basic economic theory, and public sector policy. In this regard, geoprocessing is used to give meaning to such basic concepts as ownership, geospatial modeling and all sorts of decision-making involving situational awareness and the validation of natural or demographic phenomena.
DM: By now, most of the vendors have implemented in their products the OGC's open interface standards - Web Map Service, Web Feature Service and so on. Is the OGC's work nearing completion? Have you accomplished what you set out to accomplish?
DS: OGC's work is by no means near completion - as it would be if it were just another IT consortium. Again, this has a lot to do with the scientific aspect of geoprocessing which ensures the relevance of the consortium to the research community.
But it is also true that in recent years we have seen geospatial interoperability become a significant dimension of "information technology" in general - in many business, technical and consumer applications - in mobile computing, for example, in logistics, in security applications, demographics, agriculture, avionics, consumer mapping, tourism, and many others.
In fact, it has become a major development issue around the world, with each region bringing additional requirements not just for interoperability but for national and cultural interaction involving everything from boundary disputes to transportation routing, to disaster management, sharing of water resources, demographics and urbanization, not to mention language differences, cartographic conventions, place names and the sort of conflicted territorial claims made on such regions as Antarctica that may not be resolved before the continent melts.
There's also the concept of "geospatial interoperability" as a policy issue - something even more important and difficult than its technological foundations. We're often asked, "What is the next level of OGC development?" That is, people are always asking "What is the next challenge we face in fully integrating interoperable geoprocessing into the mainstream of international ICT?"
The simple answer to this is that implicit in development of the OGC Reference Model (the ORM) is the potential for its use in a wide variety of application domains, and the probability that it becomes part of the methodological approach in each. In fact, OGC now faces the endless task of applying both the consensus process of the consortium and the practical experimentation of the Interoperability Program to address domain issues, one after another, and to contribute to the remodeling of the technology process in each, as we are currently doing with our CAD-GIS integration initiatives and collaborations with the hydrology and meteorological communities.
In fact, OGC's work seems to be co-extensive with the objectives of most of the world's NSDI efforts, except that OGC's process deals primarily with the foundation level of the geospatially enabled enterprise that the public sector is not able to address alone. This is because OGC comprises a union of both public and private industry research and development interests, free of the usual constraints of procurement policy and commercial profiteering, and guided very much by objective peer review values.
DM: Has the world economic downturn hurt OGC membership numbers or revenue?
DS: Of course we are acutely aware of the economic situation and have adjusted our operations accordingly. We've put stringent financial controls in place, and we are more than usually careful about scoping consortium development and outreach objectives to get the most out of both member and staff efforts.
But I don't think the economic downturn has been as painful for us as it has been for many of our commercial or government members. In fact, OGC membership has grown month-to-month over the last two years - we now have 387 members - more than ever before.
In times like these a well-positioned and technologically focused consortium like OGC seems to act as a mitigation strategy for product and service providers as well as for users. The collaboration of both communities really does minimize the cost of doing business for everyone. For the suppliers it's generally a question of lowering the cost of infrastructure to free up resources for their distinctive "value add," and for users it's always the low pricing of "plug-and-play" technology that they find attractive.
What results is a more orderly and efficient market process that helps everyone set more realistic expectations and minimize risk. I think it is OGC's role in "risk reduction" across geospatial markets that's made it possible for the consortium to continue to grow its membership and sustain productivity since the "bubble burst." We continue to look at this situation very carefully, preparing for the unexpected. But so far - counter-intuitively - OGC has been able to hold its own.
DM: Are OGC standards truly international, or are there other countries heading off in different directions with different standards to make geospatial systems communicate with each other?
DS: Yes, by definition, it is true that OGC standards are international. Your question implies that OGC is focused primarily on the US market, but that's not the case. We may have founded OGC in the US, and it is well known that US companies and agencies played a great part in funding and building the organization, but all of us - not only OGC founders and staff, but also the US organizations that supported us - always intended for OGC to be international in scope and to belong to the world.
OGC is now more than 15 years old, actually 18 if you count the three years we incubated OGC in its predecessor, the Open GIS Foundation.
Throughout these 18 years the involvement of non-US organizations has increased steadily. From the beginning, for example, the role played by Canadian companies was formative, and involvement of European countries and EU programs has grown at a steady pace so that now OGC has more European members than US and Canadian members combined.
In addition, during recent years there has been an upsurge of involvement of Pacific Rim and Asian markets. Australia has quickly become one of the leaders in national implementation of OGC standards, and OGC member Forums are emerging in India and Korea.
Most significantly, there is now extensive use of OGC specifications in both public and private sector organizations in China, and active discussion of evolving Chinese participation in consortium activities is ongoing. And Japanese corporations have been involved in OGC since the early 1990s - Japan, in fact, played an early and active role in the international harmonization of GML.
It is true that because of our long tradition of partnership with various US organizations - and because our business offices are located in North America - we do still depend significantly on North American investment. But the consortium consensus processes really do serve as a "great leveler."
In the working groups of OGC's Technical Committee, for example, where the most basic work of the consortium is done, all participants have an equal vote, regardless of membership level or region. This alone should answer your question. But it is also true that since the adoption of OGC standards under the INSPIRE legislation of the European Parliament, the trend toward non-American leadership of the consortium's standards process has accelerated. We now have a sufficient number of voting members from around the world - this is particularly true with respect to Europe - to make it impossible to release parochial standards, and OGC's long-standing collaboration with ISO TC211 effectively confers "global" de jure status on many of OGC's releases.
And all this is reflected more and more each year in the composition of the OGC board of directors, which presently includes five Europeans as well as two directors associated with Indian organizations, and by the end of the year will include at least two more non-North Americans. We are also in the process of organizing a "Global Advisory Council" under a committee of the board to enfranchise leaders of as-yet-unrepresented regions and supplement the workings of the board.
DM: Some of the things OGC is involved in now seem to be only partly about geospatial technology - Sensors, Building Information Models, Semantics, and Grid Computing, for example. In all of these, geospatial is just one component. Aren't there other standards organizations that are better positioned to do this kind of work?
DS: I've held from the beginning that geospatial information processing is a pervasive though poorly understood dimension of information technology. Your assertion that "geospatial is just one component" echoes what I consider a dated popular view of geospatial processing, taking us back to the time when proprietary GIS applications characterized the geospatial market.
In my view the pioneers of GIS have enabled a generation of geospatial service architects who are now showing that geospatial thinking actually animates much of mainstream enterprise architecture and makes it possible to address the space-time dimension of information processing. What you refer to as a "component" is actually an "integral quality" of information processing, and the applications you refer to - sensor webs, BIM, semantics, and even the GRID, cannot be fully conceptualized without reference to a space-time manifold.
Lately, the popular way to say this is "everything happens somewhere." But more to the point, each one of the technologies you mention assumes a spatial and/or temporal context. OGC doesn't by any means claim to own the core standards issues of such related technology markets, but it is fair to say that none could be fully addressed without reference to the infrastructure of spatial processing standards that OGC and ISO have formalized - just as geospatial processing itself doesn't exist in a vacuum but depends on a real-world context. Actually, the examples you give in your question represent some of the most productive opportunities to show the extreme relevance of geospatial standards development and service architecture.
So, to answer your question simply, "No, I do not think there are other standards organizations presently positioned to do this kind of work, although to be sure, OGC will rely as often as necessary on partnerships with other SDOs and consortia to make its best contribution."
DM: Google gave their KML standard to the OGC a year or two ago to maintain as an open standard. Why did they do that, and what has resulted?
DS: Because of our many years of work evolving GML, OGC has a deep understanding of the development and marketing implications of KML. I believe also that there was general recognition of the requirement for harmonization of the two specifications, as well as a need to stabilize KML through a formal standards development process. This was a practical move on Google's part, associating its specifications with the tradition of formal GIS and geospatial web services represented by OGC.
Also, GML and KML are complementary and the real issue is determining which one is most suitable for a specific need or application. GML is an XML grammar for encoding features. KML, on the other hand, is used to encode less complex data than can be described in GML, but it provides a convenient way to present information in an earth browser application. One result of harmonizing KML and GML is that people are able to do more with KML and they are using KML more often. You see this in fields of activity such as climate science and urban modeling, where people are using Google Earth and Google Maps to display complex data, and where the data is often encoded in GML. In urban modeling, the improved interoperability between CityGML and KML can be very useful.
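To make the complementary roles concrete, here is a minimal sketch in Python's standard library that encodes the same point twice: once as a KML Placemark, the presentation-oriented form an earth browser displays, and once as a bare GML 3.2 geometry, the general feature-encoding form. The coordinates, names and identifiers are illustrative choices of mine, not values from the interview.

```python
import xml.etree.ElementTree as ET

# KML: presentation-oriented markup for earth browsers.
# KML coordinates are written "longitude,latitude[,altitude]".
kml = ET.Element("kml", xmlns="http://www.opengis.net/kml/2.2")
placemark = ET.SubElement(kml, "Placemark")
ET.SubElement(placemark, "name").text = "Sample point"  # illustrative name
point = ET.SubElement(placemark, "Point")
ET.SubElement(point, "coordinates").text = "-71.0589,42.3601"
kml_doc = ET.tostring(kml, encoding="unicode")

# GML: a general XML grammar for encoding features and geometry.
# Axis order follows the declared CRS (latitude first for EPSG:4326).
GML = "http://www.opengis.net/gml/3.2"
gml_point = ET.Element(
    f"{{{GML}}}Point",
    {f"{{{GML}}}id": "p1", "srsName": "urn:ogc:def:crs:EPSG::4326"},
)
ET.SubElement(gml_point, f"{{{GML}}}pos").text = "42.3601 -71.0589"
gml_doc = ET.tostring(gml_point, encoding="unicode")

print(kml_doc)
print(gml_doc)
```

The sketch shows the division of labor described above: the KML document carries display-oriented elements such as a name for the Placemark, while the GML geometry carries a coordinate reference system and identifier but no presentation semantics.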
DM: Is anyone using the OGC Sensor Web Enablement (SWE) standards?
DS: Yes, the SWE standards are coming into wide use. NASA is using SWE in operational systems for tasking earth observation satellites. The US military is using SWE with drone aircraft and in naval operations. The European SANY program - Sensors Anywhere - is based on SWE. SANY is part of a major European space initiative called GMES (Global Monitoring for Environment and Security). SWE is also part of another European space program, the European Space Agency's Heterogeneous Mission Accessibility project. SWE is important in the German-developed Indonesian Tsunami Early Warning System, a watershed monitoring program in Germany, and a Fire Information System in South Africa. It's part of a major program to provide better access to US hydrologic data, and it's part of a very exciting international movement toward open standards in the world of ocean observation systems. And there are many other examples.
DM: OGC is involved in developing the architecture for the Global Earth Observation System of Systems (GEOSS). Where does that stand?
DS: In its institutional goals, GEOSS is moving forward as planned, and in its technical architecture development, in which OGC is playing an important role, GEOSS is ahead of schedule. OGC leads the GEOSS Architecture Implementation Pilot using the OGC Interoperability Program Procedures. In the Pilot, requirements are derived from GEO Societal Benefit scenarios. Many organizations contribute technology components, and these feed into an iterative process for testing and improving interoperability based on open standards. A GEO Web portal and a GEOSS Clearinghouse are part of this, and it's all designed to provide shared access to services and data through technology that implements the agreements among GEO participants. In the most recent phase of the pilot, USGS, ERDAS, Northrop Grumman, the European Space Agency and the European Commission's GIGAS program are the sponsors. Thirty-seven organizations are contributing technology.
DM: You sent out a press release not long ago about the OGC Board's new Law and Policy Committee. How is that going?
DS: The process is going well. One of our directors, Kevin Pomfret, an experienced image analyst and lawyer, has structured a board-level committee process to ensure that the board and membership in general are aware of some of the emerging legal problems we are going to face as geospatial information and services become a greater part of the infrastructure on which people depend. Kevin is drawing member legal experts to the committee to discuss prevalent issues, and beginning to create a framework of legal and policy thinking around spatial issues that have for years been taken for granted, such as liability, data accuracy and ownership.
There is a great deal of latent interest in this area. So far, corporate lawyers have been concerned mostly with intellectual property issues and the many other conventional legal questions related to contracting. However, it's no longer adequate to confine our concern to these issues; it's important to realize what is really at stake when we talk about liability. Liability doesn't simply refer to a single product or a simple one-vendor solution or transaction.
DM: OGC has published a Geospatial Rights Management Reference Model. I imagine that will be tied into the Law and Policy discussions, is that right?
DS: Yes, the Geo Rights Management, or GeoRM, activity, does relate to the work of the Law and Policy Committee. The GeoRM Working Group produced a Geospatial Digital Rights Reference Model that lays out in detail the functional requirements for web services that manage rights related to geospatial information. This is a complicated area when you consider, for example, the provenance issues relating to data that has been created from multiple data layers, or the need for emergency access to data that might normally be protected for commercial or privacy reasons. The laws and technologies need to co-evolve, and this co-evolution can benefit greatly from a transparent decision-making process that is open to all interested parties.
DM: I notice that the OGC has more than 100 academic members. From your perspective, are the universities doing a good job of preparing young people for careers in the geospatial technology market?
DS: I can't speak for the general market in terms of the qualifications of recent graduates. I do, however, have opinions about the state of the present academic process as it relates to our work. The implication is that serious thought must be given to the positioning of geospatial information processing within academia, or the next generation of graduates may not be prepared to address the many challenging issues we will have to face in the future - issues such as climate change, sustainable development, disaster management, urban planning, public health and national security.
My main concern is that many universities are too slow in evolving curricula from a focus on traditional GIS to a focus on "geospatial information processing" with an emphasis on generalized IT architectures and interoperability. The issue at the heart of the matter is whether university curricula are designed to teach geography, for which GIS is beautifully designed, or to teach spatial information science, which is something different. It involves the convergence of the fields of geodesy, semantics, real-time sensor-based computing, modeling and cognitive science, as well as GIS techniques. This is an important distinction because on the one hand a student focuses on an important but circumscribed geographical problem where the results contribute to the growing body of geographic information and analysis, while on the other hand the student is cut free from the traditional research methods of academic geography to explore the nature of spatial relationships, meaning, information integration and presentation in models - all with a basis in geodetic reference. It is a question of engineering versus the evolving science of spatial analysis and interoperability.
Needless to say, there are many universities that support robust GIS programs in either geography or urban studies contexts, but few have realigned the interaction between relevant disciplines to address the fundamental questions of this new science, which has everything to do with interoperability. A few universities have far-sightedly created programs or institutes with such names as "Geospatial Information Science," de-emphasizing traditional geography and exposing students to the more abstract interdisciplinary approach I have described, with very significant results, which we see in newly minted graduates participating in consortium programs. My worry concerns a world in which all of the traditional GIS companies and contractors are moving quickly to bridge the divide, or cross it entirely, and in which global solutions cry out for complex thinkers capable of integrating disciplines as well as data sets. If enough universities do not reevaluate their approach to geospatial computing, young people with conventional educations will be left behind, and we will have too few really qualified graduates to fully assume the burden of advancing the science.
DM: What are the new frontiers? The people in your Technical Committees are very knowledgeable about the latest in technology and they must share some of their ideas. What are they most excited about?
DS: There's a constant interplay of both new technology and new applications in the OGC. Sometimes it is driven by individuals on a mission and sometimes it is driven by major well-funded programs that have the capability to produce new resources.
Many members are interested in advanced imaging and modeling, and they are interested in grid computing applications. This often ties into the Sensor Web Enablement effort, which, as I mentioned, is gaining momentum in a number of application areas. It also ties into workflow management.
There's strong interest in our 3DIM working group, which involves modeling of the built environment. Our recent AEC-Owner-Operator testbed, which we did in conjunction with the buildingSmart alliance, touched on building energy analysis and cost projections, but there's a whole lot more to do in that area. We were interested initially in the overlap of geospatial with CAD and related technologies, but this quickly gets into the issue of Building Information Models, which need to migrate quickly from proprietary frameworks to open frameworks and from file-based methods to Web-based methods. Especially with the mortgage and real estate industries in crisis, there is a huge need to make this happen, and we know how to do it.
Also, governments around the world are suddenly pushing hard for smart grid standards - bits being used to manage electrons and the buying and selling of electrons. Every air conditioner, electric car and power plant has a location, so both geospatial and in-building location are important in those standards efforts. The members who are becoming interested in smart grid will almost surely tie into our existing AECOO effort and the work of the 3DIM Working Group.
There's lots more going on. You can look at our website to see the descriptions of the work being done in the various Working Groups.