In the summer issue of ESRI's ArcNews, Dr. Paul Torrens penned an article titled "Process Models and Next-Generation Geographic Information Technology." The article offered a distinctive vision of how GIS should work: by incorporating more dynamic data and by having users develop a better understanding of how geospatial phenomena really work, specifically the "processes" that control complex spatial situations. Editor in Chief Joe Francica interviewed Dr. Torrens, and this is part two of that interview.
Directions Magazine (DM): Given today's Web architecture and associated search, Web services and syndication methods, do you believe that we will be able to create true semantic spatial queries, thus accessing data, data models and other spatial information now or in the future?
Paul Torrens (PT): My sense is that we already can and many of us are already doing this in a disjointed manner through browser technologies, whether on our desktop computers, laptops or phones. As I respond to your questions, for example, the phone that sits in my pocket is harvesting content from the Web on my behalf, filtering it based on semantic classifiers that I have specified and syndicating the results as an RSS feed. Much of the semantic functionality being used to guide that harvesting relates to the location at which the information was either composed or published and the spatial information embedded in its content. I am likely to access the results as text or through a map, using the latter, for example, when I am on the road and need to find a restaurant. The browser can poll functionality from the data models of any given number of mapping APIs to build the map, saving me the trouble. At the same time, my phone (or rather the applications that it holds) "knows" where I am in many spaces: a social network of my contacts, the physical network of my service provider, the local Wi-Fi space of access points around me, on the Earth's surface down to a resolution of about seven meters, and the "cyberspace" of the Internet Protocol. All of this spatial information is being communicated to a broader ecosystem of data and semantic processes.
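The location-aware feed filtering Torrens describes can be illustrated with a minimal sketch. The feed items, coordinates, and radius below are all made up for illustration; a real harvester would also apply the semantic classifiers he mentions, not just distance.

```python
from math import radians, sin, cos, asin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points on Earth, in kilometers."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + \
        cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371 * asin(sqrt(a))

def filter_feed(items, user_lat, user_lon, radius_km):
    """Keep only feed items whose tagged location is within radius_km of the user."""
    return [it for it in items
            if haversine_km(user_lat, user_lon, it["lat"], it["lon"]) <= radius_km]

# Hypothetical harvested items, each tagged with the location it was published from.
items = [
    {"title": "Taco stand opens downtown", "lat": 33.45, "lon": -112.07},
    {"title": "Storm hits coast",          "lat": 40.71, "lon": -74.01},
]
nearby = filter_feed(items, 33.44, -112.04, radius_km=25)  # only the first item survives
```

A syndicator would then render the surviving items as an RSS feed or plot them on a map built from a mapping API.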
Currently, the level of "intelligence" used to ally the spatial information on my phone to the Web is more basic than it could be. The software on my phone really only makes use of my position via GPS, Wi-Fi, cell-tower triangulation and IP address, distilled to a common location ID by geocoding or reverse geocoding, and allies that piece of data with the Web by essentially scraping Web sites for textual location data (place names, street addresses, ZIP Codes, phone area codes) and by using geocoding to convert those data to latitude and longitude. But I also use my phone to browse the news, to email colleagues, to search the Web and so on. Many of these tasks are correlated, which implies relationships between them. Moreover, the phone usually goes where I go, and so the sequencing of my interactions with its software and with the Web, as well as the spatial location of the phone when those actions are performed, carries a wealth of additional semantic meaning that could be used to deliver tailored location-based services to me. Of course, the privacy implications of this sort of semantic analysis are mind-boggling, particularly if we consider that something as simple as a phone number (information which many people voluntarily give over at a point of sale, for free) can be used to cross-link all of this information to the sorts of semantically aware geodemographic systems I discussed earlier.
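The scrape-then-geocode step Torrens outlines can be sketched in a few lines. The ZIP-code table and its coordinates here are illustrative stand-ins; a production system would call a geocoding service rather than a hand-built gazetteer.

```python
import re

# Toy gazetteer mapping ZIP codes to (lat, lon) centroids. Real systems
# would query a geocoding service; these entries are illustrative only.
ZIP_CENTROIDS = {
    "85004": (33.4512, -112.0664),  # Phoenix, AZ
    "10001": (40.7506, -73.9972),   # New York, NY
}

ZIP_RE = re.compile(r"\b(\d{5})\b")  # naive 5-digit ZIP pattern

def geocode_text(text):
    """Scrape 5-digit ZIP codes from page text and resolve each to lat/lon."""
    hits = []
    for zip_code in ZIP_RE.findall(text):
        if zip_code in ZIP_CENTROIDS:
            hits.append((zip_code, ZIP_CENTROIDS[zip_code]))
    return hits

page = "Visit our store at 101 Main St, Phoenix, AZ 85004 for details."
print(geocode_text(page))  # [('85004', (33.4512, -112.0664))]
```

Place names, street addresses and area codes would each need their own extraction pattern, but the pipeline is the same: find textual location data, then convert it to coordinates.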
DM: What are the complementary, non-geospatial process models that can support GIS and will these require current geospatial professionals to be retrained to think differently than they do today?
PT: I think that the inverse of the question is perhaps more interesting: any transaction that can be tagged with a location is amenable to geographic processing. Currently, that is largely limited to a location, but I see huge currency in geographic processing that operates on space-time activities, trajectories, interactions and events, as well as the ambient geographic context of those things and their spatial structure and compositions. My sense is that we are on the verge of a revolution in the way that geographic processing relates to Internet and communications technologies. To some extent, I suppose, that revolution has already taken place and, like ubiquitous computing, it's not obvious to us because it is part of our everyday lives.
But to answer your question: the most active development in non-geographic process modeling is taking place under the umbrella of the "semantic Web." To the extent that the term "Web 2.0" has come to be associated with the fusion of "produced" data with volunteered data, and mashups of disparate Web streams into repurposed media, a next stage for the Web (what many refer to as Web 3.0, although both terms are controversial given that the fundamental architecture of the Web has retained the same versioning all this time) is the emergence of semantic technologies that will feed on Web 2.0. Such technologies are already being built as "Web services" that take a user request, disappear into the Web to populate that request with data, perform either a preliminary or deep analysis of those data, and return answers, products and services to the user in an ontology that maximizes the utility of the information for the user. Prototype forms of this already exist in common usage: translation services for Web pages that appear as a hyperlink next to a search result, comparison shopping bots, automated trading agents for stock portfolios and so on. In each of these examples, geography is significant.
One of the wonderful things about working in geographic information technologies at the moment is that the technology that drives the semantic Web is already part of the toolkit of the GIS professional: object-oriented programming, extensible markup, server-side scripting, database query and access methods, and visualization. Most server-side GIS already work natively in this sort of architecture. Some really interesting Web-enabled GIS tools are being built: my colleagues at the GeoDa Center for Geospatial Analysis and Computation at Arizona State University, for example, have developed a version of their popular GeoDa spatial statistics toolkit that works as a Web service. Methodologies for artificial intelligence (agent-based modeling, machine learning, filtering, data reduction and so on) are one area that is perhaps outside the GIS professional's standard toolkit, although these are now taught in many university GIS programs, and some of the skill sets are already standard in image processing for remote sensing.
To the extent that I could offer my opinion of how GIS professionals might think about things differently amid the emergence of these technologies, I might suggest that, in addition to building GIS tools for desktops, they also consider the products that they develop as being part of the emerging "cloud" of information systems (a cloud that now exists in the Web, the GeoWeb, in distributed sensor networks and in users' hand-held devices) and that they contemplate ways in which the "G" in "GIS" can be leveraged (or monetized) to extend the current state-of-the-art. To some extent, GIS is dominantly tool-oriented and that is advantageous because it allies GIS professionals with their colleagues in IT. But spatial thinking, independent of GIS tools, is a fantastically valuable skill set that is growing in its potency, and it is a skill that geographers have undisputed mastery of.
DM: We're interested in understanding your thinking about the use of "spimes" (as you describe: "artifacts that are aware of their position in space and time"), sensor and swarm networks to support risk mitigation such as for natural disasters or responding to terrorist activity. Do we have enough of the sensor infrastructure to support this and, if not, when will we? Said a different way, do we have the technology today to process this kind of information and would decision makers know what to do with it if they had it anyway?
PT: The definition of spimes belongs to the science fiction writer and futurist Bruce Sterling, which is interesting because it is an idea that developed outside (but alongside) geography. But it is a concept that is really coming to shape how we think about cyberspace, cyber-place, cyber-geography and emerging "code-spaces" (to use a term from Martin Dodge and Rob Kitchin that I think really captures this field well). Many people, in the design world for example, first come to learn about GIS through spimes.
For certain systems, the architecture of spimes in existence is already very sophisticated and pervasive enough to allow for almost complete coverage of system attributes. The most obvious example is the postal system (whether you consider the federal postal system in the United States, for example, or private systems run by any of the big parcel service providers). Various groups are also developing sensor networks for things like earthquake and tsunami monitoring, and geographic information technologies, from GPS to accelerometers and GIS, are an integral component of these systems. Many of these sensor networks involve location technologies that work as fully fledged spimes, miniaturized to scales that allow them to be mass-produced cost-effectively and fastened onto mobile objects (buoys, rocks atop tectonic plates, crates, cargo containers and so on). They can be assigned IP addresses that allow them to participate in the Web, just like any other Web-enabled device. Spimes even have their own operating systems, such as Java Jini, that have been in active development for some time. As the cost of embedding GPS chips, Wi-Fi connectivity, processors and MEMS (micro-electromechanical systems) falls and the technology fuses further, the "Internet of things" that these devices constitute in collective form will extend. So, I think the sensor technology to empower decision making and planning is already there, and it is relatively mature. Your question about how to leverage the information produced by that infrastructure for decision making really captures one of the grand challenges for the technology. With such huge amounts of data being generated, the only feasible way to sift through it at the temporal scales on which human decision making operates is to co-develop tools to process the information. I think that GIS will play an increasing role in the development of these technologies.
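One way to picture the "sift the firehose" problem Torrens raises is a sketch that reduces a stream of spime readings to only the events a decision maker needs to see. The device names, coordinates, and threshold below are invented for illustration; a real monitoring network would use far richer anomaly detection than a fixed cutoff.

```python
from dataclasses import dataclass

@dataclass
class SpimeReading:
    device_id: str    # stable device identity, analogous to an IP address
    lat: float        # where the reading was taken
    lon: float
    timestamp: float  # seconds since epoch
    value: float      # the measurement, e.g. wave height in meters

def flag_anomalies(readings, threshold):
    """Pass through only readings that exceed an alert threshold, so
    downstream decision support sees events rather than the raw stream."""
    return [r for r in readings if r.value > threshold]

# Hypothetical wave-height reports from a single buoy.
stream = [
    SpimeReading("buoy-17", 38.3, 142.4, 1000.0, 0.6),
    SpimeReading("buoy-17", 38.3, 142.4, 1060.0, 4.2),
]
alerts = flag_anomalies(stream, threshold=3.0)  # only the 4.2 m reading remains
```

Because each reading carries coordinates and a timestamp, the surviving alerts can be mapped and sequenced directly in a GIS.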
DM: You mention in the article that the data we are collecting on the Web through mouse clicks and simple searches, coupled with geodemographic information, hold great potential. How can we become better marketers using information from the Web and employing process models?
PT: One of the advantages that I see in associating geographic information technologies with Web technologies is in using the power of geography to contextualize data to places, regions, people, industries, social groups and so on. This is already happening in online advertising. By reverse geocoding the IP address on a user's machine, a browser application can tailor advertisements to a particular city. (The technology is not fool-proof, however, and you will often see advertising for the wrong city appear in your browser.) If we throw process models into the mix, the possibilities for semantically inferring a user's age group, affiliation, interests and so on grow. With geographic information, that marketing can be further refined to specific stores in a user's neighborhood. When mobile, similar schemes could be delivered to users' phones or laptops to suggest taxi services, coffee shops, tours or any service that is fine-tuned to the user's interests. There is a role for the visual nature of GIS as a medium of communication as well: a carefully designed map can greatly facilitate the uptake of this information, particularly in online contexts, where many users already use maps as their portal to the Web.
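The IP-to-city ad targeting Torrens describes reduces to a lookup, as in this minimal sketch. The network prefixes (drawn from documentation-reserved ranges), city assignments, and ad copy are all hypothetical; production systems query commercial IP-geolocation databases.

```python
import ipaddress

# Toy IP-to-city table keyed by network prefix (documentation-reserved ranges).
IP_CITY_TABLE = {
    ipaddress.ip_network("203.0.113.0/24"): "Phoenix",
    ipaddress.ip_network("198.51.100.0/24"): "New York",
}

# City-targeted ad copy, again purely illustrative.
CITY_ADS = {
    "Phoenix": "Beat the heat: Phoenix pool supplies",
    "New York": "Midtown lunch deals near you",
}

def ad_for_ip(ip_string, default_ad="Nationwide offer"):
    """Reverse-geolocate an IP address to a city, then pick a targeted ad."""
    addr = ipaddress.ip_address(ip_string)
    for network, city in IP_CITY_TABLE.items():
        if addr in network:
            return CITY_ADS.get(city, default_ad)
    return default_ad  # no match: fall back to untargeted copy

print(ad_for_ip("203.0.113.42"))  # Beat the heat: Phoenix pool supplies
```

The fallback branch is also where the "wrong city" failures Torrens mentions arise: when the table maps a prefix to a stale or incorrect location, every user behind it gets mis-targeted ads.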
DM: As geospatial technology adapts to changing business needs, how do you see it integrating with mainstream information technology systems like business intelligence or more specific systems such as for human resource management, sales and finance? Is geospatial information just another datatype to mainstream users, or is it truly "game changing" for those who can exploit it best?
PT: Geographic information is already game changing: it is an essential component to the business models of insurers, banks, transportation providers, retail stores, civil engineers, marketing firms, utility companies and to the service models of defense agencies, urban planners, hospitals, law enforcement, schools, mail providers and so on. My sense is that geographic technology is already integrated with the mainstream of information technology and that it has been for some time. In the GIS courses that I teach, for example, we have students from sociology, criminology, biology, urban planning, civil engineering, ecology, environmental science and business; they all regard GIS as a core part of their education.
Geographic information is probably always going to be of distinct value because of its ability to associate data contextually to an event, a place, an interaction, an activity, a trend and so on. As information technology has evolved over the years, its association with GIS has grown closer and, looking to some of the potential futures of IT, that trend seems likely to continue.