Keynote: Emerging Zoonotic Diseases and the Need for Global Surveillance
The keynote by Dr. Corrie Brown, DVM, Ph.D., highlighted a few key ideas valuable to those of us not really tuned into global infectious disease challenges. First, she discussed how globalization (or perhaps "globality") is changing the stage for epidemiology. Not only are countries (and thus people and animals) more connected, but they are also bound in a complex interdependence, a set of connections so close that one country can only "tug" on another rather than cause drastic impacts there.
Second, she described how the spread of disease is directly related to the time to diagnosis, and then response. That led directly into the core of the presentation: the three "steps" to battling disease in a global context. These steps include each country addressing: (1) surveillance, (2) will to report and (3) capacity to respond.
Surveillance refers to "keeping an eye out" for recurring or new disease. The ability to monitor, investigate and confirm possible diseases varies from place to place. Further, there's a basic incongruity between the areas where new diseases may appear (densely populated areas and those where animals and humans come in contact) and the tools available to identify outbreaks. In some areas, equipment such as autoclaves is on site, but electricity to power it is not reliably available.
The will to report refers to a country's decision to formally announce that disease has been found within the country's borders. There are many disincentives for reporting disease, including the possibility of trade and travel bans, though in general a country's standing in the world is raised when issues are reported early. That, in turn, helps the flow of funding for response. It's also noteworthy that the World Health Organization (WHO, part of the United Nations) changed its rules in 2007, allowing it to announce an outbreak to the world even before a country officially reports it. The OIE (the World Organisation for Animal Health) is working toward similar goals related to outbreaks in the animal population. For now, reporting is voluntary.
The capacity to respond involves all sorts of response efforts from education to treatment. Perhaps the most important factor in terms of changing policy related to fighting global disease is the fact that the cost of response is far higher than the cost of surveillance. Unfortunately, many funding organizations only want to invest after a threat is identified. As Dr. Brown put it, the funding becomes available after the horse has left the barn.
I also learned a new term: zoonotic. It describes diseases that originate in animals, and it is quite important in these times since the most notable recent diseases in the human population have come from animals. The continued use of the term also reinforces the idea that efforts to mitigate and respond to human disease can't ignore the health of animal populations worldwide.
Web Delivery of Health Data
A session on Web Mapping included papers on health mapping in the UK, a Web-based solution for Montreal and incentives for sharing public health data.
Jessica Wardlow, a student from University College London, made the case for mapping in health. She reported on a pilot study using maps in local primary care trusts. Among the findings: users needed reference data to make sense of any maps (how they compare with neighboring trusts, for example) and enough, but not too many, categories and symbologies to match their needs (they preferred simpler maps over the more familiar OS-like maps). The takeaway for me is that developers of such focused systems still need to craft applications that address the specific needs of the users.
The team from Montreal demonstrated its interesting Web-based atlas, which provides many capabilities even beginners can use to manage data display. "You can change almost everything on the map if you want," said one team member. They did note that by default the initial map created in response to a query was quite useful to a beginner, while a more advanced user might further manipulate the data.
Jeff Christensen of Rhiza Labs (a company that made a bit of a splash in relation to H1N1 mapping) had one big point: defining your technology solution should be one of the last things you do in a mapping project; there are many other things that need to come first.
To begin the discussion, he offered barriers (excuses) to data sharing:
- data too complex
- I'll lose money
- my data stink and I don't want people to know
- data will be "dumbed down" by others
- too complex internally to get it out
- no credit for me if others use my data
- data are sensitive
These barriers, he argued, can in fact be turned into positives. He listed these corresponding responses:
- provide feedback mechanisms to help users make better use of data
- if you give it away you get more back; get metrics and document use
- get it out there and fix it (in other words, suck it up!)
- tools are audience-dependent; offer more than one tool if needed
- get over internal ontology discussions; get something out and then decide on a structure
- build in linkbacks (put in a phone number so people can connect with data author)
- some are sensitive, de-identify/aggregate or start with some public data and push internal data into the public side over time
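The last response above, de-identifying or aggregating sensitive records, can be sketched in a few lines. This is a minimal illustration, not Rhiza's method: hypothetical point-level case records are rolled up to per-region counts, and regions with counts below a threshold are suppressed to reduce re-identification risk.

```python
from collections import Counter

# Hypothetical case records: (region, age). Field names are illustrative.
cases = [
    ("Region A", 34), ("Region A", 51), ("Region A", 8),
    ("Region B", 22), ("Region B", 60),
    ("Region C", 45),
]

def aggregate_cases(records, min_count=3):
    """Aggregate point-level records to per-region counts and suppress
    regions below a minimum count to reduce re-identification risk."""
    counts = Counter(region for region, _ in records)
    return {
        region: (n if n >= min_count else "<%d" % min_count)
        for region, n in counts.items()
    }

print(aggregate_cases(cases))
```

Suppressing small counts (rather than publishing a "1" that points at an individual) is the same trade-off Christensen described: start with what can safely be public, and push more detail out over time.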
The Who, What and How of Mapping
Christensen then went on to what I consider the most important part of the most important session of this, or perhaps any, recent conference in memory: the order for attacking a mapping (or, frankly, any design) project.
Essentially, he observed that we tend to jump over the third step, identifying the user (who), and go directly to the fourth (how).
The correct process looks like this: after the first two steps, the natural impulse is to say, "Then, third: how?" No, stop! Third: who. Fourth: how.
So, how do you define the "who"? Christensen described how to define personas, as is done in advertising. For Rhiza's mapping clients that might mean putting people on a grid with axes from objective to biased in one direction and low science/high science on the other. After mapping some personas, the team needs to select which of the personas to support. It's unlikely "all" will be supportable with a single implementation. He noted that it never works to put out a fully functional app and then trim it back for beginners.
He then went on to the "how," and offered the many possible options (and technologies) available. The trick is to select the one to serve the users identified in the "who" part of the development.
Dr. Michael Shambaugh-Miller of the University of Nebraska used a federal grant to help support populations at risk for flu. Interestingly, the federal and state governments did not know who these people were, so the team built its own list (those who spoke English as a second language, those in assisted living - about eight datasets, plus or minus). The state has 19 public health offices funded by the tobacco settlement. The grant covered installing ArcGIS in each office and training one specially selected individual. These users developed datasets to a standard, such that they could be, and eventually were, knit together in a statewide "Guardian" system. Thus any office could download and run queries against any part of the dataset. Each year there's a "test" flu to ensure the local offices can query to a specific population. Further, the individual offices have begun to use the system for participating in other grants and adding to various health data collections. The next step involves integrating this system with other state health systems. Total cost: $116,000 (and it's sustainable).
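The payoff of the Nebraska approach is that datasets built to one schema can simply be concatenated and queried statewide. A toy sketch, with assumed field names (the actual Guardian schema was not described):

```python
# Hypothetical per-office tables built to a shared schema.
office_a = [
    {"office": "A", "county": "Lancaster", "risk_group": "ESL", "count": 120},
]
office_b = [
    {"office": "B", "county": "Douglas", "risk_group": "assisted_living", "count": 85},
    {"office": "B", "county": "Douglas", "risk_group": "ESL", "count": 300},
]

def statewide_query(offices, risk_group):
    """Merge standardized per-office tables and total one at-risk group,
    mimicking any office querying any part of the statewide dataset."""
    merged = [row for table in offices for row in table]
    return sum(row["count"] for row in merged if row["risk_group"] == risk_group)

print(statewide_query([office_a, office_b], "ESL"))  # 420
```

The interesting part is not the code but the precondition: none of this works unless all 19 offices agreed on the schema up front.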
Dr. Brown returned to provide a primer on the biology of H1N1. It was fascinating, but my biology basics were not sufficient to keep up!
Jeff Christensen returned to tell the story of Rhiza Labs' experience and lessons learned tracking H1N1. The company began working with Dr. Henry Niman in part because his use of Google My Maps for his tracking was about to hit a wall. Dr. Niman looked at news and other reports and manually dropped pins, added blurbs or copied text from sources to build his original My Map. The map, "the only game in town" collecting data from official and many unofficial sources, was widely popular because, unlike official offerings from the Centers for Disease Control and Prevention (CDC) and the WHO, Dr. Niman's map provided perhaps not entirely accurate but seemingly actionable data. The other offerings were more in the vein of "infographics." For example, visitors to CDC or WHO maps couldn't determine if known cases increased from yesterday to today or if a parent might want to keep a child home from school.
The Rhiza Labs team quickly jumped on the challenge and asked: What do people want on a map? The answers based on what they saw and heard included:
- locations of cases
- specific consistent information on cases (this is unofficial data)
- source for each case (for users to determine if they believe it)
- frequent updates
- user-contributed reporting
- downloadable data
- the ability to see data in different ways
So what did Rhiza do? It created a submission form for those who wanted to contribute (to get geocode-worthy addresses) that was quick and dirty, and required a URL source. Contributions were "controlled" by requiring registration. Curators - Dr. Niman and his team, a total of four people - then used that input among other sources.
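The "quick and dirty" form still enforced two rules: a source URL and an address specific enough to geocode. A minimal sketch of that kind of validation, with illustrative field names (not Rhiza's actual schema):

```python
from urllib.parse import urlparse

def validate_submission(report):
    """Reject a crowd-sourced case report unless it carries a source URL
    (so readers can judge credibility) and an address specific enough
    to geocode. Field names are illustrative."""
    errors = []
    url = urlparse(report.get("source_url", ""))
    if url.scheme not in ("http", "https") or not url.netloc:
        errors.append("source_url must be a valid http(s) link")
    address = report.get("address", "").strip()
    if len(address.split(",")) < 2:  # crude check: want at least city + state
        errors.append("address must include at least city and state/region")
    return errors

print(validate_submission({"source_url": "http://example.com/article",
                           "address": "Pittsburgh, PA"}))  # []
```

Lightweight checks like these shift quality control from the four curators to the form itself, which matters when submissions scale.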
The data, made available via Creative Commons license, were used by the likes of Walgreens to manage its Tamiflu supply chain and businesses that wanted to track the outbreak in the context of their distributed employees, among others. It also popped up in an emergent reality game in Hawaii and an iPhone app.
I was lucky enough to get to ask my question among the many, many from the attendees. I wanted to know what single aspect of the implementation Rhiza would have changed in retrospect. Christensen quickly pointed to a tool to ensure incidents were not mapped more than once. If you are at all interested in volunteered geographic information for health or other purposes, I suggest you keep an eye on Rhiza Labs.
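Christensen's wished-for duplicate check can be approximated simply: treat two reports as the same incident if they share a date and their coordinates match after rounding. This is my sketch of the idea, not Rhiza's implementation:

```python
def dedupe_incidents(incidents, precision=2):
    """Drop reports that likely describe the same incident: same date and
    coordinates equal after rounding (precision=2 is roughly 1 km of
    latitude). Keeps the first report seen for each key."""
    seen = set()
    unique = []
    for inc in incidents:
        key = (round(inc["lat"], precision),
               round(inc["lon"], precision),
               inc["date"])
        if key not in seen:
            seen.add(key)
            unique.append(inc)
    return unique

reports = [
    {"lat": 40.4406, "lon": -79.9959, "date": "2009-05-01"},
    {"lat": 40.4409, "lon": -79.9961, "date": "2009-05-01"},  # near-duplicate
    {"lat": 40.4406, "lon": -79.9959, "date": "2009-05-02"},
]
print(len(dedupe_incidents(reports)))  # 2
```

Rounded-coordinate keys are blunt (two distinct cases a block apart would collapse into one), which is exactly why a human-reviewed tool, rather than a silent filter, was what Christensen had in mind.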
The ideas from the first breakout session that I attended, which related to identifying and serving an audience with maps, continued to pop up throughout the day. At lunch, I joined a roundtable discussion about Google Earth and public health. One participant noted that her organization was launching a data portal with some mapping but they had heard suggestions they should look into Google Earth in the future. She'd never used Google Earth and was curious to learn about it. I could not help going back to the morning's discussion and thinking, "Google Earth is great, but who are you serving and what do you hope to achieve?" Only after answering those questions can you consider the "how." Another participant jumped in to suggest Google Earth Pro. But again, not having the why or the who, I was concerned that the suggestion, while well-meant, jumped the gun. I was disappointed that given Google's involvement and interest in health data and records, no one from the company or Google.org attended.
Another theme reappeared in papers and discussions: the challenge of delivering data and tools to both novice and expert users. That seems to plague organizations worldwide. Also, the challenges of acquiring and using health data appeared. My sense was that those who had data, mapped them (one paper was a series of maps correlating health disparities with other factors), and those who didn't have data, struggled to get them any way possible (as was done in the Nebraska and Rhiza Labs presentations noted above).
Perhaps most noteworthy during my single day at the three-day event was the confirmation that the challenges in health GIS mirror those across all uses of GIS. Further, those challenges are not that different from the ones faced 30 or 40 years ago in the early years of digital mapping technology.