GIS, Data and the San Bruno Pipeline Disaster

By Nora Parker

A San Francisco Chronicle article published on Feb. 13, 2011, seemed to suggest that software upgrades and errors in maintaining the utility’s information system were contributing factors to the lack of maintenance of the pipeline (see reports and comments on All Points Blog).
Utilities are working to prepare for the "smart grid" era, but Bob Montgomery, Infotech’s utilities manager, believes that the role and complexity of geospatial information and “smart data” are being grossly underestimated, and that many utilities are missing critical information. In this interview, Montgomery addresses some of the issues that lead utilities into such situations.

Directions Magazine (DM): The San Francisco Chronicle article makes various statements about the role of technology in the accident. It sounds like there might be the possibility of scapegoating the technology (which comes up in the follow-up discussions at our blog). Can you comment?

Bob Montgomery (BM): If I ever failed an exam in school, I could not blame the paper, pen or calculator I used for the exam. Similarly, I believe we cannot blame technology for problems that arise from the implementation of that technology. The article suggests software complexity and software upgrade problems contributed to the pipeline problem. This is a bad notion for our industry, and it is not accurate. Any attempt to upgrade an information system, especially a system supporting public safety, is essentially a positive thing. Any system can have errors and omissions to the extent tolerated by budgetary and logistical constraints; project design and implementation typically try to minimize, or at least control, them. Recent trends show an increased focus on rigorous system testing that goes beyond simple linear testing and really looks at how systems will perform in a crisis. This type of testing is one way to help ensure that new technology and system upgrades add value and do not add to operational issues.

My experience also tells me that no one wants to be the first to install and try new versions of software. Sometimes it is a matter of “if it ain’t broke, then don’t fix it.” If a system is performing acceptably, then users typically resist any interruption or retraining if they do not foresee a clear improvement. Users resist change for the sake of change. How many people are still running old versions of Windows, Microsoft Office or other applications? Quite often users are not aware of available upgrades, but more often they do not want to take the perceived risk that may result from a system upgrade.

The bottom line on technology is that improvements and new releases continue, and upgrades are periodically available. Most upgrades are scheduled and made on a priority basis. I believe utilities typically do a good job of weighing the effort to upgrade a system against any potential improvement in performance, functionality, interoperability and adherence to regulations and standards. If faced with a time constraint for implementing changes, utilities may need to depend on system providers or other experts for assistance in design, testing and implementation.

DM: What do you think utilities need to do to get ahead of this enormous data problem? What about the problem of maintenance of the infrastructure itself? What needs to happen to ensure that pipelines (and other infrastructure) are really safe?

BM: While there is never a perfect information system that depends on human data entry, there is an ever-increasing demand for more reliable information to help prevent tragedies like this from occurring. Standards for information assurance are on the rise, and there are some key areas on which utilities are focusing to improve their confidence in infrastructure information. These include:

  1. Positional Accuracy – making sure the geographic positions of features represented on the computerized maps are close enough to the real world to make correct judgments about the relationships with other features in the real world, or on other maps.
  2. Completeness – using validation and cross-check techniques to discover and correct places where information exists in the real world but is missing from the infrastructure data. The bedrock for this is comparison against newly collected field inventory data.
  3. Currency – reducing the time between real-world changes and updates to the infrastructure database. A recent survey indicated that only 10% of utilities update changes to infrastructure in less than 10 days. Many utilities take as long as six months to post changes to their infrastructure into their computerized systems. This time gap has to be reduced in order to support day-to-day operations.
  4. Authenticity – improving the information about the source of the data, how it was processed and by whom. This also means documenting processes, conducting audits of the process, eliminating duplicate data systems and assigning accountability for accuracy, completeness and currency.
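The four focus areas above can be sketched as simple record-level checks. The sketch below is illustrative only: the record fields, the audit rules and the 10-day currency target (taken from the survey figure cited above) are hypothetical, not any utility's actual schema.

```python
from datetime import date, timedelta

# Hypothetical pipeline-segment records; all field names are illustrative only.
segments = [
    {"id": "SEG-001", "lat": 37.6213, "lon": -122.4110, "material": "steel",
     "last_field_update": date(2011, 1, 25), "source": "2010 field inventory"},
    {"id": "SEG-002", "lat": 37.6301, "lon": -122.4122, "material": None,
     "last_field_update": date(2010, 8, 2), "source": None},
]

REQUIRED = ("lat", "lon", "material")  # completeness: attributes that must be populated
MAX_LAG = timedelta(days=10)           # currency: the 10-day target cited above

def audit(record, today=date(2011, 2, 1)):
    """Return a list of data-quality issues for one record."""
    issues = []
    for field in REQUIRED:  # completeness check
        if record.get(field) is None:
            issues.append(f"missing {field}")
    if today - record["last_field_update"] > MAX_LAG:  # currency check
        issues.append("stale: field update older than 10 days")
    if not record.get("source"):  # authenticity: the data source must be documented
        issues.append("undocumented source")
    return issues

for seg in segments:
    print(seg["id"], audit(seg) or "OK")
```

Positional accuracy is harder to check in isolation; it normally requires comparison against survey-grade field measurements rather than rules applied to a single record.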

If engineers and management have confidence in the information available to them, then decisions and contingency plans can be made, including decisions related to public safety.

DM: Are you working with utilities now to help them get a handle on their combined data and infrastructure problems?

BM: We are involved in various phases of work with many utilities. The utility's goal often involves regulatory compliance, such as a Pipeline Integrity Management Program or a Distribution Integrity Management Program (DIMP). The first step toward that goal is always the same: increasing knowledge of the system.

We are conducting workshops that include assessing the quality, completeness, connectivity and positional accuracy of data. The assessment also reviews processes to look at currency and quality control of data. One of the objectives of the assessment is to help ensure the data will support the engineering, accounting and day-to-day operational needs of the utility. The outcome of these assessments is typically a set of recommendations based on known best practices in the industry.

We have also been participating in the design and testing of system interoperability; that is, making sure systems talk to each other and that applications are empowered by the data. For other utilities, we are heavily involved in performing conversion, migration, conflation and maintenance services in support of their corporate goals.
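As a concrete illustration of the conversion and migration work described above, here is a minimal sketch of mapping records from a legacy schema into a GIS schema through an explicit field mapping. Every schema and field name here is hypothetical, and real conflation work also involves geometry matching, which this sketch omits.

```python
# Hypothetical mapping from legacy work-order fields to GIS schema fields.
LEGACY_TO_GIS = {
    "pipe_id": "asset_id",
    "x_coord": "longitude",
    "y_coord": "latitude",
    "install_yr": "install_year",
}

def migrate(legacy_record):
    """Map a legacy record onto the GIS schema, tracking unmapped fields."""
    gis_record, unmapped = {}, []
    for field, value in legacy_record.items():
        target = LEGACY_TO_GIS.get(field)
        if target:
            gis_record[target] = value
        else:
            unmapped.append(field)  # surface fields needing a mapping decision
    return gis_record, unmapped

record = {"pipe_id": "P-1042", "x_coord": -122.41, "y_coord": 37.62,
          "install_yr": 1956, "crew": "A7"}
gis, leftover = migrate(record)
```

Tracking unmapped fields explicitly, rather than silently dropping them, is one way to keep the authenticity and completeness goals discussed earlier visible during a migration.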

DM: Do you think utilities are taking this problem seriously, or are they trying to “kick the can down the road”? What barriers are utilities facing in addressing these problems; is money the biggest one?

BM: A tragic event for any utility is a wake-up call for all utilities. Responsible stewardship of public utilities requires setting priorities for the infrastructure information improvements that have the greatest impact on public safety and delivery efficiency. I believe all utilities are taking this very seriously. Utilities realize that community well-being, public trust, additional regulation and economic health may all be affected by how they handle this issue.

I believe utilities are working toward compliance with regulatory requirements that are sometimes implemented in phases. Waiting for published standards, interpretation of the regulations and government approval of plans can be a barrier to full implementation.

Editor’s note: On Feb. 10, Directions Media produced a webinar with Infotech that spotlighted the fact that many utilities need better information about their networks - the kind of information that was apparently lacking in the San Bruno case.


Published Tuesday, March 8th, 2011



