Directions Magazine (DM): What do you believe is driving image processing applications to the cloud?
Rui Gomes da Silva (RGdS): We see cost and productivity as the primary forces pushing image processing onto the cloud. Cloud computing saves users the high cost of acquiring desktop image analysis machines (both hardware and software) and allows organizations to offset maintenance and support fees. With cloud solutions like ours, customers can process anything from a few thousand up to several million remotely sensed images, with operations such as change detection, surface classification, cloud detection, object identification and a range of others. A wide range of image knowledge extraction can now be done more cheaply and with higher accuracy than ever before.
If organizations set up their own desktop image processing systems, they typically also have to invest heavily in training operators. This is another barrier; it can take months or years to fully train GIS operators so that they obtain acceptable levels of accuracy from desktop systems.
DM: What special technical issues have to be addressed to do cloud-based image processing?
RGdS: Users need a browser and access to the Internet - that's about it. With the on-demand system, there is no hardware or software to purchase; you can just log in and start processing. For customers who have very large, regular processing requirements, we can also deploy systems on premises alongside existing systems to meet their specific needs.
The common alternative is to build your own compute cluster using existing software. Imagine the costs of having to purchase hardware (or pay for every hour of virtual server time), pay for several server licenses, pay to set up and manage batch processing software, as well as train all of your GIS operators to use the software. It's too time consuming and complex for many organizations. Our approach lets users get results quickly and reliably.
DM: What have you experienced in improving processing speed, and how have you achieved that?
RGdS: In our case, we achieve our speedup at both the hardware and software levels. On the hardware level, we use the graphics processing unit (GPU) to run many of the analytical processes - we gain a significant speedup from this alone. With NVIDIA predicting that GPUs will reach 15 TFLOPS (trillion floating-point operations per second) within the next few years, this immense processing capacity is making supercomputing power available at a lower cost than ever before. On the software level, we've stepped away from the "desktop" batch processing approach and built supercomputing infrastructure designed for handling "large data."
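To see why per-pixel imagery operations map so well onto GPUs, consider this minimal CPU sketch (using NumPy; not Incogna's actual code) of the data-parallel pattern involved: the same arithmetic is applied independently to every pixel, which is exactly the workload a GPU executes with one thread per pixel. The band layout, index formula and threshold here are assumptions for illustration only.

```python
import numpy as np

# Hypothetical 4-band image, shape (bands, height, width); the band
# order (red, green, blue, near-infrared) is an assumption.
rng = np.random.default_rng(0)
image = rng.random((4, 1000, 1000), dtype=np.float32)

red, nir = image[0], image[3]

# NDVI = (NIR - red) / (NIR + red), computed for every pixel at once.
# Each pixel depends only on its own band values, so on a GPU the same
# arithmetic maps naturally onto one thread per pixel.
ndvi = (nir - red) / (nir + red + 1e-6)

vegetation_mask = ndvi > 0.2  # simple threshold, one boolean per pixel
print(vegetation_mask.sum(), "of", vegetation_mask.size, "pixels flagged")
```

Because there is no dependence between pixels, a million-pixel image is simply a million independent micro-tasks - the kind of embarrassingly parallel workload where GPUs deliver their largest speedups.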
To get down to the numbers, take the case of a client of ours who recently processed over 3,600 aerial images (25 million pixels each) covering an entire city at high resolution. They needed to perform surface classification of city green-space versus urban areas. They spent only about a day training the system for the entire project - once they clicked "export," our systems took care of the rest. Another client of ours needed to process 4 million km² of imagery every single day to detect clouds.
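The training workflow described above - an operator labels a handful of example pixels and the system classifies the rest - can be sketched with a simple nearest-centroid classifier. This is a generic illustration, not Incogna's actual algorithm; the feature values and class labels are invented.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical scene: one 3-element spectral feature vector per pixel.
pixels = rng.random((500, 3), dtype=np.float32)

# "Training": the operator labels a few example pixels per class.
# These example sets are synthetic stand-ins for operator clicks.
green_examples = pixels[:5] * np.array([0.2, 0.9, 0.2], dtype=np.float32)
urban_examples = pixels[5:10] * np.array([0.7, 0.7, 0.7], dtype=np.float32)

# Nearest-centroid classification: summarize each class by the mean of
# its labeled examples, then assign every pixel to the closest class.
centroids = np.stack([green_examples.mean(axis=0),
                      urban_examples.mean(axis=0)])
dists = np.linalg.norm(pixels[:, None, :] - centroids[None, :, :], axis=2)
labels = dists.argmin(axis=1)  # 0 = green-space, 1 = urban
```

The key point is the division of labor: the day of human effort goes into labeling examples, while the per-pixel classification step is pure data-parallel arithmetic that scales across thousands of images.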
We make it possible for clients to extract actionable knowledge in hours or days, not weeks or months.
DM: What application areas can benefit from cloud-based image processing?
RGdS: Our image processing system takes tasks that were once very difficult, such as counting all oil well pads on the planet or classifying forests covering entire countries, and makes them possible. We know that the amount of data is also growing. Imagery is the core source of data for remote sensing, and as more satellites are launched and ever higher resolution cameras become available, the amount of data is projected to grow significantly in the years to come. Trends like these make us very excited about the future, because high scalability is at the core of our cloud-based technology.
Some of the tasks we currently handle include change detection to help monitor disasters or urban sprawl, object detection (e.g., oil wells), pervious/impervious surface classification, tree counting, tree canopy delineation, crop and tree species classification, surface classification, cloud/shadow detection and a variety of other interesting tasks. Many customers also need to extract knowledge from multiple data sources at once, which makes our robust, easy-to-use approach very useful.
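Change detection, the first task in that list, can be illustrated in its simplest form as pixel-wise differencing of two co-registered images. This toy sketch (not the production method; the threshold and simulated change are assumptions) flags pixels whose brightness changed beyond a threshold between two acquisitions.

```python
import numpy as np

rng = np.random.default_rng(2)

# Two hypothetical co-registered grayscale images of the same area,
# taken at different times (values in [0, 1]).
before = rng.random((256, 256), dtype=np.float32)
after = before.copy()
after[100:120, 50:90] += 0.5  # simulate new construction in one block

# Pixel-wise change detection: flag pixels whose brightness changed
# by more than a threshold between the two acquisitions.
threshold = 0.3
changed = np.abs(after - before) > threshold

print("changed pixels:", int(changed.sum()))  # the simulated 20x40 block
```

Real systems add radiometric normalization and registration steps before differencing, but the core operation remains independent per pixel - which is why it parallelizes so well across large image archives.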
DM: How do you deal with security issues for companies with proprietary processes or data?
RGdS: Our security professionals monitor all access to our data centers and extensively review all aspects of the system's data access points. In many circumstances, hosting on cloud providers can, in fact, be more secure than "doing it yourself." In many organizations, IT departments are under constant strain to secure their networks - it is a lot of work to keep all operating system and software security patches up to date, as well as to think about and secure every possible attack vector. Not every organization can afford to hire dedicated security professionals to perform extensive intrusion testing. But for cloud computing companies, extensive security knowledge is absolutely essential. Highly scrutinized security is a top priority for cloud computing services such as ours.
For customers who cannot transfer data outside of their network, either because they have very large volumes or must meet national security requirements, we provide internally deployed solutions in which accessibility is controlled by the organization itself. Examples would be companies that own satellites or the defense industry. The benefits of our technology remain strong whether the cloud is internal or at one of Incogna's secure datacenters.