Whitepaper: Driving Operational Efficiencies with Digital Technologies
When PGS launched the Triton survey, they knew it would yield the largest seismic dataset they had ever collected – and when acquisition of the 660-terabyte dataset concluded more than 8 months later, they faced the most complex data processing and imaging challenge in their history. PGS and companies like them have typically relied on clusters for such workloads, but a new era of massive data volumes and tightening margins led PGS to conclude that their existing compute technology could no longer meet their demands. This case study explores the supercomputing solution PGS chose, including the before-and-after scenario and the key decision factors around system design, software environment, and team expertise.
After reading this case study, you’ll gain a deeper understanding of the attributes of an ideal seismic imaging supercomputer:
- Enables implementation of new algorithms
- Is cost-effective for production applications
- Seamlessly fits into existing technology pipeline
- Reduces development time
- Supports the most challenging, largest-scale imaging problems