
Why we built Global Relay Cloud
Global Relay builds all of its technology from the ground up. We like things to work well together, to be beautifully designed, and to be built to a consistently high standard. Our Chief Strategy Officer, Alex Viall, explains why.
“You seem to be an outlier – a lot of your competitors use public cloud services like AWS, Azure or Google Cloud.” We hear this on a regular basis, and our answer goes to the heart of how Global Relay approaches designing and delivering solutions to our customers.
This article explores the key reasons why we believe that, for our workloads and use cases, the benefits of building and owning our cloud infrastructure are well worth the effort and expense, and far outweigh the convenience and perceived scalability of the public cloud.
We build rather than acquire
There is a fundamental principle that has driven Global Relay since the beginning. We build our own technology patiently. We like things to work well together, in one system, to be beautifully designed, and to be built to a consistently high standard. That “one system” is a fully interconnected set of technologies, processes and resources designed to achieve compliant recordkeeping and messaging. This encompasses hardware, networking, security and management tools – all supporting the application software that our teams have developed.
Workloads may be distributed across multiple device types, including traditional servers, storage devices and GPUs, but they must work effectively together to deliver a cohesive, reliable and secure solution. We need to understand and control as much of that system as possible, not only to deliver the service reliably, but also to inject innovation and value into every available component.
Data security and compliance
Our highly regulated customers have the strictest obligations over how their data is stored, processed and secured. While public cloud providers implement strong general-purpose security measures, they cannot match the level of security we can achieve, because Global Relay controls the entire infrastructure stack and deeply understands the application layer, enabling tailored protections specific to the domain in which we operate.
Data security is a vast topic, but by way of example, let us focus on just one aspect. Confidentiality is the principle of ensuring that sensitive data (including documents and communications) is accessible only to authorized individuals within an organization. The concept is deceptively simple, but delivering it requires a complex interoperation of access controls, encryption, authentication mechanisms and monitoring systems, together with a deep understanding of both the technical security landscape and the specific operational needs of the business.
Aside from our adherence to global security standards such as ISO 27001 and SOC 2, we are constantly questioned, examined and probed by some of the brightest and best-informed security minds in the financial services community. Our answers are precise, informed and based on evidence. We feel this is only possible because we designed, built and now operate the entire system from the ground up.
Perhaps more important is our ability to respond quickly to emerging threats, or to implement additional controls that enable the safe use of new capabilities such as large language models.
Public-interest technologist Bruce Schneier said, “If you think technology can solve your security problems, then you don’t understand the problems, and you don’t understand the technology.” His point is that security is just as much about people as it is about technology. Our customers can talk directly to the people who manage their data, the product teams who develop our applications and the leaders who run the offices that we own. We can explain, and debate, why we made every design decision or chose one piece of hardware over another.
There is no limit to the time we will spend with existing or prospective customers to provide assurance that their data is safe, secure and available to them when they need it. We can only make that commitment because we choose to build and run our own infrastructure.
Cost certainty and control
Search for “unexpected cloud charges” and you will find plenty of evidence to support our next justification for building our own cloud: accounts of hundreds of thousands of dollars in overage, egress fees and other unexpected costs, often accumulated over a few hours or a weekend as the unintended consequence of common operations. Raising debug logging levels, deleting large quantities of files or moving data between cloud sites have all triggered “shock bills”.
We don’t like surprises, and we know that our customers don’t either – particularly when it comes to expenditure beyond agreed budgets. Building our own cloud infrastructure was a significant investment, but it now gives us the ability to understand, control and accurately forecast costs. We can build commercial models that work for all parties and are sustainable and predictable. We store petabytes of data for tens of thousands of customers; at that scale, pure usage-based pricing models from public cloud providers are far less attractive, particularly over the long term.
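To put rough, indicative numbers on that point (these are typical published egress rates for a major public cloud, not a quote for our specific workloads): internet data transfer out is commonly priced at around $0.05 to $0.09 per gigabyte. At those rates, retrieving a single petabyte of archived data would cost in the region of $50,000 to $90,000 in transfer fees alone, before any storage, compute or request charges – and an archive is only useful if customers can retrieve their data whenever they need it.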
Furthermore, we need a range of non-production environments to innovate, develop new capabilities, test ideas and prepare applications for production deployment. Some of these environments need to be near-identical to their production counterparts, and we don’t want to compromise on them, or be constrained by or concerned about overage charges, when it comes to stress-testing our solutions.
Risk management
There have been many examples over the years of failures and breaches in commoditized cloud services, including issues related to availability, scalability, security and deployment. Consolidation in Big Tech brings a new set of risks that has not gone unnoticed by the world’s financial regulators. The CrowdStrike outage of July 2024 – which Global Relay avoided – remains an unnerving example of how a single issue can bypass multiple redundant measures, introducing significant risk to an organization’s security and resilience posture.
Global Relay Cloud was built to avoid such wide-scale single points of failure, and we certainly don’t want to be in a queue behind thousands of other companies waiting to get their services back online, with minimal communication throughout the process. Preventing issues through meticulous design, extensive testing and constant monitoring is the goal, but when an issue does occur, we must be in full control of how it gets resolved.
By designing, building and managing our own cloud, Global Relay can identify, understand, manage and mitigate these infrastructure-related risks.
The bottom line
We have referred to our instinct to build first rather than buy where it makes sense. But of course there is a limit: we are a major buyer of hardware technology, we partner with technology companies, communication carriers and service providers, and we are an enthusiastic adopter of open-source software. When we decide to build something, the fundamental question we ask ourselves is, “Will building this deliver greater value and a more reliable, higher-quality service to our customers?”
There are use cases and workloads where we think the public cloud makes total sense: if Global Relay didn’t offer an archive solution and were purely an orchestration and integration platform; if we only integrated with AI products rather than developing and hosting our own; if we didn’t have the ambition to design, develop and launch new and unique products to the market; or if we were constrained by the shorter-term needs and demands of external investors. Fortunately, none of these things apply to Global Relay, which is why we believe Global Relay Cloud is the best choice for us and our customers.