Let’s say you are working on a greenfield project to develop a new game or web application. Great! With a greenfield approach you get to choose where to host your project, which third-party services to use, and which to build yourself. For example, hosting on Azure might be a great choice because your DevOps and SRE engineers are already familiar with the platform.
It may also be worth looking into some of the machine learning services on AWS so the team can innovate faster and iterate on new features based on how players are actually using the game. For example, with SageMaker Canvas your team can quickly generate ML predictions about how players navigate the virtual world and feed that insight back into the game.
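As an illustration, here is a minimal sketch of feeding such a prediction back into the game, assuming the Canvas-built model has been deployed to a real-time SageMaker endpoint; the endpoint name and feature payload are hypothetical placeholders:

```python
# A minimal sketch, assuming a Canvas-built model has been deployed to a
# real-time SageMaker endpoint. Endpoint name and features are hypothetical.
import json
import boto3

runtime = boto3.client("sagemaker-runtime")

def predict_next_zone(player_features: dict) -> dict:
    """Send a player's recent navigation features to the endpoint
    and return the model's prediction."""
    response = runtime.invoke_endpoint(
        EndpointName="player-navigation",   # hypothetical endpoint name
        ContentType="application/json",
        Body=json.dumps(player_features),
    )
    return json.loads(response["Body"].read())

# Example: feed the prediction back into the game loop.
prediction = predict_next_zone({"session_length": 42, "zones_visited": 7})
```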
Or perhaps there is a need to send real-time data for monitoring and dashboarding to an observability platform like Datadog or New Relic. There are various challenges and many solutions, so let’s explore some.
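To make the real-time part concrete, here is a minimal sketch of shipping a custom gauge metric to Datadog’s v1 metrics API with the requests library; the metric name, tags, and API key environment variable are assumptions for illustration:

```python
# A minimal sketch of posting a real-time gauge to Datadog's v1 metrics API.
# Metric name, tags, and the DD_API_KEY env var are illustrative assumptions.
import os
import time
import requests

def send_gauge(metric: str, value: float, tags: list[str]) -> None:
    payload = {
        "series": [{
            "metric": metric,
            "points": [[int(time.time()), value]],
            "type": "gauge",
            "tags": tags,
        }]
    }
    requests.post(
        "https://api.datadoghq.com/api/v1/series",
        headers={"DD-API-KEY": os.environ["DD_API_KEY"]},
        json=payload,
        timeout=5,
    )

send_gauge("game.players.online", 1234, ["env:prod", "region:eu-west-1"])
```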
Disparate Systems
The approach outlined above is common and is sometimes referred to as Disparate Architecture or the Democratisation of Technology. The benefits are obvious: using services and applications that are already available and battle-tested speeds up delivery.
Services can be separated along the separation-of-concerns principle, which means the application can be scaled more easily: capacity is added with more service-specific instances as needed, rather than by scaling the entire system.
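As a sketch of what service-specific scaling looks like in practice, assuming the services run as Kubernetes deployments (the deployment and namespace names here are hypothetical):

```python
# A minimal sketch of service-specific scaling with the official Kubernetes
# Python client: only the deployment under load grows, not the whole system.
from kubernetes import client, config

config.load_kube_config()                 # or load_incluster_config()
apps = client.AppsV1Api()

def scale_service(deployment: str, replicas: int, namespace: str = "game") -> None:
    """Scale one service's deployment without touching the others."""
    apps.patch_namespaced_deployment_scale(
        name=deployment,
        namespace=namespace,
        body={"spec": {"replicas": replicas}},
    )

# Example: the matchmaking service is hot, so scale only that service.
scale_service("matchmaking", replicas=10)
```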
Disparate, or “cross-cloud”, hosting is more common when extending or growing existing applications than in greenfield projects, because of the risks involved in moving established components and the commercial uptime agreements attached to them.
Security and Monitoring
Even with greenfield gaming projects, the benefits of disparate hosting can outweigh the costs, and deliberate decisions need to be made to architect platforms separately, as described above. One of the significant costs of this decision involves security and monitoring: this approach often results in application ‘blind spots’ where no single part of the interdependent system is responsible for monitoring.
Additionally, ‘alert fatigue’ sets in when multiple systems report the same issue, when an issue is already known and considered low priority, or when an issue is over-reported by an overactive alerting mechanism. All of these problems can cause more serious issues to be missed or overlooked.
The solution to these points is often a consolidation of application reporting and protection using a Cloud-Native Application Protection Platform (CNAPP). CNAPPs combine multiple cloud security capabilities, such as posture management, compliance, workload protection, entitlement management, and container security, into a single platform.
This results in a lower total cost of ownership and faster, more consistent alerting and reporting. Where separate security platforms increase cost and complexity, CNAPPs lay the foundation of a continuous security fabric, often without significant investment in tooling or developer attention.
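As a concrete illustration of one thing consolidated alerting buys you, here is a minimal sketch of fingerprint-based deduplication; the Alert shape and fingerprint fields are illustrative assumptions, not any particular vendor’s schema:

```python
# A minimal sketch of the deduplication a consolidated alerting pipeline
# performs: alerts describing the same issue collapse into one, regardless
# of which monitoring system raised them. The schema is illustrative only.
from dataclasses import dataclass

@dataclass(frozen=True)
class Alert:
    source: str      # which monitoring system raised it
    service: str
    symptom: str
    severity: str

def dedupe(alerts: list[Alert]) -> list[Alert]:
    """Collapse alerts that describe the same underlying issue."""
    seen: set[tuple[str, str]] = set()
    unique = []
    for alert in alerts:
        fingerprint = (alert.service, alert.symptom)   # ignore the source
        if fingerprint not in seen:
            seen.add(fingerprint)
            unique.append(alert)
    return unique

# Three systems reporting the same disk issue collapse into a single alert.
alerts = [
    Alert("datadog", "matchmaking", "disk_full", "high"),
    Alert("newrelic", "matchmaking", "disk_full", "high"),
    Alert("cloudwatch", "matchmaking", "disk_full", "high"),
]
assert len(dedupe(alerts)) == 1
```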
Cloud Native
Arguably the most crucial decision is to build a cloud-native platform from day one, as opposed to a system that grows organically over time and depends on dedicated infrastructure. Cloud-native (also called cloud-first) is, above all, a decision to build an application as cloud-hosted rather than on dedicated servers.
While this may seem obvious to some, it is vital that this is communicated as a deliberate architectural decision at the start of any development project. Stating the decision clearly tells different members of the team different things: developers and architects know they can use specific pre-built services and features; testers can use the platform’s native tools; and, most significantly, DevOps and Site Reliability Engineers know and understand the boundaries of the system they need to protect and secure.
This concept goes deeper in two specific ways: server vs. serverless, and microservice architecture. Cloud-native platforms can still be run on dedicated servers, and although some look down on this approach, it does have its advantages, though admittedly they are short-term.
For example, it may be necessary to create an MVP (minimum viable product) or POC (proof of concept) application very quickly. These are sometimes used to prove features to potential investors or to show that a platform or application is not just vapourware. Sure, we certainly want our application to be built in a scalable way eventually, but that is what major version releases are for, right?
Traditional server-based computing may be faster to stand up initially, but it has fixed resources that must be provisioned, and developers must consider the limitations of the underlying physical (or virtual) hardware. Serverless computing, on the other hand, has none of these limitations from a developer’s perspective: the cloud infrastructure takes care of virtual machine management, hardware allocation, and specific tasks such as multi-threading. Containerization is a key element, though, and does need to be considered during development to take full advantage of these benefits.
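Here is a minimal sketch of what this looks like from the developer’s side, in the AWS Lambda handler style; the event fields are hypothetical:

```python
# A minimal sketch of a serverless function in the AWS Lambda style: no
# server provisioning, threading, or capacity planning appears in the
# application code. The event fields are hypothetical.
import json

def handler(event, context):
    """Record a player score; the platform handles scaling and hardware."""
    body = json.loads(event.get("body", "{}"))
    player, score = body.get("player"), body.get("score")
    # ... persisting to a managed store (e.g. DynamoDB) would go here ...
    return {
        "statusCode": 200,
        "body": json.dumps({"player": player, "score": score, "saved": True}),
    }
```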
Agreeing on a microservice-based approach is a similar decision point to be discussed and settled upfront. Not using microservices from day one will likely mean pain later in the software development lifecycle: rewriting, or at a minimum restructuring, code, both of which are costly and time-consuming.
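For illustration, here is a minimal sketch of a microservice that owns a single concern, assuming FastAPI; the endpoints and in-memory store are placeholders for this example:

```python
# A minimal sketch of one microservice owning a single concern
# (leaderboards). Other concerns (matchmaking, inventory) would live in
# their own independently deployed services.
from fastapi import FastAPI

app = FastAPI(title="leaderboard-service")

SCORES: dict[str, int] = {}      # in-memory stand-in for a real store

@app.post("/scores/{player}")
def submit_score(player: str, score: int):
    """Record a score, keeping each player's best."""
    SCORES[player] = max(score, SCORES.get(player, 0))
    return {"player": player, "best": SCORES[player]}

@app.get("/top")
def top(n: int = 10):
    """Return the top-n players by best score."""
    ranked = sorted(SCORES.items(), key=lambda kv: kv[1], reverse=True)
    return ranked[:n]
```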