If you say a person or organization “goes to great lengths” to achieve something, it means they try very hard and perhaps do extreme things to accomplish their goal.
One example of “going to great lengths” that I’ve seen with traditional companies is how they go to great lengths to “hide” the cloud from their pool of potential technical consumers, such as developers. Instead of saying, “here it is…” they block or restrict users from direct consumption. Developers don’t log in to Azure, AWS, or Skytap directly; they go to the “internal corporate portal,” fill out a web form describing what they want, and submit it. Then someone eventually processes the request and creates what is needed.
In extreme cases, organizations create “cloud teams” and staff those groups with people who build cloud resources on behalf of downstream technical consumers, like developers. Potential consumers must submit a request with a detailed design document specifying everything they want. Then the “cloud team” schedules the project and delivers it sometime in the future, the whole process mimicking a classic waterfall approach to implementing systems.
Why do things this way? It always seems to come down to 4 things:
- Costs – “If we give open access we will end up with a huge bill!”
- Compliance – “Everything built must meet our security standards.”
- Complexity – “Most users will waste time trying to figure it out.”
- Control – “Our group has always coordinated access to resources.”
In traditional companies, it always seems to be some variation of this list. And when you rebut with something like, “Do you think Facebook, Twilio, Netflix, etc. restrict cloud access for their developers?” the answer is usually: “Those are elite tech companies, and we’ve always done it the way we do it now…”
In most cases, there is no single argument you can make that will expand the range of possibilities in someone’s thinking, but you can offer facts for consideration.
COSTS
“If we open up the cloud to everyone, then all the developers will endlessly create VMs and associated resources and services, and we will end up with a huge bill.”
There are examples of this if you search for some variation of “unexpected cloud costs,” especially from a few years ago when cloud adoption was just taking off. But what about today? What can be done to control costs? There are lots of tips and ideas for controlling cloud costs, but I’ll offer one simple possibility that directly applies to developers and development groups and is not often mentioned in articles:
Implement a cloud resource quota per developer or dev group.
In AWS, you can do this with IAM; in Azure, you can use resource groups; and in Skytap, you can assign resource quotas at multiple levels, down to a per-user basis. The net-net is that there is a way to restrict the number and amount of resources a single user can consume. So whether you have 10 or 1,000 developers, you can use these mechanisms to limit, and then project, what your consumption will be. If I have 100 developers and I put a limit on the number of VMs, the gigabytes of RAM, and the amount of storage each developer can consume, haven’t I just created a “cap” on consumption? The quota concept gives you a path to a predictable bill every month, as the sketch below illustrates.
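Here is a minimal sketch of that quota check in Python using boto3 against AWS EC2. It assumes every VM carries an “Owner” tag identifying the developer; the caps and the RAM lookup table are illustrative numbers I’ve made up, not defaults from any platform.

```python
import boto3

# Illustrative caps, not platform defaults: 5 VMs and 16 GB of RAM per developer.
MAX_VMS = 5
MAX_RAM_GB = 16

# Partial lookup of RAM per instance type; a real tool would cover the full catalog.
INSTANCE_RAM_GB = {"t3.micro": 1, "t3.small": 2, "t3.medium": 4}

def within_quota(owner: str, requested_type: str) -> bool:
    """Return True if `owner` may launch one more VM of `requested_type`."""
    ec2 = boto3.client("ec2")
    # Count only live instances tagged with this developer's Owner tag.
    # (Pagination is ignored here for brevity.)
    resp = ec2.describe_instances(
        Filters=[
            {"Name": "tag:Owner", "Values": [owner]},
            {"Name": "instance-state-name", "Values": ["pending", "running"]},
        ]
    )
    instances = [i for r in resp["Reservations"] for i in r["Instances"]]
    used_ram = sum(INSTANCE_RAM_GB.get(i["InstanceType"], 0) for i in instances)
    requested_ram = INSTANCE_RAM_GB.get(requested_type, 0)
    return len(instances) < MAX_VMS and used_ram + requested_ram <= MAX_RAM_GB
```

In practice you would enforce the limits with the platform’s native mechanisms (IAM, resource groups, Skytap quotas) rather than application code, but the sketch shows why the math works: bounded per-developer consumption means a bounded, predictable bill.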
COMPLIANCE
“If we open up access to everyone, they’ll create VMs that don’t adhere to all our internal security standards.”
Aren’t developers just looking for VMs to do dev work? We aren’t talking about building production-hardened VMs to host applications. We are talking about giving developers a proper sandbox to do their work in. From this perspective, requiring that developers only create VMs that adhere to strict compliance standards reserved for production servers doesn’t make sense.
For example, maybe one of your compliance standards is that all production servers have a monitoring agent installed on them. Fine, that makes sense, but what about dev/test? In dev/test, I might be spinning up a VM or container that only exists for a short time, maybe even less than an hour while some tests are running. Does it make sense to require that a monitoring agent be installed on a VM with a one-hour shelf life? Just throwing out the term “compliance” as a catch-all is not a real objection. Perhaps for dev/test there are minimum standards you would want to follow regarding OS levels, patches, etc. But the VMs you use in dev/test don’t necessarily need all the additional security requirements reserved for production servers, as the sketch below suggests.
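To make the tiered idea concrete, here is a small hypothetical sketch in Python. The control names, the two baselines, and the shape of the `vm` dictionary are all invented for illustration; the point is only that a short-lived dev/test VM answers to a much smaller checklist than a production server.

```python
# A hypothetical sketch of tiered compliance. The control names and baselines
# below are invented for illustration, not taken from any real standard.

DEV_BASELINE = {"approved_os_image", "current_patches"}
PROD_BASELINE = DEV_BASELINE | {"monitoring_agent", "hardened_config", "backup_policy"}

def compliant(vm: dict, env: str) -> bool:
    """Check a VM's declared controls against the baseline for its environment."""
    required = PROD_BASELINE if env == "prod" else DEV_BASELINE
    return required <= set(vm.get("controls", []))

# A one-hour test VM passes on the minimal baseline alone...
print(compliant({"controls": ["approved_os_image", "current_patches"]}, "dev"))   # True
# ...while the same build would rightly fail a production check.
print(compliant({"controls": ["approved_os_image", "current_patches"]}, "prod"))  # False
```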
The other part of this objection, hinted at earlier, is that you have to transition to “cloud thinking.” Resources that exist for the next 10 minutes in the cloud could intentionally be gone shortly after that. In the cloud, you can quickly create and destroy VMs, or entire environments of VMs. That is what makes the cloud a game changer versus traditional on-prem ways of doing things. You can’t apply your 1-for-1 on-prem thought process to the cloud; you need a new mental model for what it means to use the cloud versus how you did things on-prem.
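To show what that mental model looks like in practice, here is a minimal sketch using Python and boto3 against AWS EC2, in which a VM exists only for the duration of a test run. The AMI ID and tag values are placeholders, not real resources.

```python
import boto3

ec2 = boto3.client("ec2")

# Launch a throwaway VM for a test run. The AMI ID and tags are placeholders.
resp = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",   # placeholder dev/test image
    InstanceType="t3.micro",
    MinCount=1,
    MaxCount=1,
    TagSpecifications=[{
        "ResourceType": "instance",
        "Tags": [
            {"Key": "Owner", "Value": "dev-alice"},          # hypothetical developer
            {"Key": "Purpose", "Value": "short-lived-test"},
        ],
    }],
)
instance_id = resp["Instances"][0]["InstanceId"]

try:
    # Wait until the instance is up, then run the tests against it.
    ec2.get_waiter("instance_running").wait(InstanceIds=[instance_id])
    # ... run the test suite against the instance here ...
finally:
    # The VM is disposable: tear it down as soon as the work is done.
    ec2.terminate_instances(InstanceIds=[instance_id])
```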
COMPLEXITY
“It’s too hard to figure out some of these cloud architectures, and developers will waste time trying to figure it out versus having a central team do all the work correctly the first time.”
Without a commitment to being a “learning organization,” there can be no DevOps, no transition or migration to the cloud, and no digital transformation. Check out item #6 of “7 Ways to Make Developers Happy.”
If “complexity of learning” is one reason why you restrict your teams’ access to the cloud, then you are missing the point of what it means to be a technical person. Developers partly define themselves by the depth and range of knowledge they have accumulated, so continuous learning is a significant component of keeping developers “happy.”
CONTROL
“If developers are creating their own servers, what does our team do?”
This one is the toughest of all. Many organizations have one or more infrastructure “teams” that are responsible for creating servers and resources and granting access to downstream consumers. These groups “control” the resources. Now here comes the cloud, where everything can be self-service, and the natural reaction is: “If developers are creating on their own, what do we do?”
For many traditional companies making the transition to the cloud, there will be many opportunities for participation. Instead of “architect,” we now need “cloud architect”; instead of “engineer,” we now need “cloud engineer.” Also, what else of higher value could you be doing for the company if you aren’t spending all your time provisioning and maintaining on-prem servers? Do you think all of the existing on-prem applications are going to go away overnight? Will your company become “cloud-native” in the next six months? What about all of the AIX, IBM i, Solaris, and other legacy components in the datacenter? There will always be plenty of work for everyone to do.
The reality is that giving developers direct access to the cloud (with controls) frees up other corporate resources to work on strategically more important things. If your company is not cloud-native right now, then guess what: it won’t be for a long time. Maybe in the short term you’ll be hybrid-cloud, and then in the long term, cloud-native. All along the way, there will be opportunities for everyone to rethink how things get done, and maybe learn something new along the way.
Summary
If you are hiding the cloud from your internal customers, why? Are the original reasons still relevant, or are they holdovers from a time gone by? Maybe it is time to re-examine your original premise on why developers don’t have direct access to the cloud. If the goal of your organization is to “serve the customer,” “do more,” “move faster,” etc., then removing internal barriers to accessing the cloud is one way to make progress.