
Inside Amazon’s Cloud Computing Infrastructure

Posted on October 24, 2022

Categories: AWS


As cloud computing has emerged as the new paradigm for computing at scale, Amazon has clearly established itself as the leading player. With the launch of Amazon Web Services in 2006, the company effectively invented the public cloud industry. Today, AWS generates roughly $6 billion in revenue annually.

Amazon’s infrastructure has evolved over time to become essential to the uptime of more than 1 million customers. When AWS stumbles, as it did on Sunday when a data center in Virginia encountered issues, the outage can ripple out to well-known websites like Netflix, Reddit, Tinder, and IMDb.

This week we’ll examine Amazon’s formidable cloud architecture, including how it constructs its data centers and where it locates them (and why).

Amazon runs at least 30 data centers in its worldwide network, with another 10 to 15 in the works. Although Amazon withholds details about the full extent of its infrastructure, outside estimates place the IT capacity of its U.S. data center network at roughly 600 megawatts.

Industry observers consider Amazon the clear leader in the public cloud. According to IT research firm Gartner’s analysis of the cloud landscape, “AWS is the overwhelming (cloud computing) market share leader, with more than five times the compute capacity in use than the combined total of the other 14 providers.”

Taking Off the Cloak of Secrecy… A Bit

Historically, Amazon has kept its data center operations a secret, revealing significantly less about its infrastructure than other hyper-scale computing pioneers like Google, Facebook, and Microsoft. That has started to change since Amazon executives Werner Vogels and James Hamilton began speaking openly about the company’s data center operations at developer gatherings.

Werner Vogels, VP and Chief Technology Officer of Amazon, said during a presentation at the AWS Summit Tel Aviv in July that the company has had “a good number of requests from customers wanting us to discuss a little about the physical structure of our data centers. We hardly ever discuss it at all. So we wanted to remove the veil of secrecy around our networking and data centers.”

One of the main objectives of these talks is educating developers on Amazon’s approach to redundancy and uptime. The company’s infrastructure is divided into 11 regions, each with its own collection of data centers. Customers can mirror or back up important IT assets across each region’s multiple Availability Zones to prevent downtime. This functionality is still not fully exploited, as seen in the “ripple effect” of outages whenever AWS has issues.
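To put rough numbers on why mirroring across Availability Zones matters, here is a back-of-the-envelope sketch. The failure probability and the independence of zone failures are illustrative assumptions, not AWS figures:

```python
# Back-of-the-envelope: value of mirroring across Availability Zones.
# Assumes each zone is unavailable with probability p and that zone
# failures are independent -- both illustrative assumptions.

def downtime_probability(p: float, zones: int) -> float:
    """Probability that ALL mirrored zones are down at the same time."""
    return p ** zones

p = 0.001  # assume each zone is down 0.1% of the time (hypothetical)
for zones in (1, 2, 3):
    print(f"{zones} zone(s): {downtime_probability(p, zones):.2e}")
```

Under these toy numbers, mirroring into a second zone cuts the chance of total unavailability from one in a thousand to one in a million, which is the redundancy argument Amazon makes to developers.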

Scale Drives Investment in the Platform

Amazon Web Services revenue was growing at an annual pace of 81 percent in the most recent quarter. That may not translate directly into a comparable rate of infrastructure expansion, but one thing is sure: Amazon is rapidly adding data centers, servers, and storage.

James Hamilton, Distinguished Engineer at Amazon, discussed the AWS architecture at the re:Invent conference last autumn. “Every day, Amazon installs enough new server capacity to run all of Amazon’s worldwide infrastructure back when it was a $7 billion annual sales organization. That’s a lot of scale. With that volume, we can continue innovating and investing heavily in the platform.”

On reducing costs, Vogels remarked, “We undertake a lot of infrastructure innovation in our data centers. We consider this a high-volume, low-margin business, so we are content with the margins as they are. And if our cost base decreases, we’ll give you your money back.”

The size of each data center is a crucial choice in the planning and deployment of cloud capacity, and Amazon’s enormous scale brings benefits in both cost and operations. According to Hamilton, most Amazon data centers hold 50,000 to 80,000 servers and have a power capacity of 25 to 30 megawatts.
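Hamilton’s two figures imply a rough per-server power budget. The arithmetic below simply combines the ends of the stated ranges; it is an illustration, not an Amazon-published number:

```python
# Rough per-server power implied by Hamilton's figures:
# 50,000-80,000 servers drawing 25-30 MW per data center.

def watts_per_server(megawatts: float, servers: int) -> float:
    """Average power draw per server, in watts."""
    return megawatts * 1_000_000 / servers

low = watts_per_server(25, 80_000)   # densest case: 25 MW / 80k servers
high = watts_per_server(30, 50_000)  # sparsest case: 30 MW / 50k servers
print(f"{low:.1f} to {high:.1f} W per server")
```

That works out to roughly 300 to 600 watts per server, a plausible all-in figure once cooling and power-distribution overhead are folded into the facility total.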

“We’ve been building at this size for a very long period, and in our opinion, this is about the right number,” Hamilton stated. “We can make our buildings bigger. The problem is that while scale has significant early benefits, those benefits eventually start to wane. A vast data center is only slightly less costly per rack than a medium-sized data center.”

What Size Is Too Large?

As data centers grow, they pose a greater risk to the enterprise network.

Vogels cited the industry term for evaluating the risk of a single devastating regional event: “the blast radius.” He added, “It’s undesirable to create data centers larger than that. Data centers continue to be a point of failure. The consequences of such a failure might increase in proportion to the size of your data centers. We prefer to restrict the number of servers in each data center to around 100,000.”

So how many servers does Amazon Web Services run? Based on Hamilton’s and Vogels’ descriptions, there must be at least 1.5 million. The higher end of the range is more challenging to estimate, but Timothy Prickett Morgan at The Platform calculated that it may reach 5.6 million.
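The 1.5 million floor falls straight out of the figures quoted earlier in the article. This sketch just reproduces that arithmetic from the stated minimums:

```python
# Low-end fleet estimate implied by the article's figures:
# "at least 30" data centers times the 50,000-server floor
# of Hamilton's 50,000-80,000 range per facility.
data_centers = 30
servers_per_dc = 50_000

fleet_floor = data_centers * servers_per_dc
print(f"{fleet_floor:,} servers at minimum")  # 1,500,000
```

The upper estimates diverge so widely (up to 5.6 million) because both the true data center count and the per-facility density are undisclosed, so each multiplier is uncertain.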

Several wholesale data center suppliers, such as Digital Realty Trust and Corporate Office Properties Trust, lease space to Amazon. The company used to frequently lease pre-existing properties, such as warehouses, and refurbish them for data center use. In recent years Amazon has concentrated on new construction, which offers a “greenfield” that can be tailored to all of its designs, from the grid to the server. Amazon has accelerated its growth in Oregon by using prefabricated “modular” data center components.

An intriguing aspect of Amazon’s data center strategy is that it builds its own power substations. The move is driven by the desire for speed rather than cost control.

“You save a pittance,” Hamilton said. “We can build them considerably more rapidly, which is what matters. Our growth rate is not what utility businesses typically experience. We had to do this, so we did. However, it’s fantastic that we can.”

Individualized Servers and Storage

When it launched its cloud platform, Amazon initially purchased its servers from established suppliers. One of its key vendors was Rackable Systems, a pioneer in creative cloud-scale server designs. In 2008, Amazon spent $86 million with Rackable on servers, up from $56 million the previous year.

But as its business expanded, Amazon followed Google’s lead and began designing custom hardware for its data centers. This gives Amazon better control over both performance and cost by enabling it to optimize its servers, storage, and networking equipment.

“Yes, we build our own servers,” Vogels acknowledged. “We could purchase off-the-shelf, but they are incredibly costly and generic. So we are creating specialized servers and storage to handle these workloads. We collaborated with Intel to produce custom CPUs that operate at much higher clock rates. That enables us to create particular server types to accommodate specific workloads.”