Cloud computing is one of those really awesome buzzwords that gets thrown around without a lot of understanding behind it.
Defining Cloud Computing
In actuality, technology is not the big difference between traditional data centers and cloud computing. The real innovation with "cloud" is a business one: the biggest difference between a traditional data center and a cloud data center is the lease time.
In your own data center, you purchase the building, network infrastructure, power infrastructure, computers, operating systems and the whole data center from top to bottom. It's a huge capital investment, with long lead times to get up and running or even simply to add new compute power. More than one client has told me that it takes roughly six weeks of lead time to order a new machine, get it delivered, provisioned, installed in the rack and get an application up and running.
In a traditional third-party data center, you negotiate a set of computers, sign an annual or month-to-month lease, get those machines provisioned to you and start deploying applications to them.
In cloud computing, you lease by the hour, or sometimes pay by the number of transactions or some other consumption-based unit. In short, this turns your compute from a capital expenditure into an operating expense.
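To make that billing difference concrete, here's a minimal sketch in Python comparing an up-front server purchase against hourly leasing. The dollar figures are made-up assumptions for illustration, not real hardware or cloud prices:

```python
# Illustrative capex vs. opex comparison.
# All prices below are hypothetical, not real hardware or cloud rates.

SERVER_PURCHASE_PRICE = 10_000.00  # assumed up-front cost of one server
HOURLY_LEASE_RATE = 0.50           # assumed hourly rate for a comparable VM

def capex_cost(hours_needed: float) -> float:
    """You pay the full purchase price no matter how long you use the machine."""
    return SERVER_PURCHASE_PRICE

def opex_cost(hours_needed: float) -> float:
    """You pay only for the hours the machine is actually running."""
    return HOURLY_LEASE_RATE * hours_needed

# A job that needs a machine for one week (168 hours):
hours = 168
print(f"Buy the server:    ${capex_cost(hours):,.2f}")   # $10,000.00
print(f"Lease by the hour: ${opex_cost(hours):,.2f}")    # $84.00
```

The crossover obviously shifts with real prices and utilization, but the shape of the trade-off is the point: with hourly leasing, an idle machine costs you nothing.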
Doing it wrong...
There are a tremendous number of companies that I see doing it wrong. And by wrong, I mean they look at cloud computing as if it were a traditional data center and treat it accordingly.
You can tell because they run the largest machines they can, with multiple applications on them. Then they start naming these machines like pets and treating them like family, nursing them back to health when they get "sick". This is doing it wrong. I've had a number of customers come to me and tell me that cloud computing (Azure, since I work for Microsoft) is too expensive. Every single time I've heard this, I've gone and looked at what they're doing, and it turns out they're running their services like a traditional data center.
Doing it right...
In cloud computing you should treat your computers like livestock. Start out small, never keep them longer than you need to, and be quick to replace them when they get sick. By that I mean build your application to run in the smallest footprint that you can, but architect it to scale out as needed. In cloud computing, you can automate the provisioning of new compute power and the deployment of your application because you can just rent another "machine" by the hour. This makes both scaling and replacing machines that get "sick" really simple.
There are a ton of ways this is done (PaaS, IaaS, SaaS, virtualization, containerization...) but that's for future posts.
Hopefully this either clarifies or confirms your understanding of cloud computing and gets your head in the cloud that much more...