What is the Cloud?
A brief history
Back in the 1970s, it was popular for businesses to rent time on big mainframe computer systems. These systems were extremely large and expensive, so it didn’t make sense financially for businesses to own the computing power themselves. Instead, the systems were owned by large corporations, government agencies, and universities.
Microprocessor technology allowed for great reductions in size and expense, leading to the advent of the personal computer, which exploded in popularity in the 1980s. Suddenly, businesses could (and did) bring computation in-house.
However, as high-speed connections have become widespread, the trend has reversed: businesses are once again renting computing power from other organizations. But why is that?
Instead of buying expensive hardware for storage and processing in-house, it’s easy to rent it for cheap in the cloud. The cloud is a huge, interconnected network of powerful servers that performs services for businesses and for people.
The largest cloud providers are Amazon, Google, and Microsoft, who have huge farms of servers that they rent to businesses as part of their cloud services.
For businesses that have variable needs (most of the time they don’t need much computing, but every now and then they need a lot), this is cost effective because they can simply pay as-needed.
When it comes to people, we use these cloud services all of the time. You might store your files in Google Drive instead of on your personal computer. Google Drive, of course, uses Google’s cloud services.
Or you might listen to songs on Spotify instead of downloading the songs to your computer or phone. Spotify uses Amazon’s cloud services.
Generally, something that happens “in The Cloud” is any activity that takes place over an internet connection instead of on the device itself.
The Internet of Things and the Cloud
Because activities like storage and data processing can take place in the cloud rather than on the device itself, the cloud has had significant implications for IoT.
Many IoT systems make use of large numbers of sensors to collect data and then make intelligent decisions based on it.
Using the cloud is important for aggregating data and drawing insights from it. For instance, a smart agriculture company could compare readings from soil moisture sensors in Kansas and Colorado after planting the same seeds. Without the cloud, comparing data across wider areas is much more difficult.
Using the cloud also allows for high scalability. When you have hundreds, thousands, or even millions of sensors, putting large amounts of computational power on each sensor would be extremely expensive and energy intensive. Instead, data can be passed to the cloud from all these sensors and processed there in aggregate.
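To make this concrete, here is a minimal sketch of the kind of aggregation a cloud backend might perform over readings reported by many low-power field sensors. The field names, regions, and values are invented for illustration, not taken from any real system.

```python
from statistics import mean

# Hypothetical readings reported to the cloud by field sensors.
readings = [
    {"region": "Kansas",   "sensor_id": "ks-01", "soil_moisture": 0.31},
    {"region": "Kansas",   "sensor_id": "ks-02", "soil_moisture": 0.27},
    {"region": "Colorado", "sensor_id": "co-01", "soil_moisture": 0.22},
    {"region": "Colorado", "sensor_id": "co-02", "soil_moisture": 0.24},
]

def average_by_region(readings):
    """Group readings by region and average soil moisture per region."""
    by_region = {}
    for r in readings:
        by_region.setdefault(r["region"], []).append(r["soil_moisture"])
    return {region: mean(values) for region, values in by_region.items()}

print(average_by_region(readings))
```

The individual sensors stay simple and cheap; the comparison across states happens only once the data is pooled in one place.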
For much of IoT, the head (or rather, the brain) of the system is in the cloud. Sensors and devices collect data and perform actions, but the processing/commanding/analytics (aka the “smart” stuff), typically happens in the cloud.
So is the cloud necessary for IoT?
Technically, the answer is no. The data processing and commanding could take place locally rather than in the cloud over an internet connection. This approach, known as “fog computing” or “edge computing”, actually makes a lot of sense for some IoT applications.
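As a rough sketch of the edge-computing idea: the device applies a simple rule locally and forwards only a compact summary upstream, instead of streaming every raw reading to the cloud and waiting for instructions. The threshold, actions, and data shapes here are invented for illustration.

```python
ALERT_THRESHOLD_C = 80.0  # assumed temperature limit for this example

def process_locally(samples):
    """Decide on-device: act immediately if any sample exceeds the
    threshold; otherwise queue only a compact summary for the cloud."""
    over = [s for s in samples if s > ALERT_THRESHOLD_C]
    if over:
        # Urgent case: act locally, no round trip to the cloud needed.
        return {"action": "shutdown", "alerts": len(over)}
    # Normal case: report just min/max/count instead of raw samples.
    return {"action": "report",
            "summary": {"min": min(samples),
                        "max": max(samples),
                        "count": len(samples)}}

print(process_locally([71.2, 79.8, 84.1]))  # over threshold: local action
print(process_locally([70.0, 72.5]))        # normal: compact summary
```

The trade-off is exactly the one described above: the device needs more onboard capability, but it no longer depends on connectivity for time-critical decisions.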
However, there are substantial benefits to be had from using the cloud for many IoT applications. Choosing not to use it would significantly slow the industry because of the increased costs.
Importantly, cost and scalability aren’t the only factors. This brings us to a more difficult question…
Is the cloud desirable for IoT?
So far we’ve only been discussing the benefits of using the cloud for IoT. Let’s briefly summarize them before exploring the concerns:
- Decreased upfront and infrastructure costs
- Pay-as-needed for storage/computing
- High system scalability and availability
- Increased lifespan of battery-powered sensors/devices
- Ability to aggregate large amounts of data
- Anything with an internet connection can become “smart”
However, there are legitimate concerns with cloud usage:
- Data ownership. When you store data in a company’s cloud service, do you own the data or does the cloud provider? This can be hugely important for IoT applications involving personal data such as healthcare or smart homes.
- Potential outages. If the connection is interrupted or the cloud service itself crashes, the IoT application won’t work. Short-term inoperability might not be a big deal for certain IoT applications, like smart agriculture, but it could be devastating for others. You don’t want applications involving health or safety going down for even a few seconds, let alone a few hours.
- Latency. It takes time for data to be sent to the cloud and for commands to return to the device. In certain IoT applications, such as health and safety, those milliseconds can be critical. A good example is autonomous vehicles: if a crash is imminent, you don’t want the car to have to wait for a reply from the cloud before deciding to swerve out of the way.
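A back-of-the-envelope calculation shows why that latency matters for a vehicle. The speed and round-trip figures below are illustrative assumptions, not measurements of any real car or network.

```python
# How far does a car travel while waiting for a cloud round trip?
speed_kmh = 100.0      # assumed vehicle speed
round_trip_ms = 100.0  # assumed device -> cloud -> device latency

speed_m_per_ms = speed_kmh * 1000 / 3600 / 1000  # km/h -> metres per ms
distance_m = speed_m_per_ms * round_trip_ms

print(f"At {speed_kmh:.0f} km/h, a {round_trip_ms:.0f} ms cloud round trip "
      f"means the car covers {distance_m:.1f} m before a command returns.")
```

Under these assumptions the car covers roughly 2.8 metres before any cloud-issued command could arrive, which is why collision decisions have to be made on the vehicle itself.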
So when we ask if the cloud is desirable for IoT: it depends.
The Internet of Things is a broad field that includes an incredible variety of applications. There is no one-size-fits-all solution, so IoT companies need to consider their specific application when deciding whether the cloud makes sense for them. That’s actually one of the reasons my company Leverege exists: to help companies that want to build an IoT system navigate the entire process.