I attended a live video stream of the Future in Review conference on Wednesday. Although the format was unusual (even with a cinema screen and surround sound), several comments made by the speakers piqued my interest.
During a conversation on the network implications of cloud services, it occurred to me that storing and processing vast amounts of data in a central location is going to lead to significant latency problems. That in itself isn't a great insight. Neither is the observation that network management will push content and processing out towards the edge of the network, based on statistical analysis of likely usage.
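To make that last point slightly more concrete, here is a minimal sketch (Python, with an entirely made-up request log) of the kind of statistical placement decision involved: count where requests for each piece of content originate, and replicate the content to the edge locations that see the most demand. The names and numbers are purely illustrative, not drawn from any real system.

```python
from collections import Counter

# Hypothetical request log: (content_id, edge_region) pairs. In practice this
# would come from the operator's own traffic telemetry.
requests = [
    ("video-42", "london"), ("video-42", "london"), ("video-42", "paris"),
    ("report.pdf", "new-york"), ("video-42", "london"), ("report.pdf", "paris"),
]

def placements(requests, top_n=1):
    """For each content item, pick the busiest edge regions to host a copy."""
    by_content = {}
    for content, region in requests:
        by_content.setdefault(content, Counter())[region] += 1
    return {content: [region for region, _ in counts.most_common(top_n)]
            for content, counts in by_content.items()}

print(placements(requests))
# e.g. {'video-42': ['london'], 'report.pdf': ['new-york']}
```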
I started wondering, however, whether that distribution will lead to 'systems' of usage, similar to the weather systems in the Earth's atmosphere. The Brownian motion of particles in the atmosphere is in some ways analogous to the 'motion' of packets in IP networks, with rate shaping and the variable speed of networks acting much as thermal currents do over land masses (in my head, anyway!).
The implication is that congestion would build chaotically in certain geographic parts of the network as demand rises, causing localised speed changes and service degradation as resources are pulled from deeper in the network to cope. Digital weather, if you will.
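For what it's worth, a toy simulation can show the sort of behaviour I have in mind. The sketch below (Python, with invented parameters) gives each edge location a fixed capacity, lets demand drift randomly with a weak coupling to its neighbours, and pushes any overflow onto a shared core pool; congested 'cells' then come and go rather like patches of bad weather. It is an illustration of the analogy, not a model of any real network.

```python
import random

GRID = 5          # 5x5 grid of hypothetical edge locations
CAPACITY = 100.0  # units of demand each edge node can absorb locally
CORE_POOL = 500.0 # shared capacity deeper in the network

demand = [[50.0] * GRID for _ in range(GRID)]

def step(demand):
    """Advance demand by one tick and report which cells are 'stormy'."""
    overflow = 0.0
    stormy = []
    for y in range(GRID):
        for x in range(GRID):
            # Random walk in demand, loosely coupled to a neighbour so that
            # hotspots can cluster geographically.
            drift = random.gauss(0, 5)
            neighbour = demand[y][(x + 1) % GRID] - demand[y][x]
            demand[y][x] = max(0.0, demand[y][x] + drift + 0.1 * neighbour)
            if demand[y][x] > CAPACITY:
                overflow += demand[y][x] - CAPACITY
                stormy.append((x, y))
    core_load = min(1.0, overflow / CORE_POOL)
    return stormy, core_load

for tick in range(20):
    stormy, core_load = step(demand)
    print(f"tick {tick:2d}: {len(stormy):2d} congested cells, "
          f"core utilisation {core_load:.0%}")
```

Run it a few times and the congested cells wander around the grid rather than staying put, which is the chaotic, localised behaviour I'm gesturing at.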
Predicting this weather could become an industry in itself. Organisations wishing to use the cloud in a particularly intensive manner (transferring large volumes of data, running complex simulations or applications) will need to plan ahead in a more sophisticated way than they would with a traditional utility. Suffice it to say, predictions of this complexity are difficult to make at the best of times and may even require distinctly un-cloudy technology such as supercomputers to execute in a timely manner... or perhaps the cloud itself can solve its own problem with utility computing. Time will tell.
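As a flavour of what even a naive forecast might look like, here is a short sketch using simple exponential smoothing over hypothetical hourly load figures to decide whether a heavy transfer should be scheduled elsewhere. Real 'digital weather' forecasting would obviously need far richer models and data than this.

```python
def forecast_next(history, alpha=0.3):
    """Exponentially weighted forecast of the next observation."""
    level = history[0]
    for obs in history[1:]:
        level = alpha * obs + (1 - alpha) * level
    return level

# Hypothetical hourly load samples for one edge location (arbitrary units).
observed = [62, 58, 71, 90, 104, 97, 110, 125]

predicted = forecast_next(observed)
print(f"forecast for next hour: {predicted:.0f} units")
if predicted > 100:
    print("forecast exceeds local capacity - schedule the transfer elsewhere")
```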
Alternatively, I could have missed the point! Any observations greatly appreciated.