- ‘Big data’ analytics could have major implications for providers’ ability to move workloads seamlessly between and among clouds based on processing needs and other requirements.
- Vendors and providers alike are gearing up for future needs by investing now in the technology to provide the underlying analytics to automate decision support and drive higher-level computing.
‘Big data’ is one of those nebulous terms that gains traction in part because it is vague enough to be all-inclusive. In spirit, big data resembles the amorphous nature of the cloud: its scope is so loosely defined that its potential seems nearly endless. Massive volumes of mobile and other data can give organizations deep insight into complex phenomena such as consumer behavior, insight that can be priceless to a company trying to grow market share. Without a way to process all this data, however, the information is practically unintelligible.
Demand for this kind of advanced analytics is driving considerable investment from vendors and service providers, both in the infrastructure to support highly distributed, massive computing models and in the technology to process the data itself. Cisco is one vendor looking to help its service provider customers mine this data through advanced analytics. Just this month, Cisco announced its latest analytics investment with the pending Truviso acquisition. Truviso provides streaming database and complex event processing technology, which today is used largely by online advertising networks. Cisco plans to use the technology to bolster its Prime network management product for service providers, optimizing network performance using current (potentially real-time) statistics. Cisco sees this technology supporting the dynamic movement of workloads between and among clouds based on available computing and database resources. It could also, in theory, support more advanced analytics.
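To make the streaming idea concrete: rather than storing a whole event stream and querying it later, a streaming engine maintains running aggregates as events arrive. The sketch below is purely illustrative (it is not Truviso's or Cisco Prime's API, and the class name is hypothetical); it shows a minimal sliding-window average over a stream, the kind of continuous statistic a real-time network monitor might track.

```python
from collections import deque

class SlidingWindowAverage:
    """Hypothetical sketch of a streaming aggregate: track the average
    of the most recent `size` events without retaining the full stream."""

    def __init__(self, size):
        self.size = size
        self.window = deque()  # only the last `size` values are kept
        self.total = 0.0

    def add(self, value):
        """Ingest one event and return the current windowed average."""
        self.window.append(value)
        self.total += value
        if len(self.window) > self.size:
            # Evict the oldest value so the window stays bounded.
            self.total -= self.window.popleft()
        return self.total / len(self.window)

# Example: latency samples arriving from a network feed.
monitor = SlidingWindowAverage(size=3)
for sample in [10.0, 20.0, 30.0, 40.0]:
    current_avg = monitor.add(sample)
# After the last sample, the window holds [20.0, 30.0, 40.0].
```

The key property is constant memory per aggregate regardless of stream length, which is what lets such systems keep statistics "current" instead of recomputing over stored history.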
In many ways, the cloud model, with its distributed resources and potential for built-in redundancy, could provide the ideal infrastructure for big data analytics. Imagine processing massive data volumes almost instantly by automatically moving workloads among distributed multi-dimensional databases, whether in a single cloud or across several, to unlock critical information. That level of analytics could deliver new degrees of real-time intelligence that today remain practically out of reach.
What is your view of the cloud as a foundation for advanced data analytics? Could this be the cornerstone of some great advances or another pie-in-the-sky boondoggle?