VMware’s VMworld was a hit again, pulling in partners and customers alike
The buzz around VMware is about much more than simple virtualization software
I did not attend last week’s VMworld in Las Vegas, hosted of course by VMware, the virtualization software market leader. I wish I had, though. While timing and location prevented my own pilgrimage, Current Analysis was very well represented, as were a who’s who of technology-market partners and a robust contingent of IT executives and managers. The reason this event has become so important to so many is simple but profound: VMware certainly caught lightning in a bottle with its virtualization software, but the company is also leveraging this rather arcane solution as a platform to help solve myriad other IT problems, both with and without partner support. Continue reading “What Does VMware Mean to You?”→
The pressure is on organizations to leverage technology models like cloud services to improve efficiencies and cut costs, but many still struggle to produce workable on-demand computing strategies.
Enterprises can learn from early adopters such as the U.S. public sector, which has instituted mandates to encourage cloud use; its experience should provide valuable insights to both the private sector and cloud providers.
Cloud computing is coming of age at a time when economic uncertainty and competitive pressures of globalization force organizations to be more efficient and more flexible. As much as an on-demand computing model appeals as a way to cut costs and increase organizational agility, making the transition to the cloud can be difficult and complicated for enterprises that have substantial infrastructure and legacy applications. Continue reading “Crossing the Cloud Chasm: Learning from the U.S. Public Sector Advance”→
Verizon, Interoute, euNetworks, Colt, TSIC, and others are examples of service providers that have lit fibre assets with 100 Gbps bandwidth and Ethernet at the core transport layer
Video, mobility, cloud-based computing and storage, and rapidly growing SaaS take-up are pushing the need for high-capacity service
There is an ongoing performance-versus-cost challenge for buyers of high-speed service to consider
The desire for faster Internet does not have a ceiling, because it is linked to human impatience, which is limitless. From the perspective of business applications, bandwidth growth is driven by cloud storage, SaaS, enterprise mobility, high-powered cloud computing, and business video. To date, 100 Gbps Ethernet, optical transport, and DWDM wavelength announcements have largely been coming from the equipment manufacturers’ camp, but this is changing as more and more service providers expound upon the virtues of recently launched long-haul 100G circuits, as well as early readiness for 400G service.
Software-defined networking (SDN) is a massive, all-encompassing concept that spans campus, data center, WAN, and carrier backbone networks (pretty much every type of networking infrastructure imaginable). It is being touted by some as capable of solving nearly every networking issue that has plagued us for the last 20 years; and yes, it even makes your coffee in the morning (no, not really).
Eventually, SDN may do most of the things claimed, but getting there will take a long time and some IT fundamentals and best practices will remain critical moving forward.
The OpenFlow protocol and (more recently) SDN have been discussed and put forth as alternatives to the complex, hierarchical legacy architectures that were built up over years to meet the demanding performance and management needs of enterprises and service providers alike. Yes, the technology for each type of deployment was different (MPLS vs. OSPF vs. multicast, etc.), based on various criteria, but regardless of the technology, each vertical or segment executed on best practices learned over years of (sometimes painful) experience. The result was a set of processes and instructions, if you will, that each IT or production environment team could leverage, as they looked to new protocols, ports, or architectures, to avoid the pitfalls encountered before. SDN promises to eliminate the need for several of these practices, but a few still demand strict adherence or consideration. Continue reading “SDN Market Frenzy: Your Network Best Practices Remain Important!”→
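To make the control-plane/data-plane split at the heart of OpenFlow concrete, here is a minimal, purely illustrative Python sketch of a prioritized match/action flow table of the kind a controller pushes down to switches. The field names, priorities, and actions are simplified stand-ins invented for this example, not OpenFlow’s actual wire format.

```python
# Illustrative sketch of an OpenFlow-style flow table: a controller installs
# prioritized match/action rules, and the switch applies the highest-priority
# rule whose match fields all agree with the packet. Field names and actions
# are simplified placeholders, not the real protocol.

class FlowTable:
    def __init__(self):
        self.rules = []  # list of (priority, match_dict, action)

    def install(self, priority, match, action):
        """Controller side: push a rule down to the switch."""
        self.rules.append((priority, match, action))
        self.rules.sort(key=lambda r: -r[0])  # highest priority first

    def lookup(self, packet):
        """Switch side: first matching rule wins; default action is drop."""
        for priority, match, action in self.rules:
            if all(packet.get(k) == v for k, v in match.items()):
                return action
        return "drop"

table = FlowTable()
table.install(200, {"dst_ip": "10.0.0.5", "tcp_port": 80}, "forward:port2")
table.install(100, {"dst_ip": "10.0.0.5"}, "forward:port1")

print(table.lookup({"dst_ip": "10.0.0.5", "tcp_port": 80}))  # forward:port2
print(table.lookup({"dst_ip": "10.0.0.5", "tcp_port": 22}))  # forward:port1
print(table.lookup({"dst_ip": "10.0.0.9"}))                  # drop
```

The point of the sketch is the division of labor: policy lives in the controller’s `install` calls, while the switch does nothing but table lookups, which is exactly where the old best practices around change control and rule hygiene still apply.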
An unfortunate series of big-impact cloud outages, including Windows Azure, Salesforce.com, Amazon Web Services, and Twitter, has users fuming and organizations rethinking their on-demand strategies
Are cloud providers doing enough to address the underlying issues and reassure enterprises? The answer, at least for now, seems to be no
Summer malaise has hit the cloud in a big way, with a series of service interruptions temporarily knocking some of the most popular services offline and sending some corporate users into a tailspin. Though the root causes of the failures may differ, providers often issue ineffectual mea culpas, citing avoidable issues like Twitter’s “double system failure” or Salesforce.com’s power failure rather than more unpredictable causes like a natural disaster or an unanticipated massive influx of traffic flooding their service. The result leaves already-leery enterprises even more on edge about making the shift to the cloud anytime soon. Continue reading “Service Instability Underscores Serious Cloud Issues – and the Need for Better SLAs”→
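Part of negotiating a better SLA is understanding what the advertised availability percentages actually permit. A quick back-of-the-envelope sketch in Python (the percentages are generic tiers, not any particular provider’s terms):

```python
# Convert an SLA availability percentage into the downtime it allows per
# year. Figures are generic illustrations, not any provider's actual terms.

MINUTES_PER_YEAR = 365 * 24 * 60  # 525,600

def allowed_downtime_minutes(availability_pct):
    """Minutes of downtime per year permitted at a given availability %."""
    return MINUTES_PER_YEAR * (1 - availability_pct / 100)

for sla in (99.0, 99.9, 99.99):
    print(f"{sla}% uptime allows ~{allowed_downtime_minutes(sla):.0f} min/year")
```

The arithmetic is why “three nines” (99.9%) still leaves room for nearly nine hours of outage a year without the provider technically breaching its SLA, and why the credits paid out for a breach rarely approach the business cost of the downtime itself.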
Automation can jeopardize the flexibility clients need when deploying new technology
Smaller services deals expose buyer shortcomings that call for early due diligence and lifelong flexibility from providers.
The cloud brings exciting innovations that increase the potential of, and customer choices for, unified communications and workspace services. With that fresh potential comes the possibility of clients making more errors in buying decisions and specifications. Service providers and third parties are also likely to make genuine mistakes when advising clients on strategies that exploit new technologies. Continue reading “Keep Flexible to Keep Customers”→
You really can’t run an enterprise without some level of support contract these days due to infrastructure complexity
Your own talent pool and business needs will drive the level of support contract required for your environment
There are many case studies and hot topics that have circulated for years (and will continue to for many more, I’d wager) about how much support contracts cost. However, I’ll ask you this: do you want to be the one responsible for explaining that the network outage could have been avoided, or considerably shortened, had expert help been available? The question isn’t whether you should have access to expert help. The question is what level of expertise is appropriate for your organization. That in turn depends on the systems in question; how many vendors are involved (at which point you begin to drift from a vendor-specific support contract into a more involved services engagement with an integrator/partner, which is out of scope for this particular blog); and what kind of investment in your IT staffing you’ve made and will continue to make: certifications, time out of office, headcount, expertise focus, business metrics, uptime requirements, line-of-business commitments for network uptime, and so on. It’s quite simple, right? (Tongue firmly in cheek.)

At minimum, you should have a standard business-hours call center contract, which also gives you access to software updates. Not every vendor requires a contract for software updates, and for those that don’t, it is a significant perk for their customers. In mission-critical situations, though, when a problem can range from a simple configuration error (which, in my experience, is increasingly rare) to the more grievous hardware failure you may not have hot-spared on site (these lessons are learned once, painfully, and then never repeated), you need expedited assistance. When a two- or four-hour support contract is put in place, a vendor or local partner is trained and carries inventory for every SKU that such a high-alert contract may require.
After all, when an outage occurs, it could be trivial, it could represent millions of dollars per hour in lost revenue, or it could result in potential litigation (think about emergency services or when lives are on the line). This is the vendor-side support model. Continue reading “Help! My Network is Broken!”→
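The trade-off described above can be framed as simple expected-value arithmetic. A hedged sketch follows; every figure in it (outage frequency, repair times, revenue at risk, contract price) is a hypothetical placeholder you would replace with your own outage history and vendor quotes:

```python
# Hypothetical expected-value comparison: the annual cost of a four-hour-
# response support contract vs. the expected cost of outages with and
# without it. All numbers are illustrative placeholders, not real quotes.

def expected_outage_cost(outages_per_year, hours_per_outage, cost_per_hour):
    """Expected annual revenue at risk from downtime."""
    return outages_per_year * hours_per_outage * cost_per_hour

cost_per_hour = 50_000   # assumed revenue lost per hour of downtime
without_contract = expected_outage_cost(2, 8, cost_per_hour)  # slow self-repair
with_4h_contract = expected_outage_cost(2, 4, cost_per_hour)  # faster restore
contract_price = 120_000  # assumed annual contract cost

net_benefit = (without_contract - with_4h_contract) - contract_price
print(f"Without contract: ${without_contract:,} expected/year")
print(f"With 4-hour response: ${with_4h_contract:,} + ${contract_price:,} contract")
print(f"Net benefit of the contract: ${net_benefit:,}")
```

With these assumed numbers the contract pays for itself comfortably, but the same arithmetic can just as easily come out the other way for a low-stakes environment, which is exactly why the right support level is an organization-by-organization decision.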
Being local and having staff available to UEFA at its key sites is as critical to the organization as the ability to be a good partner that can support its ICT system. What’s often overlooked as we get caught up in technology is that the human touch and ability to anticipate and solve problems quickly counts for a lot with customers when it comes to contract renewal time.
With full ownership and control of its network, Interoute offers customers high-performance services, fast provisioning times and competitive pricing. Interoute has significant network assets spanning 100 European cities and featuring 21 MANs across Europe, as well as PoPs in Eastern Europe, a key requirement for UEFA. Its ownership of eight data centres and strength in hosting services have evolved into an expanded cloud services portfolio.
It’s showtime for UEFA (Union of European Football Associations), as Euro 2012 is now underway in Poland and Ukraine. The two Eastern European countries will play host to 16 teams and an expected 1.4 million football fans over the course of the competition, which happens just once every four years. The total predicted global TV audience for Euro 2012 (including qualifiers) is 4.3 billion. And it’s not just football on the pitch: a great deal of work goes on behind the scenes at the big stadiums, including security, emergency services, catering to journalists and broadcasting networks, and the supporting technology and communications. The International Broadcasting Center (IBC) in Warsaw is the temporary home to all the key broadcasting and press outlets covering the event, as well as UEFA’s ICT team. This is a live event where no downtime can be tolerated. UEFA does not take chances, even with the power grid, relying instead on diesel generators to power its ICT during the event. Continue reading “UEFA Euro 2012: Key ICT Partners Stay Close to the Pitch”→
Intelligent embedded network agents and sophisticated software heuristics provide key insights into information flows, performance patterns, and data consumption trends, but interpreting these requires talent
Humans remain the most valuable troubleshooting tool in the IT arsenal
Having worked in infrastructure in the ‘90s, doing my fair share of troubleshooting vampire taps, thick-LAN, and eventually thin-LAN (and those finicky terminations), I can say we’ve come a long way. Granted, at its most basic, troubleshooting most wired infrastructure still comes down to low-voltage electrical wires. But many switching platforms now embed sophisticated tools that can immediately detect a link loss and determine whether a damaged cable or connector is to blame, or correlate alerts from multiple devices to pinpoint the exact location of a ‘noisy’ device polluting the network. Advances such as these have increased efficiency, reduced trouble-ticket resolution times, and freed up valuable resources to work on more complex challenges. With wireless access becoming the norm as more and more client devices go solely mobile, tools have generally kept pace, and network management systems have slowly grown more capable and feature-rich.

As cloud adoption rates increase and systems grow more diverse, though, the tools are likely to suffer a setback, with many disparate elements, both physical and virtual, contributing to a single application connection. Troubleshooting these will once again require a significant amount of technician involvement to determine root cause during an outage (and no, rebooting your client isn’t the answer, Mr. Helpdesk). Physical and virtual agents must be deployed to collect statistics in real time and aggregate those bits into a collective perspective on the health of the network. Whether this is done with one of the extensible “framework” NMS systems or via vendor element management systems does not matter; what matters is that enterprises embrace a more sophisticated management model than they have in the past. Continue reading “IT Pains Evolving: Where’s Holmes & Watson?”→
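The alert-correlation idea mentioned above can be illustrated with a toy sketch: several agents report the path of network elements over which they observed errors, and the element common to the most affected paths is flagged as the likely culprit. The topology, element names, and alert format here are invented purely for illustration; real NMS correlation engines are far more elaborate.

```python
# Toy alert correlation: each agent reports the path (list of network
# elements) along which it saw errors; the element appearing in the most
# affected paths is flagged as the probable root cause. Element names are
# invented for illustration.

from collections import Counter

alerts = [
    ["host-a", "sw-access-1", "sw-core-1", "router-1"],
    ["host-b", "sw-access-2", "sw-core-1", "router-1"],
    ["host-c", "sw-access-2", "sw-core-1", "router-2"],
]

# Count how many affected paths each element appears in.
counts = Counter(elem for path in alerts for elem in path)
suspect, hits = counts.most_common(1)[0]
print(f"Probable culprit: {suspect} (seen in {hits} of {len(alerts)} alerts)")
# sw-core-1 appears in all three affected paths, so it tops the list
```

Even this trivial frequency count shows why aggregating agent data beats eyeballing individual device logs, and also why a human still has to interpret the result: a shared core switch will often top the list simply because everything transits it.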
Massive, multi-billion dollar growth projections and continuous vendor and service provider marketing keep a white-hot spotlight on cloud computing.
Yet, as rich as the potential benefits of on-demand computing are, enterprises are approaching the cloud with caution, as well as with some very fundamental questions which, left unanswered, will keep the model from becoming anything more than a tactical tool.
If you believe everything you read, the cloud is supposed to be the cure for all that ails the enterprise. Flexible, cost-effective, and scalable, the cloud holds a lot of promise for organizations struggling with severe budget limitations. Every self-respecting vendor and managed IT services provider needs to have a coherent and innovative cloud strategy or risk looking backward in a fast-moving space. Market projections seem to come out every week talking up the cloud’s potential value in astronomical multi-billion dollar terms. The last one I saw projected cloud sales to top the $39 billion mark… this year. Continue reading “Cloud Computing: A Strategic Engine of Change or Just Another Tool in the Kit?”→