A software-defined data center is nothing without a software-defined network. Programmability and API support are more important than speeds and feeds in making a purchasing decision.
Enterprises have to assess a networking vendor’s software plans as thoroughly as hardware specifications.
There are three critical features of data center switching to keep in mind on your next refresh: overlay support, programmability, and APIs. Speeds and feeds, table sizes, and other data sheet specs are table stakes, and most data center networking vendors are keeping pace where it matters. Seriously now, how many of you are going to make a purchase decision based on MAC table size? Do you really need more than 256,000 entries? Hardware is keeping up. Software governs how your switching hardware integrates and interoperates with the rest of your data center, so much so that it becomes the set of features that can make or break a fully automated data center. Continue reading “When Your Switch Vendor is also Your Software Vendor”→
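To make the programmability point concrete, here is a minimal sketch of driving a switch through a REST API instead of a CLI. The vendor, endpoint path (`/api/v1/vlans`), and payload schema are illustrative assumptions, not any real product's API; real switches expose similar but vendor-specific interfaces.

```python
import json

# Hypothetical REST endpoint template; real switch APIs differ by vendor.
SWITCH_API = "https://{host}/api/v1/vlans"

def build_vlan_request(host: str, vlan_id: int, name: str) -> tuple[str, str]:
    """Build the URL and JSON body for creating a VLAN on a switch.

    Keeping request construction separate from transport makes the
    automation logic easy to test without touching live hardware.
    """
    if not 1 <= vlan_id <= 4094:
        raise ValueError("VLAN ID must be between 1 and 4094")
    url = SWITCH_API.format(host=host)
    body = json.dumps({"vlan-id": vlan_id, "name": name})
    return url, body

# Build (but do not send) a request to provision VLAN 120 on a leaf switch.
url, body = build_vlan_request("leaf1.dc.example.com", 120, "web-tier")
print(url)
print(body)
```

An orchestration tool would POST this body to the switch, which is exactly the kind of integration a spec sheet cannot tell you about.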
With the emphasis on cost competitiveness and transparency, distinguishing features can quickly fall away in the cloud.
Some providers respond by stepping up to strategic roles as chief advocates for their clients, aggregating services and supplying mechanisms to streamline provisioning and management.
In an environment where providers trumpet similar pricing models and comparable feature sets based on technology from common vendors, it can be hard to distinguish one cloud service from another. Enterprise IT decision-makers tend to select providers that have earned their trust through work on other projects. However, there is still room for rival providers to compete for new accounts by offering a compelling solution. The savviest of these service providers recognize that a change as inherently complex as the move to the cloud presents opportunities to position themselves as strategic partners, guiding clients through the transition. Continue reading “Brokering a Better Cloud Position for the Enterprise”→
Networks and networking suffer from a lack of respect that defies logic.
Innovation continues apace; however, the industry often fails to give these advances the attention they deserve.
Networks and the stuff that makes them work are suffering from a dearth of respect to which even Rodney Dangerfield would have to defer. Sure, we all know that it is lunacy to dismiss the value of both private and public networks, because the quality of experience is utterly dependent on the quality of the network connections. This is a stone-cold fact, whether we are talking about a teenager watching YouTube videos on a smartphone or a business running mission-critical applications.
Yet while networks and networking have never been truly glamorous, there is a perceptible downward trend in love for the stuff of connectivity. It has long been the case, for example, that the hottest, most admired Internet businesses take public and private networks for granted and ride roughshod over them with something approaching complete disdain. If Facebook is sluggish, you don’t blame Facebook, do you? Continue reading “Networks Do Matter – Really!”→
Flexibility is one of the prime benefits of a cloud-based IT consumption model, giving IT and non-technical employees capacity when they need it. This access to resources enables organizations to execute new projects quickly and respond to fast-changing market dynamics.
This appealing model has risks for the IT organization – issues around manageability and control are a natural byproduct of shadow IT.
While cost reductions are often the main driver for cloud adoption today, elasticity and accessibility are what distinguish cloud from traditional delivery models. Organizations of all sizes gravitate toward the cloud to make it easier for individual business units and employees to tap IT resources in support of organizational goals. Cloud can facilitate a more improvisational approach to technology and project management, allowing even non-technical users to dial server and storage capacity up and down for short-term or cyclical projects. Continue reading “Out of the Shadows: Making a Decentralized Approach to IT for Business”→
Provider-managed WAN optimization is less likely to be used in the U.S. market due to the widespread availability of cost-effective bandwidth across major towns and cities.
A number of pan-European and UK carriers report take-up of managed WAN optimization in domestic-only networks.
WAN optimization can be a costly component, and there is always a tradeoff between throwing bandwidth at a problem and implementing some form of WAN optimization. The other question IT managers face is whether to buy and drop in WAN optimization themselves in a DIY setup or to contract a service provider, but that is a topic for a future discussion. Current Analysis has noticed a difference in the way UK and U.S. service providers respond to the question. In the U.S., national operators are ambivalent about deploying their own managed WAN optimization services because there is not much customer demand. WAN optimization CPE and provider-managed services are expensive, and it is more logical for customers to purchase more capacity than to try to manage capacity more granularly. There are some provider WAN optimization services run out of Internet data centers, and some enterprises will buy and drop in their own CPE to triage their worst application behavior. In contrast, BT and Colt report customers that subscribe to their domestic UK WAN optimization implementations. Continue reading “WAN Optimization: Limited to International Networks or Also Suited to Local In-Country Networks?”→
The market’s early optimism toward cloud may have been tempered by skeptics and by the overuse of ‘cloud washing’ campaigns (i.e., everything in the cloud, attached to the cloud, or solved by a cloud of some sort).
Enterprises remain optimistic, though, as many have embraced some form of cloud with measured success and are asking good questions about what to do next, how to move forward, and how to leverage the experiences and proofs of concept of others.
Last week’s Interop show was a success by many measures. It offered users and vendors the opportunity to interact on critical topics. The track sessions were reasonably attended, though no one had to fight for seats at this event. There were few logistical issues, due in large part to the efforts by UBM TechWeb, the company behind the Interop magic (and a great crew running the show). Continue reading “Interop New York 2012: A Variably Cloudy Perspective”→
Organizations moving from experimental to broader cloud deployments are running into hurdles in full-scale, on-demand implementations.
The most acute challenges relate to enterprises’ lack of internal cloud expertise.
For all of its many potential benefits, the cloud also comes with a myriad of obstacles and challenges daunting enough to keep enterprises relegating on-demand services to only the most basic tactical use cases. Yet however complex or difficult cloud computing may seem, the advantages are so compelling that even the most risk-averse have to at least consider whether there might be an enterprise-wide fit for the model in their organizations. Continue reading “Interop New York: Successful Cloud Transformations Start with Well-Defined Migration Paths”→
Cloud services imply a new type of sales and support ecosystem that is still very complex and relatively unstable at the moment.
This should not put buyers off; indeed, it should be welcomed, but all the customary cautionary warnings apply.
The dynamics of cloud services have caused a fair bit of healthy upheaval in the way technology and software suppliers deliver and support their goods. In fact, that would be an understatement. Beyond the obvious difference between a network-based infrastructure or a software service versus goods sold or licensed for installation on-premise, there is a fundamental shift in the go-to-market plan for suppliers that takes the notion of so-called co-opetition to an entirely different level. Continue reading “Beware the Cloud Service Provider Shell Game”→
The software-defined data center is a concept that encapsulates networking, virtualization, storage, orchestration, and ultimately, a truly agile framework.
Orchestration and manageability must be designed into a solution, rather than being bolted on, to yield the best results.
It became evident during VMworld that the notion of a software-defined data center is central to VMware’s strategy. However, when you pause a moment and reflect on where the tech industry has been heading for the last five to ten years, it is easy to see elements of this notion accelerating over time, coming to dominate design principles across the disciplines that constitute the DC (storage, compute, network, and operations platforms) in the last few years. Software-defined networking (SDN) is perhaps the most visible and actively marketed software-defined concept, but once one realizes that virtualization is just another software-defined concept (compute/machines), it is easy to see the theme encompassing practically every element of DC technology, not to mention platforms and applications already being managed as software elements themselves. The logical question here is: If all elements within a data center are software-controlled, then what happens to the technology characteristics of fabrics, SPB-M/TRILL, FCoE, and the other physical network elements? The answer is that the technology differentiation of the devices that constitute the infrastructure does not go away or diminish in the software-defined data center; rather, it becomes instrumental, as each device must integrate with upper-level orchestration platforms (e.g., VMware vCenter/vCloud Director). Continue reading “Is Your Network Ready for the Software-Defined Data Center?”→
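What "integrating with upper-level orchestration" means in practice is that the network reacts to events the orchestrator emits, such as a VM migrating between hosts. Real integrations go through vendor plugins for platforms like VMware vCenter; the event shape and handler below are assumptions invented for this sketch, showing only the core idea of re-provisioning network state to follow a workload.

```python
from dataclasses import dataclass

@dataclass
class VMMoveEvent:
    """Hypothetical orchestrator notification: a VM moved between switch ports."""
    vm_name: str
    vlan_id: int
    old_port: str
    new_port: str

def apply_vm_move(port_vlans: dict[str, set[int]], event: VMMoveEvent) -> dict[str, set[int]]:
    """Move the VM's VLAN binding from the old switch port to the new one.

    port_vlans maps a switch port name to the set of VLANs provisioned on it.
    """
    port_vlans.setdefault(event.old_port, set()).discard(event.vlan_id)
    port_vlans.setdefault(event.new_port, set()).add(event.vlan_id)
    return port_vlans

# A VM on VLAN 120 migrates from port eth1/1 to eth2/7; the network follows it.
state = {"eth1/1": {120}}
state = apply_vm_move(state, VMMoveEvent("web-01", 120, "eth1/1", "eth2/7"))
```

The differentiation between devices, then, is not whether they carry the VLAN, but how cleanly they expose this kind of state change to the orchestration layer.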