
How Are You Dealing with Carrier Ethernet’s ‘Best Practices’ Evolution?

Christopher Cullan
Oct. 17, 2013

By Christopher Cullan, Product Marketing Manager, Business Services Solutions, InfoVista

Last week, InfoVista released the results of its Carrier Ethernet network performance management survey. One of the interesting findings from that research was the significance operators place on Metro Ethernet Forum (MEF)-defined performance attributes, along with the growing share of respondents (nearly 50%) familiar with the organization and its specifications, particularly MEF 35 and MEF 36: the service OAM performance monitoring implementation agreement and its related SNMP MIB.

This data aligns well with my own, less scientific interactions with the market at industry events, at MEF quarterly meetings and directly with our customers: operators are aware of the best practices the MEF has defined. However, they are frustrated by their inability to fully implement them. They are suffering from what I term the “best practices life cycle,” something that all industries deal with. Essentially, this life cycle has four phases:

  • Market Need — the identification or realization of a particular need by a market; for example, consistent end-to-end performance metrics in the case of Carrier Ethernet services.
  • Innovation — where vendors and suppliers attempt to address the market need. Since each is working independently, these solutions are generally proprietary. Vendors borrow what they can from available solutions and re-use some of their past work, but ultimately the goal is to address the need quickly, preferably before the competition does. Over time, the market ends up with multiple solutions from multiple vendors, each with its own set of advantages and disadvantages.
  • Standardization — where best practices are established to deal with the original market need, often influenced by the various solutions that have come to fruition to date. Once again, by their very nature, the standards are unique and not exactly aligned with any single solution from the innovation stage. As an example, consider Cisco’s IP SLA instrumentation: a great tool for measuring things like latency and jitter between two endpoints on a layer 3 network. With Carrier Ethernet, Cisco took the same instrumentation and leveraged parts of ITU-T Y.1731 to provide end-to-end measurements for layer 2 services using its OAM jitter probe, part of the IP SLA framework. Later, the MEF standardized these end-to-end measurements, including an implementation agreement within MEF 35 (see the sketch after this list).
  • Adoption — the standards are adopted and implemented across the various vendors/suppliers and members of a market's ecosystem.
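To make the Y.1731 measurement mentioned above concrete, here is a minimal Python sketch of the two-way frame delay calculation that an ETH-DM (DMM/DMR) exchange enables. The timestamp roles follow Y.1731; the helper names and sample values are mine, for illustration only.

    def two_way_frame_delay(tx_f, rx_f, tx_b, rx_b):
        """Two-way frame delay per ITU-T Y.1731 ETH-DM.

        tx_f: DMM transmit time at the local MEP (TxTimeStampf)
        rx_f: DMM receive time at the remote MEP (RxTimeStampf)
        tx_b: DMR transmit time at the remote MEP (TxTimeStampb)
        rx_b: DMR receive time back at the local MEP

        Subtracting the remote MEP's processing time (tx_b - rx_f)
        means the two clocks never need to be synchronized.
        """
        return (rx_b - tx_f) - (tx_b - rx_f)

    def frame_delay_variation(delays):
        """Variation between consecutive frame delay samples (one
        common formulation of FDV, i.e. jitter)."""
        return [abs(b - a) for a, b in zip(delays, delays[1:])]

    # Invented timestamps, in microseconds.
    samples = [
        two_way_frame_delay(1000, 1210, 1260, 1465),  # 415 us
        two_way_frame_delay(2000, 2220, 2270, 2490),  # 440 us
    ]
    print(samples)                         # [415, 440]
    print(frame_delay_variation(samples))  # [25]

The point of the four-timestamp formula is that only differences between each MEP’s own clock readings are used, which is exactly what makes it practical across a provider’s network.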

If one were to enter the market at the Adoption stage, one would perhaps find it easier to proceed: the standards would be clear and fully adopted, and compliant solutions would be available at competitive prices, enabling a timely ramp-up to market. Of course, that’s rare for most, especially in the data communications market, so operators must instead manage this life cycle. The question is: how can operators manage this evolution?

MEF 35 provides the basic support for Carrier Ethernet network and service performance monitoring, and MEF 36 and (more recently) MEF 39 provide two constructs to enable MEF 35, using SNMP and NETCONF respectively. So what can operators do? Well, some of the leading vendors are already moving forward with MEF 35 compliance, and we’re speaking with many to secure MEF 36 compliance, which drastically cuts the integration effort needed for an Ethernet device to enable full, MEF- and best practice-aligned performance monitoring. That’s valuable to internal stakeholders like operations and engineering as well as to end customers, especially in the continually growing wholesale market.
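As a rough illustration of what that MEF 36 integration looks like from a management system’s side, here is a sketch of reading one SOAM PM value over SNMP with Python’s pysnmp library. The device address is invented, and the MIB object name and index are placeholders; consult MEF 36 (the MEF-SOAM-PM-MIB) for the actual tables and columns.

    # A minimal sketch, assuming a device that exposes the MEF 36
    # SOAM PM MIB over SNMPv2c, and a management host with pysnmp
    # installed and the MIB compiled. The object name and index
    # below are placeholders, not real MEF-SOAM-PM-MIB columns.
    from pysnmp.hlapi import (
        getCmd, SnmpEngine, CommunityData, UdpTransportTarget,
        ContextData, ObjectType, ObjectIdentity,
    )

    error_indication, error_status, error_index, var_binds = next(getCmd(
        SnmpEngine(),
        CommunityData('public', mpModel=1),       # SNMPv2c community
        UdpTransportTarget(('192.0.2.10', 161)),  # example device address
        ContextData(),
        # Placeholder: a delay-measurement statistic for one PM session.
        ObjectType(ObjectIdentity('MEF-SOAM-PM-MIB', 'mefSoamDmStatsAverage', 1)),
    ))

    if error_indication:
        print(error_indication)  # e.g. request timed out
    else:
        for name, value in var_binds:
            print(f'{name} = {value}')

The value of a standard MIB is that this same polling logic works against any compliant device, instead of one integration per vendor.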

The goal is to align the model, reporting and workflows around the best practices, and then map that to the practical reality of what’s available and implemented. For example, one of our customers uses a layer 3, synthetic, end-to-end test for performance monitoring of its layer 2 networks. A best-practices approach? Surely not, but when it was implemented many years ago, it was one of the few practical methods available.

But does the operator want to change report views, templates, etc. for every type of end-to-end test? Not really. Instead, they want views that provide a measure of delay (latency in layer 3 terms, frame delay in MEF canonical terms) and the ability to report on the service, the links and the EVCs in a consistent fashion, whatever the measurement method. As the network evolves, you switch to NIDs, and perhaps later on, the instrumentation will be built right into the edge, using the MEF-defined performance attributes and formulas that borrow from ITU-T Y.1731 and are stored in a consistent MIB or YANG module.
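One way to read that requirement: whatever the underlying test, whether an IP SLA probe, a Y.1731 ETH-DM session or a NID-based measurement, each result should be mapped to the same canonical delay metric before it reaches a report. A hypothetical Python sketch of that normalization (the record shape and converter names are mine, for illustration):

    from dataclasses import dataclass

    @dataclass
    class DelaySample:
        """One canonical delay record, independent of measurement method."""
        evc_id: str      # EVC (or service/link) the sample describes
        delay_us: float  # 'latency' (layer 3) / 'frame delay' (MEF), in us
        source: str      # which instrumentation produced the sample

    # Hypothetical converters; real ones would parse device/probe output.
    def from_ip_sla(evc_id, latency_ms):
        # An IP SLA probe reports layer 3 latency in milliseconds.
        return DelaySample(evc_id, latency_ms * 1000.0, 'ip-sla')

    def from_y1731_dm(evc_id, frame_delay_us):
        # An ETH-DM session reports layer 2 frame delay in microseconds.
        return DelaySample(evc_id, frame_delay_us, 'y1731-dm')

    # Reports, templates and thresholds then see one metric, whatever
    # the source, so changing instrumentation doesn't change the view.
    samples = [from_ip_sla('EVC-42', 1.2), from_y1731_dm('EVC-42', 1150.0)]
    worst = max(samples, key=lambda s: s.delay_us)
    print(f'{worst.evc_id}: {worst.delay_us} us via {worst.source}')

With that separation in place, swapping a layer 3 synthetic test for NID-based ETH-DM only changes a converter, not the reports built on top.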

I certainly hope to see the adoption of these standards accelerate, so our customers and the market as a whole can get what they need. In the meantime, we'll do our best to provide easy integration paths for the semi-proprietary “innovations” that have cropped up along the way, to continue to make Carrier Ethernet network performance management as successful in the future as it's been to date.
