OpFlex, OpenDaylight and OpenStack: Policy-Driven Networking

#OpFlex #CLUS #OpenDaylight #OpenStack #F5 There are multiple moving parts to driving a policy-driven network

There's a lot of buzz about Cisco's proposed standard, OpFlex, as a control protocol for policy-driven networks. The protocol addresses, in part, the inability of protocols like OpenFlow to provide a means by which L4-7 services can be consistently provisioned and configured (and updated dynamically) as part of a comprehensive approach to modern network architecture. 

For some of us, this is the culmination of efforts now long past (because five or six years ago is forever in technology time) to drive toward a singular means of integrating infrastructure to enable an agile network capable of scaling and adapting to meet the challenges of an increasingly cloudy, virtualized application world.

Long time readers may recall this blast from the past:

Somehow the network-centric standards that might evolve from a push to a more agile infrastructure must broaden their focus and consider how an application might integrate with such standards or what information they might provide as part of this dynamic feedback loop that will drive a more agile infrastructure. Any such standard emerging upon which Infrastructure 2.0 is built must somehow be accessible and developer-friendly and take into consideration application-specific resources as well as network-resources, and provide a standard means by which information about the application that can drive the infrastructure to adapt to its unique needs can be shared.

-- Infrastructure 2.0: The Feedback Loop Must Include Applications

Suffice it to say that the need for a standardized, southbound (if we want to use today's terminology) protocol has long been recognized as foundational for a next-generation network architecture.

But a standardized southbound protocol isn't the only piece of the puzzle. A control protocol only specifies how information is to be exchanged. It does not necessarily specify what that information is. OpFlex is a control protocol that describes how policies are to be exchanged and updated (and the relationship between a variety of policy-related components), but it does not specify what those policies will look like. Probably because, well, where do you start? How do you standardize a policy across disparate implementations even within a specific "category" of infrastructure? Should policy describe services (load balancing, application acceleration) or discrete device elements (ADC, IPS, SWG)?

Therein lies the real challenge; a challenge we've (the industry) been struggling with for a long, long time: 

...none of us are really compatible with the other in terms of methodology. The way in which we, for example, represent a pool or a VIP (Virtual IP address), or a VLAN (Virtual LAN) is not the same way Cisco or Citrix or Juniper represent the same network objects. Indeed, our terminology may even be different; we use pool, other ADC vendors use "farm" or "cluster" to represent the same concept. Add virtualization to the mix and yet another set of terms is added to the mix, often conflicting with those used by network infrastructure vendors. "Virtual server" means something completely different when used by an application delivery vendor than it does when used by a virtualization vendor like VMWare or Microsoft.

And the same tasks must be accomplished regardless of which piece of the infrastructure is being configured. VLANs, IP addresses, gateway, routes, pools, nodes, and other common infrastructure objects must be managed and configured across a variety of implementations.

-- Making Infrastructure 2.0 reality may require new standards
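
To make that mismatch concrete, here's a toy sketch of the same load-balancing objects expressed in two vendor-flavored vocabularies, and the "lowest common model" mapping that standardization efforts kept falling back on. Every name below is illustrative, not an actual vendor API:

```python
# A toy sketch of the terminology problem: the same load-balancing objects
# expressed in two hypothetical vendor vocabularies. None of these dicts
# reflect a real vendor API; they only illustrate the mismatch.

f5_style = {
    "pool": {"members": ["10.0.0.10:80", "10.0.0.11:80"]},
    "virtual_server": {"destination": "192.0.2.1:80"},
}

other_adc_style = {
    "farm": {"servers": ["10.0.0.10:80", "10.0.0.11:80"]},
    "vip": {"address": "192.0.2.1:80"},
}

# A common object model has to collapse every vendor's term into one
# concept -- and anything without a synonym simply falls off the model.
SYNONYMS = {
    "pool": "backend_group", "farm": "backend_group", "cluster": "backend_group",
    "virtual_server": "listener", "vip": "listener",
}

def normalize(vendor_config: dict) -> dict:
    """Map a vendor-flavored config onto the lowest-common-denominator model."""
    return {SYNONYMS[key]: value
            for key, value in vendor_config.items() if key in SYNONYMS}

print(normalize(f5_style))         # both collapse to the same generic shape
print(normalize(other_adc_style))
```

Whatever doesn't survive that mapping is exactly the vendor-specific value the next section is about.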

One of the reasons OpenFlow was not universally lauded by providers of L4-7 services is that OpenFlow was by design focused on stateless L2-3 networking, pushing rules for port forwarding and routing that controlled the network fabric. While L4-7 services certainly must interact and interoperate with L2-3 by virtue of their placement in the network within data center architectures, their primary operation is at the application layers. The rules and policies used to implement load balancing are not just about forwarding to an IP address and a port, but about how such decisions should be made. The rules and policies that protect applications from myriad application attacks must include URLs and actions to be taken at the application layer - including those that might require rewriting the payload (message) being transmitted. These rules and policies are not only specific to the category of service (web application firewall, web anti-fraud, web acceleration, caching) but to the specific implementation (i.e. the product).
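
To see that the difference is one of kind, not degree, compare the shape of a stateless forwarding rule with the shape of an L4-7 policy. Neither structure below is a real OpenFlow or vendor schema; they are illustrative only:

```python
# Illustrative only: neither structure is a real OpenFlow or vendor schema.
# They contrast what each layer's rules must be able to express.

# A stateless L2-3 rule: match packet headers, forward. Where, not how.
forwarding_rule = {
    "match": {"dst_ip": "192.0.2.1", "dst_port": 80},
    "action": {"output": "port_3"},
}

# An L4-7 policy: how decisions are made, per request, at the app layer.
app_service_policy = {
    "virtual": "192.0.2.1:80",
    "lb_method": "least_connections",      # decision logic, not a fixed path
    "persistence": "http_cookie",          # state carried across requests
    "l7_rules": [
        {"if_url_prefix": "/api/", "pool": "api_servers"},
        {"if_url_prefix": "/admin/", "action": "reject"},    # WAF-style control
        {"rewrite_header": ["X-Forwarded-Proto", "https"]},  # payload rewrite
    ],
}
```

Flattening the second structure into the first throws away precisely the behavior that makes the service worth deploying.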

That makes defining a singular "god policy" for L4-7, similar to that of OpenFlow, not only impractical but downright impossible without stripping those services of their usefulness. Thus, some other approach was necessary.

For that approach, we need to look to the OpenDaylight (ODL) project.

The OpenDaylight Connection

So ODL has created a new project called the ODL Group Policy plug-in. As explained by Cisco in an OpFlex white paper, "the goal of this project is to provide a policy-based API that can serve, in practice, as a standard information model in OpFlex implementations."

The ODL Group Policy plug-in "introduces the notion of groups of endpoints and policy abstractions that govern communication between these groups. The goal is to support a northbound API that accepts abstract policy based on application requirements while offering numerous concrete southbound implementations for programming network element with configuration/programming stanzas rendered from application policies."

Translated into regular old English, what it means is that instead of a policy that's essentially a bunch of ACLs and routing and switching entries, it's designed to be more developer and human friendly. So you might say "Web server A can speak SQL to Database server 1" or "Let the OpenStack console communicate with App Server B". An imperative policy would spell out the VLAN participation and routing, and the ACLs required to allow traffic between the various VLANs, IP addresses and ports. It would be long and prone to misconfiguration simply due to the number of commands required. A declarative policy just describes what is desired, while allowing each individual network device - a policy element in an OpFlex architecture - to translate that into the specific commands and configuration required for its particular role in the policy domain within which it is acting. This reduces the possibility of misconfiguration because, well, no configuration-specific commands are being communicated.
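
A minimal sketch of that contrast, with hypothetical names loosely modeled on the endpoint-group idea (this is not the actual Group Policy API):

```python
# Hypothetical names throughout; loosely modeled on the endpoint-group idea,
# not the actual ODL Group Policy API.

# Declarative: only WHAT is desired. "Web server A can speak SQL to
# Database server 1" -- no VLANs, routes, or ACL entries in sight.
policy = {
    "contracts": [
        {"from_group": "web_servers", "to_group": "databases",
         "allow": {"protocol": "tcp", "port": 3306}},
    ]
}

def render_as_acl(contract: dict) -> list[str]:
    """One device's imperative rendering of the intent: firewall-style ACLs.
    A switch, a router, or an ADC would each render the same contract
    differently, according to its role in the policy domain."""
    allow = contract["allow"]
    return [
        f"permit {allow['protocol']} group:{contract['from_group']} "
        f"-> group:{contract['to_group']} port {allow['port']}",
        "deny any any",  # nothing not declared is permitted
    ]

for contract in policy["contracts"]:
    print("\n".join(render_as_acl(contract)))
```

The declarative contract stays short and readable; the long, error-prone command sequence is generated per device instead of typed by hand.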

It also addresses the problem noted above with defining common object models across a given role in the network, particularly for those devices operating at L4-7. Traditional approaches have relied on finding the lowest common model, which inevitably ended up with all offerings needing to dumb down (commoditize) their value in order to ensure consistent support for all "XYZs". By abstracting to the policy level - not the object model level - the OpFlex approach effectively eliminates this issue and emphasizes defining what is desired by the application, but not how.

Now, the actual policy declaration is unlikely to take the form of natural language (that'd be nice, but we're not quite ready for that... yet) but the general premise is to provide a far more, shall we say, friendly policy language.

The OpenStack Connection 

Astute readers will note that OpenStack was mentioned but has not yet been further explored. Right now there's very little on the connection between OpenStack, OpFlex and OpenDaylight, but there are mentions that make it clear that OpenStack is not being shunned by these efforts.

The Group Policy Plugin proposal specifically calls out OpenStack Neutron considerations:

Relationship with OpenStack/Neutron Policy Model

The model developed within this project relates to that being developed in OpenStack in the following ways:

  1. The Neutron model being developed MUST be always renderable into the OpenDaylight policy model.
  2. The OpenDaylight model MUST NOT be constrained in any way by the Neutron model being developed.

Given that OpenDaylight's focus is operationalizing the network, and OpenStack's focus is operationalizing the entire infrastructure stack (compute, network and storage), one can see that a relationship makes a lot of sense. A comprehensive approach might be to "plug in" OpenDaylight by accepting a Neutron policy through its policy API and distributing it both to policy elements via OpFlex and to OpenFlow-enabled network elements via OpenFlow.
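
A hedged sketch of that flow follows. Every function and field name here is hypothetical; this mirrors the flow just described, not the actual Neutron, OpenDaylight, or OpFlex APIs:

```python
# All names hypothetical; a sketch of the flow described above, not the
# actual Neutron, OpenDaylight, or OpFlex APIs.

def render_neutron_to_odl(neutron_policy: dict) -> dict:
    """Rule 1 above: a Neutron policy must always render into the ODL model."""
    return {"endpoint_groups": neutron_policy["groups"],
            "contracts": neutron_policy["rules"]}

def compile_to_flows(odl_policy: dict) -> list[str]:
    """For OpenFlow elements, the controller renders concrete flow entries."""
    return [f"flow: allow {c['protocol']}/{c['port']} "
            f"{c['from_group']} -> {c['to_group']}"
            for c in odl_policy["contracts"]]

def distribute(odl_policy: dict, elements: list[dict]) -> None:
    for el in elements:
        if el["southbound"] == "opflex":
            # OpFlex elements receive the abstract policy, render it locally.
            print(f"{el['name']}: push abstract policy {odl_policy['contracts']}")
        elif el["southbound"] == "openflow":
            # OpenFlow elements get pre-compiled, concrete flow rules instead.
            for flow in compile_to_flows(odl_policy):
                print(f"{el['name']}: {flow}")

neutron = {"groups": ["web", "db"],
           "rules": [{"from_group": "web", "to_group": "db",
                      "protocol": "tcp", "port": 3306}]}
distribute(render_neutron_to_odl(neutron),
           [{"name": "adc-1", "southbound": "opflex"},
            {"name": "switch-1", "southbound": "openflow"}])
```

Note how the two southbound channels split: OpFlex pushes the abstract policy down and trusts the element to render it, while OpenFlow requires the controller to do the rendering before anything is pushed.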

Enabling Choice

One of the side effects - or perhaps intentional outcomes - of OpFlex and a policy-driven model is that it enables organizations to choose their preferred implementation model. Because OpFlex is designed to be one of the southbound API options for SDN implementations like OpenDaylight, it allows organizations to choose that model without compromising on support for a wide variety of network options. OpFlex and OpenFlow could, theoretically, coexist in such a model, or an organization could run an OpFlex-only model.

The desire not to impair (or be impaired by) OpenStack also means there's a movement toward cooperation between these various open source networking projects.

The open source movement's view has always been free as in choice (and not necessarily gratis), and these developments and approaches certainly embody that view.

These are still early days, despite the increasing maturity of OpenStack and continued momentum behind SDN. But the interaction and cooperation required to produce interoperability of policy and operational frameworks across these three efforts is definitely promising.

