More Details and Refined Status for the ExperiaSphere Project!

I’m trying to provide regular updates on the ExperiaSphere activity without giving you either old news or nothing useful.  Hopefully today I have something that avoids both traps!

I’ve determined that the best way to explain ExperiaSphere is to start with a multi-plane vision of services and applications.  The top plane, which I’ll call the Service Domain, describes the logical structure of a service, experience, or application.  This relates first to what it is, second to how it works in a functional sense, and finally to how it can be described as deployable functions.  The bottom plane is the Resource Domain, where the real stuff lives: servers, storage, network equipment, legacy gear, SDN, and so forth.  The resource domain is a lot of things because there are a lot of old and evolving things that live, will live, or even have to live in this space.  It’s a classic moving target, a collection of silos so potentially vast that it might look like the Farm Belt from the air.

ExperiaSphere’s goal is to accommodate anything that can be considered a service or a resource, period.  Further, the goal is to accommodate and not to make the other stuff do the accommodating.  Nothing has to change, not cloud or NFV components, not SDN or legacy technology.  The benefits of the network of the future can’t be achieved by limiting how you get there or by making early transitional phases totally unprofitable.  So we accommodate, and we start with a mechanism for defining services and applications that’s totally flexible, creating the Service Domain.

Service Domain functions are represented by objects, a process I’ve called structured intelligence because the Service Domain is about how to structure the intelligence of a service, its logical or functional pieces.  If we’re going to deploy one of these objects we have to be able to map it to something in that vast, chaotic world of resources.  We have to bridge these domains.
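To make that concrete, here’s a minimal sketch of a service object in Java (the language of the original project).  Every name in it is my illustration, not code from any ExperiaSphere release:

```java
import java.util.ArrayList;
import java.util.List;

// A Service Domain object: a named logical function that decomposes
// into child objects until the leaves are deployable units.
public class ServiceObject {
    private final String name;        // what it is
    private final String function;    // how it works, functionally
    private final List<ServiceObject> children = new ArrayList<>();

    public ServiceObject(String name, String function) {
        this.name = name;
        this.function = function;
    }

    public ServiceObject addChild(ServiceObject child) {
        children.add(child);
        return this;   // allows fluent composition of the hierarchy
    }

    public String getName() { return name; }
    public String getFunction() { return function; }
    public List<ServiceObject> getChildren() { return children; }

    public static void main(String[] args) {
        // A toy decomposition: a VPN service built from two functional pieces.
        ServiceObject vpn = new ServiceObject("BusinessVPN", "vpn-service")
            .addChild(new ServiceObject("AccessLeg", "access"))
            .addChild(new ServiceObject("CoreTransport", "transport"));
        System.out.println(vpn.getName() + " has "
            + vpn.getChildren().size() + " deployable pieces");
    }
}
```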

We actually have to bridge them twice, because there are really two binding issues between the domains.  One is the deployment process, the linkage of service object to resources.  The other is ongoing lifecycle management.  From the very first days of the open-source project that launched ExperiaSphere, we represented these two layers, imposed strict separation, and mandated explicit binding between them.  One aspect of that binding was a dynamic data model that was built when a service was deployed and sustained as a kind of contractual record of the service.  This works nicely for services built as Java applications, but not so well for something built through a more agile service-architect process.
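A hedged sketch of that contractual-record idea, with names that are hypothetical rather than drawn from the project: a dynamic data model keyed by service object, built at deployment and kept for the life of the service.

```java
import java.util.HashMap;
import java.util.Map;

// A per-service "contractual record": built when the service is deployed
// and sustained for its lifetime, recording which resource satisfied
// each service object. All names here are hypothetical.
public class ServiceContract {
    private final String serviceId;
    private final Map<String, String> bindings = new HashMap<>();

    public ServiceContract(String serviceId) {
        this.serviceId = serviceId;
    }

    // Record the binding made at deployment time.
    public void bind(String serviceObjectName, String resourceRef) {
        bindings.put(serviceObjectName, resourceRef);
    }

    // Lifecycle management consults the record to find what to manage.
    public String bindingFor(String serviceObjectName) {
        return bindings.get(serviceObjectName);
    }

    public String getServiceId() { return serviceId; }
}
```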

The ExperiaSphere model, rather than building two bridges, builds a functional double-decker.  We have an Infrastructure Manager, or IM, that bridges both the deployment and management functions.  Whatever technology might live down in our resource-domain land of silos, there can be an IM designed to create the deployment and management bridges for it.  You can deploy any service object (a component of a service, application, experience, or whatever) and manage it on an ongoing basis by having the appropriate IM for it.  Anything that can support an IM, then, can be used as a resource.
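Read that way, the IM contract is small.  Here’s a minimal Java sketch of the double-decker interface, reusing the ServiceObject type sketched above; the method names are my assumptions, not anything the project specifies:

```java
import java.util.Map;

// The "double-decker" bridge: one Infrastructure Manager fronting a
// resource silo, carrying both the deployment function and the ongoing
// management function. Method names are illustrative assumptions.
public interface InfrastructureManager {
    // Deployment deck: map a service object onto this IM's resources and
    // return a reference for the service's contractual record.
    String deploy(ServiceObject object);

    // Management deck: the current management view of a deployed object.
    Map<String, String> status(String resourceRef);
}
```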

Both services and resources have models, and I’m proposing to use the same open standards and open-source tools to describe both domains.  The principles applied in the modeling are consistent with the TMF’s operations structure (GB922), but ExperiaSphere does not use the TMF model explicitly for either services or resources.  I’d rather use an open standard to get open-source support.  It’s possible to map between the TMF model and ExperiaSphere, but that is to me an “Adapter” function, and it’s beyond the scope of the project.

The Service Domain in ExperiaSphere is a domain of functional models, and the Resource Domain a domain of real stuff.  To get all the siloed possibilities in the latter under some sort of control, ExperiaSphere proposes an i2aex-like repository that will hold not only all of the “primary MIB data” that real resources generate, but also all of the derivations or restatements of primary data that describe the management properties of the service objects.  Why?  Because these objects are virtual devices and we need to be able to make them appear in the most convenient and manageable way.  I called this part “Derived Operations” and it’s the second pillar of ExperiaSphere execution.
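To show what “derived” might mean mechanically, here’s a small sketch of a repository that stores primary data and restates it as a property of a virtual device.  The worst-case derivation rule and all the names are mine, chosen only for illustration:

```java
import java.util.HashMap;
import java.util.Map;

// An i2aex-like repository sketch: primary MIB data goes in as collected,
// and derived views restate it as management properties of virtual
// devices. The derivation rule here (worst case) is purely illustrative.
public class DerivedOperationsRepo {
    // resource -> variable -> value, as collected from real resources.
    private final Map<String, Map<String, Double>> primary = new HashMap<>();

    public void recordPrimary(String resource, String variable, double value) {
        primary.computeIfAbsent(resource, r -> new HashMap<>())
               .put(variable, value);
    }

    // Derived view: e.g., a virtual device's latency restated as the
    // worst case across the real resources bound to it.
    public double deriveWorstCase(String variable, String... resources) {
        double worst = Double.NEGATIVE_INFINITY;
        for (String r : resources) {
            Map<String, Double> vars = primary.get(r);
            if (vars != null && vars.containsKey(variable)) {
                worst = Math.max(worst, vars.get(variable));
            }
        }
        return worst;  // NEGATIVE_INFINITY if no resource reported the variable
    }
}
```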

Operations processes have to be integrated too, of course.  Every functional Service Domain element defines state/event relationships that map to the service lifecycle.  For each intersection we define a process link, and that process is treated as a Resource and supported by an Infrastructure Manager (see below).  That allows ExperiaSphere to bind any current or future operations or management task to the lifecycle process, which means that operations at all levels can be integrated with the Service and Resource Domains.  Even cloud applications can be represented as services, decomposed into deployable units, and then lifecycle-managed.  Scale-in and scale-out and other new cloud-driven resilience and performance enhancements are, to ExperiaSphere, simply lifecycle elements and not the responsibility of the applications or functions.  And anything that runs on a platform that can be modeled as a Resource can be deployed and managed.
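As a sketch of the state/event idea, assuming a handful of states, events, and process names of my own choosing: each intersection names a process, and that process is itself just a Resource deployed behind an IM.

```java
import java.util.HashMap;
import java.util.Map;

// A state/event table sketch: each intersection of lifecycle state and
// event binds to an operations process, itself treated as a Resource
// behind an Infrastructure Manager. States, events, and process names
// are illustrative, not from any specification.
public class LifecycleTable {
    public enum State { ORDERED, DEPLOYING, ACTIVE, FAULT }
    public enum Event { ACTIVATE, DEPLOYED, ALARM, SCALE_OUT }

    // (state, event) -> the process-as-Resource to invoke.
    private final Map<String, String> processLinks = new HashMap<>();

    public void bind(State s, Event e, String processResource) {
        processLinks.put(s + "/" + e, processResource);
    }

    public String processFor(State s, Event e) {
        return processLinks.get(s + "/" + e);
    }

    public static void main(String[] args) {
        LifecycleTable table = new LifecycleTable();
        table.bind(State.ACTIVE, Event.SCALE_OUT, "process/scale-out-handler");
        System.out.println(table.processFor(State.ACTIVE, Event.SCALE_OUT));
    }
}
```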

Our open-source mission here can be viewed in this light.  We need a set of open-source tools that will let us build that functional-symbolism-laden Service Domain.  We need a set of open-source tools that will let us build that great and agile management repository.  The former will obviously have to be a combination of what we could call descriptive/parametric processes, and the latter a generic database function surrounded by proxies that can populate and represent it.  The good news is that I’m fairly sure at this point that I can identify what makes up both of these layers, what open-source pieces lie on both banks of the ExperiaSphere stream.  For the Service Domain I can identify tools and even implementations close enough to my own that there’s validation in the real world.  I’m working through the details of exactly how these two layers can work in open source, and I think I’ll have the answers within a month.  Then it’s a matter of putting together a set of videos to describe them.

The bridge, the Infrastructure Manager, is where most of the variability comes in.  I’m going to propose a standard model for representing the linkage between the layers so that it can adapt to any software component/API.  That means the Service Domain can speak a single language looking south to the resources.  What has to be done is to create the shim (which in software is often called an Adapter Design Pattern) that translates this common language for each of those silos, or at least for those that a given implementation is prepared to support.  Where there are open-source tools to facilitate this, including OpenStack or OpenDaylight, the shim can link to the existing tool/API set, and that generalizes support for anything the selected tool can support on its own southern border.  No single strategy can bridge a gap to so diverse a set of resource options as we already have, and no single open-source element does everything needed without at least a bit of customization.  I’m going to go as far as I can, then define what needs to be done to glue the assembly together.
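To show the shape of that shim, here’s an Adapter sketch that reuses the InfrastructureManager interface from earlier.  The OpenStackClient type below is a stand-in I invented for illustration; it is not the real OpenStack SDK or API:

```java
import java.util.Map;

// A stand-in for whatever tool/API set a given shim targets; this is NOT
// the real OpenStack SDK, just a hypothetical silo-side surface.
interface OpenStackClient {
    String bootServer(String name, String flavor);
    Map<String, String> serverDetails(String serverId);
}

// The shim: a classic Adapter that translates the common southbound
// language (the InfrastructureManager interface sketched earlier) into
// one silo's own vocabulary.
public class OpenStackShim implements InfrastructureManager {
    private final OpenStackClient client;

    public OpenStackShim(OpenStackClient client) {
        this.client = client;
    }

    @Override
    public String deploy(ServiceObject object) {
        // One line of translation here; a real shim would map images,
        // networks, flavors, and placement parameters as well.
        return client.bootServer(object.getName(), object.getFunction());
    }

    @Override
    public Map<String, String> status(String resourceRef) {
        return client.serverDetails(resourceRef);
    }
}
```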

In total, I think that 15% to 20% of an ExperiaSphere implementation could be done using off-the-shelf open-source tools.  The rest will be customization, largely in the area of the IM and in the mapping between operations/management processes and the state/event structures that define lifecycle processes for the Service objects.  Existing structural models for things like SDN and the cloud, including OpenDaylight and OpenStack, could be integrated under an IM with nothing more than the shim I’ve described.

So this is where we are.  I’m working on the detailed tutorial material, and I expect it will start with a high-level picture and then divide into additional videos covering the details of the Service Domain, the Resource Domain and Derived Operations, the Infrastructure Manager, and a final one applying all of this to a model service lifecycle.  The material will obviously take time, and it may stretch even into September to get the last of it up.  By August, though, you’ll have enough of the picture to see what open-source software can do for the modern age of virtualization.
