Automation Should Prevent Operator Errors

This blog post was initially sent to subscribers of my SDN and Network Automation mailing list. Subscribe here.

One of the toughest tasks faced by networking engineers attending our Building Network Automation Solutions course is designing a data model that describes network infrastructure or services. They usually think in terms of individual devices (nodes), which results in tons of duplicated data.

I always point that out when reviewing their solutions and suggest how to minimize or eliminate duplicate data. Not surprisingly, doing that is hard, and one of the attendees started wondering whether the extra effort makes sense:

I’m finding it’s a fine balance between exposing the complexity to the operator (by asking them to specify values in multiple places) or pushing that complexity into the data model rendering and removing “flexibility” for the user.

There’s a difference between flexibility and duplicate data. For example, asking the operator to enter an interface description for every link instead of using the default value “link to X:Y” is unnecessary data duplication when the data model already contains the information that the interface is connected to port Y on device X.
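
To make that concrete, here’s a minimal Python sketch that derives both interface descriptions from a link-level data model, so the operator enters each link exactly once. The data structure and names are made up for illustration, not taken from any particular tool:

```python
# Hypothetical link-level data model: each link lists its two endpoints once.
links = [
    {"left":  {"device": "S1", "port": "Ethernet1"},
     "right": {"device": "L1", "port": "Ethernet49"}},
]

def derive_descriptions(links):
    """Generate per-interface descriptions from the link data,
    so nobody has to type 'link to X:Y' by hand."""
    descriptions = {}
    for link in links:
        a, b = link["left"], link["right"]
        descriptions[(a["device"], a["port"])] = f"link to {b['device']}:{b['port']}"
        descriptions[(b["device"], b["port"])] = f"link to {a['device']}:{a['port']}"
    return descriptions

print(derive_descriptions(links))
# {('S1', 'Ethernet1'): 'link to L1:Ethernet49',
#  ('L1', 'Ethernet49'): 'link to S1:Ethernet1'}
```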

On the other hand, removing the operator’s ability to overwrite the default interface description with more meaningful text (where needed) reduces flexibility and might be undesirable… keeping in mind that flexibility (aka “nerd knobs”) increases complexity, requires more thorough testing, and in the end increases development costs. The fine balance is thus “do we really need this flexibility, and what are we getting for the increased complexity?”
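
One way to keep that flexibility without reintroducing duplicate data is to treat the operator-supplied description as an optional attribute that overrides the derived default. A minimal sketch, again using made-up attribute names:

```python
def interface_description(link_end, derived_default):
    """Use the operator-supplied description when present,
    fall back to the derived 'link to X:Y' default otherwise."""
    return link_end.get("description", derived_default)

# The operator overrides the default only where it matters:
uplink = {"device": "L1", "port": "Ethernet49",
          "description": "uplink to spine S1 (DO NOT SHUT)"}
print(interface_description(uplink, "link to S1:Ethernet1"))
# uplink to spine S1 (DO NOT SHUT)
```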

When asking the operator to deal with more complexity, I typically mitigate this risk with documentation and/or a walk-through session.

One of the goals of introducing network automation should be increased reliability and consistency. Documenting data duplication instead of eliminating it doesn’t bring us closer to that goal, as it still permits operator mistakes. The very minimum you should do in this case is:

  • Document the complexity (in our case, data duplication);
  • Add a consistency check to the input data validation script (you’re validating input data, right?); a sketch of such a check follows below.
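
To illustrate the second point, here’s what a minimal consistency check might look like, assuming the same hypothetical link-level data model sketched earlier in this post:

```python
def check_duplicate_endpoints(links):
    """Flag device/port pairs that appear in more than one link,
    a typical symptom of duplicated (and now inconsistent) input data."""
    seen, errors = set(), []
    for link in links:
        for end in (link["left"], link["right"]):
            endpoint = (end["device"], end["port"])
            if endpoint in seen:
                errors.append(f"{endpoint[0]}:{endpoint[1]} appears in more than one link")
            seen.add(endpoint)
    return errors

# Sample input data; the duplicate endpoint is intentional.
links = [
    {"left":  {"device": "S1", "port": "Ethernet1"},
     "right": {"device": "L1", "port": "Ethernet49"}},
    {"left":  {"device": "S1", "port": "Ethernet1"},   # already used above
     "right": {"device": "L2", "port": "Ethernet49"}},
]

errors = check_duplicate_endpoints(links)
if errors:
    raise SystemExit("Input data validation failed:\n" + "\n".join(errors))
```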

In the end, it might turn out that it’s still cheaper to solve the problem in the right place and redesign the data model. More about that in a series of blog posts coming in a week or two.
