Software dataplanes are emerging from both research and industry backgrounds as a more flexible alternative to traditional hardware switches and routers. They promise to cut network provisioning costs by half, by enabling dynamic allocation of packet-processing tasks to network devices; or to turn the Internet into an evolvable architecture, by enabling continuous functionality update of devices located at strategic network points.
Flexibility, however, typically comes at the cost of reliability. Software of non-trivial size that is subject to frequent updates is routinely plagued by behavior and performance bugs, as well as security vulnerabilities. It makes sense, then, that network operators are skeptical of the vision of software dataplanes that are continuously reprogrammed in response to user and operator needs – just as they were skeptical of active networking a decade ago. The question is, has anything changed? Have software verification techniques matured enough to enable us to reason about the behavior and performance of software dataplanes? Or must we accept that they will always be less predictable than their hardware counterparts?
The subject of this work is a verification tool that takes as input the executable binary of a software dataplane and proves that it does (or does not) satisfy a target property; if the target property is not satisfied, the tool should provide example packet sequences that cause the property to be violated. Developers of packet-processing apps could use such a tool to produce software with guarantees, e.g., that never seg-faults or kernel-panics, no matter what traffic it receives. Network operators could use it to verify that a new packet-processing app that they are considering deploying will not destabilize their network, e.g., it will not introduce more than some known fixed amount of per-packet latency. One might even envision markets for packet-processing apps – similar to today’s smartphone/tablet app markets – where network operators would shop for new code to “drop” into their network devices; the operators of such markets would need a verification tool to certify that their apps will not disrupt their customers’ networks.
For general programs, verifiability and performance are competing goals, if not mutually exclusive. Proving properties of real programs (unlike searching for bugs) remains an elusive goal for the systems community, especially for programs that consist of more than a few hundred lines of code and are written in a low-level language like C++. So, one either writes in such a language and gives up on verifiability, or in a language like Haskell and gives up on performance.
For software dataplanes, it does not have to be this way: we will argue that we can write them in a way that preserves performance and enables verification. The key question then is: what defines a “software dataplane” and how much more restricted is it than a “general program”? I.e., how much do we need to restrict our dataplane programming model so that we can reconcile performance with verifiability?
We observe that verifying packet-processing pipelines should be significantly simpler than verifying general software, because their special structure can help sidestep path explosion. In a typical pipeline, two elements (stages) never concurrently hold read or write permissions to the same mutable state, whether that state is a packet being processed or some other data structure. Intuitively, because elements do not interact through shared state (other than one passing a packet to another), it becomes feasible to reason about each element in isolation. So, we first reason about each pipeline element in isolation, then compose the results to prove properties about the entire pipeline. When properly done, such "decomposition" reduces by an exponential factor the amount of work that needs to be done to prove something about the pipeline.
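To make the pipeline model concrete, here is a minimal sketch (illustrative only; the element names and packet fields are our own assumptions, not taken from any real Click configuration). Each element is a transformation on the packet handed to it, and elements share no mutable state, so each can be analyzed in isolation:

```python
# Sketch of the pipeline model: each element owns the packet while it
# processes it, and the only inter-element interaction is handing the
# packet to the next element. Field names are illustrative assumptions.

def check_checksum(pkt: dict) -> dict:
    # Element E1: mark whether the packet's checksum is consistent.
    pkt = dict(pkt)  # no in-place mutation of shared state
    pkt["valid"] = pkt["checksum"] == sum(pkt["payload"]) % 256
    return pkt

def decrement_ttl(pkt: dict) -> dict:
    # Element E2: decrement TTL; sees only the packet handed to it.
    pkt = dict(pkt)
    pkt["ttl"] -= 1
    return pkt

def pipeline(pkt: dict) -> dict:
    # Elements never hold references to each other's state.
    for element in (check_checksum, decrement_ttl):
        pkt = element(pkt)
    return pkt

p = pipeline({"checksum": 6, "payload": [1, 2, 3], "ttl": 5})
```

Because neither element can observe or corrupt the other's state, any property of the composed pipeline can be derived from per-element analyses plus the packet hand-off.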
Verification then consists of two main steps: step 1 searches inside each element, in isolation, for code that may violate the target property, while step 2 determines which of these potential violations are feasible once we assemble the elements into a pipeline. More specifically, we cut each pipeline path into element-level segments. In step 1, we obtain, for each segment, a logical expression that specifies how this segment transforms state; this allows us to identify all the “suspect segments,” which may cause the target property to be violated. In step 2, we determine which of the suspect segments are feasible and indeed cause the target property to be violated, once we assemble segments into paths.
In step 1, we treat each element in isolation: First, we obtain logical expressions for all its segments, by symbexing the element assuming unconstrained symbolic input. Next, we conservatively tag as “suspect” all the segments that may cause the target property to be violated. E.g., in the figure above, if the target property is crash-freedom, segment e3 is tagged as suspect, because, if executed, it leads to a crash.
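The output of step 1 can be pictured as follows. This is a toy illustration of the idea, not the paper's tool: we assume an element E2 whose only symbolic input is the packet length, and represent each segment by its path constraint plus a flag saying whether it can crash:

```python
# Toy step-1 result (illustrative assumptions throughout): symbexing E2
# with an unconstrained symbolic packet length reaches both branches of
#   if pkt.len < 20: crash   (e.g., out-of-bounds header read)  -> e3
#   else:            forward                                    -> e4
# Each segment records its path constraint and whether it crashes.

segments = [
    {"name": "e3", "constraint": "len < 20", "crashes": True},
    {"name": "e4", "constraint": "len >= 20", "crashes": False},
]

# Conservatively tag as suspect every segment that may violate the
# target property (here, crash-freedom).
suspects = [s for s in segments if s["crashes"]]
```

At this point e3 is merely suspect: whether it is reachable in a full pipeline is decided in step 2.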
If we stopped at step 1, our verification would be complete, but not sound: If this step does not yield any suspect segments for any element, then we have proved that the pipeline satisfies the target property. E.g., if none of the elements ever crashes for any input, we have proved that the pipeline never crashes. However, a suspect segment does not necessarily mean that the pipeline violates the target property, because a segment that is feasible in the context of an individual element may become infeasible in the context of the full pipeline. E.g., in the figure, if we consider element E2 alone, segment e3 leads to a crash; however, in a pipeline where E2 always follows E1, segment e3 becomes infeasible, and the pipeline never crashes. In program-analysis terminology, in step 1, we over-approximate, i.e., we execute some segments that would never be executed within the pipeline that we are aiming to verify.
Step 2 makes our verification sound, by discarding suspect segments that are infeasible in the context of the pipeline: First, we construct each potential path that includes at least one suspect segment. Next, we compose the path constraint and symbolic state for each such path based on the constraints and symbolic state of its constituent segments (that we have already obtained in step 1). Finally, we determine whether each such path is feasible (based on its constraints) and whether it violates the target property (based on its symbolic state). Note that the last step does not require actually executing a path, only composing the logical expressions of its constituent segments.
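Continuing the toy example from above (our own illustrative constraints, not the paper's code), step 2 can be sketched as conjoining the interval constraints of the segments along a candidate path and checking whether the conjunction is satisfiable:

```python
# Toy step-2 sketch: segment constraints on the symbolic packet length
# are intervals [lo, hi]; a path is feasible iff the intervals of its
# constituent segments intersect. All names/values are assumptions.

def intersect(a, b):
    lo, hi = max(a[0], b[0]), min(a[1], b[1])
    return (lo, hi) if lo <= hi else None

# Step-1 results, per element and segment:
E1 = {"e1": (20, 1500)}      # E1 only forwards packets with len >= 20
E2 = {"e3": (0, 19),         # suspect: crashes when len < 20
      "e4": (20, 1500)}      # safe: len >= 20

def path_feasible(*constraints):
    # Compose constraints along a path; no execution required.
    acc = (0, 1500)
    for c in constraints:
        acc = intersect(acc, c)
        if acc is None:
            return False
    return True

# The suspect path E1.e1 -> E2.e3 requires len >= 20 AND len < 20,
# so it is infeasible: the crash cannot occur in the assembled pipeline.
feasible = path_feasible(E1["e1"], E2["e3"])
```

This mirrors the earlier example: e3 crashes when E2 is analyzed alone, but once E2 always follows E1, the composed constraint is unsatisfiable and the suspect segment is discharged.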
So far, we have evaluated our verification tool on stateless and simple stateful Click dataplanes; we were able to perform complete and sound verification of these pipelines within minutes, whereas a generic state-of-the-art verifier failed to complete the same task within several hours. A technical paper that presents these results has been accepted at the USENIX Symposium on Networked Systems Design and Implementation (NSDI) 2014, and we will make it publicly available soon.