Copyright © 2007 – Brian Bailey Consulting
The Great EDA Cover-up
Brian Bailey
Abstract
Functional verification is an art, or at least that is what we are led to believe. Every once
in a while a new technology emerges that injects a dose of science into the process and
these advances can make the process more predictable, increase efficiency and lower
overall verification costs. At the same time, the introduction of standards can ensure a
degree of commonality in the capabilities provided by the EDA industry. This whitepaper
will examine the state of the art in coverage metrics and try to separate some of the
hype from reality. It will explore the existing types of coverage metrics, looking at their
strengths and weaknesses, and then discuss some recently developed technologies that can enhance the credibility of the information that coverage metrics provide. While the EDA industry is not yet at the stage of providing a go/no-go indicator that tells a team verification is complete, we are reaching the point where verification can move, at least in part, from the realm of the subjective toward the objective.
Introduction
Verification methodologies and tools have made significant progress over the past five
years as the proportion of total time and money spent on the task has increased. While
there is significant disagreement over the total costs and the quality of the results, there
is no argument that verification is getting tougher than it used to be. One area of
advancement has been the coverage metrics used, and a sign of the importance of this
subject is the recent formation of a standards group within Accellera [i] to bring
convergence on the metrics and to define an interface that will allow users to combine
coverage data of various types and from a number of different sources.
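
To make the idea of combining coverage data concrete, the sketch below shows one minimal, purely illustrative way to merge bin hit counts collected from different sources, such as two simulation runs and a formal tool. The bin names, the dictionary representation, and the merge_coverage helper are hypothetical and are not the interface being defined by the Accellera group.

    from collections import Counter

    def merge_coverage(*runs):
        """Combine hit counts for the same coverage bins gathered from different sources."""
        merged = Counter()
        for run in runs:
            merged.update(run)   # adds counts for matching bin names
        return merged

    # Hypothetical results from two simulation runs and one formal proof run
    sim_run_1 = {"fifo_full": 3, "fifo_empty": 10, "overflow": 0}
    sim_run_2 = {"fifo_full": 1, "fifo_empty": 7,  "underflow": 2}
    formal_run = {"overflow": 1}

    combined = merge_coverage(sim_run_1, sim_run_2, formal_run)
    print(combined)
    # Counter({'fifo_empty': 17, 'fifo_full': 4, 'underflow': 2, 'overflow': 1})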
There are two primary roles for coverage metrics: 1) to provide an indication of the
degree of completeness of the verification task and 2) to help identify the weaknesses in
the verification strategy. The measure of completeness, while often based on objective data, has traditionally been treated as subjective, since most of the metrics in use today can only identify when the task is not complete, rather than when it is.
This article will explore the reasons for this and how those metrics can be improved.
When the metrics identify unverified behaviors of a design, changes can be made to the verification environment to increase the likelihood that those behaviors are exercised and verified.
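
As a purely illustrative example of why most metrics can only show incompleteness, the sketch below computes the fraction of planned coverage bins that have been hit and lists the holes. The bin names and the coverage_report helper are invented for illustration; a perfect score would only mean that every bin in the plan was exercised, not that the plan captured every important behavior.

    def coverage_report(planned_bins, hit_counts):
        """Return the fraction of planned bins that were hit, plus the list of holes."""
        holes = [b for b in planned_bins if hit_counts.get(b, 0) == 0]
        covered = len(planned_bins) - len(holes)
        return covered / len(planned_bins), holes

    planned = ["fifo_full", "fifo_empty", "overflow", "underflow", "back_to_back_write"]
    hits = {"fifo_full": 4, "fifo_empty": 17, "overflow": 1, "underflow": 2}

    score, holes = coverage_report(planned, hits)
    print(f"{score:.0%} of planned bins hit; holes: {holes}")
    # 80% of planned bins hit; holes: ['back_to_back_write']
    # Note: a 100% score would only mean every bin in the plan was exercised,
    # not that the plan itself captured every important behavior of the design.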
To look for solutions to these issues, it is worth reminding ourselves about the basics of
verification that are often forgotten. These basics are reviewed in the next section, followed by a look at the fundamental types of coverage in use today, along with their strengths and weaknesses. The paper then discusses two recent advances that can provide additional confidence in coverage data.