«…only narrow-minded people fear competition; people of genuine creativity value contact with every talent…» A. Bek, Talent.

Coverage Cookbook/Requirements Writing Guidelines

From Wiki
Revision as of 16:53, 6 March 2013; ANA (talk | contribs)


When creating a testplan, the requirements for a successful chip need to be recorded in a useful, easy-to-digest manner. The following rules and guidelines will help to ensure this happens. It is a good idea for the verification team to compile a list such as this before starting the planning process and to divide it into rules (must be followed) and suggestions (good ideas). In effect, this is defining the requirements for writing requirements.

  • Don't rewrite anything that is already detailed in the source specifications, just reference the original document.
  • Divide up the categories and subcategories so that each row is a single requirement. Don't write five requirements on one row. Write one requirement per row.
  • Each requirement should be unique. Do not use ten requirements when one will do. The gauging criterion is often whether or not it will easily link to a coverage element.
  • Each requirement must be linked to some coverage element (test, covergroup, coverpoint, cross, assertion, code coverage, etc.).
  • Write each requirement at about the same level. Don't write one at the subsystem level and the next at the AND-gate level. If you do have multi-level requirements, come up with a natural three- to five-level scale and define it clearly. Maybe use alphanumeric tags to distinguish levels, or put each level in its own hierarchical testplan spreadsheet.
  • A requirement is typically written in the positive, a description of what the design shall do. However, some requirements which place bounds on behavior are easier to write in the negative; in other words, a description of what the design shall not do.
  • It is alright to add a requirement that is not going to be addressed by the verification process. It might be addressed by C-modeling, or by FPGA validation in a lab, or by some other means. Include it anyway, and add a column that states which process is being used on that requirement.
  • Identify each requirement with both a unique name and a unique number.
  • Requirements might be sub-divided into major categories like design requirements (about the design), verification requirements (about the run management), testbench requirements (about the testbench), software requirements (about the firmware), tool requirements (about Questa), library requirements (about which part of UVM you will use), etc.
  • Requirements should be ranked or prioritized. This may be a scale of 1-3, or could be a complex risk equation that takes in other parameters.
  • Requirements should be ordered. Use categories and sub-categories; do not just enter requirements sporadically, but in some logical order.
  • Each design requirement needs to be considered from all three verification perspectives: generation, checking, and coverage. How will a situation be generated to exercise this requirement? What will check that it is right: an assertion, a scoreboard, or both? What sort of permutations will need to be covered, and how many? Some testplans have three columns with a brief description of each of these.
  • If a requirement is connected to some reused verification entity, it should be specified. A column for current or future reusability can be added and filled in.
  • It is alright to have a requirement that is earmarked for a special directed test, but these should not be widespread.
  • Testbenches often have levels of abstraction, often labeled with some layering (L1-3) or naming (configuration layer, traffic layer, etc.). A column that specifies each requirement's abstraction layer can be added.
  • Normal function and error handling function requirements should be separated, but do not leave out the error requirements.
  • Some requirements might need to be ported across several environments, block, sub-system, system, lab, etc. This should be noted. A designated column can delineate this.
  • Some requirements might be constraints in disguise. This is fine. Just note it.
  • Some requirements are assertions in disguise; they have a cause and effect nature such as "after this, this will always happen". This is fine, just note it. It is wise to categorize assertions in some logical fashion, such as interface, internal, etc.
  • Some requirements are configuration oriented. You may not need to specify each and every configuration; just point to where they are described in other documents, or describe each unique family of configurations. Divide them by how covergroups and coverpoints will capture them.
  • Some requirements are sequence oriented, meaning they describe configurations or traffic that must be generated to stimulate the design. When defining sequence requirements, it is best to start by defining each unique family of sequences by categories and sub-categories, with higher categories like configurations, traffic, interrupts, errors, etc., broken down into sub-categories as needed. You do not need to specify each and every sequence, especially if they are already described in other documents, but categorize them; each category will lead to an interesting covergroup.
  • Some requirements might just be assumptions made, or required, that lead to easier implementation. This is fine.
  • Scoreboard or assertion checking limitations should be included. Often the transfer function of a scoreboard or assertion is too complex to be fully addressed. Specify what will be addressed and what will not. For a scoreboard, which actual transaction-level elements will be checked?
  • Another, more advanced, approach is to think about covergroups and coverpoints up front and then work backwards, reverse engineering and writing the requirements from them.
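
As a sketch of the guideline that each requirement links to a coverage element: a testplan row can be tied to a named covergroup whose cross captures the permutations. The requirement text, signal names (clk, ch_id, burst_len), and ID below are hypothetical, not from any real testplan; the covergroup fragment assumes those signals are in scope.

```systemverilog
// Hypothetical requirement REQ-DMA-004: "Each of the four DMA channels
// shall support burst lengths of 1, 4, 8 and 16."  option.name carries
// the requirement ID so the testplan row links straight to this group.
covergroup dma_burst_cg @(posedge clk);
  option.name = "REQ_DMA_004";
  cp_channel : coverpoint ch_id     { bins ch[] = {[0:3]}; }
  cp_burst   : coverpoint burst_len {
    bins len_1  = {1};
    bins len_4  = {4};
    bins len_8  = {8};
    bins len_16 = {16};
  }
  ch_x_burst : cross cp_channel, cp_burst;  // all 16 channel/burst permutations
endgroup
```

One row, one requirement, one coverage element: when the cross hits 100%, the row is closed.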
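
The "assertions in disguise" bullet above (cause-and-effect requirements) might look like this in SVA. The requirement, its ID, and the req/gnt handshake signals are hypothetical illustrations; the fragment assumes clk and rst_n exist in the enclosing scope.

```systemverilog
// Hypothetical interface-category requirement REQ-BUS-012:
// "After req is asserted, gnt shall follow within 1 to 4 cycles."
property p_req_then_gnt;
  @(posedge clk) disable iff (!rst_n)
    req |-> ##[1:4] gnt;
endproperty
a_req_then_gnt : assert property (p_req_then_gnt)
  else $error("REQ-BUS-012 violated: gnt did not follow req");

// The matching negative requirement ("the design shall not grant
// without a request") reads just as naturally in SVA:
a_no_spurious_gnt : assert property (
  @(posedge clk) disable iff (!rst_n) gnt |-> req || $past(req));
```

Tagging such assertions by category (interface, internal, etc.) in the testplan keeps them ordered alongside the covergroup-linked rows.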
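
For configuration-oriented requirements, one covergroup per configuration family keeps the testplan row at the "family" level rather than enumerating every register value. The cfg_t type, its fields, and the requirement ID below are assumed names for illustration only.

```systemverilog
// Hypothetical requirement REQ-CFG-002: "Every supported combination of
// port width and parity mode shall be exercised."  cfg_t is an assumed
// configuration struct; sample() is called once per applied configuration.
covergroup cfg_cg with function sample(cfg_t cfg);
  option.name = "REQ_CFG_002";
  cp_width  : coverpoint cfg.port_width {
    bins w8 = {8}; bins w16 = {16}; bins w32 = {32};
  }
  cp_parity : coverpoint cfg.parity_mode;      // assumed enum: NONE, EVEN, ODD
  width_x_parity : cross cp_width, cp_parity;  // one bin per unique family
endgroup
```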
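
The sequence-requirement bullet (categories such as traffic, broken into sub-categories) can be sketched as one UVM sequence per family. Everything here is hypothetical: frame_item and its len field are assumed to exist in the testbench, and the range defining "short frames" is invented for the example.

```systemverilog
import uvm_pkg::*;
`include "uvm_macros.svh"

// Hypothetical "traffic" category, "short frames" sub-category.  A matching
// covergroup bin over len [1:16] would close the corresponding testplan row.
class short_frame_seq extends uvm_sequence #(frame_item);
  `uvm_object_utils(short_frame_seq)

  function new(string name = "short_frame_seq");
    super.new(name);
  endfunction

  virtual task body();
    frame_item it;
    repeat (10) begin
      // Constrain each item into the short-frame family
      `uvm_do_with(it, { it.len inside {[1:16]}; })
    end
  endtask
endclass
```

Each sub-category gets its own small sequence like this, so the testplan maps one sequence family to one coverage target instead of listing individual stimulus cases.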