
Coverage Cookbook/Specification to testplan/ru


Testplan Creation Methods

The goal of testplan or coverage model spreadsheet creation is to capture the subset of the design intent and behavior that is targeted for functional coverage. This is time consuming, because the manual process involves going through the various design specification documents and extracting the necessary requirements one at a time. It is best done by a cross-functional team made up of architects, designers, firmware engineers and verification engineers, in order to get multiple points of view and different approaches. Without the cross-functional aspect, it is easy to miss various subsets of the design intent. Testplan creation is best done by holding a series of meetings, each targeting a specific area of the design (block XYZ), lasting a fixed length of time (one hour, each morning next week at 9 am), and having a goal (50 requirements). In general, there are two methods that can be used:

  1. Bottom up: go block by block or interface by interface.
  2. Top down: follow the use model(s) or the flow of data through the chip.
The Two Methods

Bottom Up

Definition: Extract the requirements from the low level, detailed design and implementation specifications. This method is more design oriented.

Pros:

  • Low hanging fruit: easiest to find, extract and prioritize.
  • Easier to link to coverage.
  • Easier to close on coverage goals.
  • Because you comb over every block and interface, key, highly specific and important coverage is picked up that might be glossed over by the top down method.

Cons:

  • Needs well developed specs with implementation details.
  • Can lead to an explosion of requirements: too many to implement in a reasonable amount of time. Needs prioritization.
  • Coverage tends to be low level and uninteresting: lots of data, little useful information to explore tradeoffs.

Method: Have a series of meetings, each focused on a subset of the design, such as a block or interface. Gather the appropriate specifications and engineering personnel to extract the requirements, refine them, prioritize them, and link them to some coverage group, coverpoint or cross in a spreadsheet.

Top Down

Definition: Extract the requirements from the high level architecture and use model specifications. This approach is more customer/verification/user oriented.

Pros:

  • Can give more useful, high level, interesting coverage information, such as utilization, to explore tradeoffs.
  • Can be done before design specs are completed, without implementation details.
  • Lends itself to intelligent testbench automation (ITA - inFact) using flow chart graphs.
  • Forces a customer centric look at the design.

Cons:

  • Needs access to high level specifications or architects with clear use model definitions.
  • Use model(s) can sometimes grow exponentially and result in a huge coverage space with too many iterations.
  • Coverage tends to be more upstream, generation oriented coverage, not downstream DUT or scoreboard oriented. This can be misleading.

Method: Have a series of meetings with the architect and come up with a single high level use model first; then create a use model document that goes into further detail using lots of diagrams (tables, graphs, etc.) and minimal words. Then rework this document into spreadsheet format.

Choosing Between The Bottom Up And Top Down Approaches

  • Bottom up suits: small designs; good design specifications; good implementation specifications; control designs; general (multiple) application designs used by many customers.
  • Top down suits: large designs; good architecture specifications; access to use model(s) information; data movement designs; single application designs used specifically by one or a few customers.

Often a combination of top down and bottom up can be used. You can start with a top down pass and map out the main flow, which naturally brings out categories, and then do bottom up on each of the categories. It is wise to do this at the beginning of the project, as soon as some form of design specification is ready. Get started by extracting a few hundred requirements, put them into a spreadsheet, and then add more as the project progresses. Some teams link each requirement to a coverage element right away, as each requirement is extracted and refined. Others enter all the requirements into the spreadsheet and then take a second pass to add the coverage linking later on. Neither way is better than the other; the important thing is to get the coverage linking done while the details of each requirement are still fresh in your mind. Leaving the links until later in the project means you have to revisit each requirement and its associated documentation all over again, which takes longer.


Bottom Up Example

Below is a block diagram of an Ethernet chip with a TX and an RX path. Each path has a pipeline of blocks that the Ethernet frames pass through. Some of these blocks can be muxed in or out for various configurations. There are also various clocking configurations, and each block has its own configuration setup details. With a bottom up approach we would go through each block's design specification and extract the requirements for that block. We would also go through the global block and clock mux settings and extract the requirements for each of those. The key is to divide the work into small, digestible blocks or sub-blocks, so that the detailed requirements and behaviors can be easily extracted in a reasonable amount of time.

Ethpipe.png

The first thing you need to do to start the bottom up approach is to gather as many people who know the design as possible: architects, designers, the verification team, experts on various interfaces, etc. Next, the team needs to sub-divide the work into pieces of a logical, manageable size. This can be done by making a brainstorming diagram, also called a mindmap. Microsoft Visio and similar software enable easy capture of these types of diagrams on the fly, as the team brainstorms together. Each topic or sub-block can be broken down further and further as needed, and all of them are correlated in the brainstorming diagram. A simple example for the Ethernet chip is shown in the brainstorming diagram below. For more complicated designs, the brainstorming diagram would have many more sub-categories branching off each block to divide the requirement extraction work into manageable amounts. Each branch in the brainstorming diagram might end up being a corresponding category or subcategory in the Ethernet testplan, or, if large, might be its own hierarchical spreadsheet. Some mindmapping software can take these brainstorming diagrams and export the information into a spreadsheet with section numbers for each category and subcategory. This gives a great starting point and a ready framework for your testplan.

Brainstorming Diagram for Ethernet Pipeline Design
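The mindmap-to-spreadsheet export step described above can be sketched in a few lines. The sketch below is illustrative only: a nested dict stands in for the brainstorming diagram, and the Ethernet topic names are invented, not the real chip's block list.

```python
# Flatten a brainstorming diagram (nested dict of topics) into numbered
# category/subcategory rows, the starting framework for a testplan spreadsheet.

def mindmap_to_rows(node, prefix=""):
    """Turn a nested {topic: subtopics} dict into (section, title) rows."""
    rows = []
    for i, (topic, children) in enumerate(node.items(), start=1):
        section = f"{prefix}{i}"
        rows.append((section, topic))
        if children:  # recurse into sub-branches, extending the section number
            rows.extend(mindmap_to_rows(children, prefix=section + "."))
    return rows

# Hypothetical branches for the Ethernet pipeline example
mindmap = {
    "TX path": {"MAC TX": {}, "FIFO TX": {}},
    "RX path": {"MAC RX": {}, "FIFO RX": {}},
    "Clocking": {},
}

rows = mindmap_to_rows(mindmap)
for section, title in rows:
    print(section, title)  # e.g. "1 TX path", "1.1 MAC TX", ...
```

Each emitted row becomes one category or subcategory heading in the spreadsheet, under which the detailed requirements are later entered.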

The brainstorming diagram is a great first step. Each grouping or branch can then be broken out and one or more testplan creation meetings held to flesh out the requirements for that particular topic. At each meeting, gather all available design and implementation specifications, as well as any industry specifications for that block or topic, so that they can be consulted.

Once you have a topic you can use the yellow sticky method [1], where you give post-it notes to a team who take 20 min to extract out requirements onto yellow stickies and then stick them all up on a white board for grouping into further categories. Rules and features are extracted out into detailed requirements and then each entered as a row into a spreadsheet with a title, and a brief description that describes the essence of that requirement. See the section on the do's and don'ts of requirements writing below.

Adding some sort of unique, alphanumeric requirement tag to each requirement is a good idea, especially if you have requirements written at multiple levels. The tags can then be used to link higher level requirements to lower level requirements and vice versa. Requirements tracing tools, like ReqTracer, can be used to further regiment the requirement tag naming and help by automating the tracking of all your requirements. Another good idea is to add other useful information to guide further work with each requirement. This extra information might be the location in the spec that the requirement came from, the author, notes, priority, estimated effort, questions to answer later, etc. Finally, each requirement needs to be linked to some specific closure element, like a covergroup, coverpoint, cross, assertion, test, etc. A second pass over each requirement, in which each is refined and prioritized, is a good idea. See the testplan format page for a description and example of the recommended format.
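As a concrete illustration of the bookkeeping just described, here is a minimal sketch of a requirement row with a unique tag and a coverage link. The tag format, field names and example values are all made up; real teams would follow their own naming scheme or let a tool like ReqTracer enforce one.

```python
# A testplan spreadsheet row as a small record: tag, title, description,
# coverage link and link type, plus the "extra useful information" fields.
from dataclasses import dataclass

@dataclass
class Requirement:
    tag: str             # unique alphanumeric ID, e.g. ETH_TX_001
    title: str
    description: str
    link: str = ""       # closure element, e.g. "cg_tx_frame.cp_length"
    link_type: str = ""  # covergroup / coverpoint / cross / assertion / test
    priority: int = 3
    notes: str = ""

def next_tag(prefix, existing_tags):
    """Generate the next sequential tag for a prefix, e.g. ETH_TX_002."""
    nums = [int(t.rsplit("_", 1)[1]) for t in existing_tags
            if t.startswith(prefix)]
    return f"{prefix}_{max(nums, default=0) + 1:03d}"

reqs = [Requirement("ETH_TX_001", "Min frame size",
                    "TX path accepts 64-byte frames",
                    link="cg_tx_frame.cp_length", link_type="coverpoint")]
tag = next_tag("ETH_TX", [r.tag for r in reqs])
print(tag)  # ETH_TX_002
```

The point of filling in `link` and `link_type` as each row is written, rather than in a later pass, is exactly the "fresh in your mind" argument made earlier.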

The APB monitor, UART and datapath examples in the Coverage Cookbook use a bottom up planning approach.

[1] The Yellow Sticky Method is described in more detail in the book - Verification Plans: The Five-Day Verification Strategy for Modern Hardware Verification Languages by Peet James, Springer 2003.

Guidelines for writing requirements are available in the Requirements Writing Guidelines article. It is a good idea for the verification team to compile a list such as this before starting the planning process and to divide them up into rules (must be followed) and suggestions (good ideas). In effect, this is defining the requirements for writing requirements.


Top Down Example

The top down approach can be used on the same generic Ethernet design that was covered in the bottom up example. Instead of going block by block, we map out and follow how the design will be used in a real application. We follow the use model(s) of the chip, or what is sometimes called a "day in the life" of the chip. Power is applied. What happens first? What next? And so on. For this Ethernet chip, the top level use model has the following flow:

  1. Setup/Configuration
    • Block mux configuration
    • Clock mux configuration
    • Each block's setup
  2. Traffic
  3. Unexpected events (LOS, errors, etc.)

Next, you elaborate on this main top level flow, expanding it with more detail. You can do this line by line, or by path or mode. For instance, some combinations of block and clock muxing might be called modes, so you can add more detail mode by mode. You can also simply follow a path, such as system to line or line to system.


A common problem of this approach is that even with a design like this Ethernet pipeline, with its simple flow, the requirements can easily explode exponentially into what seems like too many combinations. This is common, so the explosion needs to be reworked into some logical breakdown as shown in this diagram:

TDexlode.png
When you look at the two parts of the above diagram, the exponential one on the left looks like one huge, uncloseable covergroup, while in the one on the right you can see covergroups and coverpoints naturally falling out of each table or diagram. So you take each part of the high level use model flow and expand each one using whatever table or diagram is best suited to containing that particular section's exponential nature. For instance, for the block muxing section of the Setup/Configuration you might develop a table of the potentially useful setups and name each one. In other cases a Y-tree, sequence, bubble diagram or some other chart would be more useful. Often it is a good idea to gather the high level use model flow and all these diagrams into a new use model document, intermixed with minimal words.
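The arithmetic behind the explosion is easy to show. The sketch below uses invented mux and configuration values purely to make the point: the blind cross of every setting is far larger than a table of named, useful setups, and only the latter maps onto closeable covergroups.

```python
# Left side of the diagram: cross every configuration field blindly.
from itertools import product

block_mux = ["bypass", "pipe_a", "pipe_b", "pipe_ab"]   # invented values
clock_mux = ["ref", "recovered", "external"]
frame_len = ["min", "typical", "jumbo"]
per_block_cfg = 16  # say, 16 meaningful settings per block

full_cross = len(list(product(block_mux, clock_mux, frame_len))) * per_block_cfg
print(full_cross)  # 4 * 3 * 3 * 16 = 576 combinations in one huge covergroup

# Right side: a table of named, useful setups; each row is one targeted
# configuration, and the whole table maps onto one small covergroup.
named_setups = {
    "default":   ("pipe_ab", "ref",       "typical"),
    "loopback":  ("bypass",  "ref",       "min"),
    "line_rate": ("pipe_ab", "recovered", "jumbo"),
}
print(len(named_setups))  # 3 targeted setups instead of 576 blind crosses
```

The reworking step in the diagram is exactly this move: from an implicit full cross to explicit, named rows that a team can actually prioritize and close on.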
Diagrams.png

Use a table, chart or diagram that best holds the exponential nature of each area of the use model:

  • Tables are good for small space, like a few bits of a register field, or a list of behaviors.
  • Bubble diagrams are good to show relationships between tasks or items, like the power areas and their settings.
  • Y tree diagrams are good for showing choices and decisions, ANDs & ORs, priorities.
  • Sequence diagrams show progression, cause & effect, and handshaking.
  • You can always combine diagrams together, like the group of tables above, connected by lines.

See the WB SOC design example for use models of how these diagrams are used in a coverage context.

Once you have broken out your use model(s) into a progressive collection of useful diagrams and tables, it is a good idea to put them all in one document for easy viewing and dissemination. Some teams combine them into one big diagram; others put them together in a presentation with descriptive informational slides between the diagrams. Other formats include documents (separate, or added as a chapter in the design architecture or implementation specifications) or HTML files for a project website. The presentation format is the most common and most useful. The collection document can go by many names, for example:

  • UMD: Use model document
  • DITL: Day in the life document
  • CAD: Coverage Architecture Document

Whatever you call it, this document is typically very useful for introducing a new team member to the design, giving them a clear overview. The team will often return to this document and these diagrams to flesh out more details as the verification project progresses.

Once you have a UMD, your verification team can use it as a guide to write a testplan. They can comb through it, extract the requirements and put them in the testplan. They can take each diagram, chart and table and make it a section or sub-section in the spreadsheet, or, if large, break it out into its own hierarchical spreadsheet. The key is to divide up the categories and sub-categories so that each spreadsheet row holds a single requirement that can usefully be linked to some coverage element. Another key is to write each requirement at about the same level. Each bubble in a bubble diagram might be a single requirement or an entire subsection of requirements. Each choice on a Y-tree diagram might be a single requirement or more. Each table can be a coverage group; each row or column, a coverpoint.
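The table-to-covergroup mapping just described can be mechanized. The sketch below generates a SystemVerilog-style covergroup skeleton from one UMD table, one coverpoint per column and one bin per cell; the table contents and naming conventions are assumptions for illustration, not a real chip's configuration.

```python
# Turn one UMD table into a covergroup skeleton: each column becomes a
# coverpoint, each cell value becomes a bin. Output is SystemVerilog-like
# text meant as a starting point, not finished code.

def table_to_covergroup(name, columns):
    """columns: {coverpoint_name: [bin names]} -> covergroup text."""
    lines = [f"covergroup cg_{name};"]
    for cp, bins in columns.items():
        lines.append(f"  {cp}: coverpoint {cp} {{")
        for b in bins:
            lines.append(f"    bins {b} = {{{b.upper()}}};")
        lines.append("  }")
    lines.append("endgroup")
    return "\n".join(lines)

# Hypothetical block-mux table from the Ethernet UMD
text = table_to_covergroup("block_mux", {
    "tx_mode": ["bypass", "pipe_a"],
    "rx_mode": ["bypass", "pipe_b"],
})
print(text)
```

Even done by hand rather than by script, the mapping discipline is the same: one table, one covergroup; one row or column, one coverpoint; one cell, one bin.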

The extraction of requirements from the UMD often follows the same bottom up extraction process described above. The UMD usually makes this easier, because of the inherent flow of the UMD and its diagrams. With practice, the verification team will start to visualize covergroups and coverpoints more readily, simply by looking at all the diagrams in their UMD. Just as with the bottom up approach, adding the link and type to a coverage group, coverpoint, cross, assertion or test is best done as you write the requirement.

See the Wishbone SOC example section for more details on how to take the UMD content and create a testplan spreadsheet.

Testplan Review

The verification process has many important aspects that demand the time and effort of the verification team. The building of the testbench, the running of tests, the schedule, etc. all too often take precedence over the coverage model testplan spreadsheet, and its development is deferred. Often a preliminary testplan is created, but the links to actual functional coverage elements are left out. The result is a poor coverage implementation and minimal coverage results. The team ends up verifying in the dark, letting random generation occur but not using coverage as feedback to guide the testing to any conclusion or closure. They tape out with a "good enough" approach to coverage that is not based on any real coverage metric data. Having a good testplan with well defined requirements, each linked to real coverage elements, is key. Taking the time to make this testplan will pay off in the long run. Adding the links as the requirements are written is the best approach; it also ensures that the team does not have to revisit all the documentation that inspired each requirement. To avoid this problem, mature verification teams implement a testplan review process modeled after good document or code review processes. A three stage process generally works well:

  1. PRELIMINARY REVIEW: A testplan is made early on and the first review is done early as well. It is a quick review, to make sure the testplan was created, has coverage linking and type, and is on the right track. It does not need to be perfect, but be the best that can be done at the time. It will evolve over the course of the project.
  2. MAIN REVIEW: About two-thirds of the way through a project, the real review occurs. The testplan is the coverage model, which defines a prioritized subset of design behavior and intent. The goal here is to make sure the priorities and the chosen subset are correct. You can't cover everything. You can't verify everything. The team must choose their subset and do the most verification and coverage in the allotted time. This review will take some time, often 2-5 days. The testplan is reviewed in detail, making sure each row's requirement is clear and is being met by the coverage linking. All issues are addressed and entered into a bug tracking tool. Often some reorganization of requirements is needed to bring the testplan up to date. It might need additions to accommodate missing content or design changes, but often it must be reduced so that it can realistically be accomplished in the remaining scheduled time. Often reprioritizations occur, and some work is moved to a future tape out. The goal of the review is to find and fix any major problems or missing parts in the coverage model testplan spreadsheet.
  3. FINAL REVIEW: This review is done in the final weeks of the project and if the other two reviews were done well, it is a final confirmation that the plan is valid. All big issues should have already been found and dealt with. In the final review exception details are added and any final concerns addressed before the testplan is closed.

This testplan review process is often combined with a similar three step code review process in which the RTL and testbench code are reviewed.