Flows for Testing Work in Jira

These flows are being developed in late July 2020, and are intended for use with the SAF and PLACES Jira products (i.e. the GPS Mobile App and Safe Places).

This is an early draft of the processes we hope to follow. Feedback welcome.

 

Ticket Types

This section covers the different types of ticket we use to track testing work in Jira, and the main flows that we use.

Retesting bug fixes workflow

Bugs in Jira look like this:

Bugs are raised as described here: https://pathcheck.atlassian.net/wiki/spaces/TEST/pages/81264869

Once a bug is fixed, Developers should:

  • Set status to READY FOR TEST

  • Include some details of the fix:

    • Which build will contain the fix

    • Any particular concerns or risks that need testing

  • Assign back to the original raiser

  • If the original raiser is no longer on the project, assign to “Triage QA” instead

 

Bugs ready to test are here: https://pathcheck.atlassian.net/issues/?filter=10055

When a Tester receives a bug in READY FOR TEST state, they should:

  • Once you begin to work on the bug fix, transition status to IN TEST

  • Once you have completed testing, transition status to DONE

    • Record in the ticket the testing you completed on the bug fix

  • If you find a problem with the bug fix, then transition status to IN PROGRESS and assign back to the Developer

  • If you are unable to complete a ticket in a reasonable timeframe, please:

    • Add notes to the ticket so that another tester can understand what needs to be done.

    • Assign the ticket to “Test Queue”

  • If you get stuck on a ticket and cannot progress, assign the ticket to “Triage QA”, where one of the leads will pick it up and review
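The bug-retest flow above can be sketched as a small state machine. This is purely illustrative (the status names come from this page, but the `transition` helper is hypothetical and is not a Jira API):

```python
# Illustrative sketch of the bug-retest workflow described above.
# Status names are taken from this page; this is NOT a Jira API.

ALLOWED = {
    "READY FOR TEST": {"IN TEST"},      # tester begins work on the fix
    "IN TEST": {"DONE",                 # fix verified; testing recorded in the ticket
                "IN PROGRESS"},         # problem found; assign back to the Developer
    "IN PROGRESS": {"READY FOR TEST"},  # Developer delivers a new fix
}

def transition(current: str, target: str) -> str:
    """Return the target status if the move is allowed, else raise ValueError."""
    if target not in ALLOWED.get(current, set()):
        raise ValueError(f"illegal transition: {current} -> {target}")
    return target

# A successful retest cycle:
status = "READY FOR TEST"
status = transition(status, "IN TEST")
status = transition(status, "DONE")
```

If a tester tries to skip a step (for example, DONE straight back to IN TEST), the sketch raises an error, mirroring the intent that work always flows through the statuses above.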

 

New features workflow

New Features are tracked in Jira by either Stories or Epics, which look like this:

The test activity for such a story or Epic is tracked in “QA:” subtasks that look like this:

Once a feature is ready for test, the Developer should:

  • Set status to READY FOR TEST

  • Include some details of the new feature:

    • Which build will contain the new feature

    • What the design of the new feature is, including links to Figma resources and any other design reference material.

    • Any concerns or risks that need particular attention from the QA team.

  • Assign to “Triage QA”

 

Triage QA will analyse the tickets, determine the required testing, and create a number of “QA:” subtasks (details below), which are then passed to Testers in the READY FOR TEST state.

New ready to test feature work can be found here: https://pathcheck.atlassian.net/issues/?filter=10056

When a Tester receives a new feature in READY FOR TEST state, they should:

  • Assign themselves to the “QA:” subtask ticket

  • Once you begin to work on the new feature, transition status to IN TEST

  • Update the ticket with details of the testing you completed for the new feature.

  • Once testing is complete, debrief with the test lead (see “debriefs” below).

  • After the debrief is complete, transition status to DONE

  • If you find bugs in the new feature, raise them in Jira as described here: https://pathcheck.atlassian.net/wiki/spaces/TEST/pages/81264869

  • If you find a blocking bug that means you cannot continue testing, then transition status to BLOCKED and link the Jira tickets to show that the bug blocks the testing.

  • If you get stuck on a ticket and cannot progress for any other reason, assign the ticket to “Triage QA”, where one of the leads will pick it up and review
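The “QA:” subtask states above, including BLOCKED and the debrief gate, can be sketched the same way. Again, this is a hypothetical illustration rather than a Jira API; the `debriefed` flag models the rule that a “QA:” subtask is only DONE after a debrief with a test lead:

```python
# Illustrative sketch of the "QA:" subtask workflow described above.
# Status names are taken from this page; this is NOT a Jira API.

QA_ALLOWED = {
    "READY FOR TEST": {"IN TEST"},   # tester assigns themselves and begins
    "IN TEST": {"DONE", "BLOCKED"},  # finish testing, or hit a blocking bug
    "BLOCKED": {"IN TEST"},          # resume once the blocking bug is fixed
}

def qa_transition(current: str, target: str, debriefed: bool = False) -> str:
    """Move a 'QA:' subtask between statuses; DONE requires a completed debrief."""
    if target not in QA_ALLOWED.get(current, set()):
        raise ValueError(f"illegal transition: {current} -> {target}")
    if target == "DONE" and not debriefed:
        raise ValueError("debrief with a test lead before marking DONE")
    return target
```

Note the deliberate asymmetry with the bug flow: a blocked subtask returns to IN TEST when unblocked, and nothing reaches DONE without the debrief.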

 

Debriefs

We run debriefs for “QA:” subtasks. We don’t typically run debriefs for bug fixes, but can do so at the tester or test lead’s discretion.

The purpose of a debrief is to review and agree on:

  • What has been learned in testing, and whether any further testing should be planned

  • Whether the documentation of the testing done is adequate

  • Whether any of the bugs found should block us from considering this testing to be “DONE” (typically because the bugs were major, or because significant further testing will be needed once the bugs are fixed).

  • Any testability issues, or impediments, that we should be looking to resolve

A debrief can take several forms, depending on the experience of the tester, and the complexity of what was tested. This could be:

  • a face-to-face video conference, reviewing the testing & bugs in detail

  • or, in simpler cases, a brief Slack conversation.

The debrief must reach a clear position on whether or not the required testing can be considered “DONE”. If further testing is needed, it must also agree whether that testing will be done under this ticket, or whether a new ticket will be raised to cover it.

 

Tester Responsibilities

We ask that testers do the following:

  • Assign tickets to themselves that they are working on

  • If they don’t have any work assigned, review work in the “Test Queue”, assign it to themselves, and do it! Where set, use ticket priority as a hint to the order in which to tackle things.

  • If the tester cannot complete an assigned ticket in a reasonable timeframe, and that work could reasonably be assigned to someone else, pass that work back to “Test Queue” so that someone else can pick it up

  • If a tester finds work assigned to “Test Queue” that is not clearly enough defined to progress, they should assign it back to “Triage QA” with a comment explaining the issue

  • For “QA:” subtasks, always debrief with a test lead before marking the task as DONE

Note that brand-new testers will probably find it easier to start with Bugs, and then move on to “QA:” sub-tasks once they have a bit more familiarity with the products.

 

Current Test Leads

Current Test Leads are @Diarmid Mackenzie and @Stella Nelson, currently sharing work across both SAF and PLACES.

If you are an experienced tester and would like to contribute as a test lead, please do let us know.

See also related article on guidance for Test Leads:
https://pathcheck.atlassian.net/wiki/spaces/TEST/pages/216990076