[tsf-dev] TSF process feedback, part 4: Change management, RAFIA
Hello,
Here's the final instalment of my reflections on using TSF's processes.
In CTRL we use the following elements, which you will recognise from TSF:
1. A set of Statements which present an assurance case, which is
managed with the trudag tool.
2. References as evidence for the statements.
3. Evaluation of the statements by subject matter experts (SMEs).
4. Evaluation of the statements using validator scripts.
5. Aggregate scores.
6. Monitoring when evidence changes.
7. Modifying the statements via Gitlab.
8. The RAFIA process and STPA.
This mail covers items 7 and 8. If you missed the first three parts,
you can find them here in the archives:
https://www.eclipse.org/lists/tsf-dev/msg00037.html
https://www.eclipse.org/lists/tsf-dev/msg00040.html
https://www.eclipse.org/lists/tsf-dev/msg00043.html
Modifying the statements via Gitlab
-----------------------------------
All of these mails are building to the key selling point of TSF and the
reason we use it: our assurance case is stored in Git, alongside the
product we are building, and the two evolve together.
As a software engineer, I consider this approach best practice for
maintaining documentation, but I understand that in the world of
compliance it's still something of a stretch goal. We do manage to
release our product every month with an up-to-date assurance case,
and of course I recommend everyone try this approach.
We use Gitlab to review changes and we follow the
approach documented here:
<https://pages.eclipse.dev/eclipse/tsf/tsf/extensions/management.html>.
Here are some of the lessons we've learned along the way.
Firstly, just keeping something in Git doesn't mean engineers will
update it, especially things like images and diagrams that aren't
easily reviewable or searchable as plain text. As a maintainer you
need to keep an eye on these, as always.
As part of that review, our CI runs `trudag manage lint` on each MR
to highlight any suspect items or links. In fact, we had to extend
this: we also run a `trudag-diff` job which does the following:
* Lint the graph on the 'main' branch
* Lint the graph on the merge request candidate branch
* Display errors from the candidate branch that are *not* present in
  'main'.
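The filtering step boils down to a set difference. Here's a minimal
sketch in Python, assuming lint output is one error per line; the
statement IDs and messages below are made up for illustration, and
actually invoking `trudag manage lint` on each branch is left out:

```python
def new_lint_errors(main_errors, candidate_errors):
    """Return lint errors introduced by the candidate branch.

    Errors already present on 'main' (e.g. statements marked Suspect
    because an external reference changed upstream) are filtered out,
    so the MR author only sees problems their own change introduced.
    """
    baseline = set(main_errors)
    return [err for err in candidate_errors if err not in baseline]

# Hypothetical lint output: one pre-existing error on 'main',
# two errors on the merge request candidate branch.
main_errors = ["S-012: reference changed upstream"]
candidate_errors = [
    "S-012: reference changed upstream",    # not the MR author's fault
    "S-034: broken link to docs/config.md", # introduced by this MR
]
print(new_lint_errors(main_errors, candidate_errors))
# → ['S-034: broken link to docs/config.md']
```

A plain list comprehension (rather than a full set difference) keeps
the candidate branch's error ordering intact, which makes the CI log
easier to read.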
This is needed because of external references: some statements may
be marked as Suspect through no fault of the MR author. And, since
Trudag fetches the external references each time it runs, the output
of `trudag manage lint` may be different each time you run it, even
on the same Git commit! This is counter-intuitive and the `trudag-diff`
job is a helpful workaround.
The `trudag-diff` job is helpful to show where re-review is needed.
For example, suppose an engineer changes a config file that is
referenced in a statement which has an SME evaluation. Trudag flags
the statement as Suspect: the SME needs to review it and confirm
that the statement is still true.
Since Trudag doesn't integrate with Gitlab, we do this as follows
for internal references:
1. First the engineer marks the statement as reviewed (even though
they are NOT necessarily the subject-matter expert), and records
that in the `.dotstop.dot` file.
2. Then, a Trustable reviewer checks the `trudag-diff` job and notices
that a statement with an SME review has changed. They tag the SME
on Gitlab.
3. The SME reviews the evidence and the statement and replies on
Gitlab.
This approach has the positive effect that our graph is reviewed
regularly, and the SME reviews give additional confidence on code
changes, so we rarely have to revert bad or unwanted changes later.
The downside is that it slows down development. Sometimes an SME
is unavailable, and you have to decide whether to wait for them to
return, or have someone else take over their score. It can lead
to four reviewers being called in over a one-line change that just
removes whitespace. And large changes that touch many files can
be much more expensive to land.
This is a problem that all software projects have, and my only advice
is to be flexible. TSF does allow merging changes *before* the
corresponding statements are reviewed, with a corresponding score of
0 for those changes. Sometimes that might be the right choice.
And, of course, the smaller your graph and the fewer references, the
lower the cost. It pays to be minimal!
RAFIA and STPA
--------------
My colleagues published an excellent article on this topic recently, so
instead of repeating that here I'll just share the link:
https://www.codethink.co.uk/articles/building-on-stpa/
---
So ends my "lessons learned" series; I'm interested to hear
feedback from others. I know some of these issues are documented
in Gitlab already; in fact, I'd be grateful if readers could reply
with links to any relevant issues you're aware of.
Best regards,
Sam
--
Sam Thursfield (he/him), Software Engineer
Codethink Ltd. http://www.codethink.co.uk/