Re: [tsf-dev] TSF process feedback - part 2

Paul,

It's worth thinking about what TSF claims to do at a high level. I think the best way to do this is to evaluate TSF from the potential customer's perspective.

Who is the potential customer, i.e., the people paying with their attention
and work time?

The major established users are going to stay with the tools they
have always used.
These people have been trained to write safety cases.

Nobody writes a safety case because they want to.
The next tier down are starting to be asked about conformance to
this, that, and the other.
External stakeholders need to be satisfied:
regulators, the legal department, major customers.

There seem to be lots of tools already out there.
A survey of 10 tools, filtered down from 46:
https://dl.acm.org/doi/fullHtml/10.1145/3342481

Grok's responses to a few basic questions
https://x.com/i/grok/share/33b5581aa03f493284b661326197609f

Most of these are academic tools, with no support past
the end of the research grant that paid for them.


The target customer is an infrequent user.
Deterred by the effort needed to build a case.
Deterred by the cost of acquiring necessary skills.

Need to address a broad market and support an ecosystem.


Where are we?

Different interpretations of a scoring function.
Different methods of combining values to create a score.

Defining one scoring function restricts the appeal of the tool.

Why not allow users to calculate any score using any method?
Provide built-in support for the common methods of
combining values and scoring functions.

Also support the calculation of multiple scores.
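To make the idea concrete, here is a minimal sketch of what user-defined scoring could look like (all names here are hypothetical illustrations, not part of any existing TSF API): scoring functions are plain callables registered by name, so a tool can ship built-in combining methods while users add their own, and multiple scores can be calculated from the same values.

```python
# Hypothetical sketch of pluggable scoring: each scoring function maps
# a list of values to a single score, and lives in a registry so that
# built-in and user-supplied methods are handled identically.

from typing import Callable, Dict, List

SCORERS: Dict[str, Callable[[List[float]], float]] = {}

def register(name: str):
    """Decorator that adds a scoring function to the registry."""
    def wrap(fn: Callable[[List[float]], float]):
        SCORERS[name] = fn
        return fn
    return wrap

# Two built-in combining methods.
@register("mean")
def mean_score(values: List[float]) -> float:
    return sum(values) / len(values)

@register("weakest_link")
def weakest_link(values: List[float]) -> float:
    return min(values)

def score_all(values: List[float], methods: List[str]) -> Dict[str, float]:
    """Calculate multiple scores from the same underlying values."""
    return {m: SCORERS[m](values) for m in methods}

print(score_all([0.9, 0.6, 0.8], ["mean", "weakest_link"]))
```

The point of the registry is that the tool does not have to pick one scoring interpretation; it only has to define the interface.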

Writing conformance statements is a skill that takes
practice to learn and do well.

LLMs are very good at analyzing sequences of words.

Why not provide a skills assistant that:

   o checks individual conformance statements

   o checks a set of conformance statements

   o checks a conformance document
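The three levels above could be wired up along these lines (a sketch only; the LLM backend is injected as a plain function, and none of these names correspond to an existing tool):

```python
# Sketch of a three-level skills assistant: statement, set, document.
# The LLM call is passed in as a function (prompt in, critique out),
# so any backend can be plugged in.

from typing import Callable, List

LLM = Callable[[str], str]

def check_statement(llm: LLM, statement: str) -> str:
    """Critique a single conformance statement."""
    return llm(
        "Review this conformance statement for clarity and "
        "consistent use of 'shall'/'must':\n" + statement
    )

def check_statement_set(llm: LLM, statements: List[str]) -> str:
    """Look for gaps, overlaps, and contradictions across a set."""
    joined = "\n".join(f"{i + 1}. {s}" for i, s in enumerate(statements))
    return llm("Check these statements for overlaps and contradictions:\n"
               + joined)

def check_document(llm: LLM, text: str) -> str:
    """Review a whole conformance document for structure and terminology."""
    return llm("Review this conformance document as a whole:\n" + text)

# Example with a stand-in "LLM" that just reports the prompt length.
fake_llm: LLM = lambda prompt: f"received {len(prompt)} characters"
print(check_statement(fake_llm, "The tool shall record every score."))
```

Keeping the LLM behind a one-function interface matters for the infrequent-user market: the assistant survives a change of model vendor.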

A review of LLMs as judges
https://arxiv.org/html/2511.02203v1
More LLM-based tools at the end of this thread
https://x.com/i/grok/share/33b5581aa03f493284b661326197609f

A simplistic analysis of the contents of one of the TA-analysis conformance
files
https://x.com/i/grok/share/f70c5d3b6e654c96a7f72d0b80f3b9bd

I used to spend a lot of time reading programming language standards,
where "shall" and "must" have specific meanings.  The apparently random
switching between the two in these .md files certainly triggered me.
Also, is the "we" set of people the same as the "our" set of people?

These .md files need a lot of cleaning up.
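The "shall"/"must" switching can at least be detected mechanically. A throwaway sketch (deciding which word each requirement actually intends still needs a human, or an LLM):

```python
# Quick-and-dirty check for mixed "shall"/"must" usage in a document.

import re

def keyword_counts(text: str) -> dict:
    """Count each requirement keyword, case-insensitively."""
    words = re.findall(r"\b(shall|must)\b", text, flags=re.IGNORECASE)
    counts = {"shall": 0, "must": 0}
    for w in words:
        counts[w.lower()] += 1
    return counts

def is_mixed(text: str) -> bool:
    """True when a document uses both keywords, i.e. needs a cleanup pass."""
    c = keyword_counts(text)
    return c["shall"] > 0 and c["must"] > 0

sample = "The tool shall log scores. Users must review the log."
print(keyword_counts(sample), is_mixed(sample))
```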

A plug for my hybrid talk at the British Computer Society
tomorrow
https://www.eventbrite.co.uk/e/building-models-of-software-process-behaviours-helped-by-llms-tickets-1987814433481

--
Derek M. Jones           Evidence-based software engineering
blog:https://shape-of-code.com
