In the last call we talked about models as abstractions and as templates/patterns. We did not talk about models in the sense of LLMs. We should. Specifically, when we talk about ‘Privacy Models for Engineering’, we should be clear about whether and how
that relates to the models behind language models and other neural networks.
It would be very interesting if it turns out to be possible to create a neural network that can generally enforce privacy rules, or that can remove privacy-violating behaviors and weights from language models. I don’t know whether such a thing exists,
or whether it could. That smells like something that could absorb a lot of research funding, and a lot of time. Because it could be such a time sink, I’m not suggesting this group pursue it now. Nevertheless, we should have a response ready when the question
inevitably comes up about how this group and its goals relate to language models.
Just a thought.
Steve Hickman
Epistimis LLC
651-260-3126
Book a VC with me
--
FYI: in case you’re wondering where the name comes from, see
this description of the 3 kinds of knowledge (I wish I’d written it).