Regarding endianness: why does each memory byte carry a flag about
endianness? That seems a rather odd way to implement it, and it makes
it a little backwards to talk to our debugger backend, which handles
all byte-swapping itself. When reading from the target,
I have four (overloaded) functions, which basically look like this:
Read(uint32_t count, uint8_t *buf, ...);
Read(uint32_t count, uint16_t *buf, ...);
Read(uint32_t count, uint32_t *buf, ...);
Read(uint32_t count, uint64_t *buf, ...);
This means that if you want to read 32-bit units from the target, for
example, you pass a uint32_t buffer and automatically get the correct
endianness.
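To illustrate, here is a minimal sketch of how I picture the 32-bit
case (the Target type, the dummy Read() body, and the addr parameter
are placeholders of mine; the real overloads take more parameters,
elided as "..." above):

#include <cstdint>
#include <vector>

// Stand-in for our backend; the real Read() overloads talk to the
// target and take additional parameters.
struct Target {
    // Reads `count` 32-bit units starting at `addr` into `buf`.
    // The backend byte-swaps as needed, so `buf` ends up holding
    // host-order values.  Dummy body here just for illustration.
    bool Read(uint32_t count, uint32_t *buf, uint64_t addr) {
        for (uint32_t i = 0; i < count; ++i)
            buf[i] = 0xDEADBEEF;   // pretend we read something
        (void)addr;
        return true;
    }
};

// Example: fetch four 32-bit words; each element of `words` is already
// in host byte order, whatever the target's endianness is.
std::vector<uint32_t> readWords(Target &t, uint64_t addr) {
    std::vector<uint32_t> words(4);
    if (!t.Read(static_cast<uint32_t>(words.size()), words.data(), addr))
        words.clear();
    return words;
}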
I'm not sure how to present this in a way suitable for DSF. The
getMemory() documentation says that the returned bytes should
"represent the target endianness", but I'm not exactly sure what that
means.
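My guess (purely an assumption on my part about that wording) is that
the returned bytes should be laid out exactly as they appear in target
memory, which would mean serializing back from host order when the
host and target differ. Something like the helper below, which is
mine and not part of DSF:

#include <cstdint>
#include <cstddef>

// Hypothetical helper: serialize host-order 32-bit words back into a
// byte stream laid out the way a big- or little-endian target stores
// them in memory.
void toTargetBytes(const uint32_t *words, size_t count,
                   uint8_t *out, bool targetIsBigEndian) {
    for (size_t i = 0; i < count; ++i) {
        uint32_t w = words[i];
        if (targetIsBigEndian) {
            out[4 * i + 0] = static_cast<uint8_t>(w >> 24);
            out[4 * i + 1] = static_cast<uint8_t>(w >> 16);
            out[4 * i + 2] = static_cast<uint8_t>(w >> 8);
            out[4 * i + 3] = static_cast<uint8_t>(w);
        } else {
            out[4 * i + 0] = static_cast<uint8_t>(w);
            out[4 * i + 1] = static_cast<uint8_t>(w >> 8);
            out[4 * i + 2] = static_cast<uint8_t>(w >> 16);
            out[4 * i + 3] = static_cast<uint8_t>(w >> 24);
        }
    }
}

If that is the intent, is it what getMemory() expects?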
Regarding word sizes: the GDB MIMemory service only supports a 1-byte
word size. I could not find an active bug report on that. Should I
file one?