[dsdp-dd-dev] Memory service questions


I'm implementing the memory service for our DSDP/DD debugger integration, and I have some questions about how DSF expects the memory service to behave.

(We're working against 1.0.0-RC4.)

One thing I noticed is that getMemory() is called several times for each update (five times and upwards), and when you attempt to scroll, the service is flooded with getMemory() calls. The MemoryByte array I return has the readable flag set and all other flags cleared. Should I be setting other flags as well, or is the flooding caused by something else?

In general, it is not clearly documented exactly how DSF expects the flags field to be set by the implementing service. What are the exact semantics of the HISTORY_KNOWN flag?
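For concreteness, here is roughly what my implementation does today (a sketch, not our actual code; toMemoryBytes() is a made-up helper that just illustrates the flag handling):

    import org.eclipse.debug.core.model.MemoryByte;

    // Sketch: build the getMemory() result with only READABLE set;
    // HISTORY_KNOWN, CHANGED and the endianness flags stay cleared.
    public class MemoryFlagsSketch {
        static MemoryByte[] toMemoryBytes(byte[] raw) {
            MemoryByte[] result = new MemoryByte[raw.length];
            for (int i = 0; i < raw.length; i++) {
                // The second constructor argument is the flags byte.
                result[i] = new MemoryByte(raw[i], MemoryByte.READABLE);
            }
            return result;
        }
    }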

Regarding endianness: why does each memory byte carry a flag about endianness? It seems a rather odd way to implement endianness, and it makes it a little backwards to communicate with our debugger backend, which handles all byte-swapping itself. When reading from the target, I have four (overloaded) functions, which basically look like this:

    Read(uint32_t count, uint8_t *buf, ...);
    Read(uint32_t count, uint16_t *buf, ...);
    Read(uint32_t count, uint32_t *buf, ...);
    Read(uint32_t count, uint64_t *buf, ...);

This means that if you, for example, want to read 32-bit units from the target, you use a uint32_t buffer, and you automatically get the correct endianness.

I'm not sure how to present this in a way suitable for DSF. The getMemory() documentation says that the returned bytes should "represent the target endianness", but I'm not exactly sure what that means.
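If it simply means that the raw bytes should appear in the same order as they do in target memory, my best guess would be something like the following (a sketch; it assumes the backend's 32-bit Read() results arrive on the Java side as host-order ints, and that targetBigEndian comes from the backend):

    import java.nio.ByteBuffer;
    import java.nio.ByteOrder;
    import org.eclipse.debug.core.model.MemoryByte;

    // Sketch: re-serialize host-order 32-bit words in the target's
    // byte order, then wrap each byte in a MemoryByte with the
    // endianness flags filled in.
    public class EndiannessSketch {
        static MemoryByte[] wordsToMemoryBytes(int[] words,
                                               boolean targetBigEndian) {
            ByteBuffer buf = ByteBuffer.allocate(words.length * 4);
            buf.order(targetBigEndian ? ByteOrder.BIG_ENDIAN
                                      : ByteOrder.LITTLE_ENDIAN);
            for (int w : words) {
                buf.putInt(w); // bytes land in target order
            }
            MemoryByte[] result = new MemoryByte[buf.capacity()];
            for (int i = 0; i < result.length; i++) {
                MemoryByte mb = new MemoryByte(buf.get(i), (byte) 0);
                mb.setReadable(true);
                mb.setEndianessKnown(true); // Eclipse API spelling
                mb.setBigEndian(targetBigEndian);
                result[i] = mb;
            }
            return result;
        }
    }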

Regarding word sizes: the GDB MIMemory service only supports a word size of 1 byte. I could not find an open bug report on that. Should I file one?

--
/Jesper


