
Re: [dsdp-dd-dev] Memory service questions

Hello Jesper, Pawel.

I'm sorry for the delay, but I've been away for a while and things have moved quite a bit for DSF in the meantime. I just wanted to check to what extent the service had been tampered with since I last looked at it...


On Fri, Oct 17, 2008 at 5:05 AM, Jesper Eskilson <jesper.eskilson@xxxxxx> wrote:
Pawel Piech wrote:
Hi Jesper,
The memory service and its integration with the memory view was written by Francois, who has been on leave for the last couple of months. I believe he is returning from his leave shortly, so he can give much better answers, but I will try to give you as much information as I remember:

Ok. I'll put this on hold then until he's back. (Should I send him a personal note to take a look at this, or should I just wait for him to catch up on his mail?)

I guess I'm back :-)

Jesper Eskilson wrote:

I'm implementing the memory service for our DSDP/DD debugger integration, and I have some questions about how DSF expects the memory service to behave.

(We're working against 1.0.0-RC4.)
The memory service has not changed much in 1.1.

I noticed that getMemory() is called several times for each update (5 times and upwards), and when you attempt to scroll, the service is flooded with getMemory() calls. The MemoryByte array returned has the readable flag set and all others cleared. Should I be setting other flags as well, or is the flooding caused by something else?
We assume that the memory service will implement caching and potentially coalescing of memory requests to the target. So all requests from the memory rendering are channeled directly to the service.

Exactly. Each time a memory monitor is created (by the user), the debug platform instantiates a corresponding IMemoryBlockExtension via a call to IMemoryBlockRetrieval.getExtendedMemoryBlock(). For DSF, these interfaces are implemented by DsfMemoryBlockRetrieval and DsfMemoryBlock respectively.

Following that, each time an update is requested, the UI makes a call to IMemoryBlockExtension.getBytesFromAddress(). In our case, depending on the requested block "location" (address and size) with respect to the cached block, and on the update policy (through IMemoryBlockUpdatePolicyProvider), we determine whether the cached block (or parts of it) can be re-used and whether a trip to the memory service (IMemory.getMemory()) is needed.

If so (getMemory() was called), the retrieved block is compared with the cached block to flag the changed MemoryBytes so the UI can highlight them in the corresponding monitor.
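To make the flow concrete, here is a rough sketch of that decision. The cache fields and the helpers (sliceOfCache, fetchFromServiceAndUpdateCache) are hypothetical names of mine; the real logic lives in DsfMemoryBlock and also consults the update policy:

   // Simplified sketch of the cache-reuse decision in getBytesFromAddress().
   // fCachedStart/fCachedEnd and both helpers are hypothetical placeholders.
   public MemoryByte[] getBytesFromAddress(BigInteger address, long units)
           throws DebugException {
       BigInteger end = address.add(BigInteger.valueOf(units));
       if (fCachedStart != null
               && address.compareTo(fCachedStart) >= 0
               && end.compareTo(fCachedEnd) <= 0) {
           // Requested range lies entirely within the cached block:
           // serve it from the cache, no trip to the memory service.
           return sliceOfCache(address, units);
       }
       // Otherwise ask the service (IMemory.getMemory()) and refresh the cache.
       return fetchFromServiceAndUpdateCache(address, units);
   }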

The memory service itself should issue its requests through a CommandCache which handles the duplicates quite nicely and limits the number of trips to the back-end.
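For reference, the usual pattern looks something like the sketch below. This is from memory, so check the exact signatures against the IMemory and CommandCache in your DSF version; ReadMemoryCommand and ReadMemoryResult are hypothetical stand-ins for your back-end's command classes:

   // Hedged sketch: funnel getMemory() through the CommandCache so that
   // identical outstanding requests are coalesced into one back-end command.
   public void getMemory(IDMContext dmc, IAddress address, long offset,
           int wordSize, int count, final DataRequestMonitor<MemoryByte[]> drm) {
       fCommandCache.execute(
           new ReadMemoryCommand(dmc, address, offset, wordSize, count),
           new DataRequestMonitor<ReadMemoryResult>(getExecutor(), drm) {
               @Override
               protected void handleSuccess() {
                   drm.setData(getData().toMemoryBytes()); // hypothetical conversion
                   drm.done();
               }
           });
   }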

The issue here is that I get a lot of *identical* getMemory() requests (same address, same range, same word size, etc.). If I start scrolling, the service is sometimes flooded with calls until I get an OutOfMemoryError.

Each of these getMemory() requests is ultimately triggered by the UI and should be coalesced by the CommandCache. The UI will request a memory read upon receiving a DebugEvent (DebugEvent.CHANGE, DebugEvent.CONTENT). By any chance, does your service issue such events liberally? That could explain the flood of identical requests.
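Concretely, the refresh is driven by events like the one below (classes from org.eclipse.debug.core), so it is worth checking how often your integration fires them; memoryBlock stands for your IMemoryBlockExtension instance:

   // Each event like this makes the Memory view re-read its block, so firing
   // it in a tight loop would produce exactly the flood you are describing.
   DebugPlugin.getDefault().fireDebugEventSet(new DebugEvent[] {
       new DebugEvent(memoryBlock, DebugEvent.CHANGE, DebugEvent.CONTENT)
   });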

In general, it is not very clearly documented exactly how DSF expects the flags field to be set by the implementing service. What are the exact semantics of the HISTORY_KNOWN flag?
MemoryByte (and its flags) come from the IMemoryBlock and IMemoryBlockExtension interfaces. We decided to use these at the service level as well for simplicity, but some of the flags, especially HISTORY_KNOWN and CHANGED, are managed by the memory block implementation (DsfMemoryBlock).

Does that mean that the memory service should just leave these alone? I noted that the GDB MIMemory service does not set any flags at all, at least at the sites I could find where it manipulates the MemoryByte objects. (I may have missed some places, though.)

The HISTORY_KNOWN flag simply means that the CHANGED flag is meaningful (my English is lousy, I know...). In getBytesFromAddress(), DsfMemoryBlock compares the cached block with the newly retrieved one and flags the changed bytes. It also sets the HISTORY_KNOWN flag so the UI will correctly highlight the changes (and put the little delta decorator too).
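In code, the comparison amounts to roughly this (a simplified sketch, assuming the cached and new blocks cover the same address range):

   // Simplified sketch of the history flagging done in DsfMemoryBlock.
   for (int i = 0; i < newBlock.length; i++) {
       newBlock[i].setHistoryKnown(true);  // makes the CHANGED flag meaningful
       newBlock[i].setChanged(newBlock[i].getValue() != cachedBlock[i].getValue());
   }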

The MIMemory service does not tamper with the other flags. But if your debugger provides information about endianness, readability, writability, etc., you should definitely take advantage of it in your service.
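For example, a service that knows these attributes could fill them in when it builds its MemoryByte[] (a sketch; regionIsWritable and targetIsBigEndian stand for whatever your back-end reports):

   // Service-side flag population; HISTORY_KNOWN and CHANGED are left
   // for DsfMemoryBlock to manage.
   MemoryByte mb = new MemoryByte(value);
   mb.setReadable(true);
   mb.setWritable(regionIsWritable);    // from your back-end (assumption)
   mb.setEndianessKnown(true);
   mb.setBigEndian(targetIsBigEndian);  // from your back-end (assumption)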

Regarding endianness: why does each memory byte carry a flag about endianness? It seems a rather odd way to implement endianness, and it makes it a little backwards to communicate with our debugger backend, which handles all byte-swapping itself. When reading from the target, I have four (overloaded) functions, which basically look like this:

   Read(uint32_t count, uint8_t *buf, ...);
   Read(uint32_t count, uint16_t *buf, ...);
   Read(uint32_t count, uint32_t *buf, ...);
   Read(uint32_t count, uint64_t *buf, ...);

This means that if you, for example, want to read 32-bit units from the target, you use a uint32_t buffer, and you automatically get the correct endianness.

I'm not sure how to present this in a way suitable to DSF. The getMemory() documentation says that the returned bytes should "represent the target endianness", but I'm not exactly sure what that means.

Regarding word sizes: the GDB MIMemory service only supports a 1-byte word size. I could not find an active bug report on that. Should I add one?

This is where my knowledge of the memory service fades :-( But I do know that the memory service implementation is basically a first-cut implementation, and really only the basic use cases were considered. So endianness and word size issues have not been worked out. But some of the CDI framework users, e.g. Freescale, have dealt with these issues, and it would be useful to look at their implementation and see where the DSF interface is lacking.

Is there any documentation available on how the (new) memory rendering system works? I found the docs for the org.eclipse.debug.ui.memoryRenderings extension point, but it doesn't really help in understanding how the memory rendering architecture works.

As Pawel mentioned, we elected to use a MemoryByte[] to represent our memory blocks. And each MemoryByte comes with its personalized set of flags :-) At first glance this might look inefficient, but this is what the UI expects (see IMemoryBlockExtension).

For the "word size", my understanding is that it corresponds to the size of the smallest addressable item, usually the byte (but, like a few other parameters, this should really be a launch configuration parameter).

Anyway, to keep things simple, I found it is easier to provide an array of bytes and then let the Memory view handle the formatting (ASCII, integer, char, ...). I suggest that you adopt a similar approach and that you provide a specific memory renderer for your more exotic rendering needs. Of course, I might very well have overlooked some critical aspect and, if you have an example where this doesn't work, I would very much like to see it.

My understanding of the endianness flag (another launch configuration parameter candidate) is that it is really there for the renderer to know how to handle the raw byte array.
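If your back-end already byte-swaps into host order, one way to honour "represent the target endianness" (my interpretation; the getMemory() javadoc does not spell this out) is to serialize the values back out with the target's byte order before building the MemoryByte[]:

   import java.nio.ByteBuffer;
   import java.nio.ByteOrder;

   // Sketch: turn host-order 32-bit values (as returned by your uint32_t Read
   // overload) back into a byte array laid out in target memory order.
   byte[] toTargetOrder(int[] words, boolean targetBigEndian) {
       ByteBuffer buf = ByteBuffer.allocate(words.length * 4);
       buf.order(targetBigEndian ? ByteOrder.BIG_ENDIAN : ByteOrder.LITTLE_ENDIAN);
       for (int w : words) {
           buf.putInt(w);   // written with the target's byte order
       }
       return buf.array();
   }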

--
/Jesper


_______________________________________________
dsdp-dd-dev mailing list
dsdp-dd-dev@xxxxxxxxxxx
https://dev.eclipse.org/mailman/listinfo/dsdp-dd-dev



I hope this answered most of your questions. Don't hesitate to ask for clarifications if needed.

In particular, I would like to hear about the flood of memory requests.

Best Regards,
/fc


