Eclipse Community Forums
Home » Language IDEs » C / C++ IDE (CDT) » Unusable Remote Debugging Performance
Unusable Remote Debugging Performance [message #848766] Wed, 18 April 2012 10:58
Michael Spertus
Messages: 78
Registered: July 2009
Member
Using the techniques from my other recent posts, I am able to run the debugger against gdbserver remotely. Unfortunately, each single step takes about 3 minutes! Looking at the gdb traces, I see that most of the time is spent invoking -thread-info for 50 different threads. This seems awfully long.

I did try the awful hack in http://code.google.com/p/android/issues/detail?id=9713#c23 of hex-editing thread-info to xhread-info in my gdb executable. After that, the debugger seems to work just as well functionally, and single-step times drop to about 10 seconds. Still, that is awfully slow for single stepping, and the hack can't be recommended.
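For reference, the byte patch that hack performs can be sketched as follows (a hypothetical helper, not the actual tool from the linked issue; the function names and paths are made up for illustration, and patching a gdb binary is of course unsupported and destructive — only ever do it to a copy):

```python
import shutil

def patch_bytes(data):
    # Replace the literal MI command string so gdb no longer recognizes it.
    # The replacement has the same length, so binary offsets are preserved.
    return data.replace(b"thread-info", b"xhread-info")

def patch_gdb(src_path, dst_path):
    # Patch a COPY of the gdb executable, never the original.
    shutil.copyfile(src_path, dst_path)
    with open(dst_path, "rb") as f:
        data = f.read()
    with open(dst_path, "wb") as f:
        f.write(patch_bytes(data))
    return data.count(b"thread-info")  # how many occurrences were patched
```

With the string no longer recognized, each -thread-info request fails immediately instead of triggering a slow round trip to the remote target — which matches the speedup described above.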

Any better suggestions? Why are the gdbserver responses so slow (e.g., the same info and commands are virtually instantaneous using gdb -tui over a console)?

Thanks,

Mike
Re: Unusable Remote Debugging Performance [message #848774 is a reply to message #848766] Wed, 18 April 2012 11:04
Marc Khouzam
Messages: 262
Registered: July 2009
Senior Member
I don't know why gdbserver is responding so slowly. However, I wonder why CDT is sending 50 different -thread-info commands. It should only be sending them for threads that are currently visible in your Debug view. If that is not the case, could you open a bug please? Performance is part of our recent effort to better support multicore debugging, which should eventually handle thousands of threads.

Thanks

marc
Re: Unusable Remote Debugging Performance [message #849295 is a reply to message #848774] Wed, 18 April 2012 23:02
Michael Spertus
Marc Khouzam wrote on Wed, 18 April 2012 11:04
I don't know why gdbserver is responding so slowly. However, I wonder why CDT is sending 50 different -thread-info commands. It should only be sending them for threads that are currently visible in your Debug view. If that is not the case, could you open a bug please? Performance is part of our recent effort to better support multicore debugging, which should eventually handle thousands of threads.

Thanks

marc

OK Marc, I will submit a bug. Note that I have switched to using "plink gdbwrapper", which runs at least 100 times as fast as gdbserver (gdbwrapper spawns a gdb and sits on stdin converting file paths between DOS and Unix).
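Mike's gdbwrapper itself isn't posted here, but the idea can be sketched roughly as follows (the path pattern, the /c/... target form, and the gdb flags are all assumptions; this sketch converts only DOS-to-Unix, whereas the real wrapper presumably handles both directions):

```python
import re
import subprocess
import sys

# Matches a DOS-style path such as C:\work\proj\main.c (an assumed pattern).
_DOS_PATH = re.compile(r"([A-Za-z]):((?:\\[^\s\"\\]+)+)")

def dos_to_unix(line):
    # Rewrite C:\work\proj -> /c/work/proj inside an MI command line.
    def repl(m):
        drive = m.group(1).lower()
        rest = m.group(2).replace("\\", "/")
        return "/" + drive + rest
    return _DOS_PATH.sub(repl, line)

def run_wrapper():
    # Spawn gdb and forward stdin to it, translating paths on the way.
    # (Call this from a __main__ guard; it blocks until stdin closes.)
    gdb = subprocess.Popen(["gdb", "--interpreter=mi2"],
                           stdin=subprocess.PIPE, text=True)
    for line in sys.stdin:
        gdb.stdin.write(dos_to_unix(line))
        gdb.stdin.flush()
    gdb.stdin.close()
    gdb.wait()
```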

Thanks,

Mike
Re: Unusable Remote Debugging Performance [message #898639 is a reply to message #848766] Thu, 26 July 2012 20:51
Tim Black
Messages: 17
Registered: September 2009
Junior Member
I'm seeing this same behavior: gdb traces show thread-info requests for each thread id every time I single step on the remote target. I couldn't find the bug for this and wondered if it got reported, or if there has been any progress or a suggested workaround. Also, I'm curious why "Force thread list update on suspend" doesn't alter this behavior; I get the same behavior regardless of this setting in the Eclipse launcher. Thanks!

BTW I'm using gdb, gdbserver version 7.4.1 (powerpc), Indigo Service Release 2 (Build id: 20120216-1857), org.eclipse.cdt version 8.0.2.201202111925, org.eclipse.cdt.launch.remote version 6.0.0.201202111925.
Re: Unusable Remote Debugging Performance [message #898747 is a reply to message #898639] Fri, 27 July 2012 08:40
Marc Khouzam
Tim Black wrote on Thu, 26 July 2012 20:51
I'm seeing this same behavior: gdb traces show thread-info requests for each thread id every time I single step on the remote target. I couldn't find the bug for this and wondered if it got reported, or if there has been any progress or a suggested workaround. Also, I'm curious why "Force thread list update on suspend" doesn't alter this behavior; I get the same behavior regardless of this setting in the Eclipse launcher. Thanks!

BTW I'm using gdb, gdbserver version 7.4.1 (powerpc), Indigo Service Release 2 (Build id: 20120216-1857), org.eclipse.cdt version 8.0.2.201202111925, org.eclipse.cdt.launch.remote version 6.0.0.201202111925.


I just tried it with 8.1 (but I don't expect any difference with 8.0.2 on that front) and I don't see what you describe. I made my debug view small enough to only show 5 or 6 threads of the 32 threads running. Only the ones visible got a -thread-info command sent. I tried both non-stop and all-stop with and without "Force thread list update on suspend".

Is there something else special you may be doing?

Marc
Re: Unusable Remote Debugging Performance [message #898799 is a reply to message #898747] Fri, 27 July 2012 12:34
Tim Black
I get this behavior even with 2 or 3 threads in view in the debug window. There are 89 in my app right now and every time I step in Thread 3 I get:

663,326 1058-exec-next --thread 3 1
663,343 1058^running
663,343 *running,thread-id="all"
663,343 (gdb) 
664,433 *stopped,reason="end-stepping-range",frame={addr="0x100ea8ac",func="Err::installHandler<Err:\
:ThreadError<Serialize::ErrorVirt, Err::Error>::Handler>",args=[{name="h",value="0x4c0fea50"}],file=\
"../BOS/include/ThreadError.h",fullname="/home/tblack/eclipse_workspaces/A/labrinth_1.1Devel/B\
OS/include/ThreadError.h",line="756"},thread-id="3",stopped-threads="all",core="0"
664,433 (gdb) 
664,447 1059-stack-info-depth --thread 3 21
664,525 1059^done,depth="21"
664,525 (gdb) 
664,543 1060-list-thread-groups
664,543 1061-thread-info 3
664,565 1062-stack-info-depth --thread 3
664,591 1060^done,groups=[{id="i1",type="process",pid="42000",executable="/home/tblack/eclipse_works\
paces/A/labrinth_1.1Devel/LabrinthDevice/LabrinthDevice_ppc_d",cores=["0"]}]
664,591 (gdb) 
664,591 1063-stack-list-locals --thread 3 --frame 0 1
664,592 1064-stack-list-frames --thread 3 1 3
664,656 1061^done,threads=[{id="3",target-id="Thread 3708",frame={level="0",addr="0x100ea8ac",func="\
Err::installHandler<Err::ThreadError<Serialize::ErrorVirt, Err::Error>::Handler>",args=[{name="h",va\
lue="0x4c0fea50"}],file="../BOS/include/ThreadError.h",fullname="/home/tblack/eclipse_workspaces/A/l\
abrinth_1.1Devel/BOS/include/ThreadError.h",line="756"},state="stopped",core="0"}]
664,656 (gdb) 
664,656 1062^done,depth="21"
664,656 (gdb) 
664,656 1063^done,locals=[{name="defList",value="0x117172e0"},{name="handlers",value="0x11710910"},{\
name="currentInstallCount",value="1"}]
664,656 (gdb) 
664,656 1064^done,stack=[frame={level="1",addr="0x100ea914",func="Err::ThreadError<Serialize::ErrorV\
irt, Err::Error>::Handler::Handler",file="../BOS/include/ThreadError.h",fullname="/home/tblack/eclip\
se_workspaces/A/labrinth_1.1Devel/BOS/include/ThreadError.h",line="806"},frame={level="2",addr\
="0x1130dcf8",func="Serialize::BoaErrorHandler::BoaErrorHandler",file="InterfaceError.cpp",fullname=\
"/home/tblack/eclipse_workspaces/A/labrinth_1.1Devel/Serialization/InterfaceError.cpp",line="6\
2"},frame={level="3",addr="0x10179874",func="Layout::LayoutBOA_BoaInterface::acceptBuf",file=".obj/g\
en/Layout_BoaHandler.cpp",fullname="/home/tblack/eclipse_workspaces/A/labrinth_1.1Devel/Layout\
/.obj/gen/Layout_BoaHandler.cpp",line="44"}]
664,656 (gdb) 
664,657 1065-var-create --thread 3 --frame 0 - * servId
664,682 1065^error,msg="-var-create: unable to create variable object"
664,682 (gdb) 
664,701 1066-thread-info 89
664,701 1067-thread-info 88
664,701 1068-thread-info 87
664,701 1069-thread-info 86
.
.
.
664,704 1153-thread-info 1
664,765 1066^done,threads=[{id="89",target-id="Thread 3709",frame={level="0",addr="0x0ff626d0",func=\
"recvfrom",args=[],from="/opt/powerpc/powerpc-unknown-linux-gnu/sys-root/lib/libpthread.so.0"},state\
="stopped",core="0"}]
664,765 (gdb) 
664,765 1154-data-evaluate-expression --thread 3 --frame 0 servId
664,765 1155-var-create --thread 3 --frame 0 - * servId
664,829 1067^done,threads=[{id="88",target-id="Thread 3707",frame={level="0",addr="0x0fc45d44",func=\
"nanosleep",args=[],from="/opt/powerpc/powerpc-unknown-linux-gnu/sys-root/lib/libc.so.6"},state="sto\
pped",core="0"}]
664,829 (gdb) 
664,830 1156-stack-list-frames --thread 3 0 3
664,830 1157-data-evaluate-expression --thread 3 --frame 0 &(h)
664,893 1068^done,threads=[{id="87",target-id="Thread 3706",frame={level="0",addr="0x0ff626d0",func=\
"recvfrom",args=[],from="/opt/powerpc/powerpc-unknown-linux-gnu/sys-root/lib/libpthread.so.0"},state\
="stopped",core="0"}]
664,893 (gdb) 
664,893 1158-data-evaluate-expression --thread 3 --frame 0 &(defList)
664,893 1159-data-evaluate-expression --thread 3 --frame 0 &(handlers)
664,961 1069^done,threads=[{id="86",target-id="Thread 3705",frame={level="0",addr="0x0fc73044",func=\
"select",args=[],from="/opt/powerpc/powerpc-unknown-linux-gnu/sys-root/lib/libc.so.6"},state="stoppe\
d",core="0"}]
.
.
.
671,328 (gdb) 
671,407 1153^done,threads=[{id="1",target-id="Thread 3598",frame={level="0",addr="0x0fc45d44",func="\
nanosleep",args=[],from="/opt/powerpc/powerpc-unknown-linux-gnu/sys-root/lib/libc.so.6"},state="stop\
ped",core="0"}]
671,408 (gdb) 


Then it lists the stack frames from thread 3. The data looks correct but it takes a horrendously long time to get the thread-info.

Are you using powerpc target for your remote debug session?

My DSF launcher has none of the options checked (Non-stop, Reverse, Force thread list update, debug forked).

Here is my gdbinit file:

set sysroot /opt/powerpc/powerpc-unknown-linux-gnu/sys-root
show sysroot
show solib-search-path

# perform gdb tasks common to all platforms. 
python print "Sourcing common.gdbinit..."
# add .. to search path so common.gdbinit will be found when cwd is <metaproject-root>/LabrinthUnitTest
set directories "$cdir:$cwd:.." 
source -s -v common.gdbinit


where common.gdbinit contains:

set print thread-events off

# This command file performs operations commonly required by gdb sessions.
# It is invoked by other gdb command files after they have performed platform- or
# application-specific gdb initialization commands.
python print "Entered common.gdbinit..."

maintenance set python print-stack on
show directories

# register the python pretty printers 
python
import os, sys
# Path is relative to CWD, which is set by "Working Directory" setting in Arguments tab of eclipse launcher (or $PWD if running at command line)
gppDir = "gdb_python_pretty_printers"
metaprojectRoot = os.path.normpath(os.getcwd())
while True:
    print "Looking for %s in %s..." % (gppDir, metaprojectRoot)
    if metaprojectRoot == "/":
        print "From cwd = '%s', could not find %s dir to use as reference to locating pretty_printers! Pretty printing feature will not be functional in this debug session!" % (os.getcwd(), gppDir)
        break
    if gppDir in os.listdir(metaprojectRoot):
        pretty_printer_dir = os.path.join(metaprojectRoot, gppDir)  
        pretty_printer_module = os.path.join(pretty_printer_dir, "libstdcxx", "v6", "printers.py") 
        if not os.path.exists(pretty_printer_module):
            print "Python pretty printers do not exist at specified location (%s)! Pretty printing feature will not be functional in this debug session!" % pretty_printer_module
            break
        else:
            sys.path.insert(0, pretty_printer_dir)
            from libstdcxx.v6.printers import register_libstdcxx_printers
            register_libstdcxx_printers (None)
            print "Successfully installed python pretty printers at %s." % pretty_printer_dir
            break
    metaprojectRoot = os.path.split(metaprojectRoot)[0]
end

python print "Done with common.gdbinit..."

[Updated on: Fri, 27 July 2012 12:36]

Re: Unusable Remote Debugging Performance [message #898814 is a reply to message #898799] Fri, 27 July 2012 14:05
Tim Black
OK, I think I see what the problem is now. It looks like the list of "thread ids that need to get updated on suspend" that CDT maintains is not based on an instantaneous snapshot of which threads are visible in the Debug view, but is instead an accumulation of every thread that has ever been visible in the Debug view.

Starting over with a new debug session, I can see that at first, when I step, I only see thread-info requests for the threads initially in view; but if I scroll the thread list and step again, I see thread-info requests for the superset of the threads that were visible before and those that are visible now. So all that is required to reproduce the behavior I was seeing is: step, scroll through the complete list of threads in the Debug view, then step again.
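In other words, the difference between the two policies can be sketched like this (a hypothetical illustration, not actual CDT code; the class and method names are made up):

```python
class ThreadUpdatePolicy:
    """Contrast an ever-growing refresh set with a visible-only snapshot."""

    def __init__(self):
        self.ever_visible = set()

    def refresh_accumulated(self, visible_now):
        # What CDT appears to do: remember every thread ever shown and
        # send -thread-info for all of them on each suspend.
        self.ever_visible |= set(visible_now)
        return sorted(self.ever_visible)

    def refresh_snapshot(self, visible_now):
        # What one would expect: only the threads visible right now.
        return sorted(set(visible_now))
```

After scrolling through all 89 threads once, the accumulated policy refreshes all 89 on every subsequent step, while the snapshot policy would only refresh the handful actually in view.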
Re: Unusable Remote Debugging Performance [message #898819 is a reply to message #898814] Fri, 27 July 2012 14:53
Tim Black
Part of the problem is that it is commonly necessary to scroll through all threads, because when a breakpoint is hit, that thread does not become "active", or selected, in the Debug view. When a breakpoint is hit, the corresponding source code is opened with the correct line highlighted, but the Variables view is empty because there is no thread selected in the Debug view. So you have to scroll through the threads, find the one that is suspended at a breakpoint, select it, and proceed. This is why I am commonly scrolling through the thread list, and why I frequently see the main problem at hand (polling unnecessary thread-info on suspend).

Do you have any insight into this secondary problem: why isn't the "suspended-at-breakpoint" thread always automatically made active/selected? I bring this up here because I believe that if this secondary problem never occurred, my normal debug workflow would not require scrolling through all the threads, so the list of thread ids that need to get updated would stay small, speeding up UI response when stepping.

In some recent tests, I've noticed this sequence of events to repro the secondary problem:
* break in thread N
* thread N active/selected
* set bkpt "downstream" in thread N
* resume
* the bkpt is hit, and momentarily the correct thread N is active/selected, but then the flood of thread-info requests occurs, and when it completes, no thread is selected in the view (actually the top-level process under debug is selected). Because no thread is selected, you have to scroll through the thread list to see which thread is suspended at the bkpt.

This behavior doesn't happen every time a breakpoint is hit, but I believe it happens when the total thread count changes over the course of running to a bkpt. For example, my main() makes a sequence of calls, many of which create new threads. I can place several breakpoints in main(), and when I run to each one, I notice that this secondary problem (selecting the top-level process in the Debug view instead of the active thread) occurs every time the thread count changes, and never seems to occur when the thread count doesn't change. This seems like an important clue.
Re: Unusable Remote Debugging Performance [message #898822 is a reply to message #898819] Fri, 27 July 2012 15:10
Marc Khouzam
Good work Tim. I can reproduce the problem now. I don't know why all threads are being refreshed in this case.

Could you open two bugs please?
1- all threads that have ever been displayed get refreshed
2- selection after suspend is not always correct

and explain how to reproduce each one?

Thanks!

Marc
Re: Unusable Remote Debugging Performance [message #898844 is a reply to message #898822] Fri, 27 July 2012 19:06
Tim Black
Done. Cool

https://bugs.eclipse.org/bugs/show_bug.cgi?id=386175
https://bugs.eclipse.org/bugs/show_bug.cgi?id=386176
Re: Unusable Remote Debugging Performance [message #1004877 is a reply to message #898844] Wed, 23 January 2013 22:36
qin yungao
Messages: 1
Registered: January 2013
Junior Member
Hi,
We hit the same problem when using Eclipse CDT to debug a process with 74 threads in all-stop mode. From the log, we observed a large amount of -thread-info MI traffic, which degrades performance. May I ask whether the community plans to fix this performance defect soon? Thanks.

286,866 [MI] 86^done,threads=[{id="28",target-id="Thread 433",frame={level="0",addr="0x400e5f5c",fu\
nc="__new_sem_wait",args=[{name="sem",value="0xc05d44"}],file="libpthread/nptl/sysdeps/unix/sysv/lin\
ux/sem_wait.c",line="59"},state="stopped"}]
286,866 [MI] (gdb)

287,615 [MI] 87^done,threads=[{id="27",target-id="Thread 434",frame={level="0",addr="0x400e5f5c",fu\
nc="__new_sem_wait",args=[{name="sem",value="0xc05d00"}],file="libpthread/nptl/sysdeps/unix/sysv/lin\
ux/sem_wait.c",line="59"},state="stopped"}]
287,631 [MI] (gdb)

288,397 [MI] 88^done,threads=[{id="11",target-id="Thread 450",frame={level="0",addr="0x400e5f5c",fu\
nc="__new_sem_wait",args=[{name="sem",value="0xc058c0"}],file="libpthread/nptl/sysdeps/unix/sysv/lin\
ux/sem_wait.c",line="59"},state="stopped"}]
288,397 [MI] (gdb)

289,178 [MI] 89^done,threads=[{id="10",target-id="Thread 451",frame={level="0",addr="0x400e5f5c",fu\
nc="__new_sem_wait",args=[{name="sem",value="0xc0587c"}],file="libpthread/nptl/sysdeps/unix/sysv/lin\
ux/sem_wait.c",line="59"},state="stopped"}]
289,178 [MI] (gdb)