WebCrawler and URL-Parser [message #565149]
Tue, 24 August 2010 12:31
Andrej Rosenheinrich
Registered: August 2010
Once again I have some questions about the webcrawler.
First, how are seeds containing a "#" parsed? It seems to me like everything after a "#" is ignored. That would be a problem, because some sites use this character in their GET options (you can consider this bad style, at least I do, but it works and it's used out there). Ignoring that part of the URL would therefore lead to a completely different site. Can the parser be configured or modified to parse such URLs?
Second, could you give a little more information about the crawling models, i.e. how the crawler behaves under the different models? For instance MaxDepth: when I provide several seeds, will the first seed be crawled until the depth limit is reached before the second seed is looked at, or will all seeds be crawled before going deeper?
Third, where can I find more information about the filter format? Without a description it's a bit tricky ;)
Thanks in advance!