Eclipse Community Forums
Home » Eclipse Projects » SeMantic Information Logistics Architecture (SMILA) » WebCrawler and URL-Parser
WebCrawler and URL-Parser [message #565149] Tue, 24 August 2010 12:31
Andrej Rosenheinrich
Messages: 22
Registered: August 2010
Junior Member

Once again I have some questions about the web crawler.

First, how are seeds containing a "#" parsed? It seems to me that everything after a "#" is ignored. That would be a problem, because some sites use this character in their GET options (you can consider this bad style, and I do, but it works and it's used out there). Ignoring that information would lead to a completely different page being fetched. Can the parser be configured or modified to parse such URLs?
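To make the question concrete, here is a minimal sketch of the behaviour I mean, using only `java.net.URI` (the URL is made up; this is not SMILA code). RFC 3986 defines the part after "#" as a fragment that is never sent to the server, which is presumably why parsers drop it:

```java
import java.net.URI;
import java.net.URISyntaxException;

public class FragmentDemo {
    // Rebuilds a URL without its fragment. Per RFC 3986 the fragment is
    // client-side only, so most URL parsers discard it exactly like this;
    // for sites that abuse "#" as part of the request, this loses data.
    static String stripFragment(String url) throws URISyntaxException {
        URI u = new URI(url);
        return new URI(u.getScheme(), u.getAuthority(), u.getPath(),
                       u.getQuery(), null).toString();
    }

    public static void main(String[] args) throws URISyntaxException {
        System.out.println(stripFragment("http://example.com/page?id=1#tab=2"));
        // → http://example.com/page?id=1
    }
}
```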

Second, could you give a bit more information about the crawling models, i.e. how the crawler behaves under each model? For instance, with MaxDepth: when I provide several seeds, will the first seed be crawled until the depth limit is reached before the second seed is looked at, or will all seeds be crawled before going deeper?
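To illustrate the two orderings I am asking about, here is a small sketch (not SMILA code; URLs and the one-child-per-page link extraction are invented). The only difference is whether new links go to the back of the frontier (queue, breadth-first) or the front (stack, depth-first):

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.Deque;
import java.util.List;

public class FrontierDemo {
    record Item(String url, int depth) {}

    // Returns the visit order; each entry is "url@depth".
    static List<String> crawl(List<String> seeds, int maxDepth, boolean breadthFirst) {
        Deque<Item> frontier = new ArrayDeque<>();
        seeds.forEach(s -> frontier.addLast(new Item(s, 0)));
        List<String> visits = new ArrayList<>();
        while (!frontier.isEmpty()) {
            Item item = frontier.pollFirst();
            visits.add(item.url() + "@" + item.depth());
            if (item.depth() < maxDepth) {
                // Hypothetical link extraction: pretend each page
                // links to exactly one child page.
                Item child = new Item(item.url() + "/x", item.depth() + 1);
                if (breadthFirst) frontier.addLast(child);  // queue → breadth-first
                else frontier.addFirst(child);              // stack → depth-first
            }
        }
        return visits;
    }

    public static void main(String[] args) {
        // Breadth-first: both seeds at depth 0 before any depth-1 page.
        System.out.println(crawl(List.of("a", "b"), 1, true));
        // Depth-first: seed "a" is followed to the depth limit first.
        System.out.println(crawl(List.of("a", "b"), 1, false));
    }
}
```

My question is essentially which of these two visit orders the MaxDepth model produces.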

Third, where can I find more information about the filter format? Without a description it's a bit tricky ;)

Thanks in advance!