Re: [servlet-dev] [jakartaee-platform-dev] What do we exactly dislike in Servlet?


Mark,

On Fri, 2 Sept 2022 at 19:30, Mark Thomas <markt@xxxxxxxxxx> wrote:
On 26/08/2022 03:52, Greg Wilkins wrote:
>
> On Wed, 24 Aug 2022 at 20:01, Mark Thomas <markt@xxxxxxxxxx
> <mailto:markt@xxxxxxxxxx>> wrote:
>
> But what nobody has been able to tell me, is why it will be different
> this time around?   The approach has been tried before with Green
> threads and Solaris multitasking and ultimately failed, as described in
> this paper from 2002 on Multithreading in Solaris
> <https://webtide.com/wp-content/uploads/2020/12/multithread.pdf>.  I
> cannot see in Loom any innovations that will generally solve the
> problems of m:n scheduling outlined in that paper.

The question for me is to what extent the problems with the generic case
outlined in that paper apply in this specific case.

I don't see why not. The general problem is using multiple CPU cores to run M kernel threads that execute N virtual threads. The issue is that the OS schedules between cores and kernel threads, while user-space code must schedule between kernel threads and virtual threads. That paper describes the struggle to come up with a one-size-fits-all scheduler, an attempt that was ultimately abandoned in favour of simple 1:1 kernel threads. Operating systems have not since reversed that decision to back away from M:N, so it is safe to assume that nothing new has been invented in this space.

Loom will perhaps have better integration into the JVM libraries, with more and better scheduling points, but I have yet to see anything that indicates they have solved the general problem. I think it is up to Loom to prove that these issues no longer apply, rather than for us to assume they have been solved.
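To make the M:N model concrete, here is a minimal sketch of Loom's approach (assuming Java 21; the class name and counts are illustrative, not from the thread): N virtual threads are multiplexed by a user-space scheduler onto a small pool of carrier kernel threads, and blocking calls such as sleep act as scheduling points where the virtual thread unmounts from its carrier.

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.atomic.AtomicInteger;

public class VirtualThreadDemo {
    public static void main(String[] args) {
        AtomicInteger completed = new AtomicInteger();
        // One virtual thread per task; the JVM schedules all of them
        // onto a small pool of carrier (kernel) threads.
        try (ExecutorService executor = Executors.newVirtualThreadPerTaskExecutor()) {
            for (int i = 0; i < 10_000; i++) {
                executor.submit(() -> {
                    try {
                        // Blocking call: a scheduling point where the virtual
                        // thread unmounts and frees its carrier thread.
                        Thread.sleep(10);
                    } catch (InterruptedException e) {
                        Thread.currentThread().interrupt();
                    }
                    completed.incrementAndGet();
                });
            }
        } // close() waits for all submitted tasks to finish
        System.out.println(completed.get());
    }
}
```

Note that the hard part the Solaris paper describes is not running such a demo, but making the user-space scheduler behave well for all workloads.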
 
>   This is not a bad thing and will probably be an excellent solution for
> the vast majority of applications, for which the benefits of simple
> blocking code will far outweigh any downsides with striving for the
> ultimate scheduling efficiencies.

Exactly.

However, it should also be noted that modern OSs are able to run tens of thousands of kernel threads with little issue. Thus the vast majority of applications can already be written simply in blocking style, without the need for Loom.

This is supported by the fact that the vast majority of servlet applications already use the blocking APIs, and few have scalability issues that directly result from thread starvation.
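The blocking, thread-per-task style referred to above can be sketched with plain kernel (platform) threads and no Loom at all (a standalone illustration, not code from the thread; the thread count is deliberately modest, though modern OSs handle far more):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.atomic.AtomicInteger;

public class PlatformThreadDemo {
    public static void main(String[] args) throws InterruptedException {
        AtomicInteger served = new AtomicInteger();
        List<Thread> threads = new ArrayList<>();
        for (int i = 0; i < 1_000; i++) {
            // One kernel thread per "request"; the OS does all the scheduling.
            Thread t = new Thread(() -> {
                try {
                    Thread.sleep(50); // stand-in for blocking I/O
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
                served.incrementAndGet();
            });
            t.start();
            threads.add(t);
        }
        for (Thread t : threads) {
            t.join();
        }
        System.out.println(served.get());
    }
}
```

Each thread here costs a kernel stack, which is the resource that only matters once concurrency reaches well beyond what most servlet applications ever see.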

To re-phrase my question above: What is the scope of the minority of
cases for Servlet based web applications where this approach does not work.

This rather hinges on what we mean by "Servlet" web applications. If we think of servlets as an application development environment, then I think very few applications have issues with the blocking API.

But if we think of servlets as a protocol API that might be used to implement other application frameworks, then I think there are more cases where the blocking API is not sufficient. For example, some RESTful frameworks just want super-efficient HTTP services and could not care less about the servlet API for application purposes. I think it is precisely these use-cases that would be interested in a low-level servlet API. It follows that a significant proportion of the target use-cases for a low-level HTTP API will be interested in maximal efficiency and scalability.


> But we are not talking about an application API.  The proposal being
> discussed here is for a low level HTTP API: just the minimum to get HTTP
> messages efficiently received and sent, that can be used as the basis
> for various application frameworks and/or very specific direct usages. 
> To put all our eggs in the Loom basket for such an API would be to make
> the bet that Loom will be sufficient and optimal for not just most, but
> all HTTP usages in java.  I don't think that Loom can claim to be that
> general of a solution.

I don't think async can make that claim either. There are always going
to be edge cases where one approach is better than the other. And a huge
overlap in the middle where there is very little distinction to draw
between either approach.

Really? What are some examples of low-level HTTP usage that can't be efficiently serviced by an async API? Sure, there are many that don't justify the complexity of an async API, but given that a trivial async-to-blocking conversion is normally possible, what can't be done?

I don't think async is a general application API, but I think it is the best you can get as a one-size-fits-all protocol API.
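The "trivial async-to-blocking conversion" mentioned above can be sketched like this (a standalone illustration using CompletableFuture; the API names are hypothetical, not from any proposal in this thread): an async operation returning a future is wrapped into a blocking call by simply joining on the result.

```java
import java.util.concurrent.CompletableFuture;

public class AsyncToBlocking {
    // A hypothetical async protocol API: completes off the calling thread.
    static CompletableFuture<String> readAsync() {
        return CompletableFuture.supplyAsync(() -> "hello");
    }

    // The trivial blocking adapter: park the calling thread until
    // the async result arrives. Going the other direction — wrapping
    // a blocking API to look async — costs a thread per call.
    static String readBlocking() {
        return readAsync().join();
    }

    public static void main(String[] args) {
        System.out.println(readBlocking());
    }
}
```

This asymmetry is the argument being made: a blocking facade over an async core is cheap, whereas an async facade over a blocking core is not.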


Re-phrasing my question again: What are the relative sizes of scopes of
use cases for Servlet based web applications where one approach (async
or Loom) is clearly more optimal than the other.

I think that for applications built on servlets, very few care about async. For most, blocking is fine, and for most of those, kernel threads are fine. There is only a small subset of applications that need to scale beyond the thread capacity of modern OSs.

For frameworks built on servlets, which really are just looking for an HTTP API and not an application API, I think most will want to boast that they can scale to huge volumes at low latencies. Thus, whilst few of their ultimate applications might actually need that scale, it is important for those frameworks to be able to support massive scale and low latency, something that async supports well.

 
There is a trade-off involved here. The benefits of the simplicity of
blocking code with Loom vs the performance benefits of async. I'd be a
lot more comfortable with some relevant, objective data to help compare
the two approaches. Are you aware of any work that looks at this in
the context of web applications? If not, I have some time available to
work on putting something together - WDYT?

I did some very rough benchmarks in my blogs on an early version of Loom: https://webtide.com/do-looms-claims-stack-up-part-1/. But a more current and more thorough evaluation is needed.

cheers


--
