I'm interested in the layering of Gyrex so we can work out which
combinations of Gyrex and Virgo bundles would work together or overlap.
We also have the notion of a "kernel" (or "core"). It contains the
bootstrapping logic, the clustering components, the context runtime core,
as well as helpers for monitoring (metrics, debugging, logging). Within
that kernel I see some overlap in terms of monitoring. The
bootstrapping is different because we don't rely on bundle start but on
OSGi applications.
The clustering part offers APIs for participating in the cluster (e.g. a
simple queuing service, a distributed locking service, OSGi/Eclipse
preferences). It does not expose ZooKeeper APIs, but sometimes it's
necessary to surface behavior that typically happens in a distributed
environment and not locally (e.g. concurrent modification of preferences
during a flush).
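To make that concrete, here is a rough sketch of what such a participation API could look like. The interface and all names are purely illustrative (not the actual Gyrex API), and the in-process implementation only stands in for a real ZooKeeper-backed one:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.locks.ReentrantLock;

public class LockServiceSketch {

    /** Hypothetical shape of a distributed lock API; names are illustrative. */
    interface DistributedLockService {
        boolean acquire(String lockId, long timeout, TimeUnit unit) throws InterruptedException;
        void release(String lockId);
    }

    /** In-process stand-in; a real implementation would coordinate via ZooKeeper. */
    static class InProcessLockService implements DistributedLockService {
        private final Map<String, ReentrantLock> locks = new ConcurrentHashMap<>();

        public boolean acquire(String lockId, long timeout, TimeUnit unit) throws InterruptedException {
            return locks.computeIfAbsent(lockId, id -> new ReentrantLock()).tryLock(timeout, unit);
        }

        public void release(String lockId) {
            ReentrantLock lock = locks.get(lockId);
            if (lock != null) lock.unlock();
        }
    }

    public static void main(String[] args) throws InterruptedException {
        DistributedLockService service = new InProcessLockService();
        // callers see a simple lock protocol; the distributed coordination is hidden
        if (service.acquire("reindex-job", 5, TimeUnit.SECONDS)) {
            try {
                System.out.println("lock held: reindex-job");
            } finally {
                service.release("reindex-job");
            }
        }
    }
}
```

The point of such an API is exactly the one above: callers never touch ZooKeeper directly, but the contract must still admit distributed effects (timeouts, lost locks) that a purely local lock would not exhibit.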
The clustering part also includes a provisioning component that uses
p2 to install bundles locally on nodes, based on instructions persisted
in ZooKeeper. The instructions could also be persisted elsewhere (e.g. in
a database), but some refactoring would be necessary to extract a generic
storage API.
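If such a generic storage API were extracted, it might look roughly like the sketch below. The interface is hypothetical, and the in-memory implementation just illustrates that ZooKeeper would become one back end among others:

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.Optional;

public class InstructionStoreSketch {

    /** Hypothetical generic storage API for provisioning instructions. */
    interface InstructionStore {
        void put(String nodeId, List<String> installableUnits);
        Optional<List<String>> get(String nodeId);
    }

    /** In-memory stand-in; real back ends could be ZooKeeper or a database. */
    static class InMemoryInstructionStore implements InstructionStore {
        private final Map<String, List<String>> instructions = new HashMap<>();

        public void put(String nodeId, List<String> installableUnits) {
            instructions.put(nodeId, installableUnits);
        }

        public Optional<List<String>> get(String nodeId) {
            return Optional.ofNullable(instructions.get(nodeId));
        }
    }

    public static void main(String[] args) {
        InstructionStore store = new InMemoryInstructionStore();
        // the provisioning component would read these and drive p2 installs
        store.put("node-1", List.of("org.example.feature.group"));
        System.out.println(store.get("node-1").orElseThrow());
    }
}
```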
We settled on SLF4J as the logging API and ship Logback as an
implementation. Gyrex generally avoids storing configuration
information locally, but some bootstrapping is necessary. We decided to
use only the instance location, which allows re-using the same
installation/configuration multiple times for different instances. This
is similar to what Eclipse does with different workspaces. Thus, by
default, things like ZooKeeper connection settings and the Logback
configuration are read from the instance location. Log files and other
local data are also written there (by default).
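For illustration, a minimal Logback configuration of that shape might look as follows; the `instance.location` property name is an assumption for the sketch, not the actual variable Gyrex defines:

```xml
<!-- logback.xml sketch; "instance.location" is an illustrative property name -->
<configuration>
  <appender name="FILE" class="ch.qos.logback.core.FileAppender">
    <!-- log files land under the instance location, matching the defaults above -->
    <file>${instance.location}/logs/gyrex.log</file>
    <encoder>
      <pattern>%d{ISO8601} [%thread] %-5level %logger{36} - %msg%n</pattern>
    </encoder>
  </appender>
  <root level="INFO">
    <appender-ref ref="FILE"/>
  </root>
</configuration>
```

Because the file lives in the instance location, two instances sharing one installation each get their own logging setup and their own log files.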
The "context runtime" is what we use to implement multi-tenancy. It's in
the kernel because it also affects some of the components in there.
For example, you want to be able to diagnose problems related to a
specific tenant. I'm curious about what Virgo plans to implement with
sub-systems.
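As a rough illustration of why per-tenant diagnosis pulls the context runtime into the kernel: a tenant-scoped context can be associated with the executing thread so that kernel services like logging and metrics can pick it up (with SLF4J, for instance, via the MDC). The sketch below uses a plain ThreadLocal stand-in; all names are illustrative, not the actual Gyrex context runtime API:

```java
/**
 * Minimal sketch of a tenant-scoped runtime context; names are
 * illustrative, not the actual Gyrex context runtime API.
 */
public class ContextSketch {

    static final class RuntimeContext {
        private final String tenantId;
        RuntimeContext(String tenantId) { this.tenantId = tenantId; }
        String getTenantId() { return tenantId; }
    }

    // thread-local association, so kernel components (logging, metrics)
    // can tag their output with the tenant that triggered the work
    private static final ThreadLocal<RuntimeContext> CURRENT = new ThreadLocal<>();

    static void runAs(String tenantId, Runnable work) {
        CURRENT.set(new RuntimeContext(tenantId));
        try {
            work.run();
        } finally {
            CURRENT.remove(); // never leak the context across pooled threads
        }
    }

    public static void main(String[] args) {
        runAs("tenant-42", () ->
            System.out.println("handling request for " + CURRENT.get().getTenantId()));
    }
}
```

With something like this in place, a monitoring helper inside the kernel can always ask "which tenant is this work for?" and attach that to log lines and metrics, which is exactly the diagnosis scenario above.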