Hi Peter,

On 7/26/2018 1:41 AM, Peter Hermsdorf wrote:

     Hi Scott,

     On 21.07.2018 at 00:11, Scott Lewis wrote:
          When the mapping is done, who/what does the mapping?...and how
          is it done?   It seems to me that's the problematic case.  Does
          it map both IP address and port, or just IP address?   Do you
          know if it's using NAT or some other tech?

     Typically the mapping is only done for the IP addresses. Actually I
     can't tell exactly how it's done. It's probably custom for every
     customer...

          Are there other services on this net...e.g. a web server...that
          are working properly with the addressing properties that you are
          looking for?   If so, how is that done?   A reverse proxy or
          load-balancing hw or ?
 
 
     Other services that work that way are JDBC connections to Oracle
     databases. They don't care whether you reach them by hostname or by
     one IP address or the other ... ;)
Given the smiley, I'm not sure if you are joking, but here's the
   rub:   JDBC connections are point-to-point (strict
   client-server)...and so the addressing is relatively simple.   Part
   of that simplicity is that it creates isolation between clients, but
   with DB connections that's generally what you want.
 
However, the ECF generic provider...and some of the
   others...provide a group model, where every process (clients and
   server) can both export and import remote services, as opposed to
   server-export and client-import only.
 
 Just to explain a little more:   In a group model (i.e. ECF
    container), every process in the group has to
 
 a) have a unique identity;
 b) agree (membership) to use the same ID to refer to the same
    process.
 
 So in the three group members case:
 
 Serverid -> ecftcp://some.name:3333/server
 
 Client1id -> 1
 
 Client2id -> 2
 
When the two clients connect to this server, all three processes
   receive the IDs of the other two processes...i.e. Server gets 1 and 2,
   Client1 gets serverid and 2, and Client2 gets serverid and 1.    Note
   that if Client1's serverid != Client2's serverid (i.e. the clients
   are on different networks), then that violates (b) above.    This
   seems to be your situation with the generic provider.
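
To make (b) concrete, here's a tiny plain-Java illustration (this is not
   ECF API, and the mapped address below is made up): the generic
   provider's server ID is effectively its connect URI, so if the two
   clients reach the server via different addresses they never agree on
   the same ID for the same process.

    // Plain-Java sketch (not ECF API): the generic provider's server ID is
    // effectively its connect URI, so ID 'agreement' reduces to URI equality.
    import java.net.URI;

    public class GroupIdCheck {
        public static void main(String[] args) {
            // What Client1 uses/sees (the real hostname)
            URI idSeenByClient1 = URI.create("ecftcp://some.name:3333/server");
            // What Client2 uses/sees through a mapped address (hypothetical)
            URI idSeenByClient2 = URI.create("ecftcp://10.0.0.5:3333/server");

            // false -> requirement (b) above is violated and membership breaks
            System.out.println(idSeenByClient1.equals(idSeenByClient2));
        }
    }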
 
 All I'm saying is that the introduction of NAT, firewalls, proxies,
    VPNs, etc changes the addressing.   This is less of a problem for
    client-server communication because there are only two processes
    'aware' of each other instead of a group.
 
When I wrote the generic provider (originally > 14 years ago),
   the addressing introduced by NAT, VPN, etc. wasn't nearly as
   prevalent.   I *could* have introduced some additional
   connection/group-join protocol to associate a separate/unique
   name (uuid, etc.) with the server IP address, so that clients didn't
   use some.name:3333.   However, at that time I didn't anticipate it
   would be necessary, and so I didn't do that...instead using the
   (guaranteed unique) some.name:3333 both to identify the process and
   for clients to connect to the server.  In retrospect it would have
   been nice if I had, but OTOH given the complexity involved in doing
   it in the 'general case' I'm kind of glad I didn't :).
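
Just to illustrate what that could have looked like (purely a
   sketch...nothing like this exists in the generic provider, and the
   names are made up):

    // Hypothetical sketch only -- this is NOT the generic provider's protocol.
    // The idea: the server identity is a UUID handed out during the group-join
    // handshake, and the socket address is purely a transport detail.
    import java.util.UUID;

    public class GroupIdentity {
        // Stable, address-independent identity for the server process
        final String serverGroupId = UUID.randomUUID().toString();

        // Whatever address a particular client happens to be able to reach;
        // different clients may use different mapped addresses for the same server
        String connectAddress = "some.name:3333";
    }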
 
It's possible that a new/extended generic container could be created
   that had this additional protocol, so that connected clients use a
   non-IP-based name for the server in the group.   It would probably
   be necessary, however, to first understand what the name mapping is
   doing for a given network topology (i.e. your customer's), at least
   if one is interested in keeping the 'group' nature (i.e. not being
   'strict client-server' at the service level).
 
 As we've been discussing, another option is to use a strict
    client-server topology rather than a group, and use or create a
    distribution provider based upon a strict client-server model.   See
    below for more comments about this.
 
 
 
      <stuff deleted>
 1) Currently we only use a "strict" client->server setup, but I
     could imagine use-cases where it could be useful if the server could
     import services from the clients, to realize something like pushing
     information to the client from the server (without polling etc.) -
     but that's probably another story...
 
 Indeed it is :).
 
 
 
          If having a strict client->server setup works for your services,
          then I would suggest you try either the JaxRSRemoteService
          providers [1], which are based upon HttpService (usually a jetty
          server).   It still seems to me that you would need a reverse
          proxy like nginx to expose the same server for access via
          multiple IP addresses/networks, and I'm not sure if that's
          possible on your target network, but nginx is frequently used
          for that.

     2) I don't think it's a good idea to switch to an http-based
     communication - performance-wise. I would like to stick with a
     binary transport layer rather than have http protocol overhead
     (remember my Kryo serialization implementation)... and we don't
     have a use case where other services/participants would benefit
     from an http-based communication...
 
     We have a jetty on the server side, but it has nothing to do with
     the OSGi remote services - it just provides some JAX-RS REST
     services.
 
Given that you already have a jetty server working in this topology,
   perhaps it would be worth giving the JaxRSProvider a try...and seeing
   how the performance is for a test service.   I understand the concern
   about performance with http...especially if you are sending lots of
   messages.   However, as you know, jetty/websockets, caching proxies,
   hw, etc. have improved the performance of http under many usage
   scenarios, so maybe it will be less of a problem than you think.
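
If you do try it, a test service for the JaxRS providers can be as
   small as a plain JAX-RS-annotated interface.   A minimal sketch (the
   interface name and paths are made up; the annotations are standard
   JAX-RS):

    // Minimal sketch of a test service interface for the JaxRS providers
    // (name and paths are hypothetical; the annotations are standard JAX-RS)
    import javax.ws.rs.GET;
    import javax.ws.rs.Path;
    import javax.ws.rs.Produces;
    import javax.ws.rs.core.MediaType;

    @Path("/ping")
    public interface PingService {

        // A trivial round-trip call to get a first feel for latency
        @GET
        @Path("/echo")
        @Produces(MediaType.TEXT_PLAIN)
        String echo();
    }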
 
Another thought:   Once you were confident that a strict
   client-server model would get you what you want in terms of
   connection, you could create a simple
   websocket-or-regular-socket-based distribution provider based upon
   your Kryo serialization provider [1], or at least starting from
   that.   With Photon I've tried to make it easier to create new
   distribution providers (more/more useful abstract classes).   There
   are other providers at [2] that you could model from (e.g.
   Chronicle, grpc, etc.), or you could just use a simple socket
   connection based upon the trivial provider [3].
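
For the socket-based direction, the wire-level piece is essentially
   Kryo reading and writing over the socket streams.   A bare-bones
   sketch of just that piece (class names are placeholders; a real
   distribution provider would build on the ECF abstract classes rather
   than doing this by hand):

    // Bare-bones sketch: Kryo (de)serialization over a plain socket stream.
    // RequestMessage is a placeholder payload type; register the same classes
    // in the same order on both sides of the connection.
    import java.net.Socket;
    import com.esotericsoftware.kryo.Kryo;
    import com.esotericsoftware.kryo.io.Input;
    import com.esotericsoftware.kryo.io.Output;

    public class KryoWire {

        public static class RequestMessage {
            public String serviceId;
            public String method;
        }

        public static void send(Socket socket, RequestMessage msg) throws Exception {
            Kryo kryo = new Kryo();
            kryo.register(RequestMessage.class);
            Output out = new Output(socket.getOutputStream());
            kryo.writeObject(out, msg);            // binary, no http overhead
            out.flush();
        }

        public static RequestMessage receive(Socket socket) throws Exception {
            Kryo kryo = new Kryo();
            kryo.register(RequestMessage.class);
            Input in = new Input(socket.getInputStream());
            return kryo.readObject(in, RequestMessage.class);
        }
    }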
 
You can also combine multiple distribution providers if you need to
   (i.e. some services with JaxRS and others with a custom socket-based
   distribution provider).
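
Selecting the provider per service is then just a matter of the
   standard RSA export properties on each registration.   A sketch (the
   property names are standard OSGi RSA; the config id values below are
   placeholders, so use the config ids of the providers you actually
   deploy):

    // Sketch: per-service provider selection via standard OSGi RSA properties.
    // The service interfaces and the config id values are placeholders.
    import java.util.Hashtable;
    import org.osgi.framework.BundleContext;

    public class Registrations {

        public interface RestService { String echo(); }
        public interface BulkService { byte[] fetch(String id); }

        static void register(BundleContext ctx, RestService rest, BulkService bulk) {
            Hashtable<String, Object> restProps = new Hashtable<>();
            restProps.put("service.exported.interfaces", "*");
            restProps.put("service.exported.configs", "ecf.jaxrs.jersey.server"); // placeholder
            ctx.registerService(RestService.class, rest, restProps);

            Hashtable<String, Object> bulkProps = new Hashtable<>();
            bulkProps.put("service.exported.interfaces", "*");
            bulkProps.put("service.exported.configs", "my.custom.socket.provider"); // placeholder
            ctx.registerService(BulkService.class, bulk, bulkProps);
        }
    }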
 
 
 <stuff
      deleted>
 3) Switching to a different provider is an option, if there is
     "no" performance problem and this "connection" issue would be
     solved. Additional infrastructure for translating/mapping/proxying
     is a problem and is, in the end, no real option... from my point of
     view that's the job of the underlying TCP/IP network...
 
That would be very nice, but unfortunately these days with NAT, VPN,
   etc. we are not dealing with just one TCP/IP network :).
 
 
 Because of the many questions regarding how the network mapping
     etc. is done, I would like to describe another scenario which shows
     the same problem but is probably easier to reproduce:
 
 Thanks, this is helpful.
 
 
 Use VirtualBox on your host machine and install a virtual machine
     (e.g. running Linux). Let's name it server1. Deploy a service with
     the generic provider on server1. The service will bind to the
     local network interface and use the local name of the Linux
     machine, server1. That means e.g. the endpoint ID is
     ecftcp://server1.local:3282/server .

     In VirtualBox on the host machine, configure a port mapping from
     port 3282 into the virtual machine with the same port 3282.

     From a network perspective you are now able to reach the service
     in the Linux box from your host machine using tcp localhost:3282.

     If you now start a service consumer on your host machine which
     uses the endpoint ecftcp://localhost:3282/server, or the real
     hostname of the host machine, e.g.
     ecftcp://scotts-machine:3282/server, you will get a successful
     tcp/ip connection between client and server, but (of course) the
     service import is not working because of the different endpoint
     IDs...
 
 Right.   See explanation above for why this is the case with a group
    model/generic distribution provider.
 
 So, if you can give up the group model and the ability to export
    services from peers...and it sounds like you can...then you should
    be able to give one of the JaxRSProviders a try.
 
 If tcp is needed for your required performance, then to be safe, I
    would suggest trying a very very small Java application to create a
    socket connection, read and write a few bytes and make sure that
    your target network will allow such client-server comm, as many VPN
    networks limit traffic by port and/or protocol, etc.  This is one
    reason it's so hard for me or anyone to create a 'general' socket
    provider that will work on any VPN, NAT, network, etc.  They can be
    configured in ways that will allow some things (e.g. odbc, http over
    specific ports) and not allow others (e.g. socket comm over port
    xxxx).
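
For that very small Java test application, something along these
   lines should be enough (host/port are placeholders; run the server
   half inside the target network and the client half from the other
   side of the VPN/NAT):

    // Minimal socket reachability probe: the server echoes a few bytes back.
    // Host and port are placeholders for whatever your target network uses.
    import java.io.InputStream;
    import java.io.OutputStream;
    import java.net.ServerSocket;
    import java.net.Socket;

    public class SocketProbe {

        // Run this half on the machine 'inside' the target network
        static void server(int port) throws Exception {
            try (ServerSocket ss = new ServerSocket(port);
                 Socket s = ss.accept()) {
                InputStream in = s.getInputStream();
                OutputStream out = s.getOutputStream();
                int b;
                while ((b = in.read()) != -1) {
                    out.write(b);                  // echo each byte back
                    out.flush();
                }
            }
        }

        // Run this half from the client side of the VPN/NAT/firewall
        static void client(String host, int port) throws Exception {
            try (Socket s = new Socket(host, port)) {
                s.getOutputStream().write(new byte[] { 1, 2, 3 });
                s.getOutputStream().flush();
                byte[] reply = new byte[3];
                int read = s.getInputStream().read(reply);
                System.out.println("echoed " + read + " byte(s)");  // expect 3
            }
        }

        public static void main(String[] args) throws Exception {
            if (args.length == 1) server(Integer.parseInt(args[0]));     // e.g. 3282
            else client(args[0], Integer.parseInt(args[1]));             // host port
        }
    }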
 
 If the Java socket app works in your target environment then I would
    suggest creating a very simple new distribution provider starting
    with the trivial provider [3].   If you decide to do this I can/will
    help and would welcome it, but for full-effort on it I would need
    some additional arrangement.
 
 
 I need a solution for that without adding local hostname entries,
     DNS changes, or additional servers or infrastructure ;)
 
 Ok.
 
 Regards,
 
 Scott
 
 [1] https://github.com/ECF/kryo-serialization
 [2] https://github.com/ECF
[3] http://git.eclipse.org/c/ecf/org.eclipse.ecf.git/tree/examples/bundles/org.eclipse.ecf.examples.provider.trivial
 
 