Re: [CDO][0.8.0] [message #94265 is a reply to message #94255]
Mon, 27 August 2007 14:45
Eclipse User
Originally posted by: stepper.sympedia.de
Hi Simon,
I've just provided the infrastructure for wrapping the streams used by signals.
The easiest way to do so is to add an element processor to the managed container:

<extension point="org.eclipse.net4j.util.elementProcessors">
  <elementProcessor
    class="org.eclipse.emf.cdo.protocol.util.CDOXORStreamWrapperInjector"/>
</extension>
The element processor injects an IStreamWrapper into protocols of the given name (here "cdo").
The example above is taken from the cdo.ui plugin and wraps the signal streams with XOR'ing filter streams.
This is mostly useful for testing or simple ciphering.
For you I've provided CDOGZIPStreamWrapperInjector, which is a subclass of GZIPStreamWrapperInjector.
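For illustration, here is a minimal sketch of what an XOR'ing stream wrapper does (this is not net4j's actual XOR stream implementation; class and method names are made up). It extends InputStream/OutputStream directly, so the inherited array methods delegate to the single-byte overrides:

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;
import java.nio.charset.StandardCharsets;

// Minimal sketch of XOR'ing stream wrappers (not net4j's actual classes).
// Extending InputStream/OutputStream directly means the inherited array
// methods delegate to the single-byte overrides below, so every byte
// passes through the XOR transformation.
public class XorStreams
{
  public static InputStream wrap(InputStream in, int key)
  {
    return new InputStream()
    {
      @Override
      public int read() throws IOException
      {
        int c = in.read();
        return c == -1 ? -1 : (c ^ key) & 0xFF; // keep the EOF marker intact
      }
    };
  }

  public static OutputStream wrap(OutputStream out, int key)
  {
    return new OutputStream()
    {
      @Override
      public void write(int b) throws IOException
      {
        out.write((b ^ key) & 0xFF);
      }
    };
  }

  // Round-trips data through an XOR'ing writer and reader with the same key.
  // XOR'ing twice with the same key restores the original bytes.
  public static byte[] roundTrip(byte[] data, int key) throws IOException
  {
    ByteArrayOutputStream encoded = new ByteArrayOutputStream();
    OutputStream out = wrap(encoded, key);
    out.write(data);
    out.flush();

    InputStream in = wrap(new ByteArrayInputStream(encoded.toByteArray()), key);
    byte[] decoded = new byte[data.length];
    int n = 0, c;
    while ((c = in.read()) != -1)
    {
      decoded[n++] = (byte) c;
    }
    return decoded;
  }

  public static void main(String[] args) throws IOException
  {
    byte[] decoded = roundTrip("repo1".getBytes(StandardCharsets.US_ASCII), 0x5A);
    System.out.println(new String(decoded, StandardCharsets.US_ASCII)); // repo1
  }
}
```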
Sounds good? Unfortunately, neither of them works correctly ;-(
I've tried for hours to track it down, without success so far.
I guess it relates somehow to the transmission of byte arrays.
The Java stream API is evil in my opinion, for the following reason:
the read() method of InputStream and the write() method of OutputStream
return/take an int value instead of a byte value.
I suspect this is meant to enable read() to signal end-of-stream (-1)
without throwing an exception.
Certainly only Sun knows why a magic number in a too-large data type is
better than a matching data type plus an exception for a special case.
Maybe they have a good reason that I'm just too silly to imagine ;-)
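To make the int-vs-byte point concrete, here is a small JDK-only sketch (the class name is made up). The byte 0xFF has the numeric value -1 as a Java byte, so if read() returned a byte, that value would be indistinguishable from the end-of-stream marker; returning an int keeps the data range 0..255 and reserves -1 for EOF:

```java
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.io.InputStream;

// Illustrates why InputStream.read() returns int: the data byte 0xFF is
// delivered as 255, so it can never collide with the EOF sentinel -1.
public class ReadIntDemo
{
  // Reads every byte of data through InputStream.read() and returns the
  // int values exactly as delivered by the stream API.
  public static int[] readAll(byte[] data) throws IOException
  {
    InputStream in = new ByteArrayInputStream(data);
    int[] values = new int[data.length];
    int c, n = 0;
    while ((c = in.read()) != -1) // -1 is reserved for end-of-stream
    {
      values[n++] = c; // always 0..255: (byte) 0xFF arrives as 255, not -1
    }
    return values;
  }

  public static void main(String[] args) throws IOException
  {
    System.out.println(readAll(new byte[] { (byte) 0xFF, 0 })[0]); // 255
  }
}
```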
I believe that my BufferInputStream and BufferOutputStream implement read() and write() correctly.
Without the newly introduced stream wrapping they pass all test cases.
That could of course also be because two bugs cancel each other out,
and the cancellation no longer happens once the streams are wrapped.
But that's only a vague idea.
Anyway, wrapping the streams leads to an exception when opening a CDOSession.
At first the repository name is transmitted as
[00 00 00 05]
[114, 101, 112, 111, 49]
(the ASCII bytes for "repo1"), but the following is received:
[00 00 00 05]
[-40, 48, -38, 58, -101]
Interestingly, the length is received correctly; only the byte array
is corrupted somehow.
Maybe you can have a look at it and have an idea what's going on.
I suggest that you set breakpoints in
ExtendedIOUtil.readByteArray(DataInput) and
ExtendedIOUtil.writeByteArray(DataOutput, byte[])
Cheers
/Eike
Simon McDuff schrieb:
> Hi Eike,
>
> Did you try using GZIPOutputStream and GZIPInputStream in your framework?
>
> If yes, do you have some results?
>
> If no, where should we plug GZIPOutputStream and GZIPInputStream into your
> framework?
>
> Simon
Re: [CDO][0.8.0] [message #94384 is a reply to message #94265]
Tue, 28 August 2007 07:04
Eclipse User
Originally posted by: stepper.sympedia.de
Hi Simon,
I think I found the root cause of the problem. It seems to be another
flaw in the Java stream API.
java.io.FilterOutputStream.write(byte[], int, int) calls its own write(int)
method, so that arrays are filtered by an override of write(int).
This is not even necessary, since the base implementation in
java.io.OutputStream already does the same.
java.io.FilterInputStream.read(byte[], int, int), in contrast, does *not*
call its own implementation/override of read().
Instead it calls in.read(b, off, len), where in is the delegate
InputStream. This way any override logic in its own read() method is
ignored ;-(
I believe in both cases it'd be better to simply not override the array
methods and rather inherit the base implementations from InputStream and
OutputStream, which both delegate to read() and write(int).
At least I would expect the same (redundant) logic as in FilterOutputStream!
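The FilterInputStream side of this can be demonstrated with plain JDK classes: an override of read() that XORs every byte is applied to single-byte reads, but silently bypassed by read(byte[], int, int). A minimal sketch (the class name is made up for illustration):

```java
import java.io.ByteArrayInputStream;
import java.io.FilterInputStream;
import java.io.IOException;

// A FilterInputStream that XORs each byte in its read() override.
// Because FilterInputStream.read(byte[], int, int) delegates straight to
// in.read(b, off, len), the override below is silently bypassed for arrays.
public class XorFilterDemo extends FilterInputStream
{
  public XorFilterDemo(byte[] data)
  {
    super(new ByteArrayInputStream(data));
  }

  @Override
  public int read() throws IOException
  {
    int c = super.read();
    return c == -1 ? -1 : (c ^ 0xFF) & 0xFF; // XOR-transform a single byte
  }

  public static void main(String[] args) throws IOException
  {
    byte[] data = { 0x00, 0x0F };

    // Single-byte reads go through the override: 0x00 ^ 0xFF = 255.
    XorFilterDemo single = new XorFilterDemo(data);
    System.out.println(single.read()); // 255

    // Array reads bypass it: the raw 0x00 comes back untransformed.
    XorFilterDemo bulk = new XorFilterDemo(data);
    byte[] buf = new byte[2];
    bulk.read(buf, 0, 2);
    System.out.println(buf[0]); // 0: the override never ran
  }
}
```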
What do you think?
Cheers
/Eike
Re: [CDO][0.8.0] [message #94398 is a reply to message #94384]
Tue, 28 August 2007 09:15
Eclipse User
Originally posted by: stepper.sympedia.de
Update:
My XORInputStream and XOROutputStream now work correctly because they no
longer extend java.io.FilterInputStream and java.io.FilterOutputStream.
Instead they use org.eclipse.net4j.util.io.DelegatingInputStream and
org.eclipse.net4j.util.io.DelegatingOutputStream which don't have the
problem I described in the previous post.
GZIPInputStream and GZIPOutputStream still don't work, but it seems to be
a different root cause. They seem to override the array read() and
write() methods correctly.
A GZIPOutputStream properly deflates written data and the corresponding
GZIPInputStream properly receives the deflated bytes.
However, there's a fill() method in InflaterInputStream which tries to
fill a full (512-byte) buffer.
Since the output stream has (correctly) written only part of its
512-byte buffer, the input stream waits indefinitely for the rest of the
bytes. They will never come ;-(
I have never used GZIP streams before and have no idea whose fault this
is or how it could be fixed.
If you want to use GZIP stream wrapping, you'll have to help me.
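The underlying behavior can be seen with plain JDK streams (a sketch, not CDO's code; names are made up): a GZIPInputStream only detects the end of the compressed data after the writer has called finish() or close(), which flushes the remaining deflater output and writes the GZIP trailer. Over a blocking connection, omitting that step leaves the reader waiting for bytes that never arrive:

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.util.zip.GZIPInputStream;
import java.util.zip.GZIPOutputStream;

// Round-trips a payload through GZIP streams. The reader can only reach
// end-of-stream after the writer has called finish() (or close()), which
// emits the remaining deflater output plus the GZIP trailer.
public class GzipRoundTrip
{
  public static byte[] roundTrip(byte[] payload) throws IOException
  {
    ByteArrayOutputStream bytes = new ByteArrayOutputStream();
    GZIPOutputStream out = new GZIPOutputStream(bytes);
    out.write(payload);
    out.finish(); // without this, the compressed stream is incomplete

    GZIPInputStream in =
        new GZIPInputStream(new ByteArrayInputStream(bytes.toByteArray()));
    ByteArrayOutputStream result = new ByteArrayOutputStream();
    byte[] buf = new byte[512];
    int n;
    while ((n = in.read(buf)) != -1)
    {
      result.write(buf, 0, n);
    }
    return result.toByteArray();
  }

  public static void main(String[] args) throws IOException
  {
    byte[] repoName = "repo1".getBytes(StandardCharsets.US_ASCII);
    System.out.println(
        new String(roundTrip(repoName), StandardCharsets.US_ASCII)); // repo1
  }
}
```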
Cheers
/Eike
Re: [CDO][0.8.0] GZIP solved [message #94428 is a reply to message #94398]
Tue, 28 August 2007 10:31
Eclipse User
Originally posted by: stepper.sympedia.de
Finally I found the root cause: in the subclasses of Signal I used
flush() instead of flushWithEOS(), so the receiving party assumed
there were more buffers to read from.
It is fixed now in CVS, and the GZIPStreamWrapper seems to work like a
charm ;-)
Please tell me if you can compare the results, for example with your
huge transactions or huge revisions.
Cheers
/Eike
Re: [CDO][0.8.0] GZIP solved [message #94534 is a reply to message #94428]
Tue, 28 August 2007 11:15
Simon Mc Duff  Messages: 596  Registered: July 2009
Senior Member
Wow, perfect!!!
Maybe it will fix the hanging problem that we had?? :-) ... I will
benchmark it at home and let you know the results tonight.
Simon
Re: [CDO][0.8.0] GZIP solved [message #94596 is a reply to message #94534]
Tue, 28 August 2007 11:25
Eclipse User
Originally posted by: stepper.sympedia.de
Simon McDuff schrieb:
> Wow, perfect!!!
>
> Maybe it will fix the hanging problem that we had?? :-) ...
That's indeed possible, although I'm wondering how it could ever run
without deadlocks then ;-)
> I will benchmark it at home and let you know the results tonight.
>
Looking forward to seeing the results!
Cheers
/Eike
Re: [CDO][0.8.0] [message #94837 is a reply to message #94775]
Wed, 29 August 2007 06:43
Eclipse User
Originally posted by: stepper.sympedia.de
Simon McDuff schrieb:
> Without GZIP: 1080 objects/sec...
> With GZIP: 200 objects/sec.
>
Gosh! So I spent two whole days to reduce throughput by 80%!!!
That's more than I ever managed before ;-)
> My memory usage goes very HIGH... I believe we create too many objects
> for so little.
>
Who is "we"? I don't know what the GZIP streams do internally, but I
don't create objects other than the GZIP streams.
Maybe they are not garbage collected?
> I will look where it allocates so much memory... But for now... it is
> not that good....
>
Agreed ;-)
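One possible source of allocation worth checking: every GZIPOutputStream/GZIPInputStream creates its own Deflater or Inflater with sizable internal buffers, so creating a fresh pair per message can add up. A sketch of reusing one Deflater via reset() (this is not CDO's actual code; class and method names are made up):

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.util.zip.Deflater;
import java.util.zip.DeflaterOutputStream;
import java.util.zip.InflaterInputStream;

// Every new GZIP stream allocates a fresh Deflater/Inflater with internal
// buffers. When many short-lived streams are created (one per signal, say),
// reusing a single Deflater via reset() avoids that churn.
public class DeflaterReuse
{
  private final Deflater deflater = new Deflater();

  public byte[] compress(byte[] payload) throws IOException
  {
    deflater.reset(); // make the shared Deflater ready for the next message
    ByteArrayOutputStream bytes = new ByteArrayOutputStream();
    DeflaterOutputStream out = new DeflaterOutputStream(bytes, deflater);
    out.write(payload);
    out.finish(); // flush the remaining deflater output
    return bytes.toByteArray();
  }

  // Inflates a compressed block back to its original bytes (for the demo).
  public static byte[] decompress(byte[] compressed) throws IOException
  {
    InflaterInputStream in =
        new InflaterInputStream(new ByteArrayInputStream(compressed));
    ByteArrayOutputStream result = new ByteArrayOutputStream();
    byte[] buf = new byte[512];
    int n;
    while ((n = in.read(buf)) != -1)
    {
      result.write(buf, 0, n);
    }
    return result.toByteArray();
  }
}
```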
Cheers
/Eike
Re: [CDO][0.8.0] GZIP solved [message #94888 is a reply to message #94596]
Wed, 29 August 2007 11:38
Simon Mc Duff  Messages: 596  Registered: July 2009
Senior Member
Not for nothing...
I didn't have the problem yesterday... usually I have it...
Maybe I was lucky!!! I will update the code internally... and see if it
solved the problem.
"Eike Stepper" <stepper@sympedia.de> wrote in message
news:fb10mm$odv$13@build.eclipse.org...
Simon McDuff schrieb:
Wow perfect!!!
Maybe it will fix the hanging problem that we had ?? :-) ...
That's indeed possible, although I'm wondering how it could ever run
without deadlocks then ;-)
I will benchmark it at home and let you know the result tonight.
Looking forward to seeing the results!
Cheers
/Eike
Simon
"Eike Stepper" <stepper@sympedia.de> wrote in message
news:fb0thn$odv$6@build.eclipse.org...
Finally I found the root cause: in the subclasses of Signal I used
flush() instead of flushWithEOS() so that the receiving party assumed
there are more buffers to read from.
It is fixed now in CVS and the GZIPStreamWrapper seems to work like a
charm ;-)
Please tell me if you can compare the results, for example with your
huge transactions or huge revisions.
Cheers
/Eike
Eike Stepper schrieb:
Update:
My XORInputStream and XOROutputStream now work correctly because they no
longer extend java.io.FilterInputStream and java.io.FilterOutputStream.
Instead they use org.eclipse.net4j.util.io.DelegatingInputStream and
org.eclipse.net4j.util.io.DelegatingOutputStream which don't have the
problem I described in the previous post.
GZIPInputStream and GZIPOutputStream still don't work but it seems to be
a different root cause. They seem to correctly override the array read()
and write() methods.
A GZIPOutputStream properly deflates written data and the corresponding
GZIPInputStream properly receives the deflated bytes.
However there's a fill() method in InflaterInputStream which tries to
fill a full (512 bytes) buffer.
Since the output stream has (correctly) written only parts of its
512 byte buffer, the input stream waits indefinitely for the rest of the
bytes. They will never come ;-(
I have never used GZIP streams before and have no idea whose fault this
is or how it could be fixed.
If you want to use GZIP stream wrapping you'll have to help me.
Cheers
/Eike
Eike Stepper schrieb:
Hi Simon,
I think I found the root cause of the problem. It seems to be another
flaw in the Java stream API.
java.io.FilterOutputStream.write(byte[], int, int) calls its own
write(int) method so that arrays are filtered by the override of write(int).
This is not necessary since the base implementation in
java.io.OutputStream already does the same.
java.io.FilterInputStream.read(byte[], int, int) in contrast does *not*
call its own implementation/override of int read().
Instead it calls in.read(b, off, len) where in is the delegate
InputStream. This way any override logic in its own read() method is
ignored ;-(
I believe in both cases it'd be better to simply not override the array
methods and rather use the base implementations from InputStream and
OutputStream which both delegate to read().
At least I would expect the same (redundant) logic as in
FilterOutputStream!
What do you think?
Cheers
/Eike
Eike Stepper schrieb:
Hi Simon,
I've just provided the infrastructure for wrapping the streams to be
used by signals.
The easiest way to do so is adding an element processor to the managed
container:
<extension point="org.eclipse.net4j.util.elementProcessors">
<elementProcessor
class="org.eclipse.emf.cdo.protocol.util.CDOXORStreamWrapperInjector"/>
</extension>
The element processor injects an IStreamWrapper into protocols of the
given name (here cdo).
The example above is taken from the cdo.ui plugin and wraps the signal
streams with xor'ing filter streams.
This is mostly useful for testing or simple cyphering.
For you I've provided CDOGZIPStreamWrapperInjector which is a subclass
of GZIPStreamWrapperInjector.
Sounds good? Unfortunately both of them don't work correctly ;-(
I've tried for hours to track it down, without success until now.
I guess that it relates somehow to the transmission of byte arrays.
The Java stream API is evil in my opinion for the following reason:
The read() method of InputStream and the write() method of
OutputStream return/take an int value instead of a byte value.
I suspect it is somehow to enable read() to signal an EndOfStream (-1)
without throwing an exception.
Certainly only Sun knows why a magic number in a too large data type is
better than a matching data type plus an exception for a special case.
Maybe they have a good reason that I'm just too silly to imagine ;-)
I believe that my BufferInputStream and BufferOutputStream have
correctly implemented read() and write().
Without the newly introduced stream wrapping they pass all test cases.
This could of course also be because two bugs cancel each other out.
That would mean that this cancellation does not happen when the
streams are wrapped.
But that's only a vague idea.
Anyway, wrapping the streams leads to an exception when
opening a CDOSession.
At first the repository name is transmitted as
[00 00 00 05]
[114, 101, 112, 111, 49]
But the following is received:
[00 00 00 05]
[-40, 48, -38, 58, -101]
Interesting that the length is correctly received. Only the byte array
is crippled somehow.
Maybe you can have a look at it and have an idea what's going on.
I suggest that you set breakpoints in
ExtendedIOUtil.readByteArray(DataInput) and
ExtendedIOUtil.writeByteArray(DataOutput, byte[])
Cheers
/Eike
Simon McDuff schrieb:
Hi Eike,
Did you try using GZIPOutputStream and GZIPInputStream in your
framework ?
If yes do you have some result ?
If no, where should we plug GZIPOutputStream and GZIPInputStream in
your framework ?
Simon
|
|
|
Re: [CDO][0.8.0] [message #609534 is a reply to message #94255] |
Mon, 27 August 2007 14:45 |
|
Hi Simon,
I've just provided the infrastructure for wrapping the streams to be
used by signals.
The easiest way to do so is adding an element processor to the managed
container:
<extension point="org.eclipse.net4j.util.elementProcessors">
<elementProcessor
class="org.eclipse.emf.cdo.protocol.util.CDOXORStreamWrapperInjector"/>
</extension>
The element processor injects an IStreamWrapper into protocols of the
given name (here cdo).
The example above is taken from the cdo.ui plugin and wraps the signal
streams with xor'ing filter streams.
This is mostly useful for testing or simple cyphering.
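[Editor's note: Net4j's actual XOR wrapper classes are not reproduced here, but the idea behind an xor'ing filter stream pair can be sketched with plain JDK streams. Class names and the single-byte key are hypothetical.]

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;

// Minimal sketch of an XOR "cipher" stream pair, in the spirit of the
// XOR stream wrapper described above. Names and key are illustrative only.
public class XorStreams
{
  private static final int KEY = 0x5A; // toy single-byte key

  public static byte[] xorEncode(byte[] plain) throws IOException
  {
    final ByteArrayOutputStream bytes = new ByteArrayOutputStream();
    OutputStream out = new OutputStream()
    {
      public void write(int b) throws IOException
      {
        bytes.write((b ^ KEY) & 0xFF); // XOR every byte on the way out
      }
    };

    out.write(plain); // OutputStream.write(byte[]) funnels through write(int)
    out.close();
    return bytes.toByteArray();
  }

  public static byte[] xorDecode(byte[] coded) throws IOException
  {
    InputStream in = new ByteArrayInputStream(coded)
    {
      public int read()
      {
        int b = super.read();
        return b == -1 ? -1 : (b ^ KEY) & 0xFF; // XOR on the way in, preserve EOF
      }
    };

    ByteArrayOutputStream plain = new ByteArrayOutputStream();
    int b;
    while ((b = in.read()) != -1)
    {
      plain.write(b);
    }

    return plain.toByteArray();
  }

  public static void main(String[] args) throws IOException
  {
    byte[] coded = XorStreams.xorEncode("repo1".getBytes());
    System.out.println(new String(XorStreams.xorDecode(coded))); // round trip restores the text
  }
}
```

Note that the decoder deliberately reads byte by byte through read(); calling read(byte[]) on the anonymous subclass would bypass the override, which is exactly the FilterInputStream pitfall discussed later in this thread.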
For you I've provided CDOGZIPStreamWrapperInjector which is a subclass
of GZIPStreamWrapperInjector.
Sounds good? Unfortunately both of them don't work correctly ;-(
I've tried for hours to track it down, without success until now.
I guess that it relates somehow to the transmission of byte arrays.
The Java stream API is evil in my opinion for the following reason:
The read() method of InputStream and the write() method of OutputStream
return/take an int value instead of a byte value.
I suspect it is somehow to enable read() to signal an EndOfStream (-1)
without throwing an exception.
Certainly only Sun knows why a magic number in a too large data type is
better than a matching data type plus an exception for a special case.
Maybe they have a good reason that I'm just too silly to imagine ;-)
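[Editor's note: the rationale for the int return type is easy to demonstrate. A data byte of 0xFF arrives as 255, cleanly distinct from the -1 end-of-stream marker; cast the result to byte too early and the two become indistinguishable.]

```java
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.io.InputStream;

// Why read() returns int: 0..255 are data, -1 is end-of-stream.
// Casting the result to byte too early conflates 0xFF with EOF.
public class ReadReturnsInt
{
  public static int[] drain(InputStream in) throws IOException
  {
    int first = in.read();  // the 0xFF data byte arrives as 255
    int second = in.read(); // end of stream arrives as -1
    return new int[] { first, second };
  }

  public static void main(String[] args) throws IOException
  {
    InputStream in = new ByteArrayInputStream(new byte[] { (byte) 0xFF });
    int[] result = drain(in);
    System.out.println(result[0]); // 255: the data byte
    System.out.println(result[1]); // -1: end of stream
    // The broken pattern: (byte) in.read() == -1 is true for BOTH cases,
    // which is exactly the ambiguity the int return type avoids.
  }
}
```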
I believe that my BufferInputStream and BufferOutputStream have
correctly implemented read() and write().
Without the newly introduced stream wrapping they pass all test cases.
This could of course also be because two bugs cancel each other out.
That would mean that this cancellation does not happen when the
streams are wrapped.
But that's only a vague idea.
Anyway, wrapping the streams leads to an exception when
opening a CDOSession.
At first the repository name is transmitted as
[00 00 00 05]
[114, 101, 112, 111, 49]
But the following is received:
[00 00 00 05]
[-40, 48, -38, 58, -101]
Interesting that the length is correctly received. Only the byte array
is crippled somehow.
Maybe you can have a look at it and have an idea what's going on.
I suggest that you set breakpoints in
ExtendedIOUtil.readByteArray(DataInput) and
ExtendedIOUtil.writeByteArray(DataOutput, byte[])
Cheers
/Eike
Simon McDuff schrieb:
> Hi Eike,
>
> Did you try using GZIPOutputStream and GZIPInputStream in your framework ?
>
> If yes do you have some result ?
>
> If no, where should we plug GZIPOutputStream and GZIPInputStream in your
> framework ?
>
> Simon
>
>
>
Cheers
/Eike
----
http://www.esc-net.de
http://thegordian.blogspot.com
http://twitter.com/eikestepper
|
|
|
Re: [CDO][0.8.0] [message #609548 is a reply to message #94265] |
Tue, 28 August 2007 07:04 |
|
Hi Simon,
I think I found the root cause of the problem. It seems to be another
flaw in the Java stream API.
java.io.FilterOutputStream.write(byte[], int, int) calls its own
write(int) method so that arrays are filtered by the override of write(int).
This is not necessary since the base implementation in
java.io.OutputStream already does the same.
java.io.FilterInputStream.read(byte[], int, int) in contrast does *not*
call its own implementation/override of int read().
Instead it calls in.read(b, off, len) where in is the delegate
InputStream. This way any override logic in its own read() method is
ignored ;-(
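[Editor's note: the asymmetry can be demonstrated with a toy filter that upper-cases every byte in its single-int override, standing in for the XOR filter.]

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.FilterInputStream;
import java.io.FilterOutputStream;
import java.io.IOException;

// Demonstrates the asymmetry described above: FilterOutputStream.write(byte[])
// funnels through write(int), but FilterInputStream.read(byte[]) calls the
// delegate stream directly and skips the read() override.
public class FilterAsymmetry
{
  public static String filteredWrite(String s) throws IOException
  {
    ByteArrayOutputStream sink = new ByteArrayOutputStream();
    FilterOutputStream out = new FilterOutputStream(sink)
    {
      public void write(int b) throws IOException
      {
        super.write(Character.toUpperCase(b)); // override IS applied to arrays
      }
    };

    out.write(s.getBytes()); // loops over write(int)
    out.close();
    return sink.toString();
  }

  public static String filteredRead(String s) throws IOException
  {
    FilterInputStream in = new FilterInputStream(new ByteArrayInputStream(s.getBytes()))
    {
      public int read() throws IOException
      {
        int b = super.read();
        return b == -1 ? -1 : Character.toUpperCase(b); // NOT applied to arrays
      }
    };

    byte[] buf = new byte[16];
    int n = in.read(buf, 0, buf.length); // delegates straight to in.read(b, off, len)
    return new String(buf, 0, n);
  }

  public static void main(String[] args) throws IOException
  {
    System.out.println(filteredWrite("cdo")); // write(int) override used
    System.out.println(filteredRead("cdo"));  // read() override bypassed, text unchanged
  }
}
```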
I believe in both cases it'd be better to simply not override the array
methods and rather use the base implementations from InputStream and
OutputStream which both delegate to read().
At least I would expect the same (redundant) logic as in FilterOutputStream!
What do you think?
Cheers
/Eike
Eike Stepper schrieb:
> Hi Simon,
>
> I've just provided the infrastructure for wrapping the streams to be
> used by signals.
> The easiest way to do so is adding an element processor to the managed
> container:
>
> <extension point="org.eclipse.net4j.util.elementProcessors">
> <elementProcessor
> class="org.eclipse.emf.cdo.protocol.util.CDOXORStreamWrapperInjector"/>
> </extension>
>
> The element processor injects an IStreamWrapper into protocols of the
> given name (here cdo).
> The example above is taken from the cdo.ui plugin and wraps the signal
> streams with xor'ing filter streams.
> This is mostly useful for testing or simple cyphering.
> For you I've provided CDOGZIPStreamWrapperInjector which is a subclass
> of GZIPStreamWrapperInjector.
>
> Sounds good? Unfortunately both of them don't work correctly ;-(
> I've tried for hours to track it down, without success until now.
> I guess that it relates somehow to the transmission of byte arrays.
> The Java stream API is evil in my opinion for the following reason:
> The read() method of InputStream and the write() method of
> OutputStream return/take an int value instead of a byte value.
> I suspect it is somehow to enable read() to signal an EndOfStream (-1)
> without throwing an exception.
> Certainly only Sun knows why a magic number in a too large data type
> is better than a matching data type plus an exception for a special case.
> Maybe they have a good reason that I'm just too silly to imagine ;-)
>
> I believe that my BufferInputStream and BufferOutputStream have
> correctly implemented read() and write().
> Without the newly introduced stream wrapping they pass all test cases.
> This could of course also be because two bugs cancel each other out.
> That would mean that this cancellation does not happen when the
> streams are wrapped.
> But that's only a vague idea.
>
> Anyway the result when wrapping the streams leads to an exception when
> opening a CDOSession.
> At first the repository name is transmitted as [00 00 00 05]
> [114, 101, 112, 111, 49]
>
> But the following is received:
> [00 00 00 05]
> [-40, 48, -38, 58, -101]
>
> Interesting that the length is correctly received. Only the byte array
> is crippled somehow.
>
> Maybe you can have a look at it and have an idea what's going on.
> I suggest that you set breakpoints in
> ExtendedIOUtil.readByteArray(DataInput) and
> ExtendedIOUtil.writeByteArray(DataOutput, byte[])
>
> Cheers
> /Eike
>
>
> Simon McDuff schrieb:
>> Hi Eike,
>>
>> Did you try using GZIPOutputStream and GZIPInputStream in your
>> framework ?
>>
>> If yes do you have some result ?
>>
>> If no, where should we plug GZIPOutputStream and GZIPInputStream, in
>> your framework ?
>>
>> Simon
>>
>>
Cheers
/Eike
----
http://www.esc-net.de
http://thegordian.blogspot.com
http://twitter.com/eikestepper
|
|
|
Re: [CDO][0.8.0] [message #609549 is a reply to message #94384] |
Tue, 28 August 2007 09:15 |
|
Update:
My XORInputStream and XOROutputStream now work correctly because they no
longer extend java.io.FilterInputStream and java.io.FilterOutputStream.
Instead they use org.eclipse.net4j.util.io.DelegatingInputStream and
org.eclipse.net4j.util.io.DelegatingOutputStream which don't have the
problem I described in the previous post.
GZIPInputStream and GZIPOutputStream still don't work but it seems to be
a different root cause. They seem to correctly override the array read()
and write() methods.
A GZIPOutputStream properly deflates written data and the corresponding
GZIPInputStream properly receives the deflated bytes.
However there's a fill() method in InflaterInputStream which tries to
fill a full (512 bytes) buffer.
Since the output stream has (correctly) written only parts of its
512 byte buffer, the input stream waits indefinitely for the rest of the
bytes. They will never come ;-(
I have never used GZIP streams before and have no idea whose fault this
is or how it could be fixed.
If you want to use GZIP stream wrapping you'll have to help me.
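[Editor's note: the symptom can be reproduced with plain JDK streams, outside Net4j entirely. Hand GZIPInputStream a deflated stream whose producer has flushed but never finished: on a blocking socket the reader would wait forever; on an in-memory stream it fails fast instead.]

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.EOFException;
import java.io.IOException;
import java.util.zip.GZIPInputStream;
import java.util.zip.GZIPOutputStream;

// A GZIP reader given an unfinished deflate stream keeps wanting more input.
public class GzipNeedsFinish
{
  public static String readBack(byte[] gzipped) throws IOException
  {
    GZIPInputStream in = new GZIPInputStream(new ByteArrayInputStream(gzipped));
    ByteArrayOutputStream plain = new ByteArrayOutputStream();
    byte[] buf = new byte[512];
    int n;
    while ((n = in.read(buf)) != -1)
    {
      plain.write(buf, 0, n);
    }

    return plain.toString();
  }

  public static void main(String[] args) throws IOException
  {
    ByteArrayOutputStream sink = new ByteArrayOutputStream();
    GZIPOutputStream out = new GZIPOutputStream(sink);
    out.write("repo1".getBytes());
    out.flush(); // flush alone does NOT push the deflated block out

    try
    {
      readBack(sink.toByteArray());
    }
    catch (EOFException expected)
    {
      // InflaterInputStream.fill() ran out of input while expecting more
      System.out.println("reader ran out of input: " + expected.getMessage());
    }

    out.finish(); // terminate the deflate stream; now the reader is satisfied
    System.out.println(readBack(sink.toByteArray()));
  }
}
```

With an in-memory source the dangling reader surfaces as an EOFException; over a connection that stays open, the same fill() call simply blocks, which matches the hang described above.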
Cheers
/Eike
Eike Stepper schrieb:
> Hi Simon,
>
> I think I found the root cause of the problem. It seems to be another
> flaw in the Java stream API.
>
> java.io.FilterOutputStream.write(byte[], int, int) calls its own
> write(int) method so that arrays are filtered by the override of write(int).
> This is not necessary since the base implementation in
> java.io.OutputStream already does the same.
>
> java.io.FilterInputStream.read(byte[], int, int) in contrast does
> *not* call its own implementation/override of int read().
> Instead it calls in.read(b, off, len) where in is the delegate
> InputStream. This way an override logic in its own read() method is
> ignored ;-(
>
> I believe in both cases it'd be better to simply not override the
> array methods and rather use the base implementations from InputStream
> and OutputStream which both delegate to read().
> At least I would expect the same (redundant) logic as in
> FilterOutputStream!
>
> What do you think?
>
> Cheers
> /Eike
>
>
> Eike Stepper schrieb:
>> Hi Simon,
>>
>> I've just provided the infrastructure for wrapping the streams to be
>> used by signals.
>> The easiest way to do so is adding an element processor to the
>> managed container:
>>
>> <extension point="org.eclipse.net4j.util.elementProcessors">
>> <elementProcessor
>> class="org.eclipse.emf.cdo.protocol.util.CDOXORStreamWrapperInjector"/>
>> </extension>
>>
>> The element processor injects an IStreamWrapper into protocols of the
>> given name (here cdo).
>> The example above is taken from the cdo.ui plugin and wraps the
>> signal streams with xor'ing filter streams.
>> This is mostly useful for testing or simple cyphering.
>> For you I've provided CDOGZIPStreamWrapperInjector which is a
>> subclass of GZIPStreamWrapperInjector.
>>
>> Sounds good? Unfortunately both of them don't work correctly ;-(
>> I've tried for hours to track it down, without success until now.
>> I guess that it relates somehow to the transmission of byte arrays.
>> The Java stream API is evil in my opinion for the following reason:
>> The read() method of InputStream and the write() method of
>> OutputStream return/take an int value instead of a byte value.
>> I suspect it is somehow to enable read() to signal an EndOfStream
>> (-1) without throwing an exception.
>> Certainly only Sun knows why a magic number in a too large data type
>> is better than a matching data type plus an exception for a special
>> case.
>> Maybe they have a good reason that I'm just too silly to imagine ;-)
>>
>> I believe that my BufferInputStream and BufferOutputStream have
>> correctly implemented read() and write().
>> Without the newly introduced stream wrapping they pass all test cases.
>> This could of course also be because two bugs cancel each other out.
>> That would mean that this cancellation does not happen when
>> the streams are wrapped.
>> But that's only a vague idea.
>>
>> Anyway the result when wrapping the streams leads to an exception
>> when opening a CDOSession.
>> At first the repository name is transmitted as [00 00 00 05]
>> [114, 101, 112, 111, 49]
>>
>> But the following is received:
>> [00 00 00 05]
>> [-40, 48, -38, 58, -101]
>>
>> Interesting that the length is correctly received. Only the byte
>> array is crippled somehow.
>>
>> Maybe you can have a look at it and have an idea what's going on.
>> I suggest that you set breakpoints in
>> ExtendedIOUtil.readByteArray(DataInput) and
>> ExtendedIOUtil.writeByteArray(DataOutput, byte[])
>>
>> Cheers
>> /Eike
>>
>>
>> Simon McDuff schrieb:
>>> Hi Eike,
>>>
>>> Did you try using GZIPOutputStream and GZIPInputStream in your
>>> framework ?
>>>
>>> If yes do you have some result ?
>>>
>>> If no, where should we plug GZIPOutputStream and GZIPInputStream, in
>>> your framework ?
>>>
>>> Simon
>>>
>>>
Cheers
/Eike
----
http://www.esc-net.de
http://thegordian.blogspot.com
http://twitter.com/eikestepper
|
|
|
Re: [CDO][0.8.0] GZIP solved [message #609551 is a reply to message #94398] |
Tue, 28 August 2007 10:31 |
|
Finally I found the root cause: in the subclasses of Signal I used
flush() instead of flushWithEOS() so that the receiving party assumed
there are more buffers to read from.
It is fixed now in CVS and the GZIPStreamWrapper seems to work like a
charm ;-)
Please tell me if you can compare the results, for example with your
huge transactions or huge revisions.
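[Editor's note: the Net4j Signal and buffer classes are not reproduced here, but the contract behind flushWithEOS() can be modelled with a tiny buffer type carrying an end-of-stream flag. All names below are hypothetical, not the real Net4j API.]

```java
import java.util.ArrayList;
import java.util.List;

// Toy model of the flush() vs. flushWithEOS() contract: each flushed buffer
// carries an EOS flag, and the receiver keeps expecting more buffers until
// it sees one with the flag set.
public class EosModel
{
  static class Buffer
  {
    final byte[] data;
    final boolean eos;

    Buffer(byte[] data, boolean eos)
    {
      this.data = data;
      this.eos = eos;
    }
  }

  /** Receiver: consumes buffers until one is flagged EOS. Returns total bytes. */
  public static int receive(List<Buffer> wire)
  {
    int total = 0;
    for (Buffer buffer : wire)
    {
      total += buffer.data.length;
      if (buffer.eos)
      {
        return total; // signal complete, stop reading
      }
    }

    // Ran out of buffers without seeing EOS: on a real connection the
    // receiver would block here forever - the hang described above.
    throw new IllegalStateException("receiver still waiting for more buffers");
  }

  public static void main(String[] args)
  {
    List<Buffer> wire = new ArrayList<Buffer>();
    wire.add(new Buffer(new byte[5], false)); // flush(): more to come
    wire.add(new Buffer(new byte[3], true));  // flushWithEOS(): last buffer
    System.out.println(receive(wire)); // 8
  }
}
```

In this model, ending a signal with a plain flush() is the bug: the last buffer never carries the EOS flag, so the receiver keeps waiting.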
Cheers
/Eike
Eike Stepper schrieb:
> Update:
>
> My XORInputStream and XOROutputStream now work correctly because they
> no longer extend java.io.FilterInputStream and
> java.io.FilterOutputStream.
> Instead they use org.eclipse.net4j.util.io.DelegatingInputStream and
> org.eclipse.net4j.util.io.DelegatingOutputStream which don't have the
> problem I described in the previous post.
>
> GZIPInputStream and GZIPOutputStream still don't work but it seems to
> be a different root cause. They seem to correctly override the array
> read() and write() methods.
> A GZIPOutputStream properly deflates written data and the
> corresponding GZIPInputStream properly receives the deflated bytes.
> However there's a fill() method in InflaterInputStream which tries to
> fill a full (512 bytes) buffer.
> Since the output stream has (correctly) written only parts of its
> 512 byte buffer, the input stream waits indefinitely for the rest of
> the bytes. They will never come ;-(
>
> I have never used GZIP streams before and have no idea whose fault
> this is or how it could be fixed.
> If you want to use GZIP stream wrapping you'll have to help me.
>
> Cheers
> /Eike
>
>
> Eike Stepper schrieb:
>> Hi Simon,
>>
>> I think I found the root cause of the problem. It seems to be another
>> flaw in the Java stream API.
>>
>> java.io.FilterOutputStream.write(byte[], int, int) calls its own
>> write(int) method so that arrays are filtered by the override of write(int).
>> This is not necessary since the base implementation in
>> java.io.OutputStream already does the same.
>>
>> java.io.FilterInputStream.read(byte[], int, int) in contrast does
>> *not* call its own implementation/override of int read().
>> Instead it calls in.read(b, off, len) where in is the delegate
>> InputStream. This way an override logic in its own read() method is
>> ignored ;-(
>>
>> I believe in both cases it'd be better to simply not override the
>> array methods and rather use the base implementations from
>> InputStream and OutputStream which both delegate to read().
>> At least I would expect the same (redundant) logic as in
>> FilterOutputStream!
>>
>> What do you think?
>>
>> Cheers
>> /Eike
>>
>>
>> Eike Stepper schrieb:
>>> Hi Simon,
>>>
>>> I've just provided the infrastructure for wrapping the streams to be
>>> used by signals.
>>> The easiest way to do so is adding an element processor to the
>>> managed container:
>>>
>>> <extension point="org.eclipse.net4j.util.elementProcessors">
>>> <elementProcessor
>>> class="org.eclipse.emf.cdo.protocol.util.CDOXORStreamWrapperInjector"/>
>>> </extension>
>>>
>>> The element processor injects an IStreamWrapper into protocols of
>>> the given name (here cdo).
>>> The example above is taken from the cdo.ui plugin and wraps the
>>> signal streams with xor'ing filter streams.
>>> This is mostly useful for testing or simple cyphering.
>>> For you I've provided CDOGZIPStreamWrapperInjector which is a
>>> subclass of GZIPStreamWrapperInjector.
>>>
>>> Sounds good? Unfortunately both of them don't work correctly ;-(
>>> I've tried for hours to track it down, without success until now.
>>> I guess that it relates somehow to the transmission of byte arrays.
>>> The Java stream API is evil in my opinion for the following reason:
>>> The read() method of InputStream and the write() method of
>>> OutputStream return/take an int value instead of a byte value.
>>> I suspect it is somehow to enable read() to signal an EndOfStream
>>> (-1) without throwing an exception.
>>> Certainly only Sun knows why a magic number in a too large data type
>>> is better than a matching data type plus an exception for a special
>>> case.
>>> Maybe they have a good reason that I'm just too silly to imagine ;-)
>>>
>>> I believe that my BufferInputStream and BufferOutputStream have
>>> correctly implemented read() and write().
>>> Without the newly introduced stream wrapping they pass all test cases.
>>> This could of course also be because two bugs cancel each other out.
>>> That would mean that this cancellation does not happen when
>>> the streams are wrapped.
>>> But that's only a vague idea.
>>>
>>> Anyway the result when wrapping the streams leads to an exception
>>> when opening a CDOSession.
>>> At first the repository name is transmitted as [00 00 00 05]
>>> [114, 101, 112, 111, 49]
>>>
>>> But the following is received:
>>> [00 00 00 05]
>>> [-40, 48, -38, 58, -101]
>>>
>>> Interesting that the length is correctly received. Only the byte
>>> array is crippled somehow.
>>>
>>> Maybe you can have a look at it and have an idea what's going on.
>>> I suggest that you set breakpoints in
>>> ExtendedIOUtil.readByteArray(DataInput) and
>>> ExtendedIOUtil.writeByteArray(DataOutput, byte[])
>>>
>>> Cheers
>>> /Eike
>>>
>>>
>>> Simon McDuff schrieb:
>>>> Hi Eike,
>>>>
>>>> Did you try using GZIPOutputStream and GZIPInputStream in your
>>>> framework ?
>>>>
>>>> If yes do you have some result ?
>>>>
>>>> If no, where should we plug GZIPOutputStream and GZIPInputStream, in
>>>> your framework ?
>>>>
>>>> Simon
>>>>
>>>>
Cheers
/Eike
----
http://www.esc-net.de
http://thegordian.blogspot.com
http://twitter.com/eikestepper
|
|
|
Re: [CDO][0.8.0] GZIP solved [message #609558 is a reply to message #94428] |
Tue, 28 August 2007 11:15 |
Simon Mc Duff Messages: 596 Registered: July 2009 |
Senior Member |
|
|
Wow perfect!!!
Maybe it will fix the hanging problem that we had ?? :-) ... I will
benchmark it at home and let you know the result tonight.
Simon
"Eike Stepper" <stepper@sympedia.de> wrote in message
news:fb0thn$odv$6@build.eclipse.org...
> Finally I found the root cause: in the subclasses of Signal I used flush()
> instead of flushWithEOS() so that the receiving party assumed there are
> more buffers to read from.
> It is fixed now in CVS and the GZIPStreamWrapper seems to work like a
> charm ;-)
>
> Please tell me if you can compare the results, for example with your huge
> transactions or huge revisions.
>
> Cheers
> /Eike
>
>
> Eike Stepper schrieb:
>> Update:
>>
>> My XORInputStream and XOROutputStream now work correctly because they no
>> longer extend java.io.FilterInputStream and java.io.FilterOutputStream.
>> Instead they use org.eclipse.net4j.util.io.DelegatingInputStream and
>> org.eclipse.net4j.util.io.DelegatingOutputStream which don't have the
>> problem I described in the previous post.
>>
>> GZIPInputStream and GZIPOutputStream still don't work but it seems to be
>> a different root cause. They seem to correctly override the array read()
>> and write() methods.
>> A GZIPOutputStream properly deflates written data and the corresponding
>> GZIPInputStream properly receives the deflated bytes.
>> However, there's a fill() method in InflaterInputStream which tries to
>> fill a full 512-byte buffer.
>> Since the output stream has (correctly) written only part of its 512-byte
>> buffer, the input stream waits indefinitely for the rest of the
>> bytes. They will never come ;-(
>>
>> I have never used GZIP streams before and have no idea whose fault this
>> is or how it could be fixed.
>> If you want to use GZIP stream wrapping you'll have to help me.
>>
>> Cheers
>> /Eike
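The blocking behaviour described above can be reproduced, and avoided, in isolation with plain JDK classes (no CDO code). The inflating side can only complete once the deflating side has called finish(), which flushes the deflater's buffered tail and writes the GZIP trailer:

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.util.zip.GZIPInputStream;
import java.util.zip.GZIPOutputStream;

public class GzipFinishDemo {
    public static void main(String[] args) throws Exception {
        byte[] payload = "repo1".getBytes("ISO-8859-15");

        ByteArrayOutputStream sink = new ByteArrayOutputStream();
        GZIPOutputStream gzipOut = new GZIPOutputStream(sink);
        gzipOut.write(payload);
        // Without finish() (or close()), the deflater keeps the tail of
        // the data in its internal buffer and a blocking reader would
        // wait for bytes that never arrive.
        gzipOut.finish();

        GZIPInputStream gzipIn =
            new GZIPInputStream(new ByteArrayInputStream(sink.toByteArray()));
        byte[] buf = new byte[payload.length];
        int off = 0;
        while (off < buf.length) {
            int n = gzipIn.read(buf, off, buf.length - off);
            if (n == -1) break;
            off += n;
        }
        System.out.println(new String(buf, "ISO-8859-15")); // prints "repo1"
    }
}
```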
>>
>>
>> Eike Stepper schrieb:
>>> Hi Simon,
>>>
>>> I think I found the root cause of the problem. It seems to be another
>>> flaw in the Java stream API.
>>>
>>> java.io.FilterOutputStream.write(byte[], int, int) calls its own int
>>> write() method so that arrays are filtered by the override of int write().
>>> This is not even necessary, since the base implementation in
>>> java.io.OutputStream already does the same.
>>>
>>> java.io.FilterInputStream.read(byte[], int, int), in contrast, does *not*
>>> call its own implementation/override of int read().
>>> Instead it calls in.read(b, off, len), where in is the delegate
>>> InputStream. This way, any override logic in its own read() method is
>>> ignored ;-(
>>>
>>> I believe in both cases it'd be better to simply not override the array
>>> methods and rather use the base implementations from InputStream and
>>> OutputStream, which delegate to read() and write() respectively.
>>> At least I would expect the same (redundant) logic as in
>>> FilterOutputStream!
>>>
>>> What do you think?
>>>
>>> Cheers
>>> /Eike
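The asymmetry can be demonstrated with a minimal, hypothetical XOR filter. The class below is invented for this sketch; it is not the actual CDO XORInputStream:

```java
import java.io.ByteArrayInputStream;
import java.io.FilterInputStream;
import java.io.IOException;
import java.io.InputStream;

// Hypothetical XOR filter, just to expose the FilterInputStream pitfall.
class XorFilterStream extends FilterInputStream {
    XorFilterStream(InputStream in) { super(in); }

    @Override
    public int read() throws IOException {
        int b = super.read();
        return b == -1 ? -1 : (b ^ 0xFF) & 0xFF; // un-XOR each byte
    }
    // read(byte[], int, int) is NOT overridden: FilterInputStream delegates
    // it straight to in.read(b, off, len), silently bypassing our read().
}

public class FilterPitfallDemo {
    public static void main(String[] args) throws IOException {
        byte[] data = { (byte) ~'A', (byte) ~'B' }; // "AB" XOR'ed with 0xFF

        // Single-byte reads go through our override: correctly un-XOR'ed.
        InputStream s1 = new XorFilterStream(new ByteArrayInputStream(data));
        System.out.println((char) s1.read() + "" + (char) s1.read()); // AB

        // Array reads bypass the override: bytes arrive still XOR'ed.
        InputStream s2 = new XorFilterStream(new ByteArrayInputStream(data));
        byte[] buf = new byte[2];
        s2.read(buf, 0, 2);
        System.out.println(buf[0] == (byte) ~'A'); // true
    }
}
```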
>>>
>>>
>>> Eike Stepper schrieb:
>>>> Hi Simon,
>>>>
>>>> I've just provided the infrastructure for wrapping the streams to be
>>>> used by signals.
>>>> The easiest way to do so is adding an element processor to the managed
>>>> container:
>>>>
>>>> <extension point="org.eclipse.net4j.util.elementProcessors">
>>>>   <elementProcessor
>>>>     class="org.eclipse.emf.cdo.protocol.util.CDOXORStreamWrapperInjector"/>
>>>> </extension>
>>>>
>>>> The element processor injects an IStreamWrapper into protocols of the
>>>> given name (here cdo).
>>>> The example above is taken from the cdo.ui plugin and wraps the signal
>>>> streams with XOR'ing filter streams.
>>>> This is mostly useful for testing or simple ciphering.
>>>> For you I've provided CDOGZIPStreamWrapperInjector, which is a subclass
>>>> of GZIPStreamWrapperInjector.
>>>>
>>>> Sounds good? Unfortunately, both of them don't work correctly ;-(
>>>> I've tried for hours to track it down, without success so far.
>>>> I guess that it relates somehow to the transmission of byte arrays.
>>>> The Java stream API is evil in my opinion, for the following reason:
>>>> The read() method of InputStream and the write() method of
>>>> OutputStream return/take an int value instead of a byte value.
>>>> I suspect it is somehow meant to enable read() to signal EndOfStream (-1)
>>>> without throwing an exception.
>>>> Certainly only Sun knows why a magic number in a too-large data type is
>>>> better than a matching data type plus an exception for a special case.
>>>> Maybe they have a good reason that I'm just too silly to imagine ;-)
>>>>
>>>> I believe that my BufferInputStream and BufferOutputStream have
>>>> correctly implemented read() and write().
>>>> Without the newly introduced stream wrapping they pass all test cases.
>>>> This could of course also be because two bugs cancel each other out.
>>>> That would mean that this mutual elimination does not happen when the
>>>> streams are wrapped.
>>>> But that's only a vague idea.
>>>>
>>>> Anyway, wrapping the streams leads to an exception when
>>>> opening a CDOSession.
>>>> At first the repository name is transmitted as [00 00 00 05]
>>>> [114, 101, 112, 111, 49]
>>>>
>>>> But the following is received:
>>>> [00 00 00 05]
>>>> [-40, 48, -38, 58, -101]
>>>>
>>>> Interestingly, the length is correctly received. Only the byte array
>>>> is corrupted somehow.
>>>>
>>>> Maybe you can have a look at it and have an idea what's going on.
>>>> I suggest that you set breakpoints in
>>>> ExtendedIOUtil.readByteArray(DataInput) and
>>>> ExtendedIOUtil.writeByteArray(DataOutput, byte[])
>>>>
>>>> Cheers
>>>> /Eike
>>>>
>>>>
>>>> Simon McDuff schrieb:
>>>>> Hi Eike,
>>>>>
>>>>> Did you try using GZIPOutputStream and GZIPInputStream in your
>>>>> framework?
>>>>>
>>>>> If yes, do you have some results?
>>>>>
>>>>> If not, where should we plug GZIPOutputStream and GZIPInputStream into
>>>>> your framework?
>>>>>
>>>>> Simon
>>>>>
>>>>>
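As background to the int-vs-byte complaint quoted above: read() widens each byte into the range 0..255 precisely so that -1 stays free as the end-of-stream marker, which is why correct read() implementations mask with 0xFF. A standalone sketch:

```java
public class ByteIntDemo {
    public static void main(String[] args) {
        byte stored = (byte) 200; // Java bytes are signed: prints -56
        System.out.println(stored);

        // InputStream.read() must return 0..255, reserving -1 for EOF,
        // so a correct implementation widens with a mask:
        int asRead = stored & 0xFF; // 200
        System.out.println(asRead);

        // Without the mask, the byte 0xFF would surface as int -1 and be
        // mistaken for end-of-stream; masked, it is 255:
        System.out.println((byte) 0xFF & 0xFF);
    }
}
```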
Re: [CDO][0.8.0] GZIP solved [message #609562 is a reply to message #94534] |
Tue, 28 August 2007 11:25 |
Simon McDuff schrieb:
> Wow, perfect!!!
>
> Maybe it will fix the hanging problem that we had?? :-) ...
That's indeed possible, although I'm wondering how it could ever run
without deadlocks then ;-)
> I will benchmark it at home and let you know the results tonight.
>
Looking forward to seeing the results!
Cheers
/Eike
Re: [CDO][0.8.0] [message #609577 is a reply to message #94775] |
Wed, 29 August 2007 06:43 |
Simon McDuff schrieb:
> Without GZIP: 1080 objects/sec...
> With GZIP: 200 objects/sec.
>
Gosh! So I spent two whole days to reduce throughput by 80%!!!
That's more than I ever managed before ;-)
> My memory usage goes very HIGH... I believe we create too many objects for
> so little.
>
Who is "we"? I don't know what the GZIP streams do internally, but I
don't create objects other than the GZIP streams.
Maybe they are not garbage collected?
> I will look at where it allocates so much memory... But for now... it is not
> that good...
>
Agreed ;-)
Cheers
/Eike
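For anyone wanting to reproduce such numbers outside CDO, here is a rough standalone micro-benchmark sketch. All names are invented for this sketch; it is not the benchmark Simon ran, and it measures only deflation overhead, not the full protocol:

```java
import java.io.ByteArrayOutputStream;
import java.io.OutputStream;
import java.util.zip.GZIPOutputStream;

public class GzipThroughputSketch {
    public static void main(String[] args) throws Exception {
        byte[] chunk = new byte[1024]; // stand-in for one serialized object
        for (int i = 0; i < chunk.length; i++) chunk[i] = (byte) (i % 64);

        // Baseline: write the chunks to a plain in-memory sink.
        System.out.println("plain: " + time(new ByteArrayOutputStream(), chunk) + " ms");

        // Same writes, but routed through a GZIP deflater.
        ByteArrayOutputStream sink = new ByteArrayOutputStream();
        GZIPOutputStream gzip = new GZIPOutputStream(sink);
        System.out.println("gzip:  " + time(gzip, chunk) + " ms");
        gzip.finish();
    }

    static long time(OutputStream out, byte[] chunk) throws Exception {
        long start = System.nanoTime();
        for (int i = 0; i < 10_000; i++) out.write(chunk, 0, chunk.length);
        return (System.nanoTime() - start) / 1_000_000;
    }
}
```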
Re: [CDO][0.8.0] GZIP solved [message #609580 is a reply to message #94596] |
Wed, 29 August 2007 11:38 |
Simon Mc Duff Messages: 596 Registered: July 2009 |
Senior Member |
Not for nothing...
I didn't have the problem yesterday... usually I have it...
Maybe I was lucky!!! I will update the code internally and see if it
solved the problem.