
Re: [geomesa-users] Geomesa is not returning Entire WW data.

Are you using geomesa.query.timeout? That should be the only case where the outQueue would not be drained, other than not actually reading the results. In either case, it should be de-referenced and garbage collected eventually.
terminate() is expected to be called before close() in the normal workflow. terminate() just inserts a poison pill into the queue, so that the iterator knows no more data is coming and can return false from hasNext(). There is some logic to account for the fact that the queue might be temporarily or permanently full. terminate() should always get invoked, because the executor pool is created with the exact number of threads needed, so the run() method should always be executed, and terminate() is in its finally block. It is possible that the run() method would never be invoked if the scan was closed before the executor even started any tasks, but then the consequence would be that the iterator would never return false from hasNext(), so I don't think that is the problem. I have refactored that code a bit in a branch I'll be putting up for a PR soon, which removes that potential issue.
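(Aside, for anyone reading along: a minimal sketch of the poison-pill pattern described above, using a plain BlockingQueue. The class and method names are illustrative, not GeoMesa's actual implementation.)

    import java.util.concurrent.{LinkedBlockingQueue, TimeUnit}

    object PoisonPillSketch {
      // sentinel object inserted by terminate() to signal "no more data"
      private val Terminator: AnyRef = new AnyRef

      class QueueIterator(queue: LinkedBlockingQueue[AnyRef]) extends Iterator[AnyRef] {
        private var pending: AnyRef = _

        override def hasNext: Boolean = {
          if (pending == null) { pending = queue.take() } // blocks until data or the pill arrives
          pending ne Terminator
        }

        override def next(): AnyRef = {
          val result = pending
          pending = null
          result
        }
      }

      // called from the scan thread's finally block: signal that no more data is coming,
      // making room if the (bounded) queue happens to be full because nobody is reading
      def terminate(queue: LinkedBlockingQueue[AnyRef]): Unit = {
        while (!queue.offer(Terminator, 100, TimeUnit.MILLISECONDS)) {
          queue.poll() // drop an element so the pill can be enqueued
        }
      }
    }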

Thanks,

Emilio

On 2/24/20 12:20 PM, Amit Srivastava wrote:
Hi Emilio,

I am seeing that the outQueue never gets cleaned up and memory keeps increasing. It looks like terminate() (at line number 161) is getting called before the close() function (at line number 163). Should we have the following code at line 163?

override def close(): Unit = try { closed.set(true) } finally { terminate() }


On Wed, Feb 19, 2020 at 8:33 AM Amit Srivastava <amit.bit96@xxxxxxxxx> wrote:
Ok, thanks for the clarification.

On Wed, Feb 19, 2020 at 5:30 AM Emilio Lahr-Vivaz <elahrvivaz@xxxxxxxx> wrote:
That code should only be invoked when you close() the returned simple feature iterator. If the iterator has been fully read, then there would not be any data lost. If the iterator is closed prematurely (for instance, if you enable geomesa.query.timeout), then it is assumed that you do not intend to read any more data so it may drop results.
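(As an aside, here is a minimal sketch of the usage pattern this implies with the GeoTools API: fully read the iterator, then close it. The method and variable names are illustrative.)

    import org.geotools.data.DataStore
    import org.geotools.data.simple.SimpleFeatureIterator
    import org.opengis.filter.Filter

    object FullReadSketch {
      // read every matching feature before closing; closing early (or hitting
      // geomesa.query.timeout) is what allows results to be dropped
      def countFeatures(ds: DataStore, typeName: String, filter: Filter): Long = {
        val iter: SimpleFeatureIterator = ds.getFeatureSource(typeName).getFeatures(filter).features()
        try {
          var count = 0L
          while (iter.hasNext) { iter.next(); count += 1 }
          count
        } finally {
          iter.close() // safe here: the iterator has been exhausted
        }
      }
    }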

Thanks,

Emilio

On 2/18/20 6:12 PM, Amit Srivastava wrote:
Thanks Emilio for the reply. 

I was looking into the code below. In what scenario can that code cause data loss?


On Mon, Feb 17, 2020 at 9:29 AM Emilio Lahr-Vivaz <elahrvivaz@xxxxxxxx> wrote:
Shards are described here: https://www.geomesa.org/documentation/user/datastores/index_config.html#configuring-z-index-shards

Given your data set, you should get sufficient parallelism from size-based splits (as you have, since you have 508 regions). If you want to tweak the size and number of regions, you can do that through configuring hbase splits (see https://hbase.apache.org/book.html#disable.splitting). You probably would do better by disabling shards (setting them to 0) and using pre-splitting (see https://www.geomesa.org/documentation/user/datastores/index_config.html#configuring-index-splits) to get some initial parallelism. Pre-splitting is optional, but should speed up the initial ingest time.

The downside of shards is that each set of scan ranges for a given query has to be executed against each shard. It might be worthwhile to re-ingest without shards and see if it helps your scan performance.
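(If it helps, a rough sketch of how the shard count can be set before ingest, based on the geomesa.z.splits user-data key that appears in the catalog scan later in this thread. The feature type spec here is an assumption, and the pre-split options themselves are covered in the index-splits docs linked above.)

    import org.geotools.data.DataStore
    import org.locationtech.geomesa.utils.geotools.SimpleFeatureTypes

    object ShardConfigSketch {
      // assumed spec, roughly matching the OSMWays schema discussed in this thread
      private val spec = "*geometry:LineString:srid=4326,ingestionTimestamp:Timestamp,nextTimestamp:Timestamp"

      def createWithoutShards(ds: DataStore): Unit = {
        val sft = SimpleFeatureTypes.createType("OSMWays", spec)
        sft.getUserData.put("geomesa.z.splits", "0") // disable z-index shards; rely on HBase region splits
        ds.createSchema(sft)
      }
    }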

Thanks,

Emilio

On 2/17/20 11:32 AM, Amit Srivastava wrote:
Thanks Emilio, no, I didn't find anything weird with the extra logging.

I also want to understand the implication of the number of shards on scan query performance. I have 5TB of OSM data with 60 shards in the GeoMesa table and 508 regions in the HBase table. Will the number of shards impact scan query performance? Should I increase the number of shards to 127? If yes, why?

On Mon, Feb 17, 2020 at 5:51 AM Emilio Lahr-Vivaz <elahrvivaz@xxxxxxxx> wrote:
Hi Amit,

I'm fairly sure we are setting the inclusivity of the scan correctly, as otherwise we'd have a lot of obvious failures in the unit tests. The code for generating scans is somewhat distributed around the project, but here are some important places:

Convert the ranges generated by the specific index (e.g. Z3) from byte arrays to scan objects: https://github.com/locationtech/geomesa/blob/geomesa_2.11-2.4.0/geomesa-hbase/geomesa-hbase-datastore/src/main/scala/org/locationtech/geomesa/hbase/data/HBaseIndexAdapter.scala#L206

Convert the scans to grouped scans using the hbase multi-row-range filter: https://github.com/locationtech/geomesa/blob/geomesa_2.11-2.4.0/geomesa-hbase/geomesa-hbase-datastore/src/main/scala/org/locationtech/geomesa/hbase/data/HBaseIndexAdapter.scala#L324

The method for executing the scans: https://github.com/locationtech/geomesa/blob/geomesa_2.11-2.4.0/geomesa-hbase/geomesa-hbase-datastore/src/main/scala/org/locationtech/geomesa/hbase/data/HBaseQueryPlan.scala#L110

The invocation of the scans against a table: https://github.com/locationtech/geomesa/blob/geomesa_2.11-2.4.0/geomesa-hbase/geomesa-hbase-datastore/src/main/scala/org/locationtech/geomesa/hbase/utils/HBaseBatchScan.scala#L22

Did you have a chance to try out the branch I put together with extra logging? That might help pinpoint what is taking so long in the region servers.

Thanks,

Emilio

On 2/17/20 5:23 AM, Amit Srivastava wrote:
Hi Emilio,

Can you point me to the code where you call HBase to fetch data for a given bounding box? Also, for the HBase Scan, are you handling the range properly, where the start row is inclusive and the stop row is exclusive?
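(For reference, a small illustration of the default HBase Scan range semantics being asked about here; the row keys are placeholders.)

    import org.apache.hadoop.hbase.client.Scan
    import org.apache.hadoop.hbase.util.Bytes

    object ScanRangeSketch {
      // default semantics: start row inclusive, stop row exclusive
      val scan: Scan = new Scan()
        .withStartRow(Bytes.toBytes("row-a"))
        .withStopRow(Bytes.toBytes("row-z"))
      // both methods also accept a boolean to flip the inclusivity, e.g.
      // new Scan().withStartRow(start, true).withStopRow(stop, true)
    }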

On Tue, Feb 11, 2020 at 11:10 PM Amit Srivastava <amit.bit96@xxxxxxxxx> wrote:
Hi Emilio,

I didn't find anything weird. I need your expertise to find out the root cause.

On Mon, Feb 3, 2020 at 10:46 AM Emilio Lahr-Vivaz <elahrvivaz@xxxxxxxx> wrote:
Hi Amit,

Have you been able to figure anything out? I haven't found anything on my end, but it does seem like something that needs to be addressed.

Thanks,

Emilio

On 1/23/20 4:40 PM, Emilio Lahr-Vivaz wrote:
I haven't been able to find anything. I ingested ~700k OSM way points from the D.C. area [1] into a local pseudo-distributed one-node cluster and ran some bbox queries against them. I'm not going to have time to look into it further until next week, but if you want to do your own investigation, I created a 2.3.3-SNAPSHOT branch with rather verbose logging here: https://github.com/elahrvivaz/geomesa/tree/fcr_filter_debug_2.3

You would have to build that branch and deploy the geomesa-hbase-distributed-runtime jar to your region servers, then enable the logging by setting `org.locationtech.geomesa` to DEBUG level in your region server log4j configuration. If you also update your client, you can switch to the old java filter implementation by setting the system property `-Dgeomesa.hbase.java=true` on your client (e.g. geoserver), though I didn't see any difference between the two.
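(In case it saves someone a lookup, a hedged sketch of the two toggles mentioned above; the property name is the one quoted in this thread, and the log4j line follows standard log4j 1.x conventions.)

    // client side: equivalent to passing -Dgeomesa.hbase.java=true on the JVM command line
    object FilterToggleSketch {
      def useJavaFilter(): Unit = System.setProperty("geomesa.hbase.java", "true")
    }

    // region server side, in the log4j configuration (log4j 1.x properties syntax):
    //   log4j.logger.org.locationtech.geomesa=DEBUG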

The region servers will log any filter serialization/deserialization, and how long it takes to filter/transform each row. In my testing the deserialization was usually < 10ms and the filtering was almost always 0ms and occasionally 1 or 2 ms.

Just as a guess, it may be something with the large number of scans your query is generating, or maybe the selectivity of your filter compared to the data set, or possibly memory constraints in the region servers.

Thanks,

Emilio


[1]: https://download.bbbike.org/osm/bbbike/WashingtonDC/

On 1/23/20 9:16 AM, Emilio Lahr-Vivaz wrote:
Interesting. I will add some debug logging and see if I can replicate the issue and see where the remote filter slowdown is coming from.

Thanks,

Emilio

On 1/22/20 5:33 PM, Amit Srivastava wrote:
Hi Emilio,

We played with the hbase.client.operation.timeout value, which reduced the number of missing features to 7K, but when we disabled remote filtering, we started getting the full data and the scan time also dropped from 120K to 6K. I think we should investigate further what is happening with remote filtering.

On Fri, Jan 17, 2020 at 8:10 AM Amit Srivastava <amit.bit96@xxxxxxxxx> wrote:
We are using 1,800,000 as the timeout. Below are the Ganglia metrics for scan max time. They show that the maximum time taken is around 120,000 ms, which is the default value of hbase.client.operation.timeout. I will increase hbase.client.operation.timeout and see if it is preventing the scan from reaching 30 minutes.

image.png

On Fri, Jan 17, 2020 at 7:59 AM Emilio Lahr-Vivaz <elahrvivaz@xxxxxxxx> wrote:
Is the query taking over 30 minutes to run? I believe the timeout is specified in milliseconds, so if you are setting it as 30 that might be the problem.
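(For reference: 30 minutes = 30 × 60 × 1000 ms = 1,800,000 ms, so if the setting is indeed in milliseconds, a value of 30 would be interpreted as 30 ms.)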

Thanks,

Emilio

On 1/17/20 10:57 AM, Amit Srivastava wrote:
Thanks. Currently we are using 30 minutes as the scanner timeout. I will increase the value to see if we get the whole result with that; it will help us scope down the problem. I am increasing the timeout temporarily to 1 hour to see the difference, and I will get back to you based on the result.

On Fri, Jan 17, 2020 at 6:45 AM Austin Heyne <aheyne@xxxxxxxx> wrote:

We use hbase.client.scanner.timeout.period = 300000 in addition to >60 other custom configurations on our production EMR clusters. I can't promise that would fix the problem in your case but we haven't experienced this symptom. Tuning HBase to work well on S3 can be considerably complicated.

-Austin

On 1/17/20 8:56 AM, Emilio Lahr-Vivaz wrote:
Hello,

That's interesting. Were you able to return all the data by increasing the scanner timeout?

We don't currently renew leases. Did the expert suggest how to know when to renew a lease? I can't find much/any documentation on that method. The associated JIRA ticket [1] seems to indicate that you would only have to renew the lease if you are not advancing the scanner. Possibly the results are not being processed quickly enough in geoserver to where the scanners are sitting idle and timing out. Similarly, there is another ticket about timeouts from pruning results in server-side filters[2] which seems relevant, but that also seems to have been fixed.
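(For anyone following along, a rough sketch of what programmatic lease renewal could look like, assuming the client is the thing falling behind. The interval and helper are illustrative, and ResultScanner is not documented as thread-safe, so a real implementation would need to coordinate renewals with calls to next().)

    import java.util.concurrent.{Executors, TimeUnit}
    import org.apache.hadoop.hbase.client.ResultScanner

    object LeaseRenewalSketch {
      // periodically renew the scanner lease while the caller processes results slowly;
      // the interval should be well under hbase.client.scanner.timeout.period
      def withLeaseRenewal[T](scanner: ResultScanner, intervalSeconds: Long)(work: ResultScanner => T): T = {
        val renewer = Executors.newSingleThreadScheduledExecutor()
        renewer.scheduleAtFixedRate(new Runnable {
          override def run(): Unit = scanner.renewLease()
        }, intervalSeconds, intervalSeconds, TimeUnit.SECONDS)
        try { work(scanner) } finally {
          renewer.shutdownNow()
          scanner.close()
        }
      }
    }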

In general, I've found that changing timeouts in hbase can have unexpected consequences, as the various timeout flags seem to be used in more places than you might expect. I don't fully understand which timeouts would even need to be increased where to help with long-running scans, but not affect other parts of the system.

Thanks,

Emilio

[1]: https://issues.apache.org/jira/browse/HBASE-13333
[2]: https://issues.apache.org/jira/browse/HBASE-13090

On 1/17/20 1:53 AM, Amit Srivastava wrote:
Hi Emilio,

I did a further deep dive into this issue. We see the exception below [1] on the impacted region server. When I spoke about this with an AWS EMR HBase expert, they told me this exception might cause data loss for the scan query. If the time limit (set via hbase.client.scanner.timeout.period) is exceeded, the server returns the results that were accumulated up to that point. Due to this, we might be seeing the data loss.

They suggested two solutions for it:
  1. Renew the lease programmatically [2]
  2. Increase hbase.client.scanner.timeout.period, but that would only be a temporary fix, since we are currently using an hbase.client.scanner.timeout.period of 30. Do you have any recommendations? I think solution 1 is the right long-term approach. Do you renew the scanner lease programmatically? If not, can you help fix it?
Do you think this is the issue? If yes, do you have an alternate solution?

[1] 2020-01-17 05:04:58,527 INFO [regionserver/ip-10-0-21-111.ec2.internal/10.0.21.111:16020.leaseChecker]regionserver.RSRpcServices: Scanner -3738486765916797690 lease expired on region atlas_OSMWays_xz3_geometry_ingestionTimestamp_v2,9,1579207349986.38c67f9825538ce5326181f4cc15b913.

[2] https://hbase.apache.org/devapidocs/org/apache/hadoop/hbase/client/ResultScanner.html#renewLease--




On Thu, Jan 16, 2020 at 10:38 PM Amit Srivastava <amit.bit96@xxxxxxxxx> wrote:
Hi Emilio,

I did a further deep dive into this issue; we are seeing the exception below [1] in the region server. When I spoke about this with AWS EMR HBase experts, they told me this exception may cause loss of data during a scan query. When the time limit (set via hbase.client.scanner.timeout.period) is reached, the server returns the results it has accumulated up to that point. Due to this, you might be seeing loss of data.

They suggested two solutions for it:

1. Renew lease programmatically [2]
2. Increase hbase.client.scanner.timeout.period, but that would only be a temporary fix, since we are currently using an hbase.client.scanner.timeout.period of 30. Do you have any recommendations? I think solution 1 is the right long-term approach. Do you renew the scanner lease programmatically? If not, can you help fix it?

  • [1] 2020-01-17 05:04:58,527 INFO [regionserver/ip-10-0-21-111.ec2.internal/10.0.21.111:16020.leaseChecker] regionserver.RSRpcServices: Scanner -3738486765916797690 lease expired on region atlas_OSMWays_xz3_geometry_ingestionTimestamp_v2,9,1579207349986.38c67f9825538ce5326181f4cc15b913.

On Thu, Jan 16, 2020 at 8:34 AM Emilio Lahr-Vivaz <elahrvivaz@xxxxxxxx> wrote:
Interesting - it appears that bug was fixed a while ago though. I don't have a suggestion on an appropriate value to set, but it would be interesting to see if changing the various scan flags had any impact on the results.

Thanks,

Emilio

On 1/16/20 11:26 AM, Amit Srivastava wrote:
Thanks Emilio, we are trying that and will get back to you on whether it worked. I was going through the GeoMesa docs [1], where I found that GeoMesa allows the client to set the HBase scanner cache size. I found a similar error in HBase (https://issues.apache.org/jira/browse/HBASE-13262), where a client reported the same bug in one of the HBase versions. We are using HBase version 1.4.10; do you think that could be a potential issue? If yes, what value would you suggest?


On Thu, Jan 16, 2020 at 6:02 AM Emilio Lahr-Vivaz <elahrvivaz@xxxxxxxx> wrote:
Hello,

Could you try running the query with remote filtering disabled? You can do that by setting 'hbase.remote.filtering' to false in the data store params [1].
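(For example, roughly like the following when connecting via GeoTools; the catalog parameter name here is an assumption, so double-check it against the linked docs.)

    import java.util.{HashMap => JHashMap}
    import org.geotools.data.DataStoreFinder

    object RemoteFilteringOff {
      val params = new JHashMap[String, java.io.Serializable]()
      params.put("hbase.catalog", "atlas")          // assumed parameter name for the catalog table
      params.put("hbase.remote.filtering", "false") // disable remote (server-side) CQL filtering
      val ds = DataStoreFinder.getDataStore(params)
    }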

We had a user report a similar issue a while ago, and it seemed to be related to our remote cql filtering. That user was able to patch the issue by reverting to an older version of our cql filtering class written in java. We added a bunch of error handling and removed all caching in our newer scala cql filtering class in an attempt to fix the problem, but we never were able to follow up with the user to verify if it was resolved. Disabling remote filtering should bypass that class entirely, so if that works then it would indicate there are still some issues there.

Thanks,

Emilio

[1]: https://www.geomesa.org/documentation/user/hbase/usage.html#hbase-data-store-parameters

On 1/15/20 6:17 PM, Jun Cai wrote:
Sorry. This is the actual BBOX we were using:

[hadoop@ip-10-0-22-146 ~]$ geomesa-hbase explain -c atlas -f OSMWays -q "BBOX(geometry,-126.0,27.0,-117.0,36.0) AND ingestionTimestamp <= '2019-12-23 06:00:00' AND nextTimestamp > '2019-12-23 06:00:00'"
Planning 'OSMWays' (BBOX(geometry, -126.0,27.0,-117.0,36.0) AND ingestionTimestamp <= 2019-12-23T06:00:00+00:00) AND nextTimestamp > 2019-12-23T06:00:00+00:00
  Original filter: (BBOX(geometry, -126.0,27.0,-117.0,36.0) AND ingestionTimestamp <= '2019-12-23 06:00:00') AND nextTimestamp > '2019-12-23 06:00:00'
  Hints: bin[false] arrow[false] density[false] stats[false] sampling[none]
  Sort: none
  Transforms: none
  Strategy selection:
    Query processing took 16ms for 1 options
    Filter plan: FilterPlan[XZ3Index(geometry,ingestionTimestamp)[BBOX(geometry, -126.0,27.0,-117.0,36.0) AND ingestionTimestamp <= 2019-12-23T06:00:00+00:00][nextTimestamp > 2019-12-23T06:00:00+00:00]]
    Strategy selection took 1ms for 1 options
  Strategy 1 of 1: XZ3Index(geometry,ingestionTimestamp)
    Strategy filter: XZ3Index(geometry,ingestionTimestamp)[BBOX(geometry, -126.0,27.0,-117.0,36.0) AND ingestionTimestamp <= 2019-12-23T06:00:00+00:00][nextTimestamp > 2019-12-23T06:00:00+00:00]
    Geometries: FilterValues(ArrayBuffer(POLYGON ((-126 27, -126 36, -117 36, -117 27, -126 27))),true,false)
    Intervals: FilterValues(List((-∞,2019-12-23T06:00Z]),true,false)
    Plan: ScanPlan
      Tables: atlas_OSMWays_xz3_geometry_ingestionTimestamp_v2
      Ranges (31860): [%00;%0a;/%00;%00;%00;%00;%00;%00;%00;%01;::%00;%0a;/%00;%00;%00;%00;%00;%00;%00;%02;], [%01;%0a;/%00;%00;%00;%00;%00;%00;%00;%01;::%01;%0a;/%00;%00;%00;%00;%00;%00;%00;%02;], [%02;%0a;/%00;%00;%00;%00;%00;%00;%00;%01;::%02;%0a;/%00;%00;%00;%00;%00;%00;%00;%02;], [%03;%0a;/%00;%00;%00;%00;%00;%00;%00;%01;::%03;%0a;/%00;%00;%00;%00;%00;%00;%00;%02;], [%04;%0a;/%00;%00;%00;%00;%00;%00;%00;%01;::%04;%0a;/%00;%00;%00;%00;%00;%00;%00;%02;]
      Scans (360): [%0d;%0a;/%00;%00;%00;%0d;%cb;I$%95;::%0d;%0a;/%00;%00;%00;%0d;%d9;%e4;%92;M], [(%0a;/%00;%00;%00;%04;%ae;%bc;%92;N::(%0a;/%00;%00;%00;%04;%b3;NI*], [%05;%0a;/%00;%00;%00;%05;%cf;%db;m%b9;::%05;%0a;/%00;%00;%00;%0d;%cb;%00;%00;%04;], [9%0a;/%00;%00;%00;%04;%b3;N%db;r::9%0a;/%00;%00;%00;%04;%d2;%bf;m%bc;], [%19;%0a;/%00;%00;%00;%04;%d2;%c0;%00;%04;::%19;%0a;/%00;%00;%00;%05;%cf;%92;I(]
      Column families: d
      Remote filters: MultiRowRangeFilter, CqlFilter[(BBOX(geometry, -126.0,27.0,-117.0,36.0) AND ingestionTimestamp <= 2019-12-23T06:00:00+00:00) AND nextTimestamp > 2019-12-23T06:00:00+00:00]
    Plan creation took 167ms
  Query planning took 454ms

On Wed, Jan 15, 2020 at 3:14 PM Jun Cai <joncai2012@xxxxxxxxx> wrote:
And here is the output from the explain query CLI:

Planning 'OSMWays' ingestionTimestamp <= 2019-12-23T06:00:00+00:00 AND nextTimestamp > 2019-12-23T06:00:00+00:00
  Original filter: (BBOX(geometry, -180.0,-90.0,180.0,90.0) AND ingestionTimestamp <= '2019-12-23 06:00:00') AND nextTimestamp > '2019-12-23 06:00:00'
  Hints: bin[false] arrow[false] density[false] stats[false] sampling[none]
  Sort: none
  Transforms: none
  Strategy selection:
    Query processing took 17ms for 1 options
    Filter plan: FilterPlan[XZ3Index(geometry,ingestionTimestamp)[ingestionTimestamp <= 2019-12-23T06:00:00+00:00][nextTimestamp > 2019-12-23T06:00:00+00:00]]
    Strategy selection took 2ms for 1 options
  Strategy 1 of 1: XZ3Index(geometry,ingestionTimestamp)
    Strategy filter: XZ3Index(geometry,ingestionTimestamp)[ingestionTimestamp <= 2019-12-23T06:00:00+00:00][nextTimestamp > 2019-12-23T06:00:00+00:00]
    Geometries: FilterValues(List(POLYGON ((-180 -90, 180 -90, 180 90, -180 90, -180 -90))),true,false)
    Intervals: FilterValues(List((-∞,2019-12-23T06:00Z]),true,false)
    Plan: ScanPlan
      Tables: atlas_OSMWays_xz3_geometry_ingestionTimestamp_v2
      Ranges (1020): [%00;%0a;/%00;%00;%00;%00;%00;%00;%00;%01;::%00;%0a;/%00;%00;%00;%09;I$%92;L], [%01;%0a;/%00;%00;%00;%00;%00;%00;%00;%01;::%01;%0a;/%00;%00;%00;%09;I$%92;L], [%02;%0a;/%00;%00;%00;%00;%00;%00;%00;%01;::%02;%0a;/%00;%00;%00;%09;I$%92;L], [%03;%0a;/%00;%00;%00;%00;%00;%00;%00;%01;::%03;%0a;/%00;%00;%00;%09;I$%92;L], [%04;%0a;/%00;%00;%00;%00;%00;%00;%00;%01;::%04;%0a;/%00;%00;%00;%09;I$%92;L]
      Scans (60): [%0b;::%0b;%0a;/%00;%00;%00;%11;%00;%00;%00;%02;], [%1d;::%1d;%0a;/%00;%00;%00;%11;%00;%00;%00;%02;], [%0d;::%0d;%0a;/%00;%00;%00;%11;%00;%00;%00;%02;], [%04;::%04;%0a;/%00;%00;%00;%11;%00;%00;%00;%02;], [%00;::%00;%0a;/%00;%00;%00;%11;%00;%00;%00;%02;]
      Column families: d
      Remote filters: MultiRowRangeFilter, CqlFilter[ingestionTimestamp <= 2019-12-23T06:00:00+00:00 AND nextTimestamp > 2019-12-23T06:00:00+00:00]
    Plan creation took 108ms
  Query planning took 329ms

On Wed, Jan 15, 2020 at 3:08 PM Jun Cai <joncai2012@xxxxxxxxx> wrote:
Hi Emilio,

This is Jun. I am working with Amit on this issue. Here is the query summary from our log:

15 Jan 2020 22:10:40,583 org.locationtech.geomesa.utils.audit.AuditLogger$: {"storeType":"hbase","typeName":"OSMWays","date":1579126240583,"user":"unknown","filter":"(BBOX(geometry, 0.0,45.0,9.0,54.0) AND ingestionTimestamp \u003c\u003d \u00272019-12-23 06:00:00\u0027) AND nextTimestamp \u003e \u00272019-12-23 06:00:00\u0027","hints":"RETURN_SFT\u003d*geometry:LineString:srid\u003d4326,ingestionTimestamp:Timestamp,nextTimestamp:Timestamp,serializerVersion:String,featurePayload:String","planTime":31,"scanTime":1139426,"hits":82089533,"deleted":false}

We are firing the query via the GeoTools interface. The missing features are not consistent between runs; later runs always tend to have more data than previous ones.

Thanks,
Jun

On Wed, Jan 15, 2020 at 2:51 PM Emilio Lahr-Vivaz <elahrvivaz@xxxxxxxx> wrote:
Sorry, I pointed you to the wrong mailing list. I've included the right one now.

The bug I am thinking of was related to partitioned tables, but as you aren't using partitioning then that wouldn't affect you. Can you provide the explain plan[1] for the query? Are you querying this through geoserver? Are the missing features consistent when you run the same filter?

Thanks,

Emilio

[1]: https://www.geomesa.org/documentation/user/datastores/query_planning.html#explaining-query-plans

On 1/15/20 5:42 PM, Amit Srivastava wrote:
Hi Emilio,

Can you also point me to the bug which got fixed in 2.3.2 and 2.4.0?

On Wed, Jan 15, 2020 at 2:37 PM Amit Srivastava <amit.bit96@xxxxxxxxx> wrote:
Thanks Emilio for the quick response. Below are the required details. Regarding the update from 2.3.2 to 2.4.0, we can upgrade, but it will require some effort and time, which I want to avoid for now.

Exact filter which I am using: BBOX(geometry,-180.0,-90.0,180.0,90.0) AND ingestionTimestamp <= '2019-12-23 06:00:00' AND nextTimestamp > '2019-12-23 06:00:00'
hbase(main):002:0> scan 'atlas'
ROW  COLUMN+CELL
 OSMNodes~attributes  column=m:v, timestamp=1577234629875, value=*geometry:Point:srid=4326,ingestionTimestamp:Timestamp,nextTimestamp:Timestamp,serializerVersion:String,featurePayload:String;geomesa.index.dtg='ingestionTimestamp',geomesa.z.splits='60',geomesa.indices='z3:6:3:geometry:ingestionTimestamp,id:4:3:'
 OSMNodes~stats-date  column=m:v, timestamp=1577234629875, value=2019-12-25T00:43:49.836Z
 OSMNodes~table.id.v4  column=m:v, timestamp=1577234646266, value=atlas_OSMNodes_id_v4
 OSMNodes~table.z3.geometry.ingestionTimestamp.v6  column=m:v, timestamp=1577234629897, value=atlas_OSMNodes_z3_geometry_ingestionTimestamp_v6
 OSMRelationMembers~attributes  column=m:v, timestamp=1577234747359, value=ingestionTimestamp:Timestamp,relationId:String,featureTypeId:String,serializerVersion:String,featurePayload:String;geomesa.index.dtg='ingestionTimestamp',geomesa.indices='attr:8:3:relationId:ingestionTimestamp,attr:8:3:featureTypeId:ingestionTimestamp,id:4:3:'
 OSMRelationMembers~stats-date  column=m:v, timestamp=1577234747359, value=2019-12-25T00:45:47.320Z
 OSMRelationMembers~table.attr.featureTypeId.ingestionTimestamp.v8  column=m:v, timestamp=1577234751575, value=atlas_OSMRelationMembers_attr_featureTypeId_ingestionTimestamp_v8
 OSMRelationMembers~table.attr.relationId.ingestionTimestamp.v8  column=m:v, timestamp=1577234747380, value=atlas_OSMRelationMembers_attr_relationId_ingestionTimestamp_v8
 OSMRelationMembers~table.id.v4  column=m:v, timestamp=1577234755743, value=atlas_OSMRelationMembers_id_v4
 OSMRelations~attributes  column=m:v, timestamp=1577234692949, value=*geometry:MultiPolygon:srid=4326,ingestionTimestamp:Timestamp,nextTimestamp:Timestamp,serializerVersion:String,featurePayload:String;geomesa.index.dtg='ingestionTimestamp',geomesa.z.splits='60',geomesa.indices='xz3:2:3:geometry:ingestionTimestamp,id:4:3:'
 OSMRelations~stats-date  column=m:v, timestamp=1577234692949, value=2019-12-25T00:44:52.909Z
 OSMRelations~table.id.v4  column=m:v, timestamp=1577234710295, value=atlas_OSMRelations_id_v4
 OSMRelations~table.xz3.geometry.ingestionTimestamp.v2  column=m:v, timestamp=1577234692970, value=atlas_OSMRelations_xz3_geometry_ingestionTimestamp_v2
 OSMTestNodes~attributes  column=m:v, timestamp=1577143864743, value=*geometry:Point:srid=4326,ingestionTimestamp:Timestamp,nextTimestamp:Timestamp,serializerVersion:String,featurePayload:String;geomesa.index.dtg='ingestionTimestamp',geomesa.z.splits='60',geomesa.indices='z3:6:3:geometry:ingestionTimestamp,id:4:3:'
 OSMTestNodes~stats-date  column=m:v, timestamp=1577143864743, value=2019-12-23T23:30:56.200Z
 OSMTestNodes~table.id.v4  column=m:v, timestamp=1577143890005, value=atlas_OSMTestNodes_id_v4
 OSMTestNodes~table.z3.geometry.ingestionTimestamp.v6  column=m:v, timestamp=1577143864809, value=atlas_OSMTestNodes_z3_geometry_ingestionTimestamp_v6


--

Regards,

Amit Kumar Srivastava




--

Regards,

Amit Kumar Srivastava



--

Regards,

Amit Kumar Srivastava


