Ok, thank you for everything :)
 m.
 
 
On 16.06.2016 16:31, Emilio Lahr-Vivaz wrote:

Hi Milan,
I'm not sure what that error is... it might be the auditing service, which runs in the background and logs your queries, or it might just be an Accumulo error that it recovered from. At any rate, it seems that things are working, so I wouldn't worry about it too much... if it keeps happening, write back and we can dig into it more.
 
 Thanks,
 
 Emilio
 
 
On 06/16/2016 09:59 AM, Milan Muňko wrote:

Hi Emilio,

I hope that this is my last "basic" problem. When I run the Accumulo quickstart, I see the following output:
 
 Submitting query
1.  Bierce|931|Sat Jul 05 00:25:38 CEST 2014|POINT (-76.51304097832912 -37.49406125975311)|null
2.  Bierce|589|Sat Jul 05 08:02:15 CEST 2014|POINT (-76.88146600670152 -37.40156607152168)|null
3.  Bierce|322|Tue Jul 15 23:09:42 CEST 2014|POINT (-77.01760098223343 -37.30933767159561)|null
4.  Bierce|886|Tue Jul 22 20:12:36 CEST 2014|POINT (-76.59795732474399 -37.18420917493149)|null
5.  Bierce|394|Sat Aug 02 01:55:05 CEST 2014|POINT (-77.42555615743139 -37.26710898726304)|null
6.  Bierce|343|Wed Aug 06 10:59:22 CEST 2014|POINT (-76.66826220670282 -37.44503877750368)|null
7.  Bierce|925|Mon Aug 18 05:28:33 CEST 2014|POINT (-76.5621106573523 -37.34321201566148)|null
8.  Bierce|259|Thu Aug 28 21:59:30 CEST 2014|POINT (-76.90122194030118 -37.148525741002466)|null
9.  Bierce|640|Sun Sep 14 21:48:25 CEST 2014|POINT (-77.36222958792739 -37.13013846773835)|null
 Submitting secondary index query
 Feature ID Observation.859 | Who: Bierce
 Feature ID Observation.355 | Who: Bierce
 Feature ID Observation.940 | Who: Bierce
 Feature ID Observation.631 | Who: Bierce
 Feature ID Observation.817 | Who: Bierce
Submitting secondary index query with sorting (sorted by 'What' descending)
 Error closing output stream.
 java.io.IOException: The stream is closed
at org.apache.hadoop.net.SocketOutputStream.write(SocketOutputStream.java:118)
at java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java:82)
at java.io.BufferedOutputStream.flush(BufferedOutputStream.java:140)
at java.io.FilterOutputStream.close(FilterOutputStream.java:158)
at org.apache.thrift.transport.TIOStreamTransport.close(TIOStreamTransport.java:110)
at org.apache.thrift.transport.TFramedTransport.close(TFramedTransport.java:89)
at org.apache.accumulo.core.client.impl.ThriftTransportPool$CachedTTransport.close(ThriftTransportPool.java:312)
at org.apache.accumulo.core.client.impl.ThriftTransportPool.returnTransport(ThriftTransportPool.java:584)
at org.apache.accumulo.core.util.ThriftUtil.returnClient(ThriftUtil.java:134)
at org.apache.accumulo.core.client.impl.TabletServerBatchReaderIterator.doLookup(TabletServerBatchReaderIterator.java:714)
at org.apache.accumulo.core.client.impl.TabletServerBatchReaderIterator$QueryTask.run(TabletServerBatchReaderIterator.java:376)
at org.apache.accumulo.trace.instrument.TraceRunnable.run(TraceRunnable.java:47)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at org.apache.accumulo.trace.instrument.TraceRunnable.run(TraceRunnable.java:47)
at org.apache.accumulo.core.util.LoggingRunnable.run(LoggingRunnable.java:34)
at java.lang.Thread.run(Thread.java:745)
 Feature ID Observation.999 | Who: Addams | What: 999
 Feature ID Observation.996 | Who: Addams | What: 996
 Feature ID Observation.993 | Who: Addams | What: 993
 Feature ID Observation.990 | Who: Addams | What: 990
 Feature ID Observation.987 | Who: Addams | What: 987
 
What does that error mean?

Otherwise everything works fine.

Thanks for your help,
Milan
 
 
On 15.06.2016 22:43, Emilio Lahr-Vivaz wrote:

Hi Milan,

Nice, glad you're making progress. The quickstart error you're seeing is due to a mismatch in versions - try checking out the 'tags/geomesa-1.2.2' branch of that project.
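
Something along these lines should do it (assuming you cloned the tutorials project with git and are building it with Maven):

cd geomesa-tutorials
git checkout tags/geomesa-1.2.2
mvn clean install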
 
 Thanks,
 
 Emilio
 
 
On 06/15/2016 04:37 PM, Milan Muňko wrote:

Thank you Emilio,

fs.defaultFS is set to localhost:9000; when I changed this in the namespace configuration, the example ingest seems to work normally.
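
For reference, the namespace classpath configuration I'm referring to is roughly the following, run in the Accumulo shell (the context name, namenode port, and jar path are just the ones from my setup and the GeoMesa install docs; adjust as needed):

config -s general.vfs.context.classpath.geomesa=hdfs://localhost:9000/accumulo/classpath/geomesa/[^.].*.jar
createnamespace geomesa
config -ns geomesa -s table.classpath.context=geomesa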
 
I tried to compile and run geomesa-tutorials/geomesa-quickstart-accumulo, but when I run:

java -cp geomesa-quickstart-accumulo/target/geomesa-quickstart-accumulo-1.2.3-SNAPSHOT.jar com.example.geomesa.accumulo.AccumuloQuickStart -instanceId geomesa -zookeepers localhost:2181 -user geomesa -password geomesa -tableName geomesa.quickstart_accumulo
 
I get the following error:

Exception in thread "main" com.google.common.util.concurrent.UncheckedExecutionException: java.lang.RuntimeException: org.apache.accumulo.core.client.impl.AccumuloServerException: Error on server localhost:9997
at com.google.common.cache.LocalCache$Segment.get(LocalCache.java:2201)
at com.google.common.cache.LocalCache.get(LocalCache.java:3934)
at com.google.common.cache.LocalCache.getOrLoad(LocalCache.java:3938)
at com.google.common.cache.LocalCache$LocalLoadingCache.get(LocalCache.java:4821)
at org.locationtech.geomesa.accumulo.data.AccumuloBackedMetadata.read(GeoMesaMetadata.scala:170)
at org.locationtech.geomesa.accumulo.data.stats.GeoMesaMetadataStats.org$locationtech$geomesa$accumulo$data$stats$GeoMesaMetadataStats$$readStat(GeoMesaMetadataStats.scala:278)
at org.locationtech.geomesa.accumulo.data.stats.GeoMesaMetadataStats$$anonfun$20.apply(GeoMesaMetadataStats.scala:327)
at org.locationtech.geomesa.accumulo.data.stats.GeoMesaMetadataStats$$anonfun$20.apply(GeoMesaMetadataStats.scala:323)
at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:245)
at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:245)
at scala.collection.immutable.List.foreach(List.scala:381)
at scala.collection.TraversableLike$class.map(TraversableLike.scala:245)
at scala.collection.immutable.List.map(List.scala:285)
at org.locationtech.geomesa.accumulo.data.stats.GeoMesaMetadataStats.org$locationtech$geomesa$accumulo$data$stats$GeoMesaMetadataStats$$buildStatsFor(GeoMesaMetadataStats.scala:323)
at org.locationtech.geomesa.accumulo.data.stats.GeoMesaMetadataStats$$anonfun$statUpdater$1.apply(GeoMesaMetadataStats.scala:197)
at org.locationtech.geomesa.accumulo.data.stats.GeoMesaMetadataStats$$anonfun$statUpdater$1.apply(GeoMesaMetadataStats.scala:197)
at org.locationtech.geomesa.accumulo.data.stats.MetadataStatUpdater.<init>(GeoMesaMetadataStats.scala:365)
at org.locationtech.geomesa.accumulo.data.stats.GeoMesaMetadataStats.statUpdater(GeoMesaMetadataStats.scala:197)
at org.locationtech.geomesa.accumulo.data.AccumuloFeatureWriter.<init>(AccumuloFeatureWriter.scala:138)
at org.locationtech.geomesa.accumulo.data.AppendAccumuloFeatureWriter.<init>(AccumuloFeatureWriter.scala:172)
at org.locationtech.geomesa.accumulo.data.AccumuloDataStore.getFeatureWriterAppend(AccumuloDataStore.scala:382)
at org.locationtech.geomesa.accumulo.data.AccumuloFeatureStore.addFeatures(AccumuloFeatureStore.scala:36)
at com.example.geomesa.accumulo.AccumuloQuickStart.insertFeatures(AccumuloQuickStart.java:205)
at com.example.geomesa.accumulo.AccumuloQuickStart.main(AccumuloQuickStart.java:330)
Caused by: java.lang.RuntimeException: org.apache.accumulo.core.client.impl.AccumuloServerException: Error on server localhost:9997
at org.apache.accumulo.core.client.impl.ScannerIterator.hasNext(ScannerIterator.java:187)
at org.locationtech.geomesa.accumulo.data.AccumuloBackedMetadata.org$locationtech$geomesa$accumulo$data$AccumuloBackedMetadata$$scanEntry(GeoMesaMetadata.scala:236)
at org.locationtech.geomesa.accumulo.data.AccumuloBackedMetadata$$anon$1.load(GeoMesaMetadata.scala:139)
at org.locationtech.geomesa.accumulo.data.AccumuloBackedMetadata$$anon$1.load(GeoMesaMetadata.scala:138)
at com.google.common.cache.LocalCache$LoadingValueReference.loadFuture(LocalCache.java:3524)
at com.google.common.cache.LocalCache$Segment.loadSync(LocalCache.java:2317)
at com.google.common.cache.LocalCache$Segment.lockedGetOrLoad(LocalCache.java:2280)
at com.google.common.cache.LocalCache$Segment.get(LocalCache.java:2195)
... 23 more
 
 
 Thank you very much for your help,
 
 Milan
 
 
On 14.06.2016 16:36, Emilio Lahr-Vivaz wrote:

Hi Milan,

You probably just need to change the 'localhost' portion of your jar path to the appropriate namenode. You might be able to determine it with the following command, although I'm not sure if it will work exactly with Hadoop 2.7:
 
 hdfs getconf -confkey 'fs.defaultFS'
 
 Make sure that the port is correct as well.
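
For example (hypothetical output; your namenode host and port may differ):

$ hdfs getconf -confkey 'fs.defaultFS'
hdfs://localhost:9000

In that case the jar would be referenced as hdfs://localhost:9000/accumulo/classpath/geomesa/geomesa-accumulo-distributed-runtime-1.2.2.jar rather than localhost:54310.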
 
 Thanks,
 
 Emilio
 
 
On 06/14/2016 10:23 AM, Milan Muňko wrote:

Hello Emilio,

Thank you for the quick response.
 
I am running the following environment:
 
Ubuntu Server 14.04
java-7-openjdk-amd64
Accumulo 1.6.5
Hadoop 2.7.2
ZooKeeper 3.4.8
GeoMesa 1.2.2

Everything is installed on one server.
 
I installed Accumulo, Hadoop, and ZooKeeper according to this tutorial:
https://www.digitalocean.com/community/tutorials/how-to-install-the-big-data-friendly-apache-accumulo-nosql-database-on-ubuntu-14-04
 
When I run ./hadoop fs -ls 'hdfs://localhost:54310/accumulo/classpath/geomesa/geomesa-accumulo-distributed-runtime-1.2.2.jar' I get the following:

ls: Call From geomesa/127.0.0.1 to localhost:54310 failed on connection exception: java.net.ConnectException: Connection refused; For more details see: http://wiki.apache.org/hadoop/ConnectionRefused
 
I copied geomesa-accumulo-distributed-runtime-1.2.2.jar using ./hadoop fs -copyFromLocal geomesa-accumulo-distributed-runtime-1.2.2.jar /accumulo/classpath/geomesa
 
Sorry, I am very new to Accumulo, Hadoop, etc.

Thank you,
Milan
 
 
On 14.06.2016 15:39, Emilio Lahr-Vivaz wrote:

Hi Milan,

It seems like Accumulo can't find your jar in HDFS. What version of Accumulo are you running? The namespace configurations are only available on 1.6 and later. Also, what is the namenode of your HDFS setup? It is set to localhost in your error. Does that path work using the 'hadoop' command? e.g. hadoop fs -ls 'hdfs://localhost:54310/accumulo/classpath/geomesa/geomesa-accumulo-distributed-runtime-1.2.2.jar'. Also, ensure that the jar is actually there in HDFS.
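
If it isn't there, something like the following should create the directory and upload it (assuming Hadoop's default filesystem is configured correctly; the local jar name and HDFS path are just the ones from your error message):

hadoop fs -mkdir -p /accumulo/classpath/geomesa
hadoop fs -copyFromLocal geomesa-accumulo-distributed-runtime-1.2.2.jar /accumulo/classpath/geomesa/
hadoop fs -ls /accumulo/classpath/geomesa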
 
 Let us know if none of that works.
 
 Thanks,
 
 Emilio
 
 
On 06/14/2016 08:42 AM, Milan Muňko wrote:

Dear Sir/Madam,

We would like to evaluate GeoMesa as one of the most promising technologies for our company. I have a problem getting the setup right.
 
I installed the GeoMesa binary distribution according to http://www.geomesa.org/documentation/user/installation_and_configuration.html

When I run the "Ingesting data" example, I get this error message in Accumulo:
 
could not determine file type hdfs://localhost:54310/accumulo/classpath/geomesa/geomesa-accumulo-distributed-runtime-1.2.2.jar
 
I also get the same error when I run the Accumulo quickstart.
 
I would also like to ask how I should set up GeoTools for GeoServer. Which modules from GeoTools does GeoServer need in order to be able to use the GeoMesa datastore?
 
 Thank you,
 
 Milan
 
 
 
 
 _______________________________________________
geomesa-users mailing list
geomesa-users@xxxxxxxxxxxxxxxx
To change your delivery options, retrieve your password, or unsubscribe from this list, visit
https://www.locationtech.org/mailman/listinfo/geomesa-users 
 
 