 
 Hi, Jose. 
 There is an explicit Java API binding on the master branch that will be released with GeoMesa 1.3.1.  Here is a sample I whipped up against my local test data set: 
 
import org.apache.hadoop.conf.Configuration;
import org.apache.spark.SparkConf;
 import org.apache.spark.api.java.JavaSparkContext;
 import org.geotools.data.Query;
 import org.geotools.filter.text.ecql.ECQL;
 import org.locationtech.geomesa.spark.api.java.*;
 
 import java.util.HashMap;
 import java.util.Map;
 
public class GeoMesaJavaSpark {

    public static void main(String... args) throws Exception {

        // Configure Spark to use Kryo serialization with the GeoMesa registrator
        SparkConf conf = new SparkConf()
                .setMaster("local[4]")
                .setAppName("Sample Application");
        conf.set("spark.serializer", "org.apache.spark.serializer.KryoSerializer");
        conf.set("spark.kryo.registrator", "org.locationtech.geomesa.spark.GeoMesaSparkKryoRegistrator");
        JavaSparkContext jsc = new JavaSparkContext(conf);

        // Accumulo data store connection parameters
        Map<String, String> params = new HashMap<>();
        params.put("instanceId", "local");
        params.put("zookeepers", "localhost");
        params.put("user", "root");
        params.put("password", "secret");
        params.put("tableName", "geomesa.gdelt");

        JavaSpatialRDDProvider provider = JavaGeoMesaSpark.apply((Map) params);

        // Query GDELT features within a bounding box and after a given date
        String filter = "BBOX(geom, -125, -24, -66, 50) AND dtg >= 2015-12-31T00:00:00Z";
        Query query = new Query("gdelt", ECQL.toFilter(filter));
        JavaSpatialRDD rdd = provider.rdd(new Configuration(), jsc, params, query);

        System.out.println(rdd.count());
        System.out.println(rdd.first());
        System.out.println(rdd.asGeoJSONString().first());
        System.out.println(rdd.asKeyValueList().first());
        System.out.println(rdd.asKeyValueMap().first());
    }
}
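If you want to work with the features themselves rather than the serialized forms, a plain map over the RDD should also work. This is an untested sketch against the sample above; it assumes JavaSpatialRDD can be used as a JavaRDD of SimpleFeatures (which the asGeoJSONString/asKeyValueList conversions suggest) and that JTS geometry types are on the classpath:

// Additional imports assumed for this sketch:
//   import com.vividsolutions.jts.geom.Geometry;
//   import org.apache.spark.api.java.JavaRDD;
//   import org.opengis.feature.simple.SimpleFeature;

// Pull the default geometry out of each feature and render it as WKT
JavaRDD<String> wkt = rdd.map((SimpleFeature f) -> ((Geometry) f.getDefaultGeometry()).toText());
System.out.println(wkt.first());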
 
 
 
 Tom 
 
 
 
Hi, I just modified some versions in the master pom to match my environment. I may try your solution later. I finally decided to use Maven to build the jars instead. However, now I'm not sure how to use the geomesa-spark integration in Java, as I haven't found any Java example code in the documentation. I am trying to translate this simple Scala code into Java:
// Datastore params

// set SparkContext
val conf = new SparkConf().setMaster("local[*]").setAppName("testSpark")
conf.set("spark.serializer", "org.apache.spark.serializer.KryoSerializer")
conf.set("spark.kryo.registrator", classOf[GeoMesaSparkKryoRegistrator].getName)
val sc = SparkContext.getOrCreate(conf)

// create RDD with a geospatial query using GeoMesa functions
val spatialRDDProvider = GeoMesaSpark(dsParams)
val filter = ECQL.toFilter("BBOX(coords, 48.815215, 2.249294, 48.904295, 2.419337)")
val query = new Query("history_feature_nodate", filter)
val resultRDD = spatialRDDProvider.rdd(new Configuration, sc, dsParams, query)

resultRDD.count
Is there any useful link or documentation to understand how the geomesa-spark Java API works?
 Thanks a lot, José