public abstract class DistributedSpatialIndex<T>
extends java.lang.Object
implements java.io.Serializable

Type Parameters:
T - the generic type of the indexed spatial RDD
| Constructor and Description |
|---|
| DistributedSpatialIndex(java.lang.Class<T> type) Creates an empty instance which will accept spatial RDDs of the given generic type |
| Modifier and Type | Method and Description |
|---|---|
| abstract void | cache() Persists the index's internal RDD with the default storage level |
| static <T> DistributedSpatialIndex<T> | createIndex(JavaSparkContext sc, SpatialJavaRDD<T> rdd, SpatialPartitioningConfiguration partConf) Spatially indexes a spatial RDD using the specified DistributedSpatialIndex implementing class |
| abstract SpatialJavaRDD<T> | filter(<any> f, SpatialOperationConfig spatialOpConf) Returns a new spatial RDD containing only the elements that satisfy both the filtering function and the spatial operation |
| abstract <R> <any> | flatMap(<any> f, SpatialOperationConfig spatialOpConf) Returns a new RDD by first spatially filtering the RDD elements using the spatial operation given by spatialOpConf, then applying a function to all the remaining elements |
| abstract <T2,R> <any> | join(SpatialJavaRDD<T2> rdd2, <any> f, SpatialOperationConfig spatialOpConf) Joins the records of the current index's data set with another spatial data set based on a spatial relationship between both data sets' records |
| static <T> DistributedSpatialIndex<T> | load(JavaSparkContext sc, java.lang.String pathStr) Reads an existing persisted index from the given path |
| static <T> DistributedSpatialIndex<T> | load(JavaSparkContext sc, java.lang.String pathStr, java.lang.Class<T> type) Reads an existing persisted index from the given path |
| abstract java.util.List<<any>> | nearestNeighbors(<any> f, int k, SpatialOperationConfig spatialOpConf) Returns the k elements from the RDD which are closest to the query window defined in spatialOpConf |
| abstract void | persist(StorageLevel storageLevel) Sets the index's internal RDD's storage level to persist its values across operations after the first time it is computed |
| abstract void | read(JavaSparkContext sc, java.lang.String pathStr) Loads a persisted index from the given path |
| void | save(JavaSparkContext sc, java.lang.String pathStr) Stores the current index as a file in the given path |
| abstract void | unpersist() Marks the index's internal RDD as non-persistent |
| abstract void | write(JavaSparkContext sc, java.lang.String pathStr) Writes the current index to the given path |
public DistributedSpatialIndex(java.lang.Class<T> type)
Parameters:
type - the generic type of the spatial RDD to be indexed

public abstract void cache()
Persists the index's internal RDD with the default storage level.
public static <T> DistributedSpatialIndex<T> createIndex(JavaSparkContext sc, SpatialJavaRDD<T> rdd, SpatialPartitioningConfiguration partConf) throws java.lang.Exception
Spatially indexes a spatial RDD using the specified DistributedSpatialIndex implementing class.
Parameters:
sc - an existing Spark context
rdd - a spatial RDD
partConf - indexing and partitioning configuration; it determines the concrete index implementation to be created
Returns:
a DistributedSpatialIndex
Throws:
java.lang.Exception

public abstract SpatialJavaRDD<T> filter(<any> f, SpatialOperationConfig spatialOpConf) throws java.lang.Exception
Parameters:
f - a filtering function
spatialOpConf - the spatial operation configuration to spatially filter the RDD
Returns:
a SpatialJavaRDD
Throws:
java.lang.Exception
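createIndex and filter are typically used together: createIndex builds the distributed index over an existing spatial RDD and filter runs a spatially restricted query against it. The sketch below is illustrative only; SparkRecordInfo, QuadTreeConfiguration, SpatialOperation and JGeometry are assumed class names not documented on this page, the <any> function parameter is assumed to behave like a Spark functional interface accepting a record and returning a Boolean, and import statements are omitted because the package names are not shown here.

```java
// Sketch, not a verified example: index a spatial RDD, then query it with a spatial filter.
static SpatialJavaRDD<SparkRecordInfo> indexAndFilter(
        JavaSparkContext sc, SpatialJavaRDD<SparkRecordInfo> spatialRDD) throws Exception {

    // The partitioning configuration determines which DistributedSpatialIndex
    // implementation createIndex() instantiates (QuadTreeConfiguration is an assumption).
    DistributedSpatialIndex<SparkRecordInfo> index =
        DistributedSpatialIndex.createIndex(sc, spatialRDD, new QuadTreeConfiguration());

    // Spatial operation: records interacting with a rectangular query window
    // (longitude/latitude coordinates, SRID 8307 assumed).
    SpatialOperationConfig opConf = new SpatialOperationConfig();
    opConf.setOperation(SpatialOperation.AnyInteract);
    opConf.setQueryWindow(JGeometry.createLinearPolygon(
        new double[]{-122.5, 37.6, -122.3, 37.6, -122.3, 37.9, -122.5, 37.9, -122.5, 37.6},
        2, 8307));

    // The spatial operation is applied first, then the (here trivial) filtering predicate.
    return index.filter(record -> true, opConf);
}
```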
public abstract <R> <any> flatMap(<any> f, SpatialOperationConfig spatialOpConf) throws java.lang.Exception
Parameters:
f - a function to apply to each element
spatialOpConf - the spatial operation configuration to spatially filter the RDD
Throws:
java.lang.Exception

public abstract <T2,R> <any> join(SpatialJavaRDD<T2> rdd2, <any> f, SpatialOperationConfig spatialOpConf) throws java.lang.Exception
Parameters:
rdd2 - a spatial RDD to join with
f - a tuple processor function which receives a tuple of records from both data sets which match the spatial join condition
spatialOpConf - a spatial operation configuration defining the spatial relationship used to join records
Throws:
java.lang.Exception

public static <T> DistributedSpatialIndex<T> load(JavaSparkContext sc, java.lang.String pathStr) throws java.lang.Exception
Parameters:
sc - an existing Spark context
pathStr - a path on either a local or distributed file system
Throws:
java.lang.Exception

public static <T> DistributedSpatialIndex<T> load(JavaSparkContext sc, java.lang.String pathStr, java.lang.Class<T> type) throws java.lang.Exception
Parameters:
sc - an existing Spark context
pathStr - a path on either a local or distributed file system
type - the generic parameter type of the loaded index
Throws:
java.lang.Exception
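A rough sketch of flatMap and join, continuing the previous example. This page declares the function parameters and return types only as <any>, so the functional-interface shapes below (an Iterable-producing mapper for flatMap, a Tuple2-consuming mapper for join) and the JavaRDD<String> result types are assumptions, not documented behavior; opConf is a SpatialOperationConfig prepared as in the earlier sketch.

```java
// Sketch, not a verified example: the element types produced by flatMap/join and the exact
// functional interfaces expected for `f` are assumptions.
static void flatMapAndJoin(DistributedSpatialIndex<SparkRecordInfo> index,
                           SpatialJavaRDD<SparkRecordInfo> otherRDD,
                           SpatialOperationConfig opConf) throws Exception {

    // flatMap: spatially filter first, then expand each remaining record into zero or
    // more output values (here, a single String per record).
    JavaRDD<String> descriptions = index.flatMap(
        record -> java.util.Collections.singletonList(record.toString()), opConf);

    // join: pair each record of this index with records of otherRDD that satisfy the spatial
    // relationship in opConf, and map every matching pair to a result value.
    JavaRDD<String> matches = index.join(
        otherRDD,
        pair -> pair._1().toString() + " ~ " + pair._2().toString(),
        opConf);
}
```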
public abstract java.util.List<<any>> nearestNeighbors(<any> f, int k, SpatialOperationConfig spatialOpConf) throws java.lang.Exception
Parameters:
f - an optional filtering function which should return true for the elements which can be part of the solution
k - the number of nearest neighbors to return
spatialOpConf - a spatial configuration containing the query window to be used; SpatialOperation is ignored for this action
Throws:
java.lang.Exception

public abstract void persist(StorageLevel storageLevel)
Parameters:
storageLevel - the storage level to be used for the index's internal RDD
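A sketch of a k-nearest-neighbors query, with persist/unpersist wrapped around it so the index's internal RDD stays cached while queries run. As above, the JGeometry point helper and the predicate shape are assumptions; only the query window of the configuration matters here, since the SpatialOperation is ignored by nearestNeighbors.

```java
// Sketch, not a verified example: returns the 5 records closest to a point query window.
static java.util.List<?> fiveNearest(DistributedSpatialIndex<SparkRecordInfo> index)
        throws Exception {

    // Only the query window is used by nearestNeighbors; the SpatialOperation is ignored.
    SpatialOperationConfig knnConf = new SpatialOperationConfig();
    knnConf.setQueryWindow(JGeometry.createPoint(new double[]{-122.42, 37.77}, 2, 8307));

    // Keep the index's internal RDD cached while the query (or several) runs against it.
    index.persist(StorageLevel.MEMORY_ONLY());
    try {
        // The optional predicate must return true for candidate elements (here, accept all).
        return index.nearestNeighbors(record -> true, 5, knnConf);
    } finally {
        index.unpersist();   // mark the internal RDD as non-persistent again
    }
}
```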
public abstract void read(JavaSparkContext sc, java.lang.String pathStr) throws java.lang.Exception
Parameters:
sc - an existing Spark context
pathStr - a path on either a local or distributed file system
Throws:
java.lang.Exception
public void save(JavaSparkContext sc, java.lang.String pathStr) throws java.lang.Exception
Parameters:
sc - an existing Spark context
pathStr - a path on either a local or distributed file system
Throws:
java.lang.Exception

public abstract void unpersist()
public abstract void write(JavaSparkContext sc, java.lang.String pathStr) throws java.lang.Exception
Parameters:
sc - an existing Spark context
pathStr - a path on either a local or distributed file system
Throws:
java.lang.Exception
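Finally, a persistence round trip: save the index to a file system path and rebuild it later with the static load factory. The HDFS path and the SparkRecordInfo record type below are placeholders, not values from this page.

```java
// Sketch: persist an index and restore it in a later job without re-indexing the source RDD.
static DistributedSpatialIndex<SparkRecordInfo> saveAndReload(
        JavaSparkContext sc, DistributedSpatialIndex<SparkRecordInfo> index) throws Exception {

    // Any path on a local or distributed file system; this HDFS location is a placeholder.
    String indexPath = "hdfs:///user/spark/indexes/records_qtree";

    index.save(sc, indexPath);

    // The three-argument overload also fixes the generic record type of the loaded index.
    return DistributedSpatialIndex.load(sc, indexPath, SparkRecordInfo.class);
}
```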